├── .gitattributes ├── .gitignore ├── Readme.md ├── ads ├── concept │ ├── complexity_and_amortized_analysis.md │ ├── data_structure_trees.md │ ├── data_structures_hashing.md │ ├── other.md │ ├── readme.md │ └── sorting_algorithms.md ├── learn.md └── readme.md ├── compilers_design ├── cheatsheet │ ├── chomsky.png │ └── parsing.md ├── concept │ ├── parsing.md │ ├── parsing_algorithm.md │ └── types_theory.md ├── learn.md └── readme.md ├── cs ├── cheatsheet │ ├── aas_1.webp │ ├── aas_2.jpg │ ├── cap_theorem.png │ ├── cloud_benefits.svg │ ├── concurrent_programming.jpeg │ ├── horizontal_vs_vertical.png │ ├── isolation_levels_vs_read_phenomena.jpg │ ├── latencies.jpg │ ├── resilience_patterns.png │ └── sharding_partitiioning.png ├── cheatsheets.md ├── concepts.md ├── learn.md └── readme.md ├── dbms ├── cheatsheet.md ├── cheatsheet │ ├── all_normal_forms.jpg │ ├── erd_bachmans_notation.png │ ├── erd_barker.jpg │ ├── erd_barkers_notation.png │ ├── erd_barkers_notation_2.jpg │ ├── erd_barkers_notation_3.jpg │ ├── erd_chens_notation.png │ ├── erd_chens_notation_2.png │ ├── erd_chens_notation_3.jpg │ ├── erd_crows_foot_notation.png │ ├── erd_crows_foot_notation_2.png │ ├── erd_iso_notation.png │ ├── erd_many_to_many_in_relational_db.jpg │ ├── erd_self_referential.jpg │ ├── erd_selfref.jpg │ ├── erd_uml.jpg │ ├── erd_uml_notation.png │ ├── functions_mapping.jpg │ ├── normalization_steps.jpg │ ├── other_sql_join.jpg │ ├── relations_to_chen.jpg │ ├── sql_aggregate_vs_window_functions.jpg │ ├── sql_isolation_level.png │ ├── sql_join.jpg │ └── transitive_vs_full_dependencies.jpg ├── cheatsheet_erd.md ├── cheatsheet_psql.md ├── concepts.md ├── learn.md ├── questions.md ├── readme.md └── skills.md ├── devops ├── cheatsheets.md ├── concepts.md ├── learn.md └── readme.md ├── digital_product ├── cheatsheet.md ├── concepts.md ├── learn.md ├── questions.md ├── readme.md └── skills.md ├── file_format ├── cheatsheet │ ├── jpg.png │ ├── mp4.png │ └── pdf.png └── cheatsheets.md ├── gat ├── cheatsheets.md ├── concepts.md ├── learn.md └── readme.md ├── git ├── asset │ └── cheatsheet.jpg ├── cheatsheet.md ├── misconceptions_and_advices.md └── readme.md ├── hpc ├── cheatsheet │ ├── collective_communication.png │ ├── concurrent_programming.jpeg │ ├── cpu_vs_gpu.jpg │ ├── cuda_momory_model.jpg │ ├── granularity.jpg │ └── latencies.jpg ├── cheatsheets.md ├── concept.md ├── concept_cuda.md └── readme.md ├── iot ├── cheatsheet │ ├── automation_pyramid.png │ └── automation_stack.png ├── concept.md ├── learn.md └── readme.md ├── license ├── research └── blank ├── skill ├── cloud │ └── aws.md └── org_second_brain │ └── notion_second_brain.md ├── statistics ├── cheatsheet │ └── p_value.png ├── cheatsheets.md ├── concepts.md ├── learn.md └── readme.md └── terraform ├── cheatsheet └── workflow.png ├── learn.md ├── readme.md └── tutorial.md /.gitattributes: -------------------------------------------------------------------------------- 1 | *.s linguist-language=JavaScript 2 | *.ss linguist-language=JavaScript 3 | *.js linguist-language=JavaScript 4 | * -text -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | 2 | !/.github 3 | !/.circleci 4 | !/.* 5 | 6 | /build 7 | /builder 8 | /binding 9 | /target 10 | /node_modules 11 | /.module 12 | /package-lock.json 13 | /Cargo.lock 14 | /.vscode 15 | /_* 16 | 17 | target 18 | dist 19 | .module 20 | Cargo.lock 21 | .DS_Store 22 | .idea 23 | *.log 24 | *.db 25 | *.tmp 
26 | *.build
27 | *.code-workspace
28 | .warchive*
29 | -*
30 |
-------------------------------------------------------------------------------- /Readme.md: --------------------------------------------------------------------------------
1 | # 🧭 Learn Practical Computer Science Together
2 |
3 | Awesome collection of learning materials to master modern Computer Science, including Operating Systems, Systems Design, Distributed Systems, DevOps and DBMS.
4 |
5 | ## What is this about?
6 |
7 | This repository contains nearly a dozen curated collections: learning materials, toolboxes, newspapers, working groups, and collections of other collections. Everything here is useful if you are interested in Computer Science.
8 |
9 | Here you can find:
10 |
11 | - [General Computer Science](./cs)
12 | - [Graphs and Automata Theory](./gat)
13 | - [DevOps](./devops)
14 | - [Algorithms and Data Structures](./ads)
15 | - [High Performance Computation](./hpc)
16 | - [Git and GitHub](./git)
17 | - [Database Management Systems](./dbms)
18 | - [Digital Product](./digital_product)
19 | - [Compilers Design](./compilers_design)
20 |
-------------------------------------------------------------------------------- /ads/concept/complexity_and_amortized_analysis.md: --------------------------------------------------------------------------------
1 | # :chart_with_upwards_trend: Complexity and Amortized Analysis Concepts
2 |
3 | ##### Complexity Analysis
4 |
5 | Complexity analysis is used to describe how the performance (usually the running time or space usage) of an algorithm or data structure scales with the size of the input.
6 |
7 | It's concerned with general trends, typically in the worst-case, average-case, or best-case scenarios.
8 |
9 | Measurement: It characterizes algorithms using Big O notation (or related notations like Θ, Ω, etc.), describing an upper or lower bound on growth as the size of the input grows.
10 |
11 | Applicability: Complexity analysis is applied to all types of algorithms and functions, from single operations to complete algorithms, regardless of specific sequences of operations.
12 |
13 | Examples: The time complexity of quick sort is O(n log n) on average, and the space complexity of a breadth-first search is O(|V| + |E|), where V and E are the sets of vertices and edges in the graph.
14 |
15 | ##### Master Theorem
16 |
17 | The Master Theorem provides a way to analyze the time complexity of recursive algorithms, specifically divide-and-conquer algorithms.
18 |
19 | It's a handy tool for quickly determining the running time of such algorithms without having to solve the recurrence relations manually. Divide-and-conquer algorithms typically divide the problem into smaller subproblems, recursively solve the subproblems, and combine the solutions to solve the original problem. A common way to express the time complexity of a divide-and-conquer algorithm is through a recurrence relation.
20 |
21 | It's worth noting that the Master Theorem doesn't apply to all recurrence relations. If the given recurrence doesn't fit the form, or if none of the three cases apply, other methods must be used to analyze the algorithm's time complexity.
22 |
23 | The Master Theorem applies to recurrence relations of the following form:
24 |
25 | T(n) = a ⋅ T(n / b) + f(n)
26 |
27 | Here:
28 |
29 | - n is the size of the problem.
30 | - a is the number of subproblems.
31 | - b > 1 is the factor by which the problem size is divided at each recursion level.
32 | - f(n) is the time to create the subproblems and combine their results, i.e., the time taken by the non-recursive part of the algorithm.
33 |
34 | The Master Theorem provides three cases that cover different relationships between the functions involved, and each case gives a formula for the overall time complexity of the algorithm:
35 |
36 | - Case 1: If f(n) = O(n^c) where c < log_b(a), then T(n) = Θ(n^(log_b(a))).
37 | - Case 2: If f(n) = Θ(n^c) where c = log_b(a), then T(n) = Θ(n^c ⋅ log n).
38 | - Case 3: If f(n) = Ω(n^c) where c > log_b(a), and if a ⋅ f(n / b) ≤ k ⋅ f(n) for some constant k < 1 and sufficiently large n, then T(n) = Θ(f(n)).
39 |
40 | These three cases cover different ways in which the time complexity of the recursive and non-recursive parts of the algorithm can relate to each other, allowing for a straightforward determination of the overall time complexity in a variety of common situations.
41 |
42 | ##### Amortized Analysis
43 |
44 | Amortized analysis is a strategy for understanding the time complexity of algorithms, particularly when a simple worst-case or average-case analysis might be either difficult or misleading.
45 |
46 | It helps in determining how an algorithm will perform in practice. There are three common methods used for amortized analysis: the aggregate method, the accounting method, and the potential method. Let's understand the essence of each and see what distinguishes them.
47 |
48 | Measurement: It gives a more nuanced view, often applicable to data structures that sometimes have expensive operations. The amortized cost provides an averaged, per-operation cost over a sequence, ensuring that in practice, the data structure won't be inefficient.
49 |
50 | Applicability: Amortized analysis is often applied to data structures, particularly when individual operations' cost can vary widely, and you want to understand the long-term behavior.
51 |
52 | Examples: The amortized time complexity of adding an element to a dynamic array (that doubles in size when full) is O(1), even though individual operations might sometimes be more costly.
53 |
54 | ##### Complexity Analysis vs Amortized Analysis
55 |
56 | Complexity analysis and amortized analysis are both techniques used to understand the performance of algorithms, but they focus on different aspects and are used in different contexts.
57 |
58 | Complexity analysis provides a high-level understanding of how an algorithm scales, often considering the worst-case scenario.
59 |
60 | Amortized analysis gives a more refined view of performance over a sequence of operations, averaging out the highs and lows to provide a more realistic expectation of how an algorithm or data structure performs in practice.
61 |
62 | ##### Amortized Analysis Methods
63 |
64 | - Aggregate
65 | - Accounting
66 | - Potential
67 |
68 | ##### Amortized Analysis Method: Aggregate Method
69 |
70 | Essence: This method sums the costs over the entire sequence of operations and takes the average. It often demonstrates a guarantee that no sequence of n operations can be "too expensive."
71 |
72 | Example: If you have a sequence of operations where most are cheap but occasionally one is expensive, the aggregate method helps understand the averaged cost across the entire sequence.
73 |
74 | ##### Amortized Analysis Method: Accounting Method ~ Banker's Method
75 |
76 | Essence: This method uses a fictional "credit" system where you "charge" more for some operations so that you can "pay" for others that might exceed their actual costs. The idea is to prove that the total credit never goes negative.
77 | 78 | Example: In a dynamic array implementation, you might "charge" a little extra for insertion and "save" that to pay for the eventual resizing of the array. 79 | 80 | ##### Amortized Analysis Method: Potential Method 81 | 82 | Essence: The potential method uses a carefully constructed "potential function" that maps the state of the data structure to a real number representing "potential energy" or "stored work." The changes in potential between consecutive operations pay for the actual cost of the operations. 83 | 84 | Example: For a dynamic table operation, the potential might be proportional to the difference between the number of items and the size of the allocated array. 85 | 86 | ##### Amortized Analysis Methods Differences 87 | 88 | Conceptual Framework: Aggregate looks at the whole sequence and takes an average, Accounting utilizes a credit system, and Potential relies on a mathematical function mapping states to "energy." 89 | 90 | Granularity: Aggregate is often coarser and works with the entire sequence, whereas the other two methods look at the individual transitions between operations. 91 | 92 | Flexibility: The Potential Method is often more flexible and powerful but can be more complex to construct and justify, whereas the Aggregate Method is simpler but may be less insightful for some data structures. 93 | 94 | Use Cases: Accounting is often used when overcharging some operations to pay for others, Aggregate is applied for averaging the cost over a sequence, and Potential is applied when a careful construction of a potential function is suitable to represent the state of the operation. 95 | 96 | Overall, the selection of the method depends on the specific algorithm or data structure being analyzed, and the particular insights or guarantees one wants to achieve. Different methods might be more suitable for different scenarios. 97 | -------------------------------------------------------------------------------- /ads/concept/data_structures_hashing.md: -------------------------------------------------------------------------------- 1 | # :chart_with_upwards_trend: Data Structure : Hashing Concepts 2 | 3 | ##### DAT ~ Direct Address Table 4 | 5 | Type of data structure used to represent a universe of keys explicitly. Each key has its unique slot in the table, and the value associated with a key is stored in its specific slot. This eliminates the need for any hashing function. 6 | 7 | While DATs provide constant-time operations and are straightforward, they are only suitable for specific use cases due to their space requirements. It's essential to evaluate the application's needs and the universe of keys before choosing this data structure. 8 | 9 | ##### DAT: Algorithmic Complexity 10 | 11 | - **Insertion**: O(1) 12 | - **Deletion**: O(1) 13 | - **Search**: O(1) 14 | 15 | Since DAT uses direct addressing, it can achieve constant time complexity for these basic operations. 16 | 17 | ##### DAT: Applications 18 | 19 | - The universe of possible keys is relatively small, making it feasible to allocate memory for all potential keys. 20 | - Almost all keys from this universe are present, ensuring that the memory isn't wasted. 21 | - We need constant-time operations for insertion, deletion, and search. 22 | - There's no need for any advanced operations like predecessor, successor, etc. 23 | 24 | ##### DAT: Limitations 25 | 26 | - DATs can be memory-intensive if the universe of keys is large but only a small subset is used. 
27 | - Not suitable when the universe of keys is dynamic or very large, as it can lead to a lot of wasted space. 28 | -------------------------------------------------------------------------------- /ads/concept/other.md: -------------------------------------------------------------------------------- 1 | # :chart_with_upwards_trend: Data Structure Concepts 2 | 3 | ##### Tail Recursion 4 | 5 | Tail recursion is a special form of recursion where the recursive call is the last thing executed in the function. 6 | 7 | This has important implications for optimization because it allows the language's runtime to reuse the current function's stack frame for the recursive call, rather than creating a new one. 8 | 9 | ###### Tail Recursion Example 10 | 11 | ```rust 12 | fn print( n : i32 ) 13 | { 14 | if n < 0 15 | { 16 | return; 17 | } 18 | println!( "{}", n ); 19 | 20 | // The last executed statement is the recursive call 21 | print( n - 1 ); 22 | } 23 | ``` 24 | -------------------------------------------------------------------------------- /ads/concept/readme.md: -------------------------------------------------------------------------------- 1 | # :chart_with_upwards_trend: Concepts of Algorithms and Data Structures 2 | 3 | - [Complexity and Amortized Analysis](./complexity_and_amortized_analysis.md.md) 4 | - [Data Structure: Trees](./data_structure_trees.md) 5 | -------------------------------------------------------------------------------- /ads/concept/sorting_algorithms.md: -------------------------------------------------------------------------------- 1 | # :chart_with_upwards_trend: Algorithms : Sorting Concepts 2 | 3 | ##### Sorting Algorithms 4 | 5 | - Bubble Sort 6 | - Selection Sort 7 | - Insertion Sort 8 | - Merge Sort 9 | - Quick Sort 10 | - Heap Sort 11 | - Shell Sort 12 | - Radix Sort 13 | - Counting Sort 14 | - Bucket Sort 15 | 16 | ##### Bubble Sort 17 | 18 | Repeatedly compares adjacent elements and swaps them if they are in the wrong order. 19 | 20 | ``` 21 | [5, 3, 3, 4, 2] 22 | [5, 3, 3, 2, 4] 23 | [5, 3, 2, 3, 4] 24 | [5, 2, 3, 3, 4] 25 | [2, 3, 3, 4, 5] 26 | ``` 27 | 28 | ##### Selection Sort 29 | 30 | Finds the minimum element and places it at the start. 31 | 32 | ``` 33 | [5, 3, 3, 4, 2] 34 | [2, 3, 3, 4, 5] 35 | ``` 36 | 37 | ##### Insertion Sort 38 | 39 | Inserts each item into its correct position. 40 | 41 | ``` 42 | [5, 3, 3, 4, 2] 43 | [3, 5, 3, 4, 2] 44 | [3, 3, 5, 4, 2] 45 | [3, 3, 4, 5, 2] 46 | [2, 3, 3, 4, 5] 47 | ``` 48 | 49 | ##### Merge Sort 50 | 51 | Divides into sublists, then merges them. 52 | 53 | ``` 54 | Divide: [5, 3, 3, 4, 2] 55 | Merge: [2, 3, 3, 4, 5] 56 | ``` 57 | 58 | ##### Quick Sort 59 | 60 | Uses a pivot to partition elements. 61 | 62 | ``` 63 | Partition: [5, 3, 3, 4, 2] 64 | [2, 3, 3, 4, 5] 65 | ``` 66 | 67 | ##### Heap Sort 68 | 69 | Builds a heap and extracts elements. 70 | 71 | ``` 72 | Heapify: [5, 3, 3, 4, 2] 73 | [2, 3, 3, 4, 5] 74 | ``` 75 | 76 | ##### Shell Sort 77 | 78 | Insertion sort with a "gap." 79 | 80 | ``` 81 | [5, 3, 3, 4, 2] 82 | [2, 3, 3, 4, 5] 83 | ``` 84 | 85 | ##### Radix Sort 86 | 87 | Sorts numbers digit by digit. 88 | 89 | ``` 90 | Units: [5, 3, 3, 4, 2] 91 | Tens: [2, 3, 3, 4, 5] 92 | ``` 93 | 94 | ##### Counting Sort 95 | 96 | Counting sort is a non-comparison integer sorting algorithm that sorts elements based on their frequencies. 97 | 98 | The algorithm is efficient when the range of values (i.e., `k`) is not significantly larger than the number of values being sorted. 
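Before walking through the stages below, here is a minimal counting sort sketch in Rust. It is an illustrative snippet, not code from this repository, and it assumes the keys are small non-negative integers (`usize`).

```rust
// Minimal counting sort sketch for small non-negative integer keys.
fn counting_sort( input : &mut Vec< usize > )
{
  if input.is_empty()
  {
    return;
  }

  // Stage 1 : find the maximum value to size the count array.
  let max = *input.iter().max().unwrap();

  // Stages 2-3 : count occurrences of each value.
  let mut count = vec![ 0usize; max + 1 ];
  for &value in input.iter()
  {
    count[ value ] += 1;
  }

  // Stage 4 : turn the counts into cumulative counts.
  for i in 1..count.len()
  {
    count[ i ] += count[ i - 1 ];
  }

  // Stages 5-6 : place values into the output array, right to left, to keep the sort stable.
  let mut output = vec![ 0usize; input.len() ];
  for &value in input.iter().rev()
  {
    count[ value ] -= 1;
    output[ count[ value ] ] = value;
  }

  // Stage 7 : copy the output back into the input.
  *input = output;
}

fn main()
{
  let mut data = vec![ 4, 2, 2, 8, 3, 3, 1 ];
  counting_sort( &mut data );
  println!( "{:?}", data ); // prints [1, 2, 2, 3, 3, 4, 8]
}
```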
99 | 100 | ##### Counting Sort: Stages 101 | 102 | 1. *Find the Maximum Value*: Identify the maximum value in the input array to determine the range of counts. 103 | 2. *Initialize Count Array*: Create an array (count array) of zeros with length one more than the maximum value. 104 | 3. *Count Occurrences*: Iterate through the input, incrementing the corresponding index in the count array for each value. 105 | 4. *Calculate Cumulative Counts*: Replace each count with the cumulative sum of the previous counts in the count array. 106 | 5. *Create Output Array*: Create a new array (output array) for the sorted values. 107 | 6. *Place Values in Output Array*: Iterate through the input, placing values in the output array using the count array, and decrementing the count array accordingly. 108 | 7. *Copy Output Array to Input Array*: If needed, copy the output array back into the input array. 109 | 110 | ##### Counting Sort: Example 111 | 112 | *Input Array*: `[4, 2, 2, 8, 3, 3, 1]` 113 | 114 | - *Find Maximum Value*: `max = 8` 115 | - *Initialize Count Array*: `[0, 0, 0, 0, 0, 0, 0, 0, 0]` 116 | - *Count Occurrences*: `[0, 1, 2, 2, 1, 0, 0, 0, 1]` 117 | - *Calculate Cumulative Counts*: `[0, 1, 3, 5, 6, 6, 6, 6, 7]` 118 | - *Create Output Array*: `[_, _, _, _, _, _, _]` 119 | - *Place Values in Output Array*: `[1, 2, 2, 3, 3, 4, 8]` 120 | - *Copy Output Array to Input Array*: `[1, 2, 2, 3, 3, 4, 8]` 121 | 122 | ##### Counting Sort: Complexity 123 | 124 | - *Time Complexity*: `O(n + k)` 125 | - *Space Complexity*: `O(n + k)` 126 | - `n` ~ the number of elements, 127 | - `k` ~ the range of the input. 128 | 129 | ##### Bucket Sort 130 | 131 | Sorts elements into buckets. 132 | 133 | ``` 134 | Buckets: [5, 3, 3, 4, 2] 135 | [2], [3, 3], [4], [5] 136 | Result: [2, 3, 3, 4, 5] 137 | ``` 138 | -------------------------------------------------------------------------------- /ads/learn.md: -------------------------------------------------------------------------------- 1 | # :mortar_board: Learn Algorithms and Data Structures Together 2 | 3 | Awesome collection of learning materials to master modern Algorithms and Data Structures. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page. 
6 | 7 | 8 | 9 | 10 | ### Algorithms Complexity 11 | 12 | ( _complexity_ ) 13 | 14 | - :star: [Big-O Notation](https://www.youtube.com/watch?v=BgLTDT03QtU) by [Neet Code](https://www.youtube.com/@NeetCode) :movie_camera: 15 | - :star: [Big O Notation - Full Course](https://www.youtube.com/watch?v=Mo4vesaut8g) by [Free Code Camp](https://www.youtube.com/@freecodecamp) :movie_camera: 16 | - [What Is Big-O Notation?](https://www.youtube.com/watch?v=Q_1M2JaijjQ) by [Reducible](https://www.youtube.com/@Reducible/videos) :movie_camera: 17 | 18 | ### Sort Algorithms 19 | 20 | ( _sort_ ) 21 | 22 | - [3 Levels of Sorting Algorithms](https://www.youtube.com/watch?v=qk7b4-iyCJ4) by [kite](https://www.youtube.com/@KiteHQ) :movie_camera: ( _sort_ ) 23 | - [Quick Select](https://www.youtube.com/watch?v=XEmy13g1Qxc) by [NeetCode](https://www.youtube.com/@NeetCode) :movie_camera: ( _sort_ ) 24 | - [Quick Select](https://www.youtube.com/watch?v=BP7GCALO2v8) by [Techdose](https://www.youtube.com/@techdose4u) :movie_camera: ( _sort_ ) 25 | 26 | ### Hashing 27 | 28 | ( _hashing_ ) 29 | 30 | - [Hashing](https://www.youtube.com/playlist?list=PLEJXowNB4kPxWxRGSSn4qLdZm0h_XHqzt) by [Techdose](https://www.youtube.com/@techdose4u) :movie_camera: :mortar_board: ( _hashing_ ) 31 | 32 | ### Heap 33 | 34 | ( _tree_ ) ( _heap_ ) 35 | 36 | - [Heap Full Course](https://www.youtube.com/playlist?list=PLEJXowNB4kPyP2PdMhOUlTY6GrRIITx28) by [Techdose](https://www.youtube.com/@techdose4u) :movie_camera: :mortar_board: ( _tree_ ) ( _heap_ ) 37 | 38 | ### Tags legend 39 | 40 | - :movie_camera: - video material 41 | - :page_facing_up: - reading 42 | - :mortar_board: - online course with or without feedback 43 | - :chart_with_upwards_trend: - cheetsheets 44 | - ( _algorithm_ ) - algorithms 45 | - ( _complexity_ ) - complexity 46 | - ( _sort_ ) - sort algorithms 47 | - ( _hashing_ ) - hashing algorithms 48 | - ( _tree_ ) - tree data structure 49 | - ( _heap_ ) - heap data structure 50 | -------------------------------------------------------------------------------- /ads/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Algorithms and Data Structures Together 2 | 3 | Awesome collection of learning materials to master modern Algorithms and Data Structures. 4 | 5 | ## Here you can find 6 | 7 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master Algorithms and Data Structures. 8 | - __:old_key:__ [Comprehend](./concepts.md) : key concepts and dichotomies of Algorithms and Data Structures. 9 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheets.md) : cheatsheets on Algorithms and Data Structures. 10 | -------------------------------------------------------------------------------- /compilers_design/cheatsheet/chomsky.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/compilers_design/cheatsheet/chomsky.png -------------------------------------------------------------------------------- /compilers_design/cheatsheet/parsing.md: -------------------------------------------------------------------------------- 1 | # Parsing Theory Cheatsheet 2 | 3 | Cheatsheets for Parsing Theory. 
4 | 5 | 6 | 7 | 8 | 9 | ## Chomsky's Hierarchy of Grammars 10 | 11 | ![Chomsky's Hierarchy of Grammars](./chomsky.png) 12 | 13 | ## Ambiguity in CFG 14 | 15 | ```text 16 | 17 | Context Free Grammar: E → E + E | E × E | id 18 | 19 | Left Most Derivation: Right Most Derivation: Parse Tree Derivation: 20 | 21 | E ⇒ E + E E ⇒ E + E E 22 | ⇒ id + E ⇒ E + E × E / \ 23 | ⇒ id + E x E ⇒ E + E × id E + E 24 | ⇒ id + id x E ⇒ E + id × id | /\ 25 | ⇒ id + id x id ⇒ id + id × id id E * E 26 | | | 27 | id id 28 | 29 | E ⇒ E x E E ⇒ E x E E 30 | ⇒ E + E x E ⇒ E x id / \ 31 | ⇒ id + E x E ⇒ E + E x id E * E 32 | ⇒ id + id x E ⇒ E + id x id / \ | 33 | ⇒ id + id x id ⇒ id + id x id E + E id 34 | | | 35 | id id 36 | ``` 37 | 38 | ## Left Recursion / Right Recursion 39 | 40 | ```text 41 | 42 | Left Recursion ~ Left Associativity 43 | 44 | A → Aα | β 45 | 46 | A A A 47 | | / \ / \ 48 | β A α A α 49 | | / \ 50 | β A α 51 | | 52 | β 53 | 54 | Right Recursion ~ Right Associativity 55 | 56 | A → αA | β 57 | 58 | A A A 59 | | / \ / \ 60 | β α A α A 61 | | / \ 62 | β α A 63 | | 64 | β 65 | 66 | ``` 67 | -------------------------------------------------------------------------------- /compilers_design/concept/parsing.md: -------------------------------------------------------------------------------- 1 | # Key concepts of Parsing Theory 2 | 3 | Key concepts of Parsing Theory. 4 | 5 | 6 | 7 | 8 | 9 | ## Computational Models 10 | 11 | Theoretical frameworks used to study and understand the nature of computation. 12 | 13 | Computational models provide a formal structure for analyzing the capabilities and limitations of different computational processes. They explore fundamental questions about what can be computed, the resources required for computation, and the complexity of computational problems. These models serve as the foundation for understanding various aspects of computation, such as decidability, expressiveness, and efficiency. 14 | 15 | ### Classic Computational Models 16 | 17 | - **Turing Machines**: Model general computation and are capable of simulating any algorithmic process. 18 | - **Finite Automata**: Used for simpler computational tasks and recognize regular languages. 19 | - **Pushdown Automata**: Extend finite automata with a stack to recognize context-free languages. 20 | - **Lambda Calculus**: Provides a formal system for expressing computation through function abstraction and application. 21 | - **Register Machines**: Use a finite number of registers to perform computations. 22 | - **Cellular Automata**: Consist of a grid of cells, each in one of a finite number of states, evolving over discrete time steps according to a set of rules. 23 | - **Markov Algorithms**: A string rewriting system that provides a model of computation based on the application of rules to strings. 24 | - **Petri Nets**: A mathematical modeling language used for the description of distributed systems. 25 | 26 | Each of these models offers unique insights into different aspects of computation, and they are often used to compare the computational power and efficiency of various algorithms and systems. 27 | 28 | ## Abstract Machines 29 | 30 | Simplified, theoretical representations of computer systems used to analyze algorithms and programming languages. 31 | 32 | Abstract machines are a subset of computational models that focus on simulating the behavior of a computer or a specific computational process. 
They provide a high-level abstraction of how computations are executed, often ignoring the complexities of actual hardware. Abstract machines are used to study the execution of algorithms, the semantics of programming languages, and the efficiency of computational processes. Examples include the Turing machine, which is a universal model of computation, and the finite state machine, which models systems with a limited number of states. These machines are instrumental in understanding the practical implementation of computational theories. 33 | 34 | The key difference between computational models and abstract machines lies in their scope and application. **Computational models** are broad theoretical constructs that explore the fundamental principles and limits of computation, often focusing on the theoretical analysis of what can be computed. In contrast, **abstract machines** are specific instances of computational models that simulate the execution of computations, providing a practical framework for analyzing algorithms and programming languages. While computational models address the "what" and "why" of computation, abstract machines focus on the "how" of executing computational processes. 35 | 36 | ## Computational Models > Finite Automata 37 | 38 | Abstract machines used to recognize regular languages with a finite number of states. 39 | 40 | - **Memory**: Limited to the current state (no additional memory). 41 | - **Control Structure**: Finite set of states and transitions. 42 | - **Input/Output**: Processes input symbols to determine acceptance. 43 | - **Computation**: Based on state transitions for each input symbol. 44 | - **Universality**: Limited to recognizing regular languages. 45 | - **Equivalence**: Equivalent to regular expressions. 46 | 47 | ## Computational Models > Pushdown Automata 48 | 49 | Extend finite automata with a stack to recognize context-free languages. 50 | 51 | - **Memory**: Finite states plus a stack for additional memory. 52 | - **Control Structure**: Finite set of states and stack operations. 53 | - **Input/Output**: Reads input symbols and manipulates the stack. 54 | - **Computation**: Uses state transitions and stack operations. 55 | - **Universality**: Recognizes context-free languages, not universal. 56 | - **Equivalence**: Equivalent to context-free grammars. 57 | 58 | ## Computational Models > Turing Machine 59 | 60 | A theoretical model with an infinite tape and a set of rules for reading, writing, and moving the tape head. 61 | 62 | - **Memory**: Infinite tape providing unlimited memory. 63 | - **Control Structure**: Finite set of states and a transition function. 64 | - **Input/Output**: Reads and writes symbols on the tape. 65 | - **Computation**: Defined by state transitions and tape operations. 66 | - **Universality**: Capable of simulating any other computational model. 67 | - **Equivalence**: Equivalent to lambda calculus, register machines, and other universal models. 68 | 69 | ## Computational Models > Lambda Calculus 70 | 71 | A formal system for expressing computation through function abstraction and application. 72 | 73 | - **Memory**: Abstract representation through variables and functions. 74 | - **Control Structure**: Function abstraction and application. 75 | - **Input/Output**: Function application and variable substitution. 76 | - **Computation**: Based on function evaluation and substitution. 77 | - **Universality**: Equivalent to Turing machines in computational power. 
78 | - **Equivalence**: Equivalent to Turing machines and other universal models. 79 | 80 | ## Computational Models > Register Machines 81 | 82 | Abstract machines using a finite number of registers to perform computations. 83 | 84 | - **Memory**: Finite number of registers for storing data. 85 | - **Control Structure**: Instructions for manipulating register values. 86 | - **Input/Output**: Operates on data stored in registers. 87 | - **Computation**: Defined by a sequence of instructions. 88 | - **Universality**: Capable of simulating any Turing machine. 89 | - **Equivalence**: Equivalent to Turing machines and other universal models. 90 | 91 | ![Chomsky's Hierarchy of Grammars](../cheatsheet/chomsky.png) 92 | 93 | ##### Type 0: **Unrestricted Grammar** 94 | 95 | - **Production Rule**: (Σ ∪ N)+ → (Σ ∪ N)* 96 | - No restrictions on production rules. 97 | - Alternatively: α → β 98 | - α is a string of terminals and/or non-terminals with at least one non-terminal 99 | - β is a string of terminals and/or non-terminals 100 | - **Generative Power**: Can generate any language that can be recognized by a Turing machine. 101 | - **Computational Models**: Recognized by Turing machines. 102 | - **Memory**: Unlimited, allowing for complex computations and context sensitivity. 103 | - **Recognition Complexity**: Generally undecidable, as it can require arbitrary computation. 104 | - **Limitations**: Computationally expensive and difficult to implement for practical use. 105 | 106 | ##### Type 1: **Context-Sensitive Grammar** ~ **CSG** 107 | - **Production Rule**: αAβ → αγβ 108 | - A is a non-terminal 109 | - α, β, γ are strings of terminals and/or non-terminals 110 | - **Generative Power**: Can generate context-sensitive languages. 111 | - **Computational Models**: Recognized by linear bounded automata. 112 | - **Memory**: Limited but sufficient for context-sensitive rules. 113 | - **Recognition Complexity**: Polynomial time, as it is constrained by the linear bounded automaton. 114 | - **Limitations**: More complex than CFGs, making them harder to parse and implement. 115 | 116 | ##### Type 2: **Context-Free Grammar** ~ **CFG** 117 | - **Production Rule**: N → (Σ ∪ N)* 118 | - Non-terminals are replaced by combinations of terminals and non-terminals 119 | - Alternatively: A → γ 120 | - A is a non-terminal 121 | - γ is a string of terminals and/or non-terminals 122 | - **Generative Power**: Can generate context-free languages. 123 | - **Computational Models**: Recognized by pushdown automata. 124 | - **Memory**: Uses a stack, allowing for nested and recursive structures. 125 | - **Recognition Complexity**: Generally cubic time, O(n^3), but can be optimized to O(n^2) or O(n) for specific grammars. 126 | - **Limitations**: Cannot handle context-sensitive languages or enforce certain constraints. 127 | 128 | ##### Type 3: **Regular Grammar** 129 | - **Production Rule**: N → Σ * N? 130 | - Production rules replace non-terminals with a terminal followed by an optional non-terminal. 131 | - Can be either Right Linear Grammar (RLG) or Left Linear Grammar (LLG). 132 | - **Right Linear Grammar (RLG)**: A → aB or A → a 133 | - A and B are non-terminals 134 | - a is a terminal 135 | - **Left Linear Grammar (LLG)**: A → Ba or A → a 136 | - A and B are non-terminals 137 | - a is a terminal 138 | - **Generative Power**: Can generate regular languages. 139 | - **Computational Models**: Recognized by finite automata. 140 | - **Memory**: Limited to current state, no additional memory or stack. 
141 | - **Recognition Complexity**: Linear time, O(n), as it is processed by finite automata. 142 | - **Limitations**: Cannot handle nested or recursive structures, counting, or context sensitivity. 143 | 144 | ## RLG ~ Right Linear Grammar 145 | 146 | A type of regular grammar where production rules are of the form A → aB or A → a, with non-terminals appearing on the right side of the terminal. Where A and B are non-terminals and a is a terminal. 147 | 148 | ## LLG ~ Left Linear Grammar 149 | 150 | A type of regular grammar where production rules are of the form A → Ba or A → a, with non-terminals appearing on the left side of the terminal. Where A and B are non-terminals and a is a terminal. 151 | 152 | ## Left Most Derivative ~ Leftmost Derivation 153 | 154 | A process in formal grammar where the leftmost non-terminal in a string is replaced first during each step of the derivation sequence. This concept is particularly relevant to context-free grammars (CFGs) and is commonly used in parsing techniques like LL parsers, which read input from left to right and construct a leftmost derivation of the sentence. 155 | 156 | ## Right Most Derivative ~ Rightmost Derivation 157 | 158 | A process in formal grammar where the rightmost non-terminal in a string is replaced first during each step of the derivation sequence. This is especially relevant to context-free grammars (CFGs) and is used in parsing techniques like LR parsers, which read input from left to right and construct a rightmost derivation in reverse. 159 | 160 | ## Context-Free Grammar / Regular Grammar 161 | 162 | **Regular Grammar** 163 | 164 | - **Definition**: Regular grammars have rules where a non-terminal is replaced by a terminal, optionally followed by another non-terminal. 165 | - **Expressive Power**: Less powerful than CFGs; cannot handle nested structures. 166 | - **Parsing**: Uses simple and efficient algorithms like finite state machines or regular expression engines. 167 | - **Applications**: Used in lexical analysis to define the structure of tokens in programming languages and in text processing tools like grep and sed. 168 | 169 | **Context-Free Grammar (CFG)** 170 | 171 | - **Definition**: CFGs have production rules where a single non-terminal is replaced by a string of terminals and/or non-terminals. 172 | - **Expressive Power**: More powerful than regular grammars; can describe languages with nested structures, such as balanced parentheses. 173 | - **Parsing**: Requires more complex parsing techniques, such as LL, LR, or CYK parsers, to handle recursive and nested structures. 174 | - **Applications**: Widely used in the design of programming languages and compilers, where the syntax of the language is often context-free. 175 | 176 | **Example of CFG**: Balanced Parentheses 177 | - Grammar: 178 | - S → empty string 179 | - S → (S) 180 | - S → SS 181 | 182 | This CFG can generate all strings of balanced parentheses, a task that regular grammars cannot accomplish due to their inability to handle nested or recursive patterns. 183 | 184 | ## Ambiguity in CFG 185 | 186 | Ambiguity in grammar occurs when a single string can be generated by a grammar in more than one distinct way, resulting in multiple parse trees or derivations. This means that the grammar allows for more than one interpretation of the structure of the string. 
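To make the idea concrete, the same token string `id + id × id` can be represented by two different abstract syntax trees under the grammar `E → E + E | E × E | id`. Below is a hypothetical sketch in Rust (the `Expr` type is illustrative only, not part of any parser discussed here); the derivations that follow show the same two trees.

```rust
// Two distinct parse trees for the string `id + id × id`
// under the ambiguous grammar E → E + E | E × E | id.
enum Expr
{
  Id,
  Add( Box< Expr >, Box< Expr > ),
  Mul( Box< Expr >, Box< Expr > ),
}

fn main()
{
  // Interpretation 1 : id + ( id × id )
  let tree_1 = Expr::Add
  (
    Box::new( Expr::Id ),
    Box::new( Expr::Mul( Box::new( Expr::Id ), Box::new( Expr::Id ) ) ),
  );

  // Interpretation 2 : ( id + id ) × id
  let tree_2 = Expr::Mul
  (
    Box::new( Expr::Add( Box::new( Expr::Id ), Box::new( Expr::Id ) ) ),
    Box::new( Expr::Id ),
  );

  // Both values derive the same input string, which is exactly what makes the grammar ambiguous.
  let _ = ( tree_1, tree_2 );
}
```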
187 | 188 | Ambiguity is particularly relevant to context-free grammars (CFGs) because they are often used to define the syntax of programming languages, where unambiguous interpretation is crucial. However, ambiguity can also occur in other types of grammars, but it is most commonly discussed in the context of CFGs due to their widespread use in language parsing and compiler design. 189 | 190 | ```text 191 | 192 | Context Free Grammar: E → E + E | E × E | id 193 | 194 | Left Most Derivation: Right Most Derivation: Parse Tree Derivation: 195 | 196 | E ⇒ E + E E ⇒ E + E E 197 | ⇒ id + E ⇒ E + E × E / \ 198 | ⇒ id + E x E ⇒ E + E × id E + E 199 | ⇒ id + id x E ⇒ E + id × id | /\ 200 | ⇒ id + id x id ⇒ id + id × id id E * E 201 | | | 202 | id id 203 | 204 | E ⇒ E x E E ⇒ E x E E 205 | ⇒ E + E x E ⇒ E x id / \ 206 | ⇒ id + E x E ⇒ E + E x id E * E 207 | ⇒ id + id x E ⇒ E + id x id / \ | 208 | ⇒ id + id x id ⇒ id + id x id E + E id 209 | | | 210 | id id 211 | ``` 212 | 213 | ## Parsing Expression Grammar ~ PEG 214 | 215 | A formal grammar framework used to describe the syntax of languages, offering an alternative to Context-Free Grammars (CFGs) with unique features. 216 | 217 | ## PEG / CFG 218 | 219 | PEGs offer a powerful alternative to CFGs for defining and implementing parsers, especially when deterministic parsing is required. 220 | 221 | PEGs differ from CFGs in that they use a deterministic choice operator, which eliminates ambiguity by always selecting the first successful match in a sequence of alternatives. This makes PEGs particularly useful for programming language parsers, as they can handle complex syntax rules without ambiguity. PEGs are often implemented using **packrat parsers**, which provide linear-time parsing by caching intermediate results. 222 | 223 | PEGs are well-suited for tasks where precise control over parsing decisions is required, and they allow for more straightforward implementation of certain language features compared to CFGs. 224 | 225 | - **Determinism**: PEGs use ordered choice, ensuring deterministic parsing by selecting the first successful match, which can lead to the prefix capture problem where earlier alternatives capture input, requiring careful rule ordering. CFGs allow ambiguity with multiple parse trees for the same input string. 226 | - **Parsing Strategy**: PEGs typically use packrat parsers for linear-time parsing by caching results, whereas CFGs use strategies like LL, LR, or CYK, which may involve backtracking or complex table-driven methods. 227 | - **Expressive Power**: PEGs can express all deterministic context-free languages and some languages CFGs cannot due to ambiguity, while CFGs can express a broader range of context-free languages, including ambiguous ones. 228 | - **Grammar**: PEGs include constructs like sequence, ordered choice, repetition, and lookahead predicates, whereas CFGs use production rules that map non-terminals to strings of terminals and non-terminals. 229 | - **Complexity**: PEGs achieve linear-time complexity for deterministic parsing with packrat parsing, while CFGs' best parsing algorithms, like CYK, have a time complexity of cubic time, though deterministic parts of CFGs can run in linear time. 230 | - **Semantics**: PEGs have intensional semantics, where the language is defined by the sentences recognized, similar to top-down recursive-descent parsers. CFGs have extensional semantics, where the language is the set of sentences obtained by derivation. 
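As a rough illustration of how ordered choice removes ambiguity, here is a hypothetical Rust sketch (not taken from any parser library): alternatives are tried in order and the first success is committed to, so later alternatives are never consulted.

```rust
// Sketch of PEG-style ordered choice over a list of alternative parsers.
fn parse_choice< T >( input : &str, alternatives : &[ fn( &str ) -> Option< T > ] ) -> Option< T >
{
  for &alternative in alternatives
  {
    if let Some( result ) = alternative( input )
    {
      // Deterministic : commit to the first alternative that succeeds.
      return Some( result );
    }
  }
  None
}

// Illustrative alternatives : try to read a number, otherwise fall back to zero.
fn parse_number( input : &str ) -> Option< i64 >
{
  input.trim().parse().ok()
}

fn parse_zero( _input : &str ) -> Option< i64 >
{
  Some( 0 )
}

fn main()
{
  let parsers : [ fn( &str ) -> Option< i64 > ; 2 ] = [ parse_number, parse_zero ];
  println!( "{:?}", parse_choice( "42", &parsers ) );  // Some(42)
  println!( "{:?}", parse_choice( "abc", &parsers ) ); // Some(0), the fallback alternative
}
```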
231 | 232 | ###### Origin 233 | 234 | The term "Parsing Expression Grammar" was introduced by Bryan Ford in his 2004 paper, proposing PEGs as an alternative to CFGs for deterministic and unambiguous syntax specification. 235 | 236 | ###### Example 237 | 238 | A simple PEG for arithmetic expressions: 239 | 240 | ``` 241 | Expression <- Term (('+' / '-') Term)* 242 | Term <- Factor (('*' / '/') Factor)* 243 | Factor <- Number / '(' Expression ')' 244 | Number <- [0-9]+ 245 | ``` 246 | 247 | ## PEG - Key Characteristics 248 | 249 | 1. **Deterministic Parsing**: PEGs are designed to be deterministic, eliminating ambiguity through ordered choice, where the first matching alternative is selected. 250 | 251 | 2. **Syntax**: Uses constructs like sequence, ordered choice, repetition, and lookahead predicates (not and and) to define grammar rules. 252 | 253 | 3. **Expressive Power**: Can express all deterministic context-free languages and some languages CFGs cannot due to ambiguity, but not all context-sensitive languages. 254 | 255 | 4. **Applications**: Used in parsing programming languages, data formats, and structured text, particularly where deterministic parsing is desired. 256 | 257 | 5. **Advantages**: Provides a clear, concise way to define grammars without ambiguity, allowing for straightforward parsing algorithms without backtracking. 258 | 259 | ## Left Recursion / Right Recursion 260 | 261 | ```text 262 | 263 | Left Recursion ~ Left Associativity 264 | 265 | A → Aα | β 266 | 267 | A A A 268 | | / \ / \ 269 | β A α A α 270 | | / \ 271 | β A α 272 | | 273 | β 274 | 275 | Right Recursion ~ Right Associativity 276 | 277 | A → αA | β 278 | 279 | A A A 280 | | / \ / \ 281 | β α A α A 282 | | / \ 283 | β α A 284 | | 285 | β 286 | 287 | ``` 288 | 289 | ## Elimination of Left Recursion 290 | 291 | If left recursion is indirect (involving multiple non-terminals), the grammar must first be transformed to remove indirect recursion by reordering or rewriting rules. 292 | 293 | Direct left recursion occurs when a non-terminal directly calls itself on the left side of its production. By eliminating left recursion, the grammar becomes suitable for top-down parsing methods, which require non-left-recursive ( in case of parsing from left to right ) grammars to function correctly. 294 | 295 | ```text 296 | 297 | Original Grammar Rule: 298 | A → Aα | β 299 | 300 | Transformed Grammar to Remove Left Recursion: 301 | A → βA' 302 | A' → αA' | ε 303 | ``` 304 | 305 | ## Left Refactoring 306 | 307 | Left factoring is a technique used to transform a grammar to convert it into deteministic one to make it suitable for predictive parsing, such as LL parsers. 308 | 309 | It is applied when a grammar has production rules with common prefixes, which can cause ambiguity in deciding which production to use during parsing. Left factoring helps eliminate this ambiguity by restructuring the grammar. But bear in mind that deterministic grammar does not guarantee unambiguous. 310 | 311 | Left Factoring Process 312 | 313 | ```text 314 | Identify Common Prefixes: 315 | Look for non-terminals with multiple productions that share a common prefix. For example, consider the productions: 316 | A → αβ₁ 317 | A → αβ₂ 318 | 319 | Factor Out the Common Prefix: 320 | Introduce a new non-terminal to represent the common prefix and rewrite the productions. For the example above: 321 | Introduce a new non-terminal A'. 
322 | Rewrite the productions as: 323 | A → αA' 324 | A' → β₁ | β₂ 325 | 326 | Example 327 | 328 | Original Grammar: 329 | A → aAB | aBc | aAc 330 | 331 | Intermidiary Step: 332 | A → aA' 333 | A' → AB | Ac | Bc 334 | 335 | Refactored Grammar: 336 | A → aA' 337 | A' → AA'' | Bc 338 | A'' → B | c 339 | 340 | Ambigious Example 341 | 342 | Original Grammar: 343 | S → iEtS | iEtSeS | a 344 | E → b 345 | 346 | Refactored Grammar: 347 | S → iEtSS' | a 348 | S' → ε | eS 349 | E → b 350 | 351 | Example 352 | 353 | Original Grammar: 354 | S → aSSbS | aSaSb | abb | b 355 | 356 | Intermidiary Step: 357 | S → aS' | b 358 | S' → SSbS | SaSb | bb 359 | 360 | Refactored Grammar: 361 | S → aS' | b 362 | S' → SS'' | bb 363 | S'' → SbS | aSb 364 | 365 | Example 366 | 367 | Original Grammar: 368 | S → bSSaaS | bSSaSb | bSb | a 369 | 370 | Intermidiary Step: 371 | S → bS' | a 372 | S' → SaaS | SaSb | b 373 | 374 | Refactored Grammar: 375 | S → bSS' | a 376 | S' → SaS'' | b 377 | S'' → aS | Sb 378 | 379 | Example 380 | 381 | Original Grammar: 382 | S → a | ab | abc | abcd 383 | 384 | Refactored Grammar: 385 | S → aS' 386 | S' → bS'' | ε 387 | S'' → cS''' | ε 388 | S''' → d | ε 389 | 390 | ``` 391 | 392 | ## First, Follow Functions 393 | 394 | The First function helps determine which terminal symbols can appear at the beginning of strings derived from a given grammar symbol. 395 | 396 | The Follow function determines which terminal symbols can appear immediately after a non-terminal in some "sentential" form. 397 | 398 | > Example 399 | 400 | | Production | FIRST | FOLLOW | 401 | |------------|-----------|----------| 402 | | S → ABCDE | {a, b, c} | {$} | 403 | | A → a | {a, ε} | {b, c} | 404 | | B → b | {b, ε} | {c} | 405 | | C → c | {c} | {d, e, $}| 406 | | D → d | {d, ε} | {e, $} | 407 | | E → e | {e, ε} | {$} | 408 | 409 | > Example 410 | 411 | | Production | FIRST | FOLLOW | 412 | |------------|----------|----------| 413 | | S → aBDh | {a} | {$} | 414 | | B → c | {c} | {g, f, h}| 415 | | C → bC ε | {b, ε} | {g, f, h}| 416 | | D → EF | {g, f, ε}| {g, f, h}| 417 | | E → g ε | {g, ε} | {f, h} | 418 | | F → f ε | {f, ε} | {h} | 419 | -------------------------------------------------------------------------------- /compilers_design/concept/parsing_algorithm.md: -------------------------------------------------------------------------------- 1 | # Parsing Algorithms 2 | 3 | ## CFG Parsing Methods Expressivity Comparison 4 | 5 | A comparison of the expressivity and restrictions of various context-free grammar (CFG) parsing methods. 6 | 7 | ##### Grammars Restrictions 8 | - **CFG > LR(k) > LR(1) > LALR(1) > SLR(1)**: Context-Free Grammars (CFGs) are the most expressive, with each subsequent method imposing more restrictions on the grammar. 9 | - **LR(k) > LL(k) > LL(1)**: LR(k) parsers can handle a broader range of grammars compared to LL(k) parsers, with LL(1) being the most restrictive among them. 10 | - **LR(1) > LL(1)**: LR(1) parsers are more expressive than LL(1) parsers, allowing for a wider range of grammars. 11 | - **LALR(1) != LL(1)**: LALR(1) and LL(1) parsers are not directly comparable in terms of expressivity, as they handle different types of grammar restrictions. 12 | 13 | ##### Languages Restrictions 14 | - **CFG > LR(k)**: Context-Free Grammars can describe a broader set of languages than LR(k) parsers. 15 | - **LR(k) = LR(1) = LALR(1) = SLR(1)**: These parsing methods are equivalent in terms of the languages they can recognize, despite differences in grammar restrictions. 
16 | - **LR(1) > LL(k) > LL(1)**: LR(1) parsers can recognize a broader set of languages compared to LL(k) and LL(1) parsers. 17 | - **LALR(1) != LL(1)**: LALR(1) and LL(1) parsers are not directly comparable in terms of the languages they can recognize, as they are designed for different parsing strategies. 18 | 19 | ## Traversing Order > Top-down Parser / Bottom-up Parser 20 | 21 | Two parsing methods distinguished by the order in which they process grammar rules and input strings. 22 | 23 | **Top-down parsers** start from the highest-level rule of a grammar and recursively break down the rules to match the input string, often using techniques like recursive descent or LL parsing. They are easier to implement but may have limitations with left-recursive grammars. 24 | 25 | **Bottom-up parsers** begin with the input string and work upwards to construct the parse tree by identifying and reducing substrings to non-terminals, using techniques like LR parsing. They are more robust and can handle a broader range of grammars, including left-recursive ones, but are generally more complex to implement. 26 | 27 | ## Comparison of Operator-precedence Parsers > 28 | 29 | - **Precedence Climbing**: Recursive, suitable for simple expression parsing. 30 | - **Shunting Yard**: Non-recursive, stack-based, ideal for infix to postfix conversion. 31 | - **Pratt Parsing**: Flexible, efficient, handles complex precedence and associativity in programming languages. 32 | 33 | ## Precedence Climbing Method 34 | 35 | - **Family**: Operator-precedence parser 36 | - **Traversing Order**: Top-down 37 | - **Complexity**: Generally O(n) for parsing expressions, though recursive calls can introduce minor inefficiencies 38 | - **Grammar**: Suitable for LR(1) grammars where two consecutive non-terminals and epsilon never appear in the right-hand side of any rule 39 | - **Overview**: A recursive approach to parsing expressions based on operator precedence. It is a top-down parsing technique. 40 | - **How it Works**: Processes tokens from left to right, recursively descending into sub-expressions for operators with higher precedence. Uses a recursive function to handle operators and operands. 41 | - **Use Case**: Often used in hand-written parsers for expressions with well-defined operator precedence. 42 | - **Working Example**: 43 | ``` 44 | E -> E + T | T 45 | T -> T * F | F 46 | F -> ( E ) | number 47 | ``` 48 | - **Broken Example**: 49 | ``` 50 | E -> E E | epsilon 51 | ``` 52 | 53 | ## Shunting Yard Algorithm 54 | 55 | - **Family**: Operator-precedence parser 56 | - **Traversing Order**: Bottom-up 57 | - **Complexity**: Generally O(n) for parsing expressions 58 | - **Grammar**: Suitable for LR(1) grammars where two consecutive non-terminals and epsilon never appear in the right-hand side of any rule 59 | - **Overview**: Developed by Edsger Dijkstra, it is a non-recursive method for parsing infix expressions, converting them to postfix (RPN) or evaluating them. 60 | - **How it Works**: Uses a stack for operators and an output queue for the expression. Processes tokens, respecting operator precedence and associativity, and pops operators to the output queue when necessary. 61 | - **Use Case**: Widely used in calculators and interpreters for converting infix to postfix notation. 
62 | - **Working Example**: 63 | ``` 64 | E -> E + T | T 65 | T -> T * F | F 66 | F -> ( E ) | number 67 | ``` 68 | - **Broken Example**: 69 | ``` 70 | E -> E E | epsilon 71 | ``` 72 | 73 | ## Pratt Parsing 74 | 75 | - **Family**: Operator-precedence parser 76 | - **Traversing Order**: Top-down 77 | - **Complexity**: Generally O(n) for parsing expressions 78 | - **Grammar**: Flexible with operator precedence and associativity; suitable for LR(1) grammars where two consecutive non-terminals and epsilon never appear in the right-hand side of any rule 79 | - **Overview**: A top-down operator precedence parsing technique introduced by Vaughan Pratt. It is flexible and efficient. 80 | - **How it Works**: Uses parsing functions associated with token types. Processes tokens based on precedence, using prefix and infix functions to handle expressions. Decides functions dynamically based on token and precedence. 81 | - **Use Case**: Useful in programming language interpreters and compilers for handling complex operator precedence and associativity. 82 | - **Working Example**: 83 | ``` 84 | E -> E + T | T 85 | T -> T * F | F 86 | F -> ( E ) | number 87 | ``` 88 | - **Broken Example**: 89 | ``` 90 | E -> E E | epsilon 91 | ``` 92 | 93 | ## Recursive Descent Parser 94 | 95 | A top-down parsing technique that uses a set of recursive procedures to process the input according to a grammar's production rules. 96 | 97 | Recursive descent parsers are straightforward to implement and involve writing a separate function for each non-terminal in the grammar. These functions recursively call each other to match the input string against the grammar rules, making decisions based on the current input token. While they are intuitive and easy to understand, recursive descent parsers can struggle with left-recursive grammars, which can lead to infinite recursion. 98 | 99 | To handle left recursion, grammars often need to be transformed or rewritten. Despite this limitation, recursive descent parsers are popular for their simplicity and are often used in educational settings and for small to medium-sized language parsers. 100 | 101 | - **Family**: Top-down parser 102 | - **Traversing Order**: Top-down 103 | - **Complexity**: Typically O(n) for LL(1) grammars, but can become exponential with certain grammars and inputs requiring extensive backtracking 104 | - **Grammar**: Suitable for LL(1) grammars; left recursion must be eliminated, and epsilon productions are typically not allowed 105 | - **Overview**: A top-down parsing technique that uses a set of recursive procedures to process the input according to a grammar's production rules. 106 | - **How it Works**: Involves writing a separate function for each non-terminal in the grammar, with these functions recursively calling each other to match the input string against the grammar rules. Decisions are made based on the current input token. 107 | - **Use Case**: Popular for their simplicity, they are often used in educational settings and for small to medium-sized language parsers. 108 | - **Working Example**: 109 | ``` 110 | E -> T E' 111 | E' -> + T E' | epsilon 112 | T -> F T' 113 | T' -> * F T' | epsilon 114 | F -> ( E ) | number 115 | ``` 116 | - **Broken Example**: 117 | ``` 118 | E -> E + T | T 119 | ``` 120 | 121 | ## Packrat Parsing 122 | 123 | A parsing technique that efficiently implements Parsing Expression Grammars (PEGs) using memoization to achieve linear-time performance. 
124 | 125 | **Packrat parsers** store intermediate parsing results in a memoization table, avoiding redundant computations and backtracking, which are common in other parsing methods. This ensures each part of the input is processed only once, leading to predictable and efficient parsing times, even for complex grammars. The trade-off is increased memory usage due to the storage of parsing results. 126 | 127 | These parsers are particularly well-suited for implementing PEGs, as they can handle the deterministic choice operator and other PEG-specific constructs without ambiguity or performance degradation. By leveraging memoization, packrat parsers provide a robust solution for parsing tasks that require precise control over grammar rules and parsing decisions. 128 | 129 | - **Family**: Parsing Expression Grammar (PEG) parser 130 | - **Traversing Order**: Top-down 131 | - **Complexity**: Linear time, O(n), due to memoization 132 | - **Grammar**: Suitable for PEGs, which allow for deterministic parsing without ambiguity; can handle left recursion 133 | - **Overview**: A top-down parsing technique that efficiently implements PEGs using memoization to achieve linear-time performance. 134 | - **How it Works**: Stores intermediate parsing results in a memoization table to avoid redundant computations and backtracking, ensuring each part of the input is processed only once. This approach handles deterministic choice operators and other PEG-specific constructs without ambiguity. 135 | - **Use Case**: Ideal for programming language parsers where precise control over parsing decisions is required, providing robust solutions for complex grammar parsing tasks. 136 | - **Working Example**: 137 | ``` 138 | Expr <- Term (('+' / '-') Term)* 139 | Term <- Factor (('*' / '/') Factor)* 140 | Factor <- '(' Expr ')' / number 141 | ``` 142 | - **Broken Example**: 143 | ``` 144 | Expr <- Expr '+' Expr / epsilon 145 | ``` 146 | ``` 147 | A <- aAa | bAb | a | b | ε 148 | ``` 149 | -------------------------------------------------------------------------------- /compilers_design/concept/types_theory.md: -------------------------------------------------------------------------------- 1 | # Key concepts of Parsing Theory 2 | 3 | Cheatsheets for Parsing Theory. 4 | 5 | 6 | 7 | 8 | 9 | ## **Type Analysis** > Type Checking / Type Inference 10 | 11 | Processes that ensure type correctness and deduce missing type information in programming. 12 | 13 | **Type Checking** verifies adherence to type rules, catching errors at compile time, while **Type Inference** deduces missing type information, reducing the need for explicit type annotations. 14 | 15 | ## **Typing Strength** > Strongly Typed Languages / Weakly Typed Languages 16 | 17 | The strictness of type enforcement in programming languages affects error detection and conversion behavior. 18 | 19 | **Strongly Typed Languages** enforce strict type rules, preventing implicit conversions and catching errors early, whereas **Weakly Typed Languages** allow implicit conversions, which can lead to unexpected behaviors. 20 | 21 | ## **Typing Explicitness** ~ **Level of Inference** > Implicit Typing / Explicit Typing 22 | 23 | The level of type inference required in programming languages determines the explicitness of type declarations. 24 | 25 | **Implicit Typing** relies on type inference to deduce types without explicit declarations, while **Explicit Typing** requires programmers to declare types explicitly, reducing reliance on inference. 
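A minimal illustration in Rust (a hypothetical snippet, independent of any particular compiler discussed here):

```rust
fn main()
{
  // Implicit typing : the type of `inferred` is deduced as i32 by type inference.
  let inferred = 5;

  // Explicit typing : the programmer writes the type annotation.
  let declared : i32 = 5;

  println!( "{}", inferred + declared );
}
```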
26 | 27 | ## **Typing Timing** > Dynamic Language / Statically Typed Language 28 | 29 | The timing of type checking in programming languages influences flexibility and error detection. 30 | 31 | **Dynamic Languages** perform type checking at runtime, offering flexibility but potentially leading to runtime errors, whereas **Statically Typed Languages** perform type checking at compile time, ensuring type safety before execution. 32 | 33 | ## Symbol Table 34 | 35 | A data structure used by compilers to store information about identifiers and scope. 36 | 37 | Symbol tables are crucial in the compilation process, as they keep track of variable names, function names, objects, and other identifiers, along with their associated attributes such as type, scope, and memory location. This information is used during semantic analysis and code generation to ensure correct program execution and to optimize resource allocation. 38 | 39 | Symbol tables are typically implemented as hash tables or linked lists, allowing efficient insertion, deletion, and lookup operations. They are often organized hierarchically to reflect the scope of identifiers, with nested tables representing nested scopes in the source code. -------------------------------------------------------------------------------- /compilers_design/learn.md: -------------------------------------------------------------------------------- 1 | # :mortar_board: Learn Parsing Theory 2 | 3 | Curated collection of lists of useful resources to master Parsing Theory together. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page. 6 | 7 | 8 | 9 | 10 | ## Basics - 0 11 | 12 | ( _introductory_ ) ( _level_0_ ) 13 | 14 | - [Compiler Design](https://www.youtube.com/playlist?list=PLBlnK6fEyqRjT3oJxFXRgjPNzeS-LFY-q) by [Neso Academy](https://www.youtube.com/@nesoacademy/playlists) :mortar_board: :movie_camera: 15 | 16 | ## Tags legend 17 | 18 | ##### Kind of resource 19 | 20 | - :movie_camera: - video material to watch 21 | - :page_facing_up: - reading 22 | - :book: - a book 23 | - :mortar_board: - online course with or without feedback 24 | - :chart_with_upwards_trend: - cheat sheets 25 | - :card_file_box: - reference or manual or a standard 26 | - :open_file_folder: - collections of collections 27 | - :pirate_flag: - non-english 28 | - :page_facing_up: - either single article or single video-tutorial 29 | - :building_construction: - ideas for inspiration of mini-projects to add to your portfolio 30 | - :moneybag: - paid 31 | - 🔽 - download 32 | - ⚡ - practice, it is possible to interact and get feedback from the system 33 | - ( _official_ ) - official material 34 | - ( _blog_ ) - blogs 35 | - ( _example_ ) - code sample that can be executed 36 | 37 | ##### Specific Domain 38 | 39 | - ( _introductory_ ) - introductory learning material 40 | 41 | ##### Specific Technology 42 | 43 | - ( _unity_ ) - related to Unity 44 | -------------------------------------------------------------------------------- /compilers_design/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Parsing Theory Together 2 | 3 | Awesome collection of learning materials to master modern Compiler Design. 4 | 5 | ## What is this about? 6 | 7 | This repository contains nearly a dozen curated collections: learning materials, toolboxes, newspapers, working groups, collection of other collections. Everything you will find useful if you are interested in Parsing Theory.
8 | 9 | Here you can find: 10 | 11 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master Parsing Theory. 12 | - __:old_key:__ [Comprehend](./concept/parsing.md) : key concepts and dichotomies of Parsing Theory. 13 | - __:old_key:__ [Comprehend](./concept/types_theory.md) : key concepts and dichotomies of Types Theory. 14 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheet/parsing.md) : cheatsheets on Parsing Theory. 15 | -------------------------------------------------------------------------------- /cs/cheatsheet/aas_1.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/cs/cheatsheet/aas_1.webp -------------------------------------------------------------------------------- /cs/cheatsheet/aas_2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/cs/cheatsheet/aas_2.jpg -------------------------------------------------------------------------------- /cs/cheatsheet/cap_theorem.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/cs/cheatsheet/cap_theorem.png -------------------------------------------------------------------------------- /cs/cheatsheet/concurrent_programming.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/cs/cheatsheet/concurrent_programming.jpeg -------------------------------------------------------------------------------- /cs/cheatsheet/horizontal_vs_vertical.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/cs/cheatsheet/horizontal_vs_vertical.png -------------------------------------------------------------------------------- /cs/cheatsheet/isolation_levels_vs_read_phenomena.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/cs/cheatsheet/isolation_levels_vs_read_phenomena.jpg -------------------------------------------------------------------------------- /cs/cheatsheet/latencies.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/cs/cheatsheet/latencies.jpg -------------------------------------------------------------------------------- /cs/cheatsheet/resilience_patterns.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/cs/cheatsheet/resilience_patterns.png -------------------------------------------------------------------------------- /cs/cheatsheet/sharding_partitiioning.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/cs/cheatsheet/sharding_partitiioning.png -------------------------------------------------------------------------------- /cs/cheatsheets.md: -------------------------------------------------------------------------------- 1 | # Key concepts of Computer Science 2 | 3 | Cheatsheets for Computer Science. 4 | 5 | 6 | 7 | 8 | 9 | ## Asynchronous vs Multithreading 10 | 11 | ![Asynchronous vs Multithreading](./cheatsheet/concurrent_programming.jpeg) 12 | 13 | ## Latencies 14 | 15 | ![Latencies](./cheatsheet/latencies.jpg) 16 | 17 | ## CAP Theorem 18 | 19 | ![CAP Theorem](./cheatsheet/cap_theorem.png) 20 | 21 | ## Horizontal / Vertical scaling 22 | 23 | ![Horizontal vs Vertical scaling](./cheatsheet/horizontal_vs_vertical.png) 24 | 25 | ## Sharding ~ database partitioning 26 | 27 | ![Sharding ~ database partitioning](./cheatsheet/sharding_partitiioning.png) 28 | 29 | ## Resilience design patterns 30 | 31 | ![Resilience design patterns](./cheatsheet/resilience_patterns.png) 32 | 33 | ## Isolation levels vs read phenomena 34 | 35 | ![Isolation levels vs read phenomena](./cheatsheet/isolation_levels_vs_read_phenomena.jpg) 36 | 37 | ## IaaS vs PaaS vs SaaS 38 | 39 | ![IaaS vs PaaS vs SaaS](./cheatsheet/aas_2.jpg) 40 | ![IaaS vs PaaS vs SaaS](./cheatsheet/aas_1.webp) 41 | 42 | ## Cloud Benefits 43 | 44 | ![Cloud Benefits](./cheatsheet/cloud_benefits.svg) 45 | 46 | 47 | 48 | 49 | -------------------------------------------------------------------------------- /cs/concepts.md: -------------------------------------------------------------------------------- 1 | # Key concepts of Computer Science 2 | 3 | Key concepts of Computer Science. 4 | 5 | 6 | 7 | 8 | 9 | ## Architecture 10 | 11 | SOA ~ Service-oriented architecture 12 | : An architectural style that focuses on discrete services instead of a monolithic design. Loosely coupled service-oriented architecture with bounded context. 13 | 14 | MVC 15 | : ... 16 | 17 | DDD 18 | : ... 19 | 20 | BDD 21 | : ... 22 | 23 | ## Cloud 24 | 25 | Horizontal / Vertical scaling 26 | : ... 27 | 28 | High-level design / Low-level design 29 | : High-level design explains the architecture that would be used to develop a system. Low-level design is a component-level design process that follows a step-by-step refinement approach; it can be used for designing data structures, the required software architecture, source code and, ultimately, performance algorithms. 30 | 31 | Distributed consensus 32 | : ... 33 | Types: 2PC, 3PC, SAGA 34 | 35 | Replication strategy 36 | : ... 37 | 38 | Split brain 39 | : failure mode in which more than a single component believes it is the master 40 | 41 | Sharding 42 | : a database architecture pattern related to horizontal partitioning — the practice of separating one table’s rows into multiple different tables, known as partitions. 43 | 44 | Infrastructure as code 45 | : methodology ... 46 | 47 | IaaS 48 | : ... 49 | 50 | PaaS 51 | : ... 52 | 53 | SaaS 54 | : ... 55 | 56 | On-Premises Service 57 | : ... 58 | 59 | Fail Over 60 | : ... 61 | 62 | Disaster Recovery 63 | : ... 64 | 65 | Business Continuity Plan 66 | : ... 67 | 68 | Recovery Point Objective 69 | : ... 70 | 71 | Recovery Time Objective 72 | : ... 73 | 74 | ## Database 75 | 76 | ### Data Lake 77 | : ... 78 | 79 | ### Data Lakehouse 80 | : ... 81 | 82 | ### Data Mesh 83 | : method ... 84 | 85 | ### Data Fabric 86 | : method ...
87 | 88 | ### Data base / Data Mesh / Data Warehouse / Data Lake 89 | : ... 90 | 91 | ### Data Warehouse ~ EDW 92 | : ... 93 | 94 | ### ETL ~ Extract Transform Load 95 | : ... 96 | 97 | ### BI ~ Business Intelligence 98 | : ... 99 | 100 | ### PAML ~ Prediction Analytic Machine Learning 101 | : ... 102 | 103 | ### Column-oriented database / Row-oriented database 104 | : ... 105 | 106 | ### Column-wide database ~ Big Table database 107 | 108 | A column-based database, also known as a columnar database, is a type of database management system that stores data in columns rather than in rows. In a column-based database, each column of a table is stored separately on disk, rather than storing each row as a separate record. This makes columnar databases well-suited for analytics workloads that require complex queries across large volumes of data, as it allows for much faster querying and analysis of data. Column-based databases are often used for data warehousing and business intelligence applications, as well as for storing large amounts of structured and semi-structured data. They are known for their high performance, scalability, and ability to handle large volumes of data with low latency. Some popular examples of column-based databases include Apache Cassandra, Apache HBase, and Google Bigtable. 109 | 110 | Data Model: Wide column databases use a non-relational or NoSQL data model, while relational databases use a relational data model. 111 | 112 | High Availability: Wide column databases can achieve high availability and fault tolerance through data replication across multiple nodes or servers. Relational databases typically rely on backup and recovery methods. 113 | 114 | Query Language: Wide column databases use a variety of query languages, including CQL (Cassandra Query Language) and SQL-like query languages. Relational databases use SQL. 115 | 116 | Use Cases: Wide column databases are often used for big data applications, such as IoT, real-time analytics, and high-traffic websites. Relational databases are commonly used for transactional systems, such as e-commerce and financial applications. 117 | 118 | Schema flexibility: In a wide column database, the schema can be more flexible, allowing for the addition or removal of columns without affecting the rest of the schema. This is because the data is stored in columns rather than rows, which allows for a more dynamic schema. 119 | 120 | Distributed architecture: Wide column databases are often designed to be distributed across multiple nodes, allowing for horizontal scalability and fault tolerance. This is in contrast to relational databases, which are often designed to run on a single node or a small cluster of nodes. 121 | 122 | High write performance: Because wide column databases are optimized for write-heavy workloads, they often have very high write performance compared to relational databases. This is because they use a log-structured merge tree (LSM tree) data structure, which allows for efficient writes and compaction of data. 123 | 124 | Query performance: Wide column databases may have slower query performance than relational databases, especially for complex queries that require joins or aggregation. However, they are often optimized for specific types of queries, such as range queries or key-value lookups. 125 | 126 | Strong consistency: Some wide column databases, such as Apache Cassandra, offer strong consistency guarantees through features like tunable consistency levels and lightweight transactions. 
However, achieving strong consistency in a distributed system can be challenging, and it may come at the cost of performance or availability. 127 | 128 | NoSQL paradigm: Wide column databases are part of the NoSQL movement, which emphasizes flexibility, scalability, and performance over strict adherence to the relational data model. This means that wide column databases may have different trade-offs and design principles than relational databases. 129 | 130 | Column-oriented storage: Wide column databases store data in column families, which are groups of columns that are often accessed together. This allows for efficient storage and retrieval of data, especially when only a subset of columns is needed for a particular query. Relational databases, on the other hand, store data in rows, which can make it harder to optimize queries for specific subsets of columns. 131 | 132 | ##### Example 133 | 134 | HBase, Google Big Table, Cassandra, ScyllaDB 135 | 136 | ### Graph DB 137 | ... 138 | 139 | ##### Example 140 | 141 | Neo4J, FlockDB 142 | 143 | ### Consistency in reads : Strict Consistence / Eventual Consistency / Causal consistency 144 | 145 | **Strict consistency** is a guarantee that any read operation on the data will always return the latest version of the data. In other words, if a data is updated, any subsequent read of that data will always return the latest version. Strict consistency is often achieved by locking data during updates, which can cause delays and potentially slow down the system. 146 | 147 | **Causal consistency** ensures that causally related operations are seen in a specific order by all nodes in the system, but does not necessarily require that all nodes see all operations in the same order. This means that there may be some operations that are seen in a different order by different nodes, as long as there is a causal relationship between them. 148 | 149 | **Eventual consistency** is a weaker form of consistency that allows for temporary inconsistencies to exist between replicas of the data, but guarantees that the replicas will eventually converge to the same value. This means that if a data is updated, there may be a delay before all replicas of the data are updated with the latest version. However, given enough time and no new updates, all replicas will eventually be updated and converge to the same value. 150 | 151 | In summary, strict consistency guarantees immediate consistency at the expense of performance, while eventual consistency allows for temporary inconsistencies in exchange for higher performance and availability. Causal consistency strikes a balance between strong consistency and weak consistency, and is often used in distributed systems where strict consistency is not feasible due to performance or network limitations. Causal consistency provides strong ordering guarantees while eventual consistency provides weaker ordering guarantees but allows for more flexibility and scalability. 152 | 153 | ### Consistency in data 154 | 155 | Consistency in data refers to the property of data being accurate and valid over time. It means that the data remains consistent in all copies of the database, even when it is updated or modified. In a consistent database, all data is correctly synchronized, and there are no conflicting or contradictory copies of the same data. Consistency ensures that data is reliable and can be trusted by users and applications. 
It is a fundamental aspect of database design and is achieved through various mechanisms such as transactions, locking, and replication. 156 | 157 | ### Durability of data storage 158 | 159 | In the context of data storage and database systems, durability refers to the ability of a transaction to survive permanently in the event of a system failure. In other words, once a transaction has been committed, its changes must be stored and protected in a way that ensures they will not be lost or undone due to hardware or software failure. 160 | 161 | There are typically two types of durability guarantees: 162 | 163 | **Volatile Durability**: In this level of durability, changes are written to volatile memory (such as RAM) before they are stored on disk. As a result, if the system fails before the changes are written to disk, they may be lost. 164 | 165 | **Non-Volatile Durability**: In this level of durability, changes are written directly to disk or other non-volatile storage before a transaction is considered to be complete. This guarantees that the changes will be retained even in the event of a system failure. 166 | 167 | Most modern database systems provide some form of non-volatile durability, as it is considered to be a critical component of transaction processing. 168 | 169 | ### Atomicity : All-or-nothing atomicity / Atomicity with partial failure 170 | 171 | **All-or-nothing atomicity**: In this type of atomicity, a transaction is either completed in its entirety or not at all. If any part of the transaction fails, the entire transaction is rolled back and the database is left in the state it was in before the transaction began. 172 | 173 | **Atomicity with partial failure**: In this type of atomicity, a transaction can be partially completed even if some part of it fails. The completed part of the transaction is still committed to the database, while the failed part is rolled back. 174 | 175 | ### Isolation : Read uncommitted / Read committed / Repeatable read / Serializable 176 | 177 | **Read uncommitted**: In this level of isolation, transactions are not isolated from each other, and a transaction can read uncommitted data from another transaction. This level provides no protection against dirty reads, non-repeatable reads, or phantom reads. 178 | 179 | **Read committed**: In this level of isolation, a transaction can only read committed data from other transactions. This level provides protection against dirty reads, but still allows non-repeatable and phantom reads. 180 | 181 | **Repeatable read**: In this level of isolation, a transaction can read data that has been committed by other transactions, but cannot read data that has been modified but not yet committed. This level provides protection against dirty reads and non-repeatable reads, but still allows phantom reads. 182 | 183 | **Serializable**: This is the highest level of isolation, where transactions are completely isolated from each other. A transaction can only read data that has been committed by other transactions, and no other transaction can modify the data until the transaction is complete. This level provides complete protection against dirty reads, non-repeatable reads, and phantom reads, but can result in slower performance due to locking. 184 | 185 | In summary, the differences between these levels of isolation are based on the degree of protection against different types of read inconsistencies and the level of concurrency that is allowed between transactions. 186 | 187 | ### Document-based database 188 | ... 
189 | 190 | ### OLTP DB / OLAP DB 191 | : ... 192 | 193 | ### Relational database / Document-based database 194 | : ... 195 | 196 | ## Characteristics 197 | 198 | [Resilience](https://www.youtube.com/watch?v=NIy9HMRlpjQ) 199 | : ability to handle failure and to recover from it gracefully. 200 | 201 | Idempotent 202 | : in computer science, the term describes an operation that produces the same result whether it is executed once or multiple times. 203 | 204 | CAP Theorem 205 | : in theoretical computer science, the CAP theorem states that any distributed data store can provide only two of the three guarantees: consistency, availability, and partition tolerance. 206 | 207 | ## Communication 208 | 209 | APM ~ application performance monitoring 210 | : ... 211 | 212 | Pub-Sub ~ Publish-Subscribe ~ Broker 213 | : Implementation of publish-subscribe as a service. Conceptually it is message queue + namespace ~ channels ~ types of events. 214 | 215 | Message queue 216 | : Implementation of publish-subscribe with a single channel as a service. Conceptually it is pub-sub with a single channel ~ type of event. 217 | 218 | CQRS ~ Command and Query Responsibility Segregation 219 | : Pattern that separates read and update operations for a data store. Implementing CQRS in your application can maximize its performance, scalability, and security. The flexibility created by migrating to CQRS allows a system to better evolve over time and prevents update commands from causing merge conflicts at the domain level. 220 | 221 | Event Sourcing 222 | : Pattern that defines an approach to handling operations on data that's driven by a sequence of events, each of which is recorded in an append-only store. 223 | 224 | Consistent hashing 225 | : ... 226 | 227 | Proxy / Reverse Proxy 228 | : ... 229 | 230 | Load Balancing 231 | : ... 232 | 233 | 234 | 235 | 236 | -------------------------------------------------------------------------------- /cs/learn.md: -------------------------------------------------------------------------------- 1 | # :mortar_board: Learn Computer Science 2 | 3 | Awesome collection of learning materials to master modern Computer Science. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page.
6 | 7 | 8 | 9 | 10 | ## Systems Design 11 | 12 | ( _systems_design_ ) 13 | 14 | - [The System Design Primer](https://github.com/donnemartin/system-design-primer) by [Donne Martin](https://github.com/donnemartin) :open_file_folder: ( _systems_design_ ) 16 | - [Collection on System Design Interview](https://github.com/checkcheckzz/system-design-interview) by [Zach](https://github.com/checkcheckzz) ( _collection_ ) :open_file_folder: ( _systems_design_ ) 17 | - [System Design for Beginners](https://www.youtube.com/playlist?list=PL8hP5HjAnJ3_mT7IHXjlbpYX_xiz4v_kP) by [Code with Irtiza](https://www.youtube.com/@irtizahafiz) :mortar_board: :movie_camera: ( _systems_design_ ) 18 | - [System Design Fundamentals](https://www.youtube.com/playlist?list=PL1MM4yIzUdPnU8n75kjTIbgewdLr5DFox) by [Software Interviews Prep](https://www.youtube.com/@SoftwareInterviewsPrep) :mortar_board: :movie_camera: ( _systems_design_ ) 19 | - [System Design for Beginners](https://www.youtube.com/watch?v=m8Icp_Cid5o) by [freeCodeCamp](https://www.youtube.com/@freecodecamp) :mortar_board: :movie_camera: ( _systems_design_ ) 20 | 21 | ## Architecture 22 | 23 | ( _architecture_ ) 24 | 25 | - [What is a microservice architecture and it's advantages?](https://www.youtube.com/watch?v=qYhRvH9tJKw) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) ( _architecture_ ) 26 | - [Introduction to DDD](https://www.youtube.com/watch?v=H5--9pMmuK4) :movie_camera: by [Geekific](https://www.youtube.com/c/Geekific/playlists) ( _architecture_ ) 27 | - [Domain Driven Design with BDD](https://www.youtube.com/watch?v=Ju50D11EIoE) :movie_camera: by [Continuous Delivery](https://www.youtube.com/c/ContinuousDelivery) 28 | - [Event Sourcery Full Course](https://www.youtube.com/playlist?list=PLQuwqoolg4aI6v1GvtRg3NgT0PBBHVqii) :mortar_board: :movie_camera: by [Event Sourcery](https://www.youtube.com/c/EventSourcery/playlists) 29 | 30 | ## Architecture Principles 31 | 32 | ( _architecture_ ) ( _principle_ ) 33 | 34 | - [SOLID principles](https://www.youtube.com/playlist?list=PLrhzvIcii6GMQceffIgKCRK98yTm0oolm) :mortar_board: :movie_camera: by [Christopher Okhravi](https://www.youtube.com/c/ChristopherOkhravi/playlists) ( _architecture_ ) ( _principle_ ) 35 | - [SOLID principles](https://www.youtube.com/playlist?list=PLZlA0Gpn_vH9kocFX7R7BAe_CvvOCO_p9) :mortar_board: :movie_camera: by [Web Dev Simplified](https://www.youtube.com/c/WebDevSimplified/playlists) ( _architecture_ ) ( _principle_ ) 36 | - [Liskov Substitution Principle](https://www.youtube.com/watch?v=bVwZquRH1Vk) :movie_camera: by [Christopher Okhravi](https://www.youtube.com/c/ChristopherOkhravi/playlists) ( _architecture_ ) ( _principle_ ) 37 | - [Interface Segregation Principle](https://www.youtube.com/watch?v=xahwVmf8itI) :movie_camera: by [Christopher Okhravi](https://www.youtube.com/c/ChristopherOkhravi/playlists) ( _architecture_ ) ( _principle_ ) 38 | - [Dependency Inversion - how, what, why?](https://www.youtube.com/watch?v=S9awxA1wNNY) :movie_camera: by [Christopher Okhravi](https://www.youtube.com/c/ChristopherOkhravi/playlists) ( _architecture_ ) ( _principle_ ) 39 | - [Single Responsibility Principle](https://www.youtube.com/watch?v=AEnePs2Evg0) :movie_camera: by [Christopher Okhravi](https://www.youtube.com/c/ChristopherOkhravi/playlists) ( _architecture_ ) ( _principle_ ) 40 | - [Open/closed principle](https://www.youtube.com/watch?v=DJF_sGOs2V4) :movie_camera: by [Christopher
Okhravi](https://www.youtube.com/c/ChristopherOkhravi/playlists) ( _architecture_ ) ( _principle_ ) 41 | - [Robustness Principle Is WRONG?](https://www.youtube.com/watch?v=1B4KjAhQJoQ) :movie_camera: by [Christopher Okhravi](https://www.youtube.com/c/ChristopherOkhravi/playlists) ( _architecture_ ) ( _principle_ ) 42 | 43 | ## Patterns 44 | 45 | ( _architecture_ ) ( _pattern_ ) 46 | 47 | - [Design Patterns](https://www.youtube.com/playlist?list=PLlsmxlJgn1HJpa28yHzkBmUY-Ty71ZUGc) :mortar_board: :movie_camera: by [Geekific](https://www.youtube.com/c/Geekific/playlists) ( _architecture_ ) ( _pattern_ ) 48 | - [Design Patterns](https://www.youtube.com/playlist?list=PLrhzvIcii6GNjpARdnO4ueTUAVR9eMBpc) :mortar_board: :movie_camera: by [Christopher Okhravi](https://www.youtube.com/c/ChristopherOkhravi/playlists) ( _architecture_ ) ( _pattern_ ) 49 | - [Design Patterns](https://www.youtube.com/playlist?list=PLF206E906175C7E07) :mortar_board: :movie_camera: by [Derek Banas](https://www.youtube.com/c/derekbanas/playlists) ( _architecture_ ) ( _pattern_ ) 50 | 51 | ## Architecture and Testing 52 | 53 | ( _architecture_ ) ( _testing_ ) 54 | 55 | - [A Testing Strategy for Hexagonal Applications](https://www.youtube.com/watch?v=LtbHAFsEu5g) :movie_camera: by [DocPlanner Tech](https://www.youtube.com/c/DocPlannerTech/playlists) ( _architecture_ ) ( _testing_ ) 56 | - [More Testable Code with the Hexagonal Architecture](https://www.youtube.com/watch?v=ujb_O6myknY) :movie_camera: by [JitterTed](https://www.youtube.com/c/JitterTed) ( _architecture_ ) ( _testing_ ) 57 | - [Onion Architecture - Software Design Patterns Explained](https://www.youtube.com/watch?v=oC2Ty8H9jck) :movie_camera: by [Professional Programming](https://www.youtube.com/channel/UCG1YLXtmiGyNyRfEFO6ifKw) ( _architecture_ ) ( _testing_ ) 58 | 59 | ## Low-level design 60 | 61 | ( _lld_ ) 62 | 63 | - [Low Level System Design](https://www.youtube.com/playlist?list=PL564gOx0bCLqTolRIHIsR2JPv11w8LESW) :mortar_board: :movie_camera: by [Udit Agarwal](https://www.youtube.com/c/anomaly2104/playlists) 64 | 65 | ## Systems design key concepts 66 | 67 | ( _systems_design_ ) 68 | 69 | - [Distributed Systems](https://www.youtube.com/playlist?list=PLeKd45zvjcDFUEv_ohr_HdUFe97RItdiB) :mortar_board: :movie_camera: by [Martin Kleppmann](https://www.youtube.com/channel/UClB4KPy5LkJj1t3SgYVtMOQ/videos) ( _systems_design_ ) ( _distributed_system_ ) 70 | - [Moving from Monoliths to Microservices](https://www.youtube.com/watch?v=rckfN7xFig0) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) ( _systems_design_ ) 71 | - [Microservices explained](https://www.youtube.com/watch?v=rv4LlmLmVWk) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 72 | - [Microservices Explained and their Pros & Cons](https://www.youtube.com/watch?v=T-m7ZFxeg1A) :movie_camera: by [Hussein Nasser](https://www.youtube.com/c/HusseinNasser-software-engineering/videos) 73 | - [Horizontal vs. 
Vertical Scaling](https://www.youtube.com/watch?v=xpDnVSmNFX0) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) ( _systems_design_ ) 74 | - [Distributed Consensus and Data Replication strategies](https://www.youtube.com/watch?v=GeGxgmPTe4c) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) ( _systems_design_ ) 75 | 76 | ## Systems design characteristics 77 | 78 | ( _systems_design_ ) 79 | 80 | - [CAP Theorem Simplified](https://www.youtube.com/watch?v=BHqjEjzAicA) by [ :movie_camera: byteByteGo](https://www.youtube.com/@ByteByteGo) 81 | - [CAP theorem](https://www.youtube.com/watch?v=KmGy3sU6Xw8) :movie_camera: by [Hussein Nasser](https://www.youtube.com/c/HusseinNasser-software-engineering/videos) 82 | - [What is Resiliency?](https://www.youtube.com/watch?v=NIy9HMRlpjQ) :movie_camera: by [Knowledge Powerhouse](https://www.youtube.com/channel/UC27-kZjEVZmon1rQ3Qk_AyQ/videos) 83 | - [Resiliency design patterns](https://www.youtube.com/watch?v=xdYBB3-5aEU) :movie_camera: by [Knowledge Powerhouse](https://www.youtube.com/channel/UC27-kZjEVZmon1rQ3Qk_AyQ/videos) 84 | 85 | ## Communication 86 | 87 | ( _communication_ ) 88 | 89 | - [The Journey of an HTTP request](https://www.youtube.com/watch?v=K2qV6VpfR7I) :movie_camera: by [Hussein Nasser](https://www.youtube.com/c/HusseinNasser-software-engineering/videos) 90 | - [Proxy vs Reverse Proxy](https://www.youtube.com/watch?v=SqqrOspasag) :movie_camera: by [Hussein Nasser](https://www.youtube.com/c/HusseinNasser-software-engineering/videos) 91 | - [What is Load Balancing](https://www.youtube.com/watch?v=K0Ta65OqQkY) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) ( _systems_design_ ) 92 | _systems_design_ ) 93 | - [A Brief Introduction to Consistent Hashing](https://www.youtube.com/watch?v=tHEyzVbl4bg) :movie_camera: by [Hannah Barton](https://www.youtube.com/channel/UCs9KZhtpBMuKkS_My8CuxGg/videos) 94 | - [What is a Message Queue and Where is it used?](https://www.youtube.com/watch?v=oUJbuFMyBDk) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) ( _systems_design_ ) 95 | - [What is the Publisher Subscriber Model?](https://www.youtube.com/watch?v=FMhbR_kQeHw) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) ( _systems_design_ ) 96 | - [What's an Event Driven System?](https://www.youtube.com/watch?v=rJHTK2TfZ1I) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) ( _systems_design_ ) 97 | - [What are the differences between publisher subscriber and message queue models?](https://www.youtube.com/watch?v=PT-FO_6wdZM) :movie_camera: by [Knowledge Powerhouse](https://www.youtube.com/channel/UC27-kZjEVZmon1rQ3Qk_AyQ/videos) 98 | - [HTTP Request vs HTTP Long-Polling vs WebSockets vs Server-Sent Events](https://www.youtube.com/watch?v=k56H0DHqu5Y) :movie_camera: by [AfterAcademy](https://www.youtube.com/c/AfterAcademy/videos) 99 | - [Long Polling and how it differs from Push, Poll and SSE](https://www.youtube.com/watch?v=J0okraIFPJ0) :movie_camera: by [Hussein Nasser](https://www.youtube.com/c/HusseinNasser-software-engineering/videos) 100 | 101 | ## Monitoring, logging, aggregation and diagnostics of a distributed system 102 | 103 | ( _monitoring_ ) 104 | 105 | - [Prometheus Architecture explained](https://www.youtube.com/watch?v=mLPg49b33sA) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 106 | - [Fluentd simply 
explained](https://www.youtube.com/watch?v=5ofsNyHZwWE) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 107 | - [Logging in Kubernetes with Elasticsearch, Fluentd and Kibana](https://www.youtube.com/watch?v=I5c8Pfg2tys) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 108 | - [Grafana vs Kibana](https://www.youtube.com/watch?v=xXmOmFyN3Hs) :movie_camera: by [IT Security Labs](https://www.youtube.com/c/ITSecurityLabs/videos) 109 | 110 | ## Infrastructure as a code 111 | 112 | ( _infrastructure_as_a_code_ ) 113 | 114 | - [Terraform in 100 Seconds](https://www.youtube.com/watch?v=tomUWcQ0P3k) :movie_camera: by [Fireship](https://www.youtube.com/c/Fireship) ( _short_ ) ( _infrastructure_as_a_code_ ) 115 | - [Terraform in 15 mins](https://www.youtube.com/watch?v=l5k1ai_GBDE) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) ( _infrastructure_as_a_code_ ) 116 | - [Terraform Course](https://www.youtube.com/watch?v=7xngnjfIlK4) :movie_camera: by [DevOps Directive](https://www.youtube.com/c/DevOpsDirective/playlists) ( _infrastructure_as_a_code_ ) 117 | - [What is GitOps](https://www.youtube.com/watch?v=f5EpcWp0THw) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 118 | 119 | ## Virtualization 120 | 121 | ( _virtualization_ ) 122 | 123 | - [Docker Tutorial](https://www.youtube.com/watch?v=3c-iBn73dDE) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 124 | - [Docker Compose](https://www.youtube.com/watch?v=DM65_JyGxCo) :movie_camera: by [ NetworkChuck ](https://www.youtube.com/c/NetworkChuck/videos) 125 | 126 | ## CI/CD 127 | 128 | ( _ci_cd_ ) 129 | 130 | - [Github Actions Tutorial](https://www.youtube.com/watch?v=eB0nUzAI7M8) :movie_camera: by [Fireship](https://www.youtube.com/c/Fireship) 131 | - [GitHub Actions Tutorial](https://www.youtube.com/watch?v=R8_veQiYBjI) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 132 | 133 | - [Continuous Deployment vs. 
Continuous Delivery](https://www.youtube.com/watch?v=LNLKZ4Rvk8w) :movie_camera: by [IBM Technology](https://www.youtube.com/c/IBMTechnology/playlists) 134 | 135 | ## Shell environment, Linux 136 | 137 | ( _shell_ ) 138 | 139 | - [Linux Crash Course](https://www.youtube.com/playlist?list=PLT98CRl2KxKHKd_tH3ssq0HPrThx2hESW) :mortar_board: :movie_camera: by [Learn Linux TV](https://www.youtube.com/c/LearnLinuxtv/playlists) 140 | - [100 Linux Commands](https://www.youtube.com/playlist?list=PL5gAM72D0Cq-R8SrNsuOMB1zX4KOy7W-L) :mortar_board: :movie_camera: by [Jae Nulton](https://www.youtube.com/@jaenulton9953) 141 | - [Linux Commands](https://www.youtube.com/playlist?list=PLT98CRl2KxKHaKA9-4_I38sLzK134p4GJ) :mortar_board: :movie_camera: by [Learn Linux TV](https://www.youtube.com/c/LearnLinuxtv/playlists) 142 | - :page_facing_up: [30 Handy Shell Aliases](https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html) by [cyberciti.biz](https://www.cyberciti.biz/) 143 | - :chart_with_upwards_trend: [Bash Cheatsheet](https://devhints.io/bash) by [Rico](https://devhints.io/) 144 | - :chart_with_upwards_trend: [Bash Cheatsheet](https://github.com/LeCoupa/awesome-cheatsheets/blob/master/languages/bash.sh) by [Julien Le Coupanec](https://github.com/LeCoupa) 145 | - :chart_with_upwards_trend: [Pure Bash Bible](https://github.com/dylanaraps/pure-bash-bible) by [Dylan Araps](https://github.com/dylanaraps) : a collection of bash recipies 146 | - :chart_with_upwards_trend: [Bash Built-in Variables](https://www.gnu.org/software/bash/manual/html_node/Bash-Variables.html) 147 | 148 | ## Shell Scripting 149 | 150 | ( _shell_ ) 151 | 152 | - [Bash Scripting on Linux](https://www.youtube.com/playlist?list=PLT98CRl2KxKGj-VKtApD8-zCqSaN2mD4w) :mortar_board: :movie_camera: by [Learn Linux TV](https://www.youtube.com/c/LearnLinuxtv/playlists) 153 | - [Bash Scripting Full Course](https://www.youtube.com/watch?v=e7BufAVwDiM) :mortar_board: :movie_camera: by [linuxhint](https://www.youtube.com/@linuxhint) 154 | 155 | ## Inter-process communication 156 | 157 | ( _ipc_ ) 158 | 159 | - [API Essentials](https://www.youtube.com/playlist?list=PLOspHqNVtKAAAq9pHWlEiRUVcYMCcu4X0) :mortar_board: :movie_camera: by [IBM Technology](https://www.youtube.com/c/IBMTechnology/playlists) ( _ipc_ ) 160 | - [REST API and OpenAPI](https://www.youtube.com/watch?v=pRS9LRBgjYg) :movie_camera: by [IBM Technology](https://www.youtube.com/c/IBMTechnology/playlists) ( _ipc_ ) 161 | - [How do you design API?](https://www.youtube.com/watch?v=_YlYuNMTCc8) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) ( _ipc_ ) 162 | 163 | ## Remote Access 164 | 165 | ( _remote_access_ ) 166 | 167 | - [Linux Desktop in the Cloud](https://www.youtube.com/watch?v=633OWaW3cyo) :movie_camera: by [Linode](https://www.youtube.com/c/linode/videos) 168 | - [Linux Firewall Tutorial](https://www.youtube.com/watch?v=XtRXm4FFK7Q) :movie_camera: by [Linode](https://www.youtube.com/c/linode/videos) 169 | - [Hardening Access to Your Server](https://www.youtube.com/watch?v=eeaFoZlSq6I) :movie_camera: by [Linode](https://www.youtube.com/c/linode/videos) 170 | 171 | ## Benchmarking 172 | 173 | ( _about:benchmarking_ ) 174 | 175 | - :page_facing_up: [Linux Perf Analysis](https://brendangregg.com/Articles/Netflix_Linux_Perf_Analysis_60s.pdf) by [Netflix](https://netflixtechblog.com/) 176 | 177 | ## Tags legend 178 | 179 | - :movie_camera: - video material 180 | - :page_facing_up: - reading 181 | - :mortar_board: - online course with or without 
feedback 182 | - :chart_with_upwards_trend: - cheatsheets 183 | - :open_file_folder: - collections of collections 184 | - ( _short_ ) - short overview 185 | - ( _systems_design_ ) - systems design 186 | - ( _communication_ ) - back-end, networking, RPC, IPC 187 | - ( _monitoring_ ) - monitoring, logging, aggregation and diagnostics of a distributed system 188 | - ( _infrastructure_as_a_code_ ) - infrastructure as a code 189 | - ( _time_series_ ) - time-series database 190 | - ( _virtualization_ ) - virtualization 191 | - ( _ci_cd_ ) - continuous integration / continuous deployment 192 | - ( _ipc_ ) - inter-process communication 193 | - ( _remote_access_ ) - remote access 194 | - ( _shell_ ) - shell environment, Linux 195 | - ( _architecture_ ) - software architecture in general 196 | - ( _principle_ ) - principles of software engineering 197 | - ( _pattern_ ) - patterns of software engineering 198 | - ( _testing_ ) - testing related aspects of architecture 199 | - ( _lld_ ) - low-level design 200 | -------------------------------------------------------------------------------- /cs/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Computer Science Together 2 | 3 | Awesome collection of learning materials to master modern Computer Science including Operating Systems, Systems Design, Distributed Systems. 4 | 5 | ## What is this about? 6 | 7 | This repository contains nearly a dozen curated collections: learning materials, toolboxes, newspapers, working groups, collection of other collections. Everything you will find useful if you are interested in Computer Science. 8 | 9 | Here you can find: 10 | 11 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master Computer Science. 12 | - __:old_key:__ [Comprehend](./concepts.md) : key concepts and dichotomies of Computer Science. 13 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheets.md) : cheatsheets on Computer Science.
14 | -------------------------------------------------------------------------------- /dbms/cheatsheet.md: -------------------------------------------------------------------------------- 1 | # Cheat Sheets 2 | 3 | ## SQL Join 4 | 5 | ![SQL Join](./cheatsheet/sql_join.jpg) 6 | 7 | ## Other SQL Join 8 | 9 | ![OTher SQL Join](./cheatsheet/other_sql_join.jpg) 10 | 11 | ## SQL Isolation Levels 12 | 13 | ![SQL Isolation Levels](./cheatsheet/sql_isolation_level.png) 14 | 15 | ## SQL : Aggregate vs Window Functions 16 | 17 | ![SQL : Aggregate vs Window Functions](./cheatsheet/sql_aggregate_vs_window_functions.jpg) 18 | 19 | ## Normalization Steps 20 | 21 | ![Normalization Steps](./cheatsheet/normalization_steps.jpg) 22 | 23 | ## Transitive vs Full Dependencies 24 | 25 | ![Transitive vs Full Dependencies](./cheatsheet/transitive_vs_full_dependencies.jpg) 26 | 27 | ## Functions Mapping 28 | 29 | ![Functions Mapping](./cheatsheet/functions_mapping.jpg) 30 | 31 | ## All Normal Forms 32 | 33 | ![All Normal Forms](./cheatsheet/all_normal_forms.jpg) 34 | -------------------------------------------------------------------------------- /dbms/cheatsheet/all_normal_forms.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/all_normal_forms.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_bachmans_notation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_bachmans_notation.png -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_barker.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_barker.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_barkers_notation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_barkers_notation.png -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_barkers_notation_2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_barkers_notation_2.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_barkers_notation_3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_barkers_notation_3.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_chens_notation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_chens_notation.png 
-------------------------------------------------------------------------------- /dbms/cheatsheet/erd_chens_notation_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_chens_notation_2.png -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_chens_notation_3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_chens_notation_3.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_crows_foot_notation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_crows_foot_notation.png -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_crows_foot_notation_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_crows_foot_notation_2.png -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_iso_notation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_iso_notation.png -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_many_to_many_in_relational_db.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_many_to_many_in_relational_db.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_self_referential.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_self_referential.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_selfref.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_selfref.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_uml.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_uml.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/erd_uml_notation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/erd_uml_notation.png 
-------------------------------------------------------------------------------- /dbms/cheatsheet/functions_mapping.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/functions_mapping.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/normalization_steps.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/normalization_steps.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/other_sql_join.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/other_sql_join.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/relations_to_chen.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/relations_to_chen.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/sql_aggregate_vs_window_functions.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/sql_aggregate_vs_window_functions.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/sql_isolation_level.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/sql_isolation_level.png -------------------------------------------------------------------------------- /dbms/cheatsheet/sql_join.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/sql_join.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet/transitive_vs_full_dependencies.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/dbms/cheatsheet/transitive_vs_full_dependencies.jpg -------------------------------------------------------------------------------- /dbms/cheatsheet_erd.md: -------------------------------------------------------------------------------- 1 | # ERD Cheat Sheets 2 | 3 | ## Many to Many in Relational DB 4 | 5 | ![Many to Many in Relational DB](./cheatsheet/erd_many_to_many_in_relational_db.jpg) 6 | 7 | ## Self-Referential Relationship ( Crows Foot Style ) 8 | 9 | ![Self-Referential Relationship](./cheatsheet/erd_selfref.jpg) 10 | 11 | ## Entity Relationship Diagram ( Crows Foot Style ) 12 | 13 | ![Entity Relationship Diagram](./cheatsheet/erd_barker.jpg) 14 | 15 | ## Entity Relationship Diagram ( UML Style ) 16 | 17 | ![Entity Relationship 
Diagram](./cheatsheet/erd_uml.jpg) 18 | 19 | ## ERD: Chens Notation 20 | 21 | ![Chens Notation](./cheatsheet/erd_chens_notation.png) 22 | ![Chens Notation](./cheatsheet/erd_chens_notation_2.png) 23 | ![Chens Notation](./cheatsheet/erd_chens_notation_3.jpg) 24 | ![Relations to Chens Notation](./cheatsheet/relations_to_chen.jpg) 25 | 26 | ## ERD: ISO Notation 27 | 28 | ![ISO Notation](./cheatsheet/erd_iso_notation.png) 29 | 30 | ## ERD: UML Notation 31 | 32 | ![UML Notation](./cheatsheet/erd_uml_notation.png) 33 | 34 | ## ERD: Crows Food Notation 35 | 36 | ![Crows Food Notation](./cheatsheet/erd_crows_foot_notation.png) 37 | ![Crows Food Notation](./cheatsheet/erd_crows_foot_notation_2.png) 38 | 39 | ## ERD: Barkers Notation 40 | 41 | ![Barker Notation](./cheatsheet/erd_barkers_notation.png) 42 | ![Barker Notation](./cheatsheet/erd_barkers_notation_2.jpg) 43 | ![Barker Notation](./cheatsheet/erd_barkers_notation_3.jpg) 44 | 45 | ## ERD: Bachmans Notation 46 | 47 | ![Bachman Notation](./cheatsheet/erd_bachmans_notation.png) 48 | -------------------------------------------------------------------------------- /dbms/cheatsheet_psql.md: -------------------------------------------------------------------------------- 1 | # PostgresSQL Cheat sheets 2 | 3 | ## String Conversion 4 | 5 | Retrieves the ASCII code of a given character. 6 | 7 | ```sql 8 | ASCII( 'y' ) = 121 9 | ASCII( 'B' ) = 66 10 | ``` 11 | 12 | Transforms a string to ASCII from a different encoding system. 13 | 14 | ```sql 15 | TO_ASCII( 'hello' ) = 'hello' 16 | ``` 17 | 18 | Maps a given code to its corresponding character. 19 | 20 | ```sql 21 | CHR( 66 ) = 'B' 22 | CHR( 128 ) = 'Ç' 23 | CHR( NULL ) = NULL 24 | ``` 25 | 26 | Converts text to a specified encoding format. 27 | 28 | ```sql 29 | CONVERT( 'Example', 'UTF8', 'LATIN9' ) = 'Example' 30 | ``` 31 | 32 | Translates binary data into a textual representation or vice versa. 33 | 34 | ```sql 35 | ENCODE( '4321', 'base64' ) = 'NDMyMQ==' 36 | DECODE( 'NDMyMQ==', 'base64' ) = 4321 37 | ``` 38 | 39 | Capitalizes the first letter of each word in a string. 40 | 41 | ```sql 42 | INITCAP( 'good morning' ) = 'Good Morning' 43 | INITCAP( 'welcome-home' ) = 'Welcome-Home' 44 | INITCAP( 'sleepwell' ) = 'Sleepwell' 45 | ``` 46 | 47 | Alters the case of a string to lower or upper. 48 | 49 | ```sql 50 | LOWER( 'LOWER' ) = 'lower' 51 | UPPER( 'upper' ) = 'UPPER' 52 | ``` 53 | 54 | Computes the MD5 hash of a string and returns the result in hexadecimal form. 55 | 56 | ```sql 57 | MD5( 'xyz' ) = 'd16fb36f0911f878998c136191af705e' 58 | ``` 59 | 60 | Identifies the current client encoding. 61 | 62 | ```sql 63 | PG_CLIENT_ENCODING( ) = 'LATIN1' 64 | ``` 65 | 66 | Appropriately quotes a string for use as an identifier in SQL statements. 67 | 68 | ```sql 69 | QUOTE_IDENT( 'USER_ID' ) = '"USER_ID"' 70 | ``` 71 | 72 | Properly quotes a string for use as a literal in SQL queries. 73 | 74 | ```sql 75 | QUOTE_LITERAL( 'xyz' ) = '''xyz''' 76 | QUOTE_LITERAL( 'D''Angelo' ) = '''D''Angelo''' 77 | QUOTE_LITERAL( 100 ) = '100' 78 | QUOTE_LITERAL( 'XYZ' ) = '''XYZ''' 79 | ``` 80 | 81 | Quotes a string for use in SQL, or returns NULL if the input is null. 82 | 83 | ```sql 84 | QUOTE_NULLABLE( NULL ) = NULL 85 | QUOTE_NULLABLE( 100.5 ) = '''100.5''' 86 | ``` 87 | 88 | Converts numbers into their hexadecimal equivalents. 89 | 90 | ```sql 91 | TO_HEX( 15 ) = 'f' 92 | TO_HEX( 123 ) = '7b' 93 | TO_HEX( 1023 ) = '3ff' 94 | ``` 95 | ## String Measurement 96 | 97 | Determines the bit length of a string. 
98 | 99 | ```sql 100 | BIT_LENGTH( 'hello' ) = 40 101 | BIT_LENGTH( 'ß' ) = 16 102 | BIT_LENGTH( 'ñ' ) = 16 103 | ``` 104 | 105 | Counts the characters in a string. 106 | 107 | ```sql 108 | CHAR_LENGTH( 'hello' ) = 5 109 | CHARACTER_LENGTH( 'world' ) = 5 110 | LENGTH( 'text', 'UTF8' ) = 4 111 | ``` 112 | 113 | Calculates the byte length of a string. 114 | 115 | ```sql 116 | OCTET_LENGTH( 'CD' ) = 2 117 | OCTET_LENGTH( 'ß' ) = 2 118 | OCTET_LENGTH( 'ñ' ) = 2 119 | ``` 120 | 121 | ## String Modification 122 | 123 | Concatenates two or more strings together. 124 | 125 | ```sql 126 | 'Hello' || 'World' = 'HelloWorld' 127 | 'Count: ' || 100 = 'Count: 100' 128 | ``` 129 | 130 | Trims characters from both ends of a string. 131 | 132 | ```sql 133 | BTRIM( ' XY ' ) = 'XY' 134 | BTRIM( '+XY+', '+' ) = 'XY' 135 | TRIM( ' ' FROM ' XY ' ) = 'XY' 136 | LTRIM( '+XY+', '+' ) = 'XY+' 137 | RTRIM( leading '_' from '__XY__' ) = 'XY__' 138 | RTRIM( '__XY__','_' ) = '__XY' 139 | RTRIM( trailing '_' from '__XY__' ) = '__XY' 140 | ``` 141 | 142 | Pads a string to a certain length with another string. 143 | 144 | ```sql 145 | LPAD( 'DEF', 6 ) = ' DEF' 146 | RPAD( 'DEF', 6 ) = 'DEF ' 147 | LPAD( '456', 6, '0' ) = '000456' 148 | RPAD( 'e', 3, '-' ) = 'e--' 149 | ``` 150 | 151 | Substitutes part of a string with a different string. 152 | 153 | ```sql 154 | OVERLAY( '456' placing '2' from 2 for 3 )='42' 155 | OVERLAY( '456' placing '-' from 1 for 2 )='-6' 156 | OVERLAY( '456' placing 'L' from 2 for 1 )='4L6' 157 | ``` 158 | 159 | Duplicates a string a specified number of times 160 | 161 | ```sql 162 | REPEAT( 'Ab', 3 ) = 'AbAbAb' 163 | ``` 164 | 165 | Substitutes characters in a string based on a mapping set 166 | 167 | ```sql 168 | TRANSLATE( '45654', '45', 'xy' ) = 'xy6yx' 169 | ``` 170 | 171 | Alters a string by replacing sequences that match a regular expression pattern 172 | 173 | ```sql 174 | REGEXP_REPLACE( 'Jump','p','l' ) = 'Juml' 175 | REGEXP_REPLACE( 'Jump','p( .* )', '1' ) = 'J1' 176 | REGEXP_REPLACE( 'Jump','p','q' ) = 'Jumq' 177 | REGEXP_REPLACE( 'Jump','p( .{1} )','1' ) = 'Ju1' 178 | REGEXP_REPLACE( '20 hh','[^\d]','0' ) = '200 hh' 179 | REGEXP_REPLACE( '20 h','\s','+' ) = '20+h' 180 | REGEXP_REPLACE( '20 h','[\s]{2,}',' ' ) = '20 h' 181 | REGEXP_REPLACE( 'def','\W.+','' ) = '' 182 | ``` 183 | 184 | Divides a string based on a delimiter and returns a specified segment 185 | 186 | ```sql 187 | SPLIT_PART( '4,5,6',',',2 ) = '5' 188 | ``` 189 | 190 | Retrieves a portion of a string starting at a particular position for a specified length 191 | 192 | ```sql 193 | SUBSTRING( '67890' from 2 for 3 ) = '789' 194 | SUBSTR( '67890', 3, 2 ) = '78' 195 | ``` 196 | 197 | Extracts a substring from a string that matches a regular expression pattern 198 | 199 | ```sql 200 | SUBSTRING( 'Abcdef' from '\( .{3}\ )' ) = 'bcd' 201 | ``` 202 | 203 | Extracts a substring using a SQL pattern matching condition 204 | 205 | ```sql 206 | SUBSTRING( 'FGHIJ' from '%#"FG_"H' for'#' ) = 'GH' 207 | SUBSTRING( 'FGHIJ' from '%#"FG_"H' for'_'' ) = NULL 208 | ``` 209 | 210 | Replaces all instances of a specified substring within a string with a different substring 211 | 212 | ```sql 213 | REPLACE( 'X-Y-Z', '-', '+' ) = 'X+Y+Z' 214 | ``` 215 | ## String Search and Match 216 | 217 | Identifies the position of a specified substring within a string 218 | 219 | ```sql 220 | POSITION( 'la' in 'Salad' ) = 3 221 | STRPOS( 'Salad', 'la' ) = 3 222 | ``` 223 | 224 | Captures all the substrings that match a regular expression pattern within a string 
225 | 226 | ```sql 227 | REGEXP_MATCHES( 'a,b,c', '( \w ),( \w ),( \w )' ) = {"a","b","c"} 228 | REGEXP_MATCHES( '123', '\d' ) = {'1','2','3'} 229 | ``` 230 | 231 | Separates a string into an array or table based on a regular expression delimiter 232 | 233 | ```sql 234 | REGEXP_SPLIT_TO_ARRAY( 'Hello World', E'\\s+' ) = {Hello,World} 235 | REGEXP_SPLIT_TO_TABLE( 'Hello World', E'\\s+' ) = Hello | World( 2 rows ) 236 | ``` 237 | 238 | Determines if a string matches a specified pattern 239 | 240 | ```sql 241 | 'Hello' LIKE '_e%' = true 242 | 'Hello' NOT LIKE 'H%' = false 243 | 'Hello' SIMILAR TO 'He%' = true 244 | 'Hello' NOT SIMILAR TO '_e%' = false 245 | ``` 246 | ## DCL Permissions 247 | 248 | Instructions to become the postgres superuser to avoid permission errors. 249 | 250 | ```shell 251 | sudo su - postgres 252 | psql 253 | ``` 254 | 255 | Commands to grant all permissions on a specific database. 256 | 257 | ```sql 258 | GRANT ALL PRIVILEGES ON DATABASE db_example TO user_example; 259 | ``` 260 | 261 | Commands to allow a user to connect to a specific database. 262 | 263 | ```sql 264 | GRANT CONNECT ON DATABASE db_example TO user_example; 265 | ``` 266 | 267 | Commands to allow usage of a specific schema. 268 | 269 | ```sql 270 | GRANT USAGE ON SCHEMA public TO user_example; 271 | ``` 272 | 273 | Commands to permit execution of all functions within a schema. 274 | 275 | ```sql 276 | GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO user_example; 277 | ``` 278 | 279 | Commands to allow various operations on all tables within a schema. 280 | 281 | ```sql 282 | GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO user_example; 283 | ``` 284 | 285 | Commands to grant specific operations on a single table. 286 | 287 | ```sql 288 | GRANT SELECT, UPDATE, INSERT ON table_example TO user_example; 289 | ``` 290 | 291 | Commands to permit only select operation on all tables within a schema. 292 | 293 | ```sql 294 | GRANT SELECT ON ALL TABLES IN SCHEMA public TO user_example; 295 | ``` 296 | 297 | ## DCL Users 298 | 299 | Command to list all roles. 300 | 301 | ```sql 302 | SELECT rolname FROM pg_roles; 303 | ``` 304 | 305 | Command to create a new user. 306 | 307 | ```sql 308 | CREATE USER user_example WITH PASSWORD 'password123'; 309 | ``` 310 | 311 | Command to remove a user. 312 | 313 | ```sql 314 | DROP USER IF EXISTS user_example; 315 | ``` 316 | 317 | Command to change a user's password. 318 | 319 | ```sql 320 | ALTER ROLE user_example WITH PASSWORD 'newpassword123'; 321 | ``` 322 | -------------------------------------------------------------------------------- /dbms/learn.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn DBMS Together 2 | 3 | Awesome list of high-quality study materials for learning database management systems. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page. 
6 | 7 | 8 | 9 | 10 | ## DBMS Introduction 11 | 12 | ( _dbms_ ) 13 | 14 | - [Database Systems and PostgresSQL](https://www.youtube.com/watch?v=4cWkVbC2bNE) | [2 part](https://www.youtube.com/watch?v=lxEdaElkQhQ) :movie_camera: :mortar_board: ( _level_1_ ) by [Professor Immanuel Trummer](https://itrummer.github.io/) at Cornell University 15 | - ⭐ [Databases Course](https://www.youtube.com/playlist?list=PL3TE2CsKK478JNAXYLD_hzQFx5JE360KO) :movie_camera: :mortar_board: ( _level_1_ ) by [Professor Jörg Endrullis](https://www.youtube.com/@TheComputerScience/playlists) at Vrije Universiteit Amsterdam 16 | - ⭐ [Database Design & Management](https://www.youtube.com/playlist?list=PL1LIXLIF50uURxYXfBCaAXDzSdZlQiESy) :movie_camera: :mortar_board: ( _level_1_ ) by [Dr. Daniel Soper](https://www.youtube.com/@DanielSoper) at California State University 17 | - [Database Design & Management](https://www.youtube.com/playlist?list=PLeb33PCuqDdfS_ljg40TaA0e9p-m_ZF4_) :movie_camera: :mortar_board: ( _level_2_ ) by [Professor Joseph Hellerstein](https://dsf.berkeley.edu/jmh/index.html) at Berkley Univirsity 18 | - [Databases](https://www.youtube.com/@CS186Berkeley/playlists) :movie_camera: :mortar_board: ( _level_1_ ) by [CS186Berkeley](https://www.youtube.com/@CS186Berkeley/playlists) 19 | - [SQL Tutorial | MySQL](https://www.youtube.com/watch?v=-fW2X7fh7Yg) :movie_camera: :mortar_board: ( _level_0_ ) by [ freeCodeCamp](https://www.youtube.com/@freecodecamp/playlists) 20 | - [SQL Tutorial](https://www.youtube.com/watch?v=HXV3zeQKqGY) :movie_camera: :mortar_board: ( _level_0_ ) by [ freeCodeCamp](https://www.youtube.com/@freecodecamp/playlists) 21 | - [SQL For Web Developer](https://www.youtube.com/watch?v=KBDSJU3cGkc) :movie_camera: :mortar_board: ( _level_0_ ) by [ freeCodeCamp](https://www.youtube.com/@freecodecamp/playlists) 22 | - [Database Systems](https://www.youtube.com/playlist?list=PLBlnK6fEyqRiyryTrbKHX1Sh9luYI0dhX) :movie_camera: :mortar_board: ( _level_0_ ) by [Neso Academy](https://www.youtube.com/@nesoacademy/playlists) 23 | 24 | ## DBMS Specific 25 | 26 | ( _dbms_ ) 27 | 28 | - [Database Engineering](https://www.youtube.com/playlist?list=PLQnljOFTspQXjD0HOzN7P2tgzu7scWpl2) :movie_camera: ( _level_2_ ) by [Hussein Nasser](https://www.youtube.com/@hnasr) : playlist cover many unrelated PostgresSQL-specific topcis 29 | - [PostgresSQL Lessons](https://www.youtube.com/playlist?list=PLQnljOFTspQWGrOqslniFlRcwxyY94cjj) :movie_camera: ( _level_2_ ) by [Hussein Nasser](https://www.youtube.com/@hnasr) : playlist cover many unrelated PostgresSQL-specific topcis 30 | - [Database Systems](https://www.youtube.com/playlist?list=PL78V83xV2fYlT11CJXE77H0LD7C_gZmyf) :movie_camera: :mortar_board: ( _level_2_ ) by [The Magic of SQL](https://www.youtube.com/@TheMagicofSQL) 31 | - [SQL Course](https://www.youtube.com/playlist?list=PLeb33PCuqDdcezLKJLBM9KgtycqrPBY0x) :movie_camera: :mortar_board: ( _level_3_ ) by [SQL TV](https://www.youtube.com/@SQLTVChannel/playlists) 32 | - [PostgreSQL Perfomance Tuning](https://www.youtube.com/playlist?list=PLBrWqg4Ny6vX8e2LnQbNajGSKnFDe94kg) :movie_camera: :mortar_board: ( _level_3_ ) ( _postgresql_ ) 33 | by [High-Performance Programming](https://www.youtube.com/@HighPerformanceProgramming) 34 | 35 | ## SQL 36 | 37 | ( _sql_ ) ( _skill_ ) 38 | 39 | - ⭐ [SQL Skills](https://www.youtube.com/playlist?list=PLeb33PCuqDdff8tXh93kKrglcS6YEe-SR) :movie_camera: ( _level_1_ ) ( _sql_ ) ( _skill_ ) by [techTFQ](https://www.youtube.com/@techTFQ/playlists) 40 | - [SQL 
Lessons](https://www.youtube.com/playlist?list=PL2WDxXzl0Y2BVRdpYqqmkBv7fj0zb5Vk8) :movie_camera: ( _level_3_ ) ( _sql_ ) ( _skill_ ) by [Bert Wagner](https://www.youtube.com/@DataWithBert) : playlist cover many unrelated SQL-specific topcis 41 | 42 | ## SQL Drills 43 | 44 | ⚡ 45 | 46 | - [SQL Practice](https://www.sql-practice.com/) ⚡ 47 | - [Data Lemur](https://datalemur.com/questions?category=SQL) ⚡ 48 | - [Hacker Rank](https://www.hackerrank.com/domains/sql) ⚡ 49 | - [Leet Code](https://leetcode.com/studyplan/top-sql-50/) ⚡ 50 | - [Strata Scratch](https://platform.stratascratch.com/coding?code_type=1) ⚡ 51 | 52 | ## Data Stacks 53 | 54 | ⚡ 55 | 56 | - [mode.com](https://mode.com/) ⚡ 57 | 58 | ## Certifications 59 | 60 | ( _certificate_ ) 61 | 62 | - [Hacker Rank](https://www.hackerrank.com/skills-verification) ( _certificate_ ) 63 | 64 | ## DBMS concepts 65 | 66 | ( _dbms_ ) 67 | 68 | - [7 Database Paradigms](https://www.youtube.com/watch?v=W2Z7fbCLSTw) :movie_camera: by [Fireship](https://www.youtube.com/c/Fireship) ( _short_ ) 69 | - [ACID Explained: Atomic, Consistent, Isolated & Durable](https://www.youtube.com/watch?v=yaQ5YMWkxq4) :movie_camera: by [the roadmap](https://www.youtube.com/c/theroadmap/playlists) 70 | - [What is Database Sharding?](https://www.youtube.com/watch?v=5faMjKuB9bc) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) 71 | - [Database Indexing](https://www.youtube.com/watch?v=-qNSXK7s7_w) :movie_camera: by [Hussein Nasser](https://www.youtube.com/c/HusseinNasser-software-engineering/videos) 72 | 73 | ## DBMS comparison 74 | 75 | ( _dbms_ ) 76 | 77 | - [Introduction to NoSQL databases](https://www.youtube.com/watch?v=xQnIN9bW0og) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) 78 | - [Relational database index vs. 
NoSQL index](https://www.youtube.com/watch?v=mTNkqMDCasI) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) 79 | - [Column vs Row Oriented Databases](https://www.youtube.com/watch?v=Vw1fCeD06YI) :movie_camera: by [Hussein Nasser](https://www.youtube.com/c/HusseinNasser-software-engineering/videos) 80 | - [OLAP vs OLTP](https://www.youtube.com/watch?v=iw-5kFzIdgY) :movie_camera: by [IBM Technology](https://www.youtube.com/@IBMTechnology/playlists) 81 | - [Database vs Data Warehouse vs Data Lake](https://www.youtube.com/watch?v=-bSkREem8dM) :movie_camera: by [Alex Freberg](https://www.youtube.com/@AlexTheAnalyst) 82 | 83 | ## Key-value database 84 | 85 | ( _dbms_ ) ( _key-value_ ) 86 | 87 | - [Redis in 100 Seconds](https://www.youtube.com/watch?v=G1rOthIU-uo) :movie_camera: by [Fireship](https://www.youtube.com/c/Fireship) ( _key-value_ ) ( _short_ ) 88 | - [Memcache Basics](https://www.youtube.com/watch?v=TGl81wr8lz8) :movie_camera: by [Google Developers](https://www.youtube.com/googlecode) ( _key-value_ ) ( _short_ ) 89 | 90 | ## Wide column database 91 | 92 | ( _dbms_ ) ( _wide_column_ ) 93 | 94 | - [Cassandra Database Crash Course](https://www.youtube.com/watch?v=KZsVSfQVU4I) :movie_camera: by [Code with Irtiza](https://www.youtube.com/channel/UCDankIVMXJEkhtjv5yLSN4g/videos) 95 | 96 | ## Document-oriented database 97 | 98 | ( _dbms_ ) ( _document-oriented_ ) 99 | 100 | - [MongoDB in 100 Seconds](https://www.youtube.com/watch?v=-bt_y4Loofg) :movie_camera: by [Fireship](https://www.youtube.com/c/Fireship) ( _document-oriented_ ) ( _short_ ) 101 | 102 | ## Time series database 103 | 104 | ( _dbms_ ) ( _time_series_ ) 105 | 106 | - [What is InfluxDB?](https://www.youtube.com/watch?v=qye_c4_pWQ4) :movie_camera: by [CBT Nuggets](https://www.youtube.com/c/cbtnuggets/videos) 107 | 108 | ## Useful Tools 109 | 110 | ( _tool_ ) 111 | 112 | - [Mermaid Cheatsheet](https://github.com/JakeSteam/Mermaid) 113 | - [Mermaid Documentation](https://mermaid.js.org/syntax/entityRelationshipDiagram.html) 114 | 115 | ## Amazing People and Organizations 116 | 117 | ( _subjects_ ) 118 | 119 | - [Professor Immanuel Trummer](https://itrummer.github.io/) and [his lecture on DBMS](https://www.youtube.com/watch?v=4cWkVbC2bNE) : lecturer at Cornell University 120 | - [Professor Jörg Endrullis](http://joerg.endrullis.de/teaching/) and [his lectures](https://www.youtube.com/@TheComputerScience/playlists) : lecturer and amazing source of knowledge at Vrije Universiteit Amsterdam 121 | - [Dr. 
Daniel Soper](https://www.danielsoper.com) and [his lectures](https://www.youtube.com/@DanielSoper/playlists) : lecturer and amazing source of knowledge at California State University 122 | - [Professor Joseph Hellerstein](https://dsf.berkeley.edu/jmh/index.html) and [his lecture on DBMS](https://www.youtube.com/playlist?list=PLeb33PCuqDdfS_ljg40TaA0e9p-m_ZF4_) : lecturer at Berkley Univirsity 123 | 124 | ## Tags legend 125 | 126 | ##### Kind of Resource 127 | 128 | - :movie_camera: - video material to watch 129 | - :page_facing_up: - reading 130 | - :book: - a book 131 | - :mortar_board: - online course with or without feedback 132 | - :chart_with_upwards_trend: - cheat sheets 133 | - :card_file_box: - reference or manual or a standard 134 | - :open_file_folder: - collections of collections 135 | - :pirate_flag: - non-english 136 | - :page_facing_up: - either single article or single video-tutorial 137 | - :building_construction: - ideas for inspiration of mini-projects to add to your portfolio 138 | - :moneybag: - paid 139 | - ( _certificate_ ) - certification 140 | - 🔽 - download 141 | - ⚡ - practice, it is possible to interact and get feedback from the system 142 | - ( _official_ ) - official material 143 | - ( _blog_ ) - blogs 144 | - ( _example_ ) - code sample that can be executed 145 | - ( _short_ ) - short overview 146 | - ( _tool_ ) - a tool 147 | - ( _subjects_ ) - people and organizations 148 | - ( _skill_ ) - not a theory, but practices oriented 149 | 150 | ##### Level of Expertise 151 | 152 | - ( _level_0_ ) - very basic and trivial, every professional should know no matter which specialization 153 | - ( _level_1_ ) - basic but not trivial, every professional should know no matter which specialization 154 | - ( _level_2_ ) - medium proficiency, most professional should know no matter which specialization 155 | - ( _level_3_ ) - advanced, not recommended unless it fits specialization 156 | 157 | ##### Other Categories 158 | 159 | - ( _dbms_ ) - database manageement systems 160 | - ( _key-value_ ) - key-value database 161 | - ( _systems_design_ ) - systems design key concepts 162 | - ( _document-oriented_ ) - document-oriented DBMS 163 | - ( _wide_column_ ) - wide column database 164 | - ( _relational_ ) - relational database 165 | - ( _sql_ ) - sql related 166 | -------------------------------------------------------------------------------- /dbms/questions.md: -------------------------------------------------------------------------------- 1 | # Questions for interview on DBMS 2 | 3 | 4 | 5 | ## SQL :: Optimization 6 | 7 | - Outline the procedure for optimizing an SQL query. 8 | - Describe the sequence of operations in SQL statement execution. 9 | 10 | ## SQL :: General 11 | 12 | - What are the definitions and examples of DDL and DML in SQL? 13 | - Can you elucidate the differences between the DELETE and TRUNCATE statements in SQL? 14 | - Can you elucidate the differences between the DELETE and DROP statements in SQL? 15 | - For what reasons would one utilize the CASE statement in SQL, and could you provide an illustrative example? 16 | - How do the DISTINCT clause and GROUP BY clause differ in their functionality? 17 | - What are the essential principles to follow when employing the UNION operator in SQL queries? 18 | - What are the distinctions between the UNION and UNION ALL operators? 19 | - Describe the role of aggregate functions in SQL and enumerate various types with explanations. 20 | - How do scalar functions differ from aggregate functions in SQL? 
21 | - What distinguishes COUNT(*) from COUNT(column) in terms of their output? 22 | - Differentiate between the WHERE and HAVING clauses in the context of SQL queries. 23 | - Which SQL syntax would be used to express a year in words? 24 | - How can one convert a textual representation of a date, such as "22-05-2013", into a recognized date format in SQL? 25 | - What function would you use to obtain yesterday's date in SQL, and can you provide an example? 26 | - In SQL, if you have a FNAME column with entries like "James Bond", "Avraam Lincoln", and "Merlin Menson", which functions would you use to extract only the first name? Include an example. 27 | - Define the MERGE statement in SQL and illustrate its application with an example. 28 | - Which SQL operations are reversible with a rollback, and which are irreversible? 29 | - What is the purpose of the `COALESCE` function in SQL? 30 | 31 | ## SQL :: Window Functions 32 | 33 | - Can aggregate functions be repurposed as window functions in SQL, and if so, how is this achieved? 34 | - Detail the differences among the ROW_NUMBER, RANK, DENSE_RANK and NTILE window functions in SQL. 35 | - Detail the differences among the CUME_DIST and PERCENT_RANK window functions in SQL. 36 | - Define Analytical functions in SQL. 37 | 38 | ## SQL :: Code 39 | 40 | - What are the key differences between SQL and PL/SQL? 41 | - Under what circumstances is a function not callable within a SELECT statement? 42 | - Compare and contrast triggers with functions in SQL. 43 | - Highlight the distinctions between SQL functions and procedures. 44 | - Define the concept of a trigger in SQL. 45 | - Explain the role of PRAGMA AUTONOMOUS TRANSACTION in SQL. 46 | 47 | ## SQL :: Join 48 | 49 | - Describe the relationship between the cartesian product and the SQL JOIN operation. 50 | - When would you utilize a Cross Join in SQL? 51 | - Elucidate the differences among LEFT JOIN, RIGHT JOIN, FULL OUTER JOIN, and INNER JOIN. 52 | - Explain the concept of Joins in SQL and list the various kinds of Joins that exist. 53 | - What is the minimum number of Joins needed to connect 6 separate tables? 54 | 55 | ## SQL :: Constraints and Relationships 56 | 57 | - Define Constraints in the context of SQL and enumerate the different types available. 58 | - What is the maximum number of primary keys and unique keys allowed in a single SQL table? 59 | - Delineate the differences among primary key, unique key, and foreign key constraints. 60 | - What ranges of values are possible for minimal and maximal cardinality? Does this range vary? 61 | - Identify which constraints affect the minimum cardinality and which influence the maximum cardinality. Is this consistent across all cases? 62 | 63 | ## SQL :: Subqueries 64 | 65 | - Is repeatedly including the same subquery in a single query advisable? If not, what strategies can be employed to address this? 66 | - Define subqueries and explain their possible applications. 67 | - Can subqueries be assigned a name for reuse within the same SQL statement? 68 | - Is there a method to make a subquery permanent? 69 | - How do subqueries differ from Common Table Expressions (CTEs)? 70 | - Contrast subqueries with views in SQL. 71 | - Compare and contrast Common Table Expressions (CTEs) with views. 72 | - Describe the distinctions between views and synonyms, providing examples. 73 | - Compare views with materialized views in terms of functionality and usage. 74 | - How to avoid creating a view if you need to add few request-time columns? 
75 | 76 | ## Index 77 | 78 | - Define indexes in the context of databases and explain their purpose. 79 | - Enumerate the various types of indexes available in SQL databases. 80 | - Contrast btree indexes with hash indexes in terms of structure and usage. 81 | - Explain the purpose of a bloom filter index and provide an illustrative example. 82 | 83 | ## Normalization 84 | 85 | - Discuss the benefits and drawbacks of data normalization and denormalization. 86 | - Identify the normal form that ensures all functional dependencies are maintained. Which normal form fails to guarantee this? 87 | 88 | ### Quick Checks 89 | 90 | - Can a foreign key reference a non-primary-key column? When? Yes, if that column has a unique constraint. 91 | - Can a unique column have duplicates? When? Only NULLs. 92 | - Can a foreign key reference a nullable column? Yes. 93 | - Can a primary key value be changed? 94 | -------------------------------------------------------------------------------- /dbms/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn DBMS Together 2 | 3 | Awesome collection of learning materials to master modern Database Management Systems. 4 | 5 | ![SQL Join](./cheatsheet/sql_join.jpg) 6 | 7 | ## What is this about? 8 | 9 | This repository contains a dozen curated collections: learning materials, toolboxes, newspapers, working groups, and collections of other collections. Everything here will be useful if you are interested in DBMS. 10 | 11 | Here you can find: 12 | 13 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master DBMS. 14 | - __:old_key:__ [Comprehend](./concepts.md) : key concepts and dichotomies of DBMS. 15 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheet.md) : cheatsheets on DBMS. 16 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheet_erd.md) : cheatsheets on ERD. 17 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheet_psql.md) : cheatsheets on PostgreSQL.
18 | -------------------------------------------------------------------------------- /dbms/skills.md: -------------------------------------------------------------------------------- 1 | # Key skills/competences of DBMS 2 | 3 | - Database Design 4 | - - ERD 5 | - - Types of relationships 6 | - - Normalization 7 | - SQL 8 | - - reading ( DQL ) 9 | - - - projecting ( select ) 10 | - - - filtering ( where, distinct ) 11 | - - - top clause ( limit ) 12 | - - - sorting ( order by ) 13 | - - - aggregation clause ( group by ) 14 | - - - aggregation functions ( group by ) 15 | - - - window functions 16 | - - - pivoting 17 | - - - renaming 18 | - - - string functions and formatting 19 | - - - time, date and formatting 20 | - - - array, sets and formatting 21 | - - - case 22 | - - - set operations 23 | - - - cartesian product 24 | - - - joining ( join ) 25 | - - - - taxonomy of joins 26 | - - - subquery 27 | - - - - anonimous subquery 28 | - - - - named subquery ( CTE ) 29 | - - - - recursive subquery ( hierarchical CTE ) 30 | - - defining ( DDL ) 31 | - - - data types 32 | - - - create 33 | - - - alter 34 | - - - drop 35 | - - - constrains 36 | - - - views 37 | - - - materialzid views 38 | - - - prepare statement 39 | - - - stored column ~ generated column 40 | - - - synonym 41 | - - - truncate 42 | - - - enum 43 | - - - sequence 44 | - - - index 45 | - - - - btree 46 | - - - - hash 47 | - - - - bloom filter 48 | - - - - GIN 49 | - - - - GiST 50 | - - - - SP-GiST 51 | - - - - BRIN 52 | - - manipulation ( DML ) 53 | - - - add, insert 54 | - - - update 55 | - - - delete 56 | - - access control ( DCL ) 57 | - - transaction ( TCL ) 58 | - - performance tuning 59 | - - - explain 60 | - - - table statistics 61 | - - - table partitioning 62 | - - - profiling and tracing 63 | - - programming 64 | - - - user defined functions 65 | - - - stored procedures 66 | - - - trigger 67 | - - - PL/SQL 68 | - - - - collections types 69 | - - - - cursor 70 | - - - - packages 71 | - - - - control statements 72 | -------------------------------------------------------------------------------- /devops/cheatsheets.md: -------------------------------------------------------------------------------- 1 | # Key concepts of DevOps 2 | 3 | Cheatsheets for DevOps. 4 | 5 | 6 | -------------------------------------------------------------------------------- /devops/concepts.md: -------------------------------------------------------------------------------- 1 | # Key concepts of DevOps 2 | 3 | Key concepts of DevOps. 4 | 5 | 6 | -------------------------------------------------------------------------------- /devops/learn.md: -------------------------------------------------------------------------------- 1 | # :mortar_board: Learn DevOps 2 | 3 | Awesome collection of learning materials to master modern DevOps. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page. 
6 | 7 | 8 | 9 | 10 | ## Monitoring, logging, aggregation and diagnostics of a distributed system 11 | 12 | ( _monitoring_ ) 13 | 14 | - [Prometheus Architecture explained](https://www.youtube.com/watch?v=mLPg49b33sA) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 15 | - [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 16 | - [Logging in Kubernetes with Elasticsearch, Fluentd and Kibana](https://www.youtube.com/watch?v=I5c8Pfg2tys) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 17 | - [Grafana vs Kibana](https://www.youtube.com/watch?v=xXmOmFyN3Hs) :movie_camera: by [IT Security Labs](https://www.youtube.com/c/ITSecurityLabs/videos) 18 | 19 | ## Infrastructure as a code 20 | 21 | ( _infrastructure_as_a_code_ ) 22 | 23 | - [Terraform in 100 Seconds](https://www.youtube.com/watch?v=tomUWcQ0P3k) :movie_camera: by [Fireship](https://www.youtube.com/c/Fireship) ( _short_ ) ( _infrastructure_as_a_code_ ) 24 | - [Terraform in 15 mins](https://www.youtube.com/watch?v=l5k1ai_GBDE) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) ( _infrastructure_as_a_code_ ) 25 | - [Terraform Course](https://www.youtube.com/watch?v=7xngnjfIlK4) :movie_camera: by [DevOps Directive](https://www.youtube.com/c/DevOpsDirective/playlists) ( _infrastructure_as_a_code_ ) 26 | - [What is GitOps](https://www.youtube.com/watch?v=f5EpcWp0THw) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 27 | 28 | ## Virtualization 29 | 30 | ( _virtualization_ ) 31 | 32 | - [Docker Tutorial](https://www.youtube.com/watch?v=3c-iBn73dDE) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 33 | - [Docker Compose](https://www.youtube.com/watch?v=DM65_JyGxCo) :movie_camera: by [ NetworkChuck ](https://www.youtube.com/c/NetworkChuck/videos) 34 | 35 | ## CI/CD 36 | 37 | ( _ci_cd_ ) 38 | 39 | - [Github Actions Tutorial](https://www.youtube.com/watch?v=eB0nUzAI7M8) :movie_camera: by [Fireship](https://www.youtube.com/c/Fireship) 40 | - [GitHub Actions Tutorial](https://www.youtube.com/watch?v=R8_veQiYBjI) :movie_camera: by [TechWorld with Nana](https://www.youtube.com/c/TechWorldwithNana/videos) 41 | 42 | - [Continuous Deployment vs. 
Continuous Delivery](https://www.youtube.com/watch?v=LNLKZ4Rvk8w) :movie_camera: by [IBM Technology](https://www.youtube.com/c/IBMTechnology/playlists) 43 | 44 | ## Shell environment, Linux 45 | 46 | ( _shell_ ) 47 | 48 | - [Linux Crash Course](https://www.youtube.com/playlist?list=PLT98CRl2KxKHKd_tH3ssq0HPrThx2hESW) :mortar_board: :movie_camera: by [Learn Linux TV](https://www.youtube.com/c/LearnLinuxtv/playlists) 49 | - [100 Linux Commands](https://www.youtube.com/playlist?list=PL5gAM72D0Cq-R8SrNsuOMB1zX4KOy7W-L) :mortar_board: :movie_camera: by [Jae Nulton](https://www.youtube.com/@jaenulton9953) 50 | - [Linux Commands](https://www.youtube.com/playlist?list=PLT98CRl2KxKHaKA9-4_I38sLzK134p4GJ) :mortar_board: :movie_camera: by [Learn Linux TV](https://www.youtube.com/c/LearnLinuxtv/playlists) 51 | - :page_facing_up: [30 Handy Shell Aliases](https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html) by [cyberciti.biz](https://www.cyberciti.biz/) 52 | - :chart_with_upwards_trend: [Bash Cheatsheet](https://devhints.io/bash) by [Rico](https://devhints.io/) 53 | - :chart_with_upwards_trend: [Bash Cheatsheet](https://github.com/LeCoupa/awesome-cheatsheets/blob/master/languages/bash.sh) by [Julien Le Coupanec](https://github.com/LeCoupa) 54 | - :chart_with_upwards_trend: [Pure Bash Bible](https://github.com/dylanaraps/pure-bash-bible) by [Dylan Araps](https://github.com/dylanaraps) : a collection of bash recipies 55 | - :chart_with_upwards_trend: [Bash Built-in Variables](https://www.gnu.org/software/bash/manual/html_node/Bash-Variables.html) 56 | 57 | ## Shell Scripting 58 | 59 | ( _shell_ ) 60 | 61 | - [Bash Scripting on Linux](https://www.youtube.com/playlist?list=PLT98CRl2KxKGj-VKtApD8-zCqSaN2mD4w) :mortar_board: :movie_camera: by [Learn Linux TV](https://www.youtube.com/c/LearnLinuxtv/playlists) 62 | - [Bash Scripting Full Course](https://www.youtube.com/watch?v=e7BufAVwDiM) :mortar_board: :movie_camera: by [linuxhint](https://www.youtube.com/@linuxhint) 63 | 64 | ## Inter-process communication 65 | 66 | ( _ipc_ ) 67 | 68 | - [API Essentials](https://www.youtube.com/playlist?list=PLOspHqNVtKAAAq9pHWlEiRUVcYMCcu4X0) :mortar_board: :movie_camera: by [IBM Technology](https://www.youtube.com/c/IBMTechnology/playlists) ( _ipc_ ) 69 | - [REST API and OpenAPI](https://www.youtube.com/watch?v=pRS9LRBgjYg) :movie_camera: by [IBM Technology](https://www.youtube.com/c/IBMTechnology/playlists) ( _ipc_ ) 70 | - [How do you design API?](https://www.youtube.com/watch?v=_YlYuNMTCc8) :movie_camera: by [Gaurav Sen](https://www.youtube.com/c/GauravSensei/videos) ( _ipc_ ) 71 | 72 | ## Remote Access 73 | 74 | ( _remote_access_ ) 75 | 76 | - [Linux Desktop in the Cloud](https://www.youtube.com/watch?v=633OWaW3cyo) :movie_camera: by [Linode](https://www.youtube.com/c/linode/videos) 77 | - [Linux Firewall Tutorial](https://www.youtube.com/watch?v=XtRXm4FFK7Q) :movie_camera: by [Linode](https://www.youtube.com/c/linode/videos) 78 | - [Hardening Access to Your Server](https://www.youtube.com/watch?v=eeaFoZlSq6I) :movie_camera: by [Linode](https://www.youtube.com/c/linode/videos) 79 | 80 | ## Benchmarking 81 | 82 | ( _about:benchmarking_ ) 83 | 84 | - :page_facing_up: [Linux Perf Analysis](https://brendangregg.com/Articles/Netflix_Linux_Perf_Analysis_60s.pdf) by [Netflix](https://netflixtechblog.com/) 85 | 86 | ## Tags legend 87 | 88 | - :movie_camera: - video material 89 | - :page_facing_up: - reading 90 | - :mortar_board: - online course with or without feedback 91 | - :chart_with_upwards_trend: - 
cheetsheets 92 | - ( _short_ ) - short overview 93 | - ( _systems_design_ ) - systems design 94 | - ( _communication_ ) - back-end, networking, RPC, IPC 95 | - ( _monitoring_ ) - monitoring, logging, aggregation and diagnostics of a distributed system 96 | - ( _infrastructure_as_a_code_ ) - infrastructure as a code 97 | - ( _time_series_ ) - time-series database 98 | - ( _virtualization_ ) - virtualization 99 | - ( _ci_cd_ ) - continuous integration / continuous deployment 100 | - ( _ipc_ ) - inter-process communication 101 | - ( _remote_access_ ) - remote access 102 | - ( _shell_ ) - shell environment, Linux 103 | - ( _architecture_ ) - software architecture in general 104 | - ( _principle_ ) - principles of software engineering 105 | - ( _pattern_ ) - patterns of software engineering 106 | - ( _testing_ ) - testing related aspects of architecture 107 | - ( _lld_ ) - low-level design 108 | -------------------------------------------------------------------------------- /devops/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn DevOps Together 2 | 3 | Awesome collection of learning materials to master modern DevOps. 4 | 5 | Here you can find: 6 | 7 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master Computer Sicence. 8 | - __:old_key:__ [Comprehend](./concepts.md) : key concepts and dichotomies of Computer Sicence. 9 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheets.md) : cheatsheets on Computer Sicence. 10 | -------------------------------------------------------------------------------- /digital_product/cheatsheet.md: -------------------------------------------------------------------------------- 1 | # Cheat Sheets 2 | -------------------------------------------------------------------------------- /digital_product/concepts.md: -------------------------------------------------------------------------------- 1 | # Key Concepts 2 | 3 | ### Failure Mode and Effects Analysis ~ FMEA 4 | 5 | Systematic, proactive method for evaluating a process to identify where and how it might fail and to assess the relative impact of different failures, in order to identify the parts of the process that are most in need of change. FMEA includes the identification of potential failure modes in a system, classification of the severity of the effects of failures, and listing of possible causes of failures, along with the likelihood of the failures occurring. 6 | 7 | ### Test Plan 8 | 9 | Detailed document that outlines the strategy, resources, scope, and schedule of intended test activities. It is a roadmap devised by project managers, test managers, or testing teams to guide the testing process of a software project. The test plan is critical for ensuring that the newly developed or modified software system meets its requirements and works as expected in the real-world scenario. 10 | -------------------------------------------------------------------------------- /digital_product/learn.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Digital Product Together 2 | 3 | Awesome list of high-quality study materials for learning Digital Product. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page. 
6 | 7 | 8 | 9 | 10 | ## Problem Solving 11 | 12 | - [Root cause analysis](https://www.youtube.com/playlist?list=PLxENun6kHfaB4vJsQAO4zXgDbXZ-1i8l_) by [skilltecs](https://www.youtube.com/@skilltecs) :movie_camera: :mortar_board: 13 | - [Root cause analysis](https://www.youtube.com/playlist?list=PL55ZwvYHxcu2QqhSl-NwKvIF8kUeww-JB) by [Lean and Six Sigma](https://www.youtube.com/@learnandapply) :movie_camera: :mortar_board: 14 | - [FMEA](https://www.youtube.com/playlist?list=PLHm5EVKgoDDSj-5mcaHSRQ66CrtCL0fj5) by [Paul Allen](https://www.youtube.com/@paulallen5321) :movie_camera: :mortar_board: 15 | 16 | ## Testing 17 | 18 | - [Software Testing](https://www.youtube.com/playlist?list=PLL34mf651faM_nn8uKlnwbQPw5zSh_F84) by [Software Testing Mentor](https://www.youtube.com/@softwaretestingmentor) :movie_camera: :mortar_board: 19 | 20 | ## Management 21 | 22 | [Agile Totorial](https://www.youtube.com/playlist?list=PLL34mf651faPu9Ouit44hlt3yoP0vIMi4) by [Software Testing Mentor](https://www.youtube.com/@softwaretestingmentor) :movie_camera: :mortar_board: 23 | 24 | ## Tags legend 25 | 26 | ##### Kind of Resource 27 | 28 | - :movie_camera: - video material to watch 29 | - :page_facing_up: - reading 30 | - :book: - a book 31 | - :mortar_board: - online course with or without feedback 32 | - :chart_with_upwards_trend: - cheat sheets 33 | - :card_file_box: - reference or manual or a standard 34 | - :open_file_folder: - collections of collections 35 | - :pirate_flag: - non-english 36 | - :page_facing_up: - either single article or single video-tutorial 37 | - :building_construction: - ideas for inspiration of mini-projects to add to your portfolio 38 | - :moneybag: - paid 39 | - ( _certificate_ ) - certification 40 | - 🔽 - download 41 | - ⚡ - practice, it is possible to interact and get feedback from the system 42 | - ( _official_ ) - official material 43 | - ( _blog_ ) - blogs 44 | - ( _example_ ) - code sample that can be executed 45 | - ( _short_ ) - short overview 46 | - ( _tool_ ) - a tool 47 | - ( _subjects_ ) - people and organizations 48 | - ( _skill_ ) - not a theory, but practices oriented 49 | 50 | ##### Level of Expertise 51 | 52 | - ( _level_0_ ) - very basic and trivial, every professional should know no matter which specialization 53 | - ( _level_1_ ) - basic but not trivial, every professional should know no matter which specialization 54 | - ( _level_2_ ) - medium proficiency, most professional should know no matter which specialization 55 | - ( _level_3_ ) - advanced, not recommended unless it fits specialization 56 | 57 | ##### Other Categories 58 | 59 | - ( _Digital Product_ ) - database manageement systems 60 | -------------------------------------------------------------------------------- /digital_product/questions.md: -------------------------------------------------------------------------------- 1 | # Questions for interview 2 | -------------------------------------------------------------------------------- /digital_product/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Digital Product Together 2 | 3 | Awesome collection of learning materials to master modern Digital Product. 4 | 5 | ![SQL Join](./cheatsheet/sql_join.jpg) 6 | 7 | ## What is this about? 8 | 9 | This repostory contains dozen curated collections: learning materials, toolboxes, newspapers, working groups, collection of other collections. Everything you will find useful if you are interested in Digital Product. 
10 | 11 | Here you can find: 12 | 13 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master Digital Product. 14 | - __:old_key:__ [Comprehend](./concepts.md) : key concepts and dichotomies of Digital Product. 15 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheets.md) : cheatsheets on Digital Product. 16 | -------------------------------------------------------------------------------- /digital_product/skills.md: -------------------------------------------------------------------------------- 1 | # Key skills/competences 2 | -------------------------------------------------------------------------------- /file_format/cheatsheet/jpg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/file_format/cheatsheet/jpg.png -------------------------------------------------------------------------------- /file_format/cheatsheet/mp4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/file_format/cheatsheet/mp4.png -------------------------------------------------------------------------------- /file_format/cheatsheet/pdf.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/file_format/cheatsheet/pdf.png -------------------------------------------------------------------------------- /file_format/cheatsheets.md: -------------------------------------------------------------------------------- 1 | # File formats cheat sheets 2 | 3 | ## JPG 4 | 5 | ![JPG](./cheatsheet/jpg.png) 6 | 7 | ## MP4 8 | 9 | ![MP4](./cheatsheet/mp4.png) 10 | 11 | ## PDF 12 | 13 | ![PDF](./cheatsheet/pdf.png) 14 | -------------------------------------------------------------------------------- /gat/cheatsheets.md: -------------------------------------------------------------------------------- 1 | # Key concepts of Graphs and Automata Theory 2 | 3 | Cheatsheets for Graphs and Automata Theory. 4 | 5 | 6 | -------------------------------------------------------------------------------- /gat/concepts.md: -------------------------------------------------------------------------------- 1 | # Key concepts of Graphs and Automata Theory 2 | 3 | Key concepts of Graphs and Automata Theory. 4 | 5 | 6 | -------------------------------------------------------------------------------- /gat/learn.md: -------------------------------------------------------------------------------- 1 | # :mortar_board: Learn Graphs and Automata Theory 2 | 3 | Awesome collection of learning materials to master modern Graphs and Automata Theory. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page. 
6 | 7 | 8 | 9 | 10 | ## General 11 | 12 | - [Automata Theory Course](https://www.youtube.com/playlist?list=PL3TE2CsKK478cNgQa7UTnEdFkXzii5CII) :movie_camera: :mortar_board: by [Theoretical Computer Science](https://www.youtube.com/@TheComputerScience) 13 | 14 | ## Tags legend 15 | 16 | - :movie_camera: - video material 17 | - :page_facing_up: - reading 18 | - :mortar_board: - online course with or without feedback 19 | - :chart_with_upwards_trend: - cheetsheets 20 | - ( _short_ ) - short overview 21 | - ( _systems_design_ ) - systems design 22 | -------------------------------------------------------------------------------- /gat/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Graphs and Automata Theory Together 2 | 3 | Awesome collection of learning materials to master modern Graphs and Automata Theory. 4 | 5 | Here you can find: 6 | 7 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master Computer Sicence. 8 | - __:old_key:__ [Comprehend](./concepts.md) : key concepts and dichotomies of Computer Sicence. 9 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheets.md) : cheatsheets on Computer Sicence. 10 | -------------------------------------------------------------------------------- /git/asset/cheatsheet.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/git/asset/cheatsheet.jpg -------------------------------------------------------------------------------- /git/cheatsheet.md: -------------------------------------------------------------------------------- 1 | # Cheat sheets 2 | 3 | #### rebase 4 | 5 | ```shell 6 | git rebase master 7 | git rebase master feature1 8 | 9 | # attach to master current branch ( feature1 ) moving its pointer 10 | # before after 11 | # A---B---C---F---G (master) A---B---C---F---G (master) 12 | # \ \ 13 | # D---E (HEAD feature1) D'---E' (HEAD feature1) 14 | ``` 15 | 16 | #### rebase without moving branch pointer 17 | 18 | ```shell 19 | git rebase master G 20 | git rebase master feature1~0 21 | 22 | # attach to master current branch ( feature1 ) without moving its pointer 23 | # before after 24 | # A---B---C---F---G (master) A---B---C---F---G (master) 25 | # \ | \ 26 | # D---E (HEAD feature1) | D'---E' (HEAD) 27 | # D---E (feature1) 28 | ``` 29 | 30 | #### rebase selective moving branch pointer 31 | 32 | ```shell 33 | git rebase --onto F D 34 | git rebase --onto F D feature1 35 | 36 | # attach to F what goes after D and moving current branch ( feature1 ) pointer 37 | # before after 38 | # A---B---C---F---G (branch) A---B---C---F---G (branch) 39 | # \ \ 40 | # D---E---H---I (HEAD feature1) E'---H'---I' (HEAD feature1) 41 | ``` 42 | 43 | #### rebase selective without moving branch pointer 44 | 45 | ```shell 46 | git rebase --onto F D H 47 | git rebase --onto F D HEAD 48 | 49 | # attach to F what goes after D stopping at H 50 | # before after 51 | # A---B---C---F---G (branch) A---B---C---F---G (branch) 52 | # \ | \ 53 | # D---E---H---I (HEAD feature1) | E'---H' (HEAD) 54 | # \ 55 | # D---E---H---I (feature1) 56 | ``` 57 | 58 | #### commits drop 59 | 60 | ```shell 61 | git rebase --onto B D 62 | git rebase --onto B D branch1 63 | 64 | # attach to B what goes after D moving branch pointer 65 | # before after 66 | # A---B---C---D---E---F (HEAD branch) A---B---E'---F' (HEAD branch) 67 | ``` 68 | 69 | ### git areas 70 | 71 | 
![cheatsheet](./asset/cheatsheet.jpg) 72 | -------------------------------------------------------------------------------- /git/misconceptions_and_advices.md: -------------------------------------------------------------------------------- 1 | # Some git misconceptions 2 | 3 | #### ~~Branch of a tree is a good conceptualization of git branch~~ 4 | 5 | Not at all. Git branches get merged into the trunk( master branch ). That's not a tree at all. It's a rather directed acyclic graph ( DAG ). Also, it's not very useful to think that branches have commits, [branch](https://www.youtube.com/watch?v=zcaeZHmMjUI) is moving tag rather. 6 | 7 | #### ~~It's better to use official documentation on git~~ 8 | 9 | Official documentation on git is rather poor and difficult to understand. 10 | -------------------------------------------------------------------------------- /git/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Git Together 2 | 3 | Awesome list of high-quality study materials for learning Git and GitHub. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page. 6 | 7 | ![cheatsheet](/asset/cheatsheet.jpg) 8 | 9 | 10 | 11 | ## For beginners 12 | 13 | - ⭐ [Learn Git Branching through game](https://learngitbranching.js.org/) ⚡ ( _reading_ ) 14 | - ⭐ [Git in 1 Hour](https://youtu.be/8JJ101D3knE) by [Programming with Mosh](https://www.youtube.com/c/programmingwithmosh) ( _video_ ) 15 | - ⭐ [Git How To](https://githowto.com/uk) ( _reading_ ) ( _course_ ) 16 | - [Course on Git](https://www.codecademy.com/learn/learn-git) from [Code Academy](https://www.codecademy.com/) ( _course_ ) 17 | - [Git: Complete Course Tutorial](https://www.youtube.com/watch?v=vMdSqMf6BPY&list=PL_euSNU_eLbegnt7aR8I1gXfLhKZbxnYX) by [Leela Web Dev](https://www.youtube.com/c/LeelaWebDev) ( _video_ ) ( _course_ ) 18 | - [Git. 
Большой практический выпуск](https://www.youtube.com/watch?v=SEvR78OhGtw) by [Артем Матяшов](https://www.youtube.com/channel/UCJHS22_QyRowmNAaxoUd4dA) ( _video_ ) ( _course_ ) ( _non-eng_ ) 19 | - [Git: курс](https://www.youtube.com/playlist?list=PLDyvV36pndZFHXjXuwA_NywNrVQO0aQqb) ( _video_ ) ( _course_ ) ( _non-eng_ ) 20 | - [Learn Git with Bitbucket Cloud](https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud) ( _reading_ ) ( _course_ ) 21 | - [Git Tutorial by w3school](https://www.w3schools.com/git/) ( _reading_ ) 22 | - [Git - the simple guide](http://up1.github.io/git-guide/index.html) ( _reading_ ) 23 | - [Git Immersion](https://gitimmersion.com/index.html) ( _reading_ ) ( _course_ ) 24 | - [Udemy Free Course](https://www.udacity.com/course/version-control-with-git--ud123?irclickid=X8KTyLyCxxyNTbgQNSSAlymTUkAxo6zuc2jQTM0&irgwc=1&utm_source=affiliate&utm_medium=&aff=245992&utm_term=&utm_campaign=__&utm_content=&adid=786224) ( _course_ ) 25 | - [Git Branching and Merging](https://www.youtube.com/watch?v=Q1kHG842HoI) by [SuperSimpleDev](https://www.youtube.com/@SuperSimpleDev) 26 | - [Mini-course on Git](https://www.codecademy.com/learn/learn-git/modules/learn-git-git-workflow-u/cheatsheet) from [Code Academy](https://www.codecademy.com/) ( _course_ ) 27 | 28 | ## Advanced 29 | 30 | - ⭐ [A Visual Git Reference](https://marklodato.github.io/visual-git-guide/index-en.html) ( _reading_ ) 31 | - [Git - Book](https://git-scm.com/book/en/v2) ( _reading_ ) ( _official_ ) 32 | - [Git for Profesionals](https://www.youtube.com/watch?v=Uszj_k0DGsg) by [freeCodeCamp.org](https://www.youtube.com/c/Freecodecamp) ( _video_ ) 33 | - [Three-git-tips](https://github.com/saraford/three-git-tips) by [saraford](https://github.com/saraford) ( _reading_ ) 34 | - [13 Advanced Git Techniques and Shortcuts](https://www.youtube.com/watch?v=ecK3EnyGD8o) by [Fireship](https://www.youtube.com/@Fireship/playlists) : ( _video_ ) 35 | - [Git Internals](https://www.youtube.com/watch?v=P6jD966jzlk) by [GitLab](https://www.youtube.com/@Gitlab) ( _video_ ) 36 | - [Git rebase explained](https://womanonrails.com/git-rebase-onto) by [womanonrails](https://womanonrails.com/) ( _reading_ ) 37 | 38 | ## Git & GitHub 39 | 40 | - [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk) by [freeCodeCamp.org](https://www.youtube.com/c/Freecodecamp) ( _video_ ) 41 | - [Git and GitHub Tutorial For Beginners](https://www.youtube.com/watch?v=3fUbBnN_H2c) by [Amigoscode](https://www.youtube.com/c/amigoscode) ( _video_ ) 42 | - [Getting started with GitHub](https://docs.github.com/en/get-started) ( _official_ ) ( _reading_ ) 43 | 44 | ## GIT Certification 45 | 46 | - ⭐ [Source Control Management with Git](https://training.linuxfoundation.org/certification/git/?SSAID=746540&sscid=11k7_os2de) (_cource_) (_certification_) 47 | - [Foundations of Git](https://learn.gitkraken.com/courses/git-foundations) (_cource_) (_certification_) 48 | - [GitLab Certified Git Associate](https://about.gitlab.com/services/education/gitlab-certified-associate/) (_official_) (_cource_) (_certification_) 49 | 50 | ## Cheatsheets 51 | 52 | ( _cheatsheet_ ) 53 | 54 | - [Git Cheat Sheet](https://about.gitlab.com/images/press/git-cheat-sheet.pdf) by [GitLab](https://gitlab.com) 55 | 56 | ## See also 57 | 58 | - [Misconceptions and advices](./misconceptions_and_advices.md) 59 | - [Cheatsheets](./cheatsheets.md) 60 | 61 | ## Tags legend 62 | 63 | - ( _official_ ) - official material 64 | - ( _non-eng_ ) - non-english 
language 65 | - ( _course_ ) - consists of series of text/video articles trying to give to a reader solid foundation 66 | - ( _certification_ ) - formal attestation to validate your proficiency through exams 67 | - ( _reading_ ) - material to read 68 | - ( _video_ ) - material to watch 69 | - ( _cheatsheet_ ) - cheat sheet 70 | - ⚡ - interactive 71 | -------------------------------------------------------------------------------- /hpc/cheatsheet/collective_communication.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/hpc/cheatsheet/collective_communication.png -------------------------------------------------------------------------------- /hpc/cheatsheet/concurrent_programming.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/hpc/cheatsheet/concurrent_programming.jpeg -------------------------------------------------------------------------------- /hpc/cheatsheet/cpu_vs_gpu.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/hpc/cheatsheet/cpu_vs_gpu.jpg -------------------------------------------------------------------------------- /hpc/cheatsheet/cuda_momory_model.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/hpc/cheatsheet/cuda_momory_model.jpg -------------------------------------------------------------------------------- /hpc/cheatsheet/granularity.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/hpc/cheatsheet/granularity.jpg -------------------------------------------------------------------------------- /hpc/cheatsheet/latencies.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/hpc/cheatsheet/latencies.jpg -------------------------------------------------------------------------------- /hpc/cheatsheets.md: -------------------------------------------------------------------------------- 1 | # Key concepts of HPC 2 | 3 | Cheatsheets for HPC and parallel computing. 
4 | 5 | 6 | 7 | 8 | 9 | ##### Asynchronous vs Multithreading 10 | 11 | ![Asynchronous vs Multithreading](./cheatsheet/concurrent_programming.jpeg) 12 | 13 | ##### Latencies 14 | 15 | ![Asynchronous vs Multithreading](./cheatsheet/latencies.jpg) 16 | 17 | ##### Collective Communication 18 | 19 | ![Collective Communication](./cheatsheet/collective_communication.png) 20 | 21 | ##### Granularity 22 | 23 | ![Granularity](./cheatsheet/granularity.jpg) 24 | 25 | ##### CPU vs GPU 26 | 27 | ![Asynchronous vs Multithreading](./cheatsheet/cpu_vs_gpu.jpg) 28 | 29 | 30 | 31 | 32 | -------------------------------------------------------------------------------- /hpc/concept.md: -------------------------------------------------------------------------------- 1 | # :mortar_board: High Performance Computing 2 | 3 | ### Concepts 4 | 5 | ##### Work 6 | 7 | The total amount of computation that a particular task or algorithm involves. 8 | 9 | This includes all the operations, like additions, multiplications, comparisons, etc. The work of a parallel computation is the total time it would take to execute on a single processor. 10 | 11 | ##### Span ~ Critical Path Length ~ Depth 12 | 13 | The longest sequence of dependent computations in a parallel task. 14 | 15 | In other words, it's the minimum time that the computation can take, even if you had an unlimited number of processors. This is because some tasks cannot begin until others have been completed. 16 | 17 | ##### Average Available Parallelism ~ AAP ~ Parallelism 18 | 19 | Average amount of work that could potentially be executed concurrently during the execution of a program 20 | 21 | ``` 22 | AAP = ceil( W(n) / D(n) ) 23 | W(n) - work 24 | D(n) - span 25 | ``` 26 | 27 | ##### Span Law 28 | 29 | No parallel algorithm can execute in less time than the span or depth of the computation graph. 30 | 31 | ``` 32 | Tp(n) >= D(n) 33 | Tp(n) ~ time to compute 34 | D(n) ~ span 35 | ``` 36 | 37 | No matter how many processors you have, the time it takes to execute a parallel program can never be less than the total time it takes to execute the longest chain (or span) of dependent calculations. 38 | 39 | This is due to dependencies in the computation that create a 'critical path' through the computation graph, which must be executed serially. This law is a measure of the inherent limit to how much a computation can be sped up by parallelization, regardless of how many processors you have. 40 | 41 | ##### Work Law 42 | 43 | No parallel algorithm can execute in time less than the work divided by the number of processors. 44 | 45 | ``` 46 | Tp(n) >= W(n) / P 47 | Tp(n) ~ time to compute 48 | W(n) ~ work 49 | p ~ number of processors 50 | ``` 51 | 52 | In other words, even if you had an infinite number of processors, you cannot execute faster than the total work divided by the infinity. This law shows that the best-case scenario is that linear speedup can be achieved (time reduces proportionally with the increase in processors). 53 | 54 | ##### Brent's theorem 55 | 56 | Given a parallel algorithm and work W (total computations), the algorithm can be executed on P processors in time Tp <= ( W - D ) / p + D, where D is the time complexity of the algorithm on an infinite number of processors (the critical path length). 57 | 58 | ``` 59 | Tp <= ( W - D ) / p + D 60 | D ~ span 61 | p ~ number of processors 62 | ``` 63 | 64 | This theorem highlights the trade-off between the number of processors used and the total amount of work done. 
It shows that even if an infinite number of processors are used, the total execution time can't be less than the time taken by the longest (critical) sequence of dependent computations, which is represented by D. 65 | 66 | ##### Speed Up 67 | 68 | ``` 69 | Sp(n) = Ts(n) / Tp(n) = Ws(n) / Tp(n) 70 | Sp(n) >= P / ( Wp / Ws + P * D / Ws ) 71 | Ts(n) ~ time to compute sequentially 72 | Tp(n) ~ time to compute in parallel 73 | Ws(n) ~ work of the sequential algorithm, Wp(n) ~ work of the parallel algorithm 74 | P ~ number of processors 75 | D ~ span 76 | ``` 77 | 78 | ##### Amdahl's Law 79 | 80 | The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential part of the program. 81 | 82 | ``` 83 | Speedup ≤ 1 / ( S + ( 1 - S ) / N ) 84 | Speedup is the factor by which the time to complete the task is reduced, 85 | S is the proportion of execution time for the sequential part of the process, 86 | N is the number of processors. 87 | ``` 88 | 89 | A key takeaway from Amdahl's law is that if a fraction of a program is serial and thus cannot be parallelized (let's call this fraction B), then the program's execution time cannot be improved by more than a factor of 1/B by just using parallelization. This limitation is often referred to as Amdahl's bottleneck. As such, even with an infinite number of processors and perfect load balancing, this inherently serial part of the program dictates a limit on the speedup that parallelization can achieve. 90 | 91 | ##### Ceiling / Flooring 92 | 93 | ``` 94 | ceil( a / b ) = floor( ( a + b - 1 ) / b ) 95 | ceil( a / b ) = floor( ( a - 1 ) / b ) + 1 96 | 97 | floor( a / b ) = ceil( ( a - b + 1 ) / b ) 98 | floor( a / b ) = ceil( ( a + 1 ) / b ) - 1 99 | ``` 100 | 101 | ##### Weak Scalability 102 | 103 | System's ability to handle increasingly larger problem sizes as more processors are added, rather than to process a fixed-size problem faster (which would be strong scalability). 104 | 105 | The condition for weak scalability is that when you increase the number of processors, you also have to increase the problem size proportionally, with the goal of keeping the workload per processor constant. 106 | 107 | ``` 108 | P = O( W_s / D ) 109 | W_s / P = Omega( D ) 110 | W_s ~ total work of sequential algorithm 111 | D ~ span 112 | ``` 113 | 114 | ##### Work Optimality 115 | 116 | The total work done by the parallel algorithm is within a constant factor of the total work done by the best sequential algorithm. 117 | 118 | In other words, a work optimal parallel algorithm does not do asymptotically more work than the best sequential algorithm. 119 | 120 | ``` 121 | W(n) = O( W_s(n) ) 122 | W(n) ~ total work of parallel algorithm 123 | W_s(n) ~ total work of sequential algorithm 124 | ``` 125 | 126 | ##### Decomposition Granularity: Coarse-Grained / Fine-Grained 127 | 128 | Measure of the amount of computation in relation to communication or coordination between processes or tasks. 129 | 130 | Coarse-Grained: A coarse-grained system is one in which the computation to communication ratio is high. This means that the tasks can perform a significant amount of computation without needing to interact with other tasks. This is beneficial for parallel computing as it reduces the overhead of communication between tasks and can increase efficiency. 131 | 132 | Fine-Grained: In a fine-grained system, the computation to communication ratio is low. This means that tasks frequently need to interact with each other.
While this can lead to more overhead due to the increased need for communication, fine-grained systems can provide better load balancing and can utilize parallel hardware more fully than coarse-grained systems. 133 | 134 | The optimal level of granularity depends on several factors, including the nature of the computations being performed, the architecture of the parallel system, and the costs associated with communication or synchronization. 135 | 136 | ##### Decomposition: Structural / Functional 137 | 138 | Functional Decomposition and Domain Decomposition are two different ways of structuring parallel computing tasks. 139 | 140 | In **functional decomposition**, the problem is divided based on the computations that need to be performed, rather than on the data that is being processed. Each parallel task represents a specific function or computation that needs to be performed. The same function may be performed on different pieces of data, or different functions might be performed in parallel. For example, in a graphics rendering application, one task might be responsible for calculating lighting, another for handling physics, and so forth. 141 | 142 | In **domain decomposition**, the problem is divided based on the data domain of the problem. Each parallel task performs the same operation but on different subsets of the data. This is often used in problems where the same operation needs to be performed on a large set of data. For example, in a weather simulation, the area being modeled might be divided into a grid, with each task responsible for simulating the weather in one cell of the grid. 143 | 144 | The main difference between functional and domain decomposition lies in how the problem is divided: functional decomposition is based on the different computations or functions that need to be performed, while domain decomposition is based on splitting the data domain into subsets. The choice between functional and domain decomposition depends on the nature of the problem being solved and the characteristics of the data and computations involved. 145 | 146 | ##### Deadlock 147 | 148 | Deadlock is a state in a multi-process system where a process cannot proceed because it needs to access a resource held by another process that is similarly waiting for a resource, creating a cycle of dependencies. 149 | 150 | The four conditions necessary for a deadlock: 151 | 152 | - Mutual Exclusion: This condition states that at least one resource must be held in a non-sharable mode, or else, the process holding the resource does not release it until it has finished its task. 153 | - Hold and Wait: This condition states that a process must be holding at least one resource and waiting for additional resources that are currently being held by other processes. 154 | - No Preemption: This condition states that a resource can be released only voluntarily by the process holding it, after that process has completed its task. It cannot be forcibly taken away from the process. 155 | - Circular Wait: The fourth condition implies that there exists a set (possibly the set of all system processes) such that for each process in the set, it is waiting for a resource that the next process in the set holds. 156 | 157 | ##### Communication Attributes 158 | 159 | - Latency 160 | - Bandwidth 161 | - Throughput 162 | - Concurrency ~ Synchronicity 163 | - Visibility ~ Explicitness 164 | 165 | ##### Latency of Communication 166 | 167 | Delay between a sender sending a message and the receiver receiving it. 
168 | 169 | It's usually measured in units of time like milliseconds or microseconds. Latency can be affected by several factors, including the physical distance between sender and receiver, the quality of the transmission medium, and the number of hops (i.e., intermediate devices like routers or switches) that the message must pass through. In HPC, low latency is crucial, especially in systems that require frequent communication between tasks or nodes, as high latency can significantly reduce the performance of the system. 170 | 171 | ##### Bandwidth of Communication 172 | 173 | Maximum amount of data that can be transferred in a specific period of time between nodes in a cluster or across a network. 174 | 175 | It's usually measured in bits per second (bps), or its multiples (e.g., Mbps, Gbps). High bandwidth is desired in HPC to allow the rapid transfer of large volumes of data. 176 | 177 | ##### Throughput of Communication 178 | 179 | Actual rate at which data is successfully transferred from one part of a system to another over a given period of time. 180 | 181 | While bandwidth refers to potential capacity, throughput is what's actually achieved. It is influenced by factors like network congestion, system load, and the computational efficiency of the processes involved. 182 | 183 | ##### Concurrency ~ Synchronicity of Communication: Synchronous / Asynchronous 184 | 185 | **Synchronous communication**, also known as blocking communication, is a method of exchanging information where a handshake process is involved. 186 | 187 | In this approach, the sender initiates the communication and waits for the receiver to acknowledge receipt of the message before proceeding. This sequential operation ensures the completion of one process before the next one starts, hence the term "blocking". 188 | 189 | **Asynchronous communication**, on the other hand, is also known as non-blocking communication. 190 | 191 | It is a communication method where the sender doesn't have to wait for the receiver's acknowledgment before proceeding. The sender can continue with other tasks or computations while the communication is still in progress. This is beneficial as it allows for the interleaving of computation and communication, effectively utilizing resources and enhancing overall system performance. 192 | 193 | ##### Visibility ~ Explicitness of Communication 194 | 195 | Level of control and awareness a programmer has over the process of data exchange within a parallel computing system. 196 | 197 | It's a concept related to how communication mechanisms are exposed or hidden in the design of parallel programming models. 198 | 199 | In Message Passing communication, the visibility is explicit, which means that the programmer has direct control over the communication process. They need to explicitly write code to send and receive messages between processes. This level of control allows for fine-tuning, making it potentially more efficient, but also requires a deeper understanding of communication mechanisms. The programmer must manage the data exchange, deciding when and where data is sent or received. 200 | 201 | On the other hand, with the Data Parallel model, the communication visibility is implicit. The programmer doesn't need to write specific communication code as data exchanges are automatically handled by the system. The programmer simply specifies the parallel operations, and the underlying system decides when and how to exchange data to fulfill those operations. 
While this can make programming simpler and more streamlined, it may not be as efficient or flexible since the programmer does not have direct control over when communication occurs. In this model, communication is abstracted away, allowing the programmer to focus more on computation logic. 202 | 203 | Visibility of communication is a crucial factor when choosing a programming model for parallel computing, as it directly influences programming complexity, control over efficiency, and potential performance optimization. 204 | 205 | ##### Bitonic Sequence 206 | 207 | Sequence of numbers that first increases monotonically (either strictly or not), and then decreases monotonically. In other words, it's a sequence that first goes up, then goes down. Alternatively, it can be a sequence that first decreases, then increases. 208 | -------------------------------------------------------------------------------- /hpc/concept_cuda.md: -------------------------------------------------------------------------------- 1 | # :mortar_board: CUDA 2 | 3 | ### Concepts 4 | 5 | ##### CPU vs GPU 6 | 7 | ##### Heterogeneous System > Host / Device 8 | 9 | A computing architecture that combines the CPU and its memory with an additional device to optimize performance by utilizing their distinct capabilities. 10 | 11 | **Host** refers to the CPU and its on-chip memory, responsible for executing the main program and managing system operations. 12 | 13 | **Device** typically involves a GPU, which is optimized for parallel processing and accelerates computation by handling data-intensive tasks. In a heterogeneous system, the host and device work together to enhance overall performance. 14 | 15 | ![Asynchronous vs Multithreading](./cheatsheet/cpu_vs_gpu.jpg) 16 | 17 | ##### Threads 18 | 19 | Threads are the smallest unit of execution in CUDA. Each thread executes a kernel, which is a function running on the GPU. Threads are designed to perform the same operation on different pieces of data, following the Single Instruction, Multiple Threads (SIMT) model. 20 | 21 | ##### Blocks 22 | 23 | Threads are grouped into blocks. A block is a collection of threads that can cooperate with each other by sharing data through shared memory and synchronizing their execution. Blocks are executed independently, allowing for scalability across different GPU architectures. 24 | 25 | ``` 26 | +------------------+ 27 | | Thread Block | 28 | | +------------+ | 29 | | | Thread 0 | | 30 | | +------------+ | 31 | | | Thread 1 | | 32 | | +------------+ | 33 | | | ... | | 34 | | +------------+ | 35 | | | Thread N | | 36 | | +------------+ | 37 | +------------------+ 38 | ``` 39 | 40 | ##### Grid 41 | 42 | Blocks are further organized into a grid. A grid is a collection of blocks that execute a kernel. Each kernel launch creates a single grid, which contains all the blocks needed to execute the kernel. The grid structure allows for the execution of a large number of threads, maximizing the GPU's parallel processing capabilities. 43 | 44 | ``` 45 | +------------------+ 46 | | Grid | 47 | | +------------+ | 48 | | | Block 0 | | 49 | | +------------+ | 50 | | | Block 1 | | 51 | | +------------+ | 52 | | | ... | | 53 | | +------------+ | 54 | | | Block M | | 55 | | +------------+ | 56 | +------------------+ 57 | ``` 58 | 59 | ##### Execution Model 60 | 61 | The execution model in CUDA efficiently manages the execution of threads, blocks, and grids. 
When a kernel is launched, the CUDA runtime system schedules the execution of blocks on the available streaming multiprocessors (SMs) of the GPU. Each SM can execute multiple blocks concurrently, depending on the resources available. 62 | 63 | ``` 64 | +------------------+ 65 | | Streaming | 66 | | Multiprocessor | 67 | | +------------+ | 68 | | | Block 0 | | 69 | | +------------+ | 70 | | | Block 1 | | 71 | | +------------+ | 72 | | | ... | | 73 | | +------------+ | 74 | | | Block M | | 75 | | +------------+ | 76 | +------------------+ 77 | ``` 78 | ##### Memory Hierarchy 79 | 80 | CUDA architecture includes a memory hierarchy designed to optimize data access and minimize latency. The hierarchy consists of registers, shared memory, local memory, global memory, constant memory, and host memory. Below is a compact list of each type of memory, ordered from fastest to slowest, followed by a detailed description: 81 | 82 | - **Registers** 83 | - **Shared Memory** 84 | - **Constant Memory** 85 | - **Local Memory** 86 | - **Global Memory** 87 | - **Host Memory (PCIe)** 88 | 89 | ``` 90 | +------------------------------------------------------------------------+ 91 | | Device | 92 | | | 93 | | +--------------------------------------------------------+ | 94 | | | DRAM | | 95 | | | | | 96 | | | +-----------------------+ +-----------------------+ | | 97 | | | | Local Memory | | Global Memory | | | 98 | | | +-----------------------+ +-----------------------+ | | 99 | | +--------------------------------------------------------+ | 100 | | | 101 | | +--------------------------------------------------------+ | 102 | | | GPU | | 103 | | | | | 104 | | | +-----------------------+ +-----------------------+ | | 105 | | | | Multiprocessor | | Multiprocessor | | | 106 | | | | +-----------------+ | | +-----------------+ | | | 107 | | | | | Registers | | | | Registers | | | | 108 | | | | +-----------------+ | | +-----------------+ | | | 109 | | | | | Shared Memory | | | | Shared Memory | | | | 110 | | | | +-----------------+ | | +-----------------+ | | | 111 | | | +-----------------------+ +-----------------------+ | | 112 | +------------------------------------------------------------------------+ 113 | ``` 114 | 115 | - Device: Represents the entire GPU system, including both memory and processing units. 116 | - DRAM: Contains both local and global memory, which are located off-chip. Local memory is private to each thread, while global memory is accessible by all threads. 117 | - GPU: Contains multiple multiprocessors, each with its own on-chip memory. 118 | - Multiprocessor: Each multiprocessor has registers and shared memory. Registers are private to each thread, while shared memory is accessible by all threads within a block. 119 | 120 | An important observation is that local memory is significantly slower than register memory, often by hundreds of times. Variables that are too large to be stored in registers are instead placed in DRAM, which is where local memory resides. This means that accessing local memory is much slower compared to accessing registers. Therefore, it's crucial to manage variable sizes and register usage efficiently to minimize reliance on local memory and maintain optimal performance in CUDA applications. 121 | 122 | ![CUDA Memory Model](./cheatsheet/cuda_momory_model.jpg) 123 | 124 | ##### Registers 125 | 126 | Fastest memory, used for storing frequently accessed variables by individual threads. 
127 | 128 | - **Speed**: Fastest 129 | - **Bandwidth**: ~8TB/s 130 | - **Latency**: ~1 clock 131 | - **Scope**: Accessible by individual threads 132 | - **Size**: Limited size per thread 133 | - **Usage**: Storing frequently accessed variables 134 | - **Persistence**: Data persists only during thread execution 135 | - **Access Pattern**: No conflicts, direct access 136 | - **Caching**: Not cached, directly accessed by the thread 137 | - **Location**: Located on-chip, within each streaming multiprocessor (SM) 138 | 139 | ##### Shared Memory 140 | 141 | Faster memory shared among threads within the same block. 142 | 143 | - **Speed**: Faster than global memory 144 | - **Bandwidth**: ~1.5TB/s 145 | - **Latency**: ~32 clocks 146 | - **Scope**: Accessible by threads within a block 147 | - **Size**: Limited size per block 148 | - **Usage**: Storing data shared within a block 149 | - **Persistence**: Data persists only during block execution 150 | - **Access Pattern**: Bank conflicts can affect performance 151 | - **Caching**: Not cached, but acts as a user-managed cache 152 | - **Location**: Located on-chip, within each streaming multiprocessor (SM) 153 | 154 | ##### Constant Memory 155 | 156 | Special region of device memory used for data with unchanging contents throughout kernel execution. 157 | 158 | - **Speed**: Faster than global memory due to caching 159 | - **Bandwidth**: Limited by cache size and access patterns 160 | - **Latency**: Low latency when cached 161 | - **Scope**: Read-only from the kernel, writable by the host 162 | - **Size**: Limited to 64KB 163 | - **Usage**: Storing read-only data that remains constant during kernel execution 164 | - **Persistence**: Data persists across kernel launches 165 | - **Access Pattern**: Optimized for broadcast to all threads 166 | - **Caching**: Aggressively cached into on-chip memory 167 | - **Location**: Located off-chip, in the GPU's DRAM, but cached on-chip 168 | 169 | ##### Local Memory 170 | 171 | Used for storing data private to each thread, typically when registers are insufficient. 172 | 173 | - **Speed**: Slower than registers, similar to global memory 174 | - **Bandwidth**: ~200GB/s 175 | - **Latency**: ~800 clocks 176 | - **Scope**: Private to each thread 177 | - **Size**: Limited by the available global memory 178 | - **Usage**: Storing temporary variables and data specific to a thread's execution 179 | - **Persistence**: Data persists only during thread execution 180 | - **Access Pattern**: Accessed by the thread that owns it 181 | - **Caching**: Cached in L1 cache on some architectures 182 | - **Location**: Located off-chip, in the GPU's DRAM, but managed as part of the thread's context 183 | 184 | ##### Global Memory 185 | 186 | Accessible by all threads, used for storing data shared across blocks. 187 | 188 | - **Speed**: Slowest among GPU memories 189 | - **Bandwidth**: ~200GB/s 190 | - **Latency**: ~800 clocks 191 | - **Scope**: Accessible by all threads 192 | - **Size**: Largest 193 | - **Usage**: Storing data shared across blocks 194 | - **Persistence**: Data persists across kernel launches 195 | - **Access Pattern**: Coalesced access improves performance 196 | - **Caching**: Cached in L2 cache, with some architectures also supporting L1 caching 197 | - **Location**: Located off-chip, in the GPU's DRAM 198 | 199 | Global memory is managed using functions such as `cudaMalloc()` for allocation, `cudaMemset()` for initialization, `cudaMemcpy()` for data transfer, and `cudaFree()` for deallocation. 
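As a rough end-to-end illustration of this lifecycle, the sketch below allocates global memory, initializes and fills it, launches a kernel, copies the result back, and frees the memory. The `addOne` kernel and the buffer size are assumptions made for the example, not part of the CUDA API.

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Example kernel (assumed): adds 1.0f to every element of the array.
__global__ void addOne(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] += 1.0f;
}

int main()
{
    const int n = 1 << 20;                       // assumed problem size: ~1M floats
    const size_t bytes = n * sizeof(float);

    float* h_data = (float*)malloc(bytes);       // host buffer
    for (int i = 0; i < n; ++i) h_data[i] = 0.0f;

    float* d_data = nullptr;
    cudaMalloc((void**)&d_data, bytes);          // allocate global memory on the device
    cudaMemset(d_data, 0, bytes);                // initialize device memory (immediately overwritten below; shown for completeness)
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);   // host -> device transfer

    addOne<<<(n + 255) / 256, 256>>>(d_data, n); // launch kernel over the whole buffer

    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);   // device -> host transfer
    cudaFree(d_data);                            // release global memory

    printf("h_data[0] = %f\n", h_data[0]);       // expected: 1.000000
    free(h_data);
    return 0;
}
```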
200 | 201 | ##### Host Memory (PCIe) 202 | 203 | Memory located on the host system, accessed via the PCIe bus. 204 | 205 | - **Speed**: Slowest overall 206 | - **Bandwidth**: ~5GB/s 207 | - **Latency**: Higher than GPU memory types, as it involves data transfer over the PCIe bus 208 | - **Scope**: Accessible by the host and can be accessed by the GPU through data transfer 209 | - **Size**: Limited by the host system's memory capacity 210 | - **Usage**: Storing data that needs to be transferred to and from the GPU 211 | - **Persistence**: Data persists across program execution 212 | - **Access Pattern**: Accessed via data transfer operations 213 | - **Caching**: Managed by the host system 214 | - **Location**: Located on the host system 215 | 216 | ##### Indices in a CUDA Program 217 | 218 | In a CUDA program, indices are used to uniquely identify threads and blocks within the grid. These indices are crucial for determining which portion of the data each thread will process. The main indices in a CUDA program are: 219 | 220 | - **Thread Indices**: `threadIdx.x`, `threadIdx.y`, `threadIdx.z` 221 | - **Block Indices**: `blockIdx.x`, `blockIdx.y`, `blockIdx.z` 222 | - **Dimension Sizes**: `blockDim.x`, `blockDim.y`, `blockDim.z`, `gridDim.x`, `gridDim.y`, `gridDim.z` 223 | - **Global Indices**: Calculated using block and thread indices 224 | 225 | ##### Thread Indices 226 | 227 | - **`threadIdx.x`**: The x-coordinate of a thread within its block. It ranges from 0 to `blockDim.x - 1`. 228 | - **`threadIdx.y`**: The y-coordinate of a thread within its block. It ranges from 0 to `blockDim.y - 1`. 229 | - **`threadIdx.z`**: The z-coordinate of a thread within its block. It ranges from 0 to `blockDim.z - 1`. 230 | 231 | These indices allow you to access the specific thread within a block, which is useful for operations that require knowledge of the thread's position within its block. 232 | 233 | ##### Block Indices 234 | 235 | - **`blockIdx.x`**: The x-coordinate of a block within the grid. It ranges from 0 to `gridDim.x - 1`. 236 | - **`blockIdx.y`**: The y-coordinate of a block within the grid. It ranges from 0 to `gridDim.y - 1`. 237 | - **`blockIdx.z`**: The z-coordinate of a block within the grid. It ranges from 0 to `gridDim.z - 1`. 238 | 239 | These indices are used to identify the specific block within the grid, which is important for determining the portion of the data that the block is responsible for processing. 240 | 241 | ##### Dimension Sizes 242 | 243 | - **`blockDim.x`**: The number of threads in the x-dimension of a block. 244 | - **`blockDim.y`**: The number of threads in the y-dimension of a block. 245 | - **`blockDim.z`**: The number of threads in the z-dimension of a block. 246 | 247 | - **`gridDim.x`**: The number of blocks in the x-dimension of the grid. 248 | - **`gridDim.y`**: The number of blocks in the y-dimension of the grid. 249 | - **`gridDim.z`**: The number of blocks in the z-dimension of the grid. 250 | 251 | These dimension sizes are used to configure the execution of the kernel and to calculate the global indices of threads. 
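As an illustration of where these values come from, a minimal host-side launch configuration for a 2D problem might look like the sketch below; the image size, the `processImage` kernel, and its per-pixel work are assumptions made for the example.

```cpp
#include <cuda_runtime.h>

// Assumed example kernel: one thread per pixel of a width x height image.
__global__ void processImage(float* image, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        image[y * width + x] = 1.0f;               // placeholder per-pixel work
}

int main()
{
    const int width  = 1920;                       // assumed image size
    const int height = 1080;

    float* d_image = nullptr;
    cudaMalloc((void**)&d_image, width * height * sizeof(float));

    dim3 block(16, 16);                            // inside the kernel: blockDim.x == 16, blockDim.y == 16
    dim3 grid((width  + block.x - 1) / block.x,    // gridDim.x: enough blocks to cover the width
              (height + block.y - 1) / block.y);   // gridDim.y: enough blocks to cover the height

    processImage<<<grid, block>>>(d_image, width, height);
    cudaDeviceSynchronize();                       // wait for the kernel to finish

    cudaFree(d_image);
    return 0;
}
```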
252 | 253 | ##### Global Indices 254 | 255 | To calculate the global index of a thread, which is its unique position across the entire grid, you can use the following formulas: 256 | 257 | - **1D Global Index**: 258 | ```cpp 259 | int globalIndex = blockIdx.x * blockDim.x + threadIdx.x; 260 | ``` 261 | 262 | - **2D Global Index**: 263 | ```cpp 264 | int globalIndexX = blockIdx.x * blockDim.x + threadIdx.x; 265 | int globalIndexY = blockIdx.y * blockDim.y + threadIdx.y; 266 | ``` 267 | 268 | - **3D Global Index**: 269 | ```cpp 270 | int globalIndexX = blockIdx.x * blockDim.x + threadIdx.x; 271 | int globalIndexY = blockIdx.y * blockDim.y + threadIdx.y; 272 | int globalIndexZ = blockIdx.z * blockDim.z + threadIdx.z; 273 | ``` 274 | 275 | These global indices are used to map each thread to a specific element in the data array, allowing for parallel processing of data. 276 | -------------------------------------------------------------------------------- /hpc/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn HPC 2 | 3 | Awesome collection of learning materials to master modern HPC 4 | 5 | Here you can find: 6 | 7 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master HPC. 8 | - __:old_key:__ [Comprehend](./concepts.md) : key concepts and dichotomies of HPC. 9 | - __:old_key:__ [Comprehend CUDA](./concepts_cuda.md) : key concepts and dichotomies of CUDA. 10 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheets.md) : cheatsheets on HPC. 11 | -------------------------------------------------------------------------------- /iot/cheatsheet/automation_pyramid.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/iot/cheatsheet/automation_pyramid.png -------------------------------------------------------------------------------- /iot/cheatsheet/automation_stack.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/iot/cheatsheet/automation_stack.png -------------------------------------------------------------------------------- /iot/concept.md: -------------------------------------------------------------------------------- 1 | # Key concepts of Internet of Things 2 | 3 | Key concepts of Internet of Things. 4 | 5 | 6 | 7 | 8 | 9 | ## PLC ~ Programmable Logic Controller 10 | 11 | A **Programmable Logic Controller (PLC)** is a digital computer used for automation of industrial processes, such as control of machinery on factory assembly lines. 12 | 13 | PLCs are designed for multiple input and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. They are programmed using a specialized computer language, often ladder logic, and are used in various industries to automate tasks that require high reliability and ease of programming and process fault diagnosis. 14 | 15 | ## PLC / PAC 16 | 17 | While both **Programmable Logic Controllers (PLC)** and **Programmable Automation Controllers (PAC)** are used for industrial automation, they differ in terms of capabilities and applications. 18 | 19 | **PLC** is a digital computer used primarily for automating control processes in industrial environments. 
It is designed for high reliability, ease of programming, and process fault diagnosis, typically using ladder logic. PLCs are well-suited for discrete control applications where tasks are repetitive and require precise timing. 20 | 21 | **PAC**, on the other hand, is a more advanced controller that combines the features of a PLC with those of a PC-based control system. PACs offer greater flexibility, scalability, and processing power, making them suitable for complex control applications that require data handling, network connectivity, and integration with other systems. They support multiple programming languages and can handle both discrete and continuous processes. 22 | 23 | ## Ladder Logic 24 | 25 | A graphical programming language used to develop software for programmable logic controllers (PLCs). 26 | 27 | Ladder logic is designed to resemble electrical relay logic diagrams, using symbols to represent control logic elements like switches, relays, and timers. It is widely used in industrial control applications due to its intuitive visual representation, which makes it easier for engineers and technicians to design, troubleshoot, and maintain control systems. 28 | 29 | ## SCADA ~ Supervisory Control and Data Acquisition 30 | 31 | **Supervisory Control and Data Acquisition (SCADA)** is a system used for monitoring and controlling industrial processes and infrastructure. 32 | 33 | SCADA systems gather real-time data from remote locations to control equipment and conditions. They are widely used in industries such as energy, water, and manufacturing to ensure efficient and safe operations. SCADA systems consist of hardware and software components, including sensors, controllers, and user interfaces, allowing operators to monitor system performance, analyze data, and make informed decisions to optimize processes. 34 | 35 | Both **HMI** and **SCADA** coexist because SCADA systems require HMIs to provide operators with the necessary interface to interact with the data and control systems effectively. HMIs serve as the front-end component of SCADA systems, enabling users to visualize and manage the data collected and processed by SCADA, thus ensuring efficient and informed decision-making in industrial operations. 36 | 37 | ## HMI ~ Human-Machine Interface 38 | 39 | A **Human-Machine Interface (HMI)** is a user interface or dashboard that connects a person to a machine, system, or device. 40 | 41 | HMIs are used in various industries to allow operators to interact with and control machinery and processes. They provide visual representations of data, enable monitoring of system performance, and allow for manual input to control operations. HMIs can range from simple screens displaying data to complex, interactive touchscreens that provide detailed insights and control over industrial processes. 42 | 43 | Both **HMI** and **SCADA** coexist because SCADA systems require HMIs to provide operators with the necessary interface to interact with the data and control systems effectively. HMIs serve as the front-end component of SCADA systems, enabling users to visualize and manage the data collected and processed by SCADA, thus ensuring efficient and informed decision-making in industrial operations. 44 | 45 | ## MES ~ Manufacturing Execution System 46 | 47 | A **Manufacturing Execution System (MES)** is a software system that monitors, tracks, documents, and controls the process of manufacturing goods from raw materials to finished products. 
48 | 49 | MES provides real-time data and insights into production processes, helping manufacturers optimize operations, improve productivity, and ensure quality control. It acts as a bridge between enterprise resource planning (ERP) systems and the shop floor, facilitating communication and coordination across different levels of production. MES can include functionalities such as scheduling, resource management, production tracking, and performance analysis, enabling manufacturers to respond quickly to changing conditions and demands. 50 | 51 | While both **ERP** and **MES** systems are used to enhance business operations, they focus on different aspects of an organization. ERP systems provide a broad overview of business processes, integrating various departments to improve overall efficiency and decision-making. In contrast, MES systems are specifically designed to manage and optimize manufacturing operations on the shop floor, focusing on real-time production data and process control. 52 | 53 | ## ERP ~ Enterprise Resource Planning 54 | 55 | **Enterprise Resource Planning (ERP)** is a type of software used by organizations to manage and integrate the essential parts of their businesses. 56 | 57 | ERP systems centralize data and processes across various departments, such as finance, human resources, supply chain, and manufacturing, into a unified system. This integration helps improve efficiency, accuracy, and decision-making by providing a comprehensive view of business operations. ERP systems often include modules for accounting, inventory management, order processing, and customer relationship management, among others, allowing organizations to streamline workflows and enhance collaboration across departments. 58 | 59 | While both **ERP** and **MES** systems are used to enhance business operations, they focus on different aspects of an organization. ERP systems provide a broad overview of business processes, integrating various departments to improve overall efficiency and decision-making. In contrast, MES systems are specifically designed to manage and optimize manufacturing operations on the shop floor, focusing on real-time production data and process control. 60 | 61 | ## Automation Pyramid 62 | 63 | The **Automation Pyramid** is a conceptual model that illustrates the hierarchy of systems and technologies used in industrial automation, from the physical devices at the base to the enterprise-level systems at the top. 64 | 65 | **Enterprise Level** involves systems like **ERP** (Enterprise Resource Planning) that manage and integrate business processes across an organization. 66 | 67 | **Management Level** includes **MES** (Manufacturing Execution Systems) that monitor and control manufacturing operations on the shop floor. 68 | 69 | **Supervision Level** is represented by **SCADA** (Supervisory Control and Data Acquisition) systems, which provide real-time monitoring and control of industrial processes. 70 | 71 | **Control Level** consists of **PLC** (Programmable Logic Controllers) and **PAC** (Programmable Automation Controllers), which execute control tasks and manage the operation of machinery and processes. 72 | 73 | **Field Level** encompasses **sensors** and **actuators**, which are the physical devices that interact directly with the manufacturing environment, collecting data and executing control commands. 
74 | 75 | ![](cheatsheet/automation_pyramid.png) 76 | ![](cheatsheet/automation_stack.png) 77 | -------------------------------------------------------------------------------- /iot/learn.md: -------------------------------------------------------------------------------- 1 | # :mortar_board: Learn Internet of Things 2 | 3 | Curated collection of lists of useful resources to master Internet of Things together. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page. 6 | 7 | 8 | 9 | 10 | ## Tags legend 11 | 12 | ##### Kind of resource 13 | 14 | - :movie_camera: - video material to watch 15 | - :page_facing_up: - reading 16 | - :book: - a book 17 | - :mortar_board: - online course with or without feedback 18 | - :chart_with_upwards_trend: - cheat sheets 19 | - :card_file_box: - reference or manual or a standard 20 | - :open_file_folder: - collections of collections 21 | - :pirate_flag: - non-english 22 | - :page_facing_up: - either single article or single video-tutorial 23 | - :building_construction: - ideas for inspiration of mini-projects to add to your portfolio 24 | - :moneybag: - paid 25 | - 🔽 - download 26 | - ⚡ - practice, it is possible to interact and get feedback from the system 27 | - ( _official_ ) - official material 28 | - ( _blog_ ) - blogs 29 | - ( _example_ ) - code sample that can be executed 30 | 31 | ##### Specific Domain 32 | 33 | - ( _introductory_ ) - introductory learning material 34 | 35 | ##### Specific Technology 36 | 37 | - ( _unity_ ) - related to Unity 38 | -------------------------------------------------------------------------------- /iot/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Internet of Things Together 2 | 3 | Awesome collection of learning materials to master modern Internet of Things. 4 | 5 | ## What is this about? 6 | 7 | This repostory contains nearly dozen curated collections: learning materials, toolboxes, newspapers, working groups, collection of other collections. Everything you will find useful if you are interested in Internet of Things. 8 | 9 | Here you can find: 10 | 11 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master Internet of Things. 12 | - __:old_key:__ [Comprehend](./concept.md) : key concepts and dichotomies of Internet of Things. 13 | -------------------------------------------------------------------------------- /license: -------------------------------------------------------------------------------- 1 | Copyright Learn Together (c) 2013-2023 2 | 3 | Permission is hereby granted, free of charge, to any person 4 | obtaining a copy of this software and associated documentation 5 | files (the "Software"), to deal in the Software without 6 | restriction, including without limitation the rights to use, 7 | copy, modify, merge, publish, distribute, sublicense, and/or sell 8 | copies of the Software, and to permit persons to whom the 9 | Software is furnished to do so, subject to the following 10 | conditions: 11 | 12 | The above copyright notice and this permission notice shall be 13 | included in all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 16 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 17 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 18 | NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 19 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 20 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 21 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 22 | OTHER DEALINGS IN THE SOFTWARE. 23 | -------------------------------------------------------------------------------- /research/blank: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/research/blank -------------------------------------------------------------------------------- /skill/cloud/aws.md: -------------------------------------------------------------------------------- 1 | # Cloud AWS VPC 2 | 3 | ### Starting 4 | 5 | - [Create free account](https://aws.amazon.com/free/) 6 | - [Detailed tutorial on AWS VPC](https://youtu.be/g2JOHLHh4rI) 7 | - [Short tutorial on AWS VPC](https://youtu.be/2doSoMN2xvI) 8 | - [Short tutorial on AWS VPC + terraform](https://youtu.be/nvNqfgojocs) 9 | -------------------------------------------------------------------------------- /skill/org_second_brain/notion_second_brain.md: -------------------------------------------------------------------------------- 1 | # Notion Second Brain 2 | 3 | - [Notion Task Manager](https://www.youtube.com/watch?v=32dLXdB4ozs) 4 | - [Notion Second Brain](https://www.youtube.com/watch?v=vs8WQh2k-Ow) 5 | - [Notion Second Brain](https://www.youtube.com/watch?v=PaIQxvr99cM) 6 | - [Course on Notion](https://www.youtube.com/watch?v=PaIQxvr99cM) 7 | -------------------------------------------------------------------------------- /statistics/cheatsheet/p_value.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/statistics/cheatsheet/p_value.png -------------------------------------------------------------------------------- /statistics/cheatsheets.md: -------------------------------------------------------------------------------- 1 | # Key concepts of Probability Theory and Satistics 2 | 3 | Cheatsheets for Probability Theory and Satistics. 4 | 5 | 6 | -------------------------------------------------------------------------------- /statistics/concepts.md: -------------------------------------------------------------------------------- 1 | # Key concepts of Probability Theory and Satistics 2 | 3 | Key concepts of Probability Theory and Satistics. 4 | 5 | ## Sample 6 | 7 | A sample is a subset of individuals, items, or observations selected from a larger population for the purpose of statistical analysis. It is used to make inferences about the population without having to study the entire group. 8 | 9 | ## Null Hypothesis 10 | 11 | Statement that assumes there is no effect or no difference in a particular situation or experiment. The null hypothesis serves as a starting point for statistical testing. 12 | 13 | ## Alternative Hypothesis 14 | 15 | The alternative hypothesis is a statement in statistics that suggests there is an effect or a difference in a particular situation or experiment. It is the opposite of the null hypothesis and is what researchers aim to support through their analysis. 16 | 17 | ## P-Value ~ Rareness 18 | 19 | The p-value is a statistical measure that helps determine the significance of the results obtained in a hypothesis test. 
20 | 21 | It quantifies the probability of observing the data, or something more extreme, assuming that the null hypothesis is true. 22 | 23 | - **Purpose**: To assess the strength of the evidence against the null hypothesis. 24 | - **Interpretation**: 25 | - A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, leading to its rejection. 26 | - A large p-value (> 0.05) suggests weak evidence against the null hypothesis, so it is not rejected. 27 | - **Significance Level**: The threshold for determining significance is usually set at 0.05, but it can vary depending on the context. 28 | 29 | A p-value is composed of three parts: 30 | 31 | 1. The probability that random chance would result in the observation. 32 | 2. The probability of observing something else that is equally rare. 33 | 3. The probability of observing something rarer or more extreme. 34 | 35 | ![Calculation of P-Value](cheatsheet/p_value.png) 36 | 37 | ## Two-sided / One-sided test 38 | 39 | One-sided tests look for effects in one direction, while two-sided tests consider both directions. In most cases the two-sided p-value is the one you actually need. 40 | 41 | ## Power in Statistics 42 | 43 | Statistical power is the probability that a test will correctly reject a false null hypothesis. 44 | 45 | It measures a test's ability to detect an effect when there is one. Power is the likelihood of avoiding a Type II error (failing to reject a false null hypothesis). 46 | 47 | **Influencing Factors**: 48 | 49 | - **Sample Size**: Larger sample sizes generally increase power. 50 | - **Effect Size**: Larger effect sizes make it easier to detect differences, increasing power. 51 | - **Significance Level (α)**: Higher significance levels can increase power but also increase the risk of Type I errors. 52 | - **Variability**: Lower variability in data increases power. 53 | 54 | ## Effect Size 55 | 56 | Effect size is a quantitative measure of the magnitude of a phenomenon, combining the mean difference and the standard deviation into a single metric. 57 | 58 | It provides an indication of the strength or importance of a relationship or difference observed in a study, independent of sample size. 59 | 60 | d = (X̄₁ - X̄₂) / sₚ 61 | 62 | Where: 63 | - X̄₁ and X̄₂ are the means of the two groups. 64 | - sₚ is the pooled standard deviation, calculated as sₚ = √( ( (n₁ - 1)s₁² + (n₂ - 1)s₂² ) / (n₁ + n₂ - 2) ), where s₁ and s₂ are the group standard deviations and n₁ and n₂ are the group sizes. 65 | 66 | ## Sample Size for Desired Power 67 | 68 | To calculate the sample size needed to achieve a desired statistical power, you can use the following formula, which is often used for comparing two means: 69 | 70 | n = [(Z₁₋α/₂ + Z₁₋β)² * (σ₁² + σ₂²)] / Δ² 71 | 72 | Where: 73 | - n is the sample size per group. 74 | - Z₁₋α/₂ is the Z-score corresponding to the desired significance level (α), typically 0.05 for a two-tailed test. 75 | - Z₁₋β is the Z-score corresponding to the desired power (1 - β), typically 0.80 or 0.90. 76 | - σ₁² and σ₂² are the variances of the two groups. 77 | - Δ is the minimum detectable effect size (difference in means). 78 | 79 | > Steps to Calculate Sample Size 80 | 81 | 1. **Determine the Desired Power**: Commonly set at 0.80 or 0.90. 82 | 2. **Set the Significance Level (α)**: Typically 0.05 for a two-tailed test. 83 | 3. **Estimate the Effect Size (Δ)**: The smallest difference you want to detect. 84 | 4. **Estimate the Variance (σ²)**: Based on previous studies or pilot data. 85 | 5. **Use the Formula**: Plug the values into the formula to calculate the required sample size, as in the worked example below.
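As a quick illustration with assumed values (α = 0.05 two-tailed, so Z₁₋α/₂ ≈ 1.96; desired power 0.80, so Z₁₋β ≈ 0.84; equal variances σ₁² = σ₂² = 1; minimum detectable difference Δ = 0.5):

n = [(1.96 + 0.84)² * (1 + 1)] / 0.5² = (7.84 * 2) / 0.25 ≈ 62.7

Rounding up, roughly 63 participants per group would be needed.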
86 | 87 | Statistical software or online calculators can also be used to simplify this process and provide more accurate results. 88 | 89 | ## Covariance 90 | 91 | Covariance is a statistical measure that indicates the extent to which two variables change together. It helps to determine whether an increase in one variable corresponds to an increase or decrease in another variable. 92 | 93 | ##### Indication 94 | 95 | - **Positive Covariance**: Indicates that the two variables tend to increase or decrease together. 96 | - **Negative Covariance**: Indicates that as one variable increases, the other tends to decrease. 97 | - **Zero Covariance**: Suggests no linear relationship between the variables. 98 | 99 | ##### Formula for Covariance 100 | 101 | The covariance between two variables X and Y is calculated using the following formula: 102 | 103 | Cov(X, Y) = Σ((Xᵢ - X̄)(Yᵢ - Ȳ)) / (n - 1) 104 | 105 | Where: 106 | - Xᵢ and Yᵢ are the individual data points for variables X and Y. 107 | - X̄ and Ȳ are the means of X and Y, respectively. 108 | - n is the number of data points. 109 | 110 | ## Expected Value 111 | 112 | The expected value is a fundamental concept in probability and statistics that represents the average or mean value of a random variable over a large number of trials. 113 | 114 | It provides a measure of the central tendency of the probability distribution of the random variable. 115 | 116 | For a discrete random variable X with possible values x₁, x₂, ..., xₙ and corresponding probabilities P(x₁), P(x₂), ..., P(xₙ), the expected value E(X) is calculated as: 117 | 118 | E(X) = Σ [xᵢ * P(xᵢ)] 119 | 120 | For a continuous random variable with probability density function f(x), the expected value is calculated as: 121 | 122 | E(X) = ∫ x * f(x) dx 123 | 124 | ## Binomial Distribution 125 | 126 | The binomial distribution is a discrete probability distribution that models the number of successes in a fixed number of independent and identically distributed Bernoulli trials. 127 | 128 | Each trial has two possible outcomes: success or failure. 129 | 130 | - **Number of Trials (n)**: The fixed number of independent trials. 131 | - **Probability of Success (p)**: The probability of success on each trial. 132 | - **Probability of Failure (q)**: The probability of failure on each trial, where q = 1 - p. 133 | - **Mean (Expected Value)**: E(X) = n * p 134 | - **Variance**: Var(X) = n * p * q 135 | 136 | The probability of observing exactly k successes in n trials is given by the binomial probability mass function: 137 | 138 | P(X = k) = C(n, k) * p^k * q^(n-k) 139 | 140 | Where: 141 | - P(X = k) is the probability of k successes. 142 | - C(n, k) = n! / (k! * (n-k)!) is the binomial coefficient, representing the number of ways to choose k successes from n trials. 143 | - p is the probability of success on each trial. 144 | - q is the probability of failure on each trial. 145 | 146 | ## Central Limit Theorem 147 | 148 | The Central Limit Theorem (CLT) is a fundamental principle in statistics that describes the behavior of the sampling distribution of the sample mean. 149 | 150 | It states that, given a sufficiently large sample size, the distribution of the sample mean will approximate a normal distribution, regardless of the shape of the population distribution. 151 | 152 | - **Sample Size**: The theorem holds when the sample size is large enough, typically n ≥ 30 is considered sufficient. 153 | - **Independence**: The samples must be independent of each other. 
154 | - **Population Distribution**: The original population distribution can be of any shape (e.g., skewed, uniform, etc.). 155 | - **Normal Approximation**: As the sample size increases, the sampling distribution of the sample mean becomes approximately normal. 156 | - **Mean of Sampling Distribution**: The mean of the sampling distribution of the sample mean is equal to the mean of the population (μ). 157 | - **Standard Deviation of Sampling Distribution**: The standard deviation of the sampling distribution (standard error) is equal to the population standard deviation (σ) divided by the square root of the sample size (n), i.e., σ/√n. 158 | 159 | ## Difference Between Technical and Biological Replicates 160 | 161 | In scientific experiments, replicates are used to ensure the reliability and accuracy of results. There are two main types of replicates: technical replicates and biological replicates. Understanding the difference between them is crucial for experimental design and data interpretation. 162 | 163 | ##### Technical Replicates 164 | 165 | - **Definition**: Technical replicates are repeated measurements of the same sample or experimental condition. They are used to assess the precision and consistency of the measurement technique or instrument. 166 | - **Purpose**: To account for variability in the measurement process and ensure that the results are reproducible. 167 | - **Example**: Running the same sample multiple times in a PCR machine to check for consistency in the results. 168 | 169 | ##### Biological Replicates 170 | 171 | - **Definition**: Biological replicates are independent samples that are biologically distinct but treated under the same experimental conditions. They are used to capture the natural biological variability. 172 | - **Purpose**: To ensure that the results are generalizable and not specific to a single biological sample. 173 | - **Example**: Using different animals or cell cultures from the same species to test the effect of a drug. 174 | 175 | ## Effective Sample Size 176 | 177 | The effective sample size is a concept used in statistics to account for the amount of information or precision that a sample provides, especially when the data are not independent or identically distributed. 178 | 179 | It is particularly relevant in complex survey designs, time series data, or clustered data where observations may be correlated. 180 | 181 | - **Purpose**: To adjust the nominal sample size to reflect the true amount of information available, considering any dependencies or correlations within the data. 182 | - **Importance**: Provides a more accurate estimate of the sample's ability to represent the population, which is crucial for statistical inference and hypothesis testing. 183 | 184 | The effective sample size can be calculated using different methods depending on the context. Here are a few examples: 185 | 186 | - Effective Sample Size (n_eff) = n / (1 + (n - 1) * ρ) 187 | - Where n is the nominal sample size, and ρ (rho) is the intra-cluster correlation coefficient. 188 | 189 | ## Standard Error 190 | 191 | Standard error (SE) just a standard deviation of multiple means taken on the same population. 192 | 193 | The standard error (SE) is a statistical measure that quantifies the amount of variability or dispersion of a sample statistic, such as the sample mean, from the true population parameter. It provides an estimate of the precision of the sample statistic as an estimate of the population parameter. 
194 | 195 | - **Purpose**: To measure the accuracy with which a sample statistic represents the population parameter. 196 | - **Relation to Sample Size**: The standard error decreases as the sample size increases, indicating more precise estimates with larger samples. 197 | 198 | For the sample mean, the standard error is calculated as: 199 | 200 | SE = σ / √n 201 | 202 | Where: 203 | - σ is the standard deviation of the population. 204 | - n is the sample size. 205 | 206 | If the population standard deviation is unknown, the sample standard deviation (s) is used: 207 | 208 | SE = s / √n 209 | 210 | ## Confidence Interval 211 | 212 | A confidence interval (CI) is a range of values, derived from sample data, that is likely to contain the true population parameter (such as the mean or proportion) with a specified level of confidence. 213 | 214 | It provides an estimate of the uncertainty associated with a sample statistic. If two confidence intervals don't overlap, the difference is statistically significant; if they do overlap, a t-test is needed to decide. 215 | 216 | - **Purpose**: To give an estimated range of values which is likely to include the population parameter. 217 | - **Confidence Level**: The probability that the confidence interval contains the true parameter. Common confidence levels are 90%, 95%, and 99%. 218 | - A 95% confidence interval means that if you were to take 100 different samples and compute a confidence interval for each sample, approximately 95 of the intervals will contain the true population mean. 219 | - The width of the confidence interval indicates the precision of the estimate; narrower intervals suggest more precise estimates. 220 | 221 | For a sample mean, the confidence interval is calculated as: 222 | 223 | CI = X̄ ± Z * (SE) 224 | 225 | Where: 226 | - X̄ is the sample mean. 227 | - Z is the Z-score corresponding to the desired confidence level (e.g., 1.96 for 95% confidence). 228 | - SE is the standard error of the sample mean. 229 | 230 | ## Correlation ~ Pearson's R 231 | 232 | Correlation is a statistical measure that describes the strength and direction of a linear relationship between two variables. 233 | 234 | It quantifies how changes in one variable are associated with changes in another. 235 | 236 | - **Range**: Correlation coefficients range from -1 to 1. 237 | - **+1**: Perfect positive correlation, meaning as one variable increases, the other also increases proportionally. 238 | - **-1**: Perfect negative correlation, meaning as one variable increases, the other decreases proportionally. 239 | - **0**: No linear correlation, indicating no linear relationship between the variables. 240 | 241 | The most common measure of correlation is the Pearson correlation coefficient, which is calculated as: 242 | 243 | r = Cov(X, Y) / √(Var(X) * Var(Y)) 244 | 245 | Where: 246 | - Cov(X, Y) is the covariance between X and Y. 247 | - Var(X) is the variance of X. 248 | - Var(Y) is the variance of Y. 249 | 250 | ## R² (R-Squared) 251 | 252 | R², or R-squared, is a statistical measure that represents the proportion of the variance for a dependent variable that's explained by an independent variable or variables in a regression model. 253 | 254 | It provides an indication of how well the independent variables explain the variability of the dependent variable. 255 | 256 | - **Range**: R² values range from 0 to 1. 257 | - **0**: Indicates that the independent variables do not explain any of the variability of the dependent variable.
258 | - **1**: Indicates that the independent variables explain all the variability of the dependent variable. 259 | 260 | - **Interpretation**: 261 | - A higher R² value indicates a better fit of the model to the data. 262 | - R² is often expressed as a percentage. For example, an R² of 0.75 means that 75% of the variance in the dependent variable is predictable from the independent variables. 263 | 264 | R² = ( SS_mean - SS_res ) / SS_mean 265 | 266 | Where: 267 | - SS_res is the sum of squares of residuals (the squared differences between observed and predicted values). 268 | - SS_mean is the sum of squares around the mean (the squared differences between observed values and the mean of the dependent variable). 269 | - SS_mean is also referred to as the total sum of squares, SS_tot. 270 | 271 | ## Linear Regression 272 | 273 | Linear regression is a statistical method used to model the relationship between a dependent variable and one or more independent variables by fitting a linear equation to observed data. 274 | 275 | It is one of the most commonly used techniques for predictive modeling and data analysis. 276 | 277 | - **Simple Linear Regression**: Involves one independent variable and one dependent variable. The relationship is modeled with a straight line (y = mx + b), where: 278 | - y is the dependent variable. 279 | - x is the independent variable. 280 | - m is the slope of the line. 281 | - b is the y-intercept. 282 | 283 | - **Multiple Linear Regression**: Involves two or more independent variables. The relationship is modeled with a linear equation (y = b₀ + b₁x₁ + b₂x₂ + ... + bₙxₙ), where: 284 | - y is the dependent variable. 285 | - x₁, x₂, ..., xₙ are the independent variables. 286 | - b₀ is the y-intercept. 287 | - b₁, b₂, ..., bₙ are the coefficients representing the change in y for a one-unit change in each x. 288 | 289 | The key assumptions of linear regression are: 290 | 291 | - **Linearity**: The relationship between the dependent and independent variables is linear. 292 | - **Independence**: Observations are independent of each other. 293 | - **Homoscedasticity**: Constant variance of errors. 294 | - **Normality**: The residuals (differences between observed and predicted values) are normally distributed. 295 | -------------------------------------------------------------------------------- /statistics/learn.md: -------------------------------------------------------------------------------- 1 | # :mortar_board: Learn Probability Theory and Statistics 2 | 3 | Awesome collection of learning materials to master modern Probability Theory and Statistics. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page. 6 | 7 | 8 | 9 | 10 | ## General 11 | 12 | ## Tags legend 13 | 14 | - :movie_camera: - video material 15 | - :page_facing_up: - reading 16 | - :mortar_board: - online course with or without feedback 17 | - :chart_with_upwards_trend: - cheat sheets 18 | - ( _short_ ) - short overview 19 | - ( _systems_design_ ) - systems design 20 | -------------------------------------------------------------------------------- /statistics/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Probability Theory and Statistics Together 2 | 3 | Awesome collection of learning materials to master modern Probability Theory and Statistics. 4 | 5 | Here you can find: 6 | 7 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master Probability Theory and Statistics. 8 | - __:old_key:__ [Comprehend](./concepts.md) : key concepts and dichotomies of Probability Theory and Statistics.
9 | - __:chart_with_upwards_trend:__ [Systemize](./cheatsheets.md) : cheatsheets on Probability Theory and Statistics. 10 | -------------------------------------------------------------------------------- /terraform/cheatsheet/workflow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Learn-Together-Pro/ComputerScience/177a53368e1566dea1c823a4f4869c102ed19b5f/terraform/cheatsheet/workflow.png -------------------------------------------------------------------------------- /terraform/learn.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Terraform Together 2 | 3 | Awesome list of high-quality study materials for learning Terraform. 4 | 5 | [:arrow_down: Tags legend](#tags-legend) at the end of the page. 6 | 7 | 8 | 9 | 10 | ## Terraform Introduction 11 | 12 | ( _Terraform_ ) 13 | 14 | ## Tags legend 15 | 16 | ##### Kind of Resource 17 | 18 | - :movie_camera: - video material to watch 19 | - :page_facing_up: - reading 20 | - :book: - a book 21 | - :mortar_board: - online course with or without feedback 22 | - :chart_with_upwards_trend: - cheat sheets 23 | - :card_file_box: - reference or manual or a standard 24 | - :open_file_folder: - collections of collections 25 | - :pirate_flag: - non-english 26 | - :page_facing_up: - either single article or single video-tutorial 27 | - :building_construction: - ideas for inspiration of mini-projects to add to your portfolio 28 | - :moneybag: - paid 29 | - ( _certificate_ ) - certification 30 | - 🔽 - download 31 | - ⚡ - practice, it is possible to interact and get feedback from the system 32 | - ( _official_ ) - official material 33 | - ( _blog_ ) - blogs 34 | - ( _example_ ) - code sample that can be executed 35 | - ( _short_ ) - short overview 36 | - ( _tool_ ) - a tool 37 | - ( _subjects_ ) - people and organizations 38 | - ( _skill_ ) - not a theory, but practices oriented 39 | 40 | ##### Level of Expertise 41 | 42 | - ( _level_0_ ) - very basic and trivial, every professional should know no matter which specialization 43 | - ( _level_1_ ) - basic but not trivial, every professional should know no matter which specialization 44 | - ( _level_2_ ) - medium proficiency, most professionals should know no matter which specialization 45 | - ( _level_3_ ) - advanced, not recommended unless it fits your specialization 46 | 47 | ##### Other Categories 48 | -------------------------------------------------------------------------------- /terraform/readme.md: -------------------------------------------------------------------------------- 1 | # 🧭 Learn Terraform Together 2 | 3 | Awesome collection of learning materials to master modern Terraform. 4 | 5 | ![Terraform Workflow](./cheatsheet/workflow.png) 6 | 7 | ## What is this about? 8 | 9 | This repository contains a dozen curated collections: learning materials, toolboxes, newspapers, working groups, collections of other collections. Everything you will find useful if you are interested in Terraform. 10 | 11 | Here you can find: 12 | 13 | - __:mortar_board:__ [Learn](./learn.md) : collection of materials to master Terraform. 14 | - __:old_key:__ [Comprehend](./concepts.md) : key concepts and dichotomies of Terraform. 15 | --------------------------------------------------------------------------------