├── .prettierrc
├── bun.lockb
├── notes
│   ├── images
│   │   ├── lego-blocks.png
│   │   ├── big-o-complexity.png
│   │   ├── log-exponential.png
│   │   └── big-o-complexity-chart.png
│   ├── 1-memory.md
│   └── 0-big-o-time-complexity.md
├── questions
│   └── README.md
├── README.md
├── package.json
└── .gitignore

/.prettierrc:
--------------------------------------------------------------------------------
 1 | 
--------------------------------------------------------------------------------
/bun.lockb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moelzanaty3/algorithm-and-data-structure/HEAD/bun.lockb
--------------------------------------------------------------------------------
/notes/images/lego-blocks.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moelzanaty3/algorithm-and-data-structure/HEAD/notes/images/lego-blocks.png
--------------------------------------------------------------------------------
/questions/README.md:
--------------------------------------------------------------------------------
 1 | # Questions
 2 | 
 3 | Here you will find all the coding interview questions that I have solved.
 4 | 
 5 | ## [TBC]
 6 | 
--------------------------------------------------------------------------------
/notes/images/big-o-complexity.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moelzanaty3/algorithm-and-data-structure/HEAD/notes/images/big-o-complexity.png
--------------------------------------------------------------------------------
/notes/images/log-exponential.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moelzanaty3/algorithm-and-data-structure/HEAD/notes/images/log-exponential.png
--------------------------------------------------------------------------------
/notes/images/big-o-complexity-chart.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moelzanaty3/algorithm-and-data-structure/HEAD/notes/images/big-o-complexity-chart.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | # Algorithm & Data Structure with Zanaty
 2 | 
 3 | This repo documents my journey learning algorithms and data structures.
 4 | 
 5 | ## Basics
 6 | 
 7 | - [Big O Time Complexity](notes/0-big-o-time-complexity.md)
 8 | - [Memory](notes/1-memory.md)
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
 1 | {
 2 |   "name": "algorithm-and-data-structure",
 3 |   "version": "1.0.0",
 4 |   "description": "This repo documents my journey learning algorithms and data structures",
 5 |   "main": "index.js",
 6 |   "scripts": {
 7 |     "test": "echo \"Error: no test specified\" && exit 1"
 8 |   },
 9 |   "repository": {
10 |     "type": "git",
11 |     "url": "git+https://github.com/moelzanaty3/algorithm-and-data-structure.git"
12 |   },
13 |   "keywords": [],
14 |   "author":
"", 15 | "license": "ISC", 16 | "bugs": { 17 | "url": "https://github.com/moelzanaty3/algorithm-and-data-structure/issues" 18 | }, 19 | "homepage": "https://github.com/moelzanaty3/algorithm-and-data-structure#readme", 20 | "volta": { 21 | "node": "18.13.0", 22 | "yarn": "4.0.2" 23 | }, 24 | "dependencies": { 25 | "prettier": "^3.1.0" 26 | } 27 | } 28 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Logs 2 | logs 3 | *.log 4 | npm-debug.log* 5 | yarn-debug.log* 6 | yarn-error.log* 7 | lerna-debug.log* 8 | .pnpm-debug.log* 9 | 10 | # Diagnostic reports (https://nodejs.org/api/report.html) 11 | report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json 12 | 13 | # Runtime data 14 | pids 15 | *.pid 16 | *.seed 17 | *.pid.lock 18 | 19 | # Directory for instrumented libs generated by jscoverage/JSCover 20 | lib-cov 21 | 22 | # Coverage directory used by tools like istanbul 23 | coverage 24 | *.lcov 25 | 26 | # nyc test coverage 27 | .nyc_output 28 | 29 | # Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files) 30 | .grunt 31 | 32 | # Bower dependency directory (https://bower.io/) 33 | bower_components 34 | 35 | # node-waf configuration 36 | .lock-wscript 37 | 38 | # Compiled binary addons (https://nodejs.org/api/addons.html) 39 | build/Release 40 | 41 | # Dependency directories 42 | node_modules/ 43 | jspm_packages/ 44 | 45 | # Snowpack dependency directory (https://snowpack.dev/) 46 | web_modules/ 47 | 48 | # TypeScript cache 49 | *.tsbuildinfo 50 | 51 | # Optional npm cache directory 52 | .npm 53 | 54 | # Optional eslint cache 55 | .eslintcache 56 | 57 | # Optional stylelint cache 58 | .stylelintcache 59 | 60 | # Microbundle cache 61 | .rpt2_cache/ 62 | .rts2_cache_cjs/ 63 | .rts2_cache_es/ 64 | .rts2_cache_umd/ 65 | 66 | # Optional REPL history 67 | .node_repl_history 68 | 69 | # Output of 'npm pack' 70 | *.tgz 71 | 72 | # 
Yarn Integrity file
73 | .yarn-integrity
74 | 
75 | # dotenv environment variable files
76 | .env
77 | .env.development.local
78 | .env.test.local
79 | .env.production.local
80 | .env.local
81 | 
82 | # parcel-bundler cache (https://parceljs.org/)
83 | .cache
84 | .parcel-cache
85 | 
86 | # Next.js build output
87 | .next
88 | out
89 | 
90 | # Nuxt.js build / generate output
91 | .nuxt
92 | dist
93 | 
94 | # Gatsby files
95 | .cache/
96 | # Comment in the public line in if your project uses Gatsby and not Next.js
97 | # https://nextjs.org/blog/next-9-1#public-directory-support
98 | # public
99 | 
100 | # vuepress build output
101 | .vuepress/dist
102 | 
103 | # vuepress v2.x temp and cache directory
104 | .temp
105 | .cache
106 | 
107 | # Docusaurus cache and generated files
108 | .docusaurus
109 | 
110 | # Serverless directories
111 | .serverless/
112 | 
113 | # FuseBox cache
114 | .fusebox/
115 | 
116 | # DynamoDB Local files
117 | .dynamodb/
118 | 
119 | # TernJS port file
120 | .tern-port
121 | 
122 | # Stores VSCode versions used for testing VSCode extensions
123 | .vscode-test
124 | 
125 | # yarn v2
126 | .yarn/cache
127 | .yarn/unplugged
128 | .yarn/build-state.yml
129 | .yarn/install-state.gz
130 | .pnp.*
131 | 
132 | # General
133 | .DS_Store
134 | .AppleDouble
135 | .LSOverride
136 | 
137 | # Icon must end with two \r
138 | Icon
139 | 
140 | 
141 | # Thumbnails
142 | ._*
143 | 
144 | # Files that might appear in the root of a volume
145 | .DocumentRevisions-V100
146 | .fseventsd
147 | .Spotlight-V100
148 | .TemporaryItems
149 | .Trashes
150 | .VolumeIcon.icns
151 | .com.apple.timemachine.donotpresent
152 | 
153 | # Directories potentially created on remote AFP share
154 | .AppleDB
155 | .AppleDesktop
156 | Network Trash Folder
157 | Temporary Items
158 | .apdisk
--------------------------------------------------------------------------------
/notes/1-memory.md:
--------------------------------------------------------------------------------
 1 | # Memory
 2 | 
 3 | Hello
there 👋! In this comprehensive guide, we'll delve into the world of computer memory. We'll explore what memory is, why it's crucial to understand, and how we can utilize it effectively.
 4 | 
 5 | ## Understanding Computer Memory
 6 | 
 7 | Imagine computer memory as a series of storage boxes, with each box capable of holding a small, yet significant amount of data. This is where your data structures, such as arrays, lists, and trees, reside when your program executes. Essentially, memory stores information in binary form and is a fundamental component of any computing system, responsible for storing and retrieving data swiftly and efficiently.
 8 | 
 9 | Memory is classified into two main types:
10 | 
11 | 1. **Primary Memory (RAM)**: This volatile memory is your computer's main memory. It temporarily stores data while programs are running but loses this data when the computer is switched off.
12 |    ![Memory DDR4](https://www.computerhope.com/jargon/m/memory-ddr4.png)
13 | 2. **Secondary Memory (Hard Drive/SSD)**: This non-volatile memory is used for long-term data storage. It retains data even when the computer is powered down.
14 |    ![Hard Drive](https://www.computerhope.com/jargon/h/harddriv.jpg)
15 | 
16 | ## Memory Allocation
17 | 
18 | Now that we understand what memory is, let's discuss how it's allocated. Memory allocation is the process of assigning blocks of memory to various parts of a program or process. In the context of data structures, this is crucial for creating instances of structures like arrays, trees, lists, etc. There are two types of memory allocation:
19 | 
20 | 1. **Static Memory Allocation (Stack Memory)**: Also known as compile-time memory allocation, this is a process where memory is allocated during the program's compilation phase. This type of memory allocation is used for data that has a fixed size, determined before the program runs.
The compiler plays a key role here, as it allocates memory for variables present in the program based on their declared size and type.
21 | 
22 |    *Characteristics of Static Memory Allocation:*
23 | 
24 |    - **Fixed Size**: The size of each variable is known at compile time.
25 |    - **Efficient Access**: Since the location and size of the data are known, memory access is fast and predictable.
26 |    - **Stack Structure**: Variables are stored in stack memory, which operates on a Last-In-First-Out (LIFO) principle.
27 |    - **Scope and Lifetime**: The lifetime of variables in stack memory is limited to the scope in which they are declared.
28 |    - **Automatic Management**: Memory is automatically managed by the system, with variables being created and destroyed following the program's flow.
29 | 
30 |    *Example:* in JavaScript, we can declare variables like this:
31 | 
32 |    ```javascript
33 |    let x = 10; // A number value
34 |    const y = 'A'; // A character value
35 |    ```
36 | 
37 |    As you can see, `x` and `y` are allocated memory at the time the script is parsed and executed, and the memory is allocated based on the type of the variable. In a language like C, an `int` such as `x` would typically be allocated 4 bytes and a `char` such as `y` 1 byte; JavaScript leaves the exact sizes to the engine (numbers, for example, are commonly stored as 64-bit floating-point values).
38 | 
39 | > NOTE💡: In JavaScript, memory allocation for primitive types is managed by the JavaScript engine, with the specifics of the allocation size and mechanism abstracted away from the programmer. The JavaScript engine handles these details, which can vary between different engines (like V8 in Chrome, SpiderMonkey in Firefox). This abstraction is a key aspect of JavaScript's design as a high-level language.
40 | 
41 | 2. **Dynamic Memory Allocation (Heap Memory)**: Also known as run-time memory allocation, this is crucial for managing memory during a program's execution phase. Unlike static memory allocation, which allocates memory at compile time, dynamic allocation is performed at runtime.
This approach is particularly useful for allocating memory for data structures whose size cannot be determined at compile time and may change during the program's execution.
42 | 
43 |    *Characteristics of Dynamic Memory Allocation:*
44 | 
45 |    - **Runtime Allocation**: Memory is allocated as the program runs, allowing for flexible memory management.
46 |    - **Variable Size**: The size of the memory block can vary, accommodating different data sizes as needed.
47 |    - **Heap Structure**: Allocated in heap memory, which is a large pool of memory available for dynamic allocation.
48 |    - **Manual Management**: In some languages, programmers must explicitly allocate and deallocate memory. In others, like JavaScript, this is managed automatically.
49 |    - **Efficient Utilization**: By allocating only the needed memory during runtime, dynamic allocation helps in optimizing memory usage and reducing wastage.
50 | 
51 |    *Example:* in JavaScript, objects, arrays, and functions are examples of entities that use dynamic memory allocation:
52 | 
53 |    ```javascript
54 |    let user = { name: 'Alice', age: 30 }; // An object allocated in heap memory
55 |    let numbers = [1, 2, 3, 4, 5]; // An array allocated in heap memory
56 |    ```
57 | 
58 |    As you can see, the `user` object and the `numbers` array are dynamically allocated in heap memory. While JavaScript automates the management of heap memory, including garbage collection, an understanding of how dynamic memory allocation works can still be crucial for optimizing application performance and memory usage.
59 | 
60 | The question now is: in what units is information stored in memory? Let's talk about that in the next section.
61 | 
62 | ## Memory Units
63 | 
64 | The storage capacity of memory is expressed in various units. These are as follows:
65 | 
66 | ### Bit
67 | 
68 | Short for binary digit, the bit is the most basic unit of information in computing and digital communications.
The bit represents a logical state with one of two possible values. These values are most commonly represented as either `1` or `0`, but other representations such as `true`/`false`, `yes`/`no`, `on`/`off`, or `+`/`−` are also widely used.
69 | 
70 | A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. **A group of eight bits is called one byte**, but historically the size of the byte is not strictly defined. Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. **A string of four bits is usually called a nibble**.
71 | 
72 | ### Byte
73 | 
74 | The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer, and for this reason it is the smallest addressable unit of memory in many computer architectures. A single byte can represent up to 256 data values (2^8).
75 | 
76 | A **byte** can vary in size depending on the system, but **it commonly consists of 8 bits**. Network protocol documents such as the Internet Protocol refer to an 8-bit byte as an *octet*. The bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the [bit endianness](https://en.wikipedia.org/wiki/Endianness#Bit_endianness). A byte can effectively represent all of the numbers between 0 and 255, inclusive, in binary format.
77 | 
78 | *The following bytes represent the numbers 1, 2, 3, 4 and 5 in binary format:*
79 | 
80 | | Number | Binary Format |
81 | |--------|---------------|
82 | | 1      | 00000001      |
83 | | 2      | 00000010      |
84 | | 3      | 00000011      |
85 | | 4      | 00000100      |
86 | | 5      | 00000101      |
87 | 
88 | ## Key Points for Coding Interviews
89 | 
90 | When preparing for coding interviews, a strong understanding of memory management is crucial. Here are the key points, rephrased and elaborated with examples:
91 | 
92 | 1. **Byte-Based Memory Storage**:
93 |    - In computers, data is stored in memory as a collection of bytes, with each byte comprising 8 bits.
94 |    - **Example**: A character like `A` in [ASCII](https://www.asciitable.com/) is represented as `01000001` in binary, which is `65` in decimal.
95 | 
96 | 2. **Pointer References in Memory**:
97 |    - Memory addresses, or `pointers`, allow bytes to reference other bytes, forming links between different data points.
98 |    - **Example**: In C++, `int *ptr = &x;` creates a pointer `ptr` that holds the memory address of the integer `x`.
99 | 
100 | 3. **Limited Memory Resources**:
101 |    - Memory is a finite resource in any computing system, making it important to optimize the memory usage of algorithms.
102 |    - **Example**: In an algorithm, using an integer array of size `n` consumes `n * sizeof(int)` bytes of memory, so choosing the right data structure and size is crucial.
103 | 
104 | 4. **Access Efficiency**:
105 |    - Accessing a single byte or a fixed-size group of bytes is a basic operation in memory manipulation. These accesses are generally counted as single operations in computational complexity.
106 |    - **Example**: Reading a `32-bit` integer from memory is treated as a single operation, regardless of the number of bits involved, because the memory controller reads the entire `32-bit` block from memory even if only a single byte is needed. So if we want to store only the character `A` in memory, it needs just 8 bits, but in practice it may occupy a full 32-bit word, which would be represented in binary as `00000000 00000000 00000000 01000001`.
107 | 
108 | ## Understanding 32-bit vs. 64-bit Architectures
109 | 
110 | The difference between 32-bit and 64-bit refers to the size of memory addresses that a processor can use.
111 | 
112 | - **32-bit Systems**:
113 |   - Can address 2^32 memory locations, which equates to 4 GB of RAM.
114 |   - **Example**: In a 32-bit system, an address is represented using 32 bits, limiting the total addressable memory space.
115 | 
116 | - **64-bit Systems**:
117 |   - Can address 2^64 memory locations, significantly more than 32-bit, allowing access to a much larger amount of RAM (theoretically up to 16 exabytes).
118 |   - **Example**: In a 64-bit system, the larger address space allows for more efficient processing of larger data sets and the use of more memory, enhancing overall performance.
119 | 
120 | ## References
121 | 
122 | - [Computer Memory](https://en.wikipedia.org/wiki/Computer_memory)
123 | - [Bit - Wikipedia](https://en.wikipedia.org/wiki/Bit)
124 | - [Byte - Wikipedia](https://en.wikipedia.org/wiki/Byte)
125 | - [Endianness - Wikipedia](https://en.wikipedia.org/wiki/Endianness)
126 | - [ASCII Table and Description](https://www.asciitable.com/)
--------------------------------------------------------------------------------
/notes/0-big-o-time-complexity.md:
--------------------------------------------------------------------------------
 1 | # Big O Time Complexity
 2 | 
 3 | Hi there 👋! In this note, we will talk about Big O notation: what it is, why we need it, and how to use it.
 4 | 
 5 | - [Big O Time Complexity](#big-o-time-complexity)
 6 |   - [What is Big O Notation?](#what-is-big-o-notation)
 7 |   - [Why do we need Big O Notation?](#why-do-we-need-big-o-notation)
 8 |   - [Complexity Analysis](#complexity-analysis)
 9 |     - [Time complexity](#time-complexity)
10 |     - [Space complexity](#space-complexity)
11 |   - [Time Complexities](#time-complexities)
12 |     - [Constant Time - O(1) Algorithm 🕒](#constant-time---o1-algorithm-)
13 |     - [Logarithmic Time - O(log n) Algorithm 📉](#logarithmic-time---olog-n-algorithm-)
14 |     - [Linear Time - O(n) Algorithm 📈](#linear-time---on-algorithm-)
15 |     - [Quadratic Time - O(n²) Algorithm 🔄](#quadratic-time---on-algorithm-)
16 |     - [Exponential Time - O(2^n) Algorithm 📈](#exponential-time---o2n-algorithm-)
17 |   - [Key Considerations in Analyzing Time Complexity ⏱️](#key-considerations-in-analyzing-time-complexity-️)
18 |   - [References](#references)
19 | 
20 | ## What is Big O Notation?
21 | 
25 | 
26 | Imagine you have a huge box of `LEGO blocks` and you want to find a specific blue block. Big O notation is a way of describing how long it might take you to find that block. If you have a box of 10 blocks, it might take you 10 seconds to find it; with 100 blocks, 100 seconds; with 1000 blocks, 1000 seconds. The time it takes you to find that block is directly proportional to the size of the box, and that's what Big O notation captures: how long finding the block might take as the size of the box grows.
27 | 
28 | **Technically speaking, Big O is a theoretical definition of the complexity of an algorithm as a function of the input size. It is a notation used to describe complexity, and by notation I mean that it simplifies everything in the algorithm down into a single variable.**
29 | 
30 | Big O is a useful notation for understanding both time and space complexity, but only when comparing algorithms that solve the same problem. The last bit in that definition, "as a function of the size", simply means that Big O measures complexity as the input size grows: it's not important how an algorithm performs on a single data set, but how it performs across all possible data sets it may encounter.
31 | 
32 | > TIP💡: Big O is referred to as the `upper bound` of the algorithm, meaning it measures how the algorithm performs in the worst-case scenario. That's all Big O is, nothing special.
33 | 
35 | ![LEGO blocks](./images/lego-blocks.png)
36 | 
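The LEGO-box analogy maps directly onto a linear scan. Here is a minimal sketch (the `findBlock` helper is hypothetical, not code from this repo) showing that, in the worst case, the number of checks grows in direct proportion to the size of the box:

```javascript
// Checking one block at a time: a box of n blocks means up to n checks,
// so the time to find a block grows linearly with the box size.
function findBlock(blocks, target) {
  for (let i = 0; i < blocks.length; i++) {
    if (blocks[i] === target) return i; // found after i + 1 checks
  }
  return -1; // checked all n blocks without finding it
}

console.log(findBlock(['red', 'green', 'blue'], 'blue')); // 2
console.log(findBlock(['red', 'green'], 'blue')); // -1
```

Doubling the box doubles the worst-case number of checks, which is exactly the "proportional to the size of the box" behaviour described above.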
39 | 
40 | ## Why do we need Big O Notation?
41 | 
42 | Every problem has multiple solutions, and each solution has its own pros and cons. So when you're trying to solve a problem, you need to consider the time and space complexity of each solution to determine which one is the best for your use case. Big O notation is a way to measure the efficiency of an algorithm, and it's used to compare different solutions to the same problem and to determine the best solution for a given problem.
43 | 
44 | For example, let's say we need to implement a function that reverses a string. We can do it in many ways, but we will focus on three:
45 | 
46 | - Using the built-in `reverse` method.
47 | 
48 |   ```js
49 |   function reverseString(str) {
50 |     return str.split("").reverse().join("");
51 |   }
52 |   ```
53 | 
54 | - Using a `for` loop.
55 | 
56 |   ```js
57 |   function reverseString(str) {
58 |     let reversed = "";
59 |     for (let i = str.length - 1; i >= 0; i--) {
60 |       reversed += str[i];
61 |     }
62 |     return reversed;
63 |   }
64 |   ```
65 | 
66 | - Using the `reduce` method.
67 | 
68 |   ```js
69 |   function reverseString(str) {
70 |     return str.split("").reduce((reversed, character) => {
71 |       return character + reversed;
72 |     }, "");
73 |   }
74 |   ```
75 | 
76 | More implementations can be found [here](https://stackoverflow.com/a/51751393/18863976), but my point is that we have multiple solutions to the same problem, each with its own pros and cons, so we need to know which solution is the best for our use case, and that's where Big O notation comes in. So is Big O notation important? The answer is:
77 | 
78 | - Yes, because it's important to know how long your code will take to run.
79 | - It's important to know how much memory your code will take up.
80 | - It's important to know how your code will scale.
81 | 
82 | > NOTE💡: The world we live in today consists of complicated apps and software, each running on various devices and each having different capabilities.
Some devices like desktops can run heavy machine learning software, but others like phones can only run apps. So when you create an application, you'll need to optimize your code so that it runs smoothly across devices to give you an edge over your competitors. ⏤ [Big O Notation Cheat Sheet](https://flexiple.com/algorithms/big-o-notation-cheat-sheet)
83 | 
84 | ## Complexity Analysis
85 | 
86 | Complexity analysis is the process of determining how efficient an algorithm is. It usually involves finding both the time complexity and the space complexity of an algorithm.
87 | 
88 | Simply put, it determines how `good` an algorithm is, and by `good` I mean how fast it runs and how much memory it takes up, and whether it's `better` than another algorithm or not.
89 | 
90 | ### Time complexity
91 | 
92 | A measure of how fast an algorithm runs; it describes the amount of time necessary to execute an algorithm.
93 | 
94 | When ⏱️ analyzing an algorithm's time complexity, we encounter three scenarios: *best-case*, *average-case*, and *worst-case*, each portraying a different performance profile. Suppose we have the following unsorted list [1, 5, 3, 9, 2, 4, 6, 7, 8] and we need to find the index of a value in this list using linear search.
95 | 
96 | - *Best-case*: 🌟 this is the complexity of solving the problem for the best input. In our example, the best case would be to search for the value 1. Since this is the first value of the list, it would be found in the first iteration.
97 | 
98 | - *Average-case*: 📊 this is the average complexity of solving the problem. This complexity is defined with respect to the distribution of the values in the input data. Maybe this is not the best example but, based on our sample, we could say that the average-case would be when we're searching for some value in the "middle" of the list, for example, the value 2.
99 | 
100 | - *Worst-case*: ⚠️ this is the complexity of solving the problem for the worst input of size n.
In our example, the worst-case would be to search for the value 8, which is the last element in the list.
101 | 
102 | Generally, when discussing an algorithm's time complexity, emphasis is often placed on *the worst-case* scenario, as it illustrates the maximum time required for a given input size, providing a conservative estimation of performance.
103 | 
104 | It's also worth mentioning that time complexity is not a measure of the actual time taken to run an algorithm; instead, it is a measure of how the time taken scales with change in the input length. So we are not talking about seconds, milliseconds, or how many cycles it takes to run an algorithm, but rather how many operations it takes to run an algorithm as *a function of the size of the input*.
105 | 
106 | The question now is: **what's a "function of the size of the input" 😂** in terms of time?
107 | 
108 | A `function of the size of the input` in terms of time means that the time it takes for a computer program (or algorithm) to finish depends on how many items it has to work with.
109 | 
110 | Let's look at two examples using JavaScript, which involve doing something with an array of numbers.
111 | 
112 | - Example 1: Fixed Number of Steps
113 | 
114 |   ```javascript
115 |   let arr = [1, 2, 3, ...]; // This is an array with some numbers
116 | 
117 |   for (let i = 0; i < 4; i++) {
118 |     console.log(arr[1]);
119 |   }
120 |   ```
121 | 
122 |   In this example, no matter how big the array is, the program always does the same thing **four times**. It's like saying, "No matter how many toys I have, I will only look at the same toy, four times." This takes the same amount of time whether you have 10 toys or 1000. We call this O(1) or constant time because it doesn't change with the number of items.
123 | 
124 | - Example 2: Steps Depend on Number of Items
125 | 
126 |   ```javascript
127 |   let arr = [1, 2, 3, ...]; // This is an array with some numbers
128 | 
129 |   for (let i = 0; i < arr.length; i++) {
130 |     console.log(arr[i]);
131 |   }
132 |   ```
133 | 
134 |   In this second example, the program looks at each number in the array, one by one. If you have 5 numbers, it looks 5 times. If you have 100 numbers, it looks 100 times. The more numbers you have, the longer it takes. This is like saying, "I will look at each toy I have, one by one." If you have more toys, it takes more time. We call this O(n), where `n` is the number of items, because the time it takes grows with the number of items.
135 | 
136 | So, when we say `function of the size of the input` with respect to time, we're talking about how the number of items in the input (like numbers in an array) affects how long the program takes to run.
137 | 
138 | ![Big O Complexity Chart Time](./images/big-o-complexity.png "© phan801")
139 | 
140 | ### Space complexity
141 | 
142 | A measure of how much auxiliary memory an algorithm takes up; simply put, it is computed from how much space the variables in an algorithm take up.
143 | 
144 | > The best algorithms should have the least space complexity. The less space used, the faster the algorithm executes as *a function of the size of the input*.
145 | 
146 | The question now is: **what's a "function of the size of the input" 😂** in terms of space complexity?
147 | 
148 | Talking about `space complexity` is like thinking about how much room you need in your backpack for your school stuff. Just like your backpack can only hold so many books and pencils, a computer program can only use a certain amount of space (or memory) on your computer.
149 | 
150 | When we say `function of the size of the input` in terms of space complexity, we're looking at how much extra space a computer program needs based on the number of things it's working with.
151 | 
152 | Let's use two JavaScript examples to understand this better:
153 | 
154 | - Example 1: Fixed Space Usage
155 | 
156 |   ```javascript
157 |   function sumOfFirstTwo(arr) {
158 |     return arr[0] + arr[1];
159 |   }
160 |   ```
161 | 
162 |   In this program, no matter how big the array `arr` is, we only use a tiny bit of extra space to store the sum of the first two numbers. It's like only needing space for two pencils in your backpack, regardless of how many pencils you actually have. This has a constant space complexity, or O(1), because it doesn't need more space if you have more items.
163 | 
164 | - Example 2: Space Depends on Number of Items
165 | 
166 |   ```javascript
167 |   function copyArray(arr) {
168 |     let newArr = [];
169 |     for (let i = 0; i < arr.length; i++) {
170 |       newArr.push(arr[i]);
171 |     }
172 |     return newArr;
173 |   }
174 |   ```
175 | 
176 |   In this second example, we make a new array that's just like the one we started with. If your original array has 5 numbers, the new array also has 5 numbers. If it has 100, the new one does too. It's like needing a bigger backpack if you have more books. The more items you have, the more space you need. This is called linear space complexity, or O(n), where `n` is the number of items, because the space needed grows with the number of items.
177 | 
178 | So, `function of the size of the input` in terms of space complexity means how the amount of memory a program needs changes based on the number of items it's dealing with.
179 | 
180 | ## Time Complexities
181 | 
182 | The performance of an algorithm, in terms of speed and memory usage, isn't constant; it can vary based on the input. So, how can we articulate the efficiency of an algorithm? 🤔
183 | 
184 | 💡 Big O notation comes into play here. It's a potent instrument that enables us to express the space-time complexity of an algorithm in relation to the size of its input.
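One way to make "in relation to the size of its input" concrete is to count the operations an algorithm performs as the input grows. The `countSteps` helper below is a hypothetical sketch, not a real profiling tool; Big O describes how this count grows, not wall-clock time:

```javascript
// Count how many steps a simple linear scan performs for an input of size n.
function countSteps(n) {
  const arr = Array.from({ length: n }, (_, i) => i); // an input of size n
  let steps = 0;
  for (let i = 0; i < arr.length; i++) {
    steps++; // one step per element => O(n) overall
  }
  return steps;
}

console.log(countSteps(10)); // 10
console.log(countSteps(1000)); // 1000: steps grow linearly with n
```

Ten times the input means ten times the steps here, which is what the "Linear Time" row in the table below expresses.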
185 | 186 | In Big O, there are six major types of complexities (time and space) and The following are examples of common complexities and their Big O notations, ordered from fastest to slowest:: 187 | 188 | | Name | Time Complexity | 189 | |-------------------|-----------------| 190 | | Constant Time | O(1) | 191 | | Logarithmic Time | O(log n) | 192 | | Linear Time | O(n) | 193 | | Log-linear Time | O(n log n) | 194 | | Quadratic Time | O(n^2) | 195 | | Cubic Time | O(n^3) | 196 | | Exponential Time | O(2^n) | 197 | | Factorial Time | O(n!) | 198 | 199 | the above list is sorted from the best to the worst, it basically used to express the performance of algorithms or the complexity of algorithms based on the input. so we can say that the best algorithm is the one that has the least time complexity and the worst algorithm is the one that has the highest time complexity. 200 | 201 | ![Big O Complexity Chart](./images/big-o-complexity-chart.png) 202 | 203 | The next question that comes to mind is how you know which algorithm has which time complexity 🤔 204 | 205 | ### Constant Time - O(1) Algorithm 🕒 206 | 207 | - **Definition:** An algorithm operates in constant time (O(1)) when its execution is independent of the input data size (n). Regardless of the input's scale, the algorithm's runtime remains consistent. 208 | 209 | - ***Example:*** 210 | 211 | ```js 212 | function getFirstFruit(fruits) { 213 | return fruits[0]; 214 | } 215 | 216 | const fruits = ["🍎", "🍌", "🍇", "🍉", "🍊", "🍍", "🍓", "🍒"]; 217 | getFirstFruit(fruits); // 🍎 218 | ``` 219 | 220 | - **Explanation:** The provided function `getFirstFruit` retrieves the first element from an array. Regardless of the array's length, the function's runtime remains constant because it only accesses the initial value. 
- **No matter how many fruits you have, it will always return '🍎', the first fruit in the array, so it runs in constant time.**

### Logarithmic Time - O(log n) Algorithm 📉

- **Definition:** An algorithm operates in logarithmic time (O(log n)) when it cuts down the size of the input data at each step without needing to examine every value.

> TIP💡: A logarithm is the inverse of an exponential: if the exponential function is 2^x, the corresponding logarithmic function is log2(x).

![log-exponential](./images/log-exponential.png)

- **Example**

```javascript
function binarySearch(numbersArr, value) {
  let left = 0
  let right = numbersArr.length - 1
  let guess

  while (left <= right) {
    const middle = Math.floor((left + right) / 2)
    guess = numbersArr[middle]
    if (guess === value) {
      return middle
    } else if (guess > value) {
      right = middle - 1
    } else {
      left = middle + 1
    }
  }
  throw new Error('Value is not in the list')
}

// Note: binary search only works on a sorted array.
const numbersArr = [1, 2, 3, 4, 5, 6, 7, 8, 9]
binarySearch(numbersArr, 8) // 7
// 1st iteration => left = 0, right = 8, middle = 4, guess = 5, value = 8
// 2nd iteration => left = 5, right = 8, middle = 6, guess = 7, value = 8
// 3rd iteration => left = 7, right = 8, middle = 7, guess = 8, value = 8
```

- **Explanation:** The `binarySearch` function locates the position of an element in a sorted list using the binary search algorithm.
It repeatedly divides the search interval in half until the value is found or the search space becomes empty.

- **Binary Search Steps:**
  1. Calculate the middle of the list.
  2. Adjust the boundaries based on whether the value is greater or smaller than the middle element.
  3. Continue dividing the search space until the value is found or the boundaries converge.

> TIP💡: Algorithms with logarithmic time complexity are frequently employed in binary trees and binary search operations. They handle large datasets efficiently because each iteration discards half of the remaining search space.

- You will often hear logarithmic algorithms called *sub-linear*, a more general term for any algorithm that runs in less than linear time and is therefore more efficient than a linear scan.

### Linear Time - O(n) Algorithm 📈

- **Definition:** An algorithm operates in linear time (O(n)) when its running time grows at most linearly with the size of the input. In the worst case it examines every value in the input, which is the best possible complexity for any algorithm that must look at all of its input.

- **Example**

```javascript
function linearSearch(fruits, fruit) {
  for (let index = 0; index < fruits.length; index++) {
    if (fruit === fruits[index]) {
      return index; // Return index if value is found
    }
  }
  throw new Error('fruit not found in the list');
}

const fruits = ["🍎", "🍌", "🍇", "🍉", "🍊", "🍍", "🍓", "🍒"];
linearSearch(fruits, "🍇"); // 2
// 1st iteration => index = 0, value = "🍎"
// 2nd iteration => index = 1, value = "🍌"
// 3rd iteration => index = 2, value = "🍇"
```

- **Explanation:** The `linearSearch` function performs a linear search in an unsorted array to find the position of an element.
It iterates through each element in the array and compares it with the desired value until a match is found.

- **Linear Search Steps:**
  1. Iterate through each element in the array.
  2. Check if the current element matches the value being searched.
  3. Return the index if the value is found, or raise an error if not found after examining all elements.

### Quadratic Time - O(n²) Algorithm 🔄

- **Definition:** An algorithm operates in quadratic time (O(n²)) when it performs a linear-time operation for each value in the input data. This typically arises when loops are nested, so the running time grows with the square of the input size.

- **Example**

```javascript
function bubbleSort(numbersArr) {
  let swapped
  do {
    swapped = false
    for (let i = 0; i < numbersArr.length - 1; i++) {
      if (numbersArr[i] > numbersArr[i + 1]) {
        // Swap elements using destructuring assignment
        [numbersArr[i], numbersArr[i + 1]] = [numbersArr[i + 1], numbersArr[i]];
        swapped = true
      }
    }
  } while (swapped)
}

const numbersArr = [2, 5, 1, 4, 3]
bubbleSort(numbersArr)
// 1st pass: 2,5,1,4,3 => swap 5,1 => 2,1,5,4,3 => swap 5,4 => 2,1,4,5,3 => swap 5,3 => 2,1,4,3,5
// 2nd pass: 2,1,4,3,5 => swap 2,1 => 1,2,4,3,5 => swap 4,3 => 1,2,3,4,5
// 3rd pass: no swaps, so the loop stops => 1,2,3,4,5
```

- **Explanation:** The `bubbleSort` function implements the bubble sort algorithm, which has quadratic time complexity. It repeatedly steps through the array, compares adjacent elements, and swaps them if they are in the wrong order. This process continues until a full pass makes no swaps, meaning the array is sorted.

- **Bubble Sort Steps:**
  1. Compare each element with its adjacent element.
  2. Swap elements if they are in the wrong order.
  3. Repeat this process until the array is sorted.

- **Importance:** Quadratic-time algorithms like bubble sort compare every element against its neighbors on every pass, so the running time climbs quickly as the input grows. This makes them less efficient for large datasets compared to algorithms with lower complexities.

### Exponential Time - O(2^n) Algorithm 📈

- **Definition:** An algorithm operates in exponential time (O(2^n)) when the amount of work roughly doubles with each addition to the input data set. These algorithms often appear in brute-force methods and naive recursive computations.

- **Explanation:** Exponential time complexity becomes notably resource-intensive as the input size increases. Brute-force attacks, such as systematically trying every possible password, exemplify this complexity.
Longer passwords are considerably more secure because the resources required to crack them grow exponentially with length.

- **Example: Recursive Fibonacci Algorithm**

```javascript
function fibonacci(n) {
  if (n <= 1) {
    return n;
  }
  return fibonacci(n - 1) + fibonacci(n - 2);
}

fibonacci(4); // 3
```

- **Explanation of Recursive Function:** The `fibonacci` function calculates Fibonacci numbers using a recursive approach. As the value of `n` increases, the number of recursive calls grows exponentially, leading to a significant increase in computation time.

- **Visualization:** The Fibonacci recursion tree demonstrates how the number of function calls grows exponentially with increasing `n`:
  - [Recursion Tree of Fibonacci(4)](https://visualgo.net/bn/recursion)
  - [Recursion Tree of Fibonacci(6)](https://visualgo.net/bn/recursion)

- **Importance:** Exponential-time algorithms, like the recursive Fibonacci computation, show how quickly resource requirements and computational burden explode as the input expands. Understanding such complexities is crucial when designing efficient algorithms.

> TIP💡: There are other complexity classes as well, such as log-linear time (O(n log n)), cubic time (O(n^3)), and factorial time (O(n!)).

## Key Considerations in Analyzing Time Complexity ⏱️

**Algorithmic Complexity Analysis Rules:**

- **Growth Relative to Input**: Time complexity analysis considers the growth pattern in relation to the input size, that is, how the running time of an algorithm changes as the size of the input changes.
- *Example:* If you have an algorithm that loops through an array of n elements, the time complexity is O(n). The running time grows linearly with the size of the input.
```javascript
function printElements(arr) {
  arr.forEach(element => {
    console.log(element);
  });
}
```

- **Disregarding Constants**: Constants in time complexity calculations are omitted because they don't significantly affect scalability.
- *Example:* If an algorithm loops through an array twice, the time complexity is still O(n), not O(2n); we disregard the constant factor of 2.

```javascript
function printElementsTwice(arr) {
  arr.forEach(element => {
    console.log(element);
  });
  arr.forEach(element => {
    console.log(element);
  });
}
```

- **Emphasis on Worst-Case Scenario**: Evaluating the algorithm's worst-case scenario provides a crucial measurement of its efficiency.
- *Example:* In a linear search algorithm, the worst case is when the element is at the end of the array or not present at all. The time complexity in this case is `O(n)`.

```javascript
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i;
    }
  }
  return -1;
}
```

- **Largest Complexity Among Operations**: When an algorithm comprises multiple operations, its time complexity is defined by the operation with the most substantial impact on execution time.
- *Example:* If an algorithm has two parts, one with time complexity O(n) and another with O(n^2), the overall time complexity is O(n^2), the larger of the two.
```javascript
function complexAlgorithm(arr) {
  // Part 1: O(n)
  arr.forEach(element => {
    console.log(element);
  });

  // Part 2: O(n^2)
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i], arr[j]);
    }
  }
}
```

- **Focus on Input**: Time complexity describes how an algorithm's runtime scales with changes in input size. It is not about the actual time in seconds or milliseconds a run takes, but about how the running time grows as the input grows.
- *Example:* An algorithm with time complexity O(log n) will have its running time increase only logarithmically as the input size increases.

```javascript
function binarySearch(sortedArray, target) {
  let left = 0;
  let right = sortedArray.length - 1;

  while (left <= right) {
    const mid = Math.floor((left + right) / 2);

    if (sortedArray[mid] === target) {
      return mid;
    }

    if (sortedArray[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }

  return -1;
}
```

These rules guide the analysis of time complexity: focus on the most influential operation, disregard constants, and prioritize worst-case scenarios for accurate assessments.

I also recommend taking a look at the [Big O Cheat Sheet](https://www.bigocheatsheet.com/), as it has more samples and examples.
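As one final worked example (a common illustration of applying these rules, with function names of my own choosing), here are two ways to answer "do two arrays share a common element?", one quadratic and one linear:

```javascript
// Nested loops: for each item in a, scan all of b => O(n * m),
// quadratic when the arrays are of similar size.
function haveCommonItemQuadratic(a, b) {
  for (let i = 0; i < a.length; i++) {
    for (let j = 0; j < b.length; j++) {
      if (a[i] === b[j]) return true;
    }
  }
  return false;
}

// Set lookup: build a Set from a (O(n)), then check each item of b
// with an O(1) average-time lookup => O(n + m), linear overall.
function haveCommonItemLinear(a, b) {
  const seen = new Set(a);
  return b.some((item) => seen.has(item));
}

console.log(haveCommonItemQuadratic([1, 2, 3], [4, 5, 3])); // true
console.log(haveCommonItemLinear([1, 2, 3], [4, 5, 6]));    // false
```

Both functions return the same answers; only the growth rate differs, which is exactly the distinction Big O is designed to express.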
## References

- [Computational complexity](https://en.wikipedia.org/wiki/Computational_complexity)
- [Big-O notation](https://en.wikipedia.org/wiki/Big_O_notation)
- [Time complexity](https://en.wikipedia.org/wiki/Time_complexity)
- [Big O Cheat Sheet – Time Complexity Chart](https://www.freecodecamp.org/news/big-o-cheat-sheet-time-complexity-chart/)
- [Big O Notation Cheat Sheet](https://flexiple.com/algorithms/big-o-notation-cheat-sheet)