├── CONTRIBUTING.md ├── LICENSE.md ├── Pages ├── Concepts.md ├── Concepts │ ├── Idempotency.md │ ├── Scalability.md │ └── Zkp.md ├── Concurrency.md ├── Concurrency │ ├── Async & Task Internals.md │ ├── Basics.md │ ├── Cancellation Token.md │ ├── Locking.md │ ├── Strategies.md │ ├── Thread Pool.md │ ├── Threading.md │ └── Time-to-Live.md ├── Cryptography.md ├── DDD.md ├── DDD │ ├── Definition.md │ ├── Patterns.md │ ├── Strategic Design.md │ └── Tactical Design.md ├── Data Structures.md ├── Data Structures │ ├── Binary Tree.md │ ├── Graphs.md │ ├── Linear.md │ ├── Search.md │ ├── Sorting.md │ ├── String Manipulation.md │ ├── Time Space Complexity.md │ ├── Tree.md │ └── Tries.md ├── Database.md ├── Database │ ├── ACID in Depth.md │ ├── CAP Theorem.md │ ├── Concepts.md │ ├── Deadlocks.md │ ├── Indexing.md │ ├── NoSQL.md │ ├── SQL.md │ └── Transactions and Isolation Levels.md ├── Design Patterns.md ├── Design Patterns │ ├── Code Smells.md │ ├── Creational Structural Behavioral.md │ └── Principles.md ├── Docker.md ├── Entity Framework.md ├── Entity Framework │ ├── Basics.md │ ├── Code-First DB-First.md │ ├── Database-Provider Mechanisms.md │ ├── DbContext Lifetime.md │ ├── Fluent-API.md │ ├── Transaction Management.md │ └── Value-Converters.md ├── Event-Driven.md ├── Event-Driven │ ├── Definition.md │ ├── Flows Layer.md │ ├── Models.md │ ├── Patterns.md │ └── Topology.md ├── Git.md ├── Microservices.md ├── Microservices │ ├── API Gateway.md │ ├── Communication and Integration Patterns.md │ ├── Distributed Transactions.md │ ├── Fault-Tolerant System.md │ ├── Fault-Tolerant System │ │ ├── Asynchronous Communication.md │ │ ├── Cascading Failures.md │ │ ├── Deadline.md │ │ ├── Fallback.md │ │ ├── Rate Limiter.md │ │ ├── Retries.md │ │ ├── Single Point of Failure.md │ │ └── Timeouts.md │ ├── Introduction.md │ ├── Key Vaults.md │ ├── Load Balancing.md │ ├── Metrics, Monitoring, Tracing, Logging.md │ ├── Service Mesh.md │ └── Service Registry & Discovery.md ├── NET.md ├── NET │ ├── Assemblies.md │ ├── CLR BCL.md │ ├── Collections.md │ ├── Comparing Strings.md │ ├── Data Types and Memory Allocation.md │ ├── Dynamics.md │ ├── Events.md │ ├── GC and Memory.md │ ├── Generics.md │ ├── LINQ Query.md │ ├── Lambda Expressions.md │ ├── Network.md │ ├── Parallel Programming.md │ ├── Reflection.md │ ├── Serialization.md │ ├── Span and Memory.md │ ├── Standard Equality Protocols.md │ ├── Stream.md │ ├── StringBuilder.md │ ├── Try Statements and Exceptions.md │ └── Various Aspects.md ├── Network.md ├── OOP.md ├── Operating System.md ├── Operating System │ ├── Concurrent Concepts.md │ ├── Disk Scheduling.md │ ├── File System.md │ ├── Memory Management.md │ ├── Process Management.md │ ├── Process Schedulers.md │ └── Process Synchronization.md ├── Test.md └── Test │ ├── Code Coverage.md │ ├── TDD BDD.md │ ├── Test Isolation.md │ └── Test Pyramid.md └── README.md /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to DotNet Engineer Masterclass 2 | 3 | First off, thank you for considering contributing to DotNet Engineer Masterclass! It's people like you that make this resource richer and more valuable for everyone. This document outlines how you can contribute and what you can expect from the process. 4 | 5 | ## Getting Started 6 | 7 | Before you begin, ensure you have a GitHub account and are familiar with GitHub repositories. If you're new to GitHub, check out [GitHub's documentation](https://docs.github.com/en/get-started) to get started. 
8 | 9 | ## How to Contribute 10 | 11 | Contributions to the repository can take many forms. Here are some ways you can help improve the project: 12 | 13 | - **Reporting Bugs**: If you find a bug or an error in any of the documentation, please open an issue describing what is wrong and where you found it. 14 | 15 | - **Suggesting Enhancements**: This can include new features, improvements to existing documentation, or new topics you believe should be covered. Open an issue to suggest your enhancement, providing a clear and detailed explanation of your ideas. 16 | 17 | - **Writing or Editing Documentation**: Whether it's fixing typos, clarifying explanations, or adding new sections, your writing contributions are welcome. To submit your content, fork the repository, make your changes, and submit a pull request. 18 | 19 | ### Pull Request Process 20 | 21 | 1. Fork the repository and create your branch from `main`. 22 | 2. Ensure any new or changed documentation follows the existing format and structure. 23 | 3. Update the README.md with details of your changes, if applicable. 24 | 4. Issue your pull request to the `main` branch. 25 | 26 | ## Questions? 27 | 28 | If you have any questions or need further clarification about contributing, please open an issue with your question. 29 | 30 | Thank you for contributing to the DotNet Engineer Masterclass! 31 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | # MIT License 2 | 3 | Copyright (c) 2024 @CHashtager 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /Pages/Concepts.md: -------------------------------------------------------------------------------- 1 | # Concepts 2 | 3 | [Idempotency](Concepts/Idempotency.md) 4 | 5 | [Zero-Knowledge Proof](Concepts/Zkp.md) 6 | 7 | [Scalability](Concepts/Scalability.md) 8 | -------------------------------------------------------------------------------- /Pages/Concepts/Idempotency.md: -------------------------------------------------------------------------------- 1 | # Idempotency 2 | 3 | Refers to the property of certain operations or HTTP methods where the effect of performing the same operation multiple times is the same as performing it a single time. This concept is particularly important for reliability and error handling in distributed systems. 
For example, an HTTP **`GET`** request is idempotent because no matter how many times it's executed, it returns the same result without causing any side effects. Similarly, an HTTP **`PUT`** request is designed to be idempotent because making the same request multiple times with the same data will not have additional effects after the first request; the state of the resource is updated to the same state with each request. 4 | 5 | ## **Implementing Idempotent Operations** 6 | 7 | ### Web APIs 8 | 9 | For Web APIs built with ASP.NET Core, idempotency is often considered when designing HTTP endpoints: 10 | 11 | - **GET**: Naturally idempotent, as it retrieves information without changing the server state. 12 | - **PUT**: Used for updating resources, designed to be idempotent. Regardless of how many times the PUT request is made, the resource is updated to the same state. 13 | - **DELETE**: Also idempotent, as deleting the same resource multiple times results in the same server state (the resource remains deleted). 14 | - **POST**: Generally not idempotent, as it creates a new resource each time it's called. However, idempotency can be achieved through additional mechanisms like idempotency keys. 15 | 16 | ### Idempotency Keys 17 | 18 | For operations that are inherently non-idempotent, like POST requests in a RESTful API, idempotency keys can enforce idempotency. An idempotency key is a unique value provided by the client on the initial request. If the operation needs to be retried, the same key is used, allowing the server to recognize it and prevent duplicate processing. This can be implemented by storing the key and the response of the operation, returning the stored response for subsequent requests with the same key. 19 | 20 | ### Message Queuing 21 | 22 | In distributed systems or microservices architectures, message queuing (e.g., using Azure Service Bus, RabbitMQ, or Kafka) might be involved. Ensuring idempotency in such systems often involves deduplication logic, where each message is processed only once. This might require tracking message identifiers or using idempotency keys similar to those in API requests. 23 | 24 | ### **.NET Techniques and Tools** 25 | 26 | - **Entity Framework Core**: When updating data in a database, operations can be made idempotent by checking the current state before making any changes. Entity Framework Core handles this efficiently with its change tracking mechanisms. 27 | - **ASP.NET Core Middleware**: Custom middleware can intercept requests and implement logic for handling idempotency keys, including validating them and ensuring responses are cached and reused appropriately. 28 | - **Distributed Cache**: Systems like Redis, used as a distributed cache, can store response data associated with idempotency keys, ensuring fast retrieval for repeated operations. 29 | 30 | ### **Best Practices** 31 | 32 | 1. **Define Idempotency at the API Design Phase**: Clearly define which operations should be idempotent and design your endpoints accordingly. 33 | 2. **Use Idempotency Keys for Non-Idempotent Operations**: Implement idempotency keys for operations that cannot be made idempotent by their nature, like POST requests creating resources. 34 | 3. **Leverage Existing Middleware and Libraries**: Look for existing solutions that can help implement idempotency, such as ASP.NET Core middleware or libraries designed for this purpose. 35 | 4. 
**Consider the Persistence Layer**: Ensure your data access layer supports idempotent operations, especially when operations involve complex transactions or state changes. 36 | 5. **Testing**: Rigorously test your API for idempotent behavior, especially under edge cases and failure conditions, to ensure the system behaves as expected. 37 | -------------------------------------------------------------------------------- /Pages/Concepts/Scalability.md: -------------------------------------------------------------------------------- 1 | # Scalability 2 | As a system grows, its performance starts to degrade unless we adapt it to deal with that growth. 3 | Scalability is the property of a system to handle a growing amount of load by adding resources to the system. 4 | ## How Can a System Grow? 5 | 1. **User Base**: More users start using the system, leading to an increased number of requests. 6 | 2. **Features**: New functionality is introduced to expand the system. 7 | 3. **Data Volume**: Growth in the amount of data the system stores and manages. 8 | 4. **Complexity**: The system's architecture evolves to accommodate new features, scale, or integrations, resulting in additional components and dependencies. 9 | 5. **Geographic Reach**: The system starts serving users in new regions or countries. 10 | ## How to Scale a System? 11 | 1. **Vertical Scaling (Scale up)**: Add more power to a single machine, e.g., more RAM or a faster CPU. 12 | 2. **Horizontal Scaling (Scale out)**: Add more machines to spread the workload. 13 | 3. **Load Balancing**: Distribute incoming traffic across multiple servers. 14 | 4. **Caching**: Store frequently accessed data in memory (RAM) to avoid repeated expensive lookups. 15 | 5. **Microservices Architecture**: Split the system into independent services that can be scaled separately. 16 | 6. Other ways: CDN, Partitioning, Asynchronous communication, Auto-Scaling, Multi-region Deployment. -------------------------------------------------------------------------------- /Pages/Concepts/Zkp.md: -------------------------------------------------------------------------------- 1 | # Zero-Knowledge Proofs (ZKP) 2 | 3 | ## Introduction 4 | 5 | Zero-Knowledge Proofs (ZKP) are a cryptographic method by which one party (the prover) can prove to another party (the verifier) that a given statement is true, without conveying any information apart from the fact that the statement is indeed true. This concept is crucial for enhancing privacy and security in various digital applications, from blockchain technology to secure online voting systems. 6 | 7 | ## The Alibaba Cave Analogy 8 | 9 | ### The Setting 10 | 11 | Imagine a circular cave with a magical door that requires a secret spell to open. This cave has two entrances, **A** and **B**, on opposite ends. Inside the cave, there is a path that splits in two, with each path leading to one of the entrances, and the magical door that blocks the way from one side of the cave to the other. 12 | 13 | ### The Scenario 14 | 15 | - **Prover (Alibaba)**: Claims to know the secret spell to open the magical door. 16 | - **Verifier (The skeptic)**: Wants proof that Alibaba knows the secret spell without learning the spell themselves. 17 | 18 | ### The Proof Process 19 | 20 | 1. **Preparation**: Alibaba enters the cave and takes either path at random, while the skeptic waits outside. 21 | 22 | 2. **Challenge**: Without entering the cave, the skeptic shouts to Alibaba to come out from either entrance **A** or **B**. 23 | 24 | 3. 
**Execution**: If Alibaba truly knows the secret spell, he can use the spell to open the magical door (if needed) and exit from the requested entrance, thus proving his claim without revealing the spell. 25 | 26 | 4. **Verification**: This process is repeated several times. If Alibaba consistently responds to the challenge by appearing at the requested entrance, the skeptic becomes convinced that Alibaba knows the secret, all without ever learning the spell. 27 | 28 | ### Key Points 29 | 30 | - **Zero-Knowledge**: Throughout the process, the verifier learns nothing about the secret itself, only that the prover knows it. 31 | - **Repeatability**: To ensure the proof is reliable, the challenge must be repeated multiple times, reducing the chance of the prover cheating by luck. 32 | - **Privacy**: The prover's secret (the spell) remains protected, demonstrating the essence of zero-knowledge proofs. 33 | 34 | ## Applications 35 | 36 | Zero-Knowledge Proofs have various applications, including but not limited to: 37 | 38 | - **Blockchain and Cryptocurrencies**: For transactions that require privacy and security without revealing transaction details. 39 | - **Identity Verification**: Allowing users to prove their identity or credentials without disclosing sensitive information. 40 | - **Secure Voting**: Enabling voters to cast votes without revealing who they voted for, yet ensuring the vote is counted. 41 | -------------------------------------------------------------------------------- /Pages/Concurrency.md: -------------------------------------------------------------------------------- 1 | # Concurrency 2 | 3 | [Basics](Concurrency/Basics.md) 4 | 5 | [Threading](Concurrency/Threading.md) 6 | 7 | [Async & Task Internals](Concurrency/Async%20&%20Task%20Internals.md) 8 | 9 | [Cancellation Token](Concurrency/Cancellation%20Token.md) 10 | 11 | [Time-to-Live](Concurrency/Time-to-Live.md) 12 | 13 | [Locking](Concurrency/Locking.md) 14 | 15 | [Strategies](Concurrency/Strategies.md) 16 | 17 | [Thread Pool](Concurrency/Thread%20Pool.md) 18 | -------------------------------------------------------------------------------- /Pages/Concurrency/Basics.md: -------------------------------------------------------------------------------- 1 | # Basics 2 | 3 | ## **Concurrency vs Multi-Threading vs Async vs Parallelism** 4 | 5 | - **Concurrency** is about dealing with lots of things at once, like managing multiple tasks within an application. 6 | - **Multi-Threading** involves executing multiple threads simultaneously, allowing for parallel execution of code. 7 | - **Async** programming is about performing tasks without blocking the execution thread, improving responsiveness. 8 | - **Parallelism** is the simultaneous execution of (possibly related) computations across multiple processors or cores. 9 | 10 | --- 11 | 12 | - **Concurrency:** Multiple tasks making progress, not necessarily simultaneously. 13 | - **Multi-Threading:** Multiple threads executing code simultaneously. 14 | - **Async:** Asynchronous programming, allowing non-blocking execution. 15 | - **Parallelism:** Simultaneous execution of tasks on multiple processors. 16 | 17 | ## Single-core vs Multicore/Multiprocessor Machine 18 | 19 | - On a **single-core** machine, multi-threading and async programming can improve responsiveness but don't increase execution speed. 20 | - **Multicore/multiprocessor** machines can run threads in parallel, truly speeding up execution. 
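To make the distinction concrete, here is a minimal C# sketch (the method names `FetchAllAsync` and `SumSquares` are illustrative, not from this repo): the first method uses async concurrency to keep many I/O operations in flight without blocking a thread, while the second uses `Parallel.For` to spread CPU-bound work across cores.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ConcurrencyVsParallelism
{
    // Concurrency: many I/O-bound operations in flight at once.
    // Even on a single core, no thread sits blocked while awaiting.
    static async Task FetchAllAsync(string[] urls)
    {
        using var client = new HttpClient();
        var tasks = new Task<string>[urls.Length];
        for (int i = 0; i < urls.Length; i++)
            tasks[i] = client.GetStringAsync(urls[i]); // starts without blocking

        await Task.WhenAll(tasks); // all downloads make progress concurrently
    }

    // Parallelism: CPU-bound work split across multiple cores.
    // Only a multicore machine actually runs these iterations simultaneously.
    static void SumSquares(double[] data)
    {
        double total = 0;
        object gate = new object();
        Parallel.For(0, data.Length,
            () => 0.0,                                     // per-thread partial sum
            (i, _, partial) => partial + data[i] * data[i], // runs on worker threads
            partial => { lock (gate) total += partial; }); // combine results safely
        Console.WriteLine(total);
    }
}
```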
21 | 22 | ## Related Data-Structures 23 | 24 | - **Concurrent Collections** like **`ConcurrentDictionary`**, **`BlockingCollection`**, and channels in **`System.Threading.Channels`** enable safe and efficient management of data in multithreaded scenarios. 25 | 26 | ## ConcurrentDictionary 27 | 28 | - A thread-safe dictionary that allows concurrent read and write operations. 29 | 30 | ## ConcurrentQueue 31 | 32 | - A thread-safe queue for concurrent producer and consumer scenarios. 33 | 34 | ## Channels 35 | 36 | - A more advanced concurrency primitive for communication between producers and consumers. 37 | -------------------------------------------------------------------------------- /Pages/Concurrency/Cancellation Token.md: -------------------------------------------------------------------------------- 1 | # Cancellation Token 2 | 3 | - **Usages:** Cancels asynchronous operations. 4 | - Managing cancellation in asynchronous and long-running operations. 5 | - These mechanisms allow developers to cooperatively cancel tasks or operations when needed, such as when the user requests cancellation, an operation times out, or when shutting down an application gracefully. 6 | 7 | ## **Creating a CancellationToken Using CancellationTokenSource** 8 | 9 | A **`CancellationToken`** cannot be instantiated directly; instead, it is created through a **`CancellationTokenSource`** (CTS). The CTS provides the capability to signal cancellation to one or more tokens. 10 | 11 | ```csharp 12 | var cancellationTokenSource = new CancellationTokenSource(); 13 | var cancellationToken = cancellationTokenSource.Token; 14 | ``` 15 | 16 | ## **Canceling a CPU-Bound Task** 17 | 18 | You can pass a **`CancellationToken`** to any task or operation that supports cancellation. It's up to the operation to check the cancellation token periodically and stop its work if cancellation has been requested. 19 | 20 | ```csharp 21 | Task.Run(() => 22 | { 23 | for (int i = 0; i < 100; i++) 24 | { 25 | if (cancellationToken.IsCancellationRequested) 26 | { 27 | Console.WriteLine("Cancellation requested."); 28 | break; // Exit the loop to cancel the operation 29 | } 30 | 31 | // Simulate work 32 | Thread.Sleep(100); 33 | } 34 | }, cancellationToken); 35 | ``` 36 | 37 | To request cancellation from another part of the application, you call **`Cancel`** on the **`CancellationTokenSource`**: 38 | 39 | ```csharp 40 | cancellationTokenSource.Cancel(); 41 | ``` 42 | 43 | ## **Timeout an Async Task Using CancellationTokens** 44 | 45 | To implement a timeout for an asynchronous operation, you can use the **`CancellationTokenSource`** constructor that takes a timespan or milliseconds as an argument. This automatically cancels the token after the specified duration. 46 | 47 | ```csharp 48 | var timeout = TimeSpan.FromSeconds(30); 49 | var cancellationTokenSource = new CancellationTokenSource(timeout); 50 | var cancellationToken = cancellationTokenSource.Token; 51 | 52 | try 53 | { 54 | await Task.Run(async () => 55 | { 56 | // Long-running operation 57 | await Task.Delay(TimeSpan.FromMinutes(1), cancellationToken); 58 | }, cancellationToken); 59 | } 60 | catch (TaskCanceledException) 61 | { 62 | Console.WriteLine("The operation has been canceled due to a timeout."); 63 | } 64 | ``` 65 | 66 | In this example, if the task does not complete within 30 seconds, the **`CancellationToken`** is cancelled, which in turn throws a **`TaskCanceledException`**. This allows the operation to be stopped due to the timeout. 
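A related pattern is combining several cancellation sources. The sketch below (the method name `RunWithTimeoutAsync` is illustrative) uses `CancellationTokenSource.CreateLinkedTokenSource` so that either a caller-supplied token or a timeout can cancel the same operation:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class LinkedCancellationExample
{
    // The linked token is cancelled as soon as EITHER source is cancelled:
    // the caller's token (e.g., user pressed "Cancel") or the internal timeout.
    static async Task RunWithTimeoutAsync(CancellationToken userToken)
    {
        using var timeoutCts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
        using var linkedCts = CancellationTokenSource.CreateLinkedTokenSource(
            userToken, timeoutCts.Token);

        try
        {
            // Stand-in for a long-running operation.
            await Task.Delay(TimeSpan.FromMinutes(1), linkedCts.Token);
        }
        catch (OperationCanceledException)
        {
            // Distinguish the cause: timeout vs. caller cancellation.
            Console.WriteLine(timeoutCts.IsCancellationRequested
                ? "The operation timed out."
                : "The operation was cancelled by the caller.");
        }
    }
}
```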
67 | 68 | ## **Best Practices** 69 | 70 | - **Cooperative Cancellation**: Cancellation in .NET is cooperative, meaning the operation being canceled must periodically check the **`CancellationToken`** and respond to cancellation requests. 71 | - **Handle TaskCanceledException**: When awaiting a task that receives a **`CancellationToken`**, be prepared to handle **`TaskCanceledException`** to manage task cancellation gracefully. 72 | - **Dispose of CancellationTokenSource**: It's good practice to dispose of **`CancellationTokenSource`** instances using **`using`** statements or manually calling **`Dispose()`** to free up resources. 73 | -------------------------------------------------------------------------------- /Pages/Concurrency/Locking.md: -------------------------------------------------------------------------------- 1 | # Locking 2 | 3 | Locking prevents multiple threads from modifying shared resources simultaneously, which could otherwise lead to data corruption and unpredictable behavior. 4 | 5 | - Ways 6 | - `lock` keyword 7 | - **Usage**: The **`lock`** keyword is a simple and convenient way to synchronize access to blocks of code to ensure that only one thread can execute them at a time. It's primarily used for locking small sections of code for short durations. 8 | - **Internally**: Under the hood, the **`lock`** keyword uses the **`Monitor`** class to acquire an exclusive lock on an object. If another thread has already acquired the lock, the current thread will block until the lock becomes available. 9 | - **Example**: 10 | 11 | ```csharp 12 | private readonly object _lockObject = new object(); 13 | 14 | public void CriticalSection() 15 | { 16 | lock (_lockObject) 17 | { 18 | // Thread-safe code here 19 | } 20 | } 21 | ``` 22 | 23 | - **`Monitor.Enter` / `Monitor.Exit`** 24 | - **Usage**: Provides functionality similar to the **`lock`** keyword but with more control over the locking mechanism. It's useful when more flexibility is needed, such as attempting to acquire a lock without blocking indefinitely. 25 | - **Internally**: Directly manages the acquisition and release of locks on objects. You must ensure **`Monitor.Exit`** is called to release the lock, typically within a **`finally`** block to guarantee execution. 26 | - **Example**: 27 | 28 | ```csharp 29 | private readonly object _lockObject = new object(); 30 | 31 | public void CriticalSection() 32 | { 33 | bool lockTaken = false; 34 | try 35 | { 36 | Monitor.Enter(_lockObject, ref lockTaken); 37 | // Thread-safe code here 38 | } 39 | finally 40 | { 41 | if (lockTaken) 42 | { 43 | Monitor.Exit(_lockObject); 44 | } 45 | } 46 | } 47 | ``` 48 | 49 | - Semaphore 50 | - **Usage:** Limits the number of threads that can access a resource concurrently. A semaphore maintains a count of permits, and threads must acquire a permit to proceed, which they release when they're done. 51 | - **Internally**: Manages a counter to keep track of the number of available permits. Threads wait if no permits are available and proceed when they can acquire a permit. 52 | - **Note**: Not suitable for async code; **`WaitOne()`** blocks the calling thread. Use **`SemaphoreSlim`** with **`WaitAsync()`** in asynchronous code.
53 | - **Example:** 54 | 55 | ```csharp 56 | private static Semaphore _semaphore = new Semaphore(3, 3); // Maximum of 3 concurrent threads 57 | 58 | public void AccessResource() 59 | { 60 | _semaphore.WaitOne(); // Acquire a permit 61 | try 62 | { 63 | // Access the shared resource 64 | } 65 | finally 66 | { 67 | _semaphore.Release(); // Release the permit 68 | } 69 | } 70 | ``` 71 | 72 | - Semaphore-Slim 73 | - **Usage:** A lightweight alternative to Semaphore for limiting concurrent access. 74 | - **Internally:** Uses efficient signaling mechanisms. 75 | - **Note**: Supports asynchronous waiting via **`WaitAsync()`**, making it suitable for async code. 76 | - **Example:** 77 | 78 | ```csharp 79 | private static SemaphoreSlim _semaphoreSlim = new SemaphoreSlim(3, 3); 80 | 81 | public async Task AccessResourceAsync() 82 | { 83 | await _semaphoreSlim.WaitAsync(); // Asynchronously wait to acquire the semaphore 84 | try 85 | { 86 | // Access the shared resource 87 | } 88 | finally 89 | { 90 | _semaphoreSlim.Release(); 91 | } 92 | } 93 | ``` 94 | -------------------------------------------------------------------------------- /Pages/Concurrency/Strategies.md: -------------------------------------------------------------------------------- 1 | # Strategies 2 | 3 | ## Optimistic 4 | 5 | - **Approach:** Assumes that conflicts between threads are infrequent. 6 | - **Locking:** Limited use of locks, optimistic about minimal contention. 7 | 8 | ## Pessimistic 9 | 10 | - **Approach:** Assumes that conflicts between threads are frequent and takes precautions. 11 | - **Locking:** More liberal use of locks to avoid potential contention. 12 | 13 | ## **Optimistic Concurrency Control** 14 | 15 | - **Approach**: Optimistic concurrency control operates under the assumption that conflicts for resources are rare. Instead of locking resources to manage access, it allows multiple transactions or threads to proceed concurrently, checking for conflicts only at the time of committing the changes. 16 | - **Locking Mechanism**: Typically involves little to no use of traditional locks. Instead, it checks if the resource was modified by another transaction or thread since it was last read. This can be implemented using version numbers, timestamps, or checksums. If a conflict is detected (e.g., the resource was modified by another transaction), the current operation may be retried or aborted. 17 | - **Use Cases**: Best suited for environments with low contention and where the cost of rolling back a transaction is less than the cost of locking resources. It's commonly used in web applications and scenarios where read operations significantly outnumber write operations. 18 | 19 | ## **Pessimistic Concurrency Control** 20 | 21 | - **Approach**: Pessimistic concurrency control takes a more cautious approach by assuming that conflicts are likely to occur. To prevent conflicts, it locks resources when they are being read or written to ensure that no other transaction or thread can access the same resource simultaneously. 22 | - **Locking Mechanism**: Involves explicit locking of resources for the duration of a transaction or operation. This can be implemented using database locks (such as row-level locks), mutexes, or semaphores in application code. The locked resources are only released when the transaction is completed or aborted, ensuring exclusive access. 23 | - **Use Cases**: Suitable for environments with high contention or when the integrity of a transaction is critical. 
It is often used in financial applications, inventory systems, or any scenario where ensuring the success of a transaction outweighs the cost of locking resources. 24 | 25 | ## **Choosing Between Optimistic and Pessimistic Concurrency Control** 26 | 27 | The choice between optimistic and pessimistic concurrency control depends on several factors, including: 28 | 29 | - **Contention Level**: Optimistic concurrency is preferred in low-contention scenarios, whereas pessimistic concurrency is suitable for high-contention environments. 30 | - **Operation Type**: Read-heavy workloads might benefit more from optimistic concurrency, while write-heavy or critical operations might require the use of pessimistic concurrency to ensure consistency. 31 | - **Performance and Scalability Requirements**: Optimistic concurrency can offer better performance and scalability in some cases by reducing the overhead associated with locking. However, it might lead to more transaction rollbacks in high-contention scenarios. 32 | - **Application Specifics**: The specific requirements and characteristics of the application, including the cost of transaction rollbacks, the criticality of operations, and user expectations, play a crucial role in determining the appropriate strategy. 33 | -------------------------------------------------------------------------------- /Pages/Concurrency/Thread Pool.md: -------------------------------------------------------------------------------- 1 | # Thread Pool 2 | 3 | - A pool of worker threads managed by the runtime. 4 | - These threads are created in advance and maintained in a pool, ready to execute asynchronous tasks, background operations, or any short-lived operations enqueued to the pool. 5 | - **Why should we use that?** 6 | - Efficiently manages and reuses threads, reducing the overhead of creating new threads. 7 | - **Efficiency in Managing Threads**: The `ThreadPool` efficiently manages the creation, destruction, and recycling of threads, which can be resource-intensive operations. By reusing threads for different tasks, the overhead associated with thread management is greatly reduced. 8 | - **Optimized Resource Utilization**: The `ThreadPool` automatically adjusts the number of threads in the pool based on the workload and the system's capability, optimizing the use of system resources. 9 | - **Improved Application Performance**: By minimizing the time and resources spent on thread management, applications can execute asynchronous operations more quickly and efficiently, leading to improved overall performance. 10 | - **Simplification of Multithreading Code**: Using the `ThreadPool` abstracts away the complexities of direct thread management, making it easier to write and maintain multithreading code. 11 | - **Running Code on a `ThreadPool` Thread (`ThreadPool.QueueUserWorkItem`)** 12 | - Uses `ThreadPool.QueueUserWorkItem` to queue work for execution on a `ThreadPool` thread. 13 | - This method takes a **`WaitCallback`** delegate, which represents the method to be executed. The delegate can optionally take an **`object`** parameter, allowing you to pass state information to the method being executed. 14 | 15 | ## **Considerations** 16 | 17 | - **Use for Short-Lived Operations**: The ThreadPool is optimized for short-lived operations. Long-running tasks can exhaust the ThreadPool, leading to scalability issues and degraded performance. For long-running operations, consider creating a dedicated thread or using **`Task.Run`** with **`TaskCreationOptions.LongRunning`**. 
18 | - **Limitations on Customization**: The ThreadPool offers limited control over the characteristics of its threads, such as priority or names. For scenarios requiring more control over thread behavior, creating dedicated threads might be more appropriate. 19 | -------------------------------------------------------------------------------- /Pages/Concurrency/Threading.md: -------------------------------------------------------------------------------- 1 | # Threading 2 | 3 | - **Sleep, Yield, Blocking, and Spinning** are mechanisms to manage thread execution and synchronization. 4 | - **Foreground vs Background Threads:** Foreground threads keep an application running, while background threads don't prevent an application from terminating. 5 | - **The Thread Pool** optimizes and manages a pool of threads for short-lived operations. 6 | - **Tasks and `Task.Run()`** are used for asynchronous operations, offering a higher-level abstraction over threads. 7 | - **Exception Handling** in asynchronous code involves capturing exceptions in tasks and using **`try-catch`** blocks. 8 | - **`async void`** methods are generally discouraged except for event handlers due to exception handling complexities. 9 | - **`ConfigureAwait(false)`** can be used in library code to avoid deadlocks by not capturing the synchronization context. 10 | - **Cancellation** involves using **`CancellationToken`** and **`CancellationTokenSource`** to cancel asynchronous operations gracefully. 11 | - **Synchronization** objects like **`SemaphoreSlim`**, **`Mutex`**, and **`ReaderWriterLockSlim`** help manage access to shared resources. 12 | -------------------------------------------------------------------------------- /Pages/Concurrency/Time-to-Live.md: -------------------------------------------------------------------------------- 1 | # Time-to-Live 2 | 3 | Time-to-Live (TTL) is an essential concept in the management of tasks in queues, especially in distributed systems, message brokers, and task queues. TTL refers to the duration that a task or message is allowed to live or stay in the queue before it is automatically removed or marked as expired. The implementation and use of TTL can vary depending on the system, but the core idea remains the same: controlling the lifespan of tasks in a queue to manage resources efficiently and ensure timely processing. 4 | 5 | ## **Importance of TTL in Task Queues** 6 | 7 | - **Resource Management**: TTL helps in preventing queues from becoming overloaded with stale tasks that are no longer relevant or have been superseded by more recent tasks. 8 | - **Timeliness**: Ensures that tasks are processed within a relevant timeframe. This is particularly important for tasks that are time-sensitive, where processing a task after its TTL has expired might be meaningless or could lead to incorrect outcomes. 9 | - **System Health and Performance**: By automatically removing expired tasks, TTL mechanisms help maintain system performance and prevent potential bottlenecks caused by the accumulation of unprocessed tasks. 10 | - **Failure Recovery**: In scenarios where a task cannot be processed due to system failures or temporary issues, TTL provides a mechanism to retry tasks until the TTL expires, after which the system can take appropriate actions, such as alerting administrators or logging errors. 
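As a concrete illustration of the custom-queue case discussed in the next section, here is a minimal C# sketch (the `QueuedTask` and `TtlQueue` types are hypothetical) that stamps each task with its enqueue time and drops expired tasks on dequeue:

```csharp
using System;
using System.Collections.Concurrent;

// A hypothetical task envelope carrying its enqueue time and TTL.
record QueuedTask(string Payload, DateTimeOffset EnqueuedAt, TimeSpan TimeToLive)
{
    public bool IsExpired(DateTimeOffset now) => now - EnqueuedAt > TimeToLive;
}

class TtlQueue
{
    private readonly ConcurrentQueue<QueuedTask> _queue = new();

    public void Enqueue(string payload, TimeSpan ttl) =>
        _queue.Enqueue(new QueuedTask(payload, DateTimeOffset.UtcNow, ttl));

    // Returns the next live task, silently skipping expired ones.
    // A production system would log or alert on dropped tasks instead.
    public QueuedTask? DequeueLive()
    {
        while (_queue.TryDequeue(out var task))
        {
            if (!task.IsExpired(DateTimeOffset.UtcNow))
                return task;
        }
        return null; // queue empty, or every remaining task had expired
    }
}
```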
11 | 12 | ## **Implementing TTL** 13 | 14 | The implementation of TTL can vary based on the underlying technology or platform being used: 15 | 16 | - **Message Brokers (e.g., RabbitMQ, Apache Kafka)**: These systems often provide built-in support for TTL on messages. For instance, RabbitMQ allows setting TTL for individual messages or for an entire queue. Apache Kafka handles message expiration differently, using log retention policies based on time or size. 17 | - **Custom Task Queues**: When implementing custom task queues, developers must explicitly manage TTL. This can involve checking the timestamp of each task and comparing it against the current time to determine if it has expired, then removing or ignoring tasks that are past their TTL. 18 | 19 | ## **Best Practices** 20 | 21 | - **Choose Appropriate TTL Values**: Set realistic TTL values based on the nature of the tasks and the expected processing timeframes. Consider the implications of both short and long TTL values on system behavior and resource usage. 22 | - **Monitor and Alert on TTL Expirations**: Implement monitoring to track when tasks expire without being processed, as this could indicate underlying system issues or misconfigurations. 23 | - **Handle Expired Tasks Appropriately**: Define clear policies for what happens to tasks when they expire, including logging, alerting, and retry mechanisms, if applicable. 24 | -------------------------------------------------------------------------------- /Pages/Cryptography.md: -------------------------------------------------------------------------------- 1 | # Cryptography 2 | 3 | ## Types of Attacks 4 | 5 | - **Cryptanalysis:** Attacker tries to crack the cipher by exploiting weaknesses in the algorithm or implementation. 6 | - **Brute Force:** Attacker tries every possible key combination until the correct one is found. 7 | 8 | ## Types of Encryption 9 | 10 | - **Symmetric:** Same key is used for encryption and decryption. 11 | - Examples: DES, AES 12 | - **Asymmetric:** Different keys are used for encryption and decryption. 13 | - Example: RSA 14 | 15 | ## Block vs Stream Cipher 16 | 17 | - **Block Cipher:** Data is divided into fixed-size blocks and each block is encrypted/decrypted separately. 18 | - Examples: DES, AES 19 | - **Stream Cipher:** Data is encrypted/decrypted one byte or bit at a time. 
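Before looking at the individual algorithms, here is a minimal sketch of symmetric encryption using .NET's built-in `Aes` class. Note how the same key and IV are used for both directions, which is exactly what makes the cipher symmetric (real code would manage keys securely and use a fresh random IV per message):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class SymmetricExample
{
    static void Main()
    {
        using var aes = Aes.Create(); // generates a random key and IV by default
        byte[] plaintext = Encoding.UTF8.GetBytes("secret message");

        // Encrypt: the same key (and IV) will be needed to decrypt.
        byte[] ciphertext;
        using (var encryptor = aes.CreateEncryptor())
            ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);

        // Decrypt with the same key/IV.
        byte[] roundTrip;
        using (var decryptor = aes.CreateDecryptor())
            roundTrip = decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);

        Console.WriteLine(Encoding.UTF8.GetString(roundTrip)); // "secret message"
    }
}
```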
20 | 21 | ## DES (Data Encryption Standard) 22 | 23 | - Symmetric block cipher 24 | - 64-bit block size 25 | - 56-bit key size (relatively short and insecure over time) 26 | - **Triple DES:** Improved variant that applies DES three times with different keys 27 | 28 | ## AES (Advanced Encryption Standard) 29 | 30 | - Symmetric block cipher (also known as Rijndael) 31 | - 128-bit block size 32 | - Stronger key sizes: 128, 192, or 256 bits 33 | 34 | ## RSA (Rivest–Shamir–Adleman) 35 | 36 | - Asymmetric encryption algorithm 37 | - Public key used for encryption, private key for decryption 38 | - Widely used for secure data transmission and key exchange 39 | 40 | ## Other Concepts 41 | 42 | - **Diffie-Hellman Key Exchange:** Secure method for exchanging cryptographic keys over an insecure channel 43 | - **Digital Signature:** Verifies the authenticity and integrity of digital data 44 | - **Digital Certificate:** Contains public key and identity information, issued by a trusted Certificate Authority (CA) 45 | - **SSL/TLS Handshake Protocol:** Establishes a secure connection between client and server, involving cipher suite negotiation, key exchange, and identity verification using digital certificates. 46 | -------------------------------------------------------------------------------- /Pages/DDD.md: -------------------------------------------------------------------------------- 1 | # DDD 2 | 3 | [Definition](DDD/Definition.md) 4 | 5 | [Strategic Design](DDD/Strategic%20Design.md) 6 | 7 | [Tactical Design](DDD/Tactical%20Design.md) 8 | 9 | [Patterns](DDD/Patterns.md) 10 | -------------------------------------------------------------------------------- /Pages/DDD/Definition.md: -------------------------------------------------------------------------------- 1 | # Definition 2 | 3 | Domain-Driven Design (DDD) is a software design approach focused on modeling complex software systems according to the domain they operate in and the core problems they aim to solve. DDD emphasizes close collaboration between technical experts and domain experts to ensure the software accurately reflects and addresses real-world requirements and challenges. 4 | 5 | ## Problem-Domain 6 | 7 | Understanding and defining the core issues and challenges in the domain. It's about grasping the "what" that needs to be solved or managed without necessarily delving into "how" it will be solved. The problem domain is essentially the set of problems, requirements, and contexts that the software application needs to address. 8 | 9 | ## Solution-Domain 10 | 11 | Creating solutions based on the identified problems in the domain. It is about the "how" – how to design, architect, and implement the software solution that meets the needs of the problem domain. 12 | 13 | ### **Connecting Problem and Solution Domains in DDD** 14 | 15 | DDD emphasizes a strong alignment between the problem domain and the solution domain through the use of a ubiquitous language—a shared language that bridges domain experts and technical team members. This ensures that the software model accurately reflects the complexities and nuances of the business it aims to serve. 16 | -------------------------------------------------------------------------------- /Pages/DDD/Patterns.md: -------------------------------------------------------------------------------- 1 | # Patterns 2 | 3 | ## Factory 4 | 5 | Creating objects without specifying the exact class. 
6 | 7 | - **Purpose:** Factories encapsulate the logic of creating instances of complex objects or aggregates, ensuring that all necessary initializations are performed. 8 | - **Application:** Use factories when creating an object involves more than just instantiating a class, especially when dealing with complex aggregates. 9 | 10 | ## Repository 11 | 12 | Mediating between the domain and data mapping layers. 13 | 14 | - **Purpose:** Repositories abstract the mechanism of storing and retrieving domain entities, providing a collection-like interface for accessing domain objects. 15 | - **Application:** Implement repositories for each Aggregate Root to encapsulate all code needed to retrieve and persist those aggregates. 16 | 17 | ## Unit of Work 18 | 19 | Maintaining a list of objects affected by a business transaction. 20 | 21 | - **Purpose:** Maintains knowledge of all changes to objects (entities or aggregates) within a business transaction and coordinates the writing out of changes and the resolution of concurrency problems. 22 | - **Application:** Use in scenarios where multiple changes to the domain model need to be coordinated and persisted atomically. 23 | 24 | ## Event Sourcing 25 | 26 | Storing the state of an entity as a sequence of events. 27 | 28 | - **Purpose:** Instead of storing just the current state of an entity, Event Sourcing stores each state-changing operation as a unique event. The current state can be reconstructed by replaying these events. 29 | - **Application:** Useful for systems where understanding the sequence of events leading to a state is crucial, such as in auditing or complex business processes. 30 | 31 | ## CQRS 32 | 33 | Separating read and write operations for data storage. 34 | 35 | - **Purpose:** Separates the models for reading data from the models for updating data, allowing optimizations for each function and improving scalability and performance. 36 | - **Application:** Implement in systems where the read and write workloads are significantly different or where a clear separation can simplify the design and improve performance. 37 | -------------------------------------------------------------------------------- /Pages/DDD/Strategic Design.md: -------------------------------------------------------------------------------- 1 | # Strategic Design 2 | 3 | ## Ubiquitous Language 4 | 5 | Establishing a common language that is shared between developers and domain experts. 6 | 7 | - **Purpose:** Ubiquitous Language is about establishing a common vocabulary that is shared among all stakeholders involved in the project, including developers, domain experts, and business stakeholders. This shared language is used both in the code and in discussions about the system, ensuring clarity of communication and that all parties have a shared understanding of the domain concepts. 8 | - **Benefits:** Reduces misunderstandings and miscommunication by ensuring that terms and phrases are used consistently throughout the project. It also helps in making the code more readable and aligned with the domain, facilitating easier maintenance and development. 9 | 10 | ## Bounded Context 11 | 12 | Defining explicit boundaries within which a particular model is defined and applicable. 13 | 14 | - **Purpose:** A Bounded Context is a logical boundary within which a specific domain model is defined and applicable. It marks the limits of a particular subsystem or area of interest, within which a particular model is valid. 
Different bounded contexts may have different models for the same concept, depending on their specific needs and interpretations. 15 | - **Benefits:** Helps in dealing with the complexity of large systems by dividing them into more manageable and loosely coupled subsystems. It allows different teams to work independently on different parts of the system without the need for constant coordination. It also facilitates the integration of legacy systems or external systems by defining clear boundaries and interfaces. 16 | 17 | ## Sub-domains 18 | 19 | Identifying distinct areas or categories within the overall domain. 20 | 21 | - **Purpose:** Sub-domains are smaller parts of the overall domain, each focusing on a specific aspect of the business or system. They are identified during the domain exploration phase and can be categorized into core domains, supporting sub-domains, and generic sub-domains, based on their relevance and strategic importance to the business. 22 | - **Benefits:** Identifying sub-domains allows organizations to prioritize development efforts, focusing on the core domains that are critical to the business's success. It also helps in organizing the development team structure and in deciding when to build custom solutions versus when to buy or outsource. 23 | -------------------------------------------------------------------------------- /Pages/DDD/Tactical Design.md: -------------------------------------------------------------------------------- 1 | # Tactical Design 2 | 3 | ## Aggregate Root & Aggregate 4 | 5 | Defining the root entity that controls access to a cluster of related entities. 6 | 7 | - **Purpose:** An Aggregate is a cluster of domain objects that can be treated as a single unit for data changes. The Aggregate Root is the main entity within the Aggregate, through which all interactions with the Aggregate's entities should occur. This concept helps in enforcing business rules and ensuring consistency. 8 | - **Application:** Use an Aggregate Root to control access and changes to data within the Aggregate, ensuring integrity and consistency according to the domain rules. 9 | 10 | ## Value-Object 11 | 12 | Objects without an identity, defined by their attributes. 13 | 14 | - **Purpose:** Value Objects are objects that are defined entirely by their attributes and do not have a distinct identity. They are often used to represent concepts within the domain that are important for the definition but do not require identity tracking. 15 | - **Application:** Implement common domain concepts like Money, Quantity, or Address as Value Objects to enhance readability and ensure domain logic consistency. 16 | 17 | ## Domain Services 18 | 19 | Services that encapsulate domain logic not fitting naturally into entities or value objects. 20 | 21 | - **Purpose:** Domain Services encapsulate business logic that doesn't naturally fit within the context of an entity or value object. These services are stateless and usually operate on domain objects. 22 | - **Application:** Use Domain Services for operations that span multiple aggregates or when an action does not belong to a single entity or value object. 23 | 24 | ## Application Services 25 | 26 | Orchestrating the execution of application use cases. 27 | 28 | - **Purpose:** Application Services act as the interface between the outside world and the domain model. They orchestrate the execution of domain operations and transactions, coordinating the flow of data in and out of the domain model. 
29 | - **Application:** Implement use cases or business processes by coordinating calls to methods on entities and domain services, managing transaction boundaries and security. 30 | 31 | ## Domain Events 32 | 33 | Events representing state changes within the domain. 34 | 35 | - **Purpose:** Domain Events are significant events within the domain model that represent state changes or important occurrences. They facilitate communication between parts of the system in a decoupled manner. 36 | - **Application:** Use Domain Events to notify other parts of the system about changes or important occurrences, enabling reactions to these events without tight coupling. 37 | 38 | ## Context Mapping 39 | 40 | Strategies for dealing with the interconnection of different bounded contexts. 41 | 42 | - **Purpose:** Context Mapping identifies and documents the relationships and interactions between different bounded contexts in a system. It helps in understanding and managing dependencies and integrations. 43 | - **Application:** Use Context Mapping to document and design the integration points between bounded contexts, choosing appropriate patterns and strategies for each interaction. 44 | 45 | ## Integration between BCs (Messaging, RPC, ...) 46 | 47 | Mechanisms for communication and integration between bounded contexts. 48 | 49 | - **Purpose:** Defines the mechanisms for bounded contexts to communicate and integrate with each other, ensuring data consistency and integrity across the system. 50 | - **Application:** Implement communication between bounded contexts using messaging for asynchronous integration or RPC for synchronous calls, based on the needs of the application. 51 | 52 | ## Entity Persistence 53 | 54 | Storing and retrieving domain entities from a data store. 55 | 56 | - **Purpose:** Concerned with how entities and aggregates are stored and retrieved from a persistent storage mechanism, like a database. 57 | - **Application:** Design persistence mechanisms that translate between the database schema and the domain model, ensuring that domain logic remains isolated from data access concerns. 58 | -------------------------------------------------------------------------------- /Pages/Data Structures.md: -------------------------------------------------------------------------------- 1 | # Data Structures 2 | 3 | [Time Space Complexity](Data%20Structures/Time%20Space%20Complexity.md) 4 | 5 | [Linear](Data%20Structures/Linear.md) 6 | 7 | [Sorting](Data%20Structures/Sorting.md) 8 | 9 | [Search](Data%20Structures/Search.md) 10 | 11 | [Tree](Data%20Structures/Tree.md) 12 | 13 | [Binary Tree and Heaps](Data%20Structures/Binary%20Tree.md) 14 | 15 | [Tries](Data%20Structures/Tries.md) 16 | 17 | [Graphs](Data%20Structures/Graphs.md) 18 | 19 | [String Manipulation](Data%20Structures/String%20Manipulation.md) 20 | -------------------------------------------------------------------------------- /Pages/Data Structures/Binary Tree.md: -------------------------------------------------------------------------------- 1 | # Binary Tree Problems 2 | 3 | - Finding the minimum value in a Binary Search Tree can be solved by following left children down to the leftmost node, resulting in O(log n) time complexity for a balanced tree and O(n) for a skewed tree (in a general binary tree, with no ordering property, every node must be visited, O(n)). 4 | - Validating a Binary Tree as a Binary Search Tree can be solved by performing an in-order traversal and checking if the values are in ascending order, resulting in O(n) time complexity. 
5 | - Finding nodes at distance K from the root can be solved using a combination of depth-first and breadth-first traversals, resulting in O(n) time complexity. 6 | 7 | ## Heaps 8 | 9 | Complete binary tree with heap property, used for priority queues and heap sort 10 | 11 | - A heap is a complete binary tree that satisfies the heap property (either min-heap or max-heap). 12 | - Insertion and deletion operations have a time complexity of O(log n) or O(h), where h is the height of the heap. 13 | - Heaps are more efficiently implemented using an array than a tree structure. 14 | - **Heap Sort**: O(n log n) time complexity. 15 | - **Priority Queue**: Array implementation has O(n) time complexity for insertion and O(1) for deletion, while heap implementation has O(log n) time complexity for both insertion and deletion. 16 | - **Problems**: Finding the Kth largest item in a list can be solved using a max heap, and implementing a heapify algorithm transforms an array into a heap in-place. 17 | -------------------------------------------------------------------------------- /Pages/Data Structures/Graphs.md: -------------------------------------------------------------------------------- 1 | # Graphs 2 | 3 | There are two primary types of graphs: 4 | 5 | - **Directed Graphs (Digraphs):** Where edges have a direction, from one vertex to another. 6 | - **Undirected Graphs:** Where edges are bidirectional. 7 | 8 | Collection of vertices and edges 9 | 10 | - Graphs are used to represent connected objects, and a tree is a type of graph without cycles. 11 | - **Adjacency Matrix**: 12 | - Space complexity: O(n^2) 13 | - Add/Remove Node: O(V^2), where V is the number of vertices 14 | - Add/Remove Edge: O(1) 15 | - Find adjacent nodes: O(V) 16 | - **Adjacency List**: 17 | - Space complexity: O(V+E), where E is the number of edges 18 | - Add Node: O(1) 19 | - Remove Node: O(V+E) 20 | - Add/Remove Edge: O(V) 21 | - Check if two nodes are connected: O(V) 22 | - Find adjacent nodes: O(K) or O(V), where K is the number of adjacent nodes 23 | - **Traversal**: 24 | - **Depth-First**: Uses recursion or iteration with a HashSet to track visited nodes. 25 | - **Breadth-First**: Uses a queue or iteration. 26 | - **Topological Sort**: Performed using Depth-First traversal and a stack. 27 | - Is possible only for Directed Acyclic Graphs (DAGs) and is useful in scheduling tasks, ordering of cells in spreadsheets, and determining compilation sequence in programming languages. 28 | - **Cycle Detection**: Algorithms exist to detect cycles in graphs. 29 | - Directed graphs, this can be done using algorithms like Depth-First Search (DFS) with additional data structures to track ancestors. 30 | - Undirected graphs, DFS or Union-Find can be used to detect cycles. 31 | - Cycle detection is crucial in applications such as network analysis, deadlock detection in operating systems, and more. 32 | - **Problems**: Finding the shortest path between two nodes and finding a node's "best friend" (adjacent node with the highest weight) are common graph problems. 33 | -------------------------------------------------------------------------------- /Pages/Data Structures/Linear.md: -------------------------------------------------------------------------------- 1 | # Linear Data Structures 2 | 3 | - **Array:** Contiguous block of memory, constant time access 4 | - **Lookup**: Accessing an element in an array by its index is a constant time operation, O(1). 
5 | - **Insertion**: Inserting an element at the end of an array is a constant time operation, O(1), but if the array needs to grow, it requires allocating a new larger array and copying the elements, resulting in O(n) time complexity. 6 | - **Deletion**: Removing an element from the end of an array is a constant time operation, O(1), but if elements need to be shifted, it requires O(n) time complexity. 7 | - **Linked List:** Chain of nodes, efficient insertions/deletions 8 | - **Lookup**: Finding an element in a linked list by its value or index requires traversing the list, resulting in O(n) time complexity. 9 | - **Insertion**: Inserting an element at the beginning of a linked list is a constant time operation, O(1). Inserting at the end or in the middle requires traversing the list, resulting in O(n) time complexity, unless you maintain a tail pointer for inserting at the end, which is O(1). 10 | - **Deletion**: Removing an element from the beginning of a linked list is a constant time operation, O(1). Removing from the end or in the middle requires traversing the list, resulting in O(n) time complexity; a tail pointer alone does not help here, because the node before the tail must still be found. O(1) removal from the end requires a doubly linked list. 11 | - **Problems**: Finding the Kth node from the end can be solved using two pointers with a distance of `K-1` between them, resulting in O(n) time complexity. 12 | - **Stack:** Last-In-First-Out (LIFO) data structure 13 | - All operations on a stack, such as push, pop, and peek, have a constant time complexity, O(1). 14 | - **Problem**: Checking if an expression is balanced (e.g., parentheses, brackets) can be solved using a stack. 15 | - **Queue:** First-In-First-Out (FIFO) data structure 16 | - All operations on a queue, such as enqueue and dequeue, have a constant time complexity, O(1). 17 | - **Implementations**: A queue can be implemented using a circular array with two pointers for enqueue and dequeue, or using two stacks, one for enqueue and one for dequeue. 18 | - **Problems**: Reversing the items in a queue can be solved by using an auxiliary stack, resulting in O(n) time complexity. 19 | - **Priority Queue**: A priority queue can be implemented using an array or a heap, with different time complexities for insertion and deletion operations. 20 | - **Hash Table:** Key-value storage with constant time access on average 21 | - Hash functions are deterministic, meaning they will always produce the same output for a given input. 22 | - Hash tables use an array to store items internally. 23 | - All operations (insertion, lookup, deletion) have an average time complexity of O(1), except when iterating over the values, which has a time complexity of O(n). 24 | - **Collision Handling**: Collisions (when two keys map to the same index) can be handled using techniques like chaining or open addressing (linear probing, quadratic probing, double hashing). 25 | - **Problems**: Finding the first non-repeated character, removing duplicate items in an array, or finding the first repeated character can be solved using a set, resulting in O(n) time complexity. 
26 | 27 | -------------------------------------------------------------------------------- /Pages/Data Structures/Search.md: -------------------------------------------------------------------------------- 1 | # Search Algorithms 2 | 3 | - **Linear Search:** Brute force search, O(n) time complexity 4 | - **Binary Search:** Efficient search for sorted arrays, O(log n) time complexity 5 | - O(log n) time complexity and O(log n) space complexity for the recursive implementation, or O(1) space complexity for the iterative implementation. 6 | -------------------------------------------------------------------------------- /Pages/Data Structures/Sorting.md: -------------------------------------------------------------------------------- 1 | # Sorting Algorithms 2 | 3 | - **Bubble Sort:** Simple but inefficient for large inputs 4 | - O(n^2) time complexity. 5 | - **Selection Sort:** In-place but inefficient for large inputs 6 | - O(n^2) time complexity. 7 | - **Insertion Sort:** Efficient for small or mostly sorted inputs 8 | - O(n^2) time complexity, but efficient for small or mostly sorted inputs. 9 | - **Merge Sort:** Divide-and-conquer, efficient and stable 10 | - O(n log n) time complexity and O(n) space complexity. 11 | - Divides the input array into two halves, calls itself for the two halves, and then merges the two sorted halves. 12 | - **Quick Sort:** Divide-and-conquer, efficient but unstable 13 | - O(n log n) average time complexity, but O(n^2) in the worst case when the pivot is not chosen optimally. 14 | - It picks an element as pivot and partitions the given array around the picked pivot. 15 | -------------------------------------------------------------------------------- /Pages/Data Structures/String Manipulation.md: -------------------------------------------------------------------------------- 1 | # String Manipulation 2 | 3 | - **Count Vowels**: Use a loop to iterate over the string and count vowels, resulting in O(n) time complexity. 4 | - **Reverse**: Iterate from the start, iterate from the end, or use a stack, all resulting in O(n) time complexity. 5 | - **Reverse Words in a Sentence**: Use a stack or iterate from the end of words, resulting in O(n) time complexity. 6 | - **Remove Duplicates**: Use a HashSet to track visited characters, resulting in O(n) time complexity. 7 | - **Most Repeated Character**: Use a HashMap or an array of size 256 (ASCII values) to count character frequencies, resulting in O(n) time complexity. 8 | -------------------------------------------------------------------------------- /Pages/Data Structures/Time Space Complexity.md: -------------------------------------------------------------------------------- 1 | # Time and Space Complexity 2 | 3 | - **Time Complexity:** How the runtime scales with input size 4 | - **Space Complexity:** How the memory usage scales with input size 5 | 6 | ## Time Complexity 7 | 8 | - **O(1)**: Constant time, the algorithm's time complexity does not depend on the input size. 9 | - **O(log n)**: Logarithmic time, the algorithm's time complexity grows logarithmically with the input size. 10 | - **O(n)**: Linear time, the algorithm's time complexity grows linearly with the input size. 11 | - **O(n log n)**: Linearithmic time, the algorithm's time complexity grows as the product of linear and logarithmic factors. 12 | - **O(n^2)**: Quadratic time, the algorithm's time complexity grows quadratically with the input size.
13 | - **O(n^3)**: Cubic time, the algorithm's time complexity grows as a cubic function of the input size. 14 | - **O(n^k)**: Polynomial time, the algorithm's time complexity grows as a polynomial function of the input size. 15 | - **O(a^n)**: Exponential time, the algorithm's time complexity grows exponentially with the input size. 16 | - **O(n!)**: Factorial time, the algorithm's time complexity grows as the factorial of the input size. 17 | 18 | ## Space Complexity 19 | 20 | - The amount of memory or space required by an algorithm or data structure as the input size grows. 21 | -------------------------------------------------------------------------------- /Pages/Data Structures/Tree.md: -------------------------------------------------------------------------------- 1 | # Tree Data Structures 2 | 3 | - **Binary Search Tree:** Sorted data with efficient search, insert and delete operations 4 | - The value of each node is greater than all values in its left subtree and less than all values in its right subtree. 5 | - Lookup, insertion, and deletion operations have an average time complexity of O(log n) and a worst-case time complexity of O(n) when the tree is skewed. 6 | - **Tree Traversals:** Different ways to visit nodes in a tree 7 | - **Breadth-First:** Level by level 8 | - Visits all nodes at the same level before moving to the next level. 9 | - **Depth-First:** Pre-order, In-order, Post-order 10 | - **Pre-order**: Root -> Left -> Right 11 | - **Post-order**: Left -> Right -> Root 12 | - **In-order**: Left -> Root -> Right (ascending order for BST) 13 | - **Balanced Trees:** Self-balancing trees for efficient operations 14 | - **Self-Balancing Trees**: AVL Tree, Red-Black Tree, and B-Tree are examples of self-balancing trees that maintain a balanced structure to ensure logarithmic time complexity for operations. 15 | - O(log n) complexity 16 | - **AVL Tree:** Height-balanced binary search tree 17 | - The heights of the two child subtrees of any node differ by no more than one. 18 | - **Red-Black Tree:** Self-balancing binary search tree 19 | - Ensures balance by coloring nodes red or black 20 | - **B-Tree:** Self-balancing tree for disk-based data structures 21 | - Allows more than two children per node. B-Trees are optimized for systems that read and write large blocks of data, such as databases and filesystems. 22 | -------------------------------------------------------------------------------- /Pages/Data Structures/Tries.md: -------------------------------------------------------------------------------- 1 | # Tries (Digital, Radix, or Prefix Tree) 2 | 3 | Prefix tree, efficient information retrieval operations 4 | 5 | - Tries are not binary trees; they are tree-based data structures used for efficient information retrieval operations like prefix search and autocomplete. 6 | - Lookup, insertion, and deletion operations have a time complexity of O(L), where L is the length of the word being processed. 7 | - Pre-order traversal is used to print all words, and post-order traversal is used to delete a word.
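To make the O(L) costs concrete, here is a minimal C# trie sketch (the class shape is illustrative, not tied to any particular library); both operations walk at most L nodes, where L is the length of the input:

```csharp
using System.Collections.Generic;

public class Trie
{
    private sealed class Node
    {
        public Dictionary<char, Node> Children { get; } = new Dictionary<char, Node>();
        public bool IsEndOfWord { get; set; }
    }

    private readonly Node _root = new Node();

    // Insert visits (or creates) one node per character: O(L).
    public void Insert(string word)
    {
        var current = _root;
        foreach (var ch in word)
        {
            if (!current.Children.TryGetValue(ch, out var next))
            {
                next = new Node();
                current.Children[ch] = next;
            }
            current = next;
        }
        current.IsEndOfWord = true;
    }

    // Prefix lookup also walks at most L nodes: O(L).
    public bool StartsWith(string prefix)
    {
        var current = _root;
        foreach (var ch in prefix)
        {
            if (!current.Children.TryGetValue(ch, out current))
                return false;
        }
        return true;
    }
}
```

For example, after `Insert("car")`, a call to `StartsWith("ca")` returns true without scanning any other stored words.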
8 | -------------------------------------------------------------------------------- /Pages/Database.md: -------------------------------------------------------------------------------- 1 | # Database 2 | 3 | [Concepts](Database/Concepts.md) 4 | 5 | [Indexing](Database/Indexing.md) 6 | 7 | [CAP Theorem](Database/CAP%20Theorem.md) 8 | 9 | [Transactions and Isolation Levels](Database/Transactions%20and%20Isolation%20Levels.md) 10 | 11 | [Deadlocks](Database/Deadlocks.md) 12 | 13 | [SQL](Database/SQL.md) 14 | 15 | [NoSQL](Database/NoSQL.md) 16 | 17 | [ACID in Depth](Database/ACID%20in%20Depth.md) 18 | -------------------------------------------------------------------------------- /Pages/Database/CAP Theorem.md: -------------------------------------------------------------------------------- 1 | # CAP Theorem 2 | 3 | - **Consistency** 4 | - Consistency means that all nodes see the same data at the same time. Precisely, any read operation on the system returns the value of the most recent write operation. Consistency ensures that a system behaves much like a single, non-distributed system, from the perspective of its users. 5 | - **Availability** 6 | - Availability ensures that every request receives a response, regardless of the success or failure of the operation. In practical terms, it means that every request made to the system must result in some kind of response to the client, even if a network partition occurs. 7 | - **Partition Tolerance** 8 | - Partition Tolerance means that the system continues to operate despite arbitrary message loss or failure of part of the system (i.e., network partitions). A partition-tolerant system can sustain any amount of network failure that does not result in a failure of the entire network. 9 | - Describes the trade-offs between Consistency, Availability, and Partition Tolerance 10 | - **CP (Consistency and Partition Tolerance):** The system prioritizes consistency and partition tolerance over availability. This means that in the event of a network partition, the system might choose to respond only to requests that can be served with consistent data, potentially sacrificing availability for some nodes. 11 | - **AP (Availability and Partition Tolerance):** The system prioritizes availability and partition tolerance over consistency. In the case of a network partition, the system will continue to operate and respond to requests, even if it cannot guarantee that all nodes have the most recent data. Eventually consistent systems, which allow for temporary inconsistencies but converge towards consistency over time, often fall into this category. 12 | - **CA (Consistency and Availability):** While theoretically a system could choose to prioritize consistency and availability, in practice, this choice is not viable for distributed systems because partition tolerance is a necessity in any networked environment. Thus, CA systems are typically not considered in the context of the CAP Theorem, which focuses on distributed systems where network partitions are a given. 13 | - In the presence of network partitions, a distributed system must choose between CP or AP 14 | -------------------------------------------------------------------------------- /Pages/Database/Concepts.md: -------------------------------------------------------------------------------- 1 | # Concepts 2 | 3 | ## Normalization 4 | 5 | The main goals of normalization include reducing redundancy, organizing data efficiently, and ensuring data integrity. 
6 | 7 | - Suitable for Online Transaction Processing (OLTP) systems 8 | - Efficiency and speed of transactions are critical 9 | - **1NF (First Normal Form):** Eliminate duplicate columns from the same table; each piece of data in the table is stored in its smallest possible form. 10 | - **2NF (Second Normal Form):** Non-prime attributes are fully dependent on the primary key 11 | - **3NF (Third Normal Form):** No transitive dependencies, ensuring data integrity 12 | 13 | ## Denormalization 14 | 15 | Denormalization is a strategy used in database design to improve the read performance of a database at the cost of some redundancy and potential loss in data integrity. 16 | 17 | - Suitable for Online Analytical Processing (OLAP) systems 18 | - Complex queries and analyses over large volumes of data 19 | - Require fast query performance to handle aggregations, summaries, and analyses across vast datasets 20 | - The emphasis is on optimizing the speed and efficiency of complex queries rather than on transactional integrity. 21 | - Improves query performance by adding redundant data 22 | - Trades off data integrity for read efficiency 23 | 24 | ## **When to Use Denormalization** 25 | 26 | - The database is read-heavy, and there is a clear need for optimizing query performance over transactional updates. 27 | - Data is relatively static, and updates are infrequent, minimizing the risks and overheads associated with maintaining redundant data. 28 | - The complexity of queries and the size of the datasets involved justify the trade-offs in terms of data redundancy and integrity. 29 | -------------------------------------------------------------------------------- /Pages/Database/Deadlocks.md: -------------------------------------------------------------------------------- 1 | # Deadlocks 2 | 3 | - Occur when two or more transactions are waiting for one another to release resources 4 | - **SQL Server Deadlock Resolution:** 5 | - Chooses to terminate the transaction that did the least work (based on transaction log) 6 | - **DEADLOCK_PRIORITY:** Sets priority for which transaction is chosen 7 | - **Deadly Embrace Deadlock:** Circular chain of two or more threads, with each holding one or more resources that are being requested by another thread 8 | -------------------------------------------------------------------------------- /Pages/Database/Indexing.md: -------------------------------------------------------------------------------- 1 | # Indexing 2 | 3 | Creating a data structure (an index) that allows for faster searches. Indexes are created on columns that are used frequently in query predicates (e.g., WHERE clauses, JOIN conditions). When an index is created, the DBMS maintains a separate data structure (usually a B-tree or a hash table) that maps the values of the indexed column(s) to the corresponding row locations in the table. 4 | 5 | - Indexes improve query performance by providing faster data access 6 | - Trade-off between read and write performance 7 | - This is because the index must be updated whenever data in the indexed column is added, removed, or altered, which can slow down these operations. 8 | - Indexes consume additional disk space, causing storage overhead 9 | - Maintenance Cost 10 | 11 | ## **Choosing Columns to Index** 12 | 13 | The decision to create an index on a particular column or set of columns should be informed by the specific queries that are most important for the application's performance.
Key considerations include: 14 | 15 | - Columns used frequently in WHERE clauses. 16 | - Columns used in JOIN conditions. 17 | - Columns used for sorting data (ORDER BY). 18 | 19 | ## **Clustered Index** 20 | 21 | - **Definition**: A clustered index determines the physical order of data in a table. It sorts and stores the data rows in the table based on the indexed columns. Because of this, each table can have only one clustered index. 22 | - **Key Characteristics**: 23 | - The leaf nodes of a clustered index contain the actual data rows of the table. 24 | - Searching for data using the clustered index is fast because the index search can lead directly to the data row. 25 | - Clustered indexes are particularly efficient for range queries that retrieve a range of values. 26 | - **Considerations**: 27 | - Since the clustered index defines the physical order of data, inserting and updating operations can be slower, especially if the new data must be inserted in the middle of existing data, potentially causing page splits. 28 | - The choice of the clustered index is critical because it affects the overall storage and access patterns of the data. 29 | 30 | ## **Non-Clustered Index** 31 | 32 | - **Definition**: A non-clustered index is a type of index where the order of the index keys (columns) is separate from the physical order of the rows in the table. A table can have multiple non-clustered indexes. 33 | - **Key Characteristics**: 34 | - The leaf nodes of a non-clustered index contain index keys and pointers to the corresponding data rows. These pointers are either the clustered index key (if one exists) or a row identifier (RID) if the table is a heap (without a clustered index). 35 | - Non-clustered indexes are beneficial for quickly accessing data based on the indexed column(s), without affecting the physical order of the table. 36 | - They are ideal for columns used frequently in search conditions (**`WHERE`** clauses) or join conditions but not as the primary means of accessing data. 37 | - **Considerations**: 38 | - Non-clustered indexes consume additional disk space because they are stored separately from the table data. 39 | - Care must be taken not to create too many non-clustered indexes on a table, as this can degrade write performance due to the overhead of maintaining multiple indexes during data modification operations. 40 | -------------------------------------------------------------------------------- /Pages/Database/NoSQL.md: -------------------------------------------------------------------------------- 1 | # NoSQL 2 | 3 | ## Types of NoSQL Databases 4 | 5 | ### Graph Database 6 | 7 | - Graph databases store data as nodes and relationships (edges) between nodes. 8 | - Examples: Neo4j, Amazon Neptune, OrientDB 9 | - Use cases: Social networks, recommendation engines, fraud detection 10 | 11 | ### Document-Oriented Store 12 | 13 | - Document databases store data in semi-structured documents, like JSON or XML. 14 | - Examples: MongoDB, Couchbase, Amazon DocumentDB 15 | - Use cases: Content management systems, user profiles, catalogs 16 | 17 | ### Object Storage 18 | 19 | - Object storage systems store data as objects (files) with metadata. 20 | - Examples: Amazon S3, Google Cloud Storage, Azure Blob Storage 21 | - Use cases: Media storage, backup and archiving, big data analytics 22 | 23 | ### Column-Oriented 24 | 25 | - Column-oriented databases store data in columns instead of rows. 
26 | - Examples: Apache Cassandra, HBase, Scylla 27 | - Use cases: Time-series data, high-throughput data ingestion, IoT data 28 | 29 | ### Key–Value Store 30 | 31 | - Key-value stores associate a key with a value, providing simple and fast data access. 32 | - Examples: Redis, Amazon DynamoDB, Apache Ignite 33 | - Use cases: Caching, session management, real-time applications 34 | 35 | Each type of NoSQL database has its strengths and use cases. The choice depends on factors such as data structure, scalability requirements, query patterns, and performance needs. 36 | 37 | For example, graph databases excel at handling highly connected data with complex relationships, while document databases provide flexibility for semi-structured data. Column-oriented databases are optimized for analytical workloads and time-series data, while key-value stores offer lightning-fast access for simple data models. 38 | -------------------------------------------------------------------------------- /Pages/Design Patterns.md: -------------------------------------------------------------------------------- 1 | # Design Patterns 2 | 3 | [Creational / Structural / Behavioral](Design%20Patterns/Creational%20Structural%20Behavioral.md) 4 | 5 | [Principles](Design%20Patterns/Principles.md) 6 | 7 | [Code Smells](Design%20Patterns/Code%20Smells.md) 8 | -------------------------------------------------------------------------------- /Pages/Design Patterns/Code Smells.md: -------------------------------------------------------------------------------- 1 | # Code Smells 2 | 3 | - **Bloaters:** Long Method, Large Class, Primitive Obsession, Long Parameter List, Data Clumps 4 | - **Data Clumps:** Different parts of the code contain identical groups of variables (e.g., parameters for connecting to a database). These clumps should be turned into their own class. 5 | - **Object-Orientation Abusers:** Switch Statements, Temporary Field, Refused Bequest, Alternative Classes with Different Interfaces 6 | - **Temporary Field:** Objects that have fields that are only used in certain situations. 7 | - **Refused Bequest:** A subclass that doesn't use all of the properties and methods inherited from its parent class. 8 | - **Alternative Classes with Different Interfaces:** Two classes perform similar functions but have different method names. 9 | - **Change Preventers:** Divergent Change, Shotgun Surgery, Parallel Inheritance Hierarchies 10 | - **Divergent Change:** When you have to change many unrelated methods when you make changes to a class. 11 | - **Shotgun Surgery:** When a single change is made to multiple classes simultaneously. 12 | - **Parallel Inheritance Hierarchies:** Every time you make a subclass of one class, you also have to make a subclass of another. 13 | - **Dispensables:** Comments, Duplicate Code, Lazy Class, Data Class, Dead Code, Speculative Generality 14 | - **Couplers:** Feature Envy, Inappropriate Intimacy, Message Chains, Middle Man 15 | - **Feature Envy:** A method seems more interested in a class other than the one it actually is in. 16 | - **Inappropriate Intimacy:** One class knows too much about the internals of another class. 17 | - **Message Chains:** A pattern like **`a.getB().getC().doSomething()`**, where a client is coupled to the structure of the navigation. 18 | - **Middle Man:** A class that seems to just delegate its work to other classes. 
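To make one of the smells above concrete, here is a hedged C# sketch of refactoring a data clump; the connection parameters and type names are invented for the example:

```csharp
// Before: the same group of parameters travels together through many signatures (a data clump).
public class LegacyDatabaseClient
{
    public void Connect(string host, int port, string username, string password)
    {
        // ... open the connection ...
    }
}

// After: the clump becomes its own type, so the group is named and passed as a single unit.
public record ConnectionInfo(string Host, int Port, string Username, string Password);

public class DatabaseClient
{
    public void Connect(ConnectionInfo connection)
    {
        // ... open the connection using connection.Host, connection.Port, etc. ...
    }
}
```

After the refactoring, validation and defaults for the whole group live in one place, and adding a fifth connection setting no longer ripples through every method signature.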
19 | -------------------------------------------------------------------------------- /Pages/Design Patterns/Creational Structural Behavioral.md: -------------------------------------------------------------------------------- 1 | # Creational / Structural / Behavioral 2 | 3 | - **Creational Patterns:** Provide ways to create objects while hiding the creation logic, rather than instantiating objects directly. 4 | - Factory Method, Abstract Factory, Builder, Singleton 5 | - **Factory Method:** Defines an interface for creating an object, but lets subclasses alter the type of objects that will be created. 6 | - **Abstract Factory:** Provides an interface for creating families of related or dependent objects without specifying their concrete classes. 7 | - **Builder:** Allows the construction of complex objects step by step. It separates the construction of a complex object from its representation so that the same construction process can create different representations. 8 | - **Singleton:** Ensures a class has only one instance, and provides a global point of access to it. 9 | - **Structural Patterns:** Describe ways to compose objects 10 | - Facade, Proxy, Decorator, Composite, Adapter, Flyweight, Bridge 11 | - **Facade:** Provides a simplified interface to a complex system of classes, library, or framework, making it easier to use. 12 | - **Proxy:** Provides a placeholder for another object to control access to it. This could be for the purpose of lazy initialization, access control, logging, monitoring, etc. 13 | - **Decorator:** Allows behavior to be added to an individual object, either statically or dynamically, without affecting the behavior of other objects from the same class. 14 | - **Composite:** Composes objects into tree structures to represent part-whole hierarchies. It lets clients treat individual objects and compositions of objects uniformly. 15 | - **Adapter:** Allows objects with incompatible interfaces to collaborate. 16 | - **Flyweight:** Reduces the cost of creating and manipulating a large number of similar objects. 17 | - **Bridge:** Decouples an abstraction from its implementation so that the two can vary independently. 18 | - **Behavioral Patterns:** Handle communication between objects 19 | - Strategy, Template Method, Visitor, Chain of Responsibility, Mediator, State, Observer 20 | - **Strategy:** Defines a family of algorithms, encapsulates each one, and makes them interchangeable. Strategy lets the algorithm vary independently from clients that use it. 21 | - **Template Method:** Defines the skeleton of an algorithm in the superclass but lets subclasses override specific steps of the algorithm without changing its structure. 22 | - **Visitor:** Lets you add further operations to objects without having to modify them. 23 | - **Chain of Responsibility:** Passes the request along a chain of handlers. Upon receiving a request, each handler decides either to process the request or to pass it to the next handler in the chain. 24 | - **Mediator:** Reduces coupling between classes by providing a central point of communication between them. 25 | - **State:** Allows an object to alter its behavior when its internal state changes. The object will appear to change its class. 26 | - **Observer:** Defines a dependency between objects so that when one object changes state, all its dependents are notified and updated automatically. 
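As a minimal illustration of one creational pattern from the list above, here is a hedged sketch of a thread-safe Singleton in C# (the `AppSettings` name is illustrative):

```csharp
using System;

// Singleton: one instance, created lazily, with a global access point.
// Lazy<T> makes the initialization thread-safe by default.
public sealed class AppSettings
{
    private static readonly Lazy<AppSettings> _instance =
        new Lazy<AppSettings>(() => new AppSettings());

    public static AppSettings Instance => _instance.Value;

    // Private constructor prevents direct instantiation.
    private AppSettings() { }
}
```

Callers obtain the single instance via `AppSettings.Instance`; the private constructor prevents stray `new AppSettings()` calls elsewhere in the codebase.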
27 | -------------------------------------------------------------------------------- /Pages/Design Patterns/Principles.md: -------------------------------------------------------------------------------- 1 | # Principles 2 | 3 | - **SOLID:** 4 | 1. **Single Responsibility Principle (SRP)**: Every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. This means a class should have only one reason to change. 5 | 2. **Open/Closed Principle (OCP)**: Software entities (classes, modules, functions, etc.) should be open for extension but closed for modification. This means you should be able to add new functionality without changing the existing code. 6 | 3. **Liskov Substitution Principle (LSP)**: Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program. Essentially, if class B is a subtype of class A, we should be able to replace A with B without disrupting the behavior of our program. 7 | 4. **Interface Segregation Principle (ISP)**: No client should be forced to depend on methods it does not use. This principle suggests that multiple, specific client interfaces are better than one general-purpose interface. 8 | 5. **Dependency Inversion Principle (DIP)**: High-level modules should not depend on low-level modules. Both should depend on abstractions. Additionally, abstractions should not depend on details; details should depend on abstractions. This principle aims at reducing dependencies amongst the code modules. 9 | - **Dependency Inversion vs. Inversion of Control vs. Dependency Injection** 10 | - **Dependency Inversion** is the principle that high-level modules should not depend on low-level modules, but both should depend on abstractions. 11 | - **Inversion of Control (IoC)** is a broader concept that involves inverting the flow of control in a system. Instead of the caller controlling how and when to call a component, the component controls when it is called. This is often achieved through mechanisms such as callbacks, event handling, or dependency injection. 12 | - **Dependency Injection** is a pattern used to implement IoC, where the dependencies of a class are supplied by an external entity rather than instantiated directly within the class. This allows for more modular and testable code. 13 | - **DRY (Don't Repeat Yourself):** Avoid code duplication 14 | - Every piece of knowledge must have a single, unambiguous, authoritative representation within a system. 15 | - **KISS (Keep It Simple, Stupid):** Simplicity is key 16 | - Advocates for simplicity in design. Complexity should be avoided, as simpler solutions are easier to maintain and understand. 17 | - **YAGNI (You Ain't Gonna Need It):** Don't implement features prematurely 18 | - Encourages developers not to implement functionality until it's necessary. Premature implementation can lead to wasted time and effort on features that are never used. 19 | - **Separation of Concerns:** Separate different responsibilities into distinct components 20 | - **Least Knowledge:** Minimize knowledge between components 21 | - A component should not know about the internal details of other components. It should only communicate with its immediate friends, promoting loose coupling. 22 | - **The Hollywood Principle:** Don't call us, we'll call you 23 | - This principle is related to IoC and tells components to "don't call us, we'll call you." 
It means that low-level components can hook into a system, but the high-level components decide when and how to call them; the low-level components should not call the high-level components directly. 24 | - **Favor Composition over Inheritance:** Prefer object composition over class inheritance 25 | - This principle suggests that class functionality should be achieved through composed objects' behaviors (composition) rather than inherited from a base or parent class. 26 | - Composition offers more flexibility in designing systems. 27 | - **Program to an Interface, not an Implementation:** Decouple abstraction from implementation 28 | - This principle advocates for coding against interface abstractions rather than concrete implementations. This promotes decoupling and enhances flexibility and maintainability. 29 | -------------------------------------------------------------------------------- /Pages/Docker.md: -------------------------------------------------------------------------------- 1 | # Docker 2 | 3 | ## Basic Definition 4 | 5 | Docker is an open-source platform that enables developers to build, deploy, run, and manage containerized applications. It provides an abstraction layer that packages an application with its dependencies, libraries, and runtime environment into a single, portable container. 6 | 7 | ## Containerization of Applications 8 | 9 | Containerization is the process of packaging an application and its dependencies into a lightweight, self-contained, and portable unit called a container. Containers are isolated from the host system and other containers, ensuring consistent and predictable behavior across different environments. 10 | 11 | ## Best Practices of Dockerizing Applications 12 | 13 | - **Modular Design:** Break down applications into smaller, reusable components. 14 | - **Layered Approach:** Utilize Docker's layered filesystem for efficient caching and rebuilds. 15 | - **Minimize Image Size:** Keep images as small as possible for faster distribution and reduced attack surface. 16 | - **Separate Build and Run:** Separate the build and run stages for better caching and security. 17 | - **Use Non-Root User:** Run containers with a non-root user for enhanced security. 18 | - **Leverage Docker Compose:** Use Docker Compose for managing multi-container applications. 19 | 20 | ## Containerization Tools 21 | 22 | Docker is the most popular containerization tool, but there are others like: 23 | 24 | - **Podman:** Open-source, daemonless container engine. 25 | - **containerd:** Industry-standard container runtime. 26 | - **CRI-O:** Kubernetes-native container runtime. 27 | 28 | ## Docker Compose 29 | 30 | Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to specify the services, networks, volumes, and other configurations in a declarative YAML file, making it easier to manage and deploy complex applications. 31 | 32 | ## Compose CLI 33 | 34 | The Docker Compose CLI provides a set of commands for working with Docker Compose files and managing multi-container applications. Some common commands include: 35 | 36 | - `docker compose up`: Create and start containers defined in the Compose file. 37 | - `docker compose down`: Stop and remove containers, networks, and volumes. 38 | - `docker compose start/stop/restart`: Start, stop, or restart services. 39 | - `docker compose logs`: View logs from containers. 40 | - `docker compose scale`: Scale one or more services up or down.
41 | -------------------------------------------------------------------------------- /Pages/Entity Framework.md: -------------------------------------------------------------------------------- 1 | # Entity Framework 2 | 3 | [Basics](Entity%20Framework/Basics.md) 4 | 5 | [Code-First / DB-First](Entity%20Framework/Code-First%20DB-First.md) 6 | 7 | [Database-Provider Mechanisms](Entity%20Framework/Database-Provider%20Mechanisms.md) 8 | 9 | [Fluent-API](Entity%20Framework/Fluent-API.md) 10 | 11 | [Transaction Management](Entity%20Framework/Transaction%20Management.md) 12 | 13 | [DbContext Lifetime](Entity%20Framework/DbContext%20Lifetime.md) 14 | 15 | [Value-Converters](Entity%20Framework/Value-Converters.md) 16 | -------------------------------------------------------------------------------- /Pages/Entity Framework/Basics.md: -------------------------------------------------------------------------------- 1 | # Basics 2 | 3 | ## Change-Tracker 4 | 5 | - Mechanism that keeps track of changes to entities during their lifespan. 6 | - Enables EF to generate SQL statements for database updates efficiently when **`SaveChanges`** is called. 7 | 8 | ### What's **`.AsNoTracking()`**? 9 | 10 | **`.AsNoTracking()`** is a method in Entity Framework that can be applied to a query. It indicates that the entities retrieved should not be tracked by the change tracker. This is useful when you only need to read data and don't intend to modify or update it. By disabling tracking, you can improve performance as EF doesn't need to keep track of changes for entities that won't be updated. 11 | 12 | ### How does EF detect changes when you update a property value? 13 | 14 | When you modify a property of an entity that is being tracked by the change tracker, EF compares the original property value with the new one. If there's a difference, EF marks the entity as modified. This change tracking is crucial for EF to generate the appropriate SQL statements during **`SaveChanges`**. 15 | 16 | ## Migrations 17 | 18 | - The process of updating the database schema to match changes in the application's data model. 19 | - **Code-First Migrations:** Automatically generates migration scripts based on changes in the code. 20 | 21 | ## **`SaveChanges()`, When & Why?** 22 | 23 | - **When:** Called to persist changes made to entities in memory to the database. 24 | - **Why:** Ensures changes are committed and transactions are applied to the database. 25 | 26 | The **`SaveChanges()`** method in Entity Framework is used to persist changes made to entities in the context to the underlying database. It should be called when you want to commit changes. 27 | 28 | **When to use:** 29 | 30 | - After making modifications, additions, or deletions to entities in the context. 31 | - When you're ready to persist these changes to the database. 32 | 33 | **Why:** 34 | 35 | - To ensure data consistency between your application and the database. 36 | - To execute the necessary SQL statements to reflect the changes. 37 | -------------------------------------------------------------------------------- /Pages/Entity Framework/Code-First DB-First.md: -------------------------------------------------------------------------------- 1 | # Code-First / DB-First 2 | 3 | ## Code-First vs Database-First 4 | 5 | - **Code-First:** Creating the data model in code and generating the database from it. 6 | - **Database-First:** Creating the data model from an existing database. 
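A minimal, hedged Code-First sketch, assuming EF Core with the SQL Server provider (the `Blog` entity, `BloggingContext`, and connection string are all illustrative): the database schema is generated from these classes via migrations, rather than the classes being generated from the database.

```csharp
using Microsoft.EntityFrameworkCore;

// The model is defined in code; migrations generate the schema from it.
public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; } = string.Empty;
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs => Set<Blog>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer("YourConnectionString");
}
```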
7 | 8 | ### **Code-First** 9 | 10 | **Description**: In the Code-First approach, developers start by defining the data model in their application code using classes. These classes are then used to generate the database schema automatically. Migrations can be used to incrementally update the database schema as the data model changes over time. 11 | 12 | **Advantages**: 13 | 14 | - **Flexibility**: Developers can work within the comfort of their programming environment without switching to database management tools. 15 | - **Version Control**: Changes to the database schema are made through code, making it easier to version control and track changes alongside application code. 16 | - **Agility**: Ideal for agile development environments where changes to the data model are frequent and iterative. 17 | 18 | **Scenarios**: 19 | 20 | - New projects where the database does not exist yet. 21 | - Projects that require close alignment between the object model and the database model. 22 | - Environments where application-driven development is preferred, and the database schema is considered a by-product of the application code. 23 | 24 | ### **Database-First** 25 | 26 | **Description**: With the Database-First approach, the database schema is created directly in a database using database management tools. Then, the data model in the application code is generated based on the existing database schema. This approach is often used when working with existing databases. 27 | 28 | **Advantages**: 29 | 30 | - **Direct Control Over Database**: This approach allows for fine-tuned control over the database schema, including optimizations and customizations that might be more complex to achieve through code. 31 | - **Familiarity**: For teams with strong database administration expertise, this approach leverages their knowledge effectively. 32 | - **Stability**: Suited for situations where the database schema is stable or changes infrequently, ensuring that the application data model is consistently synchronized with the database. 33 | 34 | **Scenarios**: 35 | 36 | - Existing projects with a pre-defined database schema. 37 | - Projects where database design and optimizations are critical and need to be managed directly by experienced database administrators. 38 | - Situations where the database serves as a shared resource across multiple applications, necessitating a database-centric design approach. 39 | 40 | ### **Choosing Between Code-First and Database-First** 41 | 42 | The choice between Code-First and Database-First depends on various factors, including project requirements, team expertise, development workflow, and the need for control over the database schema. Code-First offers a more agile and code-centric approach, ideal for new projects and rapid iterations. Database-First, on the other hand, offers more control and stability for projects with existing databases or where database schema management is a priority. 43 | -------------------------------------------------------------------------------- /Pages/Entity Framework/DbContext Lifetime.md: -------------------------------------------------------------------------------- 1 | # DbContext Lifetime 2 | 3 | - The lifespan of the DbContext instance. 4 | - Typically scoped to a single unit of work, like a web request or an operation. 5 | 6 | ## **What is `DbContext`?** 7 | 8 | **`DbContext`** is a fundamental class in EF that represents a session with the underlying database. 
It is responsible for managing entities (classes that represent data), tracking changes, and persisting data to the database. Given its central role, the way it's instantiated, used, and disposed of is crucial for the health of an application. 9 | 10 | ## **Typical `DbContext` Lifetimes** 11 | 12 | - **Transient**: A new **`DbContext`** instance is created and disposed of for every use. This approach is rarely used due to the overhead of establishing database connections and the inability to track changes over a meaningful scope. 13 | - **Scoped**: A **`DbContext`** instance is created per logical operation or "unit of work", such as a web request in ASP.NET Core applications. This is the most common and recommended approach, aligning the **`DbContext`** lifespan with the lifecycle of a request. 14 | - **Singleton**: A single **`DbContext`** instance is used for the lifetime of the application. This approach is strongly discouraged as it can lead to memory leaks, data inconsistency, and threading issues. 15 | 16 | ## **Scoped Lifetime in Web Applications** 17 | 18 | In ASP.NET Core applications, the scoped lifetime is typically managed by the dependency injection (DI) container. When you add **`DbContext`** to the services collection in the **`Startup.cs`** or program initialization file, you specify it to be scoped: 19 | 20 | ```csharp 21 | services.AddDbContext<YourDbContext>(options => 22 | options.UseSqlServer(configuration.GetConnectionString("YourConnectionString"))); 23 | ``` 24 | 25 | With this setup, ASP.NET Core handles the creation and disposal of **`DbContext`** instances per request. This ensures that entities are tracked correctly during the request and that resources are efficiently released at the end of the request. 26 | 27 | ## **Managing DbContext in Other Scenarios** 28 | 29 | In desktop, console, or other types of applications where there isn't an inherent request scope, you'll need to manage the **`DbContext`** lifetime explicitly. This typically involves creating a new **`DbContext`** instance at the beginning of an operation and ensuring it is disposed of once the operation is completed, often using a **`using`** statement: 30 | 31 | ```csharp 32 | using (var context = new YourDbContext()) 33 | { 34 | // Perform data operations 35 | } 36 | ``` 37 | 38 | ## **Best Practices** 39 | 40 | - **Use Scoped Lifetimes in Web Applications**: Leverage the framework's DI container to manage **`DbContext`** lifetimes per request. 41 | - **Avoid Long-Lived DbContext Instances**: To prevent performance issues, avoid using the same **`DbContext`** instance for multiple operations over a long period. 42 | - **Be Aware of DbContext's Statefulness**: Since **`DbContext`** tracks changes to entities, be mindful of its use in scenarios involving parallel processing or instances where a fresh state is essential. 43 | - **Dispose of DbContext Properly**: Ensure **`DbContext`** instances are disposed of when no longer needed to release database connections and other resources. 44 | -------------------------------------------------------------------------------- /Pages/Entity Framework/Fluent-API.md: -------------------------------------------------------------------------------- 1 | # Fluent-API 2 | 3 | - A fluent interface for configuring the Entity Framework model. 4 | - Offers a programmatic way to define the model's configuration in code. 5 | 6 | Fluent API in Entity Framework is an alternative to using attributes for configuring the data model.
It provides a more fluent and code-centric way to configure entities, relationships, and other aspects of the data model. 7 | 8 | ## Advantages and Downsides Compared to Attributes 9 | 10 | - **Advantages:** 11 | - More centralized and readable configuration. 12 | - Better support for complex configurations and conventions. 13 | - Easier to maintain and refactor. 14 | - Keeps the entity classes clean from persistence-related attributes, adhering to the separation of concerns principle. 15 | - **Downsides:** 16 | - Requires additional code, which may increase the learning curve. 17 | - Some developers may prefer the simplicity of attribute-based configuration. 18 | 19 | ## **Key Features and Uses of Fluent API** 20 | 21 | - **Mapping Tables and Columns**: Customize the mapping of entities to database tables and properties to columns, including naming, data types, and constraints. 22 | - **Configuring Primary Keys**: Define primary keys, composite keys, and their mappings. 23 | - **Defining Relationships**: Configure relationships between entities, including one-to-one, one-to-many, and many-to-many relationships, along with setting up cascade delete rules and foreign key constraints. 24 | - **Configuring Indexes**: Create and customize indexes for entities to improve query performance. 25 | - **Setting Property Behaviors**: Configure properties with behaviors such as required/optional, maximum length, concurrency tokens, and value generation strategies (e.g., auto-increment). 26 | - **Complex Types**: Define complex types that do not have a key of their own and are used to organize properties within other entities. 27 | -------------------------------------------------------------------------------- /Pages/Entity Framework/Transaction Management.md: -------------------------------------------------------------------------------- 1 | # Transaction Management 2 | 3 | - Ensuring consistency when multiple operations need to be performed atomically. 4 | - **DbContext.Database.BeginTransaction():** Initiates a new database transaction. 5 | - Controlling the sequence of operations so that they are executed as a single, atomic unit. If any operation within the transaction fails, the entire set of operations can be rolled back to maintain the consistency of the database. 6 | 7 | ## **Understanding Transactions** 8 | 9 | A transaction in a database system is a sequence of operations performed as a single logical unit of work. A transaction has four main properties, often referred to as ACID properties: 10 | 11 | - **Atomicity**: Ensures that all operations within the transaction are completed successfully; if any operation fails, the transaction is aborted, and the database state is left unchanged. 12 | - **Consistency**: Ensures that a transaction takes the database from one valid state to another. 13 | - **Isolation**: Ensures that concurrent execution of transactions leaves the database in the same state that would have been obtained if the transactions were executed serially. 14 | - **Durability**: Ensures that once a transaction has been committed, it remains so, even in the event of errors or system crashes. 15 | 16 | ## **Using `DbContext.Database.BeginTransaction()`** 17 | 18 | Entity Framework provides support for controlling transactions directly through the **`DbContext`**. The **`DbContext.Database.BeginTransaction()`** method initiates a new transaction that you can use to wrap multiple operations.
19 | 20 | ```csharp 21 | using (var context = new YourDbContext()) 22 | { 23 | using (var transaction = context.Database.BeginTransaction()) 24 | { 25 | try 26 | { 27 | // Perform data operations 28 | context.SomeEntities.Add(newEntity); 29 | context.SaveChanges(); 30 | 31 | // Possibly more operations 32 | context.OtherEntities.Remove(someEntity); 33 | context.SaveChanges(); 34 | 35 | // Commit transaction if all operations succeed 36 | transaction.Commit(); 37 | } 38 | catch (Exception) 39 | { 40 | // Roll back the transaction if any operation fails 41 | transaction.Rollback(); 42 | throw; 43 | } 44 | } 45 | } 46 | ``` 47 | 48 | ## **Best Practices for Transaction Management** 49 | 50 | - **Minimize Transaction Scope**: Keep the operations within a transaction as minimal as possible to reduce locking and improve performance. 51 | - **Handle Exceptions**: Ensure proper exception handling within transactions to gracefully handle failures and rollback changes as needed. 52 | - **Dispose Transactions**: Use the **`using`** statement or explicitly call **`Dispose`** to ensure that transactions are properly cleaned up and resources are released, even if an error occurs. 53 | - **Avoid Nested Transactions**: Be cautious of nested transactions and understand how your ORM and database handle them to avoid unexpected behaviors. 54 | 55 | ## **Alternatives and Enhancements** 56 | 57 | - **TransactionScope**: For more complex scenarios or to wrap transactions across multiple contexts or databases, consider using **`System.Transactions.TransactionScope`**. It provides a higher-level abstraction for managing transactions. 58 | - **Isolation Levels**: When creating a transaction, you can specify the isolation level to control the visibility of changes made by other transactions. This helps manage concurrency but can impact performance. 59 | 60 | ```csharp 61 | using (var transaction = context.Database.BeginTransaction(IsolationLevel.Serializable)) 62 | { 63 | // Transactional operations 64 | } 65 | ``` 66 | -------------------------------------------------------------------------------- /Pages/Entity Framework/Value-Converters.md: -------------------------------------------------------------------------------- 1 | # Value-Converters 2 | 3 | - **Value-Converters:** Convert values between the CLR type and the type stored in the database. 4 | - Useful when the database representation differs from the application's representation. 5 | 6 | ## **Use Cases for Value Converters** 7 | 8 | - **Enum to String (or Integer) Conversion**: Storing enum values as strings (or integers) in the database for better readability. 9 | - **Encrypting/Decrypting Data**: Automatically encrypting data before it's saved to the database and decrypting upon retrieval for added security. 10 | - **Complex Types to JSON**: Converting complex types to JSON strings for storage in a single database column. 11 | - **Date/Time Transformations**: Adjusting **`DateTime`** values to UTC when storing in the database and converting back to local time when reading from the database. 
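As a hedged sketch of the first use case above — persisting an enum as a readable string with EF Core's built-in conversion — where the `Order` entity and `OrderStatus` enum are illustrative:

```csharp
using Microsoft.EntityFrameworkCore;

public enum OrderStatus { Pending, Shipped, Delivered }

public class Order
{
    public int Id { get; set; }
    public OrderStatus Status { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // HasConversion<string>() stores "Pending" in the column instead of 0,
        // and parses it back into the enum when reading.
        modelBuilder.Entity<Order>()
            .Property(o => o.Status)
            .HasConversion<string>();
    }
}
```

For the other use cases, a custom `ValueConverter<TModel, TProvider>` with explicit to-provider and from-provider lambdas can be passed to `HasConversion` instead of the built-in string conversion.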
12 | -------------------------------------------------------------------------------- /Pages/Event-Driven.md: -------------------------------------------------------------------------------- 1 | # Event-Driven 2 | 3 | [Definition](Event-Driven/Definition.md) 4 | 5 | [Patterns](Event-Driven/Patterns.md) 6 | 7 | [Topology](Event-Driven/Topology.md) 8 | 9 | [Models](Event-Driven/Models.md) 10 | 11 | [Flows Layer](Event-Driven/Flows%20Layer.md) 12 | -------------------------------------------------------------------------------- /Pages/Event-Driven/Definition.md: -------------------------------------------------------------------------------- 1 | # Definition 2 | 3 | - **Event-Driven Architecture (EDA):** A software architecture paradigm where the system is driven by events, which are produced, detected, consumed, and reacted to. 4 | - **Event:** An occurrence or change of state within a system that is significant and needs to be handled. 5 | - **Event Producer:** The component that generates or emits events. 6 | - **Event Emitter (Agents):** The component responsible for publishing events to an event channel or queue. 7 | - **Event Notification:** The mechanism of communicating that an event has occurred. 8 | - **Event Handler:** The component that receives and processes events. 9 | - **Event Loop:** A programming construct that waits for and dispatches events or messages in a program. 10 | - **Event Carried State Transfer:** An architectural pattern where the state is transferred as part of the event, avoiding the need for external state management. 11 | - **Event Store:** A database optimized for storing and retrieving events, often used in Event Sourcing. 12 | - **Event Sourcing:** A pattern where the application state is persisted as a sequence of events, enabling reconstruction of the state by replaying the events. 13 | - **Event Queue:** A queue that holds events until they can be processed by consumers. 14 | - **Event Mediator:** A component that sits between event producers and consumers, routing events and enforcing rules or transformations. 15 | - **Event Channel:** A communication mechanism that enables event producers to send events and event consumers to receive them. 16 | - **Event Processor:** A component that processes events from an event channel or queue, performing actions or transformations. 17 | - **Event Consumer:** The component that receives and acts upon events. 18 | - **Benefits of EDA:** Loose coupling, scalability, flexibility, responsiveness, better performance, better fault tolerance, and better alignment with business processes. 19 | - **Considerations of EDA:** Complexity, eventual consistency, ordering, and idempotency. 20 | - **Orchestration vs. Choreography:** Orchestration involves a central coordinator, while choreography is decentralized with each component aware of the overall process. 21 | - **Resolving Duplicates:** Use unique identifiers, deduplication mechanisms, or idempotent operations. 22 | - **Idempotence:** The property of an operation that can be applied multiple times without changing the result beyond the initial application. 23 | -------------------------------------------------------------------------------- /Pages/Event-Driven/Flows Layer.md: -------------------------------------------------------------------------------- 1 | # Flows Layer 2 | 3 | - **Event Generator:** Starting point; produces events from various sources (e.g., user actions, system events).
4 | - **User Actions:** Interactions with a user interface, such as clicks, form submissions, or gestures. 5 | - **System Events:** Changes in the system's state, such as a completed transaction, an error occurrence, or a scheduled task. 6 | - **Event Channels:** Mechanisms for transmitting events (e.g., message brokers, queues). 7 | - **Message Brokers:** Middleware that facilitates message exchange between producers and consumers, often providing features like topic-based publishing/subscribing, message queuing, and durable storage. 8 | - **Event Queues:** Queues that temporarily store events until they are processed by a consumer, ensuring that events are not lost and can be processed asynchronously. 9 | - **Event Streams:** Continuous sequences of events that can be processed in real-time, often used in scenarios requiring immediate action or analysis. 10 | - **Event Processing:** Components that handle and process events (e.g., filters, transformations, routing). 11 | - **Filters:** Components that screen events based on certain criteria, passing only those of interest to subsequent stages. 12 | - **Transformations:** Operations that modify events, such as enriching data, changing formats, or aggregating information. 13 | - **Routing:** Mechanisms that direct events to appropriate destinations or services based on content, type, or other attributes. 14 | - **Event Correlation and Analysis:** Advanced processing that involves correlating multiple events, identifying patterns, and making decisions based on complex rules. 15 | - **Event-Driven Downstream Activity:** Final layer; actions or reactions triggered by events (e.g., updates, notifications, workflows). 16 | - **Updates:** Modifying data or state within the system in response to events. 17 | - **Notifications:** Alerting users or systems about significant events or conditions. 18 | - **Workflows:** Initiating or advancing business processes based on event occurrences. 19 | - **Analytics:** Generating insights through the analysis of event data, often feeding into business intelligence systems. 20 | -------------------------------------------------------------------------------- /Pages/Event-Driven/Models.md: -------------------------------------------------------------------------------- 1 | # Models 2 | 3 | - **Publish/Subscribe (Pub/Sub) Model:** Producers publish events, and consumers subscribe to events they are interested in. 4 | - **Event Streaming Model:** 5 | - **Simple Event Model:** Events are processed individually. 6 | - **Stream Event Model:** Events are processed as a continuous stream. 7 | - **Complex Event Model:** Events are correlated and analyzed to detect patterns or conditions. 8 | 9 | ## **Publish/Subscribe (Pub/Sub) Model** 10 | 11 | - **Description**: In the Pub/Sub model, event producers (publishers) emit events without knowledge of who will consume (subscribe to) them. Similarly, event consumers (subscribers) express interest in one or more types of events and react to them as they occur, without knowledge of which publishers produced the events. 12 | - **Use Cases**: Real-time notifications, messaging applications, and scenarios where multiple services need to react to the same set of events independently. 13 | 14 | ## **Event Streaming Model** 15 | 16 | Event streaming involves handling events in real-time as they occur, often leveraging a distributed log or stream processing system.
It can be divided into simpler models based on the nature of event processing: 17 | 18 | ## Simple Event Model 19 | 20 | - **Description**: Each event is processed individually, usually as it arrives. The focus is on handling discrete events rather than looking at the event stream as a whole. 21 | - **Use Cases**: Applications where each event can be dealt with in isolation, such as order processing systems where each order is handled separately. 22 | 23 | ## Stream Event Model 24 | 25 | - **Description**: Events are processed as a continuous stream, allowing for operations over windows of time or across a sequence of events. This model is powerful for analyzing trends over time or aggregating information from multiple events. 26 | - **Use Cases**: Real-time analytics, monitoring applications, or any scenario requiring aggregation or analysis of event data over time. 27 | 28 | ## Complex Event Model 29 | 30 | - **Description**: Also known as Complex Event Processing (CEP), this model involves correlating, combining, and analyzing multiple events to detect patterns or specific conditions. It often requires sophisticated logic to infer more significant events or insights from raw event streams. 31 | - **Use Cases**: Fraud detection, network security monitoring, and business process management, where understanding relationships between multiple events is crucial for making decisions. 32 | 33 | ## **Choosing the Right Model** 34 | 35 | Selecting the appropriate event-driven model depends on the specific requirements of your application, such as: 36 | 37 | - **Scalability and Performance Needs**: How the system scales and how quickly events must be processed can influence the choice of model. 38 | - **Complexity of Event Processing**: The complexity of the logic required to process or analyze events may necessitate a more sophisticated model. 39 | - **Real-time vs. Batch Processing**: Whether events need to be processed immediately as they occur or can be processed in batches at intervals. 40 | - **Event Source and Nature**: The source of events and whether they are independent or related can determine the most suitable model. 41 | -------------------------------------------------------------------------------- /Pages/Event-Driven/Patterns.md: -------------------------------------------------------------------------------- 1 | # Patterns 2 | 3 | - **Event Sourcing:** 4 | - Involves storing changes to the application state as a sequence of events 5 | - Events are persisted as an immutable sequence, enabling reconstruction of application state. 6 | - Updates are atomic, with events published after successful state changes. 7 | - Outbox Pattern provides reliable event publishing, while Event Sourcing focuses on state management. 8 | - **Outbox Pattern:** 9 | - Ensures reliable event publishing by storing events in an outbox table before publishing. 10 | - Provides transactional consistency between data changes and event publishing. 11 | - Change Data Capture (CDC) is a technique for capturing and publishing data changes as events. 12 | - **CQRS (Command Query Responsibility Segregation):** 13 | - Separates read and write operations into different models and data stores. 14 | - Queries can be implemented by fetching data from multiple services and combining the results. 
**(Flexibility in Query Implementation)** 15 | -------------------------------------------------------------------------------- /Pages/Event-Driven/Topology.md: -------------------------------------------------------------------------------- 1 | # Topology 2 | 3 | - **Mediator Topology:** Events flow through a central mediator component. 4 | - **Centralization:** Facilitates the management and monitoring of event flows. 5 | - **Complexity Management:** While it simplifies some aspects of the architecture, it can become a bottleneck or a single point of failure, increasing the system's complexity. 6 | - **Broker Topology:** Events are published to a broker, and consumers subscribe to events of interest. 7 | - **Decentralization:** This approach allows for a more distributed system where components communicate directly through the broker without a central mediator. 8 | - **Scalability and Flexibility:** It offers better scalability and flexibility, as adding new consumers or producers is generally straightforward and does not require changes to other components. 9 | - The choice depends on factors like complexity, scalability, and observability requirements. 10 | - **Complexity:** For simpler applications, a mediator might suffice, but as complexity grows, the decoupled nature of a broker may be beneficial. 11 | - **Scalability:** If scalability is a primary concern, the broker topology might be preferred due to its inherent support for distributed systems. 12 | - **Observability:** Centralized topologies can simplify monitoring and logging but might introduce performance bottlenecks. 13 | -------------------------------------------------------------------------------- /Pages/Git.md: -------------------------------------------------------------------------------- 1 | # Git 2 | 3 | ## HEAD 4 | 5 | - The HEAD is a pointer that represents the current commit in the current branch. 6 | - It allows you to move between commits and branches. 7 | 8 | ## Remote 9 | 10 | - A remote is a reference to a remote repository, typically hosted on a server like GitHub or GitLab. 11 | - Common commands: `git remote add`, `git push`, `git pull`, `git fetch`. 12 | 13 | ## Merge 14 | 15 | - Merging combines changes from one branch into another. 16 | - `git merge` incorporates commits from another branch into the current branch. 17 | 18 | ## Rebase 19 | 20 | - Rebasing applies commits from one branch onto the head of another branch. 21 | - `git rebase` can be used to rewrite commit history, making it linear. 22 | 23 | ## Revert 24 | 25 | - Reverting undoes changes introduced by a specific commit. 26 | - `git revert` creates a new commit that inverts the changes from a previous commit. 27 | 28 | ## Squash 29 | 30 | - Squashing combines multiple commits into a single commit. 31 | - This is useful for cleaning up commit history before merging or rebasing. 32 | 33 | ## Cherry Pick 34 | 35 | - Cherry-picking allows you to apply specific commits from one branch to another. 36 | - `git cherry-pick` can be used to selectively incorporate commits. 37 | 38 | ## Pull Request 39 | 40 | - A pull request is a way to propose and review code changes before merging them into a branch. 41 | - Common in collaborative development workflows. 42 | 43 | ## Submodules 44 | 45 | - Submodules allow you to include external repositories within your main repository. 46 | - They are useful for managing dependencies or shared code. 
47 | 
48 | ## Reset
49 | 
50 | - The `git reset` command is used to undo local changes or remove commits from the current branch.
51 | - It can be used with various options (`--soft`, `--mixed`, `--hard`) to control the scope of the reset.
52 | 
53 | ## Amend
54 | 
55 | - Allows you to modify the most recent commit.
56 | - It can rewrite history and potentially disrupt the workflow for others.
57 | - `git commit --amend`
58 | 
59 | ## Working Area vs Staging Area
60 | 
61 | - The working area is where you make local changes to files.
62 | - The staging area (index) is where you stage changes before committing them.
63 | 
64 | ## Bisect
65 | 
66 | - Git bisect is a tool for finding the commit that introduced a bug or regression.
67 | - It performs a binary search through your commit history, helping you isolate the problematic commit.
68 | 
-------------------------------------------------------------------------------- /Pages/Microservices.md: --------------------------------------------------------------------------------
1 | # Microservices
2 | 
3 | [Introduction](Microservices/Introduction.md)
4 | 
5 | [Service Registry & Discovery](Microservices/Service%20Registry%20&%20Discovery.md)
6 | 
7 | [Load Balancing](Microservices/Load%20Balancing.md)
8 | 
9 | [Distributed Transactions](Microservices/Distributed%20Transactions.md)
10 | 
11 | [Metrics, Monitoring, Tracing, Logging](Microservices/Metrics,%20Monitoring,%20Tracing,%20Logging.md)
12 | 
13 | [Key Vaults](Microservices/Key%20Vaults.md)
14 | 
15 | [Service Mesh](Microservices/Service%20Mesh.md)
16 | 
17 | [Communication and Integration Patterns](Microservices/Communication%20and%20Integration%20Patterns.md)
18 | 
19 | [Fault-Tolerant System](Microservices/Fault-Tolerant%20System.md)
20 | 
21 | [API Gateway](Microservices/API%20Gateway.md)
22 | 
-------------------------------------------------------------------------------- /Pages/Microservices/API Gateway.md: --------------------------------------------------------------------------------
1 | # API Gateway
2 | 
3 | API gateways act as the primary entry point for external clients to access the various services of an application. This centralized approach helps manage and secure the interactions between clients and services efficiently.
4 | 
5 | ## API Gateway Responsibilities
6 | 
7 | - API gateways (e.g., Kong, Nginx, Ambassador) act as an entry point for external clients.
8 | - Responsibilities include TLS termination, routing (based on criteria like geo-proximity, latency), throttling, and rate limiting.
9 | 
10 | ### **1. TLS Termination**
11 | 
12 | - **Description**: TLS termination refers to the process where the API gateway handles the TLS (Transport Layer Security) decryption for incoming requests and re-encrypts the responses for the clients. This offloads the encryption and decryption tasks from the backend services.
13 | - **Benefits**: Centralizing TLS termination simplifies certificate management and enhances security by providing a single point for encrypting and decrypting traffic.
14 | 
15 | ### **2. Request Routing**
16 | 
17 | - **Description**: The API gateway routes incoming requests to the appropriate backend service based on various criteria, such as the request path, host header, or custom routing rules. Advanced gateways can also route based on geo-proximity, latency, and other factors.
18 | - **Benefits**: Enables efficient service discovery and request distribution, facilitating scalability and high availability of services.
19 | 
20 | ### **3. 
Load Balancing**
21 | 
22 | - **Description**: In addition to routing, API gateways often perform load balancing, distributing incoming requests across multiple instances of a service based on load, thereby preventing any single instance from becoming a bottleneck.
23 | - **Benefits**: Enhances the performance and reliability of services by ensuring requests are evenly distributed among available resources.
24 | 
25 | ### **4. Throttling and Rate Limiting**
26 | 
27 | - **Description**: API gateways enforce throttling and rate limiting policies to control the number of requests a client can make to an API within a given timeframe. This helps protect backend services from overload and abuse.
28 | - **Benefits**: Prevents service outages and degradation by ensuring that resources are used within their capacity limits. It also enables API providers to implement usage policies and potentially offer tiered service levels.
29 | 
30 | ### **5. Authentication and Authorization**
31 | 
32 | - **Description**: The gateway can handle authentication and authorization of requests before they reach the backend services, ensuring only valid and authorized requests are processed.
33 | - **Benefits**: Centralizes security policies, reducing the complexity and duplication of authentication mechanisms across services.
34 | 
35 | ### **6. API Versioning and Management**
36 | 
37 | - **Description**: API gateways facilitate the management of different API versions, allowing clients to access multiple versions of services simultaneously. This is useful for gradually phasing out old APIs and introducing new ones.
38 | - **Benefits**: Simplifies version control and allows for smoother transitions and backward compatibility.
39 | 
40 | ### **7. Caching**
41 | 
42 | - **Description**: To reduce the load on backend services and improve response times, API gateways can cache responses for common requests.
43 | - **Benefits**: Enhances the overall performance of the API by serving frequent requests quickly from the cache, reducing the need to process the same requests repeatedly on the backend.
44 | 
45 | ### **8. Monitoring and Analytics**
46 | 
47 | - **Description**: API gateways can collect data on API usage, performance metrics, and logs, providing insights into how the APIs are being used and how they are performing.
48 | - **Benefits**: Offers valuable information for troubleshooting, capacity planning, and understanding client behavior.
49 | 
-------------------------------------------------------------------------------- /Pages/Microservices/Distributed Transactions.md: --------------------------------------------------------------------------------
1 | # Distributed Transactions
2 | 
3 | - Maintaining data consistency across multiple services and databases is a challenge.
4 | - Patterns like Saga, Event Sourcing, and Outbox can be employed to ensure eventual consistency.
5 | - **Two-Phase Commit (2PC)**
6 | - A classic distributed transaction protocol involving two phases to ensure atomicity.
7 | - **Saga Pattern**
8 | - **Overview:** A Saga is a sequence of local transactions where each transaction updates data within a single service and publishes an event or message to trigger the next transaction in another service. Sagas can be orchestrated, where a central coordinator manages the sequence of transactions, or choreographed, with each service producing and consuming events independently.
9 | - **Consistency:** Sagas ensure eventual consistency by compensating for previous transactions in the case of a failure. 
If one transaction fails, compensating transactions are executed to undo the impact of the preceding transactions in the sequence. 10 | - **Use Cases:** Suitable for long-running business processes and workflows where different steps need to be executed by different microservices. 11 | - **Event Sourcing** 12 | - **Overview:** Event Sourcing persists the state of an entity as a sequence of state-changing events. Instead of storing just the current state of data, every change is captured in an event with enough information to reconstruct past states. 13 | - **Consistency:** Ensures consistency by applying events in a sequence, which can be replayed to reach the current state. It naturally aligns with the CQRS (Command Query Responsibility Segregation) pattern, allowing for separate models for reads and writes. 14 | - **Use Cases:** Useful for systems that require a detailed audit log, complex business processes, or those that benefit from analyzing past actions. 15 | - **Outbox Pattern** 16 | - **Overview:** The Outbox pattern involves storing events or messages in a local outbox (a database table or similar storage) as part of the local transaction. A separate process or worker then retrieves these messages from the outbox and publishes them to the message broker or event bus, ensuring that the local transaction and the publishing of the event are not directly coupled. 17 | - **Consistency:** Helps achieve eventual consistency by ensuring that messages are not lost even if the message broker is temporarily unavailable. Messages are guaranteed to be published once the local transaction succeeds. 18 | - **Use Cases:** Effective in scenarios where reliability and consistency of event publishing are critical, especially in distributed systems with network latency or intermittent connectivity issues. 19 | -------------------------------------------------------------------------------- /Pages/Microservices/Fault-Tolerant System.md: -------------------------------------------------------------------------------- 1 | # Fault-Tolerant System 2 | 3 | Continue operating without interruption despite failures in individual components or services. 4 | 5 | [Asynchronous Communication](Fault-Tolerant%20System/Asynchronous%20Communication.md) 6 | 7 | [Fallback](Fault-Tolerant%20System/Fallback.md) 8 | 9 | [Timeouts](Fault-Tolerant%20System/Timeouts.md) 10 | 11 | [Deadline](Fault-Tolerant%20System/Deadline.md) 12 | 13 | [Retries](Fault-Tolerant%20System/Retries.md) 14 | 15 | [Rate Limiter](Fault-Tolerant%20System/Rate%20Limiter.md) 16 | 17 | [Cascading Failures](Fault-Tolerant%20System/Cascading%20Failures.md) 18 | 19 | [Single Point of Failure](Fault-Tolerant%20System/Single%20Point%20of%20Failure.md) 20 | -------------------------------------------------------------------------------- /Pages/Microservices/Fault-Tolerant System/Asynchronous Communication.md: -------------------------------------------------------------------------------- 1 | # Asynchronous Communication 2 | 3 | - Communication between components happens asynchronously, decoupling them and preventing cascading failures. 4 | - **Temporal Decoupling:** Components do not need to be available at the same time to communicate. A component can send a message without waiting for the receiver to be available, process the message, or even acknowledge it. This reduces the dependency on component availability and allows for more flexible maintenance and scaling strategies. 
5 | - **Failure Isolation:** By decoupling components, a failure in one part of the system does not directly impact others. For example, if a service that processes messages becomes overloaded or fails, incoming messages can still be queued for processing when the service recovers, preventing the failure from cascading to other parts of the system. 6 | - Messages are persisted in queues or event streams, ensuring delivery even if components are temporarily unavailable. 7 | - **Guaranteed Delivery:** Messages are persisted in queues or event streams, which means that if a component fails to process a message due to a temporary issue, the message is not lost. Instead, it can be retried or processed later once the issue is resolved. This persistence layer ensures that important communications are not missed and can be acted upon as soon as the receiving component is capable. 8 | - **Load Leveling:** Queues and event streams can act as buffers for incoming messages, smoothing out traffic spikes and preventing overload situations. This capability allows the system to handle variable loads more gracefully, providing time for autoscaling mechanisms to add processing capacity if needed. 9 | - **Scalability and Flexibility** 10 | - **Scalable Architecture:** Asynchronous communication facilitates scaling individual components independently based on their specific workloads. For instance, if one part of the system generates a high volume of events, the consuming services can be scaled out separately to handle the increased load. 11 | - **System Evolution:** Asynchronous interfaces between components make it easier to evolve the system over time. New components can be introduced, and existing ones can be modified or replaced with minimal impact on the rest of the system, as long as the message contracts are maintained. 12 | - **Handling Complex Workflows** 13 | - **Workflow Management:** Asynchronous communication is ideal for managing long-running and complex workflows where different steps may require varying amounts of processing time or depend on external resources. Workflow state can be maintained in messages themselves or through coordination services, enabling sophisticated processing sequences that are resilient to interruptions. 14 | - **Challenges** 15 | 16 | While asynchronous communication significantly enhances fault tolerance, it also introduces challenges that need to be managed: 17 | 18 | - **Eventual Consistency:** Systems relying on asynchronous communication often embrace eventual consistency models, which can complicate state management and require careful design to ensure data integrity. 19 | - **Monitoring and Tracing:** Tracking the flow of messages and diagnosing issues in a highly decoupled system can be more complex, necessitating robust monitoring, logging, and tracing mechanisms to ensure visibility across components. 20 | -------------------------------------------------------------------------------- /Pages/Microservices/Fault-Tolerant System/Cascading Failures.md: -------------------------------------------------------------------------------- 1 | # Cascading Failures 2 | 3 | - Cascading failures occur when the failure of one component triggers a chain of failures in dependent components. 4 | - Asynchronous communication, circuit breakers, and bulkheads help prevent cascading failures. 
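The circuit-breaker technique mentioned above (and described in the sections below) can be sketched in a few lines of C#. This is a minimal, illustrative sketch only — the `CircuitBreaker` class name, thresholds, and exception choice are assumptions, not any particular library's API:

```csharp
using System;
using System.Threading.Tasks;

// Minimal circuit-breaker sketch (not thread-safe; for illustration only).
// After N consecutive failures the circuit "opens" and calls fail fast for a
// cooldown period, giving the downstream service time to recover and stopping
// the failure from cascading to callers.
public class CircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _openDuration;
    private int _consecutiveFailures;
    private DateTime _openedAtUtc;

    public CircuitBreaker(int failureThreshold, TimeSpan openDuration)
    {
        _failureThreshold = failureThreshold;
        _openDuration = openDuration;
    }

    public async Task<T> ExecuteAsync<T>(Func<Task<T>> action)
    {
        bool circuitOpen = _consecutiveFailures >= _failureThreshold
                           && DateTime.UtcNow - _openedAtUtc < _openDuration;
        if (circuitOpen)
            throw new InvalidOperationException("Circuit open: failing fast.");

        try
        {
            T result = await action();
            _consecutiveFailures = 0; // a success closes the circuit again
            return result;
        }
        catch
        {
            _consecutiveFailures++;
            if (_consecutiveFailures >= _failureThreshold)
                _openedAtUtc = DateTime.UtcNow; // trip (or re-trip) the circuit
            throw;
        }
    }
}
```

A call site would wrap each remote call, e.g. `await breaker.ExecuteAsync(() => httpClient.GetStringAsync(url));`. In production, a resilience library such as Polly provides hardened, thread-safe implementations of this pattern.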
5 | 6 | ## Asynchronous Communication 7 | 8 | - **Description**: Decoupling service dependencies by using asynchronous communication patterns, such as event-driven architectures or message queues. 9 | - **Benefits**: Helps in absorbing fluctuations in load and isolating services from the direct impact of failures, thus preventing one service's issues from immediately impacting others. 10 | 11 | ## Circuit Breakers 12 | 13 | - **Description**: A pattern where calls to a particular service are monitored, and if failures reach a certain threshold, further calls are automatically blocked for a period, allowing the service time to recover. 14 | - **Benefits**: Prevents a service under stress from being overwhelmed by a flood of requests, which not only allows it to recover but also stops the failure from spreading to dependent services. 15 | 16 | ## Bulkheads 17 | 18 | - **Description**: Inspired by the compartments in a ship’s hull, the bulkhead pattern isolates elements of an application into pools so that if one fails, the others continue to function. 19 | - **Benefits**: Limits the impact of a failure to a portion of the system, ensuring that not all resources (e.g., threads, database connections) are consumed by a single failing component. 20 | 21 | ## **Implementation Considerations** 22 | 23 | - **Monitoring and Alerting**: Implement comprehensive monitoring to detect and alert on failure signs early before they lead to cascading failures. 24 | - **Load Shedding**: Implement the ability to shed excess load in critical components when necessary to maintain essential functions. 25 | - **Rate Limiting**: Use rate limiting to control the traffic to services, preventing them from being overwhelmed. 26 | - **Regular Stress Testing**: Conduct tests to simulate failure scenarios and validate the effectiveness of strategies in place to prevent cascading failures. 27 | -------------------------------------------------------------------------------- /Pages/Microservices/Fault-Tolerant System/Deadline.md: -------------------------------------------------------------------------------- 1 | # Deadline 2 | 3 | Deadlines extend the concept of timeouts by setting a fixed point in time by which an operation or a series of operations must complete. Unlike timeouts, which are typically set for individual operations and measure the duration of those operations, deadlines are absolute and can encompass multiple steps or stages in a process. 4 | 5 | - Deadlines set a maximum time limit for completing an operation, preventing requests from getting stuck indefinitely. 6 | - After the deadline, the operation can be canceled, retried, or a fallback can be used. 7 | - **Application:** Deadlines are particularly useful in scenarios involving multiple sequential or parallel operations that together accomplish a task. For instance, in a microservices architecture, a request might involve calls to several services; a deadline ensures the entire process completes within a specific timeframe. 8 | - **Behavior After Deadline:** Similar to timeouts, when a deadline is reached, the system can: 9 | - **Cancel or abort the ongoing operation**, freeing up resources and potentially notifying the user or system that initiated the request. 10 | - **Retry parts of the process** if there's a policy in place for handling partial completions or if specific operations are identified as having caused the delay. 11 | - **Use a fallback mechanism** to provide a degraded but immediate response to the request. 
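In .NET, one idiomatic way to express a deadline that spans several operations is a single `CancellationTokenSource` set to cancel after a fixed interval, with every step observing the same token. A minimal sketch, assuming three hypothetical service calls (`FetchOrderAsync`, `CalculatePriceAsync`, `SaveResultAsync`):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class DeadlineExample
{
    public static async Task HandleRequestAsync()
    {
        // One absolute deadline for the whole multi-step operation.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));

        try
        {
            string order = await FetchOrderAsync(cts.Token);
            decimal price = await CalculatePriceAsync(order, cts.Token);
            await SaveResultAsync(price, cts.Token);
        }
        catch (OperationCanceledException)
        {
            // Deadline reached: cancel, retry, or fall back to a degraded response.
        }
    }

    // Hypothetical downstream calls; Task.Delay stands in for real I/O and
    // honors the shared cancellation token.
    private static async Task<string> FetchOrderAsync(CancellationToken ct)
    {
        await Task.Delay(100, ct);
        return "order-42";
    }

    private static async Task<decimal> CalculatePriceAsync(string order, CancellationToken ct)
    {
        await Task.Delay(100, ct);
        return 99.90m;
    }

    private static async Task SaveResultAsync(decimal price, CancellationToken ct)
    {
        await Task.Delay(100, ct);
    }
}
```

Because all steps share one token, time spent in earlier steps counts against the later ones — which is exactly what distinguishes a deadline from per-operation timeouts.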
12 | -------------------------------------------------------------------------------- /Pages/Microservices/Fault-Tolerant System/Fallback.md: -------------------------------------------------------------------------------- 1 | # Fallback 2 | 3 | - Fallback mechanisms provide alternative operations or degraded functionality when a component fails or becomes unavailable. 4 | - Examples include returning cached data, default values, or alternative services. 5 | 6 | ## **Purpose of Fallback Mechanisms** 7 | 8 | - **Resilience:** Enhance the system's ability to withstand failures by providing alternatives when primary operations fail. 9 | - **User Experience:** Maintain a usable system and prevent complete service disruption, thereby preserving user trust and satisfaction. 10 | 11 | ## **Examples of Fallback Mechanisms** 12 | 13 | - Returning Cached Data 14 | - **Scenario:** When a live data source is unavailable, the system can return previously cached data. 15 | - **Benefits:** This allows users to access stale but still potentially useful data, which is particularly valuable in read-heavy applications where data freshness is not critical for every operation. 16 | - Using Default Values 17 | - **Scenario:** If a service responsible for providing personalized content fails, the system can fall back to default or generic content. 18 | - **Benefits:** Ensures that the application continues to provide content, maintaining engagement even though the personalized experience is temporarily unavailable. 19 | - Alternative Services (Secondary Services) 20 | - **Scenario:** When the primary service is down, requests are routed to a secondary service that can offer a similar functionality. 21 | - **Benefits:** Keeps the system operational, though the alternative service may offer slower performance or reduced features compared to the primary service. 22 | - Degraded Functionality 23 | - **Scenario:** In case of failure in non-critical components, the system may disable certain features while keeping the core functionality running. 24 | - **Benefits:** Focuses resources on maintaining essential services, ensuring that the system remains useful for the majority of tasks. 25 | - Throttling 26 | - **Scenario:** During peak loads or partial outages, the system may limit the number of requests it handles, prioritizing critical operations. 27 | - **Benefits:** Prevents system overload and ensures that available resources are allocated to the most important functions. 28 | - Circuit Breaker Pattern 29 | - **Scenario:** Automatically detecting failures and preventing the application from performing operations that are likely to fail, thereby protecting the system from further damage. 30 | - **Benefits:** Allows the system to detect patterns of failures and temporarily disable functionality, providing time for recovery and reducing the load on the failing system components. 31 | 32 | ## **Design Considerations** 33 | 34 | - **Timeliness of Cached Data:** Evaluate the acceptability of serving stale data and the impact on user experience. 35 | - **Default Values and User Expectations:** Ensure that default or generic responses are relevant and meaningful to users. 36 | - **Alternative Service Costs:** Consider the costs and limitations of maintaining and switching to secondary services. 37 | - **Monitoring and Alerts:** Implement comprehensive monitoring to detect when fallback mechanisms are triggered, allowing for prompt investigation and remediation. 
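The cached-data and default-value fallbacks described above can be sketched in C#. This is a minimal, illustrative example — the `QuoteService` class and its always-failing live source are hypothetical:

```csharp
using System;
using System.Threading.Tasks;

// Sketch: try the live source first; on failure fall back to the last cached
// value, and finally to a safe default (a degraded but immediate response).
public class QuoteService
{
    private string _lastGoodQuote; // naive one-entry "cache" for illustration

    public async Task<string> GetQuoteAsync()
    {
        try
        {
            _lastGoodQuote = await FetchQuoteFromLiveSourceAsync();
            return _lastGoodQuote;
        }
        catch (Exception)
        {
            if (_lastGoodQuote != null)
                return _lastGoodQuote;                // fallback 1: stale cached data

            return "Pricing temporarily unavailable"; // fallback 2: default value
        }
    }

    // Hypothetical remote call; here it always fails to exercise the fallbacks.
    private static async Task<string> FetchQuoteFromLiveSourceAsync()
    {
        await Task.Delay(50);
        throw new TimeoutException("Upstream service did not respond.");
    }
}
```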
38 | -------------------------------------------------------------------------------- /Pages/Microservices/Fault-Tolerant System/Rate Limiter.md: -------------------------------------------------------------------------------- 1 | # Rate Limiter 2 | 3 | - Rate limiting controls the flow of requests to prevent overwhelming components and causing failures due to overload. 4 | - It can be implemented at various levels (e.g., API gateway, service level) based on requirements. 5 | - **API Gateway Level**: Many modern architectures use an API gateway as the entry point for all incoming requests to backend services. Implementing rate limiting at this level allows for centralized control and is effective in protecting backend services from getting overwhelmed. 6 | - **Service Level**: In microservices architectures, rate limiting can also be applied at the individual service level, providing fine-grained control over the load each service can handle. This is useful for services with varying performance characteristics or importance. 7 | - **Application Level**: Within an application, rate limiting can be applied to specific functionalities or endpoints, protecting critical sections of an application from being overloaded. 8 | - **Network Level**: Rate limiting can be enforced by network devices (e.g., routers, firewalls) to control the flow of traffic entering or leaving a network segment. 9 | 10 | ## **Strategies for Rate Limiting** 11 | 12 | - **Fixed Window**: Limits the number of requests in a fixed time window (e.g., 1000 requests per hour). This is simple to implement but can allow bursts of traffic at the boundary of time windows. 13 | - **Sliding Log**: Records timestamps of each request and dynamically adjusts the rate limit based on the request pattern over time. It's more complex but prevents bursts at the edges of time windows. 14 | - **Token Bucket**: Allows a certain number of requests in a given time frame, replenishing "tokens" at a steady rate. This method smooths out bursts over time. 15 | - **Leaky Bucket**: Similar to the token bucket but enforces a more steady output rate, smoothing out incoming bursts of requests by queueing them. 16 | 17 | ## **Considerations** 18 | 19 | - **Graceful Handling of Limit Exceeding**: When a request exceeds the rate limit, it's crucial to handle it gracefully, typically by responding with a **`429 Too Many Requests`** HTTP status code and possibly including retry information. 20 | - **Configurability**: Rate limits should be configurable to adjust to changing requirements without code changes. 21 | - **Distributed Systems**: In distributed architectures, ensuring consistent rate limiting across multiple instances or nodes can be challenging. Solutions like distributed caching or centralized rate limiting services can help. 22 | - **Monitoring and Alerting**: It's essential to monitor rate-limited endpoints or services and alert administrators if the limits are hit too frequently, as it may indicate issues with the service capacity or an attack. 23 | -------------------------------------------------------------------------------- /Pages/Microservices/Fault-Tolerant System/Retries.md: -------------------------------------------------------------------------------- 1 | # Retries 2 | 3 | - Retry mechanisms allow for retrying failed operations, providing resiliency against transient failures. 4 | - Retries can be implemented with exponential backoff and jitter to prevent overwhelming systems during outages. 
5 | 
6 | ## **Exponential Backoff**
7 | 
8 | Exponential backoff is a strategy used to space out retry attempts in a way that the wait time increases exponentially between retries. This approach helps in managing the load on the system and increasing the likelihood of recovery from transient failures over time. The basic idea is to double the wait time after each retry attempt, optionally applying a maximum retry limit or a maximum wait time.
9 | 
10 | ## **Jitter**
11 | 
12 | Jitter involves introducing randomness to the retry intervals to prevent synchronized retries across multiple systems or operations. When many clients are trying to recover from a failure simultaneously, they might all retry at the same time if their retry intervals are identical, leading to spikes in demand that can overwhelm the system. By adding jitter, the retry attempts are desynchronized, spreading the load more evenly over time.
13 | 
14 | ## **Implementation Considerations**
15 | 
16 | - **Limit the Number of Retries**: It's important to set a maximum number of retry attempts to prevent infinite loops in scenarios where the failure cannot be resolved by retries.
17 | - **Sensible Retry Intervals**: Choose initial retry intervals and maximum wait times that make sense for the specific operation and its requirements.
18 | - **Monitor and Log Retries**: Keeping track of retry attempts and their outcomes is crucial for diagnosing issues and understanding the system's behavior under failure conditions.
19 | - **Handle Different Types of Failures Appropriately**: Not all failures are transient or suitable for retries. Implement logic to distinguish between transient failures (where retries make sense) and permanent failures (where retries would not help).
20 | 
21 | ```csharp
22 | // This code attempts an operation that might fail with a transient error, retries with exponential backoff and jitter, and stops after a maximum number of attempts.
23 | int retries = 0;
24 | int maxRetries = 5;
25 | TimeSpan maxBackoff = TimeSpan.FromSeconds(32);
26 | TimeSpan baseDelay = TimeSpan.FromSeconds(1);
27 | Random jitter = new Random();
28 | 
29 | while (true)
30 | {
31 |     try
32 |     {
33 |         // Attempt the operation
34 |         PerformOperation();
35 |         break; // Success, exit loop
36 |     }
37 |     catch (TransientException) // Assume TransientException identifies recoverable failures
38 |     {
39 |         if (++retries > maxRetries) throw; // Exceeded max retries, rethrow
40 | 
41 |         // Calculate backoff interval (doubling from the base delay) with jitter
42 |         var backoff = TimeSpan.FromSeconds(baseDelay.TotalSeconds * Math.Pow(2, retries)) + TimeSpan.FromMilliseconds(jitter.Next(0, 1000));
43 |         var delay = backoff < maxBackoff ? backoff : maxBackoff; // TimeSpan has no Min helper, so clamp manually
44 | 
45 |         Task.Delay(delay).Wait(); // Wait before retrying
46 |     }
47 | }
48 | ```
49 | 
-------------------------------------------------------------------------------- /Pages/Microservices/Fault-Tolerant System/Single Point of Failure.md: --------------------------------------------------------------------------------
1 | # Single Point of Failure
2 | 
3 | - A single point of failure is a component whose failure causes the entire system to fail.
4 | - Eliminating single points of failure can be achieved through redundancy, load balancing, and failover mechanisms.
5 | 
6 | ## Redundancy
7 | 
8 | - **Description**: Introducing redundancy involves adding duplicate components, systems, or pathways that can take over in case the primary ones fail. 
This can be applied to hardware components (like servers and network devices), software components (like databases and application servers), and even entire data centers. 9 | - **Implementation**: Redundancy can be implemented at various levels, including data replication across servers, deploying multiple instances of a service, and using RAID configurations for storage. 10 | 11 | ## Load Balancing 12 | 13 | - **Description**: Load balancing distributes the workload evenly across multiple system components to ensure no single component becomes a bottleneck or point of failure. 14 | - **Implementation**: This typically involves deploying a load balancer that efficiently directs incoming traffic or requests to multiple servers or services based on predefined criteria, such as current load or response times. 15 | 16 | ## Failover Mechanisms 17 | 18 | - **Description**: Failover mechanisms automatically redirect requests from a failed component to a backup component without requiring human intervention. This ensures continuous operation even in the event of component failures. 19 | - **Implementation**: Failover can be implemented through clustering, where multiple servers work together and can take over for each other if one fails, or through standby systems that are kept in sync and can be brought online quickly when needed. 20 | 21 | ## **Considerations for Eliminating SPOFs** 22 | 23 | - **Cost vs. Benefit Analysis**: Adding redundancy and failover capabilities can significantly increase the cost and complexity of a system. It's important to analyze the criticality of each system component and apply these strategies where the benefit outweighs the cost. 24 | - **Regular Testing**: Failover and redundancy mechanisms need to be regularly tested to ensure they work as expected in real failure scenarios. This includes testing for both planned failover scenarios and simulating unplanned failures. 25 | - **Geographic Distribution**: For high-availability systems, consider geographic distribution of redundant components to protect against regional outages caused by natural disasters, power outages, or other regional impacts. 26 | - **Monitoring and Alerting**: Implement comprehensive monitoring and alerting for all critical components to detect failures quickly and, if possible, automatically trigger failover mechanisms. 27 | -------------------------------------------------------------------------------- /Pages/Microservices/Fault-Tolerant System/Timeouts.md: -------------------------------------------------------------------------------- 1 | # Timeouts 2 | 3 | Timeouts specify the maximum duration an operation should take before it's considered failed or stuck. 4 | 5 | - Timeouts are set for operations to prevent indefinite waiting for responses from unresponsive components. 6 | - After the timeout period, the operation can be retried, failed gracefully, or a fallback can be used. 7 | - **Application:** Timeouts are commonly set for individual operations, like HTTP requests, database queries, or remote procedure calls. 8 | - **Behavior After Timeout:** When the specified timeout duration is exceeded, the system can take several actions: 9 | - **Retry the operation**, potentially with exponential backoff strategies to avoid overwhelming the target service. 10 | - **Fail the operation gracefully**, returning an error message or status code that indicates the operation could not be completed. 
11 | - **Fallback to an alternative approach**, such as serving cached data or executing a simpler, more reliable operation that doesn’t depend on the unresponsive component. 12 | -------------------------------------------------------------------------------- /Pages/Microservices/Introduction.md: -------------------------------------------------------------------------------- 1 | # Introduction 2 | 3 | ## Monolith's Downsides and Advantages 4 | 5 | - Downsides: 6 | - Monolithic codebase becomes complex and hard to maintain as the application grows 7 | - Scaling is challenging as the entire application needs to be scaled 8 | - Longer release cycles due to the need to build and deploy the entire application 9 | - Tight coupling between components makes it difficult to adopt new technologies 10 | - Advantages: 11 | - Simple to develop and deploy initially 12 | - Cross-cutting concerns (e.g., logging, security) can be centralized 13 | - No interprocess communication overhead 14 | 15 | ## Microservices Downsides and Advantages 16 | 17 | - Downsides: 18 | - Increased complexity due to distributed systems challenges (e.g., distributed transactions, service discovery, communication) 19 | - Operational overhead in managing and monitoring multiple services 20 | - Potential for data inconsistency and duplication due to decentralized data management 21 | - Testing and deployment can be more complex 22 | - Advantages: 23 | - Improved scalability and resilience by scaling individual services independently 24 | - Better fault isolation and containment of failures 25 | - Faster release cycles and easier adoption of new technologies 26 | - Increased agility and flexibility through loose coupling and separation of concerns 27 | -------------------------------------------------------------------------------- /Pages/Microservices/Key Vaults.md: -------------------------------------------------------------------------------- 1 | # Key Vaults 2 | 3 | - Secure storage and management of secrets (e.g., passwords, API keys) are essential in microservices environments. 4 | - **Importance of Key Vaults in Microservices** 5 | - **Security**: Centralizes the management of secrets in a secure way, reducing the risk of exposure in application code or configuration files. 6 | - **Access Control**: Provides fine-grained access controls, allowing only authorized services or applications to retrieve specific secrets. 7 | - **Audit Trails**: Offers auditing capabilities to track access to secrets, helping in compliance and security monitoring. 8 | - **Secrets Rotation**: Facilitates the rotation of secrets, which is a best practice for maintaining security. Automated rotation minimizes the risk associated with static secrets. 9 | - **Simplification of Secret Management**: Abstracts the complexity of securely storing and managing secrets, providing a simplified interface for applications. 10 | - Key vaults like HashiCorp Vault, Azure Key Vault, or AWS Secrets Manager can be used. 11 | -------------------------------------------------------------------------------- /Pages/Microservices/Load Balancing.md: -------------------------------------------------------------------------------- 1 | # Load Balancing 2 | 3 | - Load balancing distributes incoming requests across multiple service instances for better scalability and availability. 4 | - Even distribution of requests, preventing overload on specific services. 5 | - High availability (HA) to ensure uninterrupted service. 
6 | - HA refers to a system's ability to maintain operational status and functionality despite component failures. 7 | - Load balancing can be implemented at various levels (client-side, server-side, API gateway). 8 | - Replication involves creating redundant instances of services to enhance reliability and performance. 9 | - Sharding is a database design approach to improve performance by horizontally partitioning data. 10 | - Horizontal scaling adds more instances of services, while vertical scaling involves increasing the resources (CPU, RAM) of a single instance. 11 | -------------------------------------------------------------------------------- /Pages/Microservices/Metrics, Monitoring, Tracing, Logging.md: -------------------------------------------------------------------------------- 1 | # Metrics, Monitoring, Tracing, Logging 2 | 3 | - **Metrics** 4 | - Metrics are quantitative data that measure various aspects of a system's performance and health. They are typically time-series data and can include things like request count, error rates, response times, resource utilization (CPU, memory usage), and more. Metrics provide a high-level overview of system health and performance trends over time. 5 | - **Tools**: Prometheus, Grafana, and Datadog are popular tools for collecting, storing, and visualizing metrics. 6 | - **Monitoring** 7 | - Monitoring involves the continuous observation of a system's operational state and performance. It uses metrics and logs to alert operators about anomalies, failures, or significant changes in the system's behavior. Monitoring aims to ensure that the system operates within its expected parameters and helps in identifying issues before they impact users. 8 | - **Tools**: Tools like Nagios, Zabbix, New Relic, and the previously mentioned Grafana and Prometheus are widely used for monitoring applications and infrastructure. 9 | - **Tracing** 10 | - Tracing provides detailed information about a single request's path through a system. In microservices, a single user action can involve multiple services; tracing helps in identifying how a request travels across these services, including latency and failure points. This is crucial for diagnosing problems in a distributed system where issues can span multiple services. 11 | - **Tools**: Jaeger, Zipkin, and AWS X-Ray are examples of tools that enable distributed tracing, providing insights into request flow and performance bottlenecks. 12 | - **Logging** 13 | - Logging involves recording discrete events that happen within an application, such as user actions, system errors, or informational messages. Logs provide detailed, contextual information about specific events, making them invaluable for debugging issues, understanding application behavior, and auditing purposes. In microservices, centralized logging becomes important due to the dispersed nature of log data across services. 14 | - **Tools**: Elasticsearch, Logstash, and Kibana (the ELK Stack), Fluentd, and Splunk are popular for log aggregation, storage, and analysis. 15 | - **Best Practices for Observability in Microservices** 16 | - **Centralize Observability Data**: Aggregate logs, metrics, and traces from all services into a centralized system to facilitate analysis and correlation of data across the distributed system. 17 | - **Standardize Across Services**: Adopt consistent logging formats, metric names, and trace identifiers across all microservices to simplify aggregation and analysis. 
18 | - **Implement Contextual Logging and Tracing**: Include unique identifiers (such as request IDs) in logs and traces to link events and traces across services, enabling easier debugging and analysis. 19 | - **Monitor Service-Level Objectives (SLOs)**: Define and monitor key performance indicators (KPIs) and SLOs to ensure the system meets its performance and reliability targets. 20 | - **Automate Alerts Based on Anomalies and Thresholds**: Use monitoring tools to set up automated alerts for anomalies or breaches of predefined thresholds to enable quick response to potential issues. 21 | -------------------------------------------------------------------------------- /Pages/Microservices/Service Mesh.md: -------------------------------------------------------------------------------- 1 | # Service Mesh 2 | 3 | - A service mesh (e.g., Istio, Linkerd, Consul Connect) provides a dedicated infrastructure layer for service-to-service communication. 4 | - It offers features like service discovery, load balancing, encryption, authentication, and observability out of the box. 5 | - **Key Features of a Service Mesh** 6 | - **Service Discovery**: Automatically detects and manages services within the infrastructure, allowing them to find and communicate with each other without hard-coded addresses, which enhances flexibility and scalability. 7 | - **Load Balancing**: Distributes incoming requests across multiple instances of a service, improving responsiveness and availability by ensuring no single service instance becomes a bottleneck. 8 | - **Encryption and Security**: Offers end-to-end encryption for service-to-service communication, ensuring data confidentiality and integrity. It also manages authentication and authorization, providing secure access control. 9 | - **Traffic Management**: Controls the flow of traffic and API calls between services, including routing, retries, failovers, and dynamic request handling. This helps in implementing A/B testing, canary releases, and staged rollouts. 10 | - **Observability**: Provides rich telemetry data (logs, metrics, and traces) that offer insights into the behavior and performance of microservices. This data is crucial for monitoring, troubleshooting, and understanding system dynamics. 11 | - **Fault Injection and Resilience**: Supports testing the system's robustness by introducing faults (delays, errors) in a controlled manner. It also implements resilience patterns like circuit breakers and timeouts to manage service dependencies gracefully. 12 | - **Popular Service Mesh Implementations** 13 | - **Istio**: One of the most popular service mesh frameworks, Istio provides a comprehensive set of features for traffic management, security, and observability. It works with Kubernetes and other platforms and is known for its robustness and flexibility. 14 | - **Linkerd**: A CNCF (Cloud Native Computing Foundation) project, Linkerd is lightweight, fast, and easy to install. It emphasizes simplicity and minimalism, providing essential service mesh features with low overhead. 15 | - **Consul Connect**: Part of HashiCorp Consul, Consul Connect focuses on service discovery and configuration, along with providing a service mesh capability that includes secure service-to-service communication with automatic TLS encryption and identity-based authorization. 16 | - **Benefits of Using a Service Mesh** 17 | - **Improved Security**: Automated encryption and fine-grained access controls enhance the security of communications between services. 
18 | - **Enhanced Observability**: Integrated logging, monitoring, and tracing functionalities provide deep insights into the system, helping with debugging and performance tuning. 19 | - **Operational Simplicity**: By abstracting the inter-service communication complexities, a service mesh simplifies the operational burden on developers and operators. 20 | - **Increased Resilience**: Built-in fault tolerance features help in maintaining system stability and availability, even when individual services or components fail. 21 | -------------------------------------------------------------------------------- /Pages/Microservices/Service Registry & Discovery.md: -------------------------------------------------------------------------------- 1 | # Service Registry & Discovery 2 | 3 | - A service registry is a centralized directory that helps in tracking and managing available services in a microservices architecture. 4 | - Consul, Redis, and similar tools are used for service discovery to locate and communicate with microservices. 5 | - Service discovery tools are crucial in dynamic environments where microservices scale dynamically, ensuring effective communication and load balancing. 6 | -------------------------------------------------------------------------------- /Pages/NET.md: -------------------------------------------------------------------------------- 1 | # .NET 2 | 3 | [CLR / BCL](NET/CLR%20BCL.md) 4 | 5 | [Data Types and Memory Allocation](NET/Data%20Types%20and%20Memory%20Allocation.md) 6 | 7 | [Various Aspects](NET/Various%20Aspects.md) 8 | 9 | [Generics](NET/Generics.md) 10 | 11 | [Events](NET/Events.md) 12 | 13 | [Lambda Expressions](NET/Lambda%20Expressions.md) 14 | 15 | [Try Statements and Exceptions](NET/Try%20Statements%20and%20Exceptions.md) 16 | 17 | [Comparing Strings](NET/Comparing%20Strings.md) 18 | 19 | [StringBuilder](NET/StringBuilder.md) 20 | 21 | [Standard Equality Protocols](NET/Standard%20Equality%20Protocols.md) 22 | 23 | [Collections](NET/Collections.md) 24 | 25 | [LINQ Query](NET/LINQ%20Query.md) 26 | 27 | [GC and Memory](NET/GC%20and%20Memory.md) 28 | 29 | [Stream](NET/Stream.md) 30 | 31 | [Network](NET/Network.md) 32 | 33 | [Serialization](NET/Serialization.md) 34 | 35 | [Assemblies](NET/Assemblies.md) 36 | 37 | [Reflection](NET/Reflection.md) 38 | 39 | [Dynamics](NET/Dynamics.md) 40 | 41 | [Parallel Programming](NET/Parallel%20Programming.md) 42 | 43 | [Span and Memory](NET/Span%20and%20Memory.md) 44 | -------------------------------------------------------------------------------- /Pages/NET/Assemblies.md: -------------------------------------------------------------------------------- 1 | # Assemblies 2 | 3 | - Managing assemblies in C#. 4 | - **The Assembly Manifest:** Each assembly contains a manifest that provides metadata about the assembly, including version information, culture, and information about the resources and types it contains. 5 | - **Resources and Satellite Assemblies:** Assemblies can embed resources such as images and strings. Satellite assemblies are used to manage resources for different cultures and languages, facilitating localization. 6 | - **Assembly Loading:** The CLR loads assemblies into an application domain when the application requires them. This process can be controlled and customized through the use of various load methods. 7 | - **Assembly Resolution:** When an assembly references another assembly, the CLR must resolve the location and version of the referenced assembly. 
This process can be customized using configuration files or programmatically via the **`AppDomain.AssemblyResolve`** event. 8 | - **Assembly Load Contexts:** .NET Core introduced the concept of assembly load contexts to provide isolation between different parts of an application and to support dynamic loading and unloading of assemblies. 9 | - **AssemblyDependencyResolver:** Introduced in .NET Core 3.0, it helps in resolving assembly and native library paths during dynamic loading, based on the dependencies specified in a .deps.json file. 10 | -------------------------------------------------------------------------------- /Pages/NET/CLR BCL.md: -------------------------------------------------------------------------------- 1 | # CLR / BCL 2 | 3 | ## Common Language Runtime 4 | 5 | - **Intermediate Language (IL):** The intermediate code produced by the .NET compiler. 6 | - **Overview:** When you compile a .NET application, the source code is transformed into IL rather than directly into machine code. IL is then compiled into native machine code just before execution. 7 | - **Purpose:** The use of IL ensures that .NET applications can be platform-independent at the source code level. The actual platform-specific adaptation happens at runtime through JIT compilation. 8 | - **Managed language:** A language that runs on the CLR, providing automatic memory management. 9 | - **Overview:** A managed language is any programming language that can be compiled into Intermediate Language (IL) and executed by the CLR. Examples include C#, VB.NET, and F#. 10 | - **Key Features:** Managed languages benefit from the CLR's features such as automatic memory management, type safety, exception handling, and security. This allows developers to focus more on business logic and less on low-level programming concerns. 11 | - **Managed code:** Code that is executed by the CLR, benefiting from its services like garbage collection. 12 | - **Overview:** Managed code refers to the code that targets the CLR and thus is executed under its management. Managed code benefits from the runtime services provided by the CLR, such as garbage collection, just-in-time compilation, and access to .NET Framework's class libraries. 13 | - **Advantages:** Writing managed code enables developers to take advantage of the robust, secure, and efficient execution environment provided by the CLR. It simplifies development and reduces the number of bugs related to memory management and type safety. 14 | - **Just-In-Time (JIT):** The process of converting IL to native machine code at runtime. 15 | - **Process:** Just-In-Time (JIT) compilation is the process of converting the platform-agnostic IL code into native machine code that the computer's processor can execute. This conversion happens on-the-fly, at the application's runtime. 16 | - **Benefits:** JIT compilation allows .NET applications to run natively on any supported architecture without needing a recompilation of the source code. It optimizes the application for the specific hardware configuration it is running on, potentially enhancing performance. 17 | 18 | ## Frameworks and Base Class Libraries 19 | 20 | - **The Base Class Libraries (BCL):** A set of classes available across all .NET applications. 21 | - **Application framework layers:** Different layers providing specific functionalities. 22 | - **.NET Standard:** A specification that defines a set of APIs to be supported on .NET platforms. 
23 | 
-------------------------------------------------------------------------------- /Pages/NET/Collections.md: --------------------------------------------------------------------------------
1 | # Collections
2 | 
3 | - Various collections and their use in C#.
4 | 
5 | ## **BitArray**
6 | 
7 | - **Namespace**: **`System.Collections`**
8 | - **Description**: Represents an array of boolean values (true or false) that are represented as bits (1 or 0). It is a compact way to store bit flags in a single value.
9 | - **Use Cases**: Useful for handling sets of binary flags efficiently, performing bitwise operations, and when you need to manipulate large amounts of bits while minimizing memory usage.
10 | 
11 | ## **HashSet`<T>`**
12 | 
13 | - **Namespace**: **`System.Collections.Generic`**
14 | - **Description**: Represents a set of unique values. It is implemented as a hash table, providing fast operations for insertion, deletion, and searches.
15 | - **Use Cases**: Ideal for ensuring no duplicates are stored, performing set operations like union, intersection, and determining whether a particular item is present in a collection quickly.
16 | 
17 | ## **SortedSet`<T>`**
18 | 
19 | - **Namespace**: **`System.Collections.Generic`**
20 | - **Description**: Similar to **`HashSet<T>`**, but automatically sorts the elements as they are inserted. It uses a binary search tree under the hood.
21 | - **Use Cases**: Useful when you need to maintain a sorted collection of unique items. It allows for efficient searches, insertions, and deletions while maintaining order.
22 | 
23 | ## **Dictionaries**
24 | 
25 | Dictionaries are collections that store key-value pairs. They are optimized for fast retrieval of values based on keys.
26 | 
27 | ## Dictionary`<TKey, TValue>`
28 | 
29 | - **Namespace**: **`System.Collections.Generic`**
30 | - **Description**: A collection of key-value pairs that are organized based on the hash code of the key. It allows for fast lookups.
31 | - **Use Cases**: Ideal for scenarios where you need to quickly access data using a unique key, such as caching, lookups, and settings/preferences storage.
32 | 
33 | ## SortedDictionary`<TKey, TValue>`
34 | 
35 | - **Namespace**: **`System.Collections.Generic`**
36 | - **Description**: Similar to **`Dictionary<TKey, TValue>`** but maintains the keys in sorted order. It is implemented using a binary search tree.
37 | - **Use Cases**: Useful when you need the functionality of a dictionary but also need to maintain the keys in a sorted order for iteration purposes.
38 | 
39 | ## **EqualityComparer`<T>`**
40 | 
41 | - **Namespace**: **`System.Collections.Generic`**
42 | - **Description**: An abstract class that allows for the creation of equality comparison operations for types. The default implementation uses **`Object.Equals`** and **`Object.GetHashCode`** methods, but it can be overridden.
43 | - **Use Cases**: Customizing how collections that rely on equality checks (like **`HashSet<T>`** and **`Dictionary<TKey, TValue>`**) determine whether items are equal. This is particularly useful for implementing case-insensitive checks or comparing complex objects based on certain fields/properties.
44 | 
-------------------------------------------------------------------------------- /Pages/NET/Comparing Strings.md: --------------------------------------------------------------------------------
1 | # Comparing Strings
2 | 
3 | - Different methods for comparing strings.
4 | 
5 | ## **Equality Comparison**
6 | 
7 | Equality comparison checks if two strings are identical. 
In C#, you can compare strings for equality using the **`==`** operator or the **`String.Equals`** method. 8 | 9 | - **Using `==` Operator**: This compares the values of the strings. (**Case-Sensitive)** 10 | 11 | ```csharp 12 | string str1 = "hello"; 13 | string str2 = "hello"; 14 | bool areEqual = str1 == str2; // true 15 | ``` 16 | 17 | - **Using `String.Equals` Method**: Provides more flexibility, allowing you to specify the kind of comparison (case-sensitive or case-insensitive, culture-specific or ordinal). 18 | 19 | ```csharp 20 | bool areEqualOrdinal = string.Equals(str1, str2, StringComparison.Ordinal); 21 | bool areEqualIgnoreCase = string.Equals(str1, str2, StringComparison.OrdinalIgnoreCase); 22 | ``` 23 | 24 | ## **Order Comparison** 25 | 26 | Order comparison is about determining the lexical ordering of two strings, which is essential for sorting operations. You can compare strings ordinally or based on culture-specific rules. 27 | 28 | - **Ordinal Comparison**: Compares strings based on the numerical value of each character in the strings. This method is fast and culture-insensitive. 29 | 30 | ```csharp 31 | int result = string.Compare(str1, str2, StringComparison.Ordinal); 32 | ``` 33 | 34 | The **`result`** is 0 if the strings are equal, less than 0 if **`str1`** is lexically before **`str2`**, and greater than 0 if **`str1`** is lexically after **`str2`**. 35 | 36 | - **Culture-Specific Comparison**: Compares strings according to culture-specific rules. This comparison considers linguistic rules of the specified culture, including case, accents, and character options. 37 | 38 | ```csharp 39 | int result = string.Compare(str1, str2, StringComparison.CurrentCulture); 40 | ``` 41 | 42 | You can also perform a case-insensitive comparison by using **`StringComparison.CurrentCultureIgnoreCase`**. 43 | 44 | ## **StringComparison Enum** 45 | 46 | The **`StringComparison`** enumeration provides options for specifying the type of comparison: 47 | 48 | - **Ordinal**: Fast, binary comparisons. 49 | - **OrdinalIgnoreCase**: Case-insensitive ordinal comparisons. 50 | - **CurrentCulture** and **CurrentCultureIgnoreCase**: Culture-sensitive comparisons based on the current culture. 51 | - **InvariantCulture** and **InvariantCultureIgnoreCase**: Culture-sensitive comparisons based on the invariant culture, useful for displaying data. 52 | - **StringComparison.InvariantCulture**: Use this for comparisons that are culturally agnostic but still need to handle case and diacritic variations. 53 | 54 | ## **Best Practices** 55 | 56 | - Use ordinal comparisons (**`StringComparison.Ordinal`** or **`StringComparison.OrdinalIgnoreCase`**) for most general-purpose comparisons, especially for internal data structures, identifiers, and cases where performance is critical. 57 | - Use culture-specific comparisons (**`StringComparison.CurrentCulture`** or **`StringComparison.InvariantCulture`**) when comparing strings displayed to the user, where linguistic rules such as case sensitivity and diacritics are important. 58 | - Be aware of the performance implications of culture-specific comparisons and the potential for varying results across different cultures. 59 | -------------------------------------------------------------------------------- /Pages/NET/Data Types and Memory Allocation.md: -------------------------------------------------------------------------------- 1 | # Data Types and Memory Allocation 2 | 3 | - **Value Types:** Types that hold their data in the memory they allocate. 
59 |
--------------------------------------------------------------------------------
/Pages/NET/Data Types and Memory Allocation.md:
--------------------------------------------------------------------------------
1 | # Data Types and Memory Allocation
2 |
3 | - **Value Types:** Types that hold their data in the memory they allocate.
4 | - **Examples:** **`int`**, **`float`**, **`bool`**, and **`struct`**. Each has a defined size and stores values directly.
5 | - **Reference Types:** Types whose variables store a reference to their data in memory.
6 | - **Examples:** Classes (**`class`**), arrays, delegates, and strings (**`string`**). Their data lives on the heap, which is managed by the garbage collector (GC); the GC automatically handles memory allocation and deallocation but can introduce overhead.
7 | - **double Versus decimal:** Distinctions between floating-point types.
8 | - **`double`:** A double-precision binary floating-point type, suitable for scientific and engineering calculations that do not require complete precision, due to its faster operations compared to **`decimal`**. It is not recommended for financial calculations because of potential rounding errors.
9 | - **`decimal`:** A high-precision decimal floating-point type with more significant digits. Because it represents numbers in base 10, it avoids the binary rounding errors that affect **`double`**, making it ideal for financial and monetary calculations. The trade-off is that operations are slower and consume more memory than **`double`**.
10 | - **Stack:** Memory region for method calls and local variables.
11 | - **Purpose:** The stack is a region of memory that stores value types and the execution context of method calls, including parameters, local variables, and return addresses. It operates in a last-in, first-out (LIFO) manner, making it very efficient but limited in size.
12 | - **Characteristics:** Memory allocation and deallocation on the stack are extremely fast. However, the stack has limited space, and stack overflow can occur if too much stack memory is used (e.g., via deep recursion).
13 | - **Heap:** Dynamic memory for objects with longer lifetimes.
14 | - **Purpose:** The heap is used for dynamic memory allocation, particularly for reference types. It's managed by the garbage collector (GC) in .NET, which automates memory allocation and deallocation and prevents most memory leaks.
15 | - **Characteristics:** The heap is larger than the stack, allowing for the allocation of larger objects. However, heap operations are slower due to the overhead of garbage collection and the necessity to manage memory fragmentation.
16 |
--------------------------------------------------------------------------------
/Pages/NET/Dynamics.md:
--------------------------------------------------------------------------------
1 | # Dynamics
2 |
3 | ## Dynamic Code Generation
4 |
5 | - Generating and executing code at runtime.
6 | - **Generating IL with `DynamicMethod`:** This allows for the creation of methods at runtime using IL (Intermediate Language). It's powerful for scenarios requiring high performance and dynamic behavior.
7 | - **Emitting Assemblies and Types:** The **`System.Reflection.Emit`** namespace provides classes that can be used to define and create new types, methods, and even entire assemblies at runtime.
8 |
9 | ## Dynamic Programming
10 |
11 | - Dynamic Language Runtime and dynamic features in C#.
12 |
13 | The Dynamic Language Runtime (DLR) and the **`dynamic`** keyword in C# simplify operations involving dynamic types.
14 |
15 | - **Dynamic Language Runtime (DLR):** Provides runtime services for dynamic languages and dynamic features in C#, enabling operations on objects whose types are not known until runtime.
16 | - **`ExpandoObject`:** A class in the **`System.Dynamic`** namespace that enables adding and removing members dynamically at runtime. It's useful for working with dynamically shaped data, such as JSON parsing results.
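A minimal sketch of this behavior (the member names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

class Demo
{
    static void Main()
    {
        dynamic person = new ExpandoObject();
        person.Name = "Ada";   // member added at runtime
        person.Age = 36;       // another dynamic member

        // ExpandoObject also implements IDictionary<string, object>,
        // so members can be inspected and removed dynamically.
        var members = (IDictionary<string, object>)person;
        members.Remove("Age");

        Console.WriteLine(person.Name); // Ada
    }
}
```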
17 |
--------------------------------------------------------------------------------
/Pages/NET/Events.md:
--------------------------------------------------------------------------------
1 | # Events
2 |
3 | - Mechanism for communication between objects.
4 | - **Delegate:** Delegates are type-safe pointers to methods. They are the foundation of events in .NET. An event is declared in a class as a delegate type.
5 | - **Parameter Compatibility:** The parameters of the delegate define what arguments must be passed to the methods it points to. A method whose parameter types are less derived than the delegate's is still compatible (parameter contravariance):
6 |
7 | ```csharp
8 | delegate void StringAction(string s);
9 | class Test
10 | {
11 |     static void Main()
12 |     {
13 |         StringAction sa = new StringAction(ActOnObject);
14 |         sa("hello");
15 |     }
16 |     static void ActOnObject(object o) => Console.WriteLine(o); // hello
17 | }
18 | ```
19 |
20 | - **Return Type Compatibility:** The return type of the delegate specifies the return type of the methods it can point to. A method may return a more derived type than the delegate specifies (return-type covariance):
21 |
22 | ```csharp
23 | delegate object ObjectRetriever();
24 | class Test
25 | {
26 |     static void Main()
27 |     {
28 |         ObjectRetriever o = new ObjectRetriever(RetrieveString);
29 |         object result = o();
30 |         Console.WriteLine(result); // hello
31 |     }
32 |     static string RetrieveString() => "hello";
33 | }
34 | ```
35 |
36 | - **Generic Delegate Type:** Generics can be used with delegates to define parameterized delegate types, making them more flexible and reusable. For example, **`Func`** and **`Action`** are built-in generic delegates.
37 |
38 | ```csharp
39 | delegate TResult Func<out TResult>();
40 | // allowing:
41 | Func<string> x = ...;
42 | Func<object> y = x;
43 |
44 | delegate void Action<in T>(T arg);
45 | // allowing:
46 | Action<object> x = ...;
47 | Action<string> y = x;
48 | ```
49 |
--------------------------------------------------------------------------------
/Pages/NET/GC and Memory.md:
--------------------------------------------------------------------------------
1 | # GC and Memory
2 |
3 | ## Garbage Collection and Memory Consumption
4 |
5 | - Managing memory and garbage collection in C#.
6 |
7 | ### **Finalizers**
8 |
9 | Finalizers are special methods in a class that are called by the Garbage Collector when an object is being collected. They are used to release unmanaged resources that the object may have acquired during its lifetime, such as file handles or database connections. Finalizers are defined using the **`~ClassName()`** syntax.
10 |
11 | - **Usage**: Rarely needed, as unmanaged resources are better managed through the **`IDisposable`** interface and the **`Dispose`** method pattern.
12 | - **Considerations**: Finalizers delay the reclamation of the object's memory and can add performance overhead. They run on a dedicated, single-threaded finalizer thread, which can cause contention and delays in a heavily loaded system.
13 |
14 | ### **How the GC Works**
15 |
16 | The .NET Garbage Collector operates on the principle of managed memory, automatically managing the allocation and release of memory for managed objects. It works by:
17 |
18 | 1. **Marking**: Identifying which objects are still in use by traversing the object graph from root references.
19 | 2. **Compacting**: Reclaiming the memory occupied by unreachable objects and compacting the remaining objects to reduce fragmentation.
20 | 3. **Generations**: Objects are categorized into generations (0, 1, and 2) based on their lifetime, with newly created objects in generation 0. The GC collects younger generations more frequently, as short-lived objects are more common and less costly to collect.
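The generational behavior can be observed directly. A small sketch (output can vary by runtime and GC configuration, and forcing collections is for demonstration only):

```csharp
using System;

class Demo
{
    static void Main()
    {
        object obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // 0: freshly allocated

        GC.Collect();                             // force a collection (demo only)
        Console.WriteLine(GC.GetGeneration(obj)); // typically 1: the object survived

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // typically 2: survived again
    }
}
```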
21 |
22 | ### **Managed Memory Leaks**
23 |
24 | Despite automatic memory management, it's still possible to encounter "memory leaks" in managed code, typically due to objects that remain reachable and thus are not collected by the GC. Common causes include:
25 |
26 | - **Static References**: Objects referenced by static fields are alive for the application's lifetime unless explicitly set to null.
27 | - **Event Handlers**: Not unregistering event handlers can keep objects alive longer than intended.
28 | - **IDisposable Pattern**: Not properly disposing of objects that implement **`IDisposable`** can keep associated unmanaged resources allocated.
29 |
30 | ### **Best Practices for Memory Management**
31 |
32 | - **Dispose Pattern**: Implement the **`IDisposable`** interface and the dispose pattern for classes that use unmanaged resources. Always call **`Dispose`** on IDisposable objects once you're done with them, preferably with a **`using`** statement.
33 | - **Finalizers**: Avoid finalizers unless necessary. Prefer the **`IDisposable`** pattern for cleaning up unmanaged resources.
34 | - **Event Handlers**: Ensure that you unsubscribe from events when the subscriber needs to be collected.
35 | - **Monitor Memory Usage**: Use diagnostic tools (like Visual Studio diagnostics tools, dotMemory) to monitor your application's memory usage and to identify potential memory leaks.
36 |
--------------------------------------------------------------------------------
/Pages/NET/Generics.md:
--------------------------------------------------------------------------------
1 | # Generics
2 |
3 | - **Constraints:** Restrictions on the types that can be used as generic arguments.
4 | - Examples include **`where T : class`** (reference type constraint), **`where T : struct`** (value type constraint), and **`where T : IComparable`** (interface constraint).
5 |
6 | ```csharp
7 | where T : base-class // Base-class constraint
8 | where T : interface // Interface constraint
9 | where T : class // Reference-type constraint
10 | where T : class? // Nullable Reference-type constraint
11 | where T : struct // Value-type constraint (excludes Nullable types)
12 | where T : unmanaged // Unmanaged constraint
13 | where T : new() // Parameterless constructor constraint
14 | where U : T // Naked type constraint
15 | where T : notnull // Non-nullable value type, or from C# 8
16 | // a non-nullable reference type.
17 | ```
18 |
19 | - **Covariance:** Allowing a more derived type than originally specified.
20 | - For example, an **`IEnumerable<string>`** can be assigned to an **`IEnumerable<object>`**, because **`IEnumerable<T>`** is covariant in **`T`**.
21 | - For instance, type `IFoo<out T>` has a covariant `T` if the following is legal:
22 |
23 | ```csharp
24 | IFoo<string> s = ...;
25 | IFoo<object> b = s;
26 | ```
27 |
28 | - Interfaces and delegates permit variant type parameters, but classes do not: a class's type parameters can appear in
29 | both input and output positions (fields, method parameters, return types), so variance could not be guaranteed type-safe.
30 | - B[] can be cast to A[] if B subclasses A (and both are reference types)
31 |
32 | ```csharp
33 | Bear[] bears = new Bear[3];
34 | Animal[] animals = bears; // OK
35 | ```
36 |
37 | - The downside of this reusability is that element assignments can fail at runtime:
38 |
39 | ```csharp
40 | animals[0] = new Camel(); // Runtime error
41 | ```
42 |
43 | - **Contravariance:** Allowing a less derived type than originally specified.
44 | - If an interface or delegate consumes values of type **`Base`**, contravariance allows it to be used where a consumer of a more derived type is expected:
45 |
46 | ```csharp
47 | // Stack<T> here is a custom stack that implements IPushable<T>.
48 | public interface IPushable<in T> { void Push(T obj); }
49 | IPushable<Animal> animals = new Stack<Animal>();
50 | IPushable<Bear> bears = animals; // Legal
51 | bears.Push(new Bear());
52 | ```
53 |
--------------------------------------------------------------------------------
/Pages/NET/LINQ Query.md:
--------------------------------------------------------------------------------
1 | # LINQ Query
2 |
3 | - Language-Integrated Query and its features.
4 |
5 | ## **Deferred Execution**
6 |
7 | Deferred execution means that the evaluation of a LINQ query is delayed until its results are actually needed. This allows for efficient memory use and execution because the query is not executed at the point of its declaration but rather at the point of iteration or conversion of the query into a collection.
8 |
9 | **Benefits**:
10 |
11 | - **Efficiency**: Only the required elements are retrieved and processed.
12 | - **Flexibility**: Queries can be defined once and executed multiple times with potentially different results if the underlying data has changed.
13 |
14 | ## **Subqueries**
15 |
16 | LINQ supports subqueries, which are queries nested within another query. Subqueries can be used for filtering, selecting, or projecting data based on conditions evaluated against a nested dataset.
17 |
18 | **Example**:
19 |
20 | ```csharp
21 | var highValueOrders = from o in orders
22 |                       where (from p in o.Products
23 |                              select p.Price).Average() > 100
24 |                       select o;
25 | ```
26 |
27 | In this example, the subquery calculates the average price of products in each order and filters orders where this average is greater than 100.
28 |
29 | ## **Interpreted Queries**
30 |
31 | Interpreted queries, such as those used with LINQ to SQL or LINQ to Entities (Entity Framework), are translated into the query language of the underlying datastore (e.g., SQL for relational databases). This allows for seamless integration with various data sources and enables the database to optimize query execution.
32 |
33 | **Benefits**:
34 |
35 | - **Abstraction**: Developers can work with a familiar syntax without needing to know the specifics of the query language for each datastore.
36 |
37 | - **Performance**: The datastore can use its query optimization capabilities to execute the query efficiently.
38 |
39 | **Considerations**:
40 |
41 | - Not all LINQ features or C# expressions can be translated to the query language of the datastore. Unsupported features or complex queries may need to be simplified or partially evaluated in memory.
42 | - Understanding the generated queries is important for diagnosing performance issues and ensuring efficient data access.
43 |
--------------------------------------------------------------------------------
/Pages/NET/Lambda Expressions.md:
--------------------------------------------------------------------------------
1 | # Lambda Expressions
2 |
3 | - Concise syntax for anonymous methods.
4 | - **Closures and Captured Variables:** Lambda expressions can reference variables from the outer scope in which they are defined. These variables are "captured" by the lambda, allowing the lambda to access and modify them, as shown in the sketch below.
5 | - **Expression Trees:** Lambda expressions can be compiled into expression trees, which represent code as a data structure that can be examined, modified, or executed at runtime.
6 | - **Lambda Expressions Versus Local Methods:** Lambda expressions are often compared to local methods. While local methods are named and can be reused within their containing method, lambda expressions are concise and can easily be passed as arguments or used in LINQ queries.
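A minimal sketch of variable capture (the counter is illustrative):

```csharp
using System;

class Demo
{
    static void Main()
    {
        int count = 0;                      // outer variable
        Action increment = () => count++;   // the lambda captures 'count'

        increment();
        increment();

        Console.WriteLine(count); // 2: the captured variable was modified by the lambda
    }
}
```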
7 |
--------------------------------------------------------------------------------
/Pages/NET/Network.md:
--------------------------------------------------------------------------------
1 | # Network
2 |
3 | ## Network Architecture
4 |
5 | - Handling network-related operations in C#.
6 |
7 | ### URIs
8 |
9 | - **`System.Uri` Class:** Represents a Uniform Resource Identifier (URI) and provides methods for parsing and constructing URIs. It's widely used across .NET networking libraries to specify the addresses of network resources.
10 |
11 | ### WebClient
12 |
13 | - **`WebClient` Class:** A higher-level abstraction for sending HTTP requests and receiving responses. It's easy to use for downloading or uploading data but is considered outdated and has been superseded by **`HttpClient`**.
14 |
15 | ### HttpClient
16 |
17 | - **`HttpClient` Class:** Provides a modern, flexible, and reusable way to send HTTP requests and receive responses. It's designed to be instantiated once and reused throughout the life of an application, supporting asynchronous operations and HTTP/2 features.
18 |
19 | ### HttpListener
20 |
21 | - **`HttpListener` Class:** A simple, programmatically controlled HTTP server that listens for HTTP requests and processes them. Useful for creating HTTP servers without relying on IIS or other external server software.
22 |
23 | ### TCP and UDP
24 |
25 | - **TCP (Transmission Control Protocol):** Provides a reliable, connection-oriented way to send and receive data over the network. In C#, **`TcpClient`** and **`TcpListener`** classes are used for client-server TCP communications.
26 | - **UDP (User Datagram Protocol):** Offers a connectionless protocol for scenarios where speed is preferred over reliability. **`UdpClient`** is used for sending and receiving datagrams over UDP.
27 |
--------------------------------------------------------------------------------
/Pages/NET/Parallel Programming.md:
--------------------------------------------------------------------------------
1 | # Parallel Programming
2 |
3 | - Writing parallel and concurrent code in C#.
4 |
5 | ## **Channel**
6 |
7 | - **Overview:** Channels are used for producer-consumer scenarios, where one or more tasks are producing data and one or more tasks are consuming that data. They are part of the **`System.Threading.Channels`** namespace introduced in .NET Core 3.0.
8 | - **Usage:** Channels are useful for coordinating between async operations, allowing for efficient and safe passing of data between threads or tasks.
9 |
10 | ## **Data Parallelism and Task Parallelism**
11 |
12 | - **Data Parallelism:** Involves breaking down a data set into smaller chunks and processing each chunk in parallel. This is particularly effective for operations that can be performed independently on segments of data.
13 | - **Task Parallelism:** Refers to executing different tasks in parallel, where each task can perform a different operation. It's useful when different operations can be performed concurrently.
14 |
15 | ## **PLINQ (Parallel LINQ)**
16 |
17 | - **Overview:** PLINQ extends LINQ to allow query operations to run in parallel, automatically splitting the data source across multiple threads and combining the results once all threads complete.
18 | - **Usage:** Best suited for CPU-intensive query operations over large data sets. You can convert a LINQ query to PLINQ by calling the **`.AsParallel()`** method on the data source.
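A minimal PLINQ sketch (the workload is illustrative):

```csharp
using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        // AsParallel() partitions the source across worker threads;
        // the partial results are combined when the query executes.
        long sumOfSquares = Enumerable.Range(1, 1_000_000)
                                      .AsParallel()
                                      .Select(n => (long)n * n)
                                      .Sum();

        Console.WriteLine(sumOfSquares);
    }
}
```

Note that PLINQ does not preserve the source ordering of results unless **`AsOrdered()`** is applied.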
19 |
20 | ## **The Parallel Class**
21 |
22 | - **Overview:** The **`System.Threading.Tasks.Parallel`** class provides parallel versions of **`for`** and **`foreach`** loops (**`Parallel.For`** and **`Parallel.ForEach`**), as well as a method for running a set of actions in parallel (**`Parallel.Invoke`**).
23 | - **Usage:** Simplifies parallelizing loops and tasks without manually managing threads or tasks.
24 |
25 | ## **Task Parallelism**
26 |
27 | - **Overview:** In .NET, task parallelism is implemented through the **`Task`** and **`Task<TResult>`** classes in the **`System.Threading.Tasks`** namespace, allowing for asynchronous and parallel operations.
28 | - **Usage:** You can start new tasks with **`Task.Run()`** and coordinate multiple tasks using **`Task.WhenAll()`** or **`Task.WhenAny()`**.
29 |
30 | ## **Concurrent Collections**
31 |
32 | - **Overview:** The **`System.Collections.Concurrent`** namespace provides thread-safe collection classes like **`ConcurrentBag<T>`**, **`ConcurrentQueue<T>`**, **`ConcurrentStack<T>`**, and **`ConcurrentDictionary<TKey, TValue>`**.
33 | - **Usage:** These collections are designed for high-performance scenarios where multiple threads are adding, removing, or updating items concurrently.
34 |
35 | ## **BlockingCollection`<T>`**
36 |
37 | - **Overview:** **`BlockingCollection<T>`** is a thread-safe collection class that provides blocking and bounding capabilities for both producer and consumer scenarios.
38 | - **Usage:** It is particularly useful in scenarios where producers may produce data at a faster rate than consumers can consume it, or vice versa, as it can limit the size of the collection and block adding or taking operations based on the state of the collection.
39 |
--------------------------------------------------------------------------------
/Pages/NET/Reflection.md:
--------------------------------------------------------------------------------
1 | # Reflection
2 |
3 | ## Reflection and Metadata
4 |
5 | - Examining and interacting with type information at runtime.
6 | - **`GetType` Method vs `typeof` Operator:** **`GetType`** is used on an instance to get its runtime type, while **`typeof`** obtains the **`System.Type`** object for a specified type name at compile time.
7 | - **Obtaining a Type:** Besides **`GetType`** and **`typeof`**, **`Type.GetType(string)`** can be used to get a type by its fully qualified name.
8 | - **Instantiating Types:** Reflection allows creating instances of types dynamically using **`Activator.CreateInstance`**.
9 | - **Member Types:** Reflection enables access to different member types, including methods, properties, fields, and events.
10 | - **Late Binding:** Using reflection, you can invoke methods and access properties on objects dynamically at runtime, known as late binding.
11 | - **Attributes:** Reflection can be used to read custom attributes applied to types and members, enabling metadata-driven programming.
12 |
--------------------------------------------------------------------------------
/Pages/NET/Serialization.md:
--------------------------------------------------------------------------------
1 | # Serialization
2 |
3 | ## Serialization Engines
4 |
5 | - Serializing and deserializing data in C#.
6 |
7 | ### XmlSerializer
8 |
9 | - **`System.Xml.Serialization.XmlSerializer`:** Enables object serialization into XML documents and vice versa. It's straightforward to use but works only with public properties and fields and requires objects to have a parameterless constructor.
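A minimal round-trip sketch (the `Person` type is illustrative):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Person   // public type with a parameterless constructor, as required
{
    public string Name { get; set; }
    public int Age { get; set; }
}

class Demo
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(Person));

        // Serialize to XML text.
        var writer = new StringWriter();
        serializer.Serialize(writer, new Person { Name = "Ada", Age = 36 });
        string xml = writer.ToString();

        // Deserialize back into an object.
        var person = (Person)serializer.Deserialize(new StringReader(xml));
        Console.WriteLine(person.Name); // Ada
    }
}
```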
10 |
11 | ### JsonSerializer
12 |
13 | - **`System.Text.Json.JsonSerializer`:** Introduced in .NET Core 3.0, provides high-performance, low-allocating, and standards-compliant capabilities to serialize and deserialize objects to and from JSON. It's part of the **`System.Text.Json`** namespace.
14 |
15 | ### Data Contract Serializer
16 |
17 | - **`System.Runtime.Serialization.DataContractSerializer`:** Part of the Windows Communication Foundation (WCF) and designed for serializing and deserializing objects as XML or JSON. It uses an opt-in model through the **`[DataContract]`** and **`[DataMember]`** attributes to specify which parts of an object should be serialized.
18 |
--------------------------------------------------------------------------------
/Pages/NET/Span and Memory.md:
--------------------------------------------------------------------------------
1 | # Span and Memory
2 |
3 | ## Span`<T>` and Memory`<T>`
4 |
5 | - Efficiently working with memory in C#.
6 | - **Spans and Slicing, Forward-Only Enumerators**
7 |
8 | ### **Span`<T>`**
9 |
10 | - **Overview**: **`Span<T>`** represents a contiguous region of memory, similar to an array slice, but it can point to memory outside of the managed heap, such as stack-allocated memory, native memory, or arrays. It is a ref struct type, which means it can only be allocated on the stack and cannot be boxed or assigned to variables of type **`Object`**, ensuring its use is limited to the method it's declared in or passed to.
11 | - **Use Cases**: Ideal for performance-critical sections where minimizing memory allocations and garbage collection overhead is crucial. It's particularly useful for parsing and processing of in-memory data, like bytes from a network stream, or processing substrings within a larger string without creating new allocations.
12 | - **Spans and Slicing**:
13 |
14 | ```csharp
15 | byte[] array = new byte[10];
16 | Span<byte> span = new Span<byte>(array); // Creating a Span<byte> from an array
17 | Span<byte> slice = span.Slice(start: 2, length: 5); // Slicing
18 | ```
19 |
20 | Slicing does not involve copying data; it merely re-references a portion of the memory.
21 |
22 | ### **Memory`<T>`**
23 |
24 | - **Overview**: **`Memory<T>`** is similar to **`Span<T>`** but is not a ref struct, meaning it can be stored on the heap and can escape the method it's defined in, such as being stored in fields or collections, or used as a return value. This makes **`Memory<T>`** suitable for asynchronous operations but adds a bit more overhead compared to **`Span<T>`**.
25 | - **Use Cases**: It's often used in scenarios where you need to work with a slice of memory across async boundaries, which is not possible with **`Span<T>`** due to its stack-only nature.
26 |
27 | ```csharp
28 | byte[] array = new byte[10];
29 | Memory<byte> memory = new Memory<byte>(array);
30 | Memory<byte> slice = memory.Slice(start: 2, length: 5); // Slicing
31 | ```
32 |
33 | - **Forward-Only Enumerators**: While **`Span<T>`** and **`Memory<T>`** do not provide the richer enumerator APIs of the collection types, you can enumerate a **`Span<T>`** with a **`for`** or **`foreach`** loop, and enumerate a **`Memory<T>`** by first obtaining a **`Span<T>`** from its **`Span`** property.
34 |
35 | ### **Performance Implications**
36 |
37 | - Both **`Span<T>`** and **`Memory<T>`** are designed to minimize memory allocations and copying.
They provide a more efficient way to work with data by reducing garbage collection pressure and improving overall application performance.
38 | - They allow more precise control over memory usage, enabling optimizations that were not possible or would have been too cumbersome with prior .NET constructs like arrays and strings.
39 |
40 | ### **Safety**
41 |
42 | - **`Span<T>`** ensures type safety and memory safety, preventing common errors such as buffer overruns.
43 | - Since **`Span<T>`** cannot be boxed or stored on the heap, it avoids issues related to garbage collection and invalid references, making it a safer option for high-performance scenarios.
44 |
--------------------------------------------------------------------------------
/Pages/NET/Standard Equality Protocols.md:
--------------------------------------------------------------------------------
1 | # Standard Equality Protocols
2 |
3 | - Implementing equality checks in C#.
4 |
5 | ## **`==` and `!=` Operators**
6 |
7 | - **Usage**: For comparing two objects or values. By default, these operators check for reference equality for reference types and value equality for built-in value types (custom structs must overload them explicitly).
8 | - **Customization**: You can overload these operators to provide custom equality logic for your types.
9 |
10 | ## **Virtual `object.Equals` Method**
11 |
12 | - **Usage**: Provides a way to check for equality. By default, it checks for reference equality.
13 | - **Overriding**: You can override this method in your classes to implement value-based equality logic.
14 |
15 | ## **Static `object.Equals` Method**
16 |
17 | - **Usage**: A static method that determines if two objects are equal by calling the instance **`Equals`** method, handling **`null`** values gracefully.
18 | - **Example**: **`bool areEqual = object.Equals(obj1, obj2);`**
19 |
20 | ## **Static `object.ReferenceEquals` Method**
21 |
22 | - **Usage**: Used to determine if two object references refer to the same instance, ignoring any **`==`** operator overloads.
23 | - **Example**: **`bool areSame = object.ReferenceEquals(obj1, obj2);`**
24 |
25 | ## **The `IEquatable<T>` Interface**
26 |
27 | - **Usage**: Allows for type-safe equality checking and should be implemented by types to provide a strongly typed method for determining equality.
28 | - **Benefit**: Avoids boxing for value types and provides a clear contract for equality.
29 |
30 | ## **When `Equals` and `==` are Not Equal**
31 |
32 | - The **`==`** operator checks for reference equality by default but can be overloaded to perform value equality checks.
33 | - The **`Equals`** method performs reference equality for reference types but is often overridden to implement value equality.
34 |
35 | ## **Overriding `GetHashCode`**
36 |
37 | - When overriding **`Equals`**, you must also override **`GetHashCode`** to ensure that two objects considered equal have the same hash code. This is crucial for types used in hash-based collections like **`Dictionary<TKey, TValue>`** and **`HashSet<T>`**.
38 |
39 | ## **Overriding `Equals`**
40 |
41 | - **Object-Level Override**: Override the **`Equals(object obj)`** method to provide custom equality logic that applies to all instances of the class.
42 | - **Type-Specific Override**: Implement the **`IEquatable<T>`** interface with its **`Equals(T obj)`** method to provide type-specific equality logic, improving performance and type safety.
43 |
44 | ## **Best Practices**
45 |
46 | - Consistency between **`Equals`**, **`==`**, and **`GetHashCode`** is crucial.
If two objects are considered equal (**`Equals`** returns **`true`** or **`==`** returns **`true`**), they must return the same hash code.
47 | - Always override **`GetHashCode`** when overriding **`Equals`**.
48 | - Consider implementing **`IEquatable<T>`** for types that are frequently compared for equality.
49 | - Use **`ReferenceEquals`** to check for reference identity explicitly, especially within **`Equals`** implementations to handle self-references.
50 |
--------------------------------------------------------------------------------
/Pages/NET/Stream.md:
--------------------------------------------------------------------------------
1 | # Stream
2 |
3 | ## Stream Architecture
4 |
5 | - Managing streams of data in C#.
6 |
7 | ### Backing Stores
8 |
9 | - **`FileStream`**, **`NetworkStream`**, **`MemoryStream`**, and **`PipeStream`** are examples of stream implementations for different data sources.
10 |
11 | ### Decorators
12 |
13 | - Decorator streams like **`BufferedStream`**, **`GZipStream`**, and **`CryptoStream`** add additional functionality (buffering, compression, encryption) to underlying streams.
14 |
15 | ### Adapters
16 |
17 | - **`StreamReader`** and **`StreamWriter`** adapt streams for reading and writing text, converting bytes to characters and vice versa.
18 |
19 | ### Thread Safety
20 |
21 | - Streams are generally not thread-safe, meaning simultaneous reads or writes from multiple threads require synchronization.
22 |
23 | ### **`File`** and **`Directory`** Classes vs **`FileInfo`** and **`DirectoryInfo`**
24 |
25 | - **`File`** and **`Directory`** provide static methods for file and directory operations, while **`FileInfo`** and **`DirectoryInfo`** offer instance methods.
26 |
27 | ### Memory-Mapped Files
28 |
29 | - Memory-mapped files allow for efficient file access by mapping them into memory, useful for working with large files or for inter-process communication.
30 |
--------------------------------------------------------------------------------
/Pages/NET/StringBuilder.md:
--------------------------------------------------------------------------------
1 | # StringBuilder
2 |
3 | - Efficiently building strings in C#.
4 | - **Creating a StringBuilder**:
5 |
6 | ```csharp
7 | StringBuilder sb = new StringBuilder();
8 | ```
9 |
10 | - **Appending Text**:
11 |
12 | ```csharp
13 | sb.Append("Hello");
14 | sb.AppendLine(" World!");
15 | ```
16 |
17 | - **Inserting Text**:
18 |
19 | ```csharp
20 | sb.Insert(0, "Start: ");
21 | ```
22 |
23 | - **Removing Text**:
24 |
25 | ```csharp
26 | sb.Remove(0, 7); // Removes "Start: "
27 | ```
28 |
29 | - **Replacing Text**:
30 |
31 | ```csharp
32 | sb.Replace("World", "Universe");
33 | ```
34 |
35 | - **Converting StringBuilder to String**:
36 |
37 | ```csharp
38 | string result = sb.ToString();
39 | ```
40 |
--------------------------------------------------------------------------------
/Pages/NET/Try Statements and Exceptions.md:
--------------------------------------------------------------------------------
1 | # Try Statements and Exceptions
2 |
3 | - Exception handling in C# using `try` blocks (see the sketch after this list).
4 | - **`try` Block:** Code that might throw an exception is placed inside a **`try`** block.
5 | - **`catch` Block(s):** After a **`try`** block, one or more **`catch`** blocks can be used to specify handlers for different types of exceptions.
6 | - **`finally` Block:** A **`finally`** block contains code that is executed after the **`try`** and **`catch`** blocks, regardless of whether an exception was thrown, and is typically used for cleanup code.
7 | - **Exception Propagation:** If an exception is not caught in a **`catch`** block, it propagates up the call stack, looking for a matching **`catch`** block in the calling methods. If it reaches the main method without being caught, the program will terminate.
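A compact sketch of these blocks working together (the file name is illustrative). Note that the most specific **`catch`** must come first, since **`FileNotFoundException`** derives from **`IOException`**:

```csharp
using System;
using System.IO;

class Demo
{
    static void Main()
    {
        try
        {
            string text = File.ReadAllText("settings.json"); // may throw
            Console.WriteLine(text.Length);
        }
        catch (FileNotFoundException ex)   // most specific handler first
        {
            Console.WriteLine($"Missing file: {ex.FileName}");
        }
        catch (IOException ex)             // broader I/O failures
        {
            Console.WriteLine($"I/O error: {ex.Message}");
        }
        finally
        {
            Console.WriteLine("Cleanup runs whether or not an exception was thrown.");
        }
    }
}
```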
8 |
--------------------------------------------------------------------------------
/Pages/Network.md:
--------------------------------------------------------------------------------
1 | # Network
2 |
3 | ## Basic Concepts
4 |
5 | ### OSI Model
6 |
7 | - The Open Systems Interconnection (OSI) model is a conceptual framework used to describe the functions of a networking system.
8 | - It consists of seven layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application.
9 | - Each layer has specific responsibilities and communicates with the layers above and below it.
10 |
11 | ### How the Web Works
12 |
13 | - The web relies on the client-server architecture, where a client (e.g., a web browser) sends requests to a server, and the server responds with the requested resources.
14 | - The communication happens over the HTTP protocol, which is an application-layer protocol.
15 | - Domain Name System (DNS) maps human-readable domain names to IP addresses for server identification.
16 |
17 | ## Application Layer Protocols
18 |
19 | ### HTTP/1.x vs HTTP/2 vs HTTP/3
20 |
21 | - **HTTP/1.x:** Older versions; requests on a TCP connection are processed one after another. HTTP/1.1 keep-alive lets the connection be reused across requests but does not remove this sequential, head-of-line-blocking behavior.
22 | - **HTTP/2:** Multiplexes multiple requests over a single TCP connection, improving performance and efficiency.
23 | - **HTTP/3:** Built on top of the QUIC protocol, providing additional performance improvements and security benefits.
24 |
25 | ### Keep-Alive Concept (Pros and Cons)
26 |
27 | - **Keep-Alive:** Allows multiple HTTP requests/responses to be sent over a single TCP connection, reducing connection overhead.
28 | - **Pros:** Improved performance, reduced latency, and more efficient resource utilization.
29 | - **Cons:** Potential security risks (e.g., denial of service attacks), increased memory usage, and potential head-of-line blocking issues.
30 |
--------------------------------------------------------------------------------
/Pages/Operating System.md:
--------------------------------------------------------------------------------
1 | # Operating System
2 |
3 | [Process Management](Operating%20System/Process%20Management.md)
4 |
5 | [Process Schedulers](Operating%20System/Process%20Schedulers.md)
6 |
7 | [Memory Management](Operating%20System/Memory%20Management.md)
8 |
9 | [Process Synchronization](Operating%20System/Process%20Synchronization.md)
10 |
11 | [File System](Operating%20System/File%20System.md)
12 |
13 | [Disk Scheduling](Operating%20System/Disk%20Scheduling.md)
14 |
15 | [Concurrent Concepts](Operating%20System/Concurrent%20Concepts.md)
16 |
--------------------------------------------------------------------------------
/Pages/Operating System/Concurrent Concepts.md:
--------------------------------------------------------------------------------
1 | # Concurrent Concepts
2 |
3 | - **Kernel:** The core component of an operating system that manages system resources and provides services to user applications.
4 | - **Fork:** A system call used to create a new process, which is an almost identical copy of the calling process.
5 | - **Context Switching:** The process of storing and restoring the state (context) of a process to switch between different processes.
6 | - **User-Level and Kernel-Level Threads:** User-level threads are managed by user-level libraries, while kernel-level threads are managed directly by the operating system kernel.
7 | - **Process vs Thread:** A process is an instance of a program, while a thread is a lightweight unit of execution within a process.
8 | - **Multithreading vs Multiprocessing:** Multithreading allows multiple threads to share resources within a single process, while multiprocessing involves running multiple processes concurrently.
9 | - **Starvation:** A situation where a process or thread is continuously denied access to resources it requires, potentially leading to an indefinite wait.
10 |
11 | ## **Kernel**
12 |
13 | The kernel is the most critical component of an operating system. It acts as a mediator between applications and the computer's hardware. The kernel manages system resources such as CPU, memory, and I/O devices, and it handles system calls, which are requests from user-space applications for services provided by the kernel.
14 |
15 | ## **Fork**
16 |
17 | **`Fork`** is a system call that creates a new process. The new process, referred to as the child process, is a duplicate of the calling process, known as the parent process, except for the value that `fork` itself returns: the parent receives the child's process ID, while the child receives 0. Forking is a common way to perform parallel tasks in Unix-like operating systems.
18 |
19 | ## **Context Switching**
20 |
21 | Context switching is the process of saving the state of a currently running process or thread so that it can be resumed later and activating another process or thread. This is crucial in a multitasking environment to ensure that the CPU can be shared among multiple processes. Context switching involves overhead and can impact system performance if excessive.
22 |
23 | ## **User-Level and Kernel-Level Threads**
24 |
25 | - **User-Level Threads**: These threads are managed without kernel support, typically by a runtime library or user-level process. The kernel is unaware of these threads and sees them as a single process. This approach offers flexibility and efficiency but lacks true concurrency and kernel-level services.
26 | - **Kernel-Level Threads**: Managed directly by the operating system kernel, allowing true parallel execution on multiprocessor systems. The kernel has full knowledge of all threads, enabling better scheduling and management. However, creating and managing kernel threads can be more resource-intensive than user-level threads.
27 |
28 | ## **Process vs Thread**
29 |
30 | - **Process**: A process is an instance of a running program and includes the program code, its current activity, and the resources allocated to it such as memory and file handles. Processes are isolated from each other, providing stability and security.
31 | - **Thread**: A thread is the smallest unit of execution within a process. Threads within the same process share resources like memory and file descriptors, which allows for efficient communication and data exchange but requires synchronization to prevent conflicts.
32 |
33 | ## **Multithreading vs Multiprocessing**
34 |
35 | - **Multithreading**: Involves running multiple threads within a single process. This allows for efficient use of resources and faster execution as threads can share memory and resources directly.
However, it requires careful synchronization to avoid issues like race conditions.
36 | - **Multiprocessing**: Refers to running multiple processes concurrently. Processes are fully isolated, which enhances stability and security but at the cost of higher overhead for communication and resource sharing.
37 |
38 | ## **Starvation**
39 |
40 | Starvation occurs when a process or thread is perpetually denied the necessary resources to make progress, often due to scheduling algorithms or resource allocation policies that favor other processes or threads. Starvation can lead to significant performance issues and requires careful management of resources and fair scheduling policies to prevent.
41 |
--------------------------------------------------------------------------------
/Pages/Operating System/Disk Scheduling.md:
--------------------------------------------------------------------------------
1 | # Disk Scheduling
2 |
3 | Disk scheduling algorithms are strategies used by operating systems to decide the order in which disk I/O requests are processed. These algorithms aim to optimize the performance of the disk subsystem by reducing seek time (the time it takes for the disk's read/write head to move to the location of the requested data) and minimizing latency.
4 |
5 | - **First Come First Served (FCFS):** Requests are served in the order they arrive.
6 | - **Shortest Seek Time First (SSTF):** The request with the minimum seek time from the current head position is served next.
7 | - **Scan:** The disk arm moves back and forth across the disk, servicing requests in the direction of its movement.
8 | - **C-Scan:** Similar to Scan, but the disk arm moves in only one direction and resets to the other end after servicing requests.
9 | - **Look:** Similar to Scan, but the disk arm only goes as far as the last pending request in each direction.
10 | - **C-Look:** Similar to C-Scan, but the disk arm only goes as far as the last pending request in the current direction.
11 |
12 | ## **First Come First Served (FCFS)**
13 |
14 | - **Description**: This is the simplest disk scheduling algorithm. Requests are processed in the exact order they arrive in the queue.
15 | - **Pros**: Fairness, as no request is prioritized over another.
16 | - **Cons**: Not efficient in terms of overall system performance, especially when requests are scattered across the disk.
17 |
18 | ## **Shortest Seek Time First (SSTF)**
19 |
20 | - **Description**: Chooses the request that is closest to the current head position, thereby minimizing the seek time for the next request.
21 | - **Pros**: Reduces the average seek time compared to FCFS.
22 | - **Cons**: Can lead to starvation: requests located far from the head may wait a very long time to be serviced.
23 |
24 | ## **Scan (Elevator Algorithm)**
25 |
26 | - **Description**: The disk arm starts at one end of the disk and moves toward the other end, servicing requests along the way. Upon reaching the other end, the direction is reversed, and the process continues in a back-and-forth manner.
27 | - **Pros**: Offers a more uniform wait time compared to SSTF.
28 | - **Cons**: Requests just missed by the arm have to wait for the arm to traverse the entire disk and come back.
29 |
30 | ## **C-Scan (Circular Scan)**
31 |
32 | - **Description**: Similar to Scan, but instead of reversing direction, the arm goes back to the starting end of the disk after reaching the other end and continues processing requests.
33 | - **Pros**: Provides a more uniform wait time for requests across the disk. 34 | - **Cons**: Requests at the beginning of the disk may experience longer wait times after the arm resets. 35 | 36 | ## **Look** 37 | 38 | - **Description**: A variation of the Scan algorithm where the arm only goes as far as the last request in each direction before reversing or stopping, rather than traveling to the end of the disk. 39 | - **Pros**: Reduces unnecessary movement of the disk arm, leading to improved efficiency. 40 | - **Cons**: Like Scan, requests just missed can experience longer wait times. 41 | 42 | ## **C-Look (Circular Look)** 43 | 44 | - **Description**: Similar to C-Scan, but the arm only goes as far as the last request in the direction of movement. After servicing the last request, it immediately jumps back to the first request in the opposite direction without servicing any requests on the return trip. 45 | - **Pros**: Reduces the traversal time of the disk arm, potentially decreasing the average wait time. 46 | - **Cons**: Requests at one end of the disk might wait longer if the arm frequently doesn't reach that end before jumping back. 47 | -------------------------------------------------------------------------------- /Pages/Operating System/File System.md: -------------------------------------------------------------------------------- 1 | # File System 2 | 3 | Lays out the rules for how files are named, stored, and accessed by the OS and applications. 4 | 5 | - **File Allocation Table (FAT):** A data structure used in some file systems to track and locate file data on disk. 6 | - **Seek Time:** The time taken for the disk read/write head to move to the desired track. 7 | - **Rotational Latency:** The time taken for the desired sector to rotate under the read/write head. 8 | 9 | ## **File Allocation Table (FAT)** 10 | 11 | The File Allocation Table (FAT) is a classic file system architecture used in many computer systems, notably in early versions of Microsoft Windows and in memory cards and USB flash drives due to its simplicity. FAT maintains a table that acts as a map of the disk's contents. Each entry in the FAT corresponds to a block of the disk, indicating the next block in a file or marking the block as the end of a file. This chaining method allows files to be stored in non-contiguous blocks on the disk, facilitating dynamic file allocation and growth. However, FAT systems can suffer from fragmentation over time, leading to decreased performance. 12 | 13 | ## **Seek Time** 14 | 15 | Seek time refers to the time it takes for the disk's read/write head to move to the correct track on the disk where the desired data is stored. Disk drives contain platters divided into concentric circles called tracks, which are further divided into sectors. When a read or write operation is initiated, the drive must first move the head to the correct track. Seek time is a critical component of disk access time and can significantly affect overall system performance, especially in systems with heavy disk I/O operations. Minimizing seek time is crucial for improving the responsiveness and throughput of storage devices. 16 | 17 | ## **Rotational Latency** 18 | 19 | Rotational latency is the delay waiting for the rotation of the disk to bring the desired sector under the read/write head. Once the head is positioned over the correct track (after the seek time), it must wait for the disk to rotate the correct sector into position. 
Rotational latency depends on the rotational speed of the disk, measured in revolutions per minute (RPM). Higher RPM values result in lower rotational latency, which contributes to faster read and write operations. Solid-state drives (SSDs) eliminate rotational latency entirely by using non-mechanical storage methods, offering significantly faster data access times. 20 | -------------------------------------------------------------------------------- /Pages/Operating System/Memory Management.md: -------------------------------------------------------------------------------- 1 | # Memory Management 2 | 3 | Memory management is a fundamental aspect of operating systems, ensuring that each process has the necessary memory to execute while maximizing the efficiency and utilization of the system's memory resources. 4 | 5 | ## Summary 6 | 7 | - **Loading Process into Memory:** 8 | - **Contiguous Allocation:** 9 | - **Fixed Partitioning:** Memory is divided into fixed-size partitions. 10 | - **Dynamic Partitioning:** Partitions are allocated dynamically, and compaction is used to reduce external fragmentation. 11 | - **Non-Contiguous Allocation:** Processes are scattered in memory, using techniques like paging or segmentation. 12 | - **Frame:** A fixed-size block of main memory used for memory allocation. 13 | - **Page and Page Table:** Memory is divided into equal-sized pages, and a page table maps virtual addresses to physical addresses. 14 | - **Paging and Multi-Level Paging:** Techniques for mapping virtual addresses to physical addresses using page tables. 15 | - **Virtual Memory:** A technique that allows processes to access more memory than physically available by using secondary storage. 16 | - **Page Hit and Page Fault:** A page hit occurs when the requested page is in memory, while a page fault triggers fetching the page from secondary storage. 17 | 18 | ### **Loading Process into Memory** 19 | 20 | When a process is created or executed, it must be loaded into memory. The operating system handles this by allocating the required memory space to the process, considering the current memory usage and the process's needs. 21 | 22 | ### **Contiguous Allocation** 23 | 24 | Contiguous allocation involves assigning a single continuous block of memory to a process. 25 | 26 | - **Fixed Partitioning**: The memory is divided into fixed-size partitions, each of which can hold exactly one process. This method can lead to internal fragmentation if the process's memory requirement is less than the partition size. 27 | - **Dynamic Partitioning**: Memory partitions are created dynamically to fit the size of the requesting process, which helps in reducing wasted space. However, this can lead to external fragmentation over time, and compaction may be needed to consolidate free memory spaces. 28 | 29 | ### **Non-Contiguous Allocation** 30 | 31 | In non-contiguous allocation, a process can occupy multiple non-adjacent areas in memory. This approach reduces fragmentation and allows for more flexible memory utilization. 32 | 33 | - **Paging**: Memory is divided into fixed-size blocks called pages, and processes are divided into pages of the same size. A page table is used to map a process's virtual pages to physical frames in memory, allowing for non-contiguous allocation. 34 | - **Segmentation**: Similar to paging, but segments can be of various lengths, representing different logical parts of a program (e.g., code, data, stack). 
This allows memory to be more closely matched to the process's logical structure. 35 | 36 | ### **Frame** 37 | 38 | A frame is a fixed-size block of main memory, equivalent in size to a page, used as the basic unit of memory allocation in paging systems. 39 | 40 | ### **Page and Page Table** 41 | 42 | - **Page**: A fixed-size block of virtual memory. 43 | - **Page Table**: A data structure used by the operating system to store mappings from virtual addresses to physical frame addresses in memory. 44 | 45 | ### **Paging and Multi-Level Paging** 46 | 47 | - **Paging**: A memory management scheme that eliminates the need for contiguous allocation of physical memory by dividing memory into fixed-sized pages. 48 | - **Multi-Level Paging**: An extension of paging that uses multiple levels of page tables to reduce the memory required to store each process's page table. 49 | 50 | ### **Virtual Memory** 51 | 52 | Virtual memory allows a computer to compensate for physical memory shortages by temporarily transferring data from random access memory (RAM) to disk storage. This process creates the illusion for users of a very large (virtual) memory. 53 | 54 | ### **Page Hit and Page Fault** 55 | 56 | - **Page Hit**: Occurs when the data the process needs to access is already in memory, allowing for immediate access. 57 | - **Page Fault**: Occurs when the data the process needs is not in memory, necessitating reading the data from secondary storage (e.g., hard disk) into memory. 58 | -------------------------------------------------------------------------------- /Pages/Operating System/Process Management.md: -------------------------------------------------------------------------------- 1 | # Process Management 2 | 3 | Involves managing the execution of processes, including their creation, scheduling, and termination. It ensures that the CPU is efficiently utilized and that processes do not interfere with each other. Two key concepts in process management are the Process Control Block (PCB) and the degree of multiprogramming. 4 | 5 | - **Process Control Block (PCB):** A data structure maintained by the OS for each process, containing process information like state, CPU registers, memory details, etc. 6 | - **Degree of Multiprogramming:** The number of processes residing in memory simultaneously. 7 | 8 | ## **Process Control Block (PCB)** 9 | 10 | The Process Control Block is a crucial data structure that the operating system maintains for each process. It acts as the "identity card" for the process, containing all the information needed to manage the process. The PCB is essential for the OS to switch between processes efficiently (context switching), manage process execution, and keep track of process states. The components of a PCB typically include: 11 | 12 | - **Process State**: Indicates the current state of the process (e.g., running, waiting, ready, terminated). 13 | - **Process ID (PID)**: A unique identifier assigned to each process. 14 | - **CPU Registers**: The state of the CPU registers for the process. When a context switch occurs, these registers are saved to the PCB of the current process and then restored from the PCB of the next process to run. 15 | - **CPU Scheduling Information**: Information needed for scheduling the process, such as priority, scheduling queue pointers, and any other scheduling parameters. 
16 | - **Memory Management Information**: Details about the process's memory allocation, such as base and limit registers, page tables, or segment tables, depending on the memory management scheme. 17 | - **Accounting Information**: This includes information used for performance monitoring, billing, and debugging, such as CPU usage, memory usage, and the number of times the process has been executed. 18 | - **I/O Status Information**: Details about the I/O devices allocated to the process, list of open files, etc. 19 | 20 | ## **Degree of Multiprogramming** 21 | 22 | The degree of multiprogramming in an operating system refers to the number of processes that are kept in memory simultaneously. It is a measure of how many processes are resident in the main memory and ready or waiting to execute. The degree of multiprogramming is closely related to the concept of multitasking and affects the utilization of the CPU and system resources. 23 | 24 | - **Higher Degree of Multiprogramming**: Indicates that more processes are loaded into memory, which can lead to better CPU utilization as there is a higher chance that at least one process is always in the state to be executed. However, if too high, it might lead to contention for resources, increased context switching overhead, and can potentially degrade system performance due to thrashing. 25 | - **Lower Degree of Multiprogramming**: Indicates fewer processes are in memory, which might simplify resource management but can lead to underutilization of the CPU and system resources, as the processor might spend more time idle. 26 | -------------------------------------------------------------------------------- /Pages/Operating System/Process Schedulers.md: -------------------------------------------------------------------------------- 1 | # Process Schedulers 2 | 3 | Process schedulers are components of the operating system that decide which process to run at any given time, based on criteria such as priority, process state, and system load. There are three main types of schedulers: 4 | 5 | 1. **Short-Term Scheduler (CPU Scheduler)**: 6 | - **Function**: Selects which process in the ready queue should be executed next by the CPU. 7 | - **Frequency of Execution**: Operates very frequently (milliseconds) to ensure the CPU remains busy. 8 | - **Criteria**: Often uses criteria like process priority, shortest job next, or round-robin scheduling. 9 | 2. **Medium-Term Scheduler**: 10 | - **Function**: Temporarily removes processes from main memory and stores them on disk (swapping) when there is a need to free up memory. It can later bring these processes back into memory. 11 | - **Purpose**: Helps in controlling the degree of multiprogramming and managing memory more efficiently. 12 | 3. **Long-Term Scheduler (Job Scheduler)**: 13 | - **Function**: Determines which processes are admitted from the job queue into the ready queue in main memory, starting the execution process. 14 | - **Frequency of Execution**: Operates less frequently than the short-term scheduler, as it deals with the admission of new processes into the system. 15 | - **Criteria**: May consider factors like job priorities, memory requirements, or the need to balance CPU-bound and I/O-bound processes. 16 | 17 | ## Summary 18 | 19 | - **Short-Term Scheduler (CPU Scheduler):** Selects a process from the ready queue and allocates the CPU to it. 20 | - **Medium-Term Scheduler:** Swaps processes between main memory and secondary storage (e.g., disk) to manage memory usage. 
21 | - **Long-Term Scheduler (Job Scheduler):** Selects processes from the job queue and loads them into memory for execution. 22 | -------------------------------------------------------------------------------- /Pages/Operating System/Process Synchronization.md: -------------------------------------------------------------------------------- 1 | # Process Synchronization 2 | 3 | It ensures that concurrent processes operate safely without interfering with each other, maintaining data consistency and preventing race conditions. 4 | 5 | - **Lock Variable:** A variable used to control access to a shared resource. 6 | - **Binary Semaphore:** A synchronization primitive that can take two values (0 or 1) to control access to a shared resource. 7 | - **Counting Semaphore:** A generalized semaphore that can take values greater than 1 to control access to multiple instances of a resource. 8 | 9 | ## **Lock Variable** 10 | 11 | A lock variable is a simple way to achieve mutual exclusion in accessing shared resources. The idea is to use a variable that indicates whether a resource is free (usually **`0`** for free and **`1`** for busy). Before accessing the resource, a process checks the lock variable: 12 | 13 | - If the lock is **`0`** (free), the process sets it to **`1`** (busy) and proceeds to access the resource. 14 | - If the lock is **`1`** (busy), the process must wait until the lock becomes free. 15 | 16 | **Limitations**: This method is prone to race conditions itself because checking the lock variable's value and updating it are not atomic operations. This can lead to multiple processes reading that the lock is free and proceeding to access the resource simultaneously. 17 | 18 | ## **Binary Semaphore** 19 | 20 | A binary semaphore is a more sophisticated synchronization primitive that avoids the limitations of lock variables. It's essentially a variable with two values (0 and 1) and supports two atomic operations: 21 | 22 | - **Wait (or P operation)**: If the semaphore's value is **`1`**, it sets the value to **`0`** and allows the process to proceed. If it's already **`0`**, the process is blocked until the semaphore's value is **`1`** again. 23 | - **Signal (or V operation)**: Increments the semaphore's value, potentially unblocking a waiting process. 24 | 25 | Binary semaphores provide a way to ensure mutual exclusion without the race conditions associated with lock variables. 26 | 27 | ## **Counting Semaphore** 28 | 29 | Counting semaphores generalize the concept of binary semaphores to allow for values greater than **`1`**. This type of semaphore can be used to control access to a resource pool with multiple instances. 30 | 31 | - **Wait (P operation)**: Decrements the semaphore's value. If the result is negative, the process is blocked until another process signals. 32 | - **Signal (V operation)**: Increments the semaphore's value, potentially unblocking one or more waiting processes. 33 | 34 | Counting semaphores are useful for managing a fixed number of identical resources, allowing multiple processes to access the resources concurrently, up to the maximum number of resources available. 35 | 36 | ## **Critical Sections and Synchronization** 37 | 38 | In all these mechanisms, the goal is to protect the "critical section" — the part of the code where shared resources are accessed. Proper synchronization ensures that only one process (or thread) can enter its critical section at a time when accessing shared resources, preventing data inconsistency and race conditions. 
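Since this masterclass centers on .NET, here is a minimal C# sketch of the same idea: a **`SemaphoreSlim`** created with an initial and maximum count of 1 behaves like a binary semaphore guarding the critical section (the shared counter is illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Demo
{
    // Initial count 1, max count 1: behaves like a binary semaphore (mutex).
    static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);
    static int _counter;

    static async Task Main()
    {
        var tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Run(async () =>
            {
                for (int j = 0; j < 1000; j++)
                {
                    await Gate.WaitAsync();       // Wait (P): enter the critical section
                    try { _counter++; }           // shared state touched by one task at a time
                    finally { Gate.Release(); }   // Signal (V): leave the critical section
                }
            });
        }

        await Task.WhenAll(tasks);
        Console.WriteLine(_counter); // Always 10000: no lost updates
    }
}
```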
39 | 
--------------------------------------------------------------------------------
/Pages/Test.md:
--------------------------------------------------------------------------------
1 | # Test
2 | 
3 | [Test Pyramid](Test/Test%20Pyramid.md)
4 | 
5 | [Test Isolation](Test/Test%20Isolation.md)
6 | 
7 | [Code Coverage](Test/Code%20Coverage.md)
8 | 
9 | [TDD / BDD](Test/TDD%20BDD.md)
10 | 
--------------------------------------------------------------------------------
/Pages/Test/Code Coverage.md:
--------------------------------------------------------------------------------
1 | # Code Coverage
2 | 
3 | Code coverage is a metric that measures the degree to which the source code of a program is executed, or covered, by tests. It helps identify untested or under-tested areas of the codebase and provides insight into the effectiveness of the test suite.
4 | 
5 | Code coverage tools analyze the execution of tests and report the percentage of code lines, branches, or other coverage metrics that were exercised during the test run. While high code coverage is desirable, it does not guarantee the absence of bugs or the overall quality of the software. Code coverage should be used in conjunction with other quality assurance practices, such as code reviews, static analysis, and adherence to coding standards.
6 | 
7 | ## **Importance of Code Coverage**
8 | 
9 | - **Identifies Untested Code:** Helps developers find parts of the code that have not been executed during testing, highlighting areas that might need additional tests.
10 | - **Improves Test Quality:** Encourages the creation of more thorough tests to increase coverage, potentially uncovering hidden bugs.
11 | - **Risk Management:** High-risk areas without sufficient test coverage can be easily identified, allowing teams to prioritize testing efforts where they are most needed.
12 | 
13 | ## **Code Coverage Metrics**
14 | 
15 | Code coverage is often broken down into several metrics, each offering a different perspective on test completeness:
16 | 
17 | - **Line Coverage:** Measures the percentage of code lines that have been executed during tests.
18 | - **Branch Coverage:** Tracks the coverage of conditional branches (e.g., **`if`**, **`switch`** cases), ensuring that both true and false paths are tested.
19 | - **Function Coverage:** Indicates the percentage of functions or methods that have been called.
20 | - **Statement Coverage:** Similar to line coverage but focuses on individual statements, particularly relevant in languages where multiple statements can appear on a single line.
21 | 
22 | ## **Limitations**
23 | 
24 | While code coverage is a valuable tool, it has limitations and should not be the sole measure of code quality:
25 | 
26 | - **False Sense of Security:** High coverage might not equate to high code quality or the absence of bugs. Tests need to be meaningful, not just designed to artificially inflate coverage metrics.
27 | - **Not Covering All Cases:** Coverage metrics might not fully account for edge cases or complex conditional logic that could lead to failures.
28 | - **Maintenance Overhead:** Achieving very high coverage levels can sometimes lead to significant maintenance overhead for the test suite, especially if the tests are brittle or tightly coupled to the code.
29 | 
30 | ## **Best Practices**
31 | 
32 | - **Balanced Approach:** Aim for high coverage, but focus on the quality of tests. Cover critical paths and use coverage data to guide testing efforts rather than as an absolute metric.
33 | - **Continuous Integration:** Integrate code coverage analysis into the continuous integration (CI) pipeline to regularly monitor and manage coverage.
34 | - **Combine with Other Techniques:** Use code coverage in conjunction with other testing and quality assurance practices, such as manual testing, peer reviews, and static code analysis, to ensure comprehensive quality control.
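To make the line-versus-branch distinction concrete, here is a small illustrative xUnit sketch (the `AgeClassifier` type and test names are invented for this example). The first test alone executes every line of `Classify`, giving 100% line coverage, yet the false path of the `if` is never exercised; that is precisely the gap a branch coverage report exposes:

```csharp
using Xunit;

public static class AgeClassifier
{
    public static string Classify(int age)
    {
        var label = "adult";
        if (age < 18)
        {
            label = "minor";
        }
        return label;
    }
}

public class AgeClassifierTests
{
    // This test alone executes every line of Classify (full line coverage),
    // but the branch where the condition is false is never taken.
    [Fact]
    public void Classify_ReturnsMinor_ForAgeUnder18()
        => Assert.Equal("minor", AgeClassifier.Classify(10));

    // Adding this test covers the remaining branch.
    [Fact]
    public void Classify_ReturnsAdult_ForAge18OrOver()
        => Assert.Equal("adult", AgeClassifier.Classify(30));
}
```

In one common .NET setup, a test project that references the `coverlet.collector` package gathers this data via `dotnet test --collect:"XPlat Code Coverage"`, and the resulting report is published from the CI pipeline.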
35 | 
--------------------------------------------------------------------------------
/Pages/Test/TDD BDD.md:
--------------------------------------------------------------------------------
1 | # TDD / BDD
2 | 
3 | ## Test-Driven Development (TDD) and Behavior-Driven Development (BDD)
4 | 
5 | TDD and BDD are software development methodologies that emphasize writing tests before writing the actual production code.
6 | 
7 | - **Test-Driven Development (TDD):** TDD is a practice where developers write failing unit tests before implementing the corresponding code. The cycle of writing a failing test, making it pass, and refactoring the code (often called red-green-refactor) is repeated until the desired functionality is achieved. TDD promotes a modular and testable codebase, encourages good design practices, and provides a safety net for future changes.
8 | - **Behavior-Driven Development (BDD):** BDD is an extension of TDD that focuses on describing the desired behavior of the system in a language that is understandable to both technical and non-technical stakeholders. BDD tests are written in a more human-readable format (e.g., Given-When-Then scenarios), often using tools like Cucumber or SpecFlow, and emphasize collaboration and communication between team members.
9 | 
--------------------------------------------------------------------------------
/Pages/Test/Test Isolation.md:
--------------------------------------------------------------------------------
1 | # Test Isolation
2 | 
3 | Test isolation is a principle in software testing that emphasizes the independence of individual tests. It asserts that each test should be executed in an environment that is unaffected by previous tests, ensuring that the outcome of one test does not influence the outcome of another. This isolation is critical for several reasons:
4 | 
5 | - **Reliability**: Non-isolated tests tend to be flaky, passing or failing unpredictably depending on the order of execution or on side effects from other tests.
6 | - **Maintainability**: Isolated tests are easier to understand and maintain since each test is self-contained, making it clear what conditions and inputs lead to specific outcomes.
7 | - **Debuggability**: When a non-isolated test fails, it can be challenging to determine whether the failure was due to the test's own logic or side effects from previous tests. Isolated tests simplify troubleshooting by ensuring that failures are self-contained.
8 | 
9 | ## **Test Isolation and Parallel Testing**
10 | 
11 | Parallel testing involves running multiple tests simultaneously to reduce the total time required for test execution. Test isolation plays a crucial role in this context:
12 | 
13 | - **Enables Parallel Execution**: Isolated tests can be run in parallel without the risk of interference, as they do not share any state or dependencies. This lack of interference is vital for ensuring that the results of parallel tests are accurate and predictable.
14 | - **Improves Performance**: By allowing tests to run in parallel, test isolation helps to significantly reduce the overall test execution time, especially in large projects with thousands of tests.
15 | - **Consistency Across Environments**: Test isolation ensures that tests produce the same results regardless of whether they are run in sequence or in parallel, which is important for consistency in continuous integration (CI) pipelines and various testing environments.
16 | 
17 | ## Achieving Test Isolation
18 | 
19 | - **Use of Setup and Teardown Methods**: Most testing frameworks offer setup and teardown methods that are run before and after each test, respectively. These methods can be used to create a fresh environment for each test and clean up afterward, ensuring no residual state affects subsequent tests.
20 | - **Mocking and Stubbing**: External dependencies (like databases, file systems, and external services) can be mocked or stubbed to ensure that tests do not affect each other through shared resources.
21 | - **Database Transactions**: For tests involving a database, running each test within a separate database transaction that is rolled back at the end ensures that database changes made by one test do not affect others.
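In xUnit, for example, the test class is instantiated once per test method, so the constructor and `Dispose` naturally serve as per-test setup and teardown. The sketch below is illustrative only; the `ShoppingCart` type is a stand-in invented for this example:

```csharp
using System;
using System.Collections.Generic;
using Xunit;

public class ShoppingCart
{
    private readonly List<string> _items = new List<string>();
    public int Count => _items.Count;
    public void Add(string item) => _items.Add(item);
    public void Clear() => _items.Clear();
}

// xUnit creates a fresh instance of this class for every [Fact],
// so each test starts from a clean, fully isolated state.
public class ShoppingCartTests : IDisposable
{
    private readonly ShoppingCart _cart;

    public ShoppingCartTests()
    {
        _cart = new ShoppingCart();   // setup: a new cart for every test
    }

    public void Dispose()
    {
        _cart.Clear();                // teardown: clean up after each test
    }

    [Fact]
    public void Add_IncreasesItemCount()
    {
        _cart.Add("book");
        Assert.Equal(1, _cart.Count);
    }

    [Fact]
    public void NewCart_IsEmpty()
    {
        // Passes regardless of execution order: no state is shared with other tests.
        Assert.Equal(0, _cart.Count);
    }
}
```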
22 | 
--------------------------------------------------------------------------------
/Pages/Test/Test Pyramid.md:
--------------------------------------------------------------------------------
1 | # Test Pyramid
2 | 
3 | The test pyramid consists of three main levels:
4 | 
5 | 1. **Unit Tests:** These are low-level tests that focus on testing individual units of code, such as functions or methods, in isolation. Unit tests are the foundation of the test pyramid and should make up the majority of tests in a codebase. They are fast to execute, easy to write and maintain, and provide rapid feedback during the development cycle.
6 | 2. **Integration Tests:** Integration tests verify the correct interaction and communication between different components or modules of the system. They test the integration points between units and ensure that the system works as expected when multiple units are combined. Integration tests are more complex and time-consuming than unit tests but still faster than end-to-end tests.
7 | 3. **End-to-End (E2E) Tests:** E2E tests simulate real-world scenarios by testing the entire application from start to finish, including all components and external dependencies. These tests ensure that the system behaves correctly from the user's perspective and validate the overall system functionality. E2E tests are the most comprehensive but also the slowest and most expensive to execute.
8 | 
9 | The test pyramid emphasizes having a larger number of unit tests at the base, followed by fewer integration tests, and even fewer end-to-end tests at the top. This distribution ensures that the majority of testing efforts are focused on low-level, fast, and isolated tests, while still providing adequate coverage for higher-level system behaviors.
10 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # DotNet Engineer Masterclass
2 | 
3 | This repository was born out of the challenges and questions faced during real software engineering interviews, offering in-depth coverage of a wide array of topics. It is intended as a preparation handbook for job interviews.
4 | 
5 | ## Topics Covered
6 | 
7 | - [Data Structures](Pages/Data%20Structures.md)
8 | - [Design Patterns](Pages/Design%20Patterns.md)
9 | - [Database](Pages/Database.md)
10 | - [Git](Pages/Git.md)
11 | - [Cryptography](Pages/Cryptography.md)
12 | - [Docker](Pages/Docker.md)
13 | - [Microservices](Pages/Microservices.md)
14 | - [Event-Driven](Pages/Event-Driven.md)
15 | - [Network](Pages/Network.md)
16 | - [Operating System](Pages/Operating%20System.md)
17 | - [Test](Pages/Test.md)
18 | - [.NET](Pages/NET.md)
19 | - [Concurrency](Pages/Concurrency.md)
20 | - [DDD](Pages/DDD.md)
21 | - [Entity Framework](Pages/Entity%20Framework.md)
22 | - [OOP](Pages/OOP.md)
23 | - [Some other Concepts](Pages/Concepts.md)
24 | 
25 | ## Getting Started
26 | 
27 | To make the most of this repository, start by exploring the topics that interest you the most or those where you feel you need more practice. Each section includes brief and detailed explanations.
28 | 
29 | ## How to Contribute
30 | 
31 | If you have suggestions, corrections, or additional resources to add, please open an issue or submit a pull request. See the [Contribution Guidelines](CONTRIBUTING.md) for more information on how you can contribute to the repo.
32 | 
33 | ## License
34 | 
35 | This project is licensed under the MIT License - see the [LICENSE](LICENSE.md) file for details.
36 | 
37 | ---
38 | 
39 | Thank you for visiting the DotNet Engineer Masterclass. Happy learning!
40 | 
--------------------------------------------------------------------------------