├── Aws
│   └── Readme.md
├── CSS
│   └── readme.md
├── DataBase
│   ├── generalDatabase.md
│   ├── mongodb.md
│   ├── mongooseQuery.md
│   ├── sql.md
│   └── sqlQueries.md
├── Dsa
│   └── Readme.md
├── Express
│   └── Readme.md
├── Interview
│   └── readme.md
├── Javascript
│   ├── Readme.md
│   └── practice
│       ├── search.html
│       └── slider.html
├── Miscellaneous
│   └── Readme.md
├── Nodejs
│   └── Readme.md
├── OOPs
│   └── Readme.md
├── README.md
├── Redis
│   └── Readme.md
├── Redux
│   └── Readme.md
├── WebSoket
│   └── Readme.md
├── html
│   └── Readme.md
├── react
│   └── Readme.md
└── system
    └── Readme.md
/CSS/readme.md:
--------------------------------------------------------------------------------
1 | ## What is the Box model in CSS? Which CSS properties are a part of it?
2 |
3 | - In web development, the box model is a conceptual model that describes how the layout of an HTML element is calculated. Every HTML element can be considered as a rectangular box, with content, padding, border, and margin.
4 | - The box model consists of the following parts:
5 | - Content: The actual content of the HTML element, such as text, images, or video.
6 | - Padding: The space between the content and the element's border. Padding can be used to add space between the content and the border, or to make the content appear larger.
7 | - Border: The line that surrounds the element's content and padding. The border can be customized in terms of its style, width, and color.
8 | - Margin: The space between the border of the element and the adjacent elements. Margin can be used to create space between elements or to separate them.
9 |
10 | 
11 |
12 |
13 |
14 |
15 | ## What is the z-index? What is the stacking context?
16 | - z-index is a CSS property that determines the stacking order of positioned elements. It specifies the stack level of an element, relative to other elements on the same stacking context. Elements with higher z-index values will appear in front of elements with lower z-index values.
17 | - A stacking context is an HTML element that contains a set of stacked, overlapping elements. Each stacking context has its own stacking order, determined by the z-index values of its child elements. A stacking context is created by certain CSS properties and combinations, including position: relative or position: absolute with a z-index other than auto, position: fixed, position: sticky, and opacity less than 1.
18 | - The stacking order of elements can be visualized as a three-dimensional stack, with each element occupying a layer in the stack. The z-index property determines the order of the layers, with higher values appearing in front of lower values.
19 | - A stacking context is an element that contains a set of layers. Within a local stacking context, the z-index values of its children are set relative to that element rather than to the document root. Layers outside of that context (i.e. sibling elements of a local stacking context) can't sit between layers within it. If element B sits on top of element A, then element C, a child of element A, can never appear above element B, even if element C has a higher z-index than element B.
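
A minimal sketch of that trap (selectors and values are illustrative): the child's large z-index only competes inside its parent's stacking context.

```CSS
.parent {
  position: relative;
  z-index: 1;
  opacity: 0.99; /* opacity < 1 creates a stacking context */
}
.parent .child {
  position: absolute;
  z-index: 9999; /* compared only against layers inside .parent */
}
.sibling {
  position: relative;
  z-index: 2;    /* beats .parent (1), so it also covers .child */
}
```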
20 |
21 |
22 |
23 | ## Difference between flex and grid? Where to use them?
24 | - Flexbox is one-dimensional, while CSS Grid is two-dimensional.
25 | - Flexbox is best for layouts that involve either rows or columns, while CSS Grid is best for layouts that involve both rows and columns.
26 | - Flexbox is designed for laying out items in a container, while CSS Grid is designed for laying out the container itself.
27 | - Flexbox provides more flexibility in distributing space among items in a container, while CSS Grid provides more control over the placement of items.
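
A small sketch contrasting the two points above (selectors are illustrative): flex lays out a single row, grid declares rows and columns together.

```CSS
/* One-dimensional: items flow along a single row */
.toolbar {
  display: flex;
  gap: 8px;
  justify-content: space-between;
}

/* Two-dimensional: columns and rows defined together */
.layout {
  display: grid;
  grid-template-columns: 200px 1fr;
  grid-template-rows: auto 1fr auto;
  gap: 8px;
}
```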
28 |
29 |
30 | ## Difference between ***display:none*** and ***visibility:hidden***
31 | - "**display: none**" removes an element from the document flow entirely. This means that the element is not rendered and does not take up any space on the page. Additionally, any child elements of the hidden element are also hidden. If you inspect the HTML code, you won't see the element at all. This is useful when you want to completely remove an element from the page and don't want it to be visible or accessible in any way.
32 | - "**visibility: hidden**", which is often confused with "display: hidden" (which is not a valid property), hides an element while still taking up space on the page. This means that the element is not visible, but its space is still reserved, and any child elements are also still present and hidden. If you inspect the HTML code, you will see the element but it will not be visible on the page. This is useful when you want to hide an element but still want it to occupy its space and be accessible in some way.
33 |
34 |
35 |
36 | ## What is the difference between position absolute, relative and fixed?
37 | - **relative**: When an element is positioned as "relative", it is positioned relative to its normal position within the document flow. This means that the element will still take up space in the document flow, and other elements will not be repositioned based on its position. You can use top, right, bottom, and left properties to move it from its original position.
38 | - **absolute**: When an element is positioned as "absolute", it is positioned relative to its nearest positioned ancestor element (i.e., an ancestor element that has a position value of "relative", "absolute", or "fixed"). If there is no positioned ancestor, then the element is positioned relative to the initial containing block (usually the viewport). Unlike "relative", an absolutely positioned element does not take up space in the document flow, and other elements can be positioned relative to it. You can use top, right, bottom, and left properties to specify the position of the element.
39 | - **fixed**: When an element is positioned as "fixed", it is positioned relative to the viewport, and will remain in the same position even if the page is scrolled. This is useful for elements that you want to keep in a fixed position on the screen, such as navigation menus or headers. You can use top, right, bottom, and left properties to specify the position of the element.
40 |
41 | - In summary, "relative" positioning is relative to the normal position of the element within the document flow, "absolute" positioning is relative to the nearest positioned ancestor, and "fixed" positioning is relative to the viewport.
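
A minimal sketch of the three values (selectors and offsets are illustrative):

```CSS
.banner { position: relative; top: 10px; }        /* nudged from its normal spot; space preserved */
.badge  { position: absolute; top: 0; right: 0; } /* pinned to nearest positioned ancestor */
.nav    { position: fixed; top: 0; width: 100%; } /* pinned to the viewport; survives scrolling */
```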
42 |
43 |
44 | ## What is !important
45 | - In CSS (Cascading Style Sheets), !important is a keyword that is used to give a style rule more weight than it would normally have in the cascade of styles. When !important is added to a CSS property value, it overrides any previous styles set for that property and makes the new style rule the most important one. This means that the style rule with !important will take precedence over other styles applied to the same element.
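
For example (selectors are illustrative), the !important declaration wins even against a more specific selector:

```CSS
p#intro { color: blue; }     /* more specific, but loses for the color property */
p { color: red !important; } /* wins the cascade for color */
```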
46 |
47 |
48 |
49 | ## Difference between reset and normalize CSS? How do they differ?
50 | Reset CSS: CSS resets aim to remove all built-in browser styling. For example, margins, paddings, and font sizes of all elements are reset to be the same.
51 |
52 | Normalize CSS: Normalize CSS aims to make built-in browser styling consistent across browsers. It also corrects bugs for common browser dependencies.
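
A minimal hand-rolled reset sketch (real projects typically pull in a published reset or normalize.css instead):

```CSS
* {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}
```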
53 |
54 |
55 | ## What is the difference between inline, inline-block, and block?
56 | Block Elements: Block elements always start on a new line and take up the full available width (an entire row). Common block elements are `<div>`, `<p>`, and `<h1>`–`<h6>`.
57 |
58 | Inline Elements: Inline elements don't start on a new line; they appear on the same line as the content and tags beside them. Some examples of inline elements are `<span>`, `<a>`, `<strong>`, and `<em>` tags.
59 |
60 | Inline Block Elements: Inline-block elements are similar to inline elements, except they can have padding and margins and set height and width values.
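
A short sketch of the three display values (class names are illustrative):

```CSS
.tag    { display: inline; }       /* flows in text; width/height are ignored */
.button { display: inline-block; } /* flows in text; width/height/margins apply */
.panel  { display: block; }        /* starts on a new line; fills the row */
```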
61 |
62 |
63 |
64 | ## What are Pseudo elements and Pseudo classes?
65 | Pseudo-elements allow us to create and style parts of the page that do not normally exist in the document tree, for example ::after.
66 |
67 | ```CSS
68 | ::before
69 | ::after
70 | ::first-letter
71 | ::first-line
72 | ::selection
73 | ```
74 |
75 | In the below example, the color will appear only on the first line of the paragraph.
76 |
77 | ```CSS
78 | p::first-line {
79 |   color: #ff0000;
80 |   font-variant: small-caps;
81 | }
82 | ```
83 |
84 | Pseudo-classes select regular elements but under certain conditions like when the user is hovering over the link.
85 |
86 | ```CSS
87 | :link
88 | :visited
89 | :hover
90 | :active
91 | :focus
92 | ```
93 |
94 | Example of a pseudo-class: in the below example, the color applies to the anchor tag when it's hovered.
95 |
96 | ```CSS
97 | /* mouse over link */
98 | a:hover {
99 |   color: #FF00FF;
100 | }
101 | ```
102 |
103 |
104 |
105 | ## How are the CSS selectors matched against the elements by the browser?
106 | The order of matching selectors goes from right to left of the selector expression. The elements in the DOM are filtered by browsers based on the key (rightmost) selector and are then traversed up through their ancestor elements to determine the matches. The speed of determining the elements depends on the length of the chain of selectors. Consider an example:
107 |
108 | ```CSS
109 | p span {
110 |   color: black;
111 | }
112 | ```
113 |
114 | Here, the browser first finds all span elements in the DOM and then traverses up through each one's ancestor elements to check whether any of them is a paragraph (p) element.
115 |
116 | Once the browser has found all matching span tags that have a paragraph element as an ancestor and has applied the black color to their content, the matching process stops.
117 |
118 |
119 | ## Can you name the four types of @media properties?
120 | The four types of @media properties are:
121 | ```CSS
122 | @media all {}
123 | @media screen {}
124 | @media print {}
125 | @media speech {}
126 | ```
127 |
128 |
129 | **All** → It’s the default property. Used for all media-type devices.
130 | **Screen** → Used for computer and mobile screens.
131 | **Print** → Used for printers.
132 | **Speech** → Used for screen readers.
133 |
134 |
135 | ## What is progressive rendering? How do you implement progressive rendering in a website? What are the advantages of it?
136 | Progressive rendering is the name given to techniques used to improve the performance of a webpage (in particular, improve perceived load time) to render content for display as quickly as possible.
137 |
138 | We can implement progressive rendering by **lazy loading images**. We can use the *Intersection Observer API* to lazy load images; the API makes it simple to detect when an element enters the viewport and take an action when it does. Once an image enters the viewport, we start loading it, as sketched below.
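
A hedged sketch of the idea (it assumes `<img data-src="...">` placeholders in the markup; the data-src attribute is a convention, not part of the API):

```javascript
// Swap in the real URL only when the image scrolls into view
const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src; // start downloading the real image
    obs.unobserve(img);        // each image only needs to load once
  }
});

lazyImages.forEach((img) => observer.observe(img));
```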
--------------------------------------------------------------------------------
/DataBase/generalDatabase.md:
--------------------------------------------------------------------------------
1 | ## What is a database?
2 | - A database is an organized collection of structured data that is stored and managed in a computer system.
3 | - It is designed to efficiently store, retrieve, and manage large amounts of information.
4 | - It provides a structured way to organize, store, and manipulate data, enabling efficient data management, data retrieval, and data analysis. Databases are widely used in various applications and industries to store and manage data effectively.
5 |
6 |
7 |
8 | ## What are the different types of databases?
9 | - There are several types of databases, including:
10 | - 1) **Relational Databases**: These databases organize data into tables with predefined relationships between them. They use Structured Query Language (SQL) for querying and managing data. Examples include Oracle, MySQL, and Microsoft SQL Server.
11 | - 2) **Non-Relational Databases (NoSQL)**: These databases are designed to handle unstructured or semi-structured data and provide flexible data models. They offer high scalability and performance. Examples include MongoDB, Cassandra, and Redis.
12 | - 3) **Object-Oriented Databases**: These databases store data in the form of objects, which encapsulate both data and methods. They are used in object-oriented programming languages and provide support for object-oriented concepts like inheritance and polymorphism.
13 | - 4) **Hierarchical Databases**: These databases organize data in a tree-like structure with parent-child relationships. Each child can have only one parent, and the relationships are rigidly defined. IBM's Information Management System (IMS) is an example of a hierarchical database.
14 | - 5) **Network Databases**: Network databases also organize data with a parent-child relationship, but unlike hierarchical databases, they allow each child to have multiple parents. The CODASYL database is an example of a network database.
15 | - 6) **Graph Databases**: Graph databases use graph structures to represent and store data. They are designed to handle highly interconnected data and are used for tasks such as social network analysis and recommendation systems. Examples include Neo4j and Amazon Neptune.
16 | - 7) **Time-Series Databases**: Time-series databases are optimized for handling time-stamped or time-series data, such as sensor data, stock prices, or log files. They provide efficient storage and retrieval of time-based data and support time-based queries. InfluxDB and Prometheus are examples of time-series databases.
17 | - 8) **Spatial Databases**: These databases are specialized for storing and querying spatial or geographic data, such as maps, locations, or GPS coordinates. They offer spatial indexing and support for spatial queries. PostGIS and Oracle Spatial are examples of spatial databases.
18 | - It's important to note that these types of databases are not mutually exclusive, and hybrid database systems can incorporate elements from multiple types to cater to specific requirements.
19 |
20 |
21 |
22 | ## What is the difference between SQL and NoSQL databases?
23 | - SQL (Structured Query Language) and NoSQL (Not Only SQL) are two types of database management systems that are used to store and manage data in different ways.
24 |
25 | - Here are some of the key differences between SQL and NoSQL databases:
26 | **Data Model:** SQL databases are relational databases that use tables to store data. Each table has a set of columns and rows, and each row represents a record with a unique identifier. SQL databases use Structured Query Language (SQL) to manipulate data and perform operations such as insert, update, delete, and select.
27 | - NoSQL databases, on the other hand, are non-relational databases that can store data in a variety of formats, including key-value, document, column-family, and graph. NoSQL databases are designed to handle large volumes of unstructured and semi-structured data, which makes them a popular choice for big data and real-time applications. NoSQL databases use different query languages and APIs for data manipulation and retrieval, depending on the type of database.
28 |
29 | **Query Language:** SQL databases use a structured query language (SQL) to manipulate and retrieve data. SQL is a standard language that provides a consistent syntax for working with relational databases. In contrast, NoSQL databases use different query languages or APIs, depending on the type of database. For example, MongoDB uses a query language called MongoDB Query Language (MQL), while Cassandra uses a query language called CQL (Cassandra Query Language).
30 |
31 | **Scalability:** SQL databases are vertically scalable, which means they can handle increasing amounts of data by adding more hardware resources such as CPU, RAM, or storage. However, adding more hardware resources can be costly and may not provide linear scalability. NoSQL databases, on the other hand, are horizontally scalable, which means they can handle increasing amounts of data by adding more nodes to a cluster. This allows for more flexible and cost-effective scalability.
32 |
33 | **ACID Compliance:** ACID stands for Atomicity, Consistency, Isolation, and Durability, which are the properties that ensure transactional consistency and reliability in a database. SQL databases are typically ACID compliant, which means they guarantee that each transaction is executed reliably, even in the face of failures or errors. NoSQL databases, however, may sacrifice some of the ACID properties for the sake of scalability or performance.
34 |
35 | **Use Cases:** SQL databases are well suited for applications that require complex queries and transactions, such as financial applications, e-commerce websites, or enterprise systems. NoSQL databases, on the other hand, are better suited for applications that require high scalability, low latency, and flexible data models, such as social media platforms, IoT devices, or real-time analytics.
36 |
37 | - SQL and NoSQL databases have different strengths and weaknesses, and the choice between them depends on the specific needs of the application. SQL databases are reliable, consistent, and provide a structured data model, while NoSQL databases are flexible, scalable, and provide a non-relational data model.
38 |
39 |
40 |
41 | ## Explain the concept of normalization in database design.
42 | - Normalization is a process in database design that involves organizing data in a structured and efficient manner by eliminating redundancy and improving data integrity. It aims to minimize data duplication and ensure that each piece of data is stored in the most appropriate and logical place within the database.
43 | - The concept of normalization is based on a set of rules called normal forms, which provide guidelines for designing well-structured databases. The most commonly used normal forms are:
44 | - First Normal Form (1NF): This form requires that each column in a table contains only atomic values (indivisible values). It eliminates repeating groups and ensures that each attribute holds a single value.
45 | - Second Normal Form (2NF): In addition to meeting the requirements of 1NF, this form states that every non-key attribute in a table must be functionally dependent on the entire primary key. It eliminates partial dependencies.
46 | - Third Normal Form (3NF): Building upon the rules of 2NF, 3NF requires that there should be no transitive dependencies, meaning that non-key attributes should not depend on other non-key attributes. It helps eliminate redundancy by splitting tables into multiple tables as necessary.
47 | - Normalization helps ensure data consistency, reduces the chances of data anomalies or inconsistencies, and simplifies data maintenance and updates. It also allows for more efficient querying and retrieval of data.
48 |
49 |
50 |
51 | ## What is indexing?
52 | - Indexing is the process of creating a data structure that allows for quick retrieval of information from a database or other data storage system. An index is a data structure that contains a subset of the data in a larger data store, arranged in a way that allows for fast searching and retrieval of information.
53 | - In a database, an index is typically created on one or more columns of a table. When a query is executed that includes a search condition on the indexed column(s), the database engine can use the index to quickly locate the relevant data, rather than having to scan the entire table.
54 | - Indexes can greatly improve the performance of database queries, especially for large data sets. However, indexes also come with some overhead, as they require additional disk space and can slow down write operations (since the index must be updated each time a record is added, deleted, or modified).
55 | - There are several types of indexes, including:
56 | - **B-tree indexes**: The most common type of index, used for efficient querying of range-based searches.
57 | - **Hash indexes**: Used for fast equality-based searches, but not well-suited for range-based searches.
58 | - **Bitmap indexes**: Used for efficient searching of low-cardinality columns (i.e., columns with a limited number of distinct values).
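
For instance, in the MongoDB shell (collection and field names are illustrative), a default index is B-tree-based and a hashed index supports equality lookups only:

```javascript
// B-tree index: supports equality and range queries on createdAt
db.orders.createIndex({ createdAt: 1 });

// Hashed index: fast equality matches on email, but no range scans
db.users.createIndex({ email: "hashed" });
```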
59 |
60 |
61 | ## How do you decide which columns to index?
62 | - Selectivity: Choose columns with high selectivity, meaning they have a wide range of distinct values. Indexing highly selective columns is more effective because it narrows down the data and reduces the number of rows to scan during query execution.
63 |
64 | - Query Frequency: Identify the columns frequently used in queries, especially in the WHERE clause or JOIN conditions. Indexing these columns can significantly improve the performance of queries that involve filtering or joining based on those columns.
65 |
66 | - Cardinality: Consider the cardinality of the column, which refers to the number of distinct values in the column compared to the total number of rows in the table. Columns with high cardinality are often good candidates for indexing as they provide more selective filtering.
67 |
68 | - Sorting and Grouping: If your queries involve sorting or grouping by certain columns, consider indexing those columns. Indexes can speed up sorting operations and enhance the performance of queries that involve ORDER BY or GROUP BY clauses.
69 | - Composite Indexes: In some cases, creating an index on multiple columns can be more effective than indexing individual columns. Composite indexes can be useful when queries involve conditions or joins on multiple columns together.
70 |
71 | - Write Operations: Consider the impact of indexes on write operations (inserts, updates, deletes). Indexes incur overhead during write operations as the database needs to update the index along with the data. Excessive indexing can lead to slower write performance, so strike a balance between read and write performance based on your workload requirements.
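
A sketch of a compound index applying these ideas (MongoDB shell; names are illustrative): one index serves queries that filter by customerId and sort by createdAt.

```javascript
// One index covers: find by customerId, sorted by newest first
db.orders.createIndex({ customerId: 1, createdAt: -1 });
db.orders.find({ customerId: 42 }).sort({ createdAt: -1 });
```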
72 |
73 |
74 |
75 |
76 | ## How do you optimize a database query for better performance?
77 | - Optimizing database queries can significantly improve the performance and efficiency of a database system. Here are some key approaches to optimize database queries:
78 | - **Indexing**: Indexes can improve query performance by allowing the database to locate data more quickly. Identify the frequently used columns in your queries and create appropriate indexes on those columns. Indexes can speed up data retrieval, especially for large tables.
79 | - **Query Optimization**: Analyze and optimize the SQL queries themselves. Ensure that queries are well-structured, use appropriate joins, filters, and aggregations, and avoid unnecessary operations or calculations. Utilize query optimization techniques like query rewriting, query hints, and query plan analysis.
80 | - **Denormalization**: In certain cases, denormalization can improve query performance by reducing the number of joins required. By storing redundant data or precalculating derived values, you can avoid complex joins and improve query response time. However, be cautious as denormalization can impact data consistency and maintenance.
81 | - **Proper Database Design**: A well-designed database schema can contribute to better query performance. Normalize the database schema to minimize redundancy and maintain data integrity. Properly define primary keys, foreign keys, and relationships between tables to ensure efficient data retrieval.
82 | - **Database Statistics**: Keep database statistics up to date. Database systems use statistics to determine optimal query execution plans. Regularly update statistics on tables and indexes to provide the query optimizer with accurate information for making efficient decisions.
83 | - **Database Configuration**: Adjust database configuration parameters to suit your workload and hardware resources. Parameters like memory allocation, buffer sizes, and query caching can significantly impact query performance. Optimize these settings based on your specific requirements and workload patterns.
84 | - **Partitioning**: Partition large tables by dividing them into smaller, more manageable segments based on predefined criteria, such as range, list, or hash. Partitioning can improve query performance by reducing the amount of data the database needs to scan.
85 | - **Proper Index Maintenance**: Regularly analyze and maintain indexes by rebuilding or reorganizing them. Over time, indexes can become fragmented, which can negatively impact query performance. Regular maintenance ensures indexes are optimized and data access is efficient.
86 | - **Caching**: Implement caching mechanisms to store frequently accessed or computationally expensive query results in memory. Caching can significantly reduce the load on the database and improve query response time for repetitive queries.
87 | - **Hardware and Infrastructure**: Optimize your hardware infrastructure, such as disk configurations, memory capacity, and network bandwidth. Ensure that the database server has sufficient resources to handle the expected workload.
88 | - It's important to note that query optimization is a continuous process, and the appropriate optimization techniques may vary depending on the specific database system, workload, and data characteristics. Profiling and monitoring the database performance can help identify bottlenecks and guide further optimization efforts.
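
One concrete, hedged example of profiling a query (MongoDB shell; names are illustrative): explain() reports whether a query used an index scan (IXSCAN) or a full collection scan (COLLSCAN).

```javascript
// Inspect the query plan before and after adding an index
db.orders.find({ status: "shipped" }).explain("executionStats");
// look for IXSCAN vs COLLSCAN and totalDocsExamined in the output
db.orders.createIndex({ status: 1 });
```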
89 |
90 |
91 | ## What is Normalization / Denormalization?
92 | - Normalization and denormalization are techniques used in database design to optimize the structure of a database for performance, scalability, and data integrity.
93 | - Normalization is the process of organizing data in a database so that it is structured in a logical and consistent way. This involves breaking down large tables into smaller, more specialized tables, and creating relationships between them using foreign keys. By doing this, data redundancy is reduced, and data integrity is improved.
94 | - Denormalization, on the other hand, is the process of intentionally introducing redundancy into a database to improve performance. This involves adding duplicate data to a table, which can help to reduce the need for joins between tables, and speed up query performance. However, denormalization can also increase the risk of data inconsistency and can make updates to the database more complex.
95 | - Normalization is about reducing redundancy and improving consistency, while denormalization is about improving performance by introducing redundancy. Both techniques have their own advantages and disadvantages, and their use depends on the specific requirements and characteristics of the database and the application that uses it.
96 |
97 | ## What is a deadlock in a database and how can it be resolved?
98 | - A deadlock in a database occurs when two or more transactions are waiting for each other to release resources (e.g., locks) that they hold, resulting in a circular dependency that prevents any of the transactions from proceeding. As a result, the transactions remain in a blocked state indefinitely, causing a system-wide deadlock.
99 | - Deadlocks can occur due to various reasons, such as concurrent access to shared resources, inconsistent lock acquisition ordering, or poor transaction design. Resolving deadlocks requires implementing strategies to detect and resolve the deadlock situation. Here are some common approaches:
100 | - **Deadlock Detection**: Employ deadlock detection mechanisms to identify the occurrence of deadlocks in the database system. Deadlock detection algorithms periodically examine the resource allocation and wait-for graphs to identify cycles and determine if a deadlock exists.
101 | - **Deadlock Prevention**: Preventing deadlocks involves implementing techniques that eliminate one or more conditions necessary for a deadlock to occur. This can include techniques like imposing a strict lock acquisition ordering or using a two-phase locking protocol.
102 | - **Deadlock Avoidance**: Deadlock avoidance involves predicting and avoiding situations that might lead to a deadlock. This can be achieved by employing algorithms that dynamically analyze the resource needs of transactions and make scheduling decisions to ensure safe execution without encountering a deadlock.
103 | - **Deadlock Resolution**: If a deadlock is detected, it needs to be resolved. One common approach is to employ a resource preemption strategy, where the system selectively rolls back one or more transactions to break the deadlock. The rolled-back transaction(s) can then be restarted to progress without the deadlock situation.
104 | - **Timeout Mechanisms**: Set appropriate timeouts for transactions and operations to prevent them from waiting indefinitely. If a transaction waits for a resource beyond a specified timeout period, it can be rolled back or terminated to break the potential deadlock.
105 | - **Deadlock Monitoring and Analysis**: Continuously monitor and analyze the occurrence of deadlocks in the database system. Collect data on deadlock incidents, analyze their root causes, and fine-tune the system's configuration or transaction design to minimize the chances of deadlocks.
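
As one hedged illustration of resolution-by-retry in application code (MongoDB Node.js driver; variable names are illustrative): withTransaction re-runs the callback when the server aborts a transaction because of a transient conflict.

```javascript
// Transfer funds atomically; the driver retries on transient aborts
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    await accounts.updateOne({ _id: from }, { $inc: { balance: -100 } }, { session });
    await accounts.updateOne({ _id: to },   { $inc: { balance: 100 } },  { session });
  });
} finally {
  await session.endSession();
}
```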
107 |
108 |
109 |
110 |
111 | ## Explain the difference between a clustered and non-clustered index.
112 | - Clustered Index:
113 | A clustered index determines the physical order of data rows in a table based on the values of the indexed column(s). In simple terms, it's like sorting the rows of a table based on a specific column. Each table can have only one clustered index because it determines the physical layout of the data on disk.
114 | - Example: Consider a table called "Students" with columns like "StudentID" (primary key), "Name," and "Age." If we create a clustered index on the "StudentID" column, the data rows in the table will be physically sorted based on the values in the "StudentID" column. This means that the rows in the table will be ordered by their StudentID values.
115 |
116 |
117 | - Non-Clustered Index:
118 | A non-clustered index is a separate structure that contains a sorted list of key values along with a pointer to the actual data rows in the table. It does not determine the physical order of the data rows. Instead, it provides a quick way to look up data based on the indexed column(s) without altering the physical order of the table.
119 | - Example: Let's use the same "Students" table as before. If we create a non-clustered index on the "Age" column, the index will contain a sorted list of unique age values along with pointers to the corresponding data rows in the table. This index allows for faster lookups based on the age value without rearranging the actual rows in the table.
120 |
121 | - The choice between a clustered and non-clustered index depends on the specific requirements of the table and the queries that will be executed against it. Clustered indexes are well-suited for tables where the primary key provides a logical ordering or when range-based queries are common. Non-clustered indexes are useful for improving query performance on specific columns that are frequently used in queries.
122 |
123 |
124 | ## Describe the concept of data replication and its significance in a distributed database system.
125 | - Database replication is the process of creating and maintaining copies of a database in multiple locations, with the aim of improving data availability, reliability, and performance. In other words, it involves copying data from one database to another so that multiple users can access the same data from different locations. This helps to ensure that the data is always available and reduces the risk of data loss in the event of a hardware or network failure.
126 |
127 | - Significance of Data Replication in a Distributed Database System:
128 | - **Improved Data Availability**: Replication enhances data availability by ensuring that multiple copies of data exist. If one server or node fails, other copies of the data can still be accessed, allowing the system to continue functioning without interruption. Replication provides fault tolerance and minimizes downtime.
129 | - **Enhanced Performance and Scalability**: With data replication, data can be accessed from multiple locations, reducing the network latency and improving response times for read operations. It allows for localized data access, enabling distributed query processing and reducing the load on a single server. Replication also enables horizontal scalability by distributing the data and load across multiple servers.
130 | - **Disaster Recovery and High Availability**: Replication plays a critical role in disaster recovery and high availability scenarios. By maintaining replicated copies of data in geographically dispersed locations, organizations can recover quickly from disasters, such as natural calamities, hardware failures, or network outages. Replication enables failover and load balancing across redundant servers, ensuring continuous service availability.
131 | - **Data Consistency and Integrity**: Data replication must ensure data consistency and integrity across all replicas. Different consistency models, such as eventual consistency or strong consistency, can be employed to balance performance and data consistency requirements. Consistency mechanisms, such as quorum-based protocols or distributed consensus algorithms, are used to synchronize updates across replicas and resolve conflicts.
132 | - **Geographical Data Distribution**: Replication allows for data to be distributed across different geographic locations, enabling localized data access and reducing network latency for users in different regions. This is particularly important in global organizations with users spread across multiple locations.
133 | - **Load Balancing**: Replication enables load balancing by distributing read and write operations across multiple copies of data. Read operations can be distributed among replicas to improve performance and handle high read loads. Write operations can be managed through mechanisms like master-slave replication or multi-master replication to distribute the write load and maintain data consistency.
134 | - **Offline Data Access**: Replication can provide offline data access in scenarios where connectivity to the main database is limited or intermittent. Replicas of data can be synchronized with the main database and used for local data access when the network connection is not available.
135 | - It's important to note that data replication also introduces challenges, such as ensuring data consistency, managing conflicts, handling updates across replicas, and maintaining synchronization. These challenges need to be carefully addressed through appropriate replication strategies, consistency models, conflict resolution mechanisms, and monitoring tools to achieve the desired benefits of data replication in a distributed database system.
136 |
137 | - Some common methods of database replication include:
138 | - **Master-slave replication**: In this approach, the master database is the authoritative source of data, and all changes to the data are made on the master. The changes are then propagated to one or more replica databases, which are read-only copies of the master. Slave databases can be used for reporting, backups, or load balancing, among other purposes.
139 | - **Master-master replication**: In this approach, two or more databases act as both master and replica at the same time, meaning that changes can be made to any of the databases. Changes made on one master are propagated to the other master(s), and conflicts are resolved based on specific rules or algorithms.
140 | - **Multi-master replication**: This is a more advanced form of master-master replication, in which multiple databases can act as both master and replica at the same time, and changes can be made to any of the databases. The replication process is more complex in this approach, as conflicts can arise if multiple databases attempt to change the same data at the same time.
141 |
142 |
143 |
144 | ## What is ORM and ODM? And what is the difference between them?
145 | - ORM (Object-Relational Mapping) and ODM (Object-Document Mapping) are both software development techniques used to bridge the gap between object-oriented programming languages and relational databases (in the case of ORM) or document databases (in the case of ODM). Here's an explanation of each concept and the difference between them:
146 |
147 | - **ORM (Object-Relational Mapping)**:
148 | ORM is a technique that enables developers to map objects from an object-oriented programming language (such as Java or C#) to relational database tables. It provides a way to interact with the database using object-oriented principles, abstracting away the complexities of SQL queries and database-specific operations. ORM frameworks, such as Hibernate (for Java) or Entity Framework (for .NET), handle the mapping and provide APIs for performing CRUD (Create, Read, Update, Delete) operations on the database using objects and object-oriented queries.
149 |
150 | - **ODM (Object-Document Mapping)**:
151 | ODM is a technique similar to ORM, but it is used to map objects from an object-oriented programming language to document databases, which store data in a document-oriented format (e.g., JSON or BSON). Document databases, such as MongoDB or Couchbase, don't have a rigid schema like relational databases. ODM frameworks, such as Mongoose (for Node.js) or Morphia (for Java), provide mapping and querying capabilities to interact with the document database using object-oriented paradigms. They allow developers to define object schemas, perform CRUD operations, and query the database using object-oriented query languages or APIs.
152 |
153 | - The main difference between ORM and ODM lies in the type of databases they are designed to work with:
154 | - Database Type: ORM is used for relational databases, which store data in tables with a predefined schema and enforce relationships using foreign keys. ODM, on the other hand, is used for document databases, where data is stored in a flexible, self-contained document format without a fixed schema.
155 | - Mapping Paradigm: ORM frameworks map objects to relational tables, representing object properties as columns and relationships as foreign keys or join tables. ODM frameworks, on the other hand, map objects to documents, with object properties and relationships often represented directly in the document structure.
156 | - Query Language: ORM frameworks typically use SQL or SQL-like query languages to interact with the relational database, allowing for complex queries, joins, and aggregations. ODM frameworks use query languages or APIs tailored to the document database, which can involve object-oriented query languages or specialized document query syntax.
157 | - Schema Flexibility: Relational databases enforce strict schemas, and any changes require modifying the table structure. ORM frameworks need to handle schema migrations. Document databases, on the other hand, are schema-flexible, allowing documents within a collection to have varying structures. ODM frameworks can handle evolving schemas without requiring migrations.
158 | - Data Relationships: Relational databases rely on explicit relationships defined through foreign keys to establish connections between entities. ORM frameworks handle these relationships using object associations and provide features like lazy loading and eager loading. Document databases support embedded documents or references to establish relationships, and ODM frameworks provide mapping capabilities for managing these relationships.
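
A brief ODM sketch with Mongoose, which this document mentions for Node.js (the schema and query are illustrative):

```javascript
const mongoose = require('mongoose');

// Map a JavaScript object shape onto a MongoDB collection
const userSchema = new mongoose.Schema({
  name: String,
  email: { type: String, required: true, unique: true },
});
const User = mongoose.model('User', userSchema);

async function findByEmail(email) {
  // Object-oriented query instead of a hand-written database command
  return User.findOne({ email });
}
```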
159 |
160 |
161 |
162 |
163 | ## Difference between JSON and BSON
164 | - JSON (JavaScript Object Notation) and BSON (Binary JSON) are both data formats used for representing structured data. Here's the difference between JSON and BSON:
165 |
166 | **Format**:
167 | - JSON: JSON is a text-based data format that uses human-readable syntax. It is often used for data interchange between systems and is easy to understand and work with for both humans and machines.
168 | - BSON: BSON, on the other hand, is a binary representation of JSON-like documents. It adds additional data types and features not available in JSON and is designed for efficient storage and manipulation of data in database systems.
169 | **Data Types**:
170 | - JSON: JSON supports a limited set of basic data types, including strings, numbers, booleans, arrays, objects, and null values. It does not have built-in support for binary data, dates, or other complex data types.
171 | - BSON: BSON extends the data types supported by JSON and includes additional types such as binary data, dates, timestamps, regular expressions, and more. It provides richer data modeling capabilities and allows for more precise representation of data.
172 | **Size and Efficiency**:
173 | - JSON: JSON is a human-readable format and tends to be larger in size compared to its binary counterpart, BSON. This can impact network bandwidth and storage requirements.
174 | - BSON: BSON is a binary format and is more compact in size compared to JSON. It includes features like field length prefixes, which make it more efficient for storage and transmission. BSON is commonly used in databases that deal with large amounts of data and need optimized storage and retrieval performance.
175 | **Features and Functionality**:
176 | - JSON: JSON is widely supported across programming languages and platforms, making it easy to integrate with various systems. It is used for representing structured data in web APIs, configuration files, and data interchange scenarios.
177 | - BSON: BSON provides additional features that are useful in database systems, such as support for ordered arrays, efficient indexing, and the ability to represent complex data structures. It is commonly used in database systems like MongoDB, which store and process data in a document-oriented manner.
178 | - In summary, JSON is a human-readable text format used for data interchange, while BSON is a binary format optimized for storage and retrieval in database systems. BSON extends the data types supported by JSON and offers additional features for efficient data manipulation. The choice between JSON and BSON depends on the specific requirements of the application or system, including factors like data size, performance, and integration needs.
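
A hedged sketch using the npm `bson` package (assuming its serialize/deserialize exports): a Date survives a BSON round trip, whereas plain JSON flattens it to a string.

```javascript
const { serialize, deserialize } = require('bson');

const doc = { name: 'Ada', createdAt: new Date() };

const bytes = serialize(doc);  // compact binary Buffer
const back = deserialize(bytes);

console.log(back.createdAt instanceof Date);                   // true: BSON keeps the Date type
console.log(typeof JSON.parse(JSON.stringify(doc)).createdAt); // "string": JSON does not
```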
179 |
180 |
181 | ## Should I create/destroy a new connection for each database operation?
182 | - Creating and destroying a new connection for each database operation can have a significant impact on performance, especially if you have a high volume of database operations. It is generally not recommended to create and destroy connections for each operation, as establishing a connection can be a relatively expensive operation in terms of time and resources.
183 | - Instead, it is advisable to establish a connection to the database once and reuse that connection for multiple operations. Connection pooling is a common technique used to manage and reuse a pool of established connections. With connection pooling, a set of pre-established connections is maintained in a pool, and each connection is handed out to the application when needed and returned to the pool after use.
184 | - Benefits of using connection pooling include:
185 | **"Performance"**: Reusing connections avoids the overhead of establishing a new connection for each operation, resulting in improved performance and reduced latency.
186 |
187 | **Resource Efficiency**: Connection pooling optimizes the utilization of database server resources by reducing the number of connections opened and closed.
188 |
189 | **Scalability**: Connection pooling allows multiple concurrent clients to share a pool of connections, enabling better scalability in handling concurrent requests.
190 |
191 | - However, it's important to configure the connection pool size appropriately based on the expected workload and database capacity. Setting the pool size too large can lead to resource exhaustion, while setting it too small can result in connection delays and contention.
192 | - The specifics of connection pooling and how it is implemented can vary depending on the programming language and database driver you are using. It's recommended to refer to the documentation and best practices specific to your programming language and database technology to effectively manage connections and utilize connection pooling.
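
A minimal sketch with Mongoose (the URI and pool size are illustrative; maxPoolSize is the underlying MongoDB driver's pool option): connect once at startup and reuse the pooled connection everywhere.

```javascript
const mongoose = require('mongoose');

async function init() {
  // One shared connection pool for the whole process
  await mongoose.connect('mongodb://localhost:27017/app', {
    maxPoolSize: 10, // upper bound on concurrent sockets in the pool
  });
}
// Later, models reuse the pool; never connect/disconnect per query
```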
193 |
194 |
195 | ## How do you import and export to a database?
196 | - The process of importing and exporting data to and from a database depends on the specific database management system (DBMS) you are using. However, I can provide a general overview of the common methods used for importing and exporting data.
197 | - **Importing Data**:
198 | - File Import: Many DBMSs support importing data from various file formats such as CSV (Comma-Separated Values), JSON (JavaScript Object Notation), XML (eXtensible Markup Language), or Excel spreadsheets. You typically use specific commands or utilities provided by the DBMS to load the data from the file into the database.
199 | - Data Pumping: Some DBMSs offer utilities or tools that can pump data from one database to another, enabling data import from another database system or instance. These tools often provide options for mapping and transforming data during the import process.
200 | - **Exporting Data**:
201 | - File Export: Similar to importing, DBMSs allow exporting data to files in different formats like CSV, JSON, XML, or Excel. You can use specific commands or utilities provided by the DBMS to extract data from the database and save it to a file.
202 | - Database Dump/Backup: Many DBMSs provide functionality to create a database dump or backup, which is a comprehensive snapshot of the entire database or specific tables. This backup file can be used to restore the database or transfer the data to another system.
203 | - It's important to consult the documentation or resources specific to your DBMS for detailed instructions on how to import and export data, as the methods and commands can differ between DBMSs.
204 |
205 |
206 |
207 | ## My query/update seems to execute twice. Why is this happening?
208 | - If your query or update seems to be executing twice in Mongoose, it is likely because you are calling the function twice. There are a few common reasons why this can happen:
209 | - **Incorrect use of middleware**: If you are using Mongoose middleware to execute code before or after a query or update, you may inadvertently trigger the query/update twice if you do not call the next() function at the end of the middleware function. This can cause the query/update to be executed twice, once for the original function call and once for the middleware function call.
210 | - **Incorrect use of Promise**: If you are using a Promise to execute your query/update, you may be inadvertently executing the Promise twice by calling it multiple times. This can happen if you pass the Promise object to another function and forget to handle it properly.
211 | - **Multiple event listeners**: If you are using event listeners to execute code when a query/update is executed, you may be inadvertently attaching multiple event listeners to the same event. This can cause the event listener code to be executed multiple times, resulting in multiple executions of the query/update.
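
A hedged sketch of the middleware case (the schema and field are illustrative): each pre hook should signal completion exactly once so the operation continues, and continues only once.

```javascript
userSchema.pre('save', function (next) {
  this.updatedAt = new Date();
  next(); // call exactly once; re-invoking the hook's work (e.g. calling
          // save() again inside it) is a common source of double execution
});
```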
--------------------------------------------------------------------------------
/DataBase/mongodb.md:
--------------------------------------------------------------------------------
1 | ## What is MongoDB and why is it classified as a NoSQL database?
2 | - MongoDB is a popular open-source, document-oriented NoSQL database. It is designed to store and manage large volumes of unstructured or semi-structured data.
3 | - It's called a NoSQL database because it doesn't use traditional tables like other databases. Instead, it stores data in a flexible and adaptable way.
4 | - This makes it easier to work with different types of data and change how the data is organized. MongoDB is often used for handling large amounts of data and for applications that need to be able to grow and change quickly.
5 | - NoSQL databases like MongoDB provide flexible schemas, horizontal scalability, and better performance for certain types of applications, particularly those dealing with large amounts of data or requiring agile development processes.
6 | - MongoDB's NoSQL nature enables it to handle complex and diverse data structures, making it suitable for various use cases such as content management systems, real-time analytics, IoT applications, and more. It offers features like high availability, horizontal scaling through sharding, replication, and a rich query language for efficient data retrieval and manipulation.
7 |
8 |
9 |
10 | ## Explain the concept of a document in MongoDB
11 | - In MongoDB, a document is the fundamental unit of data storage. It is similar to a row in a traditional relational database or a record in other data storage systems.
12 | - A document in MongoDB is a JSON-like data structure, represented as a set of key-value pairs. It can contain various types of data, including strings, numbers, arrays, nested documents, and more. The document's structure is flexible, meaning different documents in the same collection can have different fields and structures. This flexibility allows for dynamic and evolving data models.
13 | - Documents in MongoDB are typically grouped together in collections. A collection is a container for multiple documents that share a common theme or purpose. Collections are analogous to tables in relational databases. Each document within a collection can have its own unique set of fields and values.
14 | - Since MongoDB is a schema-less database, you can add, modify, or remove fields within a document without affecting other documents in the collection. This flexibility is beneficial when working with evolving data or situations where different documents may have varying attributes.
15 | - Documents in MongoDB can be easily queried, updated, and indexed. The document-oriented nature of MongoDB makes it well-suited for handling complex and diverse data structures and allows for efficient storage and retrieval of information.
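
A small illustrative document (shell syntax; all field values are made up): nested objects and arrays live inside one record.

```javascript
db.users.insertOne({
  name: 'Ada Lovelace',
  tags: ['math', 'computing'],                // array field
  address: { city: 'London', country: 'UK' }, // nested document
  createdAt: new Date(),
});
```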
16 |
17 |
18 |
19 | ## What are the key features and advantages of MongoDB?
20 | - MongoDB offers several key features and advantages that contribute to its popularity and adoption in various applications. Here are some of its key features and advantages
21 | - Flexible Document Model: MongoDB uses a flexible document model, allowing you to store and manage diverse and evolving data structures. The document-based approach eliminates the need for a predefined schema, making it easy to handle changing data requirements.
22 | - Scalability and High Performance: MongoDB is designed to handle large volumes of data and high traffic loads. It supports horizontal scaling through sharding, which distributes data across multiple servers or clusters. This enables efficient scaling to handle increasing data and user demands.
23 | - Replication and High Availability: MongoDB provides built-in replication, automatically maintaining multiple copies of data across different servers. This ensures data durability, fault tolerance, and high availability. If one server fails, another replica takes over seamlessly.
24 | - Rich Querying and Indexing: MongoDB offers a powerful query language called MongoDB Query Language (MQL) that supports complex queries, including support for document-based queries, range queries, and geospatial queries. It also provides various indexing options to optimize query performance.
25 | - Ad Hoc Queries: MongoDB supports ad hoc queries, allowing you to query data without needing to predefine the queries or create complex joins. This flexibility enables faster development iterations and exploratory data analysis.
26 | - Flexible Data Model: MongoDB accommodates a variety of data types within a single document, including arrays, nested documents, and even binary data. This flexibility makes it easier to represent complex relationships and hierarchies.
27 | - Integration with Programming Languages: MongoDB provides official drivers and client libraries for various programming languages, making it easy to work with MongoDB in your preferred programming language. It has support for languages like JavaScript, Python, Java, .NET, and more.
28 | - Easy Administration and Monitoring: MongoDB offers robust administration and monitoring tools, including a web-based interface called MongoDB Compass and a command-line interface. These tools facilitate database management, monitoring performance, and diagnosing issues.
29 | - Community and Ecosystem: MongoDB has a large and active community of developers, providing a wealth of resources, tutorials, and support. It also has a vibrant ecosystem with a wide range of integrations and third-party tools available.
30 |
31 |
32 |
33 |
34 | ## What is a collection in MongoDB, and how does it differ from a traditional table in an SQL database?
35 | - In MongoDB, a collection is a grouping of individual documents. It is similar to a table in a traditional SQL database, but with a few key differences:
36 | - Schema Flexibility: In MongoDB, collections do not enforce a predefined schema. Each document within a collection can have its own unique set of fields and structure. This allows for dynamic and evolving data models without the need for altering the schema or migrating data. In contrast, SQL tables have a fixed schema where every row must adhere to the predefined structure of the table.
37 | - Document Storage: In a MongoDB collection, documents are stored in BSON (Binary JSON) format, which is a binary representation of JSON-like documents. BSON supports various data types, including strings, numbers, arrays, and nested documents. SQL databases store data in a tabular format with rows and columns, where each column has a specific data type.
38 | - No Joins: MongoDB does not support traditional SQL joins between collections. Instead, it promotes denormalization by embedding related data within a single document or referencing other documents using unique identifiers (IDs). This avoids the need for complex joins and can improve query performance. In SQL databases, joins are commonly used to combine data from multiple tables based on specified conditions.
39 | - Indexing: MongoDB provides flexible indexing options for collections to improve query performance. It supports indexing on specific fields or combinations of fields, allowing for efficient querying and sorting. SQL databases also have indexing capabilities, but the indexing mechanisms and syntax may differ between the various database systems.
40 | - Transaction Support: MongoDB introduced multi-document transactions in recent versions, allowing atomic operations within a single document or across multiple documents within a single replica set or a sharded cluster. Traditional SQL databases have long supported transactions that span multiple tables or even multiple databases.
41 | - Overall, collections in MongoDB and tables in SQL databases serve as containers for organizing related data. However, MongoDB's collections offer schema flexibility, document-based storage, denormalization approaches, and indexing mechanisms that differentiate them from traditional SQL tables. These features make MongoDB well-suited for handling dynamic and evolving data structures and provide greater agility in development.
42 |
43 |
44 |
45 |
46 |
47 | ## What is the purpose of an index in MongoDB? How does it improve query performance?
48 | - In MongoDB, an index is a data structure that improves the efficiency of query execution by allowing the database to locate and retrieve data more quickly. Indexes are created on specific fields within a collection to facilitate faster data access.
49 | - The main purpose of an index in MongoDB is to speed up query performance by reducing the number of documents that need to be examined during a query execution. Without an index, MongoDB would have to scan every document in a collection to find the desired data, which can be time-consuming and inefficient, especially with large datasets.
51 | - When an index is created on a field or combination of fields, MongoDB creates a separate data structure that organizes the indexed field values in a way that allows for faster data retrieval. This data structure is typically a B-tree or a variation of it.
52 | - Here's how an index improves query performance in MongoDB:
53 | - Faster Data Retrieval: With an index, MongoDB can use the index data structure to locate the specific documents that match the query criteria. Instead of scanning the entire collection, the database can quickly identify the subset of documents that need to be examined, significantly reducing the time taken to retrieve the data.
54 | - Reduced Disk I/O: By using an index, MongoDB can read smaller portions of data from disk, resulting in reduced disk I/O operations. This helps to minimize the physical read operations required to fetch the relevant documents, improving overall query performance.
55 | - Efficient Sorting: Indexes can also improve the performance of sorting operations. When an index is created on the field being sorted, MongoDB can leverage the index's sorted order to satisfy the sorting request, avoiding the need for costly in-memory or disk-based sorting operations.
56 | - Covered Queries: MongoDB can perform what is known as a covered query when all the fields required by a query are included in an index. In this case, the database can satisfy the query solely using the index, avoiding the need to access the actual documents. Covered queries can be significantly faster as they eliminate the overhead of fetching and processing the actual document data.
57 | - It's important to note that while indexes can greatly enhance query performance, they do come with some overhead in terms of storage space and the impact on write operations. Therefore, it's crucial to create indexes judiciously, considering the specific query patterns and balancing the trade-off between query performance improvement and the cost of maintaining the indexes during write operations.
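58 | - As a rough illustration (collection and field names are hypothetical), an index can be created in the shell and its effect checked with explain():
59 | ```javascript
60 | // Create an ascending index on the email field.
61 | db.users.createIndex({ email: 1 });
62 | 
63 | // explain() shows whether the query planner used the index:
64 | // an IXSCAN stage means an index scan instead of a full COLLSCAN.
65 | db.users.find({ email: "asha@example.com" }).explain("executionStats");
66 | ```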
58 |
59 |
60 |
61 |
62 | ## Describe the difference between embedded documents and referenced documents in MongoDB.
63 | - In MongoDB, there are two common approaches for representing relationships between documents: embedded documents and referenced documents. Here's an explanation of the difference between these two approaches:
64 | - Embedded Documents:
65 | - In the embedded documents approach, related data is nested within the same document.
66 | - It involves directly embedding one document within another as a subdocument or subfield.
67 | - Embedded documents allow you to store related data together, providing denormalized and self-contained representations.
68 | - This approach is suitable when the related data is frequently accessed together and does not need to be queried or modified independently.
69 | - Example: A blog post document may contain an embedded array of comments, where each comment is a subdocument with fields like author, content, and timestamp.
70 | - Referenced Documents:
71 | - In the referenced documents approach, relationships between documents are established by referring to other documents' identifiers (IDs).
72 | - Rather than embedding the entire related document, a reference to the related document's ID is stored within the referring document.
73 | - Referenced documents allow for more flexibility and scalability as related data can be stored in separate collections.
74 | - This approach is suitable when the related data is large, subject to frequent updates, or when the relationships are one-to-many or many-to-many.
75 | - Example: In a social media application, a user document may contain a reference to another user's ID as a friend or follower.
76 | - Here are some considerations when choosing between embedded and referenced documents (a short schema sketch follows this list):
77 | - Data Access Patterns: Consider how often the related data needs to be accessed together or separately. Embedded documents provide fast and efficient retrieval of related data in a single query, while referenced documents require additional queries to fetch the referenced data.
78 |
79 | - Data Consistency and Updates: Embedded documents ensure atomic updates within a single write operation. With referenced documents, updating related data might require multiple write operations and careful handling of consistency.
80 |
81 | - Data Size and Growth: Embedded documents can lead to document size inflation if the related data is large or there are many relationships. Referenced documents can handle large or growing data more effectively, as each document remains smaller and more focused.
82 |
83 | - Query Flexibility: Referenced documents offer more flexibility in querying and aggregating related data across multiple collections, enabling complex join-like operations. Embedded documents are more limited in querying as related data is contained within the document.
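84 | - A minimal Mongoose sketch of both approaches (schema and model names are made up):
85 | ```javascript
86 | const mongoose = require('mongoose');
87 | const { Schema } = mongoose;
88 | 
89 | // Embedded: comments live inside the blog post document.
90 | const postSchema = new Schema({
91 |   title: String,
92 |   comments: [{ author: String, content: String, timestamp: Date }]
93 | });
94 | 
95 | // Referenced: the post stores only the author's ObjectId.
96 | const refPostSchema = new Schema({
97 |   title: String,
98 |   author: { type: Schema.Types.ObjectId, ref: 'User' }
99 | });
100 | 
101 | const Post = mongoose.model('Post', postSchema);
102 | const RefPost = mongoose.model('RefPost', refPostSchema);
103 | ```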
84 |
85 |
86 |
87 |
88 | ## How does MongoDB handle transactions and maintain data consistency?
89 | - MongoDB introduced multi-document transactions starting from version 4.0, allowing for ACID (Atomicity, Consistency, Isolation, Durability) transactions within a single replica set or a sharded cluster. Here's how MongoDB handles transactions and maintains data consistency:
90 | - Atomicity: MongoDB transactions guarantee atomicity, meaning that either all the operations within a transaction are applied successfully, or none of them are applied at all. If any operation fails within a transaction, all changes made within that transaction are rolled back, ensuring that the data remains in a consistent state.
91 | - Consistency: MongoDB enforces data consistency within a transaction by ensuring that the database state transitions from one valid state to another valid state. This is achieved by maintaining strict isolation between concurrent transactions, preventing them from interfering with each other's data.
92 | - Isolation: MongoDB provides isolation by allowing transactions to operate independently of each other. Transactions are isolated from each other until they are committed, ensuring that each transaction sees a consistent snapshot of the data during its execution. This prevents issues such as dirty reads, non-repeatable reads, and write skew.
93 | - Durability: MongoDB ensures durability by persisting transactional data to disk. Once a transaction is committed, its changes are durable and will survive subsequent failures, system restarts, or crashes. This ensures that the data remains intact and recoverable.
94 | - Document-Level Locking: MongoDB uses document-level locking to ensure data consistency during concurrent operations. This means that different documents can be modified simultaneously by multiple transactions as long as they do not have conflicting writes. This allows for high concurrency while maintaining data integrity.
95 | - It's important to note that transactions in MongoDB are subject to certain limitations and considerations, such as the need for transactions to operate within a single replica set or sharded cluster. Also, while MongoDB supports multi-document transactions, it's generally recommended to design data models that minimize the need for frequent and complex transactions, as they can impact performance and scalability.
96 | - By providing ACID transactions, MongoDB offers developers the ability to maintain data consistency and integrity even in complex, multi-step operations, ensuring the reliability of critical business processes.
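97 | - A minimal transaction sketch with Mongoose (assumes an async context, a connection to a replica set, and a hypothetical Account model):
98 | ```javascript
99 | const session = await mongoose.startSession();
100 | try {
101 |   await session.withTransaction(async () => {
102 |     // Both updates commit together, or neither is applied.
103 |     await Account.updateOne({ _id: fromId }, { $inc: { balance: -100 } }, { session });
104 |     await Account.updateOne({ _id: toId }, { $inc: { balance: 100 } }, { session });
105 |   });
106 | } finally {
107 |   await session.endSession();
108 | }
109 | ```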
97 |
98 |
99 |
100 |
101 |
102 | ## What is sharding in MongoDB, and how does it contribute to scalability?
103 | - Sharding in MongoDB is a technique used to horizontally partition data across multiple machines or servers, enabling the distribution of data and workload across a cluster. It is a key feature that contributes to the scalability and performance of MongoDB. Here's an explanation of sharding and its benefits:
104 | - Horizontal Data Partitioning: Sharding involves dividing a large MongoDB dataset into smaller subsets called shards. Each shard represents a separate database instance or server that stores a portion of the data. By distributing the data horizontally across multiple shards, MongoDB can handle large datasets that may exceed the storage capacity or performance capabilities of a single machine.
105 | - Data Distribution and Load Balancing: MongoDB's sharding mechanism automatically distributes incoming data across the shards based on a shard key. The shard key is a field or set of fields chosen to determine how data is distributed. This ensures that data is evenly spread across the shards, preventing hotspots and balancing the workload. MongoDB's balancer process continuously monitors and migrates data between shards to maintain an even distribution.
106 | - Scalability: Sharding allows MongoDB to scale horizontally by adding more shards to the cluster as data and workload increase. As the number of shards grows, the system can handle larger datasets, higher query throughput, and increased concurrent operations. This horizontal scalability allows for linear growth in storage capacity and performance by simply adding more servers to the cluster.
107 | - Query Performance: Sharding can significantly improve query performance. When a query is issued, MongoDB's query router (mongos) routes the query to the relevant shards based on the shard key. This allows the query to be executed in parallel across multiple shards, enabling distributed query processing and faster response times.
108 | - High Availability and Fault Tolerance: Sharding in MongoDB also contributes to high availability and fault tolerance. Each shard can be replicated as a replica set, ensuring data redundancy and automatic failover in case of a shard failure. The distributed nature of sharding reduces the impact of a single shard or server failure on the overall system, providing better resilience.
109 | - It's important to note that sharding in MongoDB requires careful planning and consideration of the data model, shard key selection, and query patterns. The choice of shard key is crucial as it determines the distribution of data and the effectiveness of load balancing. Additionally, sharding introduces some complexity in managing data distribution, shard configuration, and balancing operations.
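110 | - For example, sharding is enabled in the mongo shell roughly like this (database, collection, and shard key are hypothetical):
111 | ```javascript
112 | // Enable sharding for the database, then shard the collection
113 | // on a hashed key for even data distribution across shards.
114 | sh.enableSharding("mydb");
115 | sh.shardCollection("mydb.users", { userId: "hashed" });
116 | ```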
110 |
111 |
112 |
113 |
114 | ## Explain the process of data replication in MongoDB and how it ensures high availability.
115 | - Data replication in MongoDB refers to the process of maintaining multiple copies of data across different servers or nodes within a replica set. Replication provides high availability, data redundancy, and fault tolerance. Here's an explanation of how data replication works in MongoDB and how it ensures high availability:
116 | - Replica Set: In MongoDB, a replica set is a group of MongoDB instances that host the same data set. A replica set typically consists of a primary node and one or more secondary nodes. The primary node handles all write operations and serves read operations as well.
117 | - Replication Process: The replication process in MongoDB follows a primary-secondary model. Initially, one node is elected as the primary node, and the other nodes become secondary nodes. The primary node receives write operations and applies them to its data set. It then replicates the changes to the secondary nodes through an asynchronous replication mechanism.
118 | - Replication Oplog: MongoDB uses an operation log (oplog) to record all write operations performed on the primary node. The oplog is a capped collection that acts as a circular buffer, storing a rolling window of operations for a specified period. The secondary nodes continuously pull and apply these operations from the primary's oplog to synchronize their data.
119 | - Automatic Failover: In the event of a primary node failure, MongoDB automatically triggers a process known as failover. During failover, one of the secondary nodes is elected as the new primary by a consensus mechanism. The new primary takes over the write operations and continues serving read operations. This automatic failover ensures high availability and continuous operation even in the presence of node failures.
120 | - Data Redundancy: By replicating data across multiple nodes, MongoDB ensures data redundancy. If a primary node fails, one of the secondary nodes can take its place and continue serving data. This redundancy minimizes the risk of data loss and allows the system to recover quickly from failures.
121 | - Consistency and Durability: MongoDB ensures data consistency and durability during replication through write concerns. With a majority write concern, the primary node waits until the operation has been replicated to a majority of the replica set members before acknowledging the write. This ensures that the write is committed on a majority of the nodes, providing data consistency and durability guarantees.
122 | - Replica Set Monitoring: MongoDB provides built-in tools and features for monitoring replica sets, including automatic monitoring of replica set health, status, and synchronization lag. These monitoring capabilities help identify any replication issues or performance bottlenecks and allow for proactive management and maintenance of the replica set.
123 | - By implementing data replication through replica sets, MongoDB ensures high availability, data redundancy, and fault tolerance. The automatic failover mechanism and continuous synchronization of data between nodes enable the system to maintain uptime, recover from failures, and provide reliable access to data even in the face of node or server outages.
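124 | - As an illustration, the durability guarantee described above corresponds to a majority write concern (shell-style sketch; the collection is hypothetical):
125 | ```javascript
126 | // The write is acknowledged only after a majority of
127 | // replica set members have recorded it.
128 | db.orders.insertOne(
129 |   { item: "book", qty: 1 },
130 |   { writeConcern: { w: "majority", wtimeout: 5000 } }
131 | );
132 | ```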
124 |
125 |
126 |
127 |
128 |
129 | ## What is the Aggregation Framework in MongoDB? How does it differ from regular queries?
130 | - The Aggregation Framework in MongoDB is a powerful tool that allows for data processing and analysis in a flexible and efficient manner. It provides a set of operators and stages that enable users to perform complex data transformations, aggregations, and computations within MongoDB. Here's an explanation of the Aggregation Framework and how it differs from regular queries:
131 | - Data Transformation and Analysis: The Aggregation Framework is designed to handle more complex data processing tasks beyond the capabilities of regular queries. It allows for tasks such as grouping, filtering, sorting, joining, and performing mathematical computations on the data. This makes it suitable for tasks like generating reports, extracting insights, and performing advanced analytics within MongoDB.
132 | - Pipeline-Based Processing: The Aggregation Framework operates using a pipeline-based processing model. A pipeline is a sequence of stages, where each stage performs a specific operation on the data. Stages can include operations like filtering, transforming documents, grouping, sorting, projecting specific fields, and applying mathematical operations. The output of one stage becomes the input for the next stage, allowing for a series of data transformations to be performed in a controlled manner.
133 | - Rich Set of Operators and Stages: The Aggregation Framework provides a wide range of operators and stages to manipulate and process data. Some of the commonly used stages include $match for filtering documents, $group for grouping documents based on specific fields, $project for reshaping documents and including or excluding fields, $sort for sorting documents, and $sum for calculating sums. These stages can be combined and arranged in various ways to achieve the desired data transformation or aggregation.
134 | - Performance and Efficiency: The Aggregation Framework is optimized for performance and efficiency. It leverages MongoDB's native query execution engine and indexes to process data in a highly optimized manner. By performing computations and transformations directly in the database, it minimizes data transfer between the database and the application, reducing network latency and improving overall performance.
135 | - Flexibility and Expressiveness: The Aggregation Framework offers a high level of flexibility and expressiveness. It allows for complex data transformations and aggregations that can involve multiple fields, conditions, and calculations. It supports a rich set of operators and expressions to manipulate and analyze the data, giving users the ability to tailor the aggregation pipeline to their specific requirements.
136 | - In contrast, regular queries in MongoDB (e.g., find) are primarily used for retrieving and filtering data based on specific criteria. They are suitable for simple read operations but may not provide the flexibility or computational capabilities required for more complex data transformations or aggregations. Regular queries are focused on retrieving data from the database, whereas the Aggregation Framework enables more advanced data processing, analysis, and computation within the database itself.
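137 | - A small pipeline sketch combining the stages mentioned above (collection and field names are made up):
138 | ```javascript
139 | // Average order total per customer, considering only orders over 100,
140 | // sorted from highest to lowest average.
141 | db.orders.aggregate([
142 |   { $match: { total: { $gt: 100 } } },
143 |   { $group: { _id: "$customerId", avgTotal: { $avg: "$total" } } },
144 |   { $sort: { avgTotal: -1 } }
145 | ]);
146 | ```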
137 |
138 |
139 |
140 | ## How does MongoDB handle concurrency and locking?
141 | - MongoDB handles concurrency and locking using a mechanism known as Multi-Version Concurrency Control (MVCC). MVCC allows multiple concurrent operations to occur on the database while maintaining data consistency.
142 | - MongoDB's default storage engine, WiredTiger, implements MVCC internally: when a document is modified, the storage engine writes a new version of the document rather than overwriting it in place, while readers continue to see the snapshot that was current when their operation began. The document itself keeps its unique _id; versioning happens at the storage layer, not by creating new documents.
143 | - When multiple operations run concurrently on the same document, readers work against a consistent snapshot while writers produce new versions. This ensures that reads do not block writes and writes do not block reads, avoiding conflicts.
144 | - To maintain consistency, MongoDB uses a form of optimistic concurrency control. When a write operation is performed, MongoDB checks if any other concurrent write operations have modified the document since the operation began. If conflicts are detected, MongoDB can either abort the operation or retry it, depending on the specified behavior.
145 | - MongoDB does not use coarse, collection-wide locks for read and write operations. Instead, the storage engine applies fine-grained concurrency control at the level of individual documents. This allows for greater concurrency, as different documents can be accessed and modified simultaneously.
146 | - Overall, MongoDB's approach to concurrency and locking through MVCC allows for high-performance and scalable operations in multi-threaded and distributed environments. It enables concurrent access to the database while maintaining data consistency and minimizing conflicts.
147 |
148 |
149 |
150 | ## What is a pipeline?
151 | - In the context of MongoDB's Aggregation Framework, a pipeline refers to a sequence of stages that are applied to the data in order to perform data transformations, aggregations, and computations. The pipeline is constructed using various stages, and each stage performs a specific operation on the data, generating an intermediate result that serves as the input for the next stage in the pipeline.
152 | - Each stage in the pipeline is represented by an object, and these stages are arranged in a specific order to define the desired data processing flow. The output of one stage becomes the input for the next stage, allowing for a series of transformations to be applied to the data.
153 |
154 |
155 |
156 | ## What is Mongoose?
157 | - Mongoose is an Object-Document Mapping (ODM) library for Node.js and MongoDB. It provides a higher-level abstraction layer on top of the MongoDB Node.js driver, simplifying the interaction with MongoDB databases and making it easier to work with data in a MongoDB environment. Here's an explanation of what Mongoose is and its key features:
158 | - Object-Document Mapping (ODM): Mongoose is an ODM library, which means it allows developers to define JavaScript objects (schemas) that map to MongoDB documents. It provides a way to model and structure data using JavaScript classes and objects, making it easier to work with MongoDB in an object-oriented manner.
159 | - Schema Definition: Mongoose allows developers to define schemas that represent the structure of the data stored in MongoDB. Schemas define the fields, data types, validation rules, and default values for documents. By defining schemas, Mongoose provides a clear structure and consistency for data in MongoDB, similar to how tables and columns are defined in relational databases.
160 | - Data Validation: Mongoose provides built-in data validation capabilities. Developers can specify validation rules for each field in a schema, ensuring that the data stored in MongoDB meets the defined constraints. Mongoose supports a wide range of validation options, including required fields, data type validation, custom validation functions, and more.
161 | - Middleware and Hooks: Mongoose offers middleware and hooks that allow developers to define pre and post-processing functions for various operations. Middleware functions can be executed before or after specific database operations, such as saving or updating documents, allowing for custom logic and additional processing steps.
162 | - Query Building and Execution: Mongoose provides a query builder API that simplifies the creation and execution of database queries. It offers a chainable syntax to build queries using methods like find(), sort(), limit(), and more. Mongoose also supports advanced querying features, including query population (referencing and populating related documents) and query hooks.
163 | - Integration with Node.js and Express: Mongoose seamlessly integrates with Node.js and popular web frameworks like Express.js. It provides features like connection management, middleware support, and request-scoped database connections, making it easy to work with MongoDB in Node.js-based applications.
164 | - In simple terms, Mongoose is a library that helps Node.js developers work with MongoDB, a popular database. It makes it easier to define the structure of data (schemas), validate data, and perform operations on the database. Mongoose simplifies the interaction between Node.js and MongoDB, allowing developers to write code more easily and efficiently when working with data.
165 |
166 |
167 | ## Explain why mongoose does not return a promise but has a .then
168 | - Mongoose queries are not full native Promises by default; they are "thenables": objects that expose a .then() method. Historically, Mongoose followed a callback-based approach for handling asynchronous operations, and the thenable interface was layered on top of it.
169 | - In simpler terms, when you perform an operation with Mongoose, such as querying the database, you can either pass a callback function (in older versions) or chain .then() on the returned query as if it were a Promise.
170 | - Calling .exec() on a query returns a genuine Promise, which is the recommended way to obtain one and works naturally with async/await.
171 | - So, while a Mongoose query is not itself a native Promise, its .then() method (and .exec()) lets you handle asynchronous operations in a familiar, Promise-like way.
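172 | - A short sketch of the different styles (the User model is hypothetical; the callback form only works in older Mongoose versions):
173 | ```javascript
174 | // Callback style (older Mongoose versions):
175 | User.findOne({ email: 'a@b.com' }, function (err, user) { /* ... */ });
176 | 
177 | // Thenable style: the query object exposes .then():
178 | User.findOne({ email: 'a@b.com' }).then(user => console.log(user));
179 | 
180 | // .exec() returns a genuine Promise, so it also works with await:
181 | const user = await User.findOne({ email: 'a@b.com' }).exec();
182 | ```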
172 |
173 | ## How do you create indexes with mongoose
174 | - In Mongoose, you can create indexes on fields in your MongoDB collections using the index option on a schema field, the schema.index() method, or the model's createIndexes() method.
175 | - Here's an example of how to create an index using Mongoose:
176 | - Define your Mongoose Schema, specifying the fields you want to index:
177 | ```javascript
178 | const mongoose = require('mongoose');
179 | const yourSchema = new mongoose.Schema({
180 | field1: { type: String, index: true },
181 | field2: { type: Number },
182 | });
183 | // Create the model
184 | const YourModel = mongoose.model('YourModel', yourSchema);
185 | ```
186 |
187 | - Save documents to the collection using the defined schema:
188 | ```javascript
189 | const document = new YourModel({
190 | field1: 'value1',
191 | field2: 123,
192 | });
193 | document.save(function(err, savedDocument) {
194 | if (err) {
195 | console.error(err);
196 | } else {
197 | console.log('Document saved successfully:', savedDocument);
198 | }
199 | });
200 | ```
201 |
202 | - Alternatively, you can trigger index builds directly using the createIndexes() method of the Mongoose model:
203 | ```javascript
204 | YourModel.createIndexes(function(err) {
205 | if (err) {
206 | console.error(err);
207 | } else {
208 | console.log('Indexes created successfully.');
209 | }
210 | });
211 | ```
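212 | - Compound or unique indexes can also be declared at the schema level with schema.index(); a short sketch on the same hypothetical schema (typically declared before compiling the model):
213 | ```javascript
214 | // Compound index on field1 (ascending) and field2 (descending).
215 | yourSchema.index({ field1: 1, field2: -1 });
216 | 
217 | // Unique index on field1.
218 | yourSchema.index({ field1: 1 }, { unique: true });
219 | ```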
212 |
213 |
--------------------------------------------------------------------------------
/DataBase/mongooseQuery.md:
--------------------------------------------------------------------------------
1 | #### mongoose Schema types
2 | https://mongoosejs.com/docs/schematypes.html
3 |
4 | #### mongoose general Queries:
5 | https://mongoosejs.com/docs/queries.html
6 |
7 |
8 | #### Aggregate examples:
9 | ```javascript
10 | Model.
11 | aggregate([{ $match: { age: { $gte: 21 }}}]).
12 | unwind('tags').
13 | exec();
14 | ```
15 |
16 |
17 | #### more Examples of aggregation:
18 | https://studio3t.com/knowledge-base/articles/mongodb-aggregation-framework/
19 | https://www.mongodb.com/docs/manual/core/aggregation-pipeline/
20 |
21 |
22 |
23 | #### more information of MongoDB
24 | https://interviewhandbook.notion.site/NoSQL-282749fd15d540ff8afee0cf1517ee18
25 |
26 |
27 | ## Explain why mongoose does not return a promise but has a .then
28 | - Mongoose is a library for working with MongoDB databases in Node.js. While most database operations in Mongoose are asynchronous, Mongoose does not return Promises by default.
29 | - Instead, Mongoose uses a callback function to handle the results of asynchronous operations. However, Mongoose also provides a .then() method that allows you to chain asynchronous operations together, similar to Promises.
30 | - So while Mongoose queries are not native Promises themselves, they are thenables: the .then() method lets you treat them like Promises, and calling .exec() returns a genuine Promise.
31 |
32 |
33 |
34 | ## What are pre and post hooks?
35 | - In Mongoose, pre and post hooks are functions that can be executed before or after certain operations are performed on a Mongoose model or document.
36 | - Pre hooks are functions that are executed before a specified operation is performed on a model or document. These functions are useful for performing additional validation, modifying data, or executing additional logic before an operation is executed. Pre hooks can be registered using the pre() method on a Mongoose schema.
37 | - Post hooks are functions that are executed after a specified operation is performed on a model or document. These functions are useful for logging or auditing data, executing additional logic, or modifying data after an operation is executed. Post hooks can be registered using the post() method on a Mongoose schema.
38 | - example
39 | ```javascript
40 | const mongoose = require('mongoose');
41 | const userSchema = new mongoose.Schema({ name: String, email: String });
42 | 
43 | // Pre hook: set timestamps before every save.
44 | userSchema.pre('save', function (next) {
45 |   const now = new Date();
46 |   this.updatedAt = now;
47 |   if (!this.createdAt) {
48 |     this.createdAt = now;
49 |   }
50 |   next();
51 | });
52 | 
53 | // Post hook: log after a document has been saved.
54 | userSchema.post('save', function (doc, next) {
55 |   console.log(`User ${doc.name} saved`);
56 |   next();
57 | });
58 | 
59 | const User = mongoose.model('User', userSchema);
60 | ```
54 |
55 |
56 | ## What are aggregation pipelines with mongoose?
57 |
58 | - Aggregation pipelines in Mongoose are a way to process and transform data from MongoDB collections using a sequence of stages.
59 | - An aggregation pipeline consists of a series of stages that are executed in a specific order to transform the input data. Each stage in the pipeline takes the input documents and performs some operation on them, passing the results to the next stage.
60 | - The stages in an aggregation pipeline can perform a wide range of operations, such as filtering, sorting, grouping, and transforming data. Some examples of pipeline stages include $match, $group, $sort, $project, and $lookup.
61 | - Aggregation pipelines in Mongoose are defined using the aggregate() method, which takes an array of pipeline stages as its argument. Once the pipeline is defined, you can execute it using the exec() method, which returns a Promise that resolves to an array of output documents.
62 | - Aggregation pipelines are a powerful tool for working with data in MongoDB collections, and can be used for a wide range of tasks such as data analysis, reporting, and business intelligence.
63 | ```javascript
64 | const Order = mongoose.model('Order', { total: Number });
65 | 
66 | // Group all orders together (_id: null) and compute the average total.
67 | Order.aggregate([{ $group: { _id: null, avgTotal: { $avg: "$total" } } }]).exec((err, result) => {
68 |   if (err) {
69 |     console.error(err);
70 |     return;
71 |   }
72 |   console.log(result);
73 | });
74 | ```
72 |
73 |
74 | ## How do you create indexes with mongoose
75 | - We can create indexes in Mongoose using the Model.ensureIndexes() method (deprecated in newer Mongoose versions in favor of Model.createIndexes(), which behaves the same way).
76 | - The ensureIndexes() method ensures that all defined indexes are created in the background. It takes a callback function that is called once all indexes have been created, or if an error occurs during the index creation process.
77 | - Example =>
78 | ```javascript
79 | const mongoose = require('mongoose');
80 | const userSchema = new mongoose.Schema({ name: String, email: String });
81 | userSchema.index({ email: 1 }, { unique: true }); // unique index on email
82 | const User = mongoose.model('User', userSchema);
83 | User.ensureIndexes((err) => {
84 |   if (err) { console.error(err); }
85 |   else { console.log('Indexes created successfully'); }
86 | });
87 | ```
84 |
85 |
86 |
87 |
88 | ## Mongoose basic query:
89 | - Mongoose is an Object-Data Modeling (ODM) library for Node.js and MongoDB, which provides a straightforward way to interact with MongoDB databases. It simplifies the process of defining schemas, creating models, and performing database operations.
90 |
91 | #### Connecting to MongoDB:
92 | ```javascript
93 | const mongoose = require("mongoose");
94 | const connect = mongoose.connect("mongodb://127.0.0.1:27017/db");
95 | module.exports = { connect };
96 | ```
96 |
97 | #### Defining a schema:
98 | ```javascript
99 | const mongoose = require("mongoose");
100 | const userSchema = new mongoose.Schema({
101 |   name: String,
102 |   email: String,
103 |   password: String,
104 | });
105 | const UserModel = mongoose.model("user", userSchema);
106 | module.exports = { UserModel };
107 | ```
107 |
108 | #### Creating a new document:
109 | ```javascript
110 | const user = new UserModel({ name: 'John Doe', email: 'john@example.com', password: 'secret' });
111 | await user.save();
112 | ```
112 |
113 | #### Finding documents:
114 | ```javascript
115 | let users = await UserModel.find(); // Find all documents
116 | let specificUser = await UserModel.findOne({ email }); // Find one document matching a condition
117 | let userById = await UserModel.findById(id).exec(); // Find the document with the given id
118 | ```
118 |
119 | #### Updating documents:
120 | ```javascript
121 | let filter = { key: value }; // criteria to find the document
122 | let update = { key: value }; // new values to apply
123 | let user = await UserModel.findOneAndUpdate(filter, update);
124 | // Find by id and update
125 | await UserModel.findByIdAndUpdate(id, update);
126 | ```
126 |
127 |
128 | #### delete document:
129 | ```javascript
130 | await UserModel.deleteOne({ email: 'abc@gmail.com' }); // delete the first document matching the condition
131 | await UserModel.deleteMany({ name: /Stark/, age: { $gte: 18 } }); // delete all documents matching the condition
132 | await UserModel.findByIdAndRemove(id); // find by id and remove
133 | await UserModel.findOneAndDelete(conditions); // find by condition and delete
134 | ```
134 |
135 | #### replace value:
136 | ```javascript
137 | const res = await Person.replaceOne({ _id: 24601 }, { name: 'xyz' });
138 | ```
138 |
139 |
140 |
141 | ## Aggregation example:
142 | ```javascript
143 | // 1) A simple $match stage:
144 | Model.aggregate([{ $match: { age: { $gte: 21 } } }]).exec();
145 | // 2) A $lookup stage added via the Aggregate helper:
146 | Model.aggregate().lookup({ from: 'users', localField: 'userId', foreignField: '_id', as: 'users' });
147 | ```
145 |
146 |
147 | ## populate:
148 |
149 | #### defining schema with ref
150 | ```javascript
151 | const mongoose = require('mongoose');
152 | const { Schema } = mongoose;
153 | 
154 | const personSchema = Schema({
155 |   _id: Schema.Types.ObjectId,
156 |   name: String,
157 |   age: Number,
158 |   stories: [{ type: Schema.Types.ObjectId, ref: 'Story' }]
159 | });
160 | 
161 | const storySchema = Schema({
162 |   author: { type: Schema.Types.ObjectId, ref: 'Person' },
163 |   title: String,
164 |   fans: [{ type: Schema.Types.ObjectId, ref: 'Person' }]
165 | });
166 | 
167 | const Story = mongoose.model('Story', storySchema);
168 | const Person = mongoose.model('Person', personSchema);
169 | ```
169 |
170 | #### Saving refs
171 | ```javascript
172 | const author = new Person({
173 |   _id: new mongoose.Types.ObjectId(),
174 |   name: 'Ian Fleming',
175 |   age: 50
176 | });
177 | 
178 | author.save(function(err) {
179 |   if (err) return handleError(err);
180 | 
181 |   const story1 = new Story({
182 |     title: 'Casino Royale',
183 |     author: author._id // assign the _id from the person
184 |   });
185 | 
186 |   story1.save(function(err) {
187 |     if (err) return handleError(err);
188 |   });
189 | });
190 | ```
190 |
191 |
192 | #### Population
193 | ```javascript
194 | const story = await Story.
195 |   findOne({ title: 'Casino Royale' }).
196 |   populate('author').
197 |   exec();
198 | ```
198 |
--------------------------------------------------------------------------------
/DataBase/sql.md:
--------------------------------------------------------------------------------
1 | ## What is SQL
2 | SQL stands for Structured Query Language. It is a programming language used for managing and manipulating relational databases.
3 | SQL allows users to define, manipulate, and retrieve data from databases.
4 | It provides a standard syntax and set of commands for creating, modifying, and querying databases, as well as performing various operations such as inserting, updating, and deleting data.
5 | SQL is widely used in database management systems (DBMS) like MySQL, Oracle, Microsoft SQL Server, and PostgreSQL.
6 |
7 |
8 |
9 | ## What is a relational database?
10 | A relational database is a type of database management system (DBMS) that organizes and stores data in a structured manner based on the relational model.
11 | In a relational database, data is organized into tables, which are composed of rows (also known as records) and columns (also known as fields).
12 | The tables in a relational database are related to each other through common attributes or keys.
13 | These relationships allow data to be linked and retrieved efficiently.
14 | The relational model enforces data integrity and provides mechanisms for data manipulation using SQL.
15 | Relational databases are widely used in various applications and industries due to their flexibility, scalability, and ability to handle complex data relationships.
16 | They provide a reliable and efficient way to store, retrieve, and manipulate structured data. Some popular relational database management systems include MySQL, Oracle Database, Microsoft SQL Server, and PostgreSQL.
17 |
18 |
19 | ## What is an RDBMS?
20 | RDBMS stands for Relational Database Management System. It refers to a software application or system that manages and controls a relational database.
21 | An RDBMS provides the tools and mechanisms for creating, organizing, and manipulating relational databases based on the relational model.
22 | The main components of an RDBMS include:
23 | Data Definition Language (DDL): It is used to define the structure of the database, including creating tables, specifying constraints, and defining relationships between tables.
24 | Data Manipulation Language (DML): It allows users to perform operations on the data stored in the database, such as inserting, updating, and deleting records. SQL is the most common language used for DML operations.
25 | Data Query Language (DQL): It enables users to retrieve data from the database by specifying queries using SQL. DQL is used to search, filter, and sort data based on specified criteria.
26 | Data Control Language (DCL): It includes commands for managing user access rights, security, and permissions on the database objects. DCL statements are used to grant or revoke privileges to users.
27 | Transaction Management: An RDBMS supports transaction management to ensure data integrity and consistency. Transactions group a set of database operations and provide mechanisms for atomicity, consistency, isolation, and durability (ACID properties).
28 | Some popular RDBMS products include MySQL, Oracle Database, Microsoft SQL Server, PostgreSQL, and SQLite. These systems are widely used for various applications that require reliable and efficient management of structured data.
29 |
30 |
31 |
32 | ## What are constraints?
33 | - In the context of databases, constraints are rules or conditions applied to the data in database tables to enforce data integrity and maintain consistency. Constraints define certain restrictions and requirements that the data must adhere to. They help ensure that the data stored in the database meets specific criteria and follows predefined rules.
34 | - Here are some commonly used constraints in SQL:
35 | - **Primary Key Constraint**: Ensures that a column or a combination of columns in a table uniquely identifies each record. It prevents duplicate or null values in the specified column(s).
36 | - **Foreign Key Constraint**: Establishes a relationship between two tables, where the values in a column of one table must match the values in the primary key column of another table. It ensures referential integrity.
37 | - **Unique Constraint**: Ensures that the values in a column or a combination of columns are unique across the table. It prevents duplicate values but allows null values.
38 | - **Not Null Constraint**: Specifies that a column cannot contain null values. It ensures that the specified column(s) must have a value.
39 | - **Check Constraint**: Defines a condition that each row in a table must satisfy. It allows you to define custom rules or conditions to restrict the values that can be inserted or updated in a column.
40 | - These constraints are defined when creating or altering a table in SQL. They help maintain data integrity, prevent data inconsistencies, and enforce business rules. When data is inserted, updated, or deleted in the tables, the constraints are checked to ensure the data meets the defined criteria, and if any violation occurs, the database system rejects the operation.
41 | - By using constraints, you can improve the quality and reliability of the data stored in the database, enforce data consistency, and minimize errors or inconsistencies in the data.
42 |
43 |
44 | ## What are primary keys and foreign keys?
45 | - In SQL, primary keys and foreign keys are used to establish relationships between tables in a relational database.
46 | - **Primary Key**:
47 | A primary key is a column or a set of columns in a table that uniquely identifies each record in the table. It serves as a unique identifier for each row. The primary key constraint ensures that the values in the primary key column(s) are unique and not null. Commonly, a primary key is defined when creating a table using the PRIMARY KEY constraint.
48 | CREATE TABLE students (
49 | id INT PRIMARY KEY,
50 | name VARCHAR(50),
51 | age INT
52 | );
53 | - **Foreign Key**:
54 | A foreign key is a column or a set of columns in a table that refers to the primary key of another table. It establishes a relationship between two tables, where the foreign key column(s) in one table references the primary key column(s) in another table. The foreign key constraint ensures referential integrity, meaning the values in the foreign key column(s) must match the values in the referenced primary key column(s) or be null.
55 | - CREATE TABLE courses (id INT PRIMARY KEY, name VARCHAR(50));
56 | CREATE TABLE enrollments (
57 | student_id INT,
58 | course_id INT,
59 | FOREIGN KEY (student_id) REFERENCES students(id),
60 | FOREIGN KEY (course_id) REFERENCES courses(id)
61 | );
62 |
63 |
64 |
65 | ## What is a table?
66 | - A table is a fundamental structure used to organize and store data in a relational database management system (RDBMS). A table consists of rows (also known as records or tuples) and columns (also known as fields or attributes).
67 | - Each row in a table represents a single instance or record, while each column represents a specific attribute or piece of information associated with the records. The columns are defined with specific data types, such as integers, strings, dates, or other types, to represent the kind of data they can store.
68 | - Tables are organized based on the relational model, which allows relationships to be established between tables through primary key and foreign key constraints. The relationships between tables enable efficient data retrieval, manipulation, and querying using Structured Query Language (SQL).
69 | - Tables form the backbone of a relational database, and data is organized and stored in a structured manner within these tables. They provide a logical and systematic approach to managing and accessing data, allowing for efficient data management and data-driven applications.
70 |
71 |
72 |
73 |
74 | ## What is the difference between ALTER and UPDATE in SQL?
75 | - **ALTER**:
76 | - The ALTER command is used to modify the structure of a database object, such as a table, view, or database itself. It allows you to add, modify, or drop columns, constraints, indexes, and other structural elements of the database objects. With ALTER, you can make changes to the schema of the database.
77 | - **UPDATE**:
78 | - The UPDATE command is used to modify the data within a table. It allows you to change the values of one or more columns in existing rows of a table based on specified conditions. UPDATE is used to update existing records with new values.
79 |
80 |
81 | ## How do you group by?
82 | - In SQL, the GROUP BY clause is used to group rows from a table based on one or more columns. It is often used in combination with aggregate functions to perform calculations on each group of rows. The GROUP BY clause helps summarize data and generate results based on common values in the specified columns.
83 | - SELECT column1, column2, ..., aggregate_function(column) FROM table GROUP BY column1, column2, ...;
84 |
85 |
86 |
87 | ## What are joins?
88 | - Joins in SQL are used to combine data from two or more tables based on a related column between them. Joins allow you to retrieve data from multiple tables as a single result set, enabling you to combine and analyze data from different sources.
89 | - There are different types of joins commonly used in SQL:
90 | - Inner Join: Returns only the rows where there is a match between the columns being joined in both tables. It excludes unmatched rows from the result set.
91 | - Left Join (or Left Outer Join): Returns all the rows from the left (or first) table and the matching rows from the right (or second) table. If there are no matches in the right table, NULL values are returned for the right table columns.
92 | - Right Join (or Right Outer Join): Returns all the rows from the right (or second) table and the matching rows from the left (or first) table. If there are no matches in the left table, NULL values are returned for the left table columns.
93 | - Full Join (or Full Outer Join): Returns all the rows from both tables and combines them based on the join condition. If there are no matches in either table, NULL values are returned for the columns from the table that has no match.
94 | - Cross Join (or Cartesian Join): Returns the Cartesian product of the rows from both tables, resulting in a combination of all rows from one table with all rows from the other table. It does not require a join condition.
95 |
96 |
97 | ## What are aggregate functions used for?
98 | - Aggregate functions in SQL are used for performing calculations on a set of rows and returning a single value as a result. These functions operate on a group of rows and produce a summarized output. Aggregate functions are commonly used in conjunction with the GROUP BY clause to perform calculations on grouped data.
99 | - Here are some commonly used aggregate functions in SQL:
100 | - **COUNT**: Returns the number of rows or non-null values in a specified column.
101 | - **SUM:** Calculates the sum of the values in a specified column.
102 | - **AVG**: Calculates the average (mean) of the values in a specified column.
103 | - **MIN**: Returns the minimum value in a specified column.
104 | - **MAX**: Returns the maximum value in a specified column.
105 | - **GROUP_CONCAT**: Concatenates values from multiple rows into a single string, optionally separated by a specified delimiter.
106 | - These aggregate functions allow you to obtain useful insights and summaries from data. For example, you can use the COUNT function to count the number of sales transactions, the SUM function to calculate the total revenue, or the AVG function to determine the average salary of employees.
107 |
108 |
109 | ## What is the purpose of window functions?
110 | - Window functions in SQL are used to perform calculations across a set of rows that are related to the current row. They provide a way to perform calculations and retrieve results based on a specific window or subset of rows, rather than the entire table. Window functions operate on a group of rows called a window, which is defined using the OVER clause.
111 | - The purpose of window functions is to provide more advanced and flexible calculations compared to traditional aggregate functions. While aggregate functions return a single value for a group of rows, window functions return a value for each row in the result set, based on the specified window.
112 | - Here are some common use cases and purposes of window functions:
113 | - Calculating Aggregates: Window functions allow you to calculate aggregate functions (such as SUM, AVG, MIN, MAX) over a specific window of rows, rather than the entire table. This can be useful for calculating running totals, moving averages, or cumulative sums.
114 | - Ranking and Ordering: Window functions enable ranking, row numbering, and ordering of rows within a window. You can assign ranks to rows based on certain criteria, retrieve the top N records, or calculate percentiles.
115 | - Analytical Functions: Window functions provide analytical capabilities for data analysis and reporting. You can calculate differences between rows, identify peaks or valleys in a time series, or compare values within a specific range.
116 | - Partitioning Data: Window functions allow you to partition or group data within a window based on one or more columns. This allows you to perform calculations separately for each partition, providing insights and comparisons within subsets of data.
117 |
118 |
119 |
--------------------------------------------------------------------------------
/DataBase/sqlQueries.md:
--------------------------------------------------------------------------------
1 | ### Sql general queries:
2 |
6 | #### Show Databases
7 | SHOW DATABASES;
8 |
9 | #### Create Database
10 | CREATE DATABASE db_name;
11 |
12 | #### Work on a database
13 | USE db_name;
14 |
15 | #### Show Tables
16 | SHOW TABLES;
17 | 
18 | #### Create Table
19 | CREATE TABLE students (name varchar(255), email varchar(50), batch varchar(20));
17 |
18 | #### Table Information
19 | DESCRIBE students;
20 |
21 |
22 | #### Delete Table
23 | DROP TABLE students;
24 |
25 |
26 | #### Reset Table Data
27 | TRUNCATE TABLE students;
28 |
29 | #### Delete Database
30 | DROP DATABASE db_name;
31 |
32 |
33 | #### Create Data
34 | INSERT INTO students (name, email, batch) VALUES ("e", "e@gmail.com", "xyz");
35 |
36 |
37 | #### Read Data
38 | SELECT * FROM students;
39 | SELECT name FROM students where name = 'e';
39 |
40 |
41 | #### Update Data
42 | UPDATE students SET batch = "Cohort-1" where name = 'e';
43 |
44 |
45 | #### Delete Data
46 | DELETE FROM students;
47 | DELETE FROM students where name = 'e';
48 |
49 |
50 | ### WHERE CLAUSE
51 | - Get all the users
52 | ``SELECT * FROM user_data;``
53 |
54 | - Get all the users where gender is Male
55 | ``SELECT * FROM user_data WHERE gender = "Male";``
56 |
57 | - Get all the users where gender is not Male
58 | ``SELECT * FROM user_data WHERE NOT gender = "Male";``
59 | ``SELECT * FROM user_data WHERE gender != "Male";``
60 |
61 | - Get all the users where gender is Male and language is Hindi
62 | ``SELECT * FROM user_data WHERE gender = "Male" and language = "Hindi";``
63 |
64 | - Get all the users where gender is Male and language is not Hindi
65 | ``SELECT * FROM user_data WHERE gender = "Male" and NOT language = "Hindi";``
66 | ``SELECT * FROM user_data WHERE gender = "Male" and language != "Hindi";``
67 |
68 | - Get all the users where shirt size is L or XL
69 | ``SELECT * FROM user_data WHERE shirt_size = "L" or shirt_size = "XL";``
70 | ``SELECT * FROM user_data WHERE shirt_size IN ("L", "XL");``
71 |
72 | - Combining multiple conditions
73 | ``SELECT * FROM user_data WHERE (gender = "Male" and shirt_size = 'L') OR (gender = "Female" and shirt_size = 'M');``
74 | ``SELECT * FROM user_data WHERE ((gender = "Male" and shirt_size = 'L') OR (gender = "Female" and shirt_size = 'M')) AND language = "English";``
75 | ``SELECT * FROM user_data WHERE ((gender = "Male" and shirt_size = 'L') OR (gender = "Female" and shirt_size = 'M')) AND language IN ("English", "Hindi");``
76 |
77 |
78 |
79 | ### WHERE OPERATORS
80 | ``SELECT * FROM students_marks WHERE maths > 50;``
81 | ``SELECT * FROM students_marks WHERE science >= 75;``
82 | ``SELECT * FROM students_marks WHERE english < 40;``
83 | ``SELECT * FROM students_marks WHERE maths <= 50;``
84 |
85 | - Both the starting and ending values are included
86 | ``SELECT * FROM students_marks WHERE maths BETWEEN 50 AND 75;``
87 |
88 |
89 |
90 | ### COUNT FUNCTION
91 | ``SELECT COUNT(id) FROM students_marks WHERE gender = 'Male';``
92 | ``SELECT COUNT(id) FROM students_marks WHERE maths >= 75;``
93 |
94 |
95 | ### ORDERING RESULTS
96 | ``SELECT * FROM students_marks WHERE gender = 'Male' ORDER BY maths ASC;``
97 | ``SELECT * FROM students_marks WHERE gender = 'Female' ORDER BY science DESC;``
98 | ``SELECT * FROM students_marks ORDER BY maths ASC, science DESC;``
99 |
100 |
101 | ### LIMIT & OFFSET
102 | ``SELECT * FROM students_marks LIMIT 10;``
103 | - The first value is the offset (skip) count; the second value is the limit count
104 | ``SELECT * FROM students_marks LIMIT 20,10;``
105 |
106 |
107 | ### SUM FUNCTION
108 | ``SELECT SUM(maths) FROM students_marks WHERE gender = 'Male';``
109 | ``SELECT SUM(maths) FROM students_marks WHERE maths >= 75;``
110 |
111 |
112 |
113 | ### AVG FUNCTION
114 | ``SELECT AVG(maths) FROM students_marks WHERE gender = 'Male';``
115 | ``SELECT AVG(maths) FROM students_marks WHERE maths >= 75;``
116 |
117 |
118 | ### LIKE
119 | - % - Any number of characters
120 | - _ - Single character
121 | ``SELECT * FROM students_marks WHERE name LIKE 'a%';``
122 | ``SELECT * FROM students_marks WHERE name LIKE '%a';``
123 | ``SELECT * FROM students_marks WHERE name LIKE '_a%';``
124 | ``SELECT * FROM students_marks WHERE name LIKE '%a_';``
125 | ``SELECT * FROM students_marks WHERE name LIKE '___a%';``
126 |
127 |
128 |
129 | ### DISTINCT
130 | ``SELECT DISTINCT company FROM employee_salary;``
131 | ``SELECT DISTINCT company, department FROM employee_salary;``
132 | ``SELECT COUNT(DISTINCT company) FROM employee_salary;``
133 |
134 |
135 | ### SUM & AVG
136 | ``SELECT AVG(maths) FROM students_marks;``
137 | ``SELECT AVG(maths), AVG(science) FROM students_marks;``
138 | ``SELECT SUM(salary) FROM employee_salary;``
139 |
140 | ### OPERATIONS
141 | ``SELECT (maths+science+english) AS total_marks FROM students_marks;``
142 | ``SELECT (maths+science+english)/3 AS avg_marks FROM students_marks;``
143 |
144 |
145 | ### Min & Max
146 | ``SELECT MIN(maths) FROM students_marks;``
147 | ``SELECT MAX(science) FROM students_marks;``
148 | ``SELECT MIN(maths) FROM students_marks WHERE class = "V";``
149 | ``SELECT MAX(science) FROM students_marks WHERE gender = "Female";``
150 |
151 |
152 |
153 | ### **Grouping**
154 | #### COUNT, MAX, MIN, SUM, AVG
155 | ``SELECT COUNT(*) FROM students_marks GROUP BY gender;``
156 | ``SELECT COUNT(*), class, section FROM students_marks GROUP BY class, section;``
157 | ``SELECT AVG(maths), class, section FROM students_marks GROUP BY class, section;``
158 |
159 |
160 |
161 | ### Having
162 | - Filters on aggregation
163 |
164 | ``SELECT COUNT(*) as num,gender FROM students_marks GROUP BY gender HAVING num < 400;``
165 | ``SELECT COUNT(*) as num, class, section FROM students_marks GROUP BY class, section HAVING num > 20;``
166 | ``SELECT AVG(maths) as maths_avg, class, section FROM students_marks GROUP BY class, section HAVING maths_avg > 50;``
167 |
168 |
169 |
170 | ### Primary Key & Auto Increment
171 | ``CREATE TABLE users (id int NOT NULL AUTO_INCREMENT,name varchar(255) NOT NULL,PRIMARY KEY (id));``
172 |
173 |
174 |
175 |
176 | ### Relationships and Joins
177 | #### joins
178 | ``SELECT * FROM blog JOIN user ON blog.user_id = user.id;``
179 | #### RIGHT JOIN
180 | ``SELECT * FROM blog RIGHT JOIN user ON blog.user_id = user.id;``
181 | #### LEFT JOIN
182 | ``SELECT * FROM blog LEFT JOIN user ON blog.user_id = user.id;``
183 |
184 | #### JOIN with as
185 | ``SELECT COUNT(*) FROM blog_category as bb right join category as cc on bb.category_id = cc.id where bb.blog_id = 1;``
186 |
187 |
188 |
189 | #### Multiple JOINS
190 | ``SELECT * FROM blog join blog_category ON blog.id = blog_category.blog_id JOIN category ON blog_category.category_id = category.id;``
191 | ``SELECT aa.title, cc.name, uu.name FROM blog as aa join blog_category as bb ON aa.id = bb.blog_id JOIN category as cc ON bb.category_id =cc.id RIGHT JOIN user as uu ON aa.user_id = uu.id;``
192 |
193 |
194 |
195 |
196 |
197 |
198 | #### For Practice
199 | https://sqlzoo.net/wiki/SQL_Tutorial
--------------------------------------------------------------------------------
/Dsa/Readme.md:
--------------------------------------------------------------------------------
1 | # Data structure and Algorithm
2 |
3 | ## Data Structure
4 | - In computer science, a data structure is a way of organizing, storing, and manipulating data in a computer so that it can be used efficiently. It provides a way to manage and organize data in a specific way so that it can be easily accessed and modified. Data structures can be implemented using programming languages and can be used for a variety of purposes, including searching, sorting, and indexing data. Some common examples of data structures include arrays, linked lists, stacks, queues, trees, and graphs.
5 |
6 | ## Algorithm
7 | - An algorithm is a set of instructions or a procedure for solving a problem or performing a specific task. It is a sequence of well-defined steps or rules that describe how to perform a computation or solve a problem. Algorithms are used in programming and computer science to design software systems and solve complex computational problems. A good algorithm is efficient, accurate, and solves the problem in the most optimal way possible.
8 |
9 |
10 | | s.n. | Data Structure (in-built) | Data Structure (not in-built) | Algorithm | Techniques |
11 | | :----: | :---------------------: | :---------------------------: | :--------: | :---------: |
12 | | 1 | Array | Stack | Sliding window | Backtracking: N-Queens Problem, Subset Sum Problem, Sudoku Solver |
13 | | 2 | Object | Queue | Two pointer | Recursion: a technique where a function solves a problem by calling itself on smaller subproblems |
14 | | 3 | Set | Linked List | Searching Algorithms: Linear Search, Binary Search | |
15 | | 4 | | Tree | Sorting Algorithms: Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort | |
16 | | 5 | | Graph | Graph Algorithms: Depth-First Search (DFS), Breadth-First Search (BFS), Dijkstra's Algorithm, Bellman-Ford Algorithm, Floyd-Warshall Algorithm | |
17 | | 6 | | Hash Table | Greedy Algorithms: Coin Change Problem, Fractional Knapsack Problem, Activity Selection Problem | |
18 | | 7 | | | | Dynamic Programming: Fibonacci Series, Knapsack Problem, Longest Common Subsequence, Longest Increasing Subsequence |
21 |
22 |
23 |
24 | ### Array
25 |
26 | - In JavaScript, an array is a data structure that stores a collection of elements, such as numbers or strings, in a single variable. Arrays are one of the most commonly used data types in JavaScript and are frequently used to store lists of data.
27 | - You can access the elements of an array by their index, which is a numerical value that represents their position in the array. In JavaScript, array indices start at 0, so the first element of an array is at index 0, the second element is at index 1, and so on.
28 | - var myArray = [element1, element2, ..., elementN];
29 | - time complexity
30 |
31 | | Method | Average case | worst case | use |
32 | | :-----: | :--------: | :---------: | :---: |
33 | | Accessing an element by index | O(1) | O(1) | |
34 | | Searching for an element | O(n) | O(n) | |
35 | | Adding an element to the end of the array | O(1) | O(n) | |
36 | | Adding an element to the beginning or middle of the array | O(n) | O(n) | |
37 | | Removing an element from the end of the array |O(1) | O(1)| |
38 | | Removing an element from the beginning or middle of the array | O(n) | O(n) | |
39 | | toString() | O(n) | O(n) | converts the array to a comma-separated string |
40 | | join() | O(n) | O(n) | joins all array elements into a string. |
41 | | pop() | O(1) | O(1) | Removes the last element from an array |
42 | | push() | O(1) | O(1) | adds a new element to an array (at the end) |
43 | | shift() | O(n) | O(n) | removes the first array element and "shifts" all other elements to a lower index |
44 | | unshift() | O(n) | O(n) | adds a new element to an array (at the beginning), and "unshifts" older elements |
45 | | length | O(1) | O(1) | gives the length of the array |
46 | | delete | O(1) | O(1) | elements can be deleted with the JavaScript delete operator, but this leaves an undefined "hole" in the array |
47 | | concat() | O(n) | O(n) | creates a new array by merging (concatenating) existing arrays |
48 | | splice() | O(n) | O(n) | adds new items to and/or removes items from an array in place, without leaving "holes" |
49 | | slice() | O(n) | O(n) | copies a piece of an array into a new array; the source array is not modified |
50 | | sort() | O(n log n) | O(n log n) | sorts the array in place (lexicographically by default; pass a comparator for numbers) |
51 | | reverse() | O(n) | O(n) | reverses the array in place |
52 |
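
A minimal JavaScript sketch of a few of these methods in action (values are illustrative):

```javascript
const arr = [1, 2, 3];
arr.push(4);                   // O(1): [1, 2, 3, 4]
arr.unshift(0);                // O(n): [0, 1, 2, 3, 4]
const copy = arr.slice(1, 3);  // new array [1, 2]; source is untouched
arr.splice(2, 1);              // removes one element in place: [0, 1, 3, 4]
console.log(arr.join('-'));    // "0-1-3-4"
console.log(copy);             // [1, 2]
```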
53 |
54 | ### Object
55 |
56 | - In JavaScript, an object is a data structure that allows you to store collections of key-value pairs. Each key in the object is a unique string that is used to access its corresponding value.
57 | - var myObject = {key1: value1, key2: value2, ..., keyN: valueN};
58 | - In addition to storing simple values like strings and numbers, objects can also contain functions, other objects, and arrays. This makes them a powerful tool for organizing and storing complex data structures in JavaScript.
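
A quick sketch of creating and manipulating an object (names and values are made up):

```javascript
const user = { name: 'Asha', role: 'admin' };
user.age = 28;                    // add a key-value pair
console.log(user['name']);        // access a value by key: "Asha"
delete user.role;                 // remove a key
console.log(Object.keys(user));   // ["name", "age"]
```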
59 |
60 |
61 |
62 | ### Set
63 | - In JavaScript, a Set is a built-in object that represents a collection of unique values. You can use a Set to store and manipulate data in a way that ensures that each value is unique, and to easily perform operations like union, intersection, and difference between multiple sets.
64 | - const mySet = new Set();
65 | - const mySet = new Set([value1, value2, ..., valueN]);
66 | - mySet.add(value);
67 | - mySet.delete(value);
68 | - mySet.has(value); // returns true if the value exists, false otherwise
69 | - mySet.size; // returns the number of elements in the Set
70 | - The order of values in a Set is determined by their insertion order. A Set only contains unique values, so if you try to add a value that already exists in the Set, it will be ignored.
71 | - Add or remove element: O(1)
72 | - Check if element exists: O(1)
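
A small sketch of the union, intersection, and difference operations mentioned above:

```javascript
const a = new Set([1, 2, 3]);
const b = new Set([2, 3, 4]);

const union = new Set([...a, ...b]);                        // Set {1, 2, 3, 4}
const intersection = new Set([...a].filter(x => b.has(x))); // Set {2, 3}
const difference = new Set([...a].filter(x => !b.has(x)));  // Set {1}
```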
73 |
74 |
75 |
76 | ### Stack
77 | - a stack is an abstract data type that represents a collection of elements with two principal operations: push, which adds an element to the collection, and pop, which removes the most recently added element that was not yet removed. The order in which elements are removed from a stack is known as Last-In-First-Out (LIFO).
78 | - A stack is typically implemented using an array or a linked list, where elements are added and removed from the same end of the data structure. The end of the stack where elements are added is called the "top" of the stack.
79 | - Some of the common operations performed on a stack include:
80 | - Push: This operation adds an element to the top of the stack.
81 | - Pop: This operation removes the top element from the stack.
82 | - Peek: This operation returns the top element of the stack without removing it.
83 | - IsEmpty: This operation checks if the stack is empty or not.
84 | - Size: This operation returns the number of elements in the stack.
85 | - The space complexity of a stack is also O(n), where n is the number of elements in the stack. This is because a stack is a linear data structure that stores elements in a contiguous block of memory.
86 | - Example: A web browser uses a stack to keep track of the user's navigation history. Each time the user visits a new page, the page is pushed onto a stack. If the user clicks the back button, the most recent page is popped off the stack and displayed.
87 | - Push element: O(1)
88 | - Pop element: O(1)
89 | - Peek element: O(1)
90 | - Access or modify element at arbitrary index: O(1) (when backed by an array; a pure stack only exposes the top)
91 | - Check if element exists: O(n)
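
One possible array-backed implementation of the operations listed above (a sketch, not the only way):

```javascript
class Stack {
  #items = [];
  push(item) { this.#items.push(item); }                  // O(1)
  pop() { return this.#items.pop(); }                     // O(1)
  peek() { return this.#items[this.#items.length - 1]; }  // O(1)
  isEmpty() { return this.#items.length === 0; }
  get size() { return this.#items.length; }
}

// Browser-history flavor of the example above (LIFO)
const history = new Stack();
history.push('page1');
history.push('page2');
console.log(history.pop()); // "page2" is shown when the user clicks back
```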
92 |
93 |
94 |
95 | | Abstract Data type |
96 | | :-----------------:
97 | | An abstract data type (ADT) is a way of describing a type of data and the operations that can be performed on it, without specifying exactly how those operations are implemented. It's like a blueprint for creating different kinds of data structures that have a set of behaviors and functions that can be used to manipulate them, without revealing the details of how they work behind the scenes. For example, a stack is an abstract data type that allows you to add and remove elements in a specific order, but how those elements are actually stored and accessed is hidden from the user. Some examples of abstract data types include stacks, queues, lists, trees, and graphs. Each of these data types has a set of operations that can be performed on them, such as adding an element, removing an element, or searching for an element.
98 |
99 |
100 |
101 |
102 | ### Queue
103 | - In JavaScript, a queue data structure is used to store and manage a collection of elements in a first-in-first-out (FIFO) order. It can be implemented using arrays or linked lists.
104 | - The time complexity of the queue operations is as follows:
105 | - enqueue: O(1)
106 |    - dequeue: O(1) (with a linked list or a moving head index; Array.prototype.shift is O(n))
107 | - front: O(1)
108 | - isEmpty: O(1)
109 | - The space complexity of the queue depends on the number of elements it stores at any given time
110 | - A Queue is a linear data structure that follows the FIFO (First In, First Out) principle: the first element inserted into the queue is the first one to be removed. A queue has two main operations: enqueue and dequeue.
111 | - In a simple array-based implementation, enqueue adds an element to the end of the array using push, and dequeue removes an element from the front using shift (note that shift is O(n); a linked list or a moving head index gives true O(1) dequeue). peek returns the first element without removing it, and size returns the length of the array.
112 | - Example: Call Center Queue: In a call center, calls are typically handled in the order that they are received. Calls are placed into a queue, and the next available operator will take the next call in the queue.
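
A sketch of a queue that avoids the O(n) cost of shift() by keeping a moving head index (one of several possible implementations):

```javascript
class Queue {
  #items = [];
  #head = 0; // index of the current front element
  enqueue(item) { this.#items.push(item); }                                    // O(1)
  dequeue() { return this.isEmpty() ? undefined : this.#items[this.#head++]; } // O(1)
  peek() { return this.#items[this.#head]; }
  isEmpty() { return this.#head >= this.#items.length; }
  get size() { return this.#items.length - this.#head; }
}

// Call-center flavor of the example above (FIFO)
const calls = new Queue();
calls.enqueue('caller-1');
calls.enqueue('caller-2');
console.log(calls.dequeue()); // "caller-1" is answered first
```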
113 |
114 |
115 |
116 | ### LinkedList
117 | - In JavaScript, linked lists can be implemented using objects as nodes and storing the reference to the next node as a property of the node object.
118 | - A linked list is a data structure consisting of a group of nodes, each of which contains a reference to the next node in the sequence. In a singly linked list, each node only has a reference to the next node, while in a doubly linked list, each node has references to both the next and previous nodes.
119 | - The time complexity of adding or removing a node from a linked list is O(n), where n is the number of nodes in the list, since we may need to traverse the entire list to find the node to add or remove. However, inserting or removing a node at the beginning of the list can be done in constant time, since we only need to update the head reference. The space complexity of a linked list is O(n), since we need to allocate memory for each node.
120 | - Example : Implementing a stack or queue: A linked list can be used to implement a stack or queue data structure. In this case, the linked list can be used to store the elements of the stack or queue, with the first element of the list representing the top of the stack or front of the queue.
121 | - Add or remove element given pointer before add/removal location:O(n)
122 | - Add or remove element given pointer at add/removal location:O(1) if doubly linked
123 | - Add or remove element at arbitrary position without pointer: O(n)
124 | - Access element at arbitrary position without pointer: O(n)
125 | - Check if element exists: O(n)
126 | - Reverse between position i and j: O(j−i)
127 | - Detect a cycle: O(n) using fast-slow pointers or hash map
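
A minimal singly linked list sketch using objects as nodes, as described above:

```javascript
class ListNode {
  constructor(value, next = null) {
    this.value = value;
    this.next = next; // reference to the next node
  }
}

class LinkedList {
  head = null;
  prepend(value) {            // O(1): only the head reference changes
    this.head = new ListNode(value, this.head);
  }
  find(value) {               // O(n): may traverse the whole list
    let node = this.head;
    while (node && node.value !== value) node = node.next;
    return node;
  }
}

const list = new LinkedList();
list.prepend(2);
list.prepend(1);
console.log(list.find(2)?.value); // 2
```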
128 |
129 | ### Tree
130 | - A tree data structure is a hierarchical structure consisting of nodes connected by edges or links. It is similar to a real-life tree, where each node is like a branch connected to the trunk, and the edges represent the connection between the nodes.
131 | - In a tree, there is always one node called the "root" node, which is at the top of the hierarchy. All other nodes are connected to this root node through edges. The nodes that have other nodes connected to them are called "parent" nodes, and the nodes that are connected to them are called "child" nodes.
132 | - The tree structure is commonly used in computer science for representing hierarchical data, such as file directories, organization charts, or the structure of an XML or HTML document.
133 | - Example: organization of files and folders in a computer's file system. Each folder can contain multiple subfolders, and each subfolder can contain multiple files and subfolders, forming a tree-like structure.
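
A tiny sketch of a general tree modeling the file-system example (names are illustrative):

```javascript
class TreeNode {
  constructor(name) {
    this.name = name;
    this.children = []; // child nodes connected by edges
  }
  add(child) { this.children.push(child); return child; }
}

const root = new TreeNode('/');                   // the root node
const docs = root.add(new TreeNode('documents')); // a parent node
docs.add(new TreeNode('resume.pdf'));             // a child (leaf) node
```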
134 |
135 |
136 | ### Graph
137 | - a graph data structure is a way to represent a set of objects (nodes or vertices) that are connected by links (edges or arcs). Graphs can be used to model a wide variety of real-world problems, such as social networks, transportation systems, and computer networks.
138 | - In a graph, each node represents a specific object, and the edges between nodes represent relationships between those objects. For example, in a social network graph, each node might represent a person, and the edges might represent friendships between those people.
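
A sketch of an adjacency-list graph for the social-network example (names are made up):

```javascript
const graph = new Map(); // node -> list of neighboring nodes

function addEdge(a, b) {
  if (!graph.has(a)) graph.set(a, []);
  if (!graph.has(b)) graph.set(b, []);
  graph.get(a).push(b); // undirected: store the edge in both directions
  graph.get(b).push(a);
}

addEdge('Alice', 'Bob');
addEdge('Bob', 'Carol');
console.log(graph.get('Bob')); // ["Alice", "Carol"]
```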
139 |
140 |
141 |
142 | ### Hash Table
143 | - A hash table, also known as a hash map, is a data structure that stores data in an associative manner, using key-value pairs. The keys are hashed to a numeric index, which is used to access the corresponding value in the table.
144 | - In JavaScript, hash tables can be implemented using objects, which are essentially collections of key-value pairs. When a new key-value pair is added to the hash table, the key is hashed using a hash function and the resulting index is used to store the value in the object. When retrieving a value, the hash function is used to calculate the index and the value is retrieved from the corresponding location in the object.
145 | - Hash tables are commonly used to implement caches, databases, and other data structures that require fast key-based lookups. They have a time complexity of O(1) for average case operations, making them efficient for use in many scenarios. However, in the worst case, hash tables can have a time complexity of O(n), where n is the number of elements in the table, if there are many hash collisions. Therefore, it's important to choose an appropriate hash function to minimize the chances of collisions.
146 | - Add or remove key-value pair: O(1)
147 | - Check if key exists: O(1)
148 | - Check if value exists: O(n)
149 | - Access or modify value associated with key: O(1)
150 | - Iterate over all keys, values, or both: O(n)
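
In JavaScript these operations are most directly available through Map (hash-table-backed in practice); a quick sketch:

```javascript
const table = new Map();
table.set('user:1', { name: 'Asha' });  // O(1) average: add a key-value pair
console.log(table.has('user:1'));       // O(1): check if key exists -> true
console.log(table.get('user:1').name);  // O(1): access value -> "Asha"
table.delete('user:1');                 // O(1) average: remove the pair
```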
151 |
152 |
153 |
154 |
155 | ## Algorithm
156 |
157 | ## Sliding window
158 | - The sliding window algorithm is a technique used to solve problems where we need to find a substring or subarray satisfying certain constraints. It works by creating a window of size k and sliding it over the input string or array from left to right.
159 | - The algorithm maintains two pointers: left and right. The left pointer points to the start of the window, while the right pointer points to the end of the window. For a fixed-size window, the size stays equal to k, and at each step the window slides one position to the right.
160 | - The time complexity of the sliding window algorithm is O(n), where n is the length of the input string or array.
161 | - In general, the space complexity of the sliding window algorithm is O(k), where k is the size of the sliding window. This is because the algorithm only needs to store k elements at any given time.
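
A classic fixed-window example, maximum sum of any subarray of size k, as a sketch:

```javascript
// O(n) time, O(1) space: slide the window instead of re-summing it
function maxWindowSum(nums, k) {
  let windowSum = 0;
  for (let i = 0; i < k; i++) windowSum += nums[i]; // sum of the first window
  let best = windowSum;
  for (let right = k; right < nums.length; right++) {
    windowSum += nums[right] - nums[right - k]; // add new element, drop old one
    best = Math.max(best, windowSum);
  }
  return best;
}

console.log(maxWindowSum([2, 1, 5, 1, 3, 2], 3)); // 9 (subarray [5, 1, 3])
```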
162 |
163 |
164 |
165 |
166 | ## Two pointer
167 | - The two pointer algorithm is a technique used to solve problems on arrays or linked lists. It maintains two pointers into the data structure and moves them, often simultaneously, in different directions or at different speeds depending on the problem, until some combination of elements meets a certain condition.
168 | - It is often used for problems that involve searching for a target value, finding a substring, or checking for the existence of a palindrome. The basic idea is to keep two pointers at different positions in the data structure and move them closer to the solution.
169 | - The space complexity of the Two Pointer Algorithm is generally O(1), as it only requires a constant amount of extra memory to store the two pointers. The time complexity of the algorithm varies depending on the specific use case, but it is typically O(n), as both pointers traverse the data structure only once.
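
A sketch of the technique on a sorted array: find two numbers that sum to a target:

```javascript
// O(n) time, O(1) space: pointers start at both ends and move inward
function pairSum(sorted, target) {
  let left = 0, right = sorted.length - 1;
  while (left < right) {
    const sum = sorted[left] + sorted[right];
    if (sum === target) return [sorted[left], sorted[right]];
    if (sum < target) left++;  // need a bigger sum
    else right--;              // need a smaller sum
  }
  return null;
}

console.log(pairSum([1, 2, 4, 7, 11], 9)); // [2, 7]
```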
170 |
171 |
172 |
173 |
174 | ## Linear Search
175 | - Linear Search is a simple search algorithm that is used to find an element in a list or an array. It works by sequentially checking each element of the list or array until the desired element is found or the end of the list is reached.
176 | - The time complexity of linear search is O(n), where n is the number of elements in the list or array. This means that the worst-case scenario is when the target value is not in the list and the algorithm has to check every element in the list. The space complexity of linear search is O(1), since it only requires a constant amount of memory to store the loop index and the target value.
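
A straightforward sketch:

```javascript
// Returns the index of target, or -1 if it is absent: O(n) time, O(1) space
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

console.log(linearSearch([4, 8, 15, 16], 15)); // 2
```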
177 |
178 |
179 |
180 | ## Binary search
181 | - Binary search is a search algorithm used to find the position of a specific value (key) within a sorted array. The algorithm works by repeatedly dividing the sorted array into two equal halves and searching for the key within the appropriate half until the key is found or it is determined that the key does not exist in the array.
182 | - Here's how the binary search algorithm works:
183 | - Start by finding the middle element of the array.
184 | - If the middle element is equal to the key, then we have found the position of the key in the array.
185 | - If the middle element is less than the key, then the key must be in the right half of the array. So, we repeat the search on the right half of the array.
186 | - If the middle element is greater than the key, then the key must be in the left half of the array. So, we repeat the search on the left half of the array.
187 | - We continue this process until we find the key or determine that it does not exist in the array.
188 | - The time complexity of binary search algorithm is O(log n), where n is the number of elements in the array.
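
The steps above as an iterative sketch:

```javascript
// Binary search on a sorted array: O(log n) time
function binarySearch(sorted, key) {
  let lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === key) return mid; // found the key
    if (sorted[mid] < key) lo = mid + 1; // key is in the right half
    else hi = mid - 1;                   // key is in the left half
  }
  return -1; // key does not exist in the array
}

console.log(binarySearch([1, 3, 5, 7, 9], 7)); // 3
```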
189 |
190 |
191 | ## Bubble Sort
192 | - Bubble Sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted.
193 | - The algorithm works by comparing each pair of adjacent elements in the list from the beginning, swapping them if they are in the wrong order (ascending or descending arrangement) and continuing until the end of the list.
194 | - The time complexity of the Bubble Sort algorithm is O(n^2) in the worst and average cases, and O(n) in the best case (when the list is already sorted and an early-exit check detects a pass with no swaps). The space complexity is O(1), as the algorithm only requires a constant amount of additional space for swapping elements.
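
A sketch with the early-exit check that gives the O(n) best case:

```javascript
function bubbleSort(arr) {
  for (let end = arr.length - 1; end > 0; end--) {
    let swapped = false;
    for (let i = 0; i < end; i++) {
      if (arr[i] > arr[i + 1]) {
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]]; // swap the adjacent pair
        swapped = true;
      }
    }
    if (!swapped) break; // a pass with no swaps means the array is sorted
  }
  return arr;
}

console.log(bubbleSort([5, 1, 4, 2])); // [1, 2, 4, 5]
```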
195 |
196 |
197 |
198 | ## knapsack
199 | - The Knapsack problem is a combinatorial optimization problem where you have to select a set of items, each with a weight and a value, in such a way that the total weight is less than or equal to a given limit and the total value is maximized.
200 | - The algorithm for solving the Knapsack problem is the dynamic programming approach. It involves building a table where each cell (i, j) represents the maximum value that can be obtained by selecting from the first i items, subject to a weight limit of j.
201 | - The Knapsack problem is a classic optimization problem in computer science. It involves choosing items to put in a knapsack (a backpack or a container) that has a limited capacity, with the goal of maximizing the total value of the items.
202 | - Here's the general outline of the algorithm:
203 | - Create a table with (n+1) rows and (W+1) columns, where n is the number of items and W is the weight limit.
204 | - Initialize the first row and the first column to 0.
205 | - For each item i from 1 to n:
206 |   - For each weight limit j from 1 to W:
207 |     - If the weight of item i is greater than j, set the value of cell (i, j) to the value of cell (i-1, j).
208 |     - Otherwise, set the value of cell (i, j) to the maximum of the following two values:
209 |       - The value of cell (i-1, j).
210 |       - The value of item i plus the value of cell (i-1, j-weight[i]), where weight[i] is the weight of item i.
211 | - The maximum value that can be obtained is in cell (n, W).
212 | - The time complexity of this algorithm is O(nW), where n is the number of items and W is the weight limit. The space complexity is also O(nW), as we are creating a table with (n+1) rows and (W+1) columns.
213 |
214 | - In other words, imagine you are a thief and you are trying to steal the most valuable items from a store, but you can only carry a limited amount of weight in your bag. You want to choose the items that have the highest value-to-weight ratio so that you can maximize the total value of the items you steal without going over the weight limit of your bag.
215 | - The Knapsack problem is commonly used in computer science to test the efficiency of algorithms, particularly dynamic programming and greedy algorithms.
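
The table-building outline above as a compact sketch:

```javascript
// 0/1 knapsack via dynamic programming: O(nW) time and space
function knapsack(weights, values, W) {
  const n = weights.length;
  // dp[i][j] = best value using the first i items with weight limit j
  const dp = Array.from({ length: n + 1 }, () => new Array(W + 1).fill(0));
  for (let i = 1; i <= n; i++) {
    for (let j = 1; j <= W; j++) {
      dp[i][j] = weights[i - 1] > j
        ? dp[i - 1][j] // item i does not fit
        : Math.max(
            dp[i - 1][j],                                 // skip item i
            values[i - 1] + dp[i - 1][j - weights[i - 1]] // take item i
          );
    }
  }
  return dp[n][W];
}

console.log(knapsack([1, 3, 4], [15, 50, 60], 4)); // 65 (take items 1 and 2)
```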
216 |
217 |
218 | ## Selection Sort
219 | - Selection Sort is a sorting algorithm that works by repeatedly finding the minimum element from the unsorted part of an array and placing it at the beginning of the array. It divides the input array into two parts: the sorted part and the unsorted part.
220 | - Here are the steps of the Selection Sort algorithm:
221 | - Set the first element of the array as the minimum value.
222 | - Traverse the array starting from the second element, and for each element, compare it with the minimum value. If the element is smaller than the minimum value, set it as the new minimum value.
223 | - After reaching the end of the array, swap the minimum value with the first element of the array.
224 | - Repeat the above steps for the remaining unsorted part of the array, i.e., from the second element to the end of the array.
225 | - The time complexity of the Selection Sort algorithm is O(n^2), where n is the number of elements in the array. This is because, for each position, the algorithm scans the remaining unsorted portion of the array to find the minimum. The space complexity is O(1), as the algorithm sorts the array in place without requiring extra memory.
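
The steps above as a sketch:

```javascript
// Selection sort: O(n^2) time, O(1) space, sorts in place
function selectionSort(arr) {
  for (let i = 0; i < arr.length - 1; i++) {
    let min = i;
    for (let j = i + 1; j < arr.length; j++) { // scan the unsorted part
      if (arr[j] < arr[min]) min = j;
    }
    if (min !== i) [arr[i], arr[min]] = [arr[min], arr[i]];
  }
  return arr;
}

console.log(selectionSort([64, 25, 12, 22])); // [12, 22, 25, 64]
```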
226 |
227 |
228 |
229 | ## Merge Sort
230 | - Merge Sort is a sorting algorithm that follows the divide-and-conquer approach. It breaks down the input array into smaller subarrays, sorts them recursively, and then merges them to get the final sorted array. The algorithm has a time complexity of O(n log n).
231 | - Here are the steps to perform the Merge Sort algorithm:
232 | - Divide the array into two halves, by finding the middle index.
233 | - Recursively sort the two halves using merge sort.
234 | - Merge the two sorted halves by comparing the elements from each half and placing them in order in a new array.
235 | - Repeat the merging process until all subarrays have been merged into one final sorted array.
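
A sketch of those steps:

```javascript
// Merge sort: O(n log n) time, O(n) extra space
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid)); // sort each half recursively
  const right = mergeSort(arr.slice(mid));
  const merged = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]); // take the smaller
  }
  return merged.concat(left.slice(i), right.slice(j)); // append any leftovers
}

console.log(mergeSort([38, 27, 43, 3, 9])); // [3, 9, 27, 38, 43]
```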
236 |
237 |
238 | ## quick sort
239 | - Quick sort is a sorting algorithm that uses the divide-and-conquer approach. It works by partitioning an array into two sub-arrays, sorting the sub-arrays independently, and then combining them in a way that results in a sorted array.
240 | - The basic idea behind quick sort is to select a pivot element, which can be any element in the array. Then, we partition the array into two sub-arrays, one with elements smaller than the pivot and one with elements greater than the pivot. We then recursively apply the same process to each sub-array until the entire array is sorted.
241 | - Here are the steps of the quick sort algorithm:
242 | - Choose a pivot element from the array.
243 | - Partition the array into two sub-arrays, one with elements smaller than the pivot and one with elements greater than the pivot.
244 | - Recursively apply the same process to each sub-array.
245 | - Combine the sorted sub-arrays to obtain the final sorted array.
246 | - The time complexity of the quick sort algorithm is O(n log n) in the average case and O(n^2) in the worst case (when the pivot is chosen poorly and the array is already sorted or almost sorted). However, in practice, quick sort is often faster than other sorting algorithms due to its cache efficiency and ability to be easily parallelized.
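
A simple (not in-place) sketch using the first element as the pivot:

```javascript
// Quick sort: O(n log n) average, O(n^2) worst case
function quickSort(arr) {
  if (arr.length <= 1) return arr;
  const [pivot, ...rest] = arr;
  const smaller = rest.filter(x => x < pivot);  // elements below the pivot
  const larger = rest.filter(x => x >= pivot);  // elements at or above it
  return [...quickSort(smaller), pivot, ...quickSort(larger)];
}

console.log(quickSort([10, 7, 8, 1, 5])); // [1, 5, 7, 8, 10]
```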
247 |
248 |
249 |
250 | ## Dynamic Programming
251 | - Dynamic Programming is a problem-solving technique used in computer science and mathematics. It involves breaking down a complex problem into smaller sub-problems and solving them in a systematic way, building up to the solution of the original problem. The approach involves storing the solutions of sub-problems in memory so that they can be accessed and reused as needed. Dynamic Programming is used to solve a wide range of problems, including optimization, sequence analysis, and graph algorithms. It is particularly useful for problems that involve overlapping sub-problems, where the same sub-problem may need to be solved multiple times.
252 | - In JavaScript, dynamic programming can be used to solve a variety of problems, such as finding the maximum subarray, the longest common subsequence, or the shortest path in a graph.
253 | - The time complexity of a DP algorithm is roughly the number of distinct subproblems multiplied by the work per subproblem; for many classic problems this works out to O(n*m), where n and m are the sizes of the inputs (for example, the lengths of two sequences in longest common subsequence).
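
Fibonacci with memoization is the canonical illustration of reusing stored subproblem results; a sketch:

```javascript
function fib(n, memo = new Map()) {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n); // reuse a previously solved subproblem
  const result = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, result);
  return result;
}

console.log(fib(40)); // 102334155, O(n) with memoization vs O(2^n) without
```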
254 |
255 |
256 | ## backtracking
257 | - Backtracking is a problem-solving technique used in algorithm design. It involves building solutions to problems incrementally, and at each step, making a decision that can lead to either a successful or an unsuccessful outcome. If the current decision leads to a dead end, the algorithm backtracks, undoing the last decision, and tries an alternative path.
258 | - The process continues until all possible solutions have been explored or a satisfactory solution is found. Backtracking is particularly useful when searching for all possible solutions to a problem, as it allows the algorithm to eliminate unpromising paths early on and explore more promising ones. It is commonly used in problems such as puzzle-solving, permutation generation, graph traversal, Sudoku, or finding the shortest path in a graph.
259 | - The time complexity of a backtracking algorithm can be represented by O(b^d), where b is the branching factor, i.e., the number of choices at each decision point, and d is the depth of the search tree.
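
A small sketch of the choose/explore/un-choose pattern, generating all subsets of an array:

```javascript
function subsets(nums) {
  const result = [];
  const current = [];
  function backtrack(start) {
    result.push([...current]);          // record the current partial solution
    for (let i = start; i < nums.length; i++) {
      current.push(nums[i]);            // choose
      backtrack(i + 1);                 // explore
      current.pop();                    // un-choose (backtrack)
    }
  }
  backtrack(0);
  return result;
}

console.log(subsets([1, 2])); // [[], [1], [1, 2], [2]]
```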
260 |
261 |
262 |
263 | ## recursion
264 | - Recursion is a programming technique that allows a function to call itself within its own code. In simpler terms, recursion is a process in which a function solves a problem by breaking it down into smaller subproblems, solving each subproblem recursively, and then combining the results to solve the original problem.
265 | - For a simple linear recursion such as factorial, where each call makes one recursive call, the time taken is proportional to the input size n. In general, the time complexity of a recursive function can range from O(1) to O(2^n), depending on the number of recursive calls made and the time complexity of each call.
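
The classic factorial example as a sketch:

```javascript
// n! is reduced to the smaller subproblem (n-1)!
function factorial(n) {
  if (n <= 1) return 1;        // base case stops the recursion
  return n * factorial(n - 1); // recursive case
}

console.log(factorial(5)); // 120, with O(n) calls for input n
```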
266 |
267 |
268 |
269 | ## greedy algorithm
270 | - Greedy algorithms are a class of algorithms that aim to find the optimal solution for a problem by making a locally optimal choice at each step. The idea is to make a decision based on the best available option at the moment without considering the future consequences. Greedy algorithms are usually used for optimization problems, where the goal is to find the best solution among the possible choices.
271 | - A simple example of a greedy algorithm is the coin change problem. In this problem, we are given a set of coins of different denominations and a target amount. The goal is to find the minimum number of coins required to make up the target amount. A greedy approach to solving this problem is to always choose the largest denomination coin that fits into the remaining amount until the amount is zero.
272 | - The time complexity of a greedy algorithm depends on the problem and the implementation. In some cases, greedy algorithms can solve problems in linear time, while in other cases, they may require exponential time. It is important to note that greedy algorithms do not always provide the optimal solution for a problem, and in some cases, a more complex approach may be required.
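
The coin change example above as a sketch (note it is only optimal for "canonical" coin systems like the one shown):

```javascript
function minCoins(coins, amount) {
  const sorted = [...coins].sort((a, b) => b - a); // largest denomination first
  const used = [];
  for (const coin of sorted) {
    while (amount >= coin) { // always take the largest coin that still fits
      used.push(coin);
      amount -= coin;
    }
  }
  return amount === 0 ? used : null; // greedy can fail for some coin sets
}

console.log(minCoins([1, 5, 10, 25], 63)); // [25, 25, 10, 1, 1, 1]
```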
273 |
274 |
275 |
276 |
277 |
278 |
279 |
280 |
--------------------------------------------------------------------------------
/Interview/readme.md:
--------------------------------------------------------------------------------
1 | ## ROLE : REACT JS DEVELOPER
2 | ## ROUND : 1
3 |
4 | Explain the structure of an HTML Page.
5 |
6 | Different types of CSS styling? INLINE, INTERNAL, EXTERNAL
7 |
8 | How are CSS files managed when the website is hosted?
9 |
10 | What do you know about firebase?
11 |
12 |
13 |
14 | ## ROUND : 2
15 |
16 | Explain your Project
17 |
18 | With respect to functionality, showcase the features in your project; e.g., if there's a search bar, demonstrate whether it works.
19 |
20 | What is REDUX?
21 |
22 | Explain the flow of redux in your project
23 |
24 | What are hooks in React and name any 3 hooks.
25 |
26 | What is the Context API and how do you use it?
27 |
28 | What are MAP, FILTER and REDUCE?
29 |
30 | What's the middleware you used in Redux? ANS: REDUX THUNK. Any other middleware you have used?
31 |
32 | Do you know what Redux Saga is?
33 |
34 | What is useMemo hook?
35 |
36 | What is prop drilling?
37 |
38 | What is **this** keyword?
39 |
40 | In a note editor, create an array of objects such that each object has *price, product name and product size* keys. You need to search a product by its name: if the name is "shirt", then even typing *sh* should bring up the result (see the sketch below).
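
A minimal sketch of one approach (data values are illustrative):

```javascript
const products = [
  { name: 'shirt', price: 499, size: 'M' },
  { name: 'shoes', price: 1299, size: '9' },
  { name: 'pants', price: 899, size: 'L' },
];

// Case-insensitive prefix match: typing "sh" returns shirt and shoes
function searchByName(query) {
  const q = query.toLowerCase();
  return products.filter(p => p.name.toLowerCase().startsWith(q));
}

console.log(searchByName('sh')); // [{ name: 'shirt', ... }, { name: 'shoes', ... }]
```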
41 |
42 |
43 |
44 |
45 |
--------------------------------------------------------------------------------
/Javascript/practice/search.html:
--------------------------------------------------------------------------------
(HTML source stripped during extraction; only the page title "Search Bar" survives from this search-bar practice page.)
--------------------------------------------------------------------------------
/Javascript/practice/slider.html:
--------------------------------------------------------------------------------
(HTML source stripped during extraction; only the page title "Document" and the slider's "Previous" / "Next" controls survive from this slider practice page.)
--------------------------------------------------------------------------------
/Miscellaneous/Readme.md:
--------------------------------------------------------------------------------
1 | ## Run time vs Compile time?
2 |
3 | Run time and compile time are programming terms that refer to different stages of software program development. After a developer writes ***source code, this code must be compiled into machine code in order to become an executable program***. This is referred to as **compile time**.
4 |
5 | ***A compiled program can be opened and run by a user. When the application is running, it is called run time.***
6 |
7 |
8 |
9 | ## What do static and dynamic mean?
10 |
11 | *‘What static and dynamic typing are?’, also ‘What static and dynamic binding is?’*
12 | Static indicates that resolution takes place at the time a program is constructed — **compile time**. Dynamic indicates that resolution takes place at the time a program is run — **run time**.
13 |
14 |
15 |
16 | ## What are static and dynamic typing?
17 |
18 | Static typing means that the executable form of a program generated at compile time will vary depending upon the types of data values found in the program. Dynamic typing means that the generated code will always be the same irrespective of the type — any differences in execution will be determined at run time.
19 |
20 | In dynamic typing, a value can change from its initially declared type to another type at any time, for example from int to string; in static typing, a value cannot easily change from one type to another. You have to convert it explicitly.
21 |
22 |
23 | ## What are static and dynamic binding?
24 |
25 | Binding refers to the association of names in program text to the storage locations to which they refer. In static binding, this association is predetermined at compile time. With dynamic binding, this association is not determined until run time.
26 |
27 | Example:
28 |
29 | If someone attempts to invoke a method like MyClass.foo(), a static binding system will verify at build time that there is a class called MyClass and that class has a method called foo. A dynamic binding system will wait until run time to see whether either exists.
30 |
31 |
32 |
33 | ## What is an interpreted language?
34 |
35 | With a compiled language, the code you enter is reduced to a set of machine-specific instructions before being saved as an executable file, at compile time. With an interpreted language, most implementations execute instructions directly, without previously compiling the program into machine-language instructions; any reduction to machine instructions happens at run time. Interpreted languages are commonly dynamically typed.
36 |
37 |
38 |
39 |
40 | ## What is OWASP?
41 | - OWASP stands for the Open Web Application Security Project. It is an open community that focuses on improving the security of software applications, particularly web applications. OWASP provides resources and tools to help developers, security professionals, and organizations better understand and mitigate the risks associated with web application security.
42 | - OWASP maintains a list of the top 10 web application security risks, which is updated regularly to reflect the latest threats and vulnerabilities. This list serves as a guide for developers and security professionals to help them prioritize and address the most critical security risks when developing or testing web applications.
43 | - In addition to the top 10 list, OWASP provides a variety of resources and tools, including security guidelines, testing frameworks, and educational materials. The organization also hosts conferences and events around the world to promote the importance of web application security and share knowledge and best practices among the community.
44 |
45 |
46 |
47 | ## What is gRPC?
48 |
49 | - gRPC is a way for different computer programs to communicate with each other, even if they are written in different programming languages or run on different computers. It's like a telephone that lets programs talk to each other, using a special language to understand each other's messages. gRPC is very fast and efficient, and it's used in many different types of applications that need to exchange information over a network.
50 | - gRPC builds on top of traditional RPC frameworks by providing features such as:
51 | - Bi-directional streaming: Both the client and server can send and receive data at the same time, which allows for more efficient communication.
52 |     - Multi-language support: gRPC supports multiple programming languages, including C++, Java, Python, Go, and JavaScript, among others.
53 | - Code generation: Protocol Buffers enable code generation in multiple programming languages, which simplifies development and reduces the risk of errors.
54 | - Flow control: gRPC provides flow control mechanisms to prevent overwhelming the network and ensure that data is transferred efficiently.
55 | - Authentication and encryption: gRPC includes built-in support for authentication and encryption, which enhances security and privacy.
56 | - gRPC is commonly used in microservices architectures, where small, independently deployable services communicate with each other over the network. It is also used in cloud-native applications, IoT devices, and other distributed systems that require efficient and reliable communication between different components.
57 |
58 |
59 |
60 | ## What is GraphQL?
61 | - GraphQL is an open-source query language and runtime for APIs that was developed by Facebook. It provides a more efficient, powerful, and flexible alternative to RESTful APIs by enabling clients to specify exactly what data they need and receive only that data in response.
62 | - With GraphQL, clients can make a single request to the server to fetch data from multiple resources, as opposed to RESTful APIs, where clients may need to make multiple requests to retrieve related data. The client sends a GraphQL query to the server, which processes the query and returns only the data that the client requested.
63 | - Key features of GraphQL include:
64 | - Strong typing: GraphQL schemas define the types of data that can be queried, which enables type checking and reduces errors.
65 | - Hierarchical: GraphQL queries are hierarchical, which means that clients can specify complex data relationships and nested data structures.
66 | - Client-specified queries: The client specifies exactly what data it needs, which improves performance by reducing the amount of data sent over the network.
67 | - Language-agnostic: GraphQL is not tied to any specific programming language or database, which means it can be used with a variety of backend technologies.
68 | - Real-time subscriptions: GraphQL supports real-time subscriptions, which enables clients to receive data updates in real-time.
69 | - GraphQL is commonly used in web and mobile applications that require a flexible and efficient way to retrieve data from servers. It has gained popularity in recent years, and many major companies, including Facebook, GitHub, and Shopify, have adopted it for their APIs.
70 |
71 |
72 |
73 |
74 |
75 | ## How can we implement caching on frontend?
76 | - Implementing caching on the frontend is a common technique to improve the performance of web applications by reducing the number of requests to the server. Here are some ways to implement caching on the frontend:
77 | - Browser caching: Browsers can cache static assets such as images, stylesheets, and JavaScript files, reducing the amount of time it takes to load pages. By setting the appropriate HTTP headers, developers can control how long these assets are cached and when they should be invalidated.
78 | - Service Worker caching: Service Workers are a client-side technology that enables web applications to cache data and assets on the user's device. This allows the application to load and run even when the user is offline or has a poor internet connection. Service Workers can cache content in a variety of ways, including network-first, cache-first, and stale-while-revalidate strategies.
79 | - In-memory caching: JavaScript frameworks such as React and Angular provide mechanisms for in-memory caching of components and application state. By storing frequently used data in memory, these frameworks can reduce the amount of time it takes to render components and improve the performance of the application.
80 | - Local Storage: Local Storage is a browser-based mechanism for storing data on the user's device. By storing frequently used data in Local Storage, web applications can reduce the number of requests to the server and improve the performance of the application.
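
As a rough sketch of the Local Storage approach, here is a tiny cache helper with a time-to-live; the `/api/users` endpoint and key names are invented for illustration:

```javascript
function setCached(key, value, ttlMs) {
  localStorage.setItem(key, JSON.stringify({ value, expires: Date.now() + ttlMs }));
}

function getCached(key) {
  const raw = localStorage.getItem(key);
  if (!raw) return null;
  const { value, expires } = JSON.parse(raw);
  if (Date.now() > expires) {        // entry is stale: evict and report a miss
    localStorage.removeItem(key);
    return null;
  }
  return value;
}

// Usage: serve a cached API response for up to five minutes
async function fetchUsers() {
  const cached = getCached('users');
  if (cached) return cached;
  const users = await fetch('/api/users').then(r => r.json()); // hypothetical endpoint
  setCached('users', users, 5 * 60 * 1000);
  return users;
}
```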
--------------------------------------------------------------------------------
/OOPs/Readme.md:
--------------------------------------------------------------------------------
1 | ## What is Object-Oriented Programming (OOP)?
2 | - Object-Oriented Programming (OOP) is a programming paradigm that organizes data and behavior into reusable structures called objects. It emphasizes the concept of objects, which are instances of classes, and their interactions to build complex software systems.
3 | - In OOP, the program's structure revolves around objects that encapsulate data (attributes or properties) and the operations (methods or functions) that can be performed on that data. Objects are created from class blueprints, which define their common characteristics and behaviors.
4 | - The key principles of OOP are:
5 | - **Encapsulation**: Encapsulation refers to the bundling of data and methods within an object, hiding the internal details and exposing only the necessary interfaces. It helps achieve data abstraction, data security, and code organization.
6 | - **Inheritance**: Inheritance allows the creation of new classes (subclasses) based on existing classes (superclasses). Subclasses inherit the properties and behaviors of their superclasses and can add or override them. It promotes code reuse and hierarchical classification.
7 |    - **Polymorphism**: Polymorphism is the ability of objects of different types to be treated as objects of a common superclass. It allows you to use a single interface to represent multiple object types.
8 |    - Think of it this way: let's say you have a superclass called "Shape" and two subclasses called "Circle" and "Square" that inherit from the Shape class. Each subclass has its own implementation of the "draw" method (see the sketch after this list).
9 | - **Abstraction**: Abstraction focuses on representing essential features of objects while hiding unnecessary details. It simplifies complex systems by breaking them down into manageable and modular components. Abstract classes and interfaces are used to define common characteristics and contracts for related classes. Abstraction allows programmers to work at higher levels of abstraction, dealing with concepts and behaviors rather than implementation details.
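
Here is that Shape example as a minimal JavaScript sketch:

```javascript
class Shape {
  draw() {
    throw new Error('draw() must be implemented by a subclass');
  }
}

class Circle extends Shape {
  draw() { return 'Drawing a circle'; }
}

class Square extends Shape {
  draw() { return 'Drawing a square'; }
}

// Polymorphism: each element is treated as a Shape, but its own draw() runs
const shapes = [new Circle(), new Square()];
shapes.forEach(shape => console.log(shape.draw()));
```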
10 |
11 |
12 |
13 |
14 | ## What is the difference between a class and an object?
15 | - In Object-Oriented Programming (OOP), a class and an object are related but distinct concepts. Here's the difference between a class and an object:
16 |
17 | **Class**:
18 | - A class is a blueprint or template that defines the structure, behavior, and initial state of objects.
19 | - It serves as a blueprint for creating objects of a specific type.
20 | - It encapsulates attributes (data members) and behaviors (methods) that define the characteristics and actions of objects.
21 | - A class provides a definition of an object but doesn't represent a specific instance.
22 | - It defines the common properties and behaviors shared by objects of the same type.
23 | - You can create multiple objects (instances) based on a single class.
24 | - A class acts as a data type, allowing you to create objects with specific attributes and behaviors.
25 |
26 | **Object**:
27 | - An object is an instance of a class.
28 | - It is a concrete entity created from a class blueprint.
29 | - Objects have their own state (attribute values) and behavior (methods).
30 | - Each object created from a class is independent, with its own unique identity and data.
31 | - Objects can interact with each other by invoking methods and exchanging data.
32 | - Objects represent specific instances or occurrences of a concept or entity defined by the class.
33 | - You can create, manipulate, and destroy objects during the execution of a program.
34 |
35 | To summarize, a class is an abstract representation or blueprint that defines the properties and behaviors of objects, while an object is a concrete instance created from a class with its own unique identity and data. Classes provide the structure, and objects are the actual entities that exist and interact during the program execution.
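
A small JavaScript sketch of the distinction (class name and values are illustrative):

```javascript
class Car { // the class: a blueprint
  constructor(brand, color) {
    this.brand = brand; // each object gets its own state
    this.color = color;
  }
  describe() {
    return `A ${this.color} ${this.brand}`;
  }
}

// Two independent objects (instances) created from the same blueprint
const car1 = new Car('Toyota', 'red');
const car2 = new Car('Honda', 'blue');
console.log(car1.describe()); // "A red Toyota"
console.log(car2.describe()); // "A blue Honda"
```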
36 |
37 |
38 |
39 | ## What is the difference between a superclass and a subclass?
40 | - In Object-Oriented Programming (OOP), the terms "superclass" and "subclass" are used to describe the relationship between classes. Here's the difference between a superclass and a subclass:
41 |
42 | **Superclass** (Base Class or Parent Class):
43 | - A superclass is a class that is higher in the class hierarchy.
44 | - It is a generic class that defines common properties and behaviors shared by one or more related subclasses.
45 | - A superclass can be seen as a more general or abstract representation of a concept.
46 | - It serves as a base for inheritance, providing a set of attributes and methods that can be inherited by its subclasses.
47 | - Superclasses can have their own attributes and methods.
48 | - Multiple subclasses can inherit from the same superclass.
49 | - Superclasses can be instantiated, but they are typically used as base classes for creating more specialized subclasses.
50 |
51 | **Subclass** (Derived Class or Child Class):
52 | - A subclass is a class that is derived from a superclass.
53 | - It inherits properties (attributes and methods) from its superclass.
54 | - A subclass can add new attributes and methods or override the inherited ones.
55 | - It represents a specialized version of the superclass, refining or extending its behavior.
56 | - Subclasses can have their own additional attributes and methods specific to their specialization.
57 | - Subclasses can further serve as superclasses for other subclasses, creating a hierarchy of classes.
58 | - Subclasses can be instantiated to create objects with the extended or refined behavior.
59 |
60 | Key points:
61 | - Superclasses and subclasses establish an inheritance relationship, where the subclass inherits the properties of the superclass.
62 | - The superclass represents a more general concept, while the subclass represents a more specialized version of that concept.
63 | - Subclasses inherit the attributes and methods of the superclass and can add their own unique attributes and methods or modify the inherited ones.
64 |
65 | In summary, a superclass is a higher-level, more general class that provides common properties and behaviors, while a subclass is a specialized class that inherits and extends the properties and behaviors of the superclass. Inheritance allows code reuse, promotes modularity, and facilitates hierarchical organization of classes.
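
A brief sketch of the relationship in JavaScript (names are illustrative):

```javascript
class Animal {               // superclass: the general concept
  constructor(name) { this.name = name; }
  speak() { return `${this.name} makes a sound`; }
}

class Dog extends Animal {   // subclass: a specialized version
  speak() {                  // overrides the inherited method
    return `${this.name} barks`;
  }
  fetch() {                  // adds behavior of its own
    return `${this.name} fetches the ball`;
  }
}

const dog = new Dog('Rex');
console.log(dog.speak()); // "Rex barks"
console.log(dog.fetch()); // "Rex fetches the ball"
```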
66 |
67 |
68 |
69 |
70 |
71 | ## Explain the concept of abstraction in OOP.
72 | - In Object-Oriented Programming (OOP), abstraction is a fundamental concept that focuses on creating simplified and abstract representations of complex systems. It allows you to model real-world entities or concepts in a simplified manner by highlighting the essential features and hiding the unnecessary details.
73 |
74 | Abstraction involves two main aspects:
75 |
76 | 1. **Data Abstraction**:
77 | Data abstraction refers to the process of representing the relevant data and hiding the internal details or implementation of how that data is stored or manipulated. It allows you to define classes with attributes (properties) that represent the state of an object, without exposing the internal data representation.
78 |
79 | In data abstraction:
80 | - Only the essential attributes required to represent an object's state are defined, while the internal details are hidden.
81 | - Access to the object's attributes is controlled through methods (getters and setters), providing an interface for interacting with the object's data.
82 | - Data abstraction promotes encapsulation by encapsulating the data within a class and allowing controlled access to it.
83 | - It enables separation of concerns, allowing you to focus on the relevant data and operations, and hide unnecessary implementation details.
84 |
85 | 2. **Procedural Abstraction**:
86 | Procedural abstraction involves defining the behavior or functionality of an object or module without specifying the exact implementation details. It allows you to define methods (functions) that perform certain operations or actions without revealing how those operations are carried out.
87 |
88 | In procedural abstraction:
89 | - The internal steps or algorithms used to perform a task are hidden, and only the high-level functionality is exposed.
90 | - Methods provide an interface for invoking actions or operations on objects, without exposing the internal implementation.
91 | - Procedural abstraction promotes modularity and code reusability by allowing different objects or modules to use the same methods to achieve a certain functionality.
92 | - It enables you to work with objects or modules at a higher level of abstraction, focusing on what needs to be done rather than how it is done.
93 |
94 | Abstraction helps to manage complexity, improve code maintainability, and facilitate code reuse. It allows you to create simplified models of real-world systems, providing a clear separation between the interface and the implementation. By abstracting away unnecessary details, you can focus on the essential aspects of the system, making the code more readable, understandable, and adaptable.
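
As a closing sketch, data abstraction in modern JavaScript using a private field and a getter (the bank-account scenario is invented for illustration):

```javascript
class BankAccount {
  #balance = 0; // private field: the internal representation is hidden

  deposit(amount) { // the exposed interface validates and updates the state
    if (amount <= 0) throw new Error('Deposit must be positive');
    this.#balance += amount;
  }

  get balance() { // controlled, read-only access to the data
    return this.#balance;
  }
}

const account = new BankAccount();
account.deposit(100);
console.log(account.balance); // 100
// account.#balance = 999;    // SyntaxError: private field is not accessible
```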
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ### Details about Repository:
2 |
3 | - [AWS](https://github.com/dekta/Revision/tree/main/Aws)
4 | - [Database](https://github.com/dekta/Revision/tree/main/DataBase)
5 | - [Express](https://github.com/dekta/Revision/tree/main/Express)
6 | - [JavaScript](https://github.com/dekta/Revision/tree/main/Javascript)
7 | - [NodeJs](https://github.com/dekta/Revision/tree/main/Nodejs)
8 | - [Redis](https://github.com/dekta/Revision/tree/main/Redis)
9 | - [React](https://github.com/dekta/Revision/tree/main/react)
10 | - [HTML](https://github.com/dekta/Revision/tree/main/html)
11 | - [Redux](https://github.com/dekta/Revision/tree/main/Redux)
12 | - [System](https://github.com/dekta/Revision/tree/main/system)
13 | - [Miscellaneous](https://github.com/dekta/Revision/tree/main/Miscellaneous)
14 | - [DSA](https://github.com/dekta/Revision/tree/main/Dsa)
15 |
--------------------------------------------------------------------------------
/Redux/Readme.md:
--------------------------------------------------------------------------------
1 | # **REDUX**
2 |
3 | ## 1. What is Redux? 🙄
4 |
5 | - Redux is a JavaScript library that helps manage the state of an application in a predictable way. It provides a single source of truth called the 'store' and uses actions and reducers to update and control the state. By following a strict unidirectional data flow, Redux makes it easier to handle and understand how data changes in your application.
6 |
7 | ---
8 |
9 | ## 2. What is the purpose of Redux? 🤔
10 |
11 | - Redux is used to manage the state of an application in a predictable and organized way. It provides a centralized store to hold the state and follows a unidirectional data flow, making it easier to understand and update the state.
12 |
13 | - ### **Example**
14 |
15 | Imagine you have a shopping cart feature in your e-commerce application. The cart can be updated from different components, such as adding items, removing items, or changing quantities.
16 |
17 | Without Redux, you might have to pass the cart state and update functions through multiple levels of components, causing complexity and potential bugs. With Redux, you can store the cart state in the Redux store and dispatch actions to update it.
18 |
19 | For instance, when a user adds an item to the cart, you dispatch an action like ADD_TO_CART with the item details. The corresponding reducer receives this action, updates the cart state by adding the item, and returns a new state.
20 |
21 | All components interested in the cart state can subscribe to the store and receive updates automatically whenever the state changes. They can access the current cart state from the store, eliminating the need to pass it explicitly.
22 |
23 | By using Redux, you have a centralized store that holds the state of the entire application, making it easier to manage, debug, and test. It provides a clear separation of concerns and promotes a more organized and predictable approach to state management.
24 |
25 | ---
26 |
27 | ## 3. What are the core concepts in Redux?
28 | - The core concepts in Redux are the store, actions, and reducers. The store is a JavaScript object that holds the application state. Actions are plain JavaScript objects that describe the type of update or change that needs to be made to the state. Reducers are pure functions that take the current state and an action as input and produce a new state based on the action.
29 |
30 | ---
31 |
32 | ## 4. How does Redux ensure a predictable state in an application?
33 | - Redux ensures a predictable state in an application by following a strict unidirectional data flow and enforcing certain principles. Let's break down the process and provide an example code snippet to illustrate the concept.
34 |
35 | 1. **Single Source of Truth:** In Redux, the entire state of an application is stored in a single JavaScript object called the "store." This store serves as the single source of truth for the application's state.
36 |
37 | **Example Code:**
38 | ```javascript
39 | import { createStore } from 'redux';
40 | import rootReducer from './reducers';
41 |
42 | const store = createStore(rootReducer);
43 | ```
44 |
45 | 2. **Actions and Reducers:** State changes in Redux are driven by actions. An action is a plain JavaScript object that describes the type of update to be made to the state. Reducers are responsible for handling these actions and producing a new state based on the current state and the action received.
46 |
47 | **Example Code:**
48 | ```javascript
49 | // Define an action
50 | const addToCartAction = {
51 | type: 'ADD_TO_CART',
52 | payload: { id: 1, name: 'Product 1', price: 10 }
53 | };
54 |
55 | // Define a reducer
56 | const cartReducer = (state = [], action) => {
57 | switch (action.type) {
58 | case 'ADD_TO_CART':
59 | return [...state, action.payload];
60 | default:
61 | return state;
62 | }
63 | };
64 |
65 | ```
66 |
67 | 3. **Pure Functions:** Redux reducers must be pure functions, meaning they do not modify the original state but produce a new state object. This ensures predictability since the same input will always produce the same output.
68 |
69 | 4. **Unidirectional Data Flow:** In Redux, state changes flow in a single direction. Actions are dispatched to the store, which triggers the corresponding reducers to update the state. Components can subscribe to the store to receive updates when the state changes.
70 |
71 | **Example Code:**
72 | ```javascript
73 | // Dispatch an action to add an item to the cart
74 | store.dispatch(addToCartAction);
75 |
76 | // Component subscribes to the store and receives updates
77 | store.subscribe(() => {
78 | const cartItems = store.getState().cart;
79 | // Update component with the new cartItems
80 | });
81 | ```
82 |
83 | - By following this unidirectional data flow and ensuring that state changes are handled by pure functions, Redux guarantees a predictable state. This predictability makes it easier to understand and reason about how data changes in an application, especially as the application grows in complexity.
84 |
85 | - Please note that the example code provided is a simplified representation to illustrate the concepts. In a real-world application, you would typically have multiple actions, reducers, and a more structured state tree.
86 |
87 |
88 | ---
89 |
90 | ## 5. What is a Redux action?
91 | - A Redux action is a plain JavaScript object that describes an intention to change the state. It typically has a type property that defines the type of action being performed and can contain additional data or payload relevant to that action.
92 |
93 | - **Example Code:**
94 |
95 | ```javascript
96 | // Define an action to add a book to the cart
97 | const addToCartAction = {
98 | type: 'ADD_TO_CART',
99 | payload: { id: 1, title: 'Book Title', price: 10 }
100 | };
101 | ```
102 |
103 | - In this example, addToCartAction is an action object with a type of **'ADD_TO_CART'** and a payload containing the details of the book being added to the cart.
104 |
105 | Actions in Redux are plain JavaScript objects, which makes them easy to understand and serialize. They represent the intention to modify the state and are dispatched to the Redux store using the store.dispatch() method.
106 |
107 | Actions serve as a clear and explicit way to communicate state changes within the application. Reducers then handle these actions and produce a new state based on the current state and the action received.
108 | ---
109 |
110 | ## 6. What is a Redux reducer?
111 | - A Redux reducer is a pure function responsible for handling actions and producing a new state based on the current state and the action received.
112 |
113 | - **Example Code:**
114 |
115 | ```javascript
116 | // Define a reducer to manage the cart state
117 | const cartReducer = (state = [], action) => {
118 | switch (action.type) {
119 | case 'ADD_TO_CART':
120 | return [...state, action.payload];
121 | case 'REMOVE_FROM_CART':
122 | return state.filter(item => item.id !== action.payload);
123 | default:
124 | return state;
125 | }
126 | };
127 | ```
128 |
129 | - In this example, cartReducer is a function that takes the current state and an action as parameters. It uses a switch statement to handle different action types and return a new state based on the action received.
130 |
131 | - In the ADD_TO_CART case, the reducer creates a new array by spreading the existing state and appending the action.payload (which represents the item to be added to the cart). This ensures immutability by not modifying the original state.
132 |
133 | - In the REMOVE_FROM_CART case, the reducer filters out the item with the specified action.payload from the state array, returning a new array without that item.
134 |
135 | - The default case returns the current state if the action type does not match any of the cases.
136 |
137 | Reducers in Redux are pure functions, meaning they do not modify the original state or have any side effects. They take the current state and an action, and always produce a new state object as output based on the logic defined within the function.
138 |
139 | ---
140 |
141 | ## 7. What is the role of the Redux store?
142 | - The Redux store is the central hub of an application's state. Its role is to hold the entire state tree and provide methods to dispatch actions, subscribe to state changes, and access the current state.
143 |
144 | - **Example Code:**
145 |
146 | ```javascript
147 | import { createStore } from 'redux';
148 | import rootReducer from './reducers';
149 | const store = createStore(rootReducer);
150 | ```
151 |
152 | In this example, the Redux store is created using the createStore function provided by Redux. The rootReducer is a function that combines multiple reducers into a single reducer function representing the overall state tree.
153 |
154 | ### **The Redux store has the following key responsibilities:**
155 |
156 | 1. **Holds the State:** The store holds the complete state tree of your application. This state represents the current snapshot of all data in your application.
157 |
158 | 2. **Dispatches Actions:** The store provides a method called dispatch to dispatch actions to update the state. When an action is dispatched, it is sent to the appropriate reducers, triggering state updates.
159 |
160 | - **Example Code:**
161 |
162 | ```javascript
163 | // Dispatch an action to add an item to the cart
164 | store.dispatch({ type: 'ADD_TO_CART', payload: { id: 1, name: 'Product 1', price: 10 } });
165 | ```
166 |
167 | 3. **Manages Reducers:** The store is responsible for managing the registered reducers. These reducers specify how the state should be updated in response to different actions. The store calls the reducers, passes the current state and the dispatched action, and produces a new state.
168 |
169 | 4. **Enables State Access:** The store provides a method called getState() to access the current state. Components can subscribe to the store to receive updates whenever the state changes and retrieve the latest state from the store.
170 |
171 | - **Example Code:**
172 |
173 | ```javascript
174 | // Component subscribes to the store and receives updates
175 | const unsubscribe = store.subscribe(() => {
176 | const state = store.getState();
177 | // Update component with the new state
178 | });
179 |
180 | // Unsubscribe from store updates
181 | unsubscribe();
182 | ```
183 | The Redux store plays a crucial role in managing the state of your application. It serves as the single source of truth, handles the dispatching of actions, manages the registered reducers, and allows components to access and subscribe to the state updates.
184 |
185 | ---
186 |
187 | ## 8. How does Redux enable state management in React applications?
188 | - Redux enables state management in React applications by providing a centralized store and facilitating the flow of state through components. It allows components to access and update the state in a predictable and efficient manner. Let's explore this with a code example:
189 |
190 | First, we need to set up Redux in a React application. This involves creating a Redux store and connecting it to the React components.
191 |
192 | - **Example Code:**
193 | ```javascript
194 |
195 | // store.js
196 | import { createStore } from 'redux';
197 | import rootReducer from './reducers';
198 |
199 | const store = createStore(rootReducer);
200 |
201 | export default store;
202 | ```
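
The store is then made available to the component tree by wrapping the application in the Provider component from react-redux. A minimal sketch (the index.js file name is an assumption):

```javascript
// index.js
import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import store from './store';
import App from './App';

// Provider makes the store accessible to every connected component below it
ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById('root')
);
```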
203 |
204 | Next, we can integrate Redux with a React component using the react-redux library. This involves connecting the component to the Redux store, mapping state and actions to component props, and using them to access and update the state.
205 |
206 | - **Example Code:**
207 | ```javascript
208 | // App.js
209 | import React from 'react';
210 | import { connect } from 'react-redux';
211 |
212 | const App = ({ count, increment }) => {
213 |   return (
214 |     <div>
215 |       <p>Counter: {count}</p>
216 |       <button onClick={increment}>Increment</button>
217 |     </div>
218 |   );
219 | };
220 |
221 | const mapStateToProps = (state) => ({
222 | count: state.counter.count,
223 | });
224 |
225 | const mapDispatchToProps = (dispatch) => ({
226 | increment: () => dispatch({ type: 'INCREMENT' }),
227 | });
228 |
229 | export default connect(mapStateToProps, mapDispatchToProps)(App);
230 | ```
231 |
232 | In this example, the App component is connected to the Redux store using the connect function from react-redux. The mapStateToProps function maps the count property from the state to the component's props. The mapDispatchToProps function maps the increment action dispatch to the component's props.
233 |
234 | Now, the App component can access the state through the props and trigger state updates by dispatching actions.
235 |
236 | Redux facilitates state management by allowing components to access the state via props, eliminating the need to pass down props through multiple levels of the component tree. Components only need to subscribe to the relevant parts of the state they require.
237 |
238 | By using Redux, the state becomes more centralized and predictable. Actions flow through the reducers, which update the state in an immutable way. Components can subscribe to the store to receive updates and re-render efficiently when the state changes.
239 |
240 | Overall, Redux simplifies state management in React applications, making it easier to handle complex state interactions and maintain a consistent and scalable architecture.
241 |
242 | ---
243 | ## 9. What is the benefit of using Redux in large-scale applications?
244 | - Using Redux in large-scale applications offers several benefits. It provides a predictable and structured approach to state management, making it easier to reason about and test. Redux also facilitates communication and collaboration among team members by enforcing a consistent pattern for managing state. Additionally, Redux's centralized store allows for easier debugging and performance optimization in large applications.
245 |
246 | ---
247 |
248 | ## 10. What is middleware in Redux and how is it used?
249 | - Middleware in Redux is a mechanism that intercepts actions before they reach the reducers. It allows for additional functionality to be added to the dispatch process. Middleware can be used to perform tasks such as logging actions, making asynchronous API calls, or modifying actions before they reach the reducers. Popular middleware libraries for Redux include Redux Thunk and Redux Saga.
250 |
251 | - Middleware in Redux is a mechanism that sits between the dispatching of an action and the moment it reaches the reducer. It allows you to add extra functionality to the dispatch process. Middleware provides a way to intercept and handle actions in a modular and reusable manner.
252 |
253 | - When an action is dispatched in Redux, it passes through the middleware chain before reaching the reducer. Each middleware has the ability to examine, modify, or even block the action before passing it along to the next middleware or the reducer.
254 |
255 | - Middleware is commonly used for tasks such as logging actions, making asynchronous API calls, performing side effects, or transforming actions. It helps separate concerns and keeps the action logic decoupled from the components.
256 |
257 | - To use middleware in Redux, you typically apply it when creating the store using the applyMiddleware function provided by Redux. This function takes one or more middleware functions as arguments and returns an enhanced store creator.
258 |
259 | - ### **Example code:**
260 |
261 | ```javascript
262 | import { createStore, applyMiddleware } from 'redux';
263 | import thunk from 'redux-thunk';
264 | import rootReducer from './reducers';
265 |
266 | const store = createStore(rootReducer, applyMiddleware(thunk));
267 | ```
268 |
269 | In this example, Redux Thunk middleware is applied using applyMiddleware(thunk). Redux Thunk allows you to dispatch functions as actions, enabling asynchronous actions and handling side effects.
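
For instance, with Thunk applied, an action creator can return a function that performs asynchronous work before dispatching a plain action. A minimal sketch (the endpoint and action type are illustrative):

```javascript
// A thunk: an action creator that returns a function instead of a plain object.
// The endpoint and action type below are illustrative.
const fetchCart = () => async (dispatch) => {
  const response = await fetch('/api/cart');
  const items = await response.json();
  dispatch({ type: 'SET_CART', payload: items });
};

// Dispatched like any other action; the thunk middleware invokes the function
store.dispatch(fetchCart());
```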
270 |
271 | Middleware functions are typically written as a chain of nested functions: the outer function receives the store, the next function receives next (the following middleware in the chain), and the innermost function receives the action. The innermost function can examine or manipulate the action before calling next(action) to pass it along to the next middleware or the reducer.
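
A minimal hand-written logging middleware following that shape might look like this (a sketch, not a library API):

```javascript
import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';
import rootReducer from './reducers';

// Logs every action and the state that results from it
const logger = (store) => (next) => (action) => {
  console.log('dispatching:', action);
  const result = next(action); // hand the action to the next middleware or the reducer
  console.log('next state:', store.getState());
  return result;
};

const store = createStore(rootReducer, applyMiddleware(thunk, logger));
```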
272 |
273 | Middleware provides a flexible and extensible way to enhance Redux's capabilities and handle complex scenarios like asynchronous operations or logging. It enables you to customize the behavior of actions and perform additional tasks in a controlled and reusable manner.
274 |
275 |
276 |
277 | ## 11. How to use Redux DevTools extension for debugging?
278 |
279 | 
280 |
281 | - To use the Redux DevTools extension for debugging your Redux application, follow these steps:
282 |
283 | 1. **Install the Redux DevTools extension:**
284 | - For Google Chrome: Go to the Chrome Web Store and search for "Redux DevTools" to find and install the extension.
285 | - For Mozilla Firefox: Go to the Firefox Add-ons Marketplace and search for "Redux DevTools" to find and install the extension.
286 |
287 | 2. **Integrate Redux DevTools into your Redux store setup:**
288 | - In your Redux store configuration file (often called store.js or similar), import the compose function from the Redux package:
289 |
290 | ```javascript
291 | import { createStore, applyMiddleware, compose } from 'redux';
292 | ```
293 | - Locate the line where you create your Redux store using createStore, and modify it as follows:
294 |
295 | ```javascript
296 | // Fall back to plain compose when the extension is not installed
297 | const composeEnhancers = window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ || compose;
298 |
299 | const store = createStore(
300 |   rootReducer,
301 |   composeEnhancers(applyMiddleware(/* any middleware you use */))
302 | );
303 | ```
304 |
305 | 3. **Ensure the Redux DevTools extension is open:**
306 |
307 | - Open your browser's extension toolbar and verify that the Redux DevTools extension is enabled and visible.
308 |
309 | 4. **Start your application and inspect the Redux DevTools:**
310 |
311 | - Launch your Redux-powered application in the browser.
312 | - Open the browser's developer tools panel (usually accessed by right-clicking and selecting "Inspect" or pressing Ctrl+Shift+I or Cmd+Option+I).
313 | - In the developer tools panel, navigate to the "Redux" or "Redux DevTools" tab.
314 | - You should see the Redux DevTools panel, which provides insights into your Redux state, actions, and time-travel debugging capabilities.
315 |
316 | The Redux DevTools extension allows you to inspect the state changes, track dispatched actions, and replay the state history. It greatly facilitates debugging and understanding how your Redux store behaves over time.
317 |
318 | ---
319 |
320 | ## 12. Can Redux be used with frameworks other than React?
321 | - Yes, Redux can be used with frameworks other than React. While Redux has become synonymous with React due to its popularity within the React ecosystem, it is a standalone library that can be used with any JavaScript framework or even with vanilla JavaScript applications. The core principles of Redux, such as the unidirectional data flow and the concepts of actions and reducers, can be applied in various environments.
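
A minimal sketch of Redux driving a plain JavaScript UI with no framework involved (the DOM element id is illustrative):

```javascript
import { createStore } from 'redux';

// A tiny counter reducer
const counter = (state = 0, action) =>
  action.type === 'INCREMENT' ? state + 1 : state;

const store = createStore(counter);

// Re-render by hand whenever the state changes (the element id is illustrative)
store.subscribe(() => {
  document.getElementById('count').textContent = store.getState();
});

store.dispatch({ type: 'INCREMENT' }); // the subscriber runs and shows 1
```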
322 |
323 |
324 | ---
325 |
326 | ## 13. History 📔 of Redux ( Bonus🎉)
327 | - Redux was created by **Dan Abramov** and **Andrew Clark** and was inspired by the Flux architecture pattern developed by Facebook. It was first introduced to the JavaScript community in **2015**.
328 |
329 | - The motivation behind Redux was to address the challenges of managing state in complex JavaScript applications, especially those using React. At the time, as applications grew in size and complexity, handling state became more difficult and error-prone.
330 |
331 | - Redux aimed to provide a solution by introducing a centralized state management approach. It emphasized a unidirectional data flow and the concept of a single source of truth for the application state.
332 |
333 | - The creators of Redux wanted to make state changes predictable and easier to reason about. By enforcing strict rules and patterns, Redux helped developers maintain a clear understanding of how data flows through an application and how state updates are handled.
334 |
335 | - Since its release, Redux has gained significant popularity and has become the de facto state management library for React applications. Its simplicity and scalability have made it a favorite choice for many developers working on JavaScript projects.
336 |
337 | - Over time, the Redux ecosystem has grown, with the introduction of middleware, tooling, and additional packages that enhance its functionality and ease of use. The community has contributed numerous extensions and integrations, making Redux a powerful tool for managing state in a wide range of applications.
338 |
339 | - Overall, Redux has had a profound impact on how developers handle state in JavaScript applications, providing a reliable and organized approach to managing complex application states.
340 |
341 | ---
342 |
343 | ### **THANKS 🤍 FOR READING**
344 |
345 | ### *If you like the content, don't forget to give the repository a star **🌟***
--------------------------------------------------------------------------------
/WebSoket/Readme.md:
--------------------------------------------------------------------------------
1 | ## How is web sockets different from HTTP?
2 |
3 | - Connection type: HTTP is a request-response protocol, where the client sends a request to the server, and the server responds with a response. WebSockets, on the other hand, provide a full-duplex, persistent connection between the client and the server that allows data to be transmitted in both directions at any time.
4 | - Data format: HTTP is a text-based protocol that uses structured messages (requests and responses) in a specific format. WebSockets use a lightweight binary framing protocol that allows for more efficient transmission of data and can carry both text and binary payloads, not just text.
5 | - Efficiency: Because HTTP requires a new request and response for each piece of data sent between the client and the server, it can be less efficient than WebSockets, which use a persistent connection to transmit data more efficiently.
6 | - Real-time capabilities: HTTP is not designed for real-time communication, and while techniques like long-polling can be used to simulate real-time communication, it is not as efficient as WebSockets, which are specifically designed for real-time communication.
7 | - Support for server-initiated updates: HTTP requests are always initiated by the client, whereas WebSockets can support server-initiated updates, where the server can send data to the client at any time without the client first requesting it.
8 |
9 |
10 |
11 | ## What are web sockets?
12 |
13 | - WebSockets are a communication protocol that allows real-time bidirectional data transfer between a client and a server over a single, persistent TCP connection. Unlike traditional HTTP requests, which are stateless and require a new connection to be established for each request, WebSockets allow for a long-lived connection to be established between a client and a server, enabling both the client and the server to send and receive data at any time without the overhead of establishing a new connection.
14 | - WebSockets are particularly useful for applications that require real-time data transfer, such as online gaming, chat applications, financial trading platforms, and other applications that require constant, low-latency communication between a client and a server.
15 | - An example of how WebSockets work in practice (a minimal client sketch follows this list):
16 | - 1) The client sends an HTTP request to the server requesting to open a WebSocket connection.
17 | - 2) The server responds with an HTTP response containing an "Upgrade" header that indicates that the connection is being upgraded to a WebSocket connection.
18 | - 3) The client and server negotiate the WebSocket protocol version and other parameters, and establish a persistent TCP connection between the client and the server.
19 | - 4) Once the WebSocket connection is established, both the client and the server can send data to each other at any time, without the need to establish a new connection.
20 | - 5) The data sent over the WebSocket connection is encapsulated in frames, which contain a header and a payload. The header specifies the type of data contained in the frame, such as text or binary data, and whether the frame is the final frame in a sequence of frames. The payload contains the actual data being sent.
21 | - 6) When the client or server wants to close the WebSocket connection, they send a special close frame to the other end of the connection, indicating that the connection is being closed.
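
A minimal browser-side sketch using the standard WebSocket API (the endpoint URL is illustrative):

```javascript
// Open a WebSocket connection; the browser performs the HTTP upgrade handshake
const socket = new WebSocket('wss://example.com/chat'); // illustrative endpoint

socket.addEventListener('open', () => {
  socket.send('Hello, server!'); // either side can now send at any time
});

socket.addEventListener('message', (event) => {
  console.log('Received from server:', event.data);
});

socket.addEventListener('close', () => {
  console.log('Connection closed');
});
```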
22 |
23 |
24 |
25 |
26 | ## Differences between webhooks and WebSockets
27 |
28 | - Webhooks are used for one-way communication from a source application to a destination application, while WebSockets facilitate two-way communication between server and client.
29 | - Webhooks are mostly used by two servers to pass information, while WebSockets are used primarily for server-to-client (mostly web browsers) communication.
30 | - Also, webhooks close the socket connection on the receiving application once a response has been sent back, while WebSockets keep the connection open for as long as required and not just for a single transfer of information.
31 | - In terms of communication protocols, WebSocket uses its own ws:// (or secure wss://) protocol, while webhooks use regular HTTP(S).
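
To make the contrast concrete, a webhook receiver is just an ordinary HTTP endpoint. A minimal sketch using Express (the route and port are illustrative):

```javascript
const express = require('express');
const app = express();
app.use(express.json());

// The source application POSTs an event here; the HTTP connection
// closes once the response is sent (route and port are illustrative)
app.post('/webhooks/payment', (req, res) => {
  console.log('Received webhook event:', req.body);
  res.sendStatus(200);
});

app.listen(3000);
```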
--------------------------------------------------------------------------------
/html/Readme.md:
--------------------------------------------------------------------------------
1 |
2 | ## What is BEM?
3 | - BEM is a way to name and organize CSS classes that helps make your code easier to understand, maintain, and reuse. It divides your interface into small, reusable parts called "blocks" that can have "elements" and "modifiers" to change their appearance or behavior. Using BEM makes it easier to create consistent and modular code, and helps you understand how different parts of your interface are related.
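
For example, a hypothetical "card" block written with BEM-style class names might look like this (all names are illustrative):

```html
<!-- "card" is the block; "card__title" and "card__description" are elements;
     "card--featured" is a modifier that changes the block's appearance -->
<div class="card card--featured">
  <h2 class="card__title">Post title</h2>
  <p class="card__description">A short description of the post.</p>
</div>
```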
4 |
5 |
6 |
7 |
8 | ## Briefly describe the correct usage of the following HTML5 semantic elements: `<header>`, `<article>`, `<section>`, `<footer>`
9 | - `<header>` is used to contain introductory and navigational information about a section of the page. This can include the section heading, the author’s name, time and date of publication, table of contents, or other navigational information.
10 |
11 | - `<article>` is meant to house a self-contained composition that can logically be independently recreated outside of the page without losing its meaning. Individual blog posts or news stories are good examples.
12 |
13 | - `<section>` is a flexible container for holding content that shares a common informational theme or purpose.
14 |
15 | - `<footer>` is used to hold information that should appear at the end of a section of content, such as the author, copyright information, or related links.
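
A small page skeleton putting them together (content is illustrative):

```html
<article>
  <header>
    <h1>Post title</h1>
    <p>By Jane Doe, published 1 May 2023</p>
  </header>
  <section>
    <p>The main body of the post, grouped by theme.</p>
  </section>
  <footer>
    <p>Copyright notice and related links for this article.</p>
  </footer>
</article>
```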