├── Topics
│   ├── Topic 1
│   │   ├── Strong and weak agent.md
│   │   ├── AI problems.md
│   │   ├── Environment.md
│   │   ├── History of AI.md
│   │   ├── Agent intoduction.md
│   │   ├── PEAS.md
│   │   ├── Introduction to AI.md
│   │   └── Types of agent.md
│   └── Topic 2
│       ├── Uninformed and informed.md
│       ├── Local Search.md
│       ├── IDS.md
│       ├── Greedy best first search.md
│       ├── Best First Search.md
│       ├── Hill climbing.md
│       ├── DFS.md
│       ├── UCS.md
│       ├── Bidirectional search.md
│       ├── BFS.md
│       ├── Problem solving.md
│       ├── Stochastic beam.md
│       ├── DLS.md
│       ├── A*.md
│       ├── Local beam.md
│       ├── AO*.md
│       ├── Problem formulation.md
│       └── State space search.md
└── README.md

/Topics/Topic 1/Strong and weak agent.md:
--------------------------------------------------------------------------------
### Strong agent
A strong agent is an agent that is capable of achieving a wide range of goals in complex and dynamic environments. It has a high degree of intelligence and can adapt to changing circumstances and new tasks without significant intervention from its human operators. Strong agents are often associated with the concept of artificial general intelligence (AGI), which refers to a hypothetical form of AI that can perform any intellectual task that a human can.

### Weak agent
A weak agent is an agent that is designed to perform a specific task or set of tasks in a narrow domain. It has a lower degree of intelligence and is less capable of adapting to new situations or tasks outside of its intended purpose. This kind of AI is also known as artificial narrow intelligence (ANI).

--------------------------------------------------------------------------------
/Topics/Topic 1/AI problems.md:
--------------------------------------------------------------------------------
### AI problems

- Bias and discrimination: AI systems may reflect and perpetuate biases present in the data used to train them, leading to discrimination against certain groups.

- Lack of transparency: Some AI systems are opaque, meaning it can be difficult to understand how they make decisions or predictions.

- Job displacement: AI systems can automate many tasks traditionally done by humans, leading to job displacement and economic disruption.

- Security and privacy: AI systems may be vulnerable to cyberattacks or privacy breaches, leading to data theft or misuse.

- Ethical considerations: As AI systems become more advanced, they raise ethical questions about their impact on society and their use in certain domains, such as warfare.

- Regulation and governance: There is a need for clear regulations and governance frameworks to ensure that AI is developed and used responsibly and ethically.

--------------------------------------------------------------------------------
/Topics/Topic 2/Uninformed and informed.md:
--------------------------------------------------------------------------------
### Uninformed Search:
- Uninformed search, also known as blind search, is a type of search algorithm that explores the problem space without using any knowledge about the problem domain. In uninformed search, each possible path from the initial state to the goal state is considered equally, without any preference or bias. Examples of uninformed search algorithms include breadth-first search, depth-first search, and iterative deepening search.
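
To make the idea concrete, here is a minimal illustrative sketch (an invented example, not from any particular textbook) of a generic blind search. The only thing that distinguishes breadth-first from depth-first behaviour is whether the frontier is treated as a FIFO queue or a LIFO stack; the toy graph and `successors` function are hypothetical stand-ins for a real problem definition.

```python
from collections import deque

def blind_search(start, is_goal, successors, use_queue=True):
    """Generic uninformed search: a FIFO frontier gives BFS, a LIFO frontier gives DFS."""
    frontier = deque([[start]])                  # the frontier stores whole paths
    visited = {start}
    while frontier:
        # popleft() -> FIFO (BFS); pop() -> LIFO (DFS)
        path = frontier.popleft() if use_queue else frontier.pop()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                                  # no solution found

# Hypothetical toy state space, for illustration only
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["G"], "E": ["G"], "G": []}
print(blind_search("A", lambda s: s == "G", lambda s: graph[s]))  # ['A', 'B', 'D', 'G']
```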

### Informed Search:
- Informed search, also known as heuristic search, is a type of search algorithm that uses domain-specific knowledge to guide the search and improve its efficiency. In informed search, a heuristic function is used to estimate the distance or cost from the current state to the goal state. The heuristic function provides a measure of the "goodness" of each possible path, and the algorithm prioritizes the paths that are most likely to lead to the goal state. Examples of informed search algorithms include A* search, best-first search, and hill climbing.

--------------------------------------------------------------------------------
/Topics/Topic 2/Local Search.md:
--------------------------------------------------------------------------------
### Local Search:
Local search is an optimization algorithm used in AI and computer science to find the optimal solution by iteratively improving the current solution. Local search is used for problems where finding the global optimal solution is infeasible, and the focus is on finding a solution that is "good enough".

#### Algorithm:

- Initialize the search with a random or initial solution
- Generate a set of neighboring solutions
- Evaluate the quality of each neighboring solution
- Choose the best neighboring solution as the new current solution
- Repeat steps 2-4 until the stopping condition is met

#### Performance Evaluation:
Local search is a fast and efficient algorithm for finding a "good enough" solution to optimization problems. However, it can get stuck in local optima, and it does not guarantee finding the global optimal solution. Therefore, it is often used in combination with other optimization algorithms to improve its performance.

--------------------------------------------------------------------------------
/Topics/Topic 2/IDS.md:
--------------------------------------------------------------------------------
### Iterative Deepening Search (IDS):
IDS is a blind search algorithm that combines the advantages of BFS and DFS. It performs DFS up to a certain depth limit, and then starts over with a higher depth limit if the goal state is not found. IDS guarantees that the shortest path from the initial state to the goal state will be found first. The performance of IDS depends on the branching factor of the problem and the depth of the goal state.

#### Algorithm:

- Set the depth limit to 0.
- Repeat the following steps:
  - a. Perform DFS up to the current depth limit.
  - b. If the goal state is found, return the path from the initial state to the goal state.
  - c. Increment the depth limit.
- Return failure if the depth limit exceeds the maximum depth.

#### Performance Evaluation:
The time complexity of IDS is O(b^d), where b is the branching factor of the problem and d is the depth of the goal state. The space complexity of IDS is O(bd), as it stores only the nodes along the path from the root to the current node.

--------------------------------------------------------------------------------
/Topics/Topic 2/Greedy best first search.md:
--------------------------------------------------------------------------------
### Greedy Best-First Search:
Greedy best-first search is a type of best-first search algorithm that chooses the next state based only on the heuristic value, without considering the actual cost of reaching that state. Greedy best-first search is a type of greedy algorithm and can be less efficient than more complex search algorithms, as it does not always choose the optimal path.

#### Algorithm:

- Initialize the search with the initial state
- Generate the possible next states from the current state
- Calculate the heuristic value for each possible next state
- Choose the next state with the lowest heuristic value
- Repeat steps 2-4 until the goal state is reached or no more states can be generated

#### Performance Evaluation:
Greedy best-first search can be a fast and simple algorithm for problems with simple and easy-to-calculate heuristic functions. However, it can be less efficient than more complex search algorithms such as A* search, as it does not take into account the actual cost of reaching the current state.
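
As a rough sketch of the outline above (the graph and heuristic values below are invented for illustration), greedy best-first search is simply a best-first search whose priority queue is ordered by h(n) alone:

```python
import heapq

def greedy_best_first(start, is_goal, successors, h):
    """Expand the frontier node with the smallest heuristic value h(n) first."""
    frontier = [(h(start), [start])]             # priority queue ordered by h only
    visited = {start}
    while frontier:
        _, path = heapq.heappop(frontier)
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (h(nxt), path + [nxt]))
    return None

# Hypothetical graph and straight-line-distance style heuristic values
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
print(greedy_best_first("S", lambda s: s == "G", lambda s: graph[s], h.get))
# -> ['S', 'A', 'G']
```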

--------------------------------------------------------------------------------
/Topics/Topic 2/Best First Search.md:
--------------------------------------------------------------------------------
### Best-First Search:
Best-first search is an informed search algorithm that uses a heuristic function to guide the search and prioritize the most promising paths. The heuristic function estimates the cost of reaching the goal state from each possible state, and the algorithm chooses the path with the lowest estimated cost. Best-first search can be used in both single-agent and multi-agent settings.

#### Algorithm:

- Initialize the search with the initial state
- Generate the possible next states from the current state
- Calculate the heuristic value for each possible next state
- Choose the next state with the lowest heuristic value
- Repeat steps 2-4 until the goal state is reached or no more states can be generated

#### Performance Evaluation:
Best-first search performs better than uninformed search algorithms such as depth-first search and breadth-first search. However, it can be less efficient than more complex informed search algorithms such as A* search, as it does not take into account the cost of reaching the current state.

--------------------------------------------------------------------------------
/Topics/Topic 2/Hill climbing.md:
--------------------------------------------------------------------------------
### Hill Climbing Search:
Hill climbing is a local search algorithm that starts with an initial solution and iteratively makes small incremental improvements until no better solution can be found or a local maximum is reached. The algorithm selects the best neighboring solution and compares it with the current solution. If the neighboring solution is better, it becomes the current solution, and the process continues until no better solution can be found.

#### Algorithm:

- Initialize the current solution
- Repeat the following steps until no better solution can be found:
  - a. Generate all neighboring solutions
  - b. Evaluate the quality of each neighboring solution
  - c. Select the best neighboring solution
  - d. If the best neighboring solution is better than the current solution, update the current solution to the best neighboring solution
  - e. Otherwise, terminate and return the current solution

#### Performance Evaluation:
Hill climbing is a simple and easy-to-implement algorithm that can quickly find good solutions for small problems. However, it can get stuck in local optima, which makes it unsuitable for large and complex problems.
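
A minimal sketch of the loop above on a toy one-dimensional landscape (the objective function and the ±1 neighbourhood are illustrative assumptions):

```python
def hill_climb(start, objective, neighbors):
    """Steepest-ascent hill climbing: move to the best neighbor until stuck."""
    current = start
    while True:
        best = max(neighbors(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current               # local (possibly global) maximum reached
        current = best

# Toy landscape: maximize f(x) = -(x - 7)**2 over the integers, stepping +/- 1
f = lambda x: -(x - 7) ** 2
print(hill_climb(0, f, lambda x: [x - 1, x + 1]))   # -> 7
```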

--------------------------------------------------------------------------------
/Topics/Topic 2/DFS.md:
--------------------------------------------------------------------------------
### Depth-First Search (DFS):
DFS is a blind search algorithm that explores the problem space in a depth-first manner, i.e., it explores as far as possible along each branch before backtracking. The algorithm maintains a stack to keep track of the nodes that need to be expanded, and it visits each node only once. DFS does not guarantee that the shortest path from the initial state to the goal state will be found first. The performance of DFS depends on the depth of the goal state and the branching factor of the problem.

#### Algorithm:

- Initialize the stack with the initial state.
- While the stack is not empty:
  - a. Pop the next state from the stack.
  - b. If the state is the goal state, return the path from the initial state to the goal state.
  - c. Generate all possible successor states of the current state.
  - d. Push the successor states onto the stack.
- Return failure once the stack is empty.

#### Performance Evaluation:
The time complexity of DFS is O(b^m), where b is the branching factor of the problem and m is the maximum depth of the tree. The space complexity of DFS is O(bm), as it stores only the nodes along the path from the root to the current node.

--------------------------------------------------------------------------------
/Topics/Topic 2/UCS.md:
--------------------------------------------------------------------------------
### Uniform Cost Search:
Uniform cost search is a type of search algorithm that explores the search space by prioritizing the nodes with the lowest path cost. It is an optimal algorithm that is guaranteed to find the cheapest path from the initial state to the goal state. Uniform cost search is particularly useful in situations where the path cost between states is not uniform.

#### Algorithm:

- Initialize the search tree with the initial state and a cost of 0.
- While the search tree is not empty, select the node with the lowest path cost.
- If the selected node is the goal state, return the path to the goal state.
- For each possible action from the selected node, generate a new state and calculate its path cost.
- Add the new state to the search tree with its calculated path cost.
- Repeat steps 2-5 until the goal state is found.

#### Performance Evaluation:
Uniform cost search is an optimal algorithm that is guaranteed to find the cheapest (lowest-cost) path from the initial state to the goal state. However, it may not be feasible for large search spaces, as it requires keeping track of the path cost for each state. Additionally, it may be slow in situations where path costs vary significantly between states.
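
A compact sketch of the procedure above, using a priority queue keyed on accumulated path cost; the weighted graph is a made-up example:

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """Expand the frontier node with the lowest accumulated path cost g(n)."""
    frontier = [(0, [start])]            # (path cost, path)
    best_cost = {start: 0}
    while frontier:
        cost, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return cost, path
        for nxt, step in successors(state):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, path + [nxt]))
    return None

# Hypothetical weighted graph: state -> [(neighbor, edge cost), ...]
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 1), ("G", 7)], "B": [("G", 2)], "G": []}
print(uniform_cost_search("S", "G", lambda s: graph[s]))  # (4, ['S', 'A', 'B', 'G'])
```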

--------------------------------------------------------------------------------
/Topics/Topic 2/Bidirectional search.md:
--------------------------------------------------------------------------------
### Bidirectional Search:
Bidirectional search is a type of search algorithm that explores the search space from both the initial state and the goal state simultaneously. It is a useful algorithm in situations where the search space is too large and the goal state is far from the initial state. Bidirectional search reduces the time and memory complexity of the search process by exploring the search space from both ends.
![image](https://user-images.githubusercontent.com/93985255/230156542-cda91c21-ee80-4157-976a-60ead0f74faf.png)

#### Algorithm:

- Initialize one search tree with the initial state and another with the goal state.
- While neither search has found the goal state, expand the search from both ends simultaneously.
- If a state is reached from both searches, a solution has been found.
- Return the path from the initial state to the common state and from the goal state to the common state.

#### Performance Evaluation:
Bidirectional search is a useful algorithm in situations where the search space is large and the goal state is far from the initial state. It reduces the search time and memory requirements by exploring the search space from both ends. However, it requires additional memory to store both search trees and may not be feasible for very large search spaces.
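
A minimal sketch of the idea, alternating one breadth-first layer from each end until the two frontiers meet. It assumes actions are reversible (the toy graph is undirected), which bidirectional search generally requires:

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Run two breadth-first searches, from start and from goal, until they meet."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other):
        for _ in range(len(frontier)):           # expand exactly one BFS layer
            state = frontier.popleft()
            for nxt in neighbors(state):
                if nxt not in parents:
                    parents[nxt] = state
                    if nxt in other:             # the two searches have met
                        return nxt
                    frontier.append(nxt)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b) or \
               expand(frontier_b, parents_b, parents_f)
        if meet:
            path, node = [], meet
            while node is not None:              # walk back to the start
                path.append(node)
                node = parents_f[node]
            path.reverse()
            node = parents_b[meet]
            while node is not None:              # walk forward to the goal
                path.append(node)
                node = parents_b[node]
            return path
    return None

# Hypothetical undirected graph
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
print(bidirectional_search("A", "E", lambda s: graph[s]))  # ['A', 'B', 'C', 'D', 'E']
```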

--------------------------------------------------------------------------------
/Topics/Topic 2/BFS.md:
--------------------------------------------------------------------------------
### Breadth-First Search (BFS):
BFS is a blind search algorithm that explores the problem space in a breadth-first manner, i.e., it expands all the nodes at the current level before moving to the next level. The algorithm maintains a queue to keep track of the nodes that need to be expanded, and it visits each node only once. BFS guarantees that the shortest path from the initial state to the goal state will be found first. The performance of BFS depends on the branching factor of the problem, as it explores all the nodes at each level before moving to the next level.

![image](https://user-images.githubusercontent.com/93985255/230155281-087ef0fa-101b-434f-a85a-c8ad2587710b.png)

#### Algorithm:

- Initialize the queue with the initial state.
- While the queue is not empty:
  - a. Dequeue the next state from the queue.
  - b. If the state is the goal state, return the path from the initial state to the goal state.
  - c. Generate all possible successor states of the current state.
  - d. Enqueue the successor states onto the queue.
- Return failure once the queue is empty.

#### Performance Evaluation:
The time complexity of BFS is O(b^d), where b is the branching factor of the problem and d is the depth of the goal state. The space complexity of BFS is also O(b^d), as it stores all the nodes at each level.

--------------------------------------------------------------------------------
/Topics/Topic 2/Problem solving.md:
--------------------------------------------------------------------------------
### Problem Solving
Problem solving is a fundamental aspect of AI: it is the process of finding solutions to complex problems through various methods and techniques. AI systems are designed to solve problems in a variety of domains, such as natural language processing, image recognition, and decision-making.

#### Representation:
- AI systems represent problems and solutions in a way that can be processed by machines. For example, in natural language processing, a problem can be represented as a text input, and the solution can be represented as a set of actions that the AI system needs to take.

#### Search:
- AI systems use search algorithms to find the best solution to a problem. These algorithms can be simple, like brute-force search, or more complex, like heuristic search, which uses domain-specific knowledge to guide the search.

#### Reasoning:
- AI systems use logical reasoning to draw conclusions from available information. For example, in a diagnostic system, the AI system may use a set of rules to determine the cause of a patient's symptoms.

#### Optimization:
- AI systems use optimization algorithms to find the best solution to a problem given a set of constraints. For example, in a scheduling system, the AI system may use an optimization algorithm to schedule tasks in a way that minimizes the overall time or cost.

--------------------------------------------------------------------------------
/Topics/Topic 1/Environment.md:
--------------------------------------------------------------------------------
### Environment
The environment refers to the external context in which an agent operates. An environment can be physical or virtual and is defined by the set of states, actions, and rewards that an agent can perceive and interact with.

### Types of Environment

- Fully observable environment: In this type of environment, the agent can directly observe the complete state of the environment at each time step.

- Partially observable environment: In this type of environment, the agent cannot directly observe the complete state of the environment, but must infer it from the observations it receives through its sensors.

- Deterministic environment: In this type of environment, the outcome of an agent's actions is completely predictable and does not involve any randomness or uncertainty.

- Stochastic environment: In this type of environment, the outcome of an agent's actions involves some degree of randomness or uncertainty.

- Episodic environment: In this type of environment, the agent's experience is divided into a sequence of discrete episodes, where each episode consists of a sequence of actions, observations, and rewards.

- Sequential environment: In this type of environment, the agent's experience is a continuous sequence of actions, observations, and rewards that are interdependent and can influence future outcomes.

--------------------------------------------------------------------------------
/Topics/Topic 1/History of AI.md:
--------------------------------------------------------------------------------
### History of AI

- 1943: McCulloch and Pitts propose a model of an artificial neuron, which is the basis for neural networks.

- 1950: Alan Turing proposes the Turing Test, a measure of a machine's ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human.

- 1951: Dietrich Prinz writes the first chess-playing program (limited to solving mate-in-two problems).

- 1956: The Dartmouth Conference marks the birth of AI as a field of research. The conference was attended by John McCarthy, Marvin Minsky, Claude Shannon, and other pioneers of AI.

- 1958: John McCarthy invents LISP, a programming language used for AI research.

- 1966: The ELIZA program, a natural language processing program that simulates conversation, is developed by Joseph Weizenbaum.

- 1969: Shakey, the first mobile robot, is developed at Stanford Research Institute.

- 1970s-1980s: The development of expert systems, which use knowledge-based rules to solve complex problems in specific domains.

- 1990s: The development of machine learning algorithms, which enable machines to learn from data and improve performance over time.

- 2010s: The emergence of deep learning, a subset of machine learning that uses artificial neural networks to learn from data and achieve state-of-the-art performance in tasks such as image recognition and natural language processing.

--------------------------------------------------------------------------------
/Topics/Topic 2/Stochastic beam.md:
--------------------------------------------------------------------------------
### Stochastic Beam Search:
Stochastic beam search is a variant of beam search that randomly selects a subset of the best solutions to explore in the next iteration. The algorithm generates k random initial solutions and evaluates their quality. In each iteration, it selects the best m solutions among the k solutions and randomly selects k new solutions from the neighborhood of the best m solutions.

#### Algorithm:

- Initialize k random solutions
- Repeat the following steps until no better solution can be found:
  - a. Evaluate the quality of each solution
  - b. Select the best m solutions among the k solutions
  - c. Generate all neighboring solutions of the best m solutions
  - d. Randomly select k new solutions from the neighboring solutions
  - e. If the best new solution is better than the current solution, update the current solution to the best new solution
  - f. Otherwise, terminate and return the current solution

#### Performance Evaluation:
Stochastic beam search can escape local optima by randomly exploring the search space. It can generate diverse solutions and quickly converge to a good solution. However, it requires more computational resources than hill climbing and greedy local search, and its performance can be highly dependent on the number of initial solutions k and the number of best solutions m to select.

--------------------------------------------------------------------------------
/Topics/Topic 2/DLS.md:
--------------------------------------------------------------------------------
### Depth-Limited Search:
Depth-limited search is a type of search algorithm that limits the search depth to a predefined level. It is a variation of depth-first search that avoids infinite paths by cutting off the search at the depth limit. The algorithm starts at the initial state and explores all possible paths of a given length before moving to the next level. If the goal state is not found at the specified depth, the algorithm backtracks and explores other paths. The algorithm terminates when the goal state is found or all possible paths have been explored up to the predefined depth limit.

#### Algorithm:

- Initialize the search tree with the initial state.
- If the current state is the goal state, return the path to the goal state.
- If the depth limit has been reached, return failure.
- For each possible action from the current state, generate a new state.
- Recursively apply the algorithm to the new state with the current depth increased by one (the depth limit itself stays fixed).
- If the goal state is found, return the path to the goal state.

#### Performance Evaluation:
Depth-limited search is useful in situations where the search space is large and the goal state is deep within the search space. It is a memory-efficient algorithm that can find solutions quickly if the goal state is not too deep. However, if the goal state lies beyond the specified depth limit, the algorithm will not find a solution.
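
A small recursive sketch of the procedure (the toy graph is invented for illustration). Note that iterative deepening search (IDS, described earlier) is simply this procedure called with limit = 0, 1, 2, ... until it succeeds:

```python
def depth_limited_search(state, goal, successors, limit, path=None):
    """Recursive depth-limited DFS; returns a path or None within `limit` moves."""
    path = path or [state]
    if state == goal:
        return path
    if limit == 0:                       # depth budget exhausted: cut off here
        return None
    for nxt in successors(state):
        if nxt not in path:              # avoid cycles along the current path
            result = depth_limited_search(nxt, goal, successors, limit - 1, path + [nxt])
            if result:
                return result
    return None

# Hypothetical graph; the goal is 3 steps deep, so limit=2 fails and limit=3 succeeds
graph = {"A": ["B"], "B": ["C"], "C": ["G"], "G": []}
print(depth_limited_search("A", "G", lambda s: graph[s], limit=2))  # None
print(depth_limited_search("A", "G", lambda s: graph[s], limit=3))  # ['A', 'B', 'C', 'G']
```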

--------------------------------------------------------------------------------
/Topics/Topic 2/A*.md:
--------------------------------------------------------------------------------
### A* Algorithm:
The A* algorithm is a widely used search algorithm in artificial intelligence that combines the best features of uniform cost search and greedy search. A* uses a heuristic function to guide the search towards the optimal solution while also considering the cost of reaching that solution.

#### Algorithm:

- Initialize the open and closed sets, with the start node as the only member of the open set.
- While the open set is not empty, select the node with the lowest f(n) value, where f(n) = g(n) + h(n), g(n) is the cost of the path from the start node to n, and h(n) is the heuristic estimate of the cost from n to the goal.
- If the selected node is the goal node, return the path to the goal.
- For each successor of the selected node, calculate its g and h values, and add it to the open set if it is not already there or if the new g value is lower than the previous one.
- Move the selected node from the open set to the closed set.

#### Performance Evaluation:
With an admissible heuristic (one that never overestimates the cost to the goal), A* is complete and optimal: it will always find the optimal solution if one exists. A* performs well when the heuristic function is well designed, as it can quickly find the optimal solution without expanding too many nodes. However, A* can be slow when the search space is very large or when the heuristic is not informative.
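
A minimal sketch of the algorithm above; the weighted graph and the (intended-to-be-admissible) heuristic values are invented for illustration:

```python
import heapq

def a_star(start, goal, successors, h):
    """A*: expand the frontier node minimizing f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, [start])]          # (f, g, path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return g, path
        for nxt, step in successors(state):
            new_g = g + step
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, path + [nxt]))
    return None

# Hypothetical weighted graph and an admissible heuristic (h never overestimates)
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)], "G": []}
h = {"S": 5, "A": 4, "B": 2, "G": 0}
print(a_star("S", "G", lambda s: graph[s], h.get))   # (6, ['S', 'A', 'B', 'G'])
```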

--------------------------------------------------------------------------------
/Topics/Topic 2/Local beam.md:
--------------------------------------------------------------------------------
### Local beam search:
Local beam search is a variant of the traditional beam search algorithm, used for solving optimization problems. It is a heuristic search algorithm that starts with a set of randomly generated solutions and iteratively improves them by exploring the neighborhood of the current solutions. The algorithm maintains a fixed number of solutions, known as the beam width, and chooses the best solutions from the neighborhood of each of the current solutions. This process is repeated until a satisfactory solution is found.

#### Algorithm:

- Initialize the beam with k randomly generated solutions.
- Repeat until a satisfactory solution is found:
  - a. For each solution s in the beam, generate all its neighbors.
  - b. Select the k best solutions from the union of the current beam and the set of all neighbors.
  - c. Replace the current beam with the k selected solutions.

#### Performance Evaluation:
The performance of the local beam search algorithm depends on various factors, such as the size of the problem, the choice of the beam width, and the quality of the initial solutions. In general, a larger beam width will increase the likelihood of finding better solutions, but it will also increase the computational complexity of the algorithm. On the other hand, a smaller beam width will decrease the computational complexity but may result in suboptimal solutions.

--------------------------------------------------------------------------------
/Topics/Topic 1/Agent intoduction.md:
--------------------------------------------------------------------------------
In AI, an agent is a software or hardware entity that perceives its environment and takes actions to achieve a specific goal. An agent can be thought of as a decision maker that interacts with its environment through sensors and actuators.

### Example of an agent
A self-driving car can be considered an agent because it perceives its environment through sensors such as cameras and lidar, and takes actions such as steering and accelerating to reach its destination.

### Components of an agent
There are various types of agent architectures, but most of them share the following components:

- Perception: This component allows the agent to sense its environment through sensors such as cameras, microphones, and other sensing devices.

- Reasoning: This component allows the agent to reason about its environment and make decisions based on that information.

- Action: This component allows the agent to take actions in the environment through actuators such as motors, speakers, and other effectors.

- Learning: This component allows the agent to learn from its experiences and improve its performance over time.

### Role of an agent
The role of an agent in AI is to solve problems in complex and dynamic environments. By perceiving its environment, reasoning about the information it receives, and taking actions to achieve a specific goal, an agent can solve problems that would be difficult or impossible for humans to solve alone.

--------------------------------------------------------------------------------
/Topics/Topic 2/AO*.md:
--------------------------------------------------------------------------------
### AO* Algorithm:
AO* is a best-first search algorithm in the A* family, usually presented for problems represented as AND-OR graphs, whose defining feature is a heuristic estimate that is revised during the search. The idea behind this cost revision is to improve the heuristic function over time by incorporating the actual cost of reaching each node during the search.

#### Algorithm:

- Initialize the open and closed sets, with the start node as the only member of the open set.
- While the open set is not empty, select the node with the lowest f(n) value, where f(n) = g(n) + h(n), g(n) is the cost of the path from the start node to n, and h(n) is the adaptive heuristic estimate of the cost from n to the goal.
- If the selected node is the goal node, return the path to the goal.
- For each successor of the selected node, calculate its g value, and update the adaptive heuristic function using the actual cost of reaching the successor.
- If the new adaptive heuristic estimate is lower than the previous one, add the successor to the open set with the updated h value.
- Move the selected node from the open set to the closed set.

#### Performance Evaluation:
AO* can converge faster than A* because it revises the heuristic function during the search. However, it can be computationally expensive, since the heuristic must be updated after each expansion, and it may converge to a suboptimal solution if the revised heuristic is not accurate enough.
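
To make the cost-revision idea concrete, here is a small illustrative sketch. Be aware that this is not textbook AO* (which expands AND-OR graphs); it borrows the update rule h(s) ← max(h(s), min over successors of c(s, s') + h(s')) from LRTA*-style heuristic learning, applied to a plain state graph. The graph and the deliberately misleading initial heuristic are invented:

```python
def adaptive_heuristic_search(start, goal, successors, h):
    """Greedy search that revises h(n) from observed edge costs as it goes
    (LRTA*-style update: h(s) <- max(h(s), min over s' of c(s, s') + h(s')))."""
    state, path, total = start, [start], 0
    while state != goal:
        # Estimated cost-to-goal through each successor
        estimates = {nxt: step + h[nxt] for nxt, step in successors(state)}
        best_next = min(estimates, key=estimates.get)
        # Revise the stored heuristic upward if it was too optimistic
        h[state] = max(h[state], estimates[best_next])
        total += dict(successors(state))[best_next]
        state = best_next
        path.append(state)
    return total, path, h

# Hypothetical graph; h(A) = 0 lures the search the wrong way at first,
# but the revision step corrects it and the search recovers.
graph = {"S": [("A", 1), ("B", 1)], "A": [("S", 1)], "B": [("G", 1)], "G": []}
h = {"S": 2, "A": 0, "B": 1, "G": 0}
print(adaptive_heuristic_search("S", "G", lambda s: graph[s], h))
# -> (4, ['S', 'A', 'S', 'B', 'G'], {... revised h ...})
```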

--------------------------------------------------------------------------------
/Topics/Topic 2/Problem formulation.md:
--------------------------------------------------------------------------------
### Problem formulation
Problem formulation is the process of defining a problem in a way that can be solved using computational methods. It involves identifying the relevant variables, constraints, and goals of the problem, and defining them in a way that can be processed by an AI system.

### Types of problem formulation

#### Incremental Formulation:
- In incremental formulation, a problem is broken down into a series of smaller sub-problems or stages, and each sub-problem is solved separately. The solutions to each sub-problem are then combined to find the overall solution to the problem. Incremental formulation is often used in dynamic or uncertain domains, where the problem changes over time and the solution must be updated accordingly. Examples of incremental formulation problems include robotics, where a robot must navigate through an environment and make decisions based on changing sensory input.

#### Complete State Formulation:
- In complete state formulation, the problem is defined in terms of a complete state, which includes all relevant information about the problem at a given point in time. The solution to the problem is then found by determining the sequence of actions that will transform the current state into the desired state. Complete state formulation is often used in well-structured problems, such as puzzles or games, where the solution can be found by exploring all possible states of the problem. Examples of complete state formulation problems include chess or Rubik's cube, where the objective is to find the sequence of moves that will lead to a winning state.
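
As an illustration, here is one possible complete-state formulation of the classic two-water-jugs puzzle (the jug capacities and the goal amount are arbitrary choices made for this example):

```python
# A complete-state formulation of the two-water-jugs problem.
# State: (a, b) = litres currently in the 4-litre and 3-litre jugs.
INITIAL = (0, 0)

def is_goal(state):
    return state[0] == 2                 # goal: exactly 2 litres in the big jug

def actions(state):
    """All states reachable in one move: fill, empty, or pour between jugs."""
    a, b = state
    return {
        (4, b), (a, 3),                  # fill either jug to the brim
        (0, b), (a, 0),                  # empty either jug
        (max(0, a - (3 - b)), min(3, b + a)),   # pour big -> small
        (min(4, a + b), max(0, b - (4 - a))),   # pour small -> big
    } - {state}                          # exclude no-op moves

print(sorted(actions((4, 0))))           # moves available when the big jug is full
```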

--------------------------------------------------------------------------------
/Topics/Topic 2/State space search.md:
--------------------------------------------------------------------------------
### State space search

State space search is a technique used in artificial intelligence and computer science to explore a problem space by generating and testing candidate solutions. In state space search, the problem is represented as a set of states, and the algorithm searches for a path from the initial state to a goal state.

#### Construction
- Define the problem space:
  - The problem space is defined by the set of possible states that the problem can be in. This involves defining the initial state, the goal state, and the set of possible actions that can be taken to transition from one state to another.

- Generate the search tree:
  - The search tree is a data structure that represents the problem space and the possible paths from the initial state to the goal state. The search tree is generated by applying the possible actions to the initial state to generate new states, and adding these new states to the tree.

- Perform state expansion:
  - State expansion involves exploring the search tree by expanding the current state to generate new states. This involves applying the possible actions to the current state, and adding the resulting states to the search tree.

- Check for the goal state:
  - At each step of the search, the algorithm checks whether the current state is the goal state. If it is, the algorithm terminates and returns the path from the initial state to the goal state.

- Repeat until the goal state is found:
  - The state space search continues to expand states and generate new paths until the goal state is found, or until all possible paths have been explored.
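
These steps can be seen end to end on the two-water-jugs formulation sketched in Problem formulation.md (restated here so the snippet is self-contained); the breadth-first expansion order used below is one choice among many:

```python
from collections import deque

def successors(state):
    """One-move transitions for the two-water-jugs problem (4 L and 3 L jugs)."""
    a, b = state
    return {(4, b), (a, 3), (0, b), (a, 0),
            (max(0, a - (3 - b)), min(3, b + a)),
            (min(4, a + b), max(0, b - (4 - a)))} - {state}

def state_space_search(initial, is_goal):
    """Breadth-first exploration of the state space, returning a solution path."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()        # expand the oldest frontier path
        if is_goal(path[-1]):
            return path                  # goal check on the current state
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Find a sequence of fills/empties/pours leaving 2 litres in the big jug
print(state_space_search((0, 0), lambda s: s[0] == 2))
```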

--------------------------------------------------------------------------------
/Topics/Topic 1/PEAS.md:
--------------------------------------------------------------------------------
### PEAS
PEAS is an acronym that stands for Performance measure, Environment, Actuators, and Sensors. It is a framework used in AI to help define the properties and requirements of an intelligent agent.

### Performance measure:
This refers to the objective that the agent is trying to achieve. The performance measure is used to evaluate the success of the agent and can be defined in various ways. For example, it could be a measure of efficiency, accuracy, or profitability, depending on the task and goals of the agent.
- Example: In a game of chess, the performance measure could be the number of pieces captured, the number of checkmates achieved, or the time taken to win the game.

### Environment:
This refers to the external context in which the agent operates, including the objects, events, and conditions that the agent can perceive and interact with. The environment can be static or dynamic, deterministic or stochastic, and can have various degrees of complexity.
- Example: In a self-driving car, the environment includes the road, traffic lights, other vehicles, pedestrians, weather conditions, and various obstacles.

### Actuators:
This refers to the physical devices or mechanisms that the agent uses to interact with the environment and perform actions. Actuators can include motors, valves, grippers, screens, and other output devices.
- Example: In a robot that cleans floors, the actuators could include wheels, brushes, and a vacuum pump that are used to move the robot and collect dirt.

### Sensors:
This refers to the physical devices or mechanisms that the agent uses to sense the state of the environment and collect information. Sensors can include cameras, microphones, sonars, touch sensors, and other input devices.
- Example: In a smart thermostat, the sensors could include temperature sensors, humidity sensors, and motion sensors that are used to detect the presence and behavior of occupants.

--------------------------------------------------------------------------------
/Topics/Topic 1/Introduction to AI.md:
--------------------------------------------------------------------------------
### Introduction to AI
Artificial Intelligence (AI) refers to the ability of machines or computer systems to perform tasks that would typically require human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making.

### The benefits of AI include:

- Efficiency: AI can perform tasks faster and with more accuracy than humans, leading to increased productivity and efficiency.

- Cost savings: Automating tasks with AI can reduce labor costs and increase profits.

- Improved decision-making: AI can analyze vast amounts of data and provide insights that humans may miss, leading to better decision-making.

- Personalization: AI can be used to personalize products and services to meet the specific needs and preferences of individual customers.

- Improved safety: AI can be used in hazardous environments or dangerous situations, such as in search and rescue operations, to keep humans out of harm's way.

### There are two types of AI:

- Strong AI: Also known as artificial general intelligence (AGI), this is a hypothetical form of AI that can perform any intellectual task that a human can. This type of AI does not exist yet.

- Weak AI: Also known as narrow AI, this type of AI is designed to perform a specific task or set of tasks, such as playing chess or recognizing speech. Most of the AI applications we have today are examples of weak AI.

### What can AI do?

- Healthcare: AI can be used for medical image analysis, drug discovery, personalized treatment planning, and disease diagnosis.

- Finance: AI can be used for fraud detection, risk assessment, and investment analysis.

- Education: AI can be used for personalized learning, intelligent tutoring systems, and student performance analysis.

- Marketing and Sales: AI can be used for customer segmentation, predictive analytics, and targeted marketing.

- Manufacturing: AI can be used for predictive maintenance, quality control, and supply chain optimization.

- Transportation: AI can be used for self-driving cars, traffic optimization, and logistics planning.

- Agriculture: AI can be used for crop monitoring, yield prediction, and soil analysis.

- Entertainment: AI can be used for personalized recommendations, content creation, and virtual reality experiences.

- Customer Service: AI can be used for chatbots, voice assistants, and sentiment analysis.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Artificial-Intelligence
All the important things about AI
## [TOPICS](https://github.com/prashantjagtap2909/Artificial-Intelligence)

[Topic 1](https://github.com/prashantjagtap2909/Artificial-Intelligence)
- [Introduction to AI](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%201/Introduction%20to%20AI.md)
- [AI problems](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%201/AI%20problems.md)
- [History of AI](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%201/History%20of%20AI.md)
- [Environments](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%201/Environment.md)
- [Agent introduction](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%201/Agent%20intoduction.md)
- [Strong and weak agent](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%201/Strong%20and%20weak%20agent.md)
- [Types of Agents](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%201/Types%20of%20agent.md)
- [PEAS](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%201/PEAS.md)


[Topic 2](https://github.com/prashantjagtap2909/Artificial-Intelligence)
- [Problem solving](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/Problem%20solving.md)
- [Problem formulation](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/Problem%20formulation.md)
- [State space search](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/State%20space%20search.md)
- [Uninformed & informed](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/Uninformed%20and%20informed.md)
- [DFS](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/DFS.md)
- [BFS](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/BFS.md)
- [DLS](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/DLS.md)
- [Bidirectional search](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/Bidirectional%20search.md)
- [IDS](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/IDS.md)
- [UCS (Uniform cost search)](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/UCS.md)
- [Best First Search](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/Best%20First%20Search.md)
- [Greedy Best First Search](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/Greedy%20best%20first%20search.md)
- [Hill climbing](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/Hill%20climbing.md)
- [Local Search](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/Local%20Search.md)
- [Local Beam Search](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/Local%20beam.md)
- [Stochastic Beam Search](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/Stochastic%20beam.md)
- [A*](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/A*.md)
- [AO*](https://github.com/prashantjagtap2909/Artificial-Intelligence/blob/main/Topics/Topic%202/AO*.md)


[Topic 3 - will update soon]()

[Topic 4 - will update soon]()

[Topic 5 - will update soon]()

[Topic 6 - will update soon]()

--------------------------------------------------------------------------------
/Topics/Topic 1/Types of agent.md:
--------------------------------------------------------------------------------
### Different types of agents

### 1] Simple reflex agents:

- Characteristics: These agents take input from sensors and perform actions based on a set of pre-defined rules. They do not have a memory and cannot consider the consequences of their actions beyond the current state.
- Advantages: Simple reflex agents are easy to design and implement, and can work well in simple environments.
- Disadvantages: These agents are limited in their ability to handle complex environments or unforeseen situations, and can make mistakes if the rules they follow are incomplete or incorrect.
- Example: A thermostat that turns the heating system on or off based on the current temperature is a simple reflex agent.
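
A minimal sketch of the thermostat example as condition-action rules over the current percept only (the temperature thresholds are made up):

```python
def thermostat_agent(percept):
    """A simple reflex agent: condition-action rules, no memory of past percepts."""
    temperature = percept["temperature"]
    if temperature < 18:
        return "turn_heating_on"
    if temperature > 22:
        return "turn_heating_off"
    return "do_nothing"

print(thermostat_agent({"temperature": 15}))   # -> turn_heating_on
```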

### 2] Model-based reflex agents:

- Characteristics: These agents have a model of the environment that allows them to make more informed decisions based on past experiences. They can also take into account the possible consequences of their actions in the future.
- Advantages: Model-based reflex agents are more flexible than simple reflex agents and can handle more complex environments.
- Disadvantages: These agents still have limitations in their ability to handle unforeseen situations or rapidly changing environments, and their performance can be impacted by the quality of their models.
- Example: A self-driving car that uses a map and sensors to navigate and avoid obstacles is a model-based reflex agent.

### 3] Goal-based agents:

- Characteristics: These agents are designed to achieve specific goals in an environment. They can reason about their actions and choose the ones that lead to the desired outcome.
- Advantages: Goal-based agents are highly flexible and can handle complex and dynamic environments.
- Disadvantages: These agents can be computationally expensive and require a significant amount of time and resources to design and implement.
- Example: A delivery drone that uses AI to plan the optimal route to deliver packages is a goal-based agent.

### 4] Utility-based agents:

- Characteristics: These agents prioritize actions based on a utility function that assigns a value to each possible outcome. They choose actions that maximize the expected utility.
- Advantages: Utility-based agents can handle situations where there are multiple possible outcomes, and can make decisions that are optimal in terms of achieving the desired outcome.
- Disadvantages: These agents can be complex to design and implement, and their performance can be impacted by the accuracy of the utility function.
- Example: A stock trading AI that makes decisions based on maximizing profit is a utility-based agent.

### 5] Learning agents:

- Characteristics: These agents use machine learning algorithms to improve their performance over time. They can adapt to changing environments and learn from their past experiences.
- Advantages: Learning agents can handle complex and dynamic environments, and can continuously improve their performance over time.
- Disadvantages: These agents can require large amounts of data to train and can be computationally expensive to implement.
- Example: A chatbot that uses machine learning to improve its responses based on user feedback is a learning agent.

### 6] Hybrid agents:
- Characteristics: These agents combine different types of agent architectures to take advantage of their strengths and overcome their weaknesses.
- Advantages: Hybrid agents can handle a wide range of environments and tasks, and can leverage the strengths of different agent architectures.
- Disadvantages: These agents can be complex to design and implement, and may require significant resources to train and optimize.
- Example: A self-driving car that combines model-based reflex, goal-based, and learning agents to navigate, plan, and improve its performance over time is a hybrid agent.
--------------------------------------------------------------------------------