├── README.md ├── LICENSE └── glossary.md /README.md: -------------------------------------------------------------------------------- 1 | # `aima-glossary` 2 | 3 | This project is intended to provide [definitions for all the terms](glossary.md) in *Artificial Intelligence: A Modern Approach*. The definitions will be filled in over time by volunteers--maybe you. We've got a good start, but we need more help. 4 | 5 | The file [glossary.md](glossary.md) contains an alphabetic list of terms, some with definitions, in markdown format. 6 | 7 | The file [sentences.txt](sentences.txt) contains the LaTeX source of all the sentences in the book that define a term. 8 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | This is free and unencumbered software released into the public domain. 2 | 3 | Anyone is free to copy, modify, publish, use, compile, sell, or 4 | distribute this software, either in source code form or as a compiled 5 | binary, for any purpose, commercial or non-commercial, and by any 6 | means. 7 | 8 | In jurisdictions that recognize copyright laws, the author or authors 9 | of this software dedicate any and all copyright interest in the 10 | software to the public domain. We make this dedication for the benefit 11 | of the public at large and to the detriment of our heirs and 12 | successors. We intend this dedication to be an overt act of 13 | relinquishment in perpetuity of all present and future rights to this 14 | software under copyright law. 15 | 16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 17 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 18 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
19 | IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR 20 | OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 21 | ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 22 | OTHER DEALINGS IN THE SOFTWARE. 23 | 24 | For more information, please refer to <https://unlicense.org> 25 | -------------------------------------------------------------------------------- /glossary.md: -------------------------------------------------------------------------------- 1 | # Glossary for *Artificial Intelligence: A Modern Approach* 2 | 3 | [A](#a) | [B](#b) | [C](#c) | [D](#d) | [E](#e) | [F](#f) | [G](#g) | [H](#h) | [I](#i) | [J](#j) | [K](#k) | [L](#l) | [M](#m) | [N](#n) | [O](#o) | [P](#p) | [Q](#q) | [R](#r) | [S](#s) | [T](#t) | [U](#u) | [V](#v) | [W](#w) | [X](#x) | [Y](#y) | [Z](#z) 4 | 5 | ## 8-puzzle 6 | 7 | The **8-puzzle** consists of a 3x3 grid containing 8 numbered tiles and a blank space. A tile adjacent to the blank space can slide into that space. The object is to reach a specified **goal state** from a given **initial state**. 8 | 9 | # A 10 | 11 | ## absolute error 12 | 13 | Magnitude of the difference between the theoretical value (expected value) and the actual value of a physical quantity. 14 | 15 | ## abstraction 16 | 17 | Abstraction is the process of removing irrelevant detail from a representation, keeping only the details relevant to the task at hand. 18 | 19 | ## abstraction hierarchy 20 | 21 | A layered organization of a system into levels of increasing detail; it hides the complexity of the system and allows individuals to work on different modules of the hierarchy at the same time. 22 | 23 | ## accessibility relations 24 | 25 | In modal logic, an accessibility relation R is a binary relation such that R ⊆ W×W where W is a set of possible worlds. The accessibility relation determines for each world w ∈ W which worlds w′ are accessible from w. 26 | 27 | ## action monitoring 28 | 29 | Checking the preconditions of each action as it is executed, rather than checking the preconditions of the entire remaining plan.
30 | 31 | ## action schema 32 | 33 | ## action-utility function 34 | 35 | ## actions 36 | 37 | The things that an agent can do. We model this with a function, **Actions(s)**, that returns a collection of actions 38 | that the agent can execute in state *s*. 39 | 40 | ## activation 41 | 42 | ## activation function 43 | 44 | A mathematical function that transforms the input or set of inputs received at a neuron to produce an output. 45 | Popular examples include the Sigmoid function, Rectified Linear Units (ReLU) and the Hyperbolic Tangent (Tanh). 46 | 47 | ## active learning 48 | 49 | An active learning agent decides which actions to take in order to guide its learning: it values learning new things 50 | as well as reaping immediate rewards from the environment. 51 | This is in contrast to a passive learning agent, which learns from its observations, but the actions the agent takes are not influenced by the learning process. 52 | 53 | ## actor 54 | 55 | ## adaptive dynamic programming 56 | 57 | Also known as Approximate Dynamic Programming; it is a type of Reinforcement Learning where local rewards and transitions depend on unknown parameters - we set an initial control policy and update it until it converges to an optimal control policy. 58 | 59 | ## add list 60 | 61 | ## admissible heuristic 62 | 63 | A **heuristic** is a function that scores alternatives at each branching in a search algorithm. An **admissible heuristic** is one that *never overestimates* the cost to reach the goal. Admissible heuristics are **optimistic** in nature as they believe the cost of reaching the goal is no greater than it actually is. 64 | 65 | ## adversarial search 66 | 67 | Traversing a tree data structure to find all possible moves. It is usually used in a two-player game; each available move is represented using gain and loss for an individual player. An important application of it is in zero-sum games, as in those games one player's loss is the other player's gain.
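The adversarial search idea above can be sketched with plain minimax over a hand-built game tree; the tree and its leaf utilities below are invented for illustration:

```python
def minimax(node, maximizing=True):
    """Return the minimax value of a game-tree node.

    A node is either a number (a leaf utility for MAX) or a list of
    child nodes; MAX and MIN alternate levels.
    """
    if isinstance(node, (int, float)):  # leaf: utility for MAX
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny two-ply game: MAX chooses a branch, then MIN replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree))  # MAX picks the branch whose worst case is best
```

One player's gain is the other's loss, so MAX maximizes the very value that MIN minimizes.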
68 | 69 | ## adversary argument 70 | 71 | ## agent 72 | 73 | An **agent** is anything that can be viewed as perceiving its **environment** through **sensors** and acting upon that environment through **actuators**. 74 | 75 | ## agent function 76 | 77 | An agent's behavior is described by the **agent function** that maps any given percept sequence to an action. 78 | 79 | ## agent program 80 | 81 | _Internally,_ the agent function for an artificial agent will be implemented by an **agent program**. 82 | 83 | ## agglomerative clustering 84 | 85 | It is a category of hierarchical clustering which uses a bottom-up approach. All observations start in their own cluster and pairs of clusters are merged as you move up levels in the hierarchy. Its results are represented using a dendrogram. 86 | 87 | ## aggregation 88 | 89 | ## algorithm 90 | 91 | An **algorithm** is a sequence of **unambiguous finite steps** that when carried out on a given problem produce the expected outcome and terminate in **finite time**. 92 | 93 | ## alignment method 94 | 95 | ## alpha-beta 96 | 97 | **alpha** (**α**) is the value of the best (i.e., highest-value) choice we have found so far at any choice point along the path for MAX and **beta** (**β**) is the value of the best (i.e., lowest-value) choice we have found so far at any choice point along the path for MIN in a standard minimax tree. 98 | 99 | ## alpha-beta pruning 100 | 101 | **alpha-beta pruning** is applied to a standard minimax tree to prune away branches that cannot possibly influence the final minimax decision. 102 | 103 | ## ambient illumination 104 | 105 | Light that is already present in a scene, before any additional lighting is added. It usually refers to natural light. 106 | 107 | ## ambiguity 108 | 109 | It is the state of being uncertain or doubtful. 110 | 111 | ## ambiguity aversion 112 | 113 | Ambiguity aversion (also known as uncertainty aversion) is a preference for known risks over unknown risks.
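The **alpha-beta** and **alpha-beta pruning** entries above can be illustrated with a short sketch; the nested-list game tree and its leaf values are invented for the example:

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Minimax with alpha-beta pruning over nested lists (leaves are numbers)."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # MIN would never allow this branch: prune
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:       # MAX has a better option elsewhere: prune
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))  # same value as plain minimax, but fewer leaves visited
```

Pruning never changes the minimax value; it only skips branches whose value can no longer matter once alpha meets or exceeds beta.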
114 | 115 | ## analogical reasoning 116 | 117 | Analogical reasoning is any type of thinking that relies upon an analogy. An analogical argument is an explicit representation of a form of analogical reasoning that cites accepted similarities between two systems to support the conclusion that some further similarity exists. 118 | 119 | ## anchoring effect 120 | 121 | It is a type of cognitive bias which makes people rely too heavily on the first piece of information (the "anchor") given to them when making decisions. For example, when buying a product, if the seller first quotes a high price, your mind anchors the product's worth at that initial price; when you are then offered a discount, you are more inclined to buy, thinking that you are getting it cheap. 122 | 123 | ## And-Elimination 124 | 125 | In propositional logic, conjunction elimination (also called and elimination, ∧ elimination, or simplification) is a valid immediate inference, argument form and rule of inference which makes the inference that, if the conjunction A and B is true, then A is true, and B is true. The rule makes it possible to shorten longer proofs by deriving one of the conjuncts of a conjunction on a line by itself. 126 | 127 | ## AND-parallelism 128 | 129 | ## angelic nondeterminism 130 | 131 | A notional ability always to choose the most favorable option, in constant time. With angelic non-determinism, any problem in NP would be solvable in polynomial time. 132 | 133 | ## angelic semantics 134 | 135 | ## answer literal 136 | 137 | ## answer set programming 138 | 139 | Answer set programming (ASP) is a form of declarative programming oriented towards difficult (primarily NP-hard) search problems. It is based on the stable model (answer set) semantics of logic programming.
140 | 141 | ## answer sets 142 | 143 | ## aortic coarctation 144 | 145 | It is a medical term which means the narrowing of the aorta, the largest artery in the body, which starts from the heart. 146 | 147 | ## apprenticeship learning 148 | 149 | It is the process of learning by observing the demonstration of an expert. In a way, it is a form of supervised learning where the training data would be the tasks performed by the expert. 150 | 151 | ## architecture 152 | 153 | It is a broad term; in AI it generally refers to the computing machinery, together with its sensors and actuators, on which the agent program runs: an agent is the combination of an architecture and a program. 154 | 155 | ## arity 156 | 157 | In logic, mathematics, and computer science, the arity of a function or operation is the number of arguments or operands that the function takes. 158 | 159 | ## artificial life 160 | 161 | Artificial life (often abbreviated ALife or A-Life) is a field of study wherein researchers examine systems related to natural life, its processes, and its evolution, through the use of simulations with computer models, robotics, and biochemistry. 162 | 163 | ## ascending-bid 164 | 165 | Bidders place bids of progressively higher amounts, aiming to outbid each other. The bidder who places the highest bid by the end of the auction wins. 166 | 167 | ## Asilomar Principles 168 | The Asilomar Conference on Beneficial AI was a conference organized by the Future of Life Institute, held January 5-8, 2017, at the Asilomar Conference Grounds in California. More than 100 thought leaders and researchers in economics, law, ethics, and philosophy met at the conference, to address and formulate principles of beneficial AI. Its outcome was the creation of a set of guidelines for AI research – the 23 Asilomar AI Principles. 169 | 170 | ## assignment 171 | 172 | ## associative memory 173 | 174 | In terms of psychology, it is the type of memory which allows us to remember things by finding links between apparently unrelated things.
For example, remembering someone's name by the dress they wore the first time you met them, even though the two things seem completely unrelated. 175 | 176 | ## asymptotic analysis 177 | 178 | It is a mathematical method of describing the limiting behavior of a function as its input grows; in algorithm analysis it is used to characterize running time or memory usage as a function of input size, ignoring constant factors and lower-order terms. 179 | 180 | ## asymptotic bounded optimality 181 | 182 | ## ATMS 183 | 184 | Assumption-Based Truth Maintenance System (ATMS) allows one to maintain and reason with a number of simultaneous, possibly incompatible, current sets of assumptions. 185 | 186 | ## atom 187 | 188 | In logic, an atom (or atomic sentence) is an indivisible sentence with no internal logical structure, such as a single proposition symbol or a predicate applied to terms. 189 | 190 | ## atomic representation 191 | 192 | ## atomic sentence 193 | 194 | ## attribute-based extraction 195 | 196 | ## augmented grammar 197 | 198 | Any grammar whose productions are augmented with conditions expressed using features. 199 | 200 | ## authority 201 | 202 | ## automatic assembly sequencing 203 | 204 | ## autonomy 205 | 206 | It is the character of being independent and self-governing in vital as well as non-vital situations. 207 | 208 | ## average reward 209 | 210 | ## axiom 211 | 212 | In mathematics or logic, an axiom is a statement or a proposition which is assumed to be true to serve as a starting point for further arguments and reasoning. Example of an axiom: “Nothing can both be and not be at the same time and in the same respect”. 213 | 214 | # B 215 | 216 | ## back-propagation 217 | 218 | **back-propagation** is an algorithm used for *supervised learning* of **artificial neural networks** using gradient descent. 219 | The method calculates the gradient of a given error function with respect to the weights of the network.
The "backward" in the name refers to the fact that the gradient is propagated backward through the network. 220 | 221 | ## backed-up value 222 | ## backgammon 223 | 224 | Backgammon is one of the oldest known board games. It is a two-player game where each player has fifteen pieces (checkers) which move between twenty-four triangles (points) according to the roll of two dice. The objective of the game is to be first to bear off, i.e. move all fifteen checkers off the board. 225 | 226 | ## background subtraction 227 | 228 | Foreground detection is one of the major tasks in the field of computer vision and image processing whose aim is to detect changes in image sequences. Background subtraction is any technique which allows an image's foreground to be extracted for further processing (object recognition etc.). 229 | 230 | ## backjumping 231 | ## backmarking 232 | ## backoff model 233 | ## backpropagation 234 | 235 | To minimize the cost function, we need to know how the changes in weights and biases affect the cost function, i.e. the partial derivatives of the cost function with respect to every weight and bias in the network; backpropagation is a method that allows us to quickly compute all these partial derivatives. 236 | 237 | ## Backus-Naur form (BNF) 238 | 239 | A mathematical notation used to describe the syntax of a programming language. 240 | 241 | ## backward-chaining 242 | 243 | ## bag of words 244 | 245 | The **bag of words** model is a simplifying representation used in natural language processing and information retrieval. It is closely related to the vector space model. In this model, a text is represented as the bag of its words, disregarding grammar and even word order but keeping multiplicity.
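The **bag of words** representation above can be sketched with only the standard library; the tokenization here is deliberately naive (lowercase and split on whitespace):

```python
from collections import Counter

def bag_of_words(text):
    """Count word multiplicities, discarding grammar and word order."""
    return Counter(text.lower().split())

doc = "the cat sat on the mat"
bow = bag_of_words(doc)
print(bow["the"])  # order is lost, multiplicity is kept
```

Two documents with the same words in a different order produce identical bags, which is exactly the simplification the model makes.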
246 | 247 | ## bagging 248 | 249 | Bagging (stands for **B**ootstrap **Agg**regat**ing**) is a way to decrease the variance of your prediction by generating additional data for training from your original dataset using combinations with repetitions to produce multisets of the same cardinality/size as your original data. By increasing the size of your training set this way you do not improve the model's predictive force, but only decrease the variance, narrowly tuning the prediction to the expected outcome. 250 | 251 | ## bang-bang control 252 | ## baseline 253 | ## batch gradient descent 254 | ## Bayes' rule 255 | 256 | **Bayes' rule** describes the probability of an event A given that another event B has already occurred. 257 | Mathematically, Bayes' rule can be written as **P(A|B) = P(A)P(B|A)/P(B)**. 258 | 259 | ## Bayes-Nash equilibrium 260 | ## Bayesian learning 261 | 262 | A machine learning method which enables us to encode our initial perception of what a model should look like, before any data is taken into account. It proves to be very useful when there’s a sparse amount of data to train our model properly. 263 | 264 | ## Bayesian network 265 | 266 | A probabilistic graphical model representing a group of variables along with their conditional dependencies through a directed acyclic graph; it is also used to compute the probability distribution for a subset of network variables, provided the distributions or values of any subset of the remaining variables. 267 | 268 | ## beam search 269 | 270 | Beam search is a heuristic search algorithm that explores a graph by expanding the most promising node in a limited set. Beam search is an optimization of best-first search that reduces its memory requirements. Best-first search is a graph search which orders all partial solutions (states) according to some heuristic. But in beam search, only a predetermined number of best partial solutions are kept as candidates. It is thus a greedy algorithm.
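The beam search entry above can be sketched on an explicit toy graph; the graph, the node costs, and the goal test below are all invented for illustration, and lower scores are assumed to be better:

```python
import heapq

def beam_search(start, successors, score, is_goal, beam_width=2):
    """Keep only the `beam_width` best partial paths at each depth."""
    beam = [[start]]
    while beam:
        candidates = []
        for path in beam:
            for nxt in successors(path[-1]):
                candidates.append(path + [nxt])
        goals = [p for p in candidates if is_goal(p[-1])]
        if goals:
            return min(goals, key=score)
        # prune: retain only the most promising partial solutions
        beam = heapq.nsmallest(beam_width, candidates, key=score)
    return None  # beam emptied out without reaching a goal

# Toy layered graph: 'S' expands toward the goal 'G'.
graph = {'S': ['A', 'B', 'C'], 'A': ['G'], 'B': ['G'], 'C': [], 'G': []}
cost = {'S': 0, 'A': 2, 'B': 1, 'C': 5, 'G': 0}
path = beam_search('S', lambda n: graph[n],
                   score=lambda p: sum(cost[n] for n in p),
                   is_goal=lambda n: n == 'G')
print(path)
```

Because pruning may discard the branch that leads to the true optimum, beam search trades completeness for a fixed memory bound.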
271 | 272 | ## behaviorism 273 | ## belief function 274 | ## belief propagation 275 | ## belief revision 276 | ## belief state 277 | ## Bellman equation 278 | ## Bellman update 279 | 280 | ## benchmarking 281 | 282 | Benchmarking is measuring the quality of something for the purposes of comparison or evaluation. 283 | 284 | ## best-first search 285 | ## biconditional 286 | ## binary constraint 287 | ## binary resolution 288 | ## binding list 289 | ## binocular stereopsis 290 | ## biological naturalism 291 | ## blocks world 292 | ## bluff 293 | ## body 294 | ## boid 295 | 296 | Boids is an artificial life program, developed by Craig Reynolds in 1986, which simulates the flocking behaviour of birds. His paper on this topic was published in 1987 in the proceedings of the ACM SIGGRAPH conference. The name "boid" corresponds to a shortened version of "bird-oid object", which refers to a bird-like object. 297 | 298 | ## boosting 299 | 300 | Boosting is a two-step approach, where one first uses subsets of the original data to produce a series of averagely performing models and then "boosts" their performance by combining them together using a particular cost function (e.g., majority vote). Unlike bagging, in classical boosting the subset creation is not random and depends upon the performance of the previous models: every new subset contains the elements that were (likely to be) misclassified by previous models. 301 | 302 | ## boundary set 303 | ## bounded optimality 304 | ## bounded PlanSAT 305 | ## bounded rationality 306 | ## bounds consistent 307 | ## bounds propagation 308 | ## branching factor 309 | ## bridge 310 | ## bunch 311 | 312 | # C 313 | 314 | ## calculative rationality 315 | ## canonical distribution 316 | ## cart-pole 317 | 318 | A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force to the cart in the left or right direction.
The pendulum starts upright, and the goal is to prevent it from falling over. A reward is provided for every timestep that the pole remains upright. 319 | 320 | ## cascaded finite-state transducers 321 | ## case agreement 322 | ## causal 323 | ## causal link 324 | ## causal network 325 | ## center 326 | ## central limit theorem 327 | 328 | The central limit theorem (CLT) establishes that, in some situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed. 329 | 330 | ## certainty effect 331 | ## certainty equivalent 332 | ## CFG 333 | ## chain rule 334 | 335 | It is a mathematical formula used to compute the derivatives of a composition of two or more functions. 336 | 337 | ## characters 338 | 339 | A character is any letter, number, space, punctuation mark, or a symbol. 340 | 341 | ## chart 342 | ## checkers 343 | ## chess 344 | 345 | Chess is a two-player strategy board game played on a chessboard, a checkered gameboard with 64 squares arranged in an 8×8 grid. Play does not involve hidden information. Each player begins with 16 pieces: one king, one queen, two rooks, two knights, two bishops, and eight pawns. Each of the six piece types moves differently, with the most powerful being the queen and the least powerful the pawn. The objective is to checkmate the opponent's king by placing it under an inescapable threat of capture. To this end, a player's pieces are used to attack and capture the opponent's pieces, while supporting each other. 346 | 347 | ## Chomsky Normal Form 348 | 349 | A grammar is in **Chomsky Normal Form (usually found as CNF)** if all its production rules are in one of the following forms: 350 | 351 | ``` 352 | A -> BC 353 | A -> a 354 | S -> ε 355 | ``` 356 | 357 | where `S` is the starting symbol and `ε` the symbol for the empty string. 
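A tiny sketch that checks whether productions match the Chomsky Normal Form shapes listed above; it assumes, for simplicity, that grammar symbols are single characters with nonterminals uppercase and terminals lowercase:

```python
def is_cnf_rule(head, body, start='S'):
    """True if `head -> body` matches A -> BC, A -> a, or S -> ε."""
    if body == ['ε']:
        return head == start                   # only the start symbol may derive ε
    if len(body) == 1:
        return body[0].islower()               # A -> a (a single terminal)
    if len(body) == 2:
        return all(s.isupper() for s in body)  # A -> BC (two nonterminals)
    return False

grammar = [('S', ['A', 'B']), ('A', ['a']), ('B', ['b']), ('S', ['ε'])]
print(all(is_cnf_rule(h, b) for h, b in grammar))
```

A rule like `A -> aB` fails the check, which is why converting an arbitrary context-free grammar to CNF requires introducing fresh nonterminals.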
358 | 359 | ## circuit verification 360 | ## circumscription 361 | ## Clark Normal Form 362 | ## classification 363 | 364 | Sorting or dividing data into two or more categories on the basis of a distinct feature. 365 | 366 | ## clause 367 | ## closed-loop 368 | ## clustering 369 | 370 | In terms of Data Science, clustering is the grouping of data instances or objects with similar features and characteristics. 371 | 372 | ## clutter 373 | ## CMAC 374 | ## co-NP 375 | ## co-NP-complete 376 | ## coarse-to-fine 377 | ## coarticulation 378 | ## coastal navigation 379 | ## cognitive psychology 380 | 381 | It is a branch of Psychology which deals with the mental processes involved in obtaining and comprehending new information. Some of the prominent processes include judging, problem solving and remembering. 382 | 383 | ## collusion 384 | ## color constancy 385 | ## communication 386 | ## commutativity 387 | ## competitive 388 | ## competitive ratio 389 | ## complementary literals 390 | ## complete assignment 391 | ## complete data 392 | ## completeness 393 | ## completing the square 394 | ## completion 395 | ## compliant motion 396 | ## composition 397 | ## compositional semantics 398 | ## compositionality 399 | ## computable 400 | ## computational linguistics 401 | ## computational neuroscience 402 | ## conclusion 403 | ## concurrent action list 404 | ## conditional effect 405 | ## conditional Gaussian 406 | ## conditional probability table 407 | ## conditional random field 408 | ## conditioning 409 | ## confirmation 410 | ## conflict set 411 | ## conflict-directed backjumping 412 | ## conformant 413 | ## conjugate gradient 414 | ## conjunct ordering 415 | ## conjunction 416 | ## conjunctive normal form 417 | ## connectionist 418 | ## consciousness 419 | ## consequentialism 420 | ## consistency 421 | ## consistent 422 | ## consistent plan 423 | 424 | A plan in which there are no cycles in the *ordering constraints* and no conflicts with the **causal links**. 
425 | 426 | ## constraint language 427 | ## constraint learning 428 | ## constraint logic programming 429 | ## constraint optimization problem 430 | ## constraint propagation 431 | ## constraint satisfaction problem 432 | ## constraint weighting 433 | ## consumable 434 | ## context-free grammar 435 | ## context-specific independence 436 | ## contingency plan 437 | ## continuous 438 | ## contraction 439 | ## contradiction 440 | ## control theory 441 | ## controller 442 | ## convention 443 | ## convex set 444 | ## convolution 445 | 446 | A mathematical operation which merges two signals to form a third signal. 447 | 448 | ## cooperative 449 | ## coordination 450 | ## corpus 451 | ## Cournot competition 452 | ## covariance 453 | ## covariance matrix 454 | ## critic 455 | ## critical path 456 | ## critical path method 457 | ## cross-correlation 458 | ## crossover point 459 | ## cryptarithmetic 460 | ## cumulative distribution 461 | ## cumulative probability density function 462 | ## current-best-hypothesis 463 | ## cycle cutset 464 | ## cyclic solution 465 | ## CYK algorithm 466 | 467 | # D 468 | ## DARPA Grand Challenge 469 | 470 | The DARPA Grand Challenge is a prize competition for American autonomous vehicles, funded by the **Defense Advanced Research Projects Agency**, a prominent research organization of the United States Department of Defense. 471 | 472 | ## data association 473 | ## data complexity 474 | ## data compression 475 | 476 | It is the process of encoding data using fewer bits than were used in the original representation so that the data consumes less disk space.
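The data compression entry above can be demonstrated as a lossless round trip with the standard library; a minimal sketch:

```python
import zlib

original = b"abcabcabc" * 100  # highly repetitive data compresses well
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), len(compressed))  # far fewer bytes after compression
```

The compression is lossless: decompressing gives back the original bytes exactly, while the compressed form occupies less space.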
477 | 478 | ## data matrix 479 | ## data mining 480 | ## data-driven 481 | ## database semantics 482 | ## Datalog 483 | ## Davis-Putnam algorithm 484 | ## decayed MCMC 485 | ## decentralized planning 486 | ## decision analysis 487 | ## decision boundary 488 | ## decision maker 489 | ## decision network 490 | ## decision theory 491 | 492 | ## decision tree 493 | 494 | A decision tree is a construct that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs and utility. 495 | 496 | ## declarative 497 | ## declarative bias 498 | ## decomposition 499 | ## deduction theorem 500 | ## deductive learning 501 | 502 | Going from a known general rule to a new rule that is logically entailed (and thus nothing new), 503 | but is nevertheless useful 504 | because it allows more efficient processing. 505 | 506 | ## deep belief networks 507 | ## deep learning 508 | 509 | It is a subfield of machine learning that tries to mimic the working of the human brain in processing data and creating patterns for use in decision making. 510 | 511 | ## default logic 512 | ## definite clause 513 | ## definite clause grammar 514 | ## definition of a rational agent 515 | ## deformable template 516 | ## degree of belief 517 | ## degree of freedom 518 | 519 | It is a statistics term for the number of independent values in an analysis that are free to vary without violating any constraint. 520 | 521 | ## delete list 522 | ## deliberative layer 523 | ## demonic nondeterminism 524 | ## Dempster-Shafer theory 525 | ## depth 526 | ## depth of field 527 | ## depth-first search 528 | 529 | It is an algorithm which allows us to traverse a graph or a tree data structure; it starts from the root node and traverses as far as possible along each branch before backtracking. (In the case of a graph, the root node can be any arbitrarily selected node.)
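A minimal sketch of depth-first search on an adjacency-list graph, recursive with a visited list so that cycles terminate; the graph below is invented for illustration:

```python
def dfs(graph, node, visited=None):
    """Visit `node`, then recurse as deep as possible before backtracking."""
    if visited is None:
        visited = []
    visited.append(node)
    for neighbor in graph[node]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': ['A']}  # D -> A closes a cycle
print(dfs(graph, 'A'))
```

Starting from 'A', the search plunges down the branch A, B, D before backtracking to pick up C, which is the depth-first order.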
530 | 531 | ## depth-limited search 532 | ## detailed balance 533 | ## Deterministic 534 | ## diachronic 535 | ## diagnostic 536 | ## diameter 537 | ## Differential GPS 538 | ## diffuse albedo 539 | ## Diophantine equations 540 | ## direct utility estimation 541 | ## Dirichlet process 542 | ## disambiguation 543 | ## discount factor 544 | ## discrete 545 | ## discretization 546 | ## disjoint 547 | ## disjunction 548 | ## disjunctive constraint 549 | ## disparity 550 | ## distant point light source 551 | ## distortion 552 | ## distributed constraint satisfaction 553 | ## DL 554 | ## domain 555 | ## domain closure 556 | ## dominant strategy equilibrium 557 | ## downward refinement property 558 | ## dropping conditions 559 | ## DT 560 | ## dual graph 561 | ## dualism 562 | ## duration 563 | ## dynamic 564 | ## dynamic backtracking 565 | ## dynamic Bayesian network 566 | ## dynamic programming 567 | 568 | A method of solving complex problems by breaking them down into overlapping sub-problems, solving each sub-problem once and reusing the stored solutions. Used in popular real-world problems including the traveling salesman problem, the Fibonacci sequence, the knapsack problem, etc.
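A minimal dynamic programming sketch using the Fibonacci sequence mentioned above: each sub-problem is solved once and its answer cached, turning an exponential recursion into a linear one.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Exponential without caching; with memoization each fib(k) is computed once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

The same idea works bottom-up: fill a table from `fib(0)` upward so every value is available when its successors need it.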
569 | 570 | ## dynamic state 571 | 572 | # E 573 | 574 | ## early stopping 575 | ## economy 576 | ## effect 577 | ## effective branching factor 578 | ## efficient 579 | ## electric motor 580 | ## eliminative materialism 581 | ## elitism 582 | ## embodied cognition 583 | ## emergent behavior 584 | ## empirical gradient 585 | ## empirical loss 586 | ## empiricism 587 | ## English auction 588 | ## entailment 589 | ## entropy 590 | ## environment 591 | ## environment generator 592 | ## episodic 593 | ## epsilon-ball 594 | ## equality symbol 595 | ## equilibrium 596 | ## ergodic 597 | ## error rate 598 | ## event 599 | ## evidence 600 | ## evidence reversal 601 | ## evolutionary algorithms 602 | ## evolutionary psychology 603 | ## evolutionary strategies 604 | ## exact cell decomposition 605 | ## execution 606 | ## execution monitoring 607 | ## executive layer 608 | ## exhaustive decomposition 609 | ## existence uncertainty 610 | ## Existential Instantiation 611 | ## expand 612 | ## expectation 613 | ## expectation-maximization 614 | ## expected value 615 | ## expectiminimax value 616 | ## explanation-based learning 617 | ## explanatory gap 618 | ## exploitation 619 | ## exploration 620 | ## exploration problem 621 | ## expressiveness 622 | ## extended Kalman filter (EKF) 623 | ## extension 624 | ## extensive form 625 | ## externalities 626 | ## extrinsic 627 | 628 | # F 629 | ## fact 630 | ## factor 631 | ## factored frontier 632 | ## factored representation 633 | ## factorial HMM 634 | ## factoring 635 | ## false negative 636 | ## false positive 637 | ## feature extraction 638 | ## feature selection 639 | ## feed-forward network 640 | ## FIFO queue 641 | ## filtering 642 | ## finite horizon 643 | ## first-choice hill climbing 644 | ## first-order Markov process 645 | ## fixate 646 | ## fixed point 647 | ## fixed-lag smoothing 648 | ## flaw 649 | ## fluent 650 | ## focal plane 651 | ## foreshortening 652 | ## forward-backward algorithm 653 | ## forward-chaining 654 | 
## frame problem 655 | ## frames 656 | ## framing effect 657 | ## free space 658 | ## frequentist 659 | ## friendly AI 660 | ## frontier 661 | ## full joint probability distribution 662 | ## fully observable 663 | 664 | If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. 665 | 666 | ## functionalism 667 | ## futility pruning 668 | ## fuzzy control 669 | A fuzzy control system is a control system based on fuzzy logic, a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively). 670 | 671 | ## fuzzy logic 672 | 673 | Fuzzy logic is a form of many-valued logic in which the truth values of variables may be any real number between 0 and 1. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. 674 | 675 | ## fuzzy set theory 676 | 677 | Fuzzy sets (aka uncertain sets) are somewhat like sets whose elements have degrees of membership. In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition: an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. 678 | 679 | # G 680 | 681 | ## G-set 682 | ## gain parameter 683 | ## gain ratio 684 | ## gait 685 | ## game theory 686 | 687 | Game theory is the study of mathematical models of strategic interaction between rational decision-makers. It has applications in all fields of social science, as well as in logic and computer science.
Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers. 688 | 689 | ## game tree 690 | ## Gaussian distribution 691 | ## Gaussian error model 692 | ## Gaussian filter 693 | ## Gaussian process 694 | ## generalization 695 | ## generalization hierarchy 696 | ## generalization loss 697 | ## generalized modus ponens 698 | ## generating 699 | ## generator 700 | ## genetic algorithms 701 | ## genetic programming 702 | ## Gibbs sampling 703 | ## GLIE 704 | ## global constraint 705 | ## global minimum 706 | ## Go 707 | ## goal 708 | ## goal clauses 709 | ## goal formulation 710 | ## goal monitoring 711 | ## goal test 712 | ## goal-directed reasoning 713 | ## gold standard 714 | ## gorilla problem 715 | ## gradient 716 | ## gradient descent 717 | ## grammar 718 | ## graph 719 | ## graph coloring 720 | ## grasping 721 | ## greedy agent 722 | ## greedy best-first search 723 | ## grid world 724 | ## ground term 725 | ## grounding 726 | 727 | # H 728 | 729 | ## Hamming distance 730 | ## Hansard 731 | ## haptic feedback 732 | ## head 733 | ## heavy-tailed distribution 734 | ## Hebbian learning 735 | ## Hessian 736 | ## heuristic function 737 | ## heuristic search 738 | ## hidden Markov model 739 | 740 | A hidden Markov model (or HMM) is a temporal probabilistic model in which the state of the process is described by a *single discrete* 741 | random variable. 
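The hidden Markov model entry above can be sketched with the forward algorithm, which sums over the hidden state to get the probability of an observation sequence; the two states, two observations, and all probabilities below are invented for illustration:

```python
def forward(obs, prior, transition, emission):
    """Forward algorithm: P(observation sequence) under an HMM.

    prior[s]         = P(state_0 = s)
    transition[s][t] = P(state_{k+1} = t | state_k = s)
    emission[s][o]   = P(observation o | state s)
    """
    states = list(prior)
    # alpha[s] = P(obs so far, current state = s)
    alpha = {s: prior[s] * emission[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {t: sum(alpha[s] * transition[s][t] for s in states) * emission[t][o]
                 for t in states}
    return sum(alpha.values())

prior = {'Rain': 0.5, 'Dry': 0.5}
transition = {'Rain': {'Rain': 0.7, 'Dry': 0.3}, 'Dry': {'Rain': 0.3, 'Dry': 0.7}}
emission = {'Rain': {'umbrella': 0.9, 'none': 0.1},
            'Dry': {'umbrella': 0.2, 'none': 0.8}}
print(forward(['umbrella', 'umbrella'], prior, transition, emission))
```

The single discrete hidden state (here Rain or Dry) is never observed directly; only the observations (umbrella or none) are, which is what makes the Markov model "hidden".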
742 | 743 | ## hierarchical lookahead 744 | ## hierarchical reinforcement learning 745 | ## high-level action 746 | ## Hinton diagrams 747 | ## holdout cross-validation 748 | ## homeostatic 749 | ## homophones 750 | ## horizon effect 751 | ## Horn clause 752 | ## hub 753 | ## human-level AI 754 | ## Hungarian algorithm 755 | ## hybrid A* 756 | ## hybrid agent 757 | ## hybrid architecture 758 | ## hybrid Bayesian network 759 | ## hydraulic actuation 760 | ## hypothesis 761 | ## hypothesis prior 762 | ## hypothesis space 763 | 764 | # I 765 | 766 | ## i.i.d. 767 | 768 | **i.i.d.** denotes **independent and identically distributed** random variables. They are defined on the same probability space, have identical probability distribution functions, and are mutually independent. 769 | 770 | ## identification in the limit 771 | ## identity matrix 772 | ## identity uncertainty 773 | ## ignore delete lists 774 | ## ignore preconditions heuristic 775 | ## image 776 | ## imperfect information 777 | ## implementation 778 | ## implementation level 779 | ## implication 780 | ## importance sampling 781 | ## inclusion-exclusion principle 782 | ## incompleteness theorem 783 | ## incremental belief-state search 784 | ## independence 785 | ## independent subproblems 786 | ## index 787 | ## indexed random variable 788 | ## indexing 789 | ## individuation 790 | ## induction 791 | ## inductive learning 792 | 793 | Going from a set of specific input-output pairs 794 | to a (possibly incorrect) general rule is called **inductive learning**. 
795 | 796 | ## inductive logic 797 | ## inductive logic programming 798 | ## inference 799 | ## inference rules 800 | ## inferential frame problem 801 | ## infinite 802 | ## infinite horizon 803 | ## infix 804 | ## information extraction 805 | ## information gain 806 | ## information gathering 807 | ## information retrieval 808 | ## information sets 809 | ## informed search 810 | ## inheritance 811 | ## initial state 812 | ## input resolution 813 | ## inside-outside algorithm 814 | ## insurance premium 815 | ## intelligence 816 | ## interleaving 817 | ## interlingua 818 | ## internal state 819 | ## interpretation 820 | ## interreflections 821 | ## intrinsic 822 | ## intuition pump 823 | ## inverse 824 | ## inverse entailment 825 | ## inverse kinematics 826 | ## inverse reinforcement learning 827 | ## inverted pendulum 828 | ## inverted spectrum 829 | ## IR 830 | ## irreversible 831 | ## iterative deepening search 832 | ## iterative expansion 833 | ## iterative-deepening A* 834 | 835 | # J 836 | 837 | ## join tree 838 | ## joint action 839 | ## joint plan 840 | ## JTMS 841 | ## justification 842 | 843 | # K 844 | 845 | ## k-d tree 846 | ## K-means clustering 847 | ## Kalman filtering 848 | ## Kalman gain matrix 849 | ## kernel 850 | ## kernel function 851 | ## kernel trick 852 | ## kinematic state 853 | ## kinematics 854 | ## King Midas problem 855 | ## knowledge acquisition 856 | ## knowledge base 857 | ## knowledge engineering 858 | ## knowledge-based agents 859 | ## Known 860 | ## Kriegspiel 861 | ## Kullback-Leibler divergence 862 | 863 | # L 864 | 865 | ## label 866 | ## Lambert's cosine law 867 | ## landmarks 868 | ## language 869 | ## language generation 870 | ## language identification 871 | ## large-scale learning 872 | ## layers 873 | ## leak node 874 | ## learning 875 | 876 | An agent is **learning** if it improves its 877 | performance after making observations about the world. 
878 | 879 | ## learning curve 880 | ## learning element 881 | ## learning rate 882 | ## least commitment 883 | ## least-constraining-value 884 | ## leave-one-out cross-validation 885 | ## lens 886 | ## level cost 887 | ## level of abstraction 888 | ## level sum 889 | ## leveled off 890 | ## lexical category 891 | ## lexicon 892 | ## LIFO queue 893 | ## lifting lemma 894 | ## likelihood 895 | ## likelihood weighting 896 | ## line search 897 | ## linear Gaussian 898 | ## linear interpolation smoothing 899 | ## linear programming 900 | ## linear regression 901 | ## linear resolution 902 | ## linear separator 903 | ## linkage constraints 904 | ## links 905 | ## liquid event 906 | ## Lisp 907 | ## literal 908 | ## local consistency 909 | ## local search 910 | ## locality 911 | ## locality-sensitive hash 912 | ## localization 913 | ## locally structured 914 | ## locally weighted regression 915 | ## location sensors 916 | ## locking 917 | ## log likelihood 918 | ## logic 919 | ## logical equivalence 920 | ## logical minimization 921 | ## logical omniscience 922 | ## logicist 923 | ## logistic function 924 | ## logistic regression 925 | ## long-distance dependencies 926 | ## LOOCV 927 | ## loopy path 928 | ## loosely coupled 929 | ## loss function 930 | ## lottery 931 | ## low-dimensional embedding 932 | 933 | # M 934 | 935 | ## machine reading 936 | ## macrops 937 | ## magic set 938 | ## Mahalanobis distance 939 | ## Maintaining Arc Consistency (MAC) 940 | ## makespan 941 | ## margin 942 | ## marginalization 943 | ## Markov blanket 944 | ## Markov chain 945 | ## Markov decision process 946 | ## Markov localization 947 | ## Markov network 948 | ## Markov property 949 | ## material value 950 | ## materialism 951 | ## matrix 952 | ## max norm 953 | ## max-level 954 | ## maximin 955 | ## maximin equilibrium 956 | ## maximum a posteriori 957 | ## maximum expected utility 958 | ## maximum-likelihood 959 | ## mechanism 960 | ## mechanism design 961 | ## mel frequency cepstral 
coefficient (MFCC) 962 | ## memoization 963 | ## memoized 964 | ## mental states 965 | ## mereology 966 | ## metadata 967 | ## metalevel learning 968 | ## metalevel state space 969 | ## metaphor 970 | ## metareasoning 971 | ## metonymy 972 | ## micromort 973 | ## min-conflicts 974 | ## mind-body problem 975 | ## minimax 976 | ## minimax decision 977 | The **minimax decision** is the optimal choice for MAX: the action leading to the state with the highest minimax value, given that MIN plays so as to reach the state with the lowest minimax value. 978 | ## minimax search 979 | ## minimax value 980 | The **minimax value** of a node in a game tree is the utility (for MAX) of being in the corresponding state, assuming that both players play optimally from there to the end of the game. 981 | ## minimum description length 982 | ## minimum slack 983 | ## minimum-remaining-values 984 | ## Minkowski distance 985 | ## missing precondition 986 | ## missing state variable 987 | ## mixture distribution 988 | ## mixture of Gaussians 989 | ## mobile manipulator 990 | ## modal logic 991 | ## model 992 | ## model checking 993 | ## model selection 994 | ## modus ponens 995 | ## monitoring 996 | ## monotonic preference 997 | ## monotonicity 998 | ## Monte Carlo 999 | ## Monte Carlo localization 1000 | ## Monte Carlo simulation 1001 | ## Monte Carlo tree search 1002 | ## motion blur 1003 | ## motion model 1004 | ## multiactor 1005 | ## multiagent 1006 | ## multiagent planning problem 1007 | ## multiagent systems 1008 | ## multiplexer 1009 | ## multiplicative utility function 1010 | ## multiply connected 1011 | ## multivariate Gaussian 1012 | ## multivariate linear regression 1013 | ## mutation 1014 | ## mutex 1015 | ## mutual preferential independence 1016 | ## mutually utility independent 1017 | ## myopic 1018 | 1019 | # N 1020 | 1021 | ## n-armed bandit 1022 | ## n-gram model 1023 | ## natural kind 1024 | ## natural numbers 1025 | ## nearest-neighbor filter 1026 | 1027 | The **nearest-neighbor filter** repeatedly chooses the closest
pairing of predicted position and observation and adds that pairing to the assignment. 1028 | 1029 | ## nearest-neighbors regression 1030 | ## negation 1031 | ## negative 1032 | ## neuroscience 1033 | ## Newton-Raphson 1034 | ## no-good 1035 | ## no-regret learning 1036 | ## noise 1037 | ## noisy channel model 1038 | ## noisy-OR 1039 | ## nondeterministic 1040 | ## nonholonomic 1041 | ## nonlinear 1042 | ## nonlinear regression 1043 | ## nonmonotonicity 1044 | ## nonparametric 1045 | ## nonparametric density estimation 1046 | ## nonparametric model 1047 | ## normalization 1048 | ## normalized form 1049 | ## normative theory 1050 | ## NP-complete 1051 | ## NP-completeness 1052 | ## null hypothesis 1053 | 1054 | # O 1055 | 1056 | ## object model 1057 | ## objective function 1058 | ## objectivist 1059 | ## occupancy grid 1060 | ## occupied space 1061 | ## occur check 1062 | ## Ockham's razor 1063 | ## odometry 1064 | ## off-policy 1065 | ## omniscience 1066 | ## on-policy 1067 | ## online replanning 1068 | ## online search 1069 | ## ontological commitment 1070 | ## ontological engineering 1071 | ## ontology 1072 | ## open list 1073 | ## open-code 1074 | ## open-loop 1075 | ## operationality 1076 | ## operations research 1077 | ## optimal brain damage 1078 | ## optimal controllers 1079 | ## optimally efficient 1080 | ## optimization 1081 | ## optimizer's curse 1082 | ## optogenetics 1083 | ## ordering constraints 1084 | ## OR-parallelism 1085 | ## orientation 1086 | ## origin function 1087 | ## Othello 1088 | ## out of vocabulary 1089 | ## outcome 1090 | ## overall intensity 1091 | ## overfitting 1092 | 1093 | # P 1094 | 1095 | ## PAC learning 1096 | ## PageRank 1097 | ## parameter independence 1098 | ## parameter learning 1099 | ## parametric model 1100 | ## Pareto dominated 1101 | ## parse tree 1102 | ## parsing 1103 | ## partial assignment 1104 | ## partial information 1105 | ## partial program 1106 | ## partially observable 1107 | ## particle filtering 1108 | ## 
partition 1109 | ## passive learning 1110 | 1111 | A passive learning agent learns from its observations, but the actions the agent takes are not influenced by the learning process. 1112 | This is in contrast to an active learning agent, which chooses actions that will facilitate its own learning. 1113 | 1114 | ## path 1115 | ## path planning 1116 | ## paths 1117 | ## pattern matching 1118 | ## payoff function 1119 | ## PD controller 1120 | ## PDDL 1121 | ## Peano axioms 1122 | ## PEAS 1123 | ## peeking 1124 | ## percept 1125 | 1126 | The term **percept** refers to the agent's perceptual inputs at any given instant. 1127 | 1128 | ## percept schema 1129 | ## percept sequence 1130 | 1131 | An agent's **percept sequence** is the complete history of everything the agent has ever perceived. 1132 | 1133 | ## perception 1134 | ## perception layer 1135 | ## perceptron 1136 | ## perceptron network 1137 | ## perfect rationality 1138 | ## performance element 1139 | ## perplexity 1140 | ## persistence arc 1141 | ## persistent failure model 1142 | ## perspective projection 1143 | ## phone model 1144 | ## phoneme 1145 | ## phrase structure 1146 | ## physical symbol system 1147 | ## physicalism 1148 | ## piano movers 1149 | ## pictorial structure model 1150 | ## PID controller 1151 | ## plan monitoring 1152 | ## plan recognition 1153 | ## planning graph 1154 | ## PlanSAT 1155 | ## playout 1156 | ## ply 1157 | ## pneumatic actuation 1158 | ## point-to-point motion 1159 | ## poker 1160 | ## policy 1161 | ## policy evaluation 1162 | ## policy gradient 1163 | ## policy improvement 1164 | ## policy iteration 1165 | ## policy loss 1166 | ## policy search 1167 | ## policy value 1168 | ## polynomial kernel 1169 | ## pose 1170 | ## positive 1171 | ## possibility axiom 1172 | ## possibility theory 1173 | ## possible world 1174 | ## post-decision disappointment 1175 | ## pragmatics 1176 | ## precedence constraints 1177 | ## precision 1178 | 1179 | **Precision** is a performance measure 
often used to evaluate a model, alongside other measures such as *accuracy* and *recall*. Precision is the fraction of the model's positive predictions that are actually positive. It is given as: 1180 | ***Precision = True Positives / (True Positives + False Positives)*** 1181 | 1182 | ## precondition 1183 | ## prediction 1184 | ## preference elicitation 1185 | ## preference independence 1186 | ## prefix 1187 | ## premise 1188 | ## presentation 1189 | ## principle of indifference 1190 | ## principle of insufficient reason 1191 | ## principle of trichromacy 1192 | ## prioritized sweeping 1193 | ## priority queue 1194 | ## prisoner's dilemma 1195 | ## probabilistic checkmate 1196 | ## probabilistic Horn abduction 1197 | ## probabilistic inference 1198 | ## probability 1199 | ## probability density function 1200 | ## probability distribution 1201 | ## probability model 1202 | ## probit distribution 1203 | ## problem 1204 | ## problem formulation 1205 | ## problem-solving agent 1206 | ## procedural attachment 1207 | ## process 1208 | ## product rule 1209 | ## progression planning 1210 | ## Prolog 1211 | ## pronunciation model 1212 | ## proof 1213 | ## proof-checker 1214 | ## proposition symbol 1215 | ## propositionalize 1216 | ## protein design 1217 | ## provably beneficial 1218 | ## pruning 1219 | ## psychological reasoning 1220 | ## PUMA 1221 | ## pure strategy 1222 | ## pure symbol 1223 | 1224 | # Q 1225 | 1226 | ## Q-learning 1227 | ## QALY 1228 | ## quadratic programming 1229 | ## qualia 1230 | ## qualification problem 1231 | ## qualitative physics 1232 | ## quantification 1233 | ## quantization factor 1234 | ## quasi-logical form 1235 | ## question answering 1236 | ## queue 1237 | ## quiescence search 1238 | 1239 | # R 1240 | 1241 | ## radial basis function 1242 | ## radiometry 1243 | ## random surfer model 1244 | ## random-restart hill climbing 1245 | ## randomized weighted majority algorithm 1246 | ## rational agent 1247 | 1248 | A rational agent selects an action that is expected to
maximize its performance measure, given the evidence provided by the 1249 | *percept sequence* and whatever built-in knowledge the agent has. 1250 | 1251 | ## rationalism 1252 | ## rationality 1253 | ## reachable set 1254 | ## reactive control 1255 | ## reactive layer 1256 | ## real-time AI 1257 | ## realizable 1258 | ## reasoning 1259 | ## recall 1260 | 1261 | **Recall** is a measure of performance used alongside ***Precision***, ***Accuracy*** and ***F-score***. It is defined as the ratio of the *true positives* to the *sum of true positives and false negatives*. 1262 | 1263 | ## reciprocal rank 1264 | ## recognition 1265 | ## recombine 1266 | ## reconstruction 1267 | ## record linkage 1268 | ## rectangular grid 1269 | ## recurrent network 1270 | ## recursive 1271 | ## recursive best-first search 1272 | ## reduct 1273 | ## reference class 1274 | ## reference controller 1275 | ## reference path 1276 | ## reflect 1277 | ## reflective architecture 1278 | ## refutation 1279 | ## regions 1280 | ## regression 1281 | ## regression planning 1282 | ## regression to the mean 1283 | ## regret 1284 | ## regular expression 1285 | ## regularization 1286 | ## reinforcement 1287 | ## reinforcement learning 1288 | 1289 | In **reinforcement learning** the agent learns from a series of 1290 | reinforcements: rewards or punishments.
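As an illustrative sketch of the precision and recall measures defined above, computed from made-up confusion-matrix counts:

```python
def precision(tp, fp):
    # Fraction of positive predictions that are actually positive.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of actual positives that the model found.
    return tp / (tp + fn)

# Hypothetical counts: true positives, false positives, false negatives.
tp, fp, fn = 8, 2, 4
print(precision(tp, fp))  # 0.8
print(recall(tp, fn))     # ≈ 0.667
```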
1291 | 1292 | ## rejection sampling 1293 | ## relational extraction 1294 | ## relational uncertainty 1295 | ## relative error 1296 | ## relative likelihood 1297 | ## relaxed problem 1298 | ## relevance 1299 | ## relevance feedback 1300 | ## relevant 1301 | ## relevant-states 1302 | ## renaming 1303 | ## rendering 1304 | ## rendering model 1305 | ## repeated state 1306 | ## resolution 1307 | ## resolvent 1308 | ## result set 1309 | ## Rete algorithm 1310 | ## retrograde 1311 | ## reusable 1312 | ## revelation principle 1313 | ## revenue equivalence theorem 1314 | ## reward 1315 | ## reward shaping 1316 | ## reward-to-go 1317 | ## risk-averse 1318 | ## risk-neutral 1319 | ## risk-seeking 1320 | ## Robocup 1321 | ## robot navigation 1322 | ## robotic soccer 1323 | ## robust control theory 1324 | ## ROC curve 1325 | ## rollout 1326 | ## Roomba 1327 | ## Root Mean Square (RMS) 1328 | 1329 | The **root mean square** (RMS) of a set of values is the square root of the mean of their squares; it is often used as an error estimator. The formula is given as: 1330 | ***RMS = square_root[(a1^2 + a2^2 + ... + an^2) / n], where a1, a2, ..., an are the values*** 1331 |
In RMS error estimation, the values a1, a2, ..., an are the individual errors. 1332 | 1333 | 1334 | ## rules 1335 | 1336 | # S 1337 | 1338 | ## S-set 1339 | ## sample complexity 1340 | ## sample space 1341 | ## sampling rate 1342 | ## SARSA 1343 | ## SAT 1344 | ## satisfiability 1345 | ## satisfiability threshold conjecture 1346 | ## satisficing 1347 | ## scaled orthographic projection 1348 | ## scanning lidars 1349 | ## scene 1350 | ## schedule 1351 | ## schedulers 1352 | ## schema 1353 | ## Scrabble 1354 | ## sealed-bid second-price auction 1355 | ## search 1356 | ## search cost 1357 | ## search tree 1358 | ## segmentation 1359 | ## selection 1360 | ## semantic ambiguity 1361 | ## semantics 1362 | ## semi-supervised learning 1363 | ## semidynamic 1364 | ## semiotics 1365 | ## sensitivity analysis 1366 | ## sensor interface layer 1367 | ## sensor Markov assumption 1368 | ## sensorless 1369 | ## sequence form 1370 | ## sequential 1371 | ## sequential Monte Carlo 1372 | ## set of support 1373 | ## set semantics 1374 | ## set-cover problem 1375 | ## set-level 1376 | ## shading 1377 | ## shadow 1378 | ## shape 1379 | ## shaving 1380 | ## shortcuts 1381 | ## shoulder 1382 | ## sibyl attack 1383 | ## sideways move 1384 | ## sigmoid perceptron 1385 | ## significance test 1386 | ## similarity networks 1387 | ## simulated annealing 1388 | ## simultaneous localization and mapping (SLAM) 1389 | ## single agent 1390 | ## singly connected 1391 | ## singular 1392 | ## singularity 1393 | ## situation 1394 | ## situation calculus 1395 | ## skeletonization 1396 | ## Skolemization 1397 | 1398 | **Skolemization** is the process of eliminating existential quantifiers by replacing each existentially quantified variable with a Skolem constant (or, inside the scope of universal quantifiers, a Skolem function of the universally quantified variables). It is related to the inference rule ***Existential Elimination***, where an inference involving sentence *a*, variable *v* and constant *k* can be made provided *k* does not occur anywhere in the knowledge base.
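As a quick numeric sketch, the standard root-mean-square computation (square root of the mean of the squares) from the Root Mean Square entry above:

```python
import math

def rms(values):
    # Square root of the mean of the squared values; for RMS error,
    # the values are the individual errors.
    return math.sqrt(sum(v * v for v in values) / len(values))

print(rms([3.0, 4.0]))  # sqrt((9 + 16) / 2) = sqrt(12.5) ≈ 3.536
```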
1399 | 1400 | ## slack 1401 | ## slant 1402 | ## sliding window 1403 | ## small-scale learning 1404 | ## smoothing 1405 | 1406 | **Smoothing** is the process of computing the distribution over past states given evidence up to the present. 1407 | 1408 | ## soccer 1409 | ## social laws 1410 | ## Socratic reasoner 1411 | ## soft margin 1412 | ## softmax function 1413 | 1414 | The **softmax function** is a mathematical function often used for classification tasks: it converts a vector of real-valued scores into a probability distribution over all the available classes. Its formula is given as: 1415 | 1416 | *F(x_i) = exp(x_i) / summation(exp(x_j))* with the summation being over all the classes. 1417 | 1418 | ## software architecture 1419 | ## sokoban 1420 | ## solution 1421 | ## sonar sensors 1422 | ## sound 1423 | ## spam detection 1424 | ## sparse 1425 | ## sparse model 1426 | ## spatial reasoning 1427 | ## specialization 1428 | ## specular reflection 1429 | ## specularities 1430 | ## speech act 1431 | ## Speech Recognition 1432 | 1433 | **Speech recognition** is the task of analyzing audio and ***recognizing*** the speech it contains. This may range from simply locating the parts of the audio that contain speech, to identifying properties of the speaker, to the more complex task of transcribing the words spoken. The field overlaps substantially with artificial intelligence and machine learning.
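The softmax function defined above can be sketched in a few lines of pure Python (subtracting the maximum score before exponentiating is a standard trick for numerical stability, not part of the definition itself):

```python
import math

def softmax(scores):
    # Convert a list of real-valued scores into a probability distribution.
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)       # the largest score gets the largest probability
print(sum(probs))  # sums to 1: a proper probability distribution
```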
1434 | 1435 | ## split point 1436 | ## stable 1437 | ## stable model 1438 | ## standard normal distribution 1439 | ## standardizing apart 1440 | ## Starcraft 1441 | ## start symbol 1442 | ## state abstraction 1443 | ## state estimation 1444 | ## state space 1445 | ## state-space landscape 1446 | ## static 1447 | ## stationarity assumption 1448 | ## stationary distribution 1449 | ## stationary process 1450 | ## stemming 1451 | ## step cost 1452 | ## step size 1453 | ## stereo vision 1454 | ## stochastic 1455 | ## stochastic beam search 1456 | ## stochastic games 1457 | ## stochastic hill climbing 1458 | ## stochastic policy 1459 | ## straight-line distance 1460 | ## strategic form 1461 | ## strategy 1462 | ## strategy profile 1463 | ## strategy-proof 1464 | ## strong AI 1465 | ## structural EM 1466 | ## structured representation 1467 | ## stuff 1468 | ## subcategory 1469 | ## subgoal independence 1470 | ## subject-verb agreement 1471 | ## subjectivist 1472 | ## subproblem 1473 | ## substitution 1474 | ## subsumption 1475 | ## subsumption architecture 1476 | ## subsumption lattice 1477 | ## successor 1478 | ## successor-state axiom 1479 | ## Sudoku 1480 | ## sum of squared differences 1481 | ## superpixels 1482 | ## supervised learning 1483 | 1484 | In **supervised learning** the agent observes some example 1485 | input-output pairs and learns a function that maps from input to 1486 | output. 
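A minimal sketch of supervised learning, not from the book: from example input-output pairs, the agent learns a function (here a straight line y = a·x + b fitted by ordinary least squares) that maps inputs to outputs. The data points are made up.

```python
def fit_line(pairs):
    # Ordinary least squares for y = a*x + b, from the normal equations.
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# The learned function generalizes beyond the observed examples.
a, b = fit_line([(0, 1), (1, 3), (2, 5)])  # points on y = 2x + 1
print(a, b)  # 2.0 1.0
```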
1487 | 1488 | ## support vector machine 1489 | ## symmetry-breaking constraint 1490 | ## synchro drive 1491 | ## synchronic 1492 | ## synchronization 1493 | ## syntactic ambiguity 1494 | ## syntactic theory 1495 | ## syntax 1496 | ## synthesis 1497 | 1498 | # T 1499 | 1500 | ## table lookup 1501 | ## tabu search 1502 | ## tactile sensors 1503 | ## taxonomy 1504 | ## Taylor expansion 1505 | ## technological singularity 1506 | ## template 1507 | ## temporal logic 1508 | ## temporal-difference 1509 | ## temporal-projection 1510 | ## term 1511 | ## terminal states 1512 | ## terminal test 1513 | ## test set 1514 | ## text classification 1515 | ## texture 1516 | ## theorem proving 1517 | ## thrashing 1518 | ## tiling 1519 | ## tilt 1520 | ## time and tense 1521 | ## time line 1522 | ## time of flight camera 1523 | ## time to answer 1524 | ## tit-for-tat 1525 | ## topological sort 1526 | ## total Turing Test 1527 | ## toy problem 1528 | ## trace 1529 | ## tractability 1530 | ## tragedy of the commons 1531 | ## trail 1532 | ## training curve 1533 | ## training set 1534 | ## transfer model 1535 | ## transhumanism 1536 | ## transition model 1537 | ## transition probability 1538 | ## transpose 1539 | ## transposition table 1540 | ## traveling salesperson problem 1541 | ## tree decomposition 1542 | ## tree width 1543 | ## treebank 1544 | ## truth 1545 | ## truth value 1546 | ## truth-preserving 1547 | ## truth-revealing 1548 | ## turbo decoding 1549 | ## Turing Test 1550 | The **Turing Test** is a test proposed by Alan Turing in 1950, which is used to determine whether a computer is intelligent by evaluating the "human-ness" of its responses. 
1551 | ## type A strategy 1552 | ## type B strategy 1553 | ## type constraint 1554 | ## type signature 1555 | 1556 | # U 1557 | 1558 | ## ultraintelligent machine 1559 | ## unary constraint 1560 | ## unbiased 1561 | ## uncertainty 1562 | ## underconstrained 1563 | ## understanding 1564 | ## unification 1565 | ## unifier 1566 | ## uniform-cost search 1567 | ## Unimate 1568 | ## uninformed search 1569 | ## unique action axioms 1570 | ## unique string axiom 1571 | ## unit clause 1572 | ## unit preference 1573 | ## unit propagation 1574 | ## unit resolution 1575 | ## units function 1576 | ## universal grammar 1577 | ## Universal Instantiation 1578 | ## unknown 1579 | ## unobservable 1580 | ## unrolling 1581 | ## unsupervised clustering 1582 | ## unsupervised learning 1583 | 1584 | In **unsupervised learning** 1585 | the agent learns patterns in the input without any explicit feedback. 1586 | 1587 | ## upper confidence bounds on trees 1588 | ## upper ontology 1589 | ## Urban Challenge 1590 | ## utility 1591 | ## utility independence 1592 | 1593 | # V 1594 | 1595 | ## vague 1596 | ## validation set 1597 | 1598 | A part of the dataset, held out from training, that is used to tune the hyperparameters of a machine learning model or to select among candidate models. It can also be used to determine a stopping point for the back-propagation algorithm. 1599 | 1600 | ## validity 1601 | ## value 1602 | ## value alignment 1603 | ## vanishing point 1604 | 1605 | In perspective projection, a **vanishing point** is the point in the image plane at which a family of parallel lines in the scene appears to converge. 1606 | 1607 | ## variable 1608 | 1609 | A variable is a symbol on whose value a function, polynomial, etc., depends. 1610 | 1611 | ## variational approximation 1612 | ## variational parameters 1613 | ## VCG 1614 | ## vector 1615 | 1616 | A vector is formally defined as an element of a vector space; a vector in R^n is given by n coordinates and can be specified as (A_1, A_2, ..., A_n).
1617 | 1618 | ## vector field histograms 1619 | ## vehicle interface layer 1620 | ## verification 1621 | ## version space 1622 | ## Vickrey-Clarke-Groves 1623 | ## virtual counts 1624 | ## virtual support vector machine 1625 | ## VLSI layout 1626 | ## vocabulary 1627 | ## Voronoi graph 1628 | 1629 | # W 1630 | 1631 | ## weak AI 1632 | ## weak learning 1633 | ## weight 1634 | ## weight space 1635 | ## weighted A* search 1636 | ## weighted training set 1637 | ## wide content 1638 | ## Widrow-Hoff rule 1639 | ## Winnow algorithm 1640 | ## workspace representation 1641 | ## wrapper 1642 | ## wumpus world 1643 | 1644 | # Z 1645 | 1646 | ## zero-sum games 1647 | 1648 | In game theory and economic theory, a **zero-sum game** is a mathematical representation of a situation in which each participant's gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participants. If the total gains of the participants are added up and the total losses are subtracted, they will sum to zero. 1649 | --------------------------------------------------------------------------------
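As an illustrative check of the zero-sum property defined above, using the classic matching-pennies payoffs: a game is zero-sum when the payoffs in every outcome cancel exactly.

```python
# Payoff matrix for matching pennies: entry (row, column) gives
# (payoff to player 1, payoff to player 2).
payoffs = {
    ("Heads", "Heads"): (1, -1),
    ("Heads", "Tails"): (-1, 1),
    ("Tails", "Heads"): (-1, 1),
    ("Tails", "Tails"): (1, -1),
}

def is_zero_sum(payoffs):
    # One player's gain exactly balances the other's loss in every outcome.
    return all(p1 + p2 == 0 for p1, p2 in payoffs.values())

print(is_zero_sum(payoffs))  # True
```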