├── Neural-Networks.md ├── Software-Engineering.md ├── Prolog.md ├── pics ├── ca │ ├── 1.png │ ├── 2.png │ ├── 3.png │ ├── 4.png │ ├── 5.png │ ├── 6.png │ ├── 7.png │ ├── 8.png │ ├── 9.png │ ├── 10.png │ ├── 11.png │ ├── 12.png │ ├── 13.png │ ├── 14.png │ ├── 15.png │ ├── 16.png │ ├── 17.png │ ├── 18.png │ ├── 19.png │ ├── 20.png │ ├── 21.png │ ├── 22.png │ ├── 23.png │ ├── 24.png │ ├── 25.png │ ├── 26.png │ ├── 27.png │ └── my-diagram.png ├── compiler │ ├── 1.png │ ├── 10.png │ ├── 11.png │ ├── 12.png │ ├── 2.png │ ├── 3.png │ ├── 4.png │ ├── 5.png │ ├── 6.png │ ├── 7.png │ ├── 8.png │ └── 9.png └── simulation │ ├── 1.png │ ├── 2.png │ └── 3.png ├── _config.yml ├── README.md ├── CONTRIBUTING.md ├── Simulation.md ├── Compiler.md └── Computer-Architecture.md /Neural-Networks.md: -------------------------------------------------------------------------------- 1 | Need help with it 2 | -------------------------------------------------------------------------------- /Software-Engineering.md: -------------------------------------------------------------------------------- 1 | Need help with it 2 | -------------------------------------------------------------------------------- /Prolog.md: -------------------------------------------------------------------------------- 1 | https://exercism.io/my/tracks/prolog 2 | -------------------------------------------------------------------------------- /pics/ca/1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/1.png -------------------------------------------------------------------------------- /pics/ca/2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/2.png -------------------------------------------------------------------------------- /pics/ca/3.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/3.png -------------------------------------------------------------------------------- /pics/ca/4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/4.png -------------------------------------------------------------------------------- /pics/ca/5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/5.png -------------------------------------------------------------------------------- /pics/ca/6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/6.png -------------------------------------------------------------------------------- /pics/ca/7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/7.png -------------------------------------------------------------------------------- /pics/ca/8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/8.png -------------------------------------------------------------------------------- /pics/ca/9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/9.png -------------------------------------------------------------------------------- /pics/ca/10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/10.png 
-------------------------------------------------------------------------------- /pics/ca/11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/11.png -------------------------------------------------------------------------------- /pics/ca/12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/12.png -------------------------------------------------------------------------------- /pics/ca/13.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/13.png -------------------------------------------------------------------------------- /pics/ca/14.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/14.png -------------------------------------------------------------------------------- /pics/ca/15.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/15.png -------------------------------------------------------------------------------- /pics/ca/16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/16.png -------------------------------------------------------------------------------- /pics/ca/17.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/17.png -------------------------------------------------------------------------------- /pics/ca/18.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/18.png -------------------------------------------------------------------------------- /pics/ca/19.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/19.png -------------------------------------------------------------------------------- /pics/ca/20.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/20.png -------------------------------------------------------------------------------- /pics/ca/21.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/21.png -------------------------------------------------------------------------------- /pics/ca/22.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/22.png -------------------------------------------------------------------------------- /pics/ca/23.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/23.png -------------------------------------------------------------------------------- /pics/ca/24.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/24.png -------------------------------------------------------------------------------- /pics/ca/25.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/25.png -------------------------------------------------------------------------------- /pics/ca/26.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/26.png -------------------------------------------------------------------------------- /pics/ca/27.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/27.png -------------------------------------------------------------------------------- /pics/compiler/1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/1.png -------------------------------------------------------------------------------- /pics/compiler/10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/10.png -------------------------------------------------------------------------------- /pics/compiler/11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/11.png -------------------------------------------------------------------------------- /pics/compiler/12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/12.png -------------------------------------------------------------------------------- /pics/compiler/2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/2.png -------------------------------------------------------------------------------- /pics/compiler/3.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/3.png -------------------------------------------------------------------------------- /pics/compiler/4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/4.png -------------------------------------------------------------------------------- /pics/compiler/5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/5.png -------------------------------------------------------------------------------- /pics/compiler/6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/6.png -------------------------------------------------------------------------------- /pics/compiler/7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/7.png -------------------------------------------------------------------------------- /pics/compiler/8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/8.png -------------------------------------------------------------------------------- /pics/compiler/9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/compiler/9.png -------------------------------------------------------------------------------- /pics/ca/my-diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/ca/my-diagram.png 
-------------------------------------------------------------------------------- /pics/simulation/1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/simulation/1.png -------------------------------------------------------------------------------- /pics/simulation/2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/simulation/2.png -------------------------------------------------------------------------------- /pics/simulation/3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerolloz/03-CS-Second-Term/HEAD/pics/simulation/3.png -------------------------------------------------------------------------------- /_config.yml: -------------------------------------------------------------------------------- 1 | theme: jekyll-theme-cayman 2 | plugins: 3 | - jemoji 4 | search_enabled: true # Enable or disable the site search 5 | aux_links: 6 | "03-CS-Second-Term on GitHub": 7 | - "//github.com/kerolloz/03-CS-Second-Term" 8 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Summaries for 3 CS Second Term 2 | 3 | | Subject | Exam Date | 4 | |:----------------------------------------------------|:----------| 5 | | [Compiler Design](./Compiler.md) | 26/5 | 6 | | [Computer Architecture](./Computer-Architecture.md) | 29/5 | 7 | | [Prolog](./Prolog.md) | 2/6 | 8 | | [Neural Networks](./Neural-Networks.md) | 9/6 | 9 | | [Simulation](./Simulation.md) | 12/6 | 10 | | [Software Engineering](./Software-Engineering.md) | 16/6 | 11 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: 
-------------------------------------------------------------------------------- 1 | # How to contribute to the project 2 | 3 | > Please consider [markdown](https://kerolloz.github.io/markdown) as your reference 4 | 5 | ## Install atom 6 | 7 | Download [atom](https://atom.io) and install it. 8 | It's an easy-to-use text editor made by GitHub. 9 | It makes your life much easier when dealing with markdown. 10 | 11 | ## Fork the project 12 | 13 | - go to your forked repo 14 | - clone it 15 | - open it with atom 16 | - make your changes, stage them 17 | - commit them **keep a good commit message** 18 | - push changes to your repo 19 | - submit a pull request 20 | 21 | 22 | *NOTE*: You can contact me if you face any trouble or have any problems 23 | -------------------------------------------------------------------------------- /Simulation.md: -------------------------------------------------------------------------------- 1 | # Simulation Lectures 2 | 3 | - [x] [Lecture 1](#lecture-1) 4 | - [x] [Lecture 2](#lecture-2) 5 | - [x] [Lecture 3](#lecture-3) 6 | - [x] [Lecture 4](#lecture-4) 7 | - [ ] [Lecture 5](#lecture-5) :construction: 8 | - [ ] [Lecture 6](#lecture-6) 9 | 10 | # Lecture 1 11 | 12 | ## Systems and System Environment: 13 | 14 | * A system is defined as a group of objects that are joined together in some regular interaction toward the accomplishment of some purpose. 15 | - For example: an automobile factory, where machines, component parts, and workers operate jointly along an assembly line. 16 | 17 | * System environment: changes occurring outside the system. 18 | - Factory: arrival of orders 19 | - Banks: arrival of customers 20 | 21 | ## Components of a System: 22 | 23 | * **Entities:** The elements that make up the system. 24 | * **Attribute:** A property of an entity. 25 | * **Activity:** represents a time period of specified length.
26 | * **State of system:** is defined to be that collection of variables necessary to describe the system at any time, relative to the objective of the study. 27 | - In the study of a bank, possible state variables are the number of busy tellers, the number of customers waiting in the queue or being served, and the arrival and service times of the next customer. 28 | * **Event:** is defined as an instantaneous occurrence that may change the state of the system. 29 | * **Endogenous** – used to describe the activities and events occurring within a system. 30 | * **Exogenous** – used to describe activities and events in the environment that affect the system. 31 | - In the bank: the arrival of a customer is an exogenous event; the completion of service of a customer is an endogenous event. 32 | 33 | ## Examples of components of a system: 34 | 35 | * Banking System 36 | 37 | | Entities | Attributes | Activities | Events | State variables | 38 | |:---------:|:-----------:|:--------------------------------------------:|:-----------------------:|:-------------------------------------------------------------:| 39 | | Customers | the balance | making deposits in their checking accounts. | arrival,<br>
departure. | number of busy tellers,
arrival time of the next customer | 40 | 41 | * Rail System 42 | 43 | | Entities | Attributes | Activities | Events | State variables | 44 | |:---------:|:------------------------:|:----------:|:------------------------------------------:|:-----------------------------------------------------------------------------:| 45 | | Commuters | Origination, Destination | Traveling | arrival at station, arrival at destination | Number of commuters waiting at each station,<br>
number of commuters traveling | 46 | 47 | * Production System 48 | 49 | | Entities | Attributes | Activities | Events | State variables | 50 | |:--------:|:--------------------------------:|:--------------------------:|:---------:|:---------------------------------------:| 51 | | Machines | Speed, Capacity, Breakdown rate | Welding, Cutting, Stamping | breakdown | Status of machines – busy, idle or down | 52 | 53 | * Communications System 54 | 55 | | Entities | Attributes | Activities | Events | State variables | 56 | |:--------:|:--------------------:|:------------:|:----------------------:|:--------------------------------------------:| 57 | | Messages | Length, Destination | Transmitting | arrival at destination | Number of messages waiting to be transmitted | 58 | 59 | 60 | ## Ways to study a system: 61 | 62 | ![ways to study a System](./pics/simulation/1.png) 63 | 64 | Simulation 65 | : is the imitation of the operation of a real-world process or system over time. 66 | 67 | * A model constructs a conceptual framework that describes a system. 68 | * Simulations involve designing a model of a system and carrying out experiments on it as it progresses through time. 69 | 70 | 71 | ## Goal of modeling and simulation: 72 | 73 | * A model can be used to investigate a wide variety of “what if” questions about a real-world system. 74 | * Simulation can be used as: 75 | - An analysis tool for predicting the effect of changes. 76 | - A design tool to predict the performance of a new system. 77 | * It is better to do simulation before implementation. 78 | 79 | ## Reasons for using a model: 80 | 81 | 1. Helps in understanding the behavior of a real system before it is built. 82 | 2. The cost of building and experimenting with a model is lower. 83 | 3. Models have the capability of scaling time or space in a favorable manner. 84 | 85 | 86 | 87 | ## When Simulation Is Appropriate: 88 | 89 | * Simulation enables the study of the internal interactions of subsystems in a complex system.
90 | * Simulation can be used with new designs and policies before implementation. 91 | * Simulation models designed for training make learning possible without cost and disruption. 92 | 93 | ## When Simulation Is Not Appropriate: 94 | 95 | * When the problem can be solved by common sense. 96 | * If it is easier to perform direct experiments. 97 | * If costs exceed savings. 98 | * If resources or time are not available. 99 | * If system behavior is too complex, like human behavior. 100 | 101 | 102 | 103 | ## Advantages of simulation: 104 | 105 | * New policies, operating procedures, information flows, and so on can be explored without disrupting the ongoing operation of the real system. 106 | * Time can be compressed or expanded to allow for a speed-up or slow-down of the phenomenon (the simulation clock is under your control). 107 | * A simulation study can help in understanding how the system operates. 108 | * “What if” questions can be answered. 109 | 110 | ## Disadvantages of simulation: 111 | 112 | * Model building requires special training. 113 | * Vendors of simulation software have been actively developing packages that contain models that only need input (templates). 114 | * Simulation results can be difficult to interpret. 115 | * Simulation modeling and analysis can be time-consuming and expensive. 116 | 117 | ## Areas of application: 118 | 119 | 1. Semiconductor Manufacturing. 120 | 2. Military applications. 121 | 3. Transportation modes and Traffic. 122 | 4. Business Process Simulation. 123 | 5. Health Care. 124 | 6. Risk analysis. 125 | 7. CPU, Memory. 126 | 8. Network simulation. 127 | 128 | ## How to simulate: 129 | 130 | 1. By hand. 131 | 2. Spreadsheets. 132 | 3. Programming in General Purpose Languages => Java. 133 | 4. Simulation Languages => SIMAN. 134 | 5. Simulation Packages => Arena. 135 | 136 | ## Types of Models: 137 | 138 | * All models can be grouped into three types: 139 | 1. Graphic models: 140 | - Conceptual drawings, graphs, charts, and diagrams.
141 | - Football coaches develop them to show how players (components) should interact during an offensive or defensive play (system). 142 | 143 | 2. Mathematical models: 144 | - Show relationships in terms of formulas. 145 | - Complex mathematical models track storms and space flights, predict ocean currents and land erosion, and help scientists conduct complex experiments. 146 | 147 | 3. Physical models: 148 | - Three-dimensional representations of reality => (Model Airplane, Model House, Model City). 149 | - Two types of physical models exist: 150 | 1. Mock-up: is used to evaluate the styling, balance, color, or other aesthetic features of a technology artifact. 151 | - Mock-ups are generally constructed of materials that are easy to work with => wood, clay, Styrofoam, paper, and various kinds of cardboard. 152 | 2. Prototype: is a working model of a system. 153 | - Prototypes are built to test the operation, maintenance, 154 | and/or safety of the item and are built of the same material as the final product. 155 | 156 | ## Types of Models: 157 | 158 | * Dynamic :vs: Static 159 | * Stochastic :vs: Deterministic 160 | * Discrete :vs: Continuous 161 | 162 | ![types of models](./pics/simulation/2.png) 163 | 164 | ## Characterizing a Simulation Model: 165 | 166 | 167 | | | Deterministic | Stochastic (NON-DETERMINISTIC or PROBABILISTIC) | 168 | |:--------------------:|:--------------------------------------------------------:|:------------------------------------------------------:| 169 | | **Random variables** | No random variables in the model. | The model has one or more random variables as inputs. | 170 | | **Behavior** | Behavior is predictable. | Behavior cannot be predicted. | 171 | | **Example** | Clinic: patients arriving at scheduled appointment time. | Bank: random customer inter-arrival and service times. | 172 | 173 |
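The two example columns in the table above can be made concrete with a short sketch (this is not from the lecture; it is a minimal single-teller queue model, and the 10- and 9-minute rates are made-up numbers):

```python
import random

def single_server_wait(interarrivals, services):
    """Average customer wait in a single-server FIFO queue (e.g. one bank teller)."""
    clock = wait_total = server_free_at = 0.0
    for gap, service in zip(interarrivals, services):
        clock += gap                         # next customer arrives
        start = max(clock, server_free_at)   # waits if the server is still busy
        wait_total += start - clock
        server_free_at = start + service
    return wait_total / len(interarrivals)

n = 10_000
# Deterministic (clinic-like): fixed appointment gaps, fixed service times.
# Every run gives exactly the same, predictable answer.
print(single_server_wait([10.0] * n, [9.0] * n))  # 0.0 -- nobody ever waits

# Stochastic (bank-like): random inter-arrival and service times.
# Each seed gives a different answer; only the distribution is predictable.
random.seed(1)
arrivals = [random.expovariate(1 / 10) for _ in range(n)]
services = [random.expovariate(1 / 9) for _ in range(n)]
print(single_server_wait(arrivals, services))
```

With fixed 10-minute gaps and 9-minute services the wait is always zero, while the random version produces a different (and typically large) average wait on every seed; that is exactly the predictable/unpredictable split the table describes.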
174 | 175 | | Static | Dynamic | 176 | |:--------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------:| 177 | | No time element. | Passage of time is important part of model. | 178 | | Time Independent view of the system. | Time dependent view of the system. | 179 | | e.g. Class has same number of students in an year. | E.g. ATM can accept card only when it is in ready state. ATM cannot read card when it is in ERROR state. Thus state of ATM is a dynamic aspect. | 180 | 181 |
182 | 183 | | Discrete system | Continuous system | 184 | |:-----------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------:| 185 | | state variables change only at discrete set of points in time (a countable number of points in time). | the state variables change continuously over time (infinite number of states). | 186 | 187 | ## How to develop a model? 188 | 189 | 1. Determine the goals and objectives. 190 | 2. Build a conceptual model. 191 | 3. Convert into a specification model. 192 | 4. Convert into a computational model. 193 | 5. Verify. 194 | 6. Validate. 195 | 196 | ## Three Model Levels: 197 | 198 | 1. Conceptual: 199 | - Very high level 200 | - How comprehensive should the model be? 201 | - What are the state variables, which are dynamic, and which are important? 202 | 203 | 2. Specification: 204 | - On paper 205 | - May involve equations, pseudocode, etc. 206 | - How will the model receive input? 207 | 208 | 3. Computational: 209 | - A computer program. 210 | - simulation language. 211 | 212 | ## Steps in Simulation Study: 213 | 214 | ![steps in simulation Study](./pics/simulation/3.png) 215 | 216 | # Lecture 2 217 | 218 | All of it is just section problems. 219 | 220 | # Lecture 3 221 | 222 | * Problem formulation is the most important step in a simulation study. It have a significant impact on the success of the simulation study. 223 | * The first step in simulation project is to ensure that adequate attention has been directed toward understanding what is to be accomplished by performing the study. 224 | * Problem formulation process consist of: 225 | 1. A formal problem statement 226 | 2. Orientation of the system 227 | 3. Establishment of specific project objectives 228 | 229 | **1. 
Formal Problem Statement:** 230 | - Goal: provide both the practitioner and the potential audience with a clearly understandable high-level justification for the simulation. 231 | - The goals include: 232 | 1. Increasing customer satisfaction. 233 | - is of fundamental interest in any system involving service operations. 234 | - This type of system typically includes waiting or processing queues. 235 | - Reductions in queue time usually result in increased customer satisfaction. 236 | - Reductions in the number of late jobs will reduce operating costs and will increase customer satisfaction. 237 | 238 | 2. Increasing throughput. 239 | - involves the amount of products or number of jobs that can be processed over a given period of time. 240 | - This can involve the elimination or improvement of different process operations. 241 | - It can also include the identification and redesign of bottleneck processes. 242 | 243 | 3. Reducing waste. 244 | - Reducing waste results in reduced operating costs and increased net profits. 245 | - Waste can be reduced through reductions in damage and obsolescence. 246 | - Damage can involve processes that are time- and temperature-critical. 247 | - Obsolescence waste can result from an organization’s failure to bring its product to the market on time. 248 | 249 | 4. Reducing work in progress. 250 | - Work in progress is work that requires further processing for completion. 251 | - Work in progress is commonly found in processes that require multiple discrete operations. 252 | - Work in progress typically requires storage before the next process can be carried out. 253 | - Reducing work in progress reduces process costs associated with resource capacity and storage requirements. 254 | 255 | 256 | 257 | ## Tools for Developing the Problem Statement: 258 | 259 | * There are two common tools available to the practitioner for 260 | assisting with the problem statement: 261 | 1.
Fishbone / Cause-Effect / Ishikawa Chart: 262 | - The purpose of this chart is to identify the cause of the problem or effect of interest. 263 | - The head of the fish is labeled with the problem or effect. 264 | - When the Fishbone diagram is complete, the practitioner can concentrate on the most important sources or causes of the problem. 265 | 266 | 2. Pareto Chart: 267 | - Only a few factors are the cause of many problems. This is frequently referred to as the 80–20 rule: 80% of the problem is caused by 20% of the factors. 268 | 269 | **2. Orientation of the system:** 270 | * Goal: For the practitioner to familiarize himself or herself with the system. 271 | * Orientation Process/Types: 272 | 1. Initial orientation visit: 273 | - Goal: To obtain a high-level understanding of the basic inputs and outputs of the system. 274 | 2. Detailed flow orientation visit: 275 | - Goal: An understanding of how the system operates. 276 | 3. Review orientation visit: 277 | - Goal: To ensure that the understanding of the system operation is consistent with the practitioners’ understanding of the system and/or flow chart. 278 | 279 | **3. Project Objectives:** 280 | 281 | * Common project objectives may involve: 282 | 1. Performance-related operating policies 283 | 2. Performance-related resource policies 284 | 3. Cost-related resource policies 285 | 4. Equipment capabilities evaluation 286 | 287 | 288 | # Lecture 4 289 | 290 | ## Project Management Concepts: 291 | 292 | 1. Project parameters, standard measurements: 293 | - Time: 294 | - Associated with the project schedule, which is implemented as a Gantt chart. 295 | - If there are significant differences between the actual project progress and the project schedule, something may be incorrect. 296 | - If the project is continuously ahead of schedule, the following situations may exist: 297 | - The budget is being consumed at an excessive rate. 298 | - Excess resources are assigned to the project.
299 | - The project is much less complex than originally estimated. 300 | - If the project is continuously behind schedule, any of the following situations may be present: 301 | - Expenditures are being delayed. 302 | - Insufficient resources are assigned to the project. 303 | - The project is much more complex than originally estimated. 304 | - Cost: 305 | - The cost parameter means that there is a budget associated with the project. 306 | - Simulation project costs may include computer hardware and software. 307 | - The budget is perhaps the most easily tracked project parameter. 308 | - In the event that the project is continuously under budget, the following situations may be present: 309 | - Expenditures are being delayed. 310 | - Insufficient or inexperienced resources are assigned to the project. 311 | - The project is much smaller or less complex than originally estimated. 312 | - Problems may also exist if the project is continuously over budget. Typical causes are: 313 | - The budget is being consumed at an excessive rate. 314 | - Excess resources are assigned to the project. 315 | - The project is much larger or more complex than originally estimated. 316 | - Technical performance: 317 | - Specified in the problem statement phase of the project. 318 | - The project must achieve these objectives in order to be considered successful from a technical performance standpoint. 319 | 1. Project life cycles, common phases: 320 | - Conceptual: 321 | - During this phase, the organization will formally assign the project to the project manager. 322 | - The problem formulation process may be completed during this life cycle phase. 323 | - Planning: 324 | - During the planning phase, the project manager will identify all of the project team members. 325 | - The simulation-planning process activities involving the work breakdown structure, linear responsibility chart, and Gantt chart are conducted during this life-cycle phase.
326 | - Execution: 327 | - Most of the simulation project activities will be completed during this phase. 328 | - These activities include: 329 | the system definition, 330 | input data collection and analysis, 331 | model translation, 332 | verification, 333 | validation, 334 | experimentation, and analysis. 335 | - Completion: 336 | - Turning over the results of the project, which primarily include the simulation project report. 337 | 1. Project stakeholders 338 | - Internal stakeholders: 339 | - Individuals who are directly associated with the simulation project team => (Practitioner/project manager, Analysts, Statisticians, Data collectors). 340 | - External stakeholders: 341 | - Individuals or organizations who are not directly associated with the simulation project team. 342 | - Stakeholder strategy 343 | 344 | 345 | ## Simulation Project Manager Functions: 346 | 347 | There are **five** generally accepted project manager functions that will affect the success or failure of the simulation project: 348 | 349 | 1. Planning: 350 | - This process includes the development of a work breakdown structure and a Gantt chart. 351 | - A work breakdown structure is the successive division of project tasks to the point that individual responsibility and accountability can be assigned. 352 | - A Gantt chart illustrates the duration and relationships among the work breakdown structure tasks. 353 | 354 | 2. Organizing: 355 | - Identifying, acquiring, and aligning the simulation project team. 356 | - The relationship between the host organizations and the different projects to which the team members are assigned can be clarified through the use of a matrix organizational chart. 357 | 358 | 3. Motivating 359 | 4. Directing 360 | 5.
Controlling 361 | - The control process consists of: 362 | * Setting project standards 363 | * Observing performance 364 | * Comparing the observed performance with the project standards 365 | * Taking corrective action 366 | 367 | 368 | ## Developing the Simulation Project Plan 369 | 370 | * The project-planning process as a minimum consists of developing: 371 | - A work breakdown structure (WBS) 372 | - A linear responsibility chart (LRC) 373 | - A Gantt chart: 374 | * Common relationships: 375 | - Start-to-Start: This sort of situation may occur when a single previous process splits into two different tasks that must be worked on simultaneously. 376 | - Finish-to-Finish: is found when both tasks are desired to be completed at the same time. 377 | -------------------------------------------------------------------------------- /Compiler.md: -------------------------------------------------------------------------------- 1 | # Compiler Lectures 2 | 3 | - [x] [Lecture 1](#lecture-1) 4 | - [x] [Lecture 2](#lecture-2) 5 | - [x] [Lecture 3](#lecture-3) 6 | - [ ] [Lecture 4](#lecture-4) :construction: 7 | - [ ] [Lecture 5](#lecture-5) 8 | - [ ] [Lecture 6](#lecture-6) 9 | - [x] [Other resources](#other-resources) 10 | 11 | ## Lecture 1 12 | 13 | Compiler 14 | : a program that takes a program written in a source language and translates it into an equivalent program in a target language. 15 | 16 | **Techniques** used in compiler design are applicable to many computer science problems. 17 | 18 | | Techniques used in | Can be used in | 19 | |:-------------------|:-------------------------------------------------------------------------------| 20 | | lexical analyzer | text editors,information retrieval system, and pattern recognition programs | 21 | | parser | query processing system such as SQL | 22 | | compiler design | - Natural Language Processing (NLP)
- Software having a complex front-end | 23 | 24 | ### Parts of a Compiler 25 | 26 | | | Analysis | Synthesis | 27 | |:------------------|:-------------------------------------------------------------------------|:--------------------------------------------------------------------------------| 28 | | **In this phase** | An intermediate representation is created from the given source program. | The equivalent target program is created from this intermediate representation. | 29 | | **Parts** | - Lexical Analyzer
- Syntax Analyzer
- Semantic Analyzer | - Intermediate Code Generator
- Code Generator
- Code Optimizer | 30 | 31 | ### Phases of a Compiler 32 | 33 | From source program to target program, the compiler goes through the following phases. 34 | 35 | | Phase | what happens | 36 | |:------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------| 37 | | [Lexical Analyzer](#lexical-analyzer) | reads the source program character by character and returns the [tokens](#token) of the source program. | 38 | | [Syntax Analyzer](#syntax-analyzer) (parser) | creates the syntactic structure (generally a parse tree) of the given program. | 39 | | [Semantic Analyzer](#semantic-analyzer) | checks the source program for semantic errors and collects the type information for the code generation. | 40 | | [Intermediate Code Generator](#intermediate-code-generator) | produces an explicit intermediate codes representing the source program. These intermediate codes are generally machine (architecture) independent. | 41 | | [Code Optimizer](#code-optimizer) | optimizes the code produced by the intermediate code generator in the terms of time and space. | 42 | | [Code Generator](#code-generator) | Produces the target language in a specific architecture. The target program is normally is a relocatable object file containing the machine codes. | 43 | 44 | Each phase transforms the source program from one representation into another. 45 | 46 | They communicate with: 47 | - error handlers. 48 | - the symbol table. 49 | 50 | ### Lexical Analyzer 51 | 52 | > - Puts information about identifiers into the symbol table. 53 | > - Regular expressions are used to describe tokens (lexical constructs). 54 | > - A (Deterministic) Finite State Automaton can be used in the implementation of a lexical analyzer. 55 | 56 |

57 | 58 | A Token 59 | : describes a pattern of characters having same meaning in the source program. (such as identifiers, operators, keywords, numbers, delimiters and so on) 60 | 61 | Example: 62 | newval := oldval + 12 63 | 64 | | Lexemes | Tokens | 65 | |:--------|:--------------------| 66 | | newval | identifier | 67 | | := | assignment operator | 68 | | oldval | identifier | 69 | | + | add operator | 70 | | 12 | a number | 71 | 72 | ### Syntax Analyzer 73 | 74 | > Checks whether a given program satisfies the rules implied by a CFG or not. If it satisfies, the syntax analyzer creates a parse tree for the given program. 75 | 76 | #### Parse Tree 77 | 78 | ![parse tree](http://img.c4learn.com/2012/01/Parse-Tree-Syntax-Analysis-in-Compiler-Design.jpg) 79 | In a parse tree, 80 | * All terminals are at leaves. 81 | * All inner nodes are non-terminals in a CFG. 82 | 83 | _syntax of a language_ is specified by a __CFG__ (CFG rules are mostly recursive) 84 | we use BNF to specify a CFG 85 | 86 | Example: 87 | 88 | ``` 89 | assgstmt -> identifier := expression 90 | expression -> identifier 91 | expression -> number 92 | expression -> expression + expression 93 | ``` 94 | 95 | *[CFG]: Context Free Grammar 96 | *[BNF]: Backus Naur Form 97 | *[]()* 98 | 99 | #### Syntax Analyzer :vs: Lexical Analyzer 100 | 101 | ##### Which constructs recognized by lexical analyzer, and which by syntax analyzer? 102 | 103 | - Both of them do similar things; ~~But~~ 104 | 105 | | | Lexical Analyzer | Syntax Analyzer | 106 | |:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------| 107 | | deals with | simple non-recursive constructs of the language | recursive constructs of the language | 108 | | | - simplifies the job of the syntax analyzer
- recognizes the smallest meaningful units (tokens) in a source program | works on the smallest meaningful units (tokens) in a source program to recognize meaningful structures in our programming language |
109 | 
110 | #### Parsing Techniques
111 | 
112 | | | Top-Down :arrow_down: | Bottom-Up :arrow_up: (shift-reduce parsing) |
113 | |:--------------------------------------|:-----------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------|
114 | | Construction of the parse tree starts | from root towards leaves | from leaves towards root |
115 | | Efficient parsers | easily constructed by hand | created with the help of some software tools |
116 | | | Recursive Predictive Parsing, Non-Recursive Predictive Parsing (LL Parsing). | Operator-Precedence Parsing – simple, restrictive, easy to implement; LR Parsing – a much more general form of shift-reduce parsing (LR, SLR, LALR) |
117 | 
118 | ### Semantic Analyzer
119 | 
120 | > - Type-checking is an important part of semantic analysis
121 | - semantic information cannot be represented by a context-free language
122 | - CFGs used in the syntax analysis are integrated with attributes (semantic rules)
123 | 
124 | The result is a syntax-directed translation (Attribute grammars)
125 | 
126 | Example:
127 | 
128 | ```
129 | newval := oldval + 12
130 | ```
131 | `The type of the identifier newval must match the type of the expression (oldval+12)`
132 | 
133 | ### Intermediate Code Generation
134 | > The level of intermediate codes is close to the level of machine codes.
135 | 
136 | Example:
137 | 
138 | ```
139 | newval := oldval * fact + 1
140 | ```
141 | 
142 | :arrow_down:
143 | 
144 | ```
145 | id1 := id2 * id3 + 1
146 | ```
147 | 
148 | Intermediate Codes :arrow_down: (Quadruples)
149 | 
150 | ```assembly
151 | MULT id2,id3,temp1
152 | ADD temp1,#1,temp2
153 | MOV temp2,,id1
154 | ```
155 | 
156 | 
157 | ### Code Optimizer
158 | 
159 | ```assembly
160 | MULT id2,id3,temp1
161 | ADD temp1,#1,id1
162 | ```
163 | 
164 | ### Code Generator
165 | 
166 | Example:
167 | (assume that we have an architecture with instructions
168 | in which at least one of the operands is a machine register)
169 | 
170 | ```assembly
171 | MOVE id2,R1
172 | MULT id3,R1
173 | ADD #1,R1
174 | MOVE R1,id1
175 | ```
176 | 
177 | ---
178 | 
179 | ## Lecture 2
180 | 
181 | ### Lexical Analyzer:
182 | * It reads the source program character by character to produce tokens.
183 | 
184 | * Normally a lexical analyzer doesn’t return a list of tokens in one shot; it returns a token when the parser asks for one.
185 | 
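The on-demand behavior described above can be sketched with a generator that yields one token per request; the token names and the whitespace-based splitting below are illustrative assumptions, not part of the lecture:

```python
# Minimal sketch of a lexer the parser pulls tokens from one at a time.
# Token names and whitespace-based lexing are simplifying assumptions.
def tokens(source):
    for lexeme in source.split():
        if lexeme.isdigit():
            yield ("number", lexeme)
        elif lexeme == ":=":
            yield ("assignment operator", lexeme)
        elif lexeme == "+":
            yield ("add operator", lexeme)
        else:
            yield ("identifier", lexeme)

lexer = tokens("newval := oldval + 12")
print(next(lexer))  # the parser asks for one token -> ('identifier', 'newval')
```

Because `tokens` is a generator, no further input is scanned until the parser asks for the next token.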
186 | 
187 | ### Tokens:
188 | 
189 | - Represents a set of strings described by a pattern.
190 | - Additional information should be held for that specific lexeme. This additional information is called the attribute of the token.
191 | - Token type and its attribute uniquely identify a lexeme.
192 | - Regular expressions are used to specify tokens.
193 | 
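As a sketch of how regular expressions specify tokens and how the matched lexeme acts as the token's attribute (the particular token set below is an assumption, chosen to match the `newval` example from Lecture 1):

```python
import re

# Each token type is specified by a regular expression.
# The token set here is illustrative, not from the lecture.
TOKEN_SPEC = [
    ("number",     r"\d+"),
    ("assign_op",  r":="),
    ("add_op",     r"\+"),
    ("identifier", r"[A-Za-z_]\w*"),
    ("skip",       r"\s+"),
]
PATTERN = "|".join(f"(?P<{name}>{regex})" for name, regex in TOKEN_SPEC)

def tokenize(source):
    # (token type, attribute) pairs; whitespace is consumed but not returned
    return [(m.lastgroup, m.group())
            for m in re.finditer(PATTERN, source)
            if m.lastgroup != "skip"]

print(tokenize("newval := oldval + 12"))
```

The order of alternatives matters: `number` is tried before `identifier`, so `12` is reported as a number rather than falling through to a weaker pattern.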
194 | 
195 | ### Concepts of Languages:
196 | 
197 | - *Alphabet:* a finite set of symbols.
198 | - *String:* a sequence of symbols over an alphabet.
199 | - *Language:* consists of a set of strings.
200 | - *Operations on Languages:*
201 | - Concatenation
202 | - Union
203 | - Exponentiation
204 | - Kleene Closure "\*"
205 | - Positive Closure "+"
206 | 
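The finite operations above can be illustrated on tiny made-up languages (the sets `A` and `B` are assumptions; Kleene and positive closure produce infinite languages, so only a single exponent is shown):

```python
# Two small languages over the alphabet {a, b, c}
A = {"a", "b"}
B = {"c"}

concatenation = {x + y for x in A for y in B}   # AB
union = A | B                                   # A ∪ B
exponentiation = {x + y for x in A for y in A}  # A^2 = AA

print(sorted(concatenation))   # ['ac', 'bc']
print(sorted(union))           # ['a', 'b', 'c']
print(sorted(exponentiation))  # ['aa', 'ab', 'ba', 'bb']
```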
207 | 
208 | ### Regular Expressions:
209 | 
210 | - Used to describe tokens.
211 | - Normally, they are built up of simpler regular expressions.
212 | - *Regular set:* a language denoted by a regular expression.
213 | 
214 | ### Precedence Rules in Regular Expressions:
215 | 
216 | 1. Parentheses
217 | 2. \* "Kleene Closure"
218 | 3. Concatenation.
219 | 4. \|
220 | 
221 | ### Regular Definition Rules:
222 | 
223 | * We can give names to regular expressions, and we can use these names as symbols to define other regular expressions.
224 | * A regular definition is a sequence of the definitions of the form:
225 | ```
226 | d1 => r1
227 | d2 => r2
228 | .
229 | dn => rn
230 | ```
231 | 
232 | *Examples:*
233 | 
234 | 1. Identifiers in Pascal.
235 | ```
236 | letter => A | B | ... | Z | a | b | ... | z
237 | digit => 0 | 1 | ... | 9
238 | id => letter (letter | digit )*
239 | ```
240 | 1. Identifiers in C.
241 | ```
242 | letter => [A-Za-z]
243 | digit => [0-9]
244 | CID => letter_(letter_|digit)\*
245 | ```
246 | 1. Unsigned numbers in Pascal.
247 | ```
248 | digit => 0 | 1 | ... | 9
249 | digits => digit +
250 | opt-fraction => ( . digits ) ?
251 | opt-exponent => ( E (+|-)? digits ) ?
252 | unsigned-num => digits opt-fraction opt-exponent
253 | ```
254 | 1. Unsigned numbers or floating point numbers in C.
255 | ```
256 | digit => [0-9]
257 | digits => digit+
258 | number => digits(.digits)?(E[+-]? digits)?
259 | ```
260 | 
261 | ### Finite Automaton:
262 | 
263 | - There are two types of FA:
264 | - *Deterministic:* faster, takes more space.
265 | - *Non-deterministic:* slower, takes less space.
266 | - Deterministic FA are widely used in lexical analyzers.
267 | - To generate a DFA we have two ways:
268 | - Regular Expression => NFA => DFA
269 | - Regular Expression => DFA
270 | 
271 | **I. NFA To DFA:**
272 | * An NFA may have Ɛ transitions.
273 | * A DFA does not have Ɛ transitions.
274 | * In DFA, for each symbol a and state s, there is at most one labeled edge a leaving s.
275 | 
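The Ɛ-closure and move steps that drive the NFA-to-DFA conversion can be sketched in code. The transition tables below are an assumption, reconstructed to match the (a|b)*a example worked out later in this lecture:

```python
# Assumed Thompson-style NFA for (a|b)*a: state 0 is the start, 8 is accepting.
EPS = {0: {1, 7}, 1: {2, 4}, 3: {6}, 5: {6}, 6: {1, 7}}   # epsilon edges
SYM = {(2, "a"): 3, (7, "a"): 8, (4, "b"): 5}             # labeled edges

def eps_closure(states):
    # All states reachable from `states` using only epsilon transitions.
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in EPS.get(s, ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def move(states, symbol):
    # States reachable on one `symbol`-labeled edge.
    return {SYM[(s, symbol)] for s in states if (s, symbol) in SYM}

S0 = eps_closure({0})
print(sorted(S0))                          # [0, 1, 2, 4, 7]
print(sorted(eps_closure(move(S0, "a"))))  # [1, 2, 3, 4, 6, 7, 8]
print(sorted(eps_closure(move(S0, "b"))))  # [1, 2, 4, 5, 6, 7]
```

Each distinct closure set becomes one DFA state; repeating the move/closure step until no new sets appear yields the full subset construction.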
276 | 
277 | *Thompson's Construction:*
278 | * Used to convert a regular expression to an NFA.
279 | 
280 | Example:
281 | ![Thomson](./pics/compiler/2.png)
282 | 
283 | * The generated NFA is then converted to a DFA.
284 | 
285 | Example:
286 | 287 | ``` 288 | S 0 = Ɛ-closure({0}) = {0,1,2,4,7} 289 | 290 | Ɛ-closure(move(S0 ,a)) = Ɛ-closure({3,8}) = {1,2,3,4,6,7,8} = S1 291 | Ɛ-closure(move(S0 ,b)) = Ɛ-closure({5}) = {1,2,4,5,6,7} = S2 292 | 293 | Ɛ-closure(move(S1 ,a)) = Ɛ-closure({3,8}) = {1,2,3,4,6,7,8} = S1 294 | Ɛ-closure(move(S1 ,b)) = Ɛ-closure({5}) = {1,2,4,5,6,7} = S2 295 | 296 | Ɛ-closure(move(S2 ,a)) = Ɛ-closure({3,8}) = {1,2,3,4,6,7,8} = S1 297 | Ɛ-closure(move(S2 ,b)) = Ɛ-closure({5}) = {1,2,4,5,6,7} = S2 298 | ``` 299 | 300 | ![Thomson](./pics/compiler/3.png) 301 | 302 | ``` 303 | S0 is the start state of DFA since 0 is a member of S0 ={0,1,2,4,7} 304 | S1 is an accepting state of DFA since 8 is a member of S1 = {1,2,3,4,6,7,8} 305 | ``` 306 | 307 | ![](./pics/compiler/4.png) 308 | 309 | **II. DFA Direct Conversion:** 310 | 311 | * First we augment the given regular expression by concatenating it with a special symbol #. 312 | * Then each alphabet symbol (plus #) will be numbered (without Ɛ). 313 | * Then, we create a syntax tree for this augmented regular expression. 314 | * In this syntax tree, all alphabet symbols (plus # and the empty string) in the augmented regular expression will be on the leaves, and all inner nodes will be the operators in that augmented regular expression. 315 | * Then compute first set of the root and follow set of each character. 316 | 317 | 318 | Example: 319 | 320 | ``` 321 | (a|b) * a convert it to augmented regular expression. 
322 | (a|b) * a # then number each alphabet and #
323 | 
324 | ( a | b ) * a #
325 | 1 2 3 4 then create syntax tree
326 | ```
327 | 
328 | ![](./pics/compiler/5.png)
329 | 
330 | ```
331 | firstpos(root) = {1, 2, 3}
332 | followpos(1)={1,2,3}
333 | followpos(2)={1,2,3}
334 | followpos(3)={4}
335 | followpos(4)={}
336 | 
337 | S1 = firstpos(root) = {1,2,3}
338 | a: followpos(1) and followpos(3) = {1, 2, 3, 4} = S2
339 | b: followpos(2) = {1, 2, 3} = S1
340 | 
341 | S2 = {1, 2, 3, 4}
342 | a: followpos(1) and followpos(3) = {1, 2, 3, 4} = S2
343 | b: followpos(2) = {1, 2, 3} = S1
344 | ```
345 | 
346 | ![](./pics/compiler/6.png)
347 | 
348 | **DFA Minimization:**
349 | * partition the set of states into two groups:
350 | * G1: set of accepting states.
351 | * G2: set of non-accepting states.
352 | 
353 | * For each new group G:
354 | * partition G into subgroups such that states s1 and s2 are in the same group if and only if for all input symbols a, states s1 and s2 have transitions to states in the same group.
355 | 
356 | Example:
357 | 
358 | ![DFA](./pics/compiler/7.png)
359 | 
360 | ```
361 | G1 = {1, 2, 3}
362 | G2 = {4}
363 | 
364 | for G1:
365 | a b
366 | 1 => 2 3
367 | 2 => 2 3
368 | ---
369 | 3 => 4 3
370 | 
371 | So, divide G1 into {1, 2} and {3}
372 | 
373 | for G2:
374 | a b
375 | 4 => 2 3
376 | 
377 | Resulting DFA
378 | ```
379 | 
380 | ![Minimized DFA](./pics/compiler/8.png)
381 | 
382 | ### Issues in Lexical Analyzer:
383 | * The lexical analyzer has to recognize the longest possible string.
384 | * The end of a token is normally not explicitly marked.
385 | * Normally it doesn’t return a comment as a token. So, the comments are only processed by the lexical analyzer, and they don’t complicate the syntax of the language.
386 | 
387 | ## Lecture 3
388 | 
389 | ## A context-free grammar
390 | > - gives a precise syntactic specification of a programming language.
391 | > - the design of the grammar is an initial phase of the design of a compiler.
392 | > - a grammar can be directly converted into a parser by some tools.
393 | 
394 | In CFG, we have:
395 | 
396 | - A finite set of terminals (in our case, this will be the set of tokens)
397 | - A finite set of non-terminals (syntactic-variables)
398 | - A finite set of production rules in the following form
399 | ```
400 | A → α
401 | ```
402 | A is a non-terminal
403 | α is a string of terminals and non-terminals (including the empty string)
404 | - A start symbol (one of the non-terminal symbols)
405 | 
406 | Example:
407 | 
408 | ```
409 | E → E + E | E – E | E * E | E / E | - E
410 | E → ( E )
411 | E → id
412 | ```
413 | 
414 | ### Parser
415 | 
416 | > - Works on a stream of tokens (the smallest item is a token).
417 | - Scans input from left to right (one symbol at a time).
418 | 
419 | Efficient parsers can be implemented only for sub-classes of context-free grammars.
420 | 
421 | LL for top-down parsing.
422 | LR for bottom-up parsing.
423 | 
424 | ### Derivation
425 | 
426 | > A sequence of replacements of non-terminal symbols
427 | 
428 | ```
429 | E ⇒ E+E
430 | ```
431 | E+E _derives_ from E (we can replace E by E+E)
432 | 
433 | In general a derivation step is:
434 | 
435 | αAβ ⇒ αγβ
436 | if there is a production rule `A → γ` in our grammar
437 | 
438 | | | derives in |
439 | |:--:|:----------------------:|
440 | | ⇒ | one step |
441 | | ⇒* | zero or more steps |
442 | | ⇒+ | one or more steps |
443 | 
444 | 
445 | ### CFG - Terminology
446 | 
447 | L(G)
448 | : the language of G which is a set of sentences.
449 | 
450 | sentence of L(G)
451 | : string of terminal symbols of G
452 | 
453 | 
454 | - If S is the start symbol of G then
455 | ω is a sentence of L(G)
456 | iff `S ⇒* ω` where ω is a string of terminals of G
457 | - If G is a CFG, L(G) is a context-free
458 | language.
459 | - Two grammars are equivalent if they produce the same
460 | language
461 | 
462 | S → α
463 | 
464 | | α contains non-terminals | it is called |
465 | |:------------------------:|:--------------------:|
466 | | :heavy_check_mark: | sentential form of G |
467 | | :x: | sentence of G |
468 | 
469 | ### Derivation Example
470 | 
471 | 
472 | left-most derivation
473 | : If we always choose the left-most non-terminal in each derivation step
474 | ```
475 | E => -E => -(E) => -(E+E) => -(id+E) => -(id+id)
476 | ```
477 | 
478 | right-most derivation
479 | : If we always choose the right-most non-terminal in each derivation step
480 | ```
481 | E => -E => -(E) => -(E+E) => -(E+id) => -(id+id)
482 | ```
483 | 
484 | At each derivation step, we can choose any of the non-terminals in the sentential form of G for the replacement
485 | 
486 | - top-down parsers try to find the left-most derivation.
487 | - bottom-up parsers try to find the right-most derivation in the reverse order.
488 | 
489 | #### Parse Tree & Ambiguity
490 | 
491 | > can be seen as a graphical representation of a derivation
492 | 
493 | ambiguous grammar
494 | : produces more than one parse tree for a sentence
495 | 
496 | unambiguous grammar
497 | : unique selection of the parse tree for a sentence
498 | 
499 | For most parsers, the **grammar must be unambiguous**.
500 | Ambiguity must be eliminated during the design phase of the compiler.
501 | To disambiguate an ambiguous grammar, we prefer one of the parse trees of a sentence and restrict the grammar to that choice.
502 | 
503 | ```
504 | stmt ⇒ if expr then stmt |
505 | if expr then stmt else stmt | otherstmts
506 | ```
507 | `if E1 then if E2 then S1 else S2`
508 | 
509 | ![parse tree](./pics/compiler/1.png)
510 | We prefer the second parse tree (the else matches the closest if).
511 | The unambiguous grammar will be:
512 | ```
513 | stmt ⇒ matchedstmt | unmatchedstmt
514 | matchedstmt ⇒ if expr then matchedstmt else matchedstmt | otherstmts
515 | unmatchedstmt ⇒ if expr then stmt |
516 | if expr then matchedstmt else unmatchedstmt
517 | ```
518 | 
519 | ### Ambiguity - Operator Precedence
520 | > Ambiguous grammars (because of ambiguous operators) can
521 | be _disambiguated_ according to the precedence and
522 | associativity rules.
523 | 
524 | disambiguate the following grammar according to precedence:
525 | `E ⇒ E+E | E*E | E^E | id | (E)`
526 | 
527 | ^ (right to left)
528 | \* (left to right)
529 | \+ (left to right)
530 | 
531 | ```
532 | E ⇒ E+T | T
533 | T ⇒ T*F | F
534 | F ⇒ G^F | G
535 | G ⇒ id | (E)
536 | ```
537 | 
538 | ### Left Recursion
539 | 
540 | left recursive grammar
541 | : has a non-terminal A such that there is a derivation
542 | 
543 | `A ⇒+ Aα ` for some string α
544 | 
545 | > Top-down parsing cannot handle left-recursive grammars.
546 | 
547 | we have to convert a left-recursive grammar to an equivalent NON left-recursive grammar.
548 | 
549 | immediate left-recursion
550 | : when left-recursion appears in a single step of the derivation
551 | 
552 | The left-recursion may appear in one or more steps of the derivation.
553 | 
554 | Example 1:
555 | 
556 | ![immediate left recursion elimination example](./pics/compiler/9.png)
557 | 
558 | ---
559 | 
560 | Example 2:
561 | 
562 | ![immediate left recursion elimination example2](./pics/compiler/10.png)
563 | 
564 | > Note that eliminating the **immediate left-recursion** doesn't mean that the grammar is _NOT_ **left-recursive**.
565 | 
566 | For example:
567 | ![left recursive grammar](./pics/compiler/11.png)
568 | 
569 | #### Eliminate Left-Recursion -- Algorithm
570 | 
571 | ```algorithm
572 | - Arrange non-terminals in some order: A1 ... An
573 | - for i from 1 to n do {
574 | - for j from 1 to i-1 do {
575 | replace each production
576 | Ai -> Aj y
577 | by
578 | Ai -> α1 y | ... | αk y
579 | where Aj -> α1 | ... | αk
580 | }
581 | - eliminate immediate left-recursions among Ai productions
582 | }
583 | ```
584 | **Example**:
585 | ```
586 | S -> Aa | b
587 | A -> Ac | Sd | f
588 | 
589 | - Order of non-terminals: S, A
590 | for S:
591 | - we do not enter the inner loop.
592 | - there is no immediate left recursion in S.
593 | for A:
594 | - Replace A -> Sd with A -> Aad | bd
595 | So, we will have A -> Ac | Aad | bd | f
596 | - Eliminate the immediate left-recursion in A
597 | A -> bdA’ | fA’
598 | A’ -> cA’ | adA’ | ε
599 | So, the resulting equivalent grammar which is not left-recursive is:
600 | S -> Aa | b
601 | A -> bdA’ | fA’
602 | A’ -> cA’ | adA’ | ε
603 | ```
604 | 
605 | ### Left-Factoring
606 | 
607 | A predictive parser
608 | : a top-down parser without backtracking
609 | A predictive parser insists that the grammar must be left-factored.
610 | 
611 | if we have
612 | > `A -> α β1 | α β2`
613 | 
614 | when processing α we cannot know whether to expand
615 | `A to α β1`
616 | or
617 | `A to α β2`
618 | 
619 | But, if we re-write the grammar as follows
620 | `A -> αA’`
621 | `A’ -> β1 | β2`, so we can immediately expand A to αA’
622 | 
623 | #### Left-Factoring -- Algorithm
624 | 
625 | ![left recursive algorithm](./pics/compiler/12.png)
626 | 
627 | ##### Example 1
628 | 
629 | ```
630 | A -> abB | aB | cdg | cdeB | cdfB
631 | 
632 | A -> aA’ | cdg | cdeB | cdfB
633 | A’ -> bB | B
634 | 
635 | A -> aA’ | cdA’’
636 | A’ -> bB | B
637 | A’’ -> g | eB | fB
638 | ```
639 | 
640 | ---
641 | 
642 | ##### Example 2
643 | 
644 | ```
645 | A -> ad | a | ab | abc | b
646 | 
647 | A -> aA’ | b
648 | A’ -> d | ε | b | bc
649 | 
650 | A -> aA’ | b
651 | A’ -> d | ε | bA’’
652 | A’’ -> ε | c
653 | ```
654 | 
655 | 
656 | 
657 | ## Lecture 4
658 | 
659 | ```
660 | Not Added Yet!
661 | ```
662 | 
663 | ## Lecture 5
664 | 
665 | ```
666 | Not Added Yet!
667 | ```
668 | 
669 | ## Lecture 6
670 | 
671 | ```
672 | Not Added Yet!
673 | ``` 674 | 675 | 676 | 677 | ## Other resources 678 | 679 | - Sheet (Answered by [AlaaOthman](//github.com/AlaaOhman)): [on Google Drive](https://drive.google.com/file/d/1yXkSxJLjOUPgtGgFM6gkJSPVtjMLJnhD/view?usp=drivesdk) 680 | - Textbook: [Compilers Principles Techniques And Tools](http://booksdl.org/get.php?md5=346B2177C8F721EE62872DCAF64B9F85) 681 | - TutorialsPoint(videos on YouTube): [Playlist](https://www.youtube.com/playlist?list=PLWPirh4EWFpGa0qAEcNGJo2HSRC5_KMT6) 682 | - TutorialsPoint(written): [Lectures](https://www.tutorialspoint.com/compiler_design/index.htm) 683 | - Udacity: [Programming Languages](https://www.udacity.com/course/programming-languages--cs262) 684 | - String patterns 685 | - Lexical Analysis 686 | - Grammars 687 | - Parsing 688 | - Interpreting 689 | - and more... 690 | -------------------------------------------------------------------------------- /Computer-Architecture.md: -------------------------------------------------------------------------------- 1 | # Computer Architecture Lectures 2 | 3 | - [x] [Lecture 1](#lecture-1) 4 | - [x] [Lecture 2](#lecture-2) 5 | - [x] [Lecture 3](#lecture-3) 6 | - [x] [Lecture 4](#lecture-4) 7 | - [x] [Lecture 5](#lecture-5) 8 | - [x] [Lecture 6](#lecture-6) 9 | - [x] [Lecture 7](#lecture-7) 10 | - [x] [Lecture 8](#lecture-8) 11 | - [x] [Lecture 9](#lecture-9) 12 | 13 | ## Lecture 1 14 | 15 | ### What is inside a computer? 16 | 17 | There are **five** classic components of a computer: 18 | 1. **Processor**: 19 | * Divided into two groups: 20 | - **Data section (_data path_)**: 21 | * contains the registers and the Arithmetic Logic unit. 22 | * is capable of performing certain operations on data items. 23 | - **Control section**: 24 | * the control unit that generates control signals that direct the operation of memory and data path. 25 | * control signals do the following: 26 | - tell memory to send or receive data. 27 | - tell the ALU what operation to perform. 
28 | - route data between different parts of the data path.
29 | 1. **Registers**:
30 | * the storage element for data inside the CPU.
31 | * hold temporary data during calculations.
32 | * faster to access than memory.
33 | 1. **Main Memory**:
34 | * Large collection of circuits, each capable of storing a single bit, arranged in small cells.
35 | * Each cell has a unique address.
36 | * Longer strings are stored by using consecutive cells.
37 | ![main memory](./pics/ca/1.png)
38 | 1. **System Bus**:
39 | * Group of signal lines that have the same function.
40 | * Allow transferring signals between different parts of the computer and from one device to another.
41 | * There are **three** types of system buses:
42 | - _data_ bus:
43 | * transfers data from CPU to memory and vice-versa.
44 | * connects I/O ports and CPU.
45 | - _address_ bus: carries the address of the memory location to be accessed.
46 | - _control_ bus: determines the operation.
47 | ![system bus](./pics/ca/2.png)
48 | 
49 | 
50 | There are two pieces of information that should be known to solve the following examples:
51 | 1. If there are N address lines in the address bus, we can directly address 2^N memory locations.
52 | in other words, if we have N address lines, then our memory has 2^N cells.
53 | 
54 | 1. The number of data lines used in the data bus is equal to the
55 | size of data word that can be written or read.
56 | 
57 | *Example:*
58 | 
59 | How many memory locations can be addressed by a microprocessor with 14 address lines?
60 | 
61 | N = 14
62 | then number of memory locations (or memory cells) = 2^N = 2^14 locations.
63 | 
64 | *Example:*
65 | 
66 | Suppose that a computer’s Main Memory has 1013 cells. How many address lines are needed in order for all the cells to be usable?
67 | 
68 | 2^N ≥ 1013
69 | then to get N we take log base 2 of both sides and round up
70 | N = ceil(log2(1013))
71 | N = 10.
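Both examples can be checked numerically; `ceil`/`log2` just automate the rounding-up step (note that the result must be rounded up to 10, since 2^9 = 512 cells would not be enough for 1013):

```python
from math import ceil, log2

# Example 1: 14 address lines can directly address 2^14 memory locations.
locations = 2 ** 14
print(locations)  # 16384

# Example 2: address lines needed for 1013 cells, rounding up.
lines = ceil(log2(1013))
print(lines)  # 10
```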
72 | 
73 | ---
74 | 
75 | ### Levels of transformation
76 | 
77 | ![levels of transformation](./pics/ca/3.png)
78 | ![levels of transformation](./pics/ca/my-diagram.png)
79 | 
80 | Abstraction
81 | : A higher level only needs to know about the **interface** to the lower level, _not how the lower level is implemented._
82 | 
83 | 
84 | ### Convert C program to machine language:
85 | 
86 | ![levels of transformation](./pics/ca/4.png)
87 | 
88 | Machine Language
89 | : fundamental instructions expressed as 0s and 1s.
90 | 
91 | Assembly Language
92 | : human-readable equivalent of machine language.
93 | 
94 | Assembler
95 | : converts an assembly language program to machine language.
96 | 
97 | Linker
98 | : links separately assembled modules together into a single module suitable for loading and execution.
99 | 
100 | Loader
101 | : part of the operating system responsible for loading executable files into memory and executing them.
102 | 
103 | ### What is computer architecture?
104 | 
105 | Simply,
106 | >Computer Architecture = Machine organization + Instruction Set Architecture
107 | 
108 | 
109 | ### Instruction Set Architecture (ISA)
110 | > * Interfaces the software to the hardware
111 | > * provides support for programming.
112 | > * provides the mechanism by which the software tells the hardware what should be done.
113 | 
114 | ![ISA](./pics/ca/5.png)
115 | 
116 | ## Lecture 2
117 | 
118 | ### ISA Components
119 | - Storage cells: registers, memory.
120 | - Machine Instruction Set: set of possible operations.
121 | - The instruction format: Size and meaning of fields within the instruction.
122 | 
123 | **Every instruction needs to specify four _(1 which + 3 where)_ things:**
124 | 1. Which operation to perform.
125 | 1. Where to find the operand or operands, if there are operands.
126 | 1. Where to put the result, if there is a result.
127 | 1. Where to find the next instruction.
128 | 
129 | *Example:*
130 | 
131 | `MOVE.W D4, D5`
132 | 
133 | MOVE => operation
134 | D4 => location of the operand
135 | D5 => location to put the result
136 | Find next instruction => implicitly in the word following this instruction.
137 | 
138 | 
139 | ### Instruction Cycle
140 | 
141 | ![Instruction Cycle](./pics/ca/6.png)
142 | 
143 | For example, to do the add instruction:
144 | 
145 | Fetch
146 | : get the instruction from memory into the processor.
147 | 
148 | Decode
149 | : internally decode what it has to do.
150 | 
151 | Execute
152 | : take the values from the registers, actually add them together.
153 | 
154 | Store
155 | : store the result back into another register. (_retiring_ the instruction)
156 | 
157 | ### Classes of instructions
158 | 
159 | 1. _Data movement instructions_: `Load, Store`
160 | 1. _Arithmetic and logic_ (ALU) instructions: `Add, Sub, Shift`
161 | 1. _Branch_ instructions (control flow instructions): `Br, Brz`
162 | 
163 | ### Program Counter (PC)
164 | 
165 | * _incremented_ during the _instruction fetch_ to _point_ to the _next instruction_ to be executed.
166 | * controls program flow.
167 | 
168 | ### Branch target address
169 | 
170 | * A target address is specified in a `branch` or `jump` instruction.
171 | * The target address is loaded into the PC, replacing the address stored there.
172 | * A branch may be:
173 | - **Unconditional**, as is the C `goto` statement.
174 | - **Conditional**, depending on whether some condition within the processor state is true or false.
175 | 
176 | ### Condition Code (CC)
177 | > The bit(s) that describe the condition are stored in it.
178 | >> also called:
179 | * the processor status word (PSW).
180 | * the status register.
181 | 
182 | - **No** machine instruction corresponds directly to the conditional statements.
183 | 
184 | > The approach most machines take is to set various ___status flags___ within the CPU as a result of ALU operations.
185 | 
186 | - The instruction set contains a number of conditional branch instructions that test these flags and then branch or not according to their settings.
187 | 
188 | ### Hypothetical machine models
189 | 
190 | There are **5** types of them:
191 | 
192 | | address instruction | use |
193 | |:---------------------:|:-------------------------------------------------------------------------------------------:|
194 | | 3 address instruction | memory addresses for both operands and the result. |
195 | | 2 address instruction | overwrites one operand in memory with the result |
196 | | 1 address instruction | a register **(accumulator)** holds one operand & the result. |
197 | | 0 address instruction | a CPU register **(stack)** to hold both operands and the result. |
198 | | 4 address instruction | like 3 address but also allows the address of the next instruction to be specified explicitly. |
199 | 
200 | *[ALU]: Arithmetic Logic Unit
201 | *[PSW]: Processor Status Word
202 | *[ISA]: Instruction Set Architecture
203 | *[PC]: Program Counter
204 | *[CC]: Condition Code*
205 | 
206 | ## Lecture 3
207 | 
208 | - for a 2-operand arithmetic instruction we need to specify:
209 | 1. The operation to be performed.
210 | 1. Location of the 1st operand.
211 | 1. Location of the 2nd operand.
212 | 1. Place to store the result.
213 | 1. Location of the next instruction to be performed
214 | - the variation in specifying these five items makes for the various types of hypothetical machine models.
215 | - in each machine we study the encoding of an ALU instruction.
216 | 
217 | 
218 | - assume the following:
219 | - data lines in data bus = 24 => then size of data word = 3 Bytes.
  - address lines = 24 => then the size of the address of each operand = 3 Bytes.
  - we have 128 instructions, so every instruction is encoded in 7 bits ≅ 8 bits (1 Byte, approximately).


| | 4-address machine | 3-address machine | 2-address machine | 1-address machine (accumulator machine) |
|:---:|:---|:---|:---|:---|
| instruction | Operation result operand1 operand2 addressOfNextInstruction | specifies only 3 addresses (1st operand, 2nd operand, and the result). | 2 addresses (1st operand, 2nd operand); the result overwrites one of the operands. | 1 address (1st operand); the accumulator holds the 2nd operand and the result. |
| bytes for each instruction | number of operand addresses (4) * size of each address (3) + size of opcode (1) = 13 bytes | number of operand addresses (3) * size of each address (3) + size of opcode (1) = 10 bytes | number of operand addresses (2) * size of each address (3) + size of opcode (1) = 7 bytes | number of operand addresses (1) * size of each address (3) + size of opcode (1) = 4 bytes |
| memory accesses to fetch the instruction | 5 => ceil(size of instruction / size of data word) = ceil(13/3) | 4 => ceil(10/3) | 3 => ceil(7/3) | 2 => ceil(4/3) |
| number of memory accesses | 5 (fetch the instruction) + 2 (fetch the 1st and 2nd operands) + 1 (store the result) = 8 | 4 + 2 + 1 = 7 | 3 + 2 + 1 = 6 | 2 (fetch the instruction) + 1 (fetch one operand or store the result) = 3 |
| PC register handles address of next instruction | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |

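The byte and access counts in the table above can be reproduced with a short sketch (Python here, since the notes use no particular host language; the constants encode the lecture's assumptions of a 1-byte opcode, 3-byte operand addresses, and a 3-byte data word):

```python
import math

OPCODE_BYTES = 1  # 128 instructions fit in 7 bits, rounded up to 1 byte
ADDR_BYTES = 3    # 24 address lines => 3-byte addresses
WORD_BYTES = 3    # 24 data lines => 3-byte data word

def instruction_bytes(n_addresses):
    """Size of an ALU instruction on an n-address machine."""
    return OPCODE_BYTES + n_addresses * ADDR_BYTES

def fetch_accesses(size_bytes):
    """Memory accesses needed to fetch one instruction, one word at a time."""
    return math.ceil(size_bytes / WORD_BYTES)

for n in (4, 3, 2, 1):
    size = instruction_bytes(n)
    print(f"{n}-address machine: {size} bytes, fetched in {fetch_accesses(size)} accesses")
# 4-address machine: 13 bytes, fetched in 5 accesses
# 3-address machine: 10 bytes, fetched in 4 accesses
# 2-address machine: 7 bytes, fetched in 3 accesses
# 1-address machine: 4 bytes, fetched in 2 accesses
```

The same two helpers also reproduce the per-instruction access counts in the worked examples below (fetch accesses plus one access per memory operand).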

#### 4-address machine

> *NOTE:* not normally seen in machine design, because of the large instruction word size and number of memory accesses.

![4-address machine](./pics/ca/7.png)

#### 3-address machine

![3-address machine](./pics/ca/8.png)

#### 2-address machine

![2-address machine](./pics/ca/9.png)

#### 1-address machine (accumulator machine)

> - Requires two special instructions:
>   * `LDA Addr`; loads the content of Addr into the accumulator.
>   * `STA Addr`; stores the content of the accumulator at address Addr.
> - Generally provides a minimum in the size of both the program and the CPU memory required.

![1-address machine](./pics/ca/10.png)


*Example:*

- Assume that we have only 2^24 memory cells and the width of the data bus is 24 bits.
- Write the code to implement the expression A = B - C*(D+E) on 3-, 2-, and 1-address machines. In accordance with programming-language practice, computing the expression should not change the values of its operands.

Solution:

- We have 2^24 memory cells => size of address for any operand = log2(2^24) = 24 bits = 3 bytes.
- The width of the data bus is 24 bits => the size of a data word = 3 bytes.

`A = B - C * ( D + E )`

3-address:

| Instruction | Size | Memory Accesses |
|:-----------:|:----------------:|:------------------:|
| ADD A, D, E | 1+3*3 = 10 bytes | ceil(10/3) + 3 = 7 |
| MPY A, A, C | 1+3*3 = 10 bytes | ceil(10/3) + 3 = 7 |
| SUB A, B, A | 1+3*3 = 10 bytes | ceil(10/3) + 3 = 7 |
| Total | 30 bytes | 21 memory accesses |

2-address:

| Instruction | Size | Memory Accesses |
|:-----------:|:---------------:|:------------------:|
| MOV T, D | 1+3*2 = 7 bytes | ceil(7/3) + 2 = 5 |
| ADD T, E | 1+3*2 = 7 bytes | ceil(7/3) + 3 = 6 |
| MPY T, C | 1+3*2 = 7 bytes | ceil(7/3) + 3 = 6 |
| MOV A, B | 1+3*2 = 7 bytes | ceil(7/3) + 2 = 5 |
| SUB A, T | 1+3*2 = 7 bytes | ceil(7/3) + 3 = 6 |
| Total | 35 bytes | 28 memory accesses |

1-address:

| Instruction | Size | Memory Accesses |
|:-----------:|:-------------:|:------------------:|
| LDA D | 1+3 = 4 bytes | ceil(4/3) + 1 = 3 |
| ADD E | 1+3 = 4 bytes | ceil(4/3) + 1 = 3 |
| MPY C | 1+3 = 4 bytes | ceil(4/3) + 1 = 3 |
| STA A | 1+3 = 4 bytes | ceil(4/3) + 1 = 3 |
| LDA B | 1+3 = 4 bytes | ceil(4/3) + 1 = 3 |
| SUB A | 1+3 = 4 bytes | ceil(4/3) + 1 = 3 |
| STA A | 1+3 = 4 bytes | ceil(4/3) + 1 = 3 |
| Total | 28 bytes | 21 memory accesses |


## Lecture 4

### 0-address machine

![0-address machine](./pics/ca/11.png)

- An instruction of this machine specifies no address.
- A stack (of registers) is used as the source of operands and also as the destination of the result.
- To execute an instruction, the operands are popped from the stack and the result is pushed onto it.
- Requires two special instructions:
```
PUSH Addr; push the content of Addr onto the top of the stack
POP Addr;  pop the top of the stack and store it in Addr
```
- The address of the next instruction is handled by the Program Counter (PC) register.
- The code to add two memory operands, `Op3 = Op1 + Op2`, looks like this:
```
PUSH Op1;
PUSH Op2;
ADD;
POP Op3;
```
- Number of bytes required for each `PUSH`/`POP` instruction:
  number of operand addresses (1) * size of each address (3) + size of opcode (1) = 4 bytes.
- An ALU instruction is encoded in 1 byte.
  *NOTE:* a 4-byte instruction is fetched in 2 memory accesses => ceil(size of instruction / size of data word) = ceil(4/3).
- Number of memory accesses:
  2 (fetch the `PUSH` or `POP` instruction) + 1 (fetch one operand or store the result) = 3 memory accesses
  **OR**
  1 (fetch an ALU instruction) = 1 memory access

*Example:*

- Assume that we have only 2^24 memory cells and the width of the data bus is 24 bits.
- Assume that every opcode is encoded in 1 byte.

Write the code to implement the expression `A = B - C*(D+E)` on a 0-address machine.
In accordance with programming-language practice, computing the expression should not change the values of its operands.

*Solution:*

- We have 2^24 memory cells => size of address for any operand = log2(2^24) = 24 bits = 3 bytes.
- The width of the data bus is 24 bits => the size of a data word = 3 bytes.

`A = B - C * ( D + E )`

| Instruction | Size | Memory Accesses |
|:-----------:|:---------------:|:------------------:|
| PUSH D | 1+3*1 = 4 bytes | ceil(4/3) + 1 = 3 |
| PUSH E | 1+3*1 = 4 bytes | ceil(4/3) + 1 = 3 |
| ADD | 1 byte | ceil(1/3) = 1 |
| PUSH C | 1+3*1 = 4 bytes | ceil(4/3) + 1 = 3 |
| MPY | 1 byte | ceil(1/3) = 1 |
| PUSH B | 1+3*1 = 4 bytes | ceil(4/3) + 1 = 3 |
| SUB | 1 byte | ceil(1/3) = 1 |
| POP A | 1+3*1 = 4 bytes | ceil(4/3) + 1 = 3 |
| Total | 23 bytes | 18 memory accesses |


### The General Register machine

![GRM](./pics/ca/12.png)

- Uses a set of registers to retain intermediate results (for complex operations) inside the CPU. ALU instructions operate on registers.
- In an instruction, a register is addressed by extra bits (a "half address"):
  N registers require a code of length log2(N).
  For example: 32 registers are addressed by 5 bits.
- An ALU instruction usually uses 3 registers.
  For example: ADD R2, R4, R6; => R2 = R4 + R6
- An instruction that specifies one operand in memory and one operand in a register is known as a 1½-address instruction.
- Assuming that there are 32 general-purpose registers => each register reference requires 5 bits to specify 1 of the 32 registers.
- Number of bits required for a 3-register add instruction:
  number of register addresses (3) * size of each register address (5) + size of opcode (8) = 23 bits.
  That means the instruction is fetched in 1 memory access => ceil(size of instruction / size of data word) = ceil(23 bits / 24 bits).

- Number of memory accesses required for a 3-register add instruction:
  1 (fetch the instruction).
  *NOTE:* there are no memory accesses for operands.

- Number of bits required for a load instruction:
  5 bits for the register +
  24 bits for the memory address of the operand +
  8 bits for the opcode = 37 bits.
  That means the instruction is fetched in 2 memory accesses => ceil(size of instruction / size of data word) = ceil(37 bits / 24 bits).

- Number of memory accesses required for a load instruction:
  2 (fetch the instruction) +
  1 (fetch the operand) = 3 memory accesses.


*Example:*

Consider a General Register Machine that includes 32 general-purpose registers. Assume that every opcode is encoded in 1 byte, the memory is addressed by 24 bits, and the width of the data bus is 24 bits.

1. Write the assembly code to implement the expression `A = ( B + C ) * ( D + E )` on the above machine.
2. Compute the memory size (in bytes) required for the code in (1).
3. Compute the number of memory accesses required to execute the expression in (1) on the specified machine.

*Solution:*

- The machine includes 32 general-purpose registers => size of a register address = 5 bits.
- Size of address for any operand in memory = 24 bits.
- The width of the data bus is 24 bits => the size of a data word = 24 bits.


| Instruction | Size | Memory Accesses |
|:--------------:|:-----------------------:|:-------------------:|
| LOAD R1, B | 8 + 5 + 24 = 37 bits | ceil(37/24) + 1 = 3 |
| LOAD R2, C | 8 + 5 + 24 = 37 bits | ceil(37/24) + 1 = 3 |
| ADD R1, R1, R2 | 8 + 5 + 5 + 5 = 23 bits | ceil(23/24) = 1 |
| LOAD R3, D | 8 + 5 + 24 = 37 bits | ceil(37/24) + 1 = 3 |
| LOAD R4, E | 8 + 5 + 24 = 37 bits | ceil(37/24) + 1 = 3 |
| ADD R3, R3, R4 | 8 + 5 + 5 + 5 = 23 bits | ceil(23/24) = 1 |
| MPY R1, R1, R3 | 8 + 5 + 5 + 5 = 23 bits | ceil(23/24) = 1 |
| STORE A, R1 | 8 + 24 + 5 = 37 bits | ceil(37/24) + 1 = 3 |
| Total | 4*37 + 3*23 = 254 bits | 18 memory accesses |


* Trade-offs in instruction types:
  - The 3-address machines have the shortest code sequences but require a large number of bits per instruction.
  - The 0-address machines have the longest code sequences but require a small number of bits per instruction.
  - Even in 0-address machines there are 1-address instructions: push and pop.
  - General register machines can use 3-address instructions with a small instruction size by using 2 register operands and 1 memory address.

## Lecture 5

Effective address
: the address that the CPU must first compute to access an operand in memory.

This address is then issued to the memory subsystem.

#### Addressing modes

- There are **7** ways to compute the effective address:
  1. Immediate addressing
  2. Direct addressing
  3. Indirect addressing
  4. Register direct addressing
  5. Register indirect addressing
  6. Displacement (based or indexed) addressing
  7. Relative addressing

---

1. Immediate addressing:
   - Is used to access constants stored in the instruction.
   - It supplies an operand immediately, without computing an address.
   ![addressing](./pics/ca/13.png)
2. Direct addressing:
   - The address of the operand is specified as a constant in the instruction.
   ![addressing](./pics/ca/14.png)
3. Indirect addressing:
   - A constant in the instruction specifies not the address of the value, but the address of the address of the value.
   - Indirect addressing is used to implement pointers.
   - Two memory accesses are required to access the value:
     * fetching the pointer, which is stored in memory;
     * having that address, the CPU accesses the value stored at that address.
   ![addressing](./pics/ca/15.png)
4. Register direct addressing:
   - The operand is contained in the specified register.
   ![addressing](./pics/ca/16.png)
5. Register indirect addressing:
   - This addressing mode is used to sequentially access the elements of an array stored in memory:
     * the starting address of the array is stored in a register,
     * an access is made to the current element, then the register is incremented to point to the next element.
   ![addressing](./pics/ca/17.png)
6. Displacement (indexed) addressing:
   - The memory address is formed by adding a constant contained in the instruction to the address value contained in a register.
   - Used to access C structs or Pascal records.
   ![addressing](./pics/ca/18.png)
7. Relative addressing:
   - Similar to indexed addressing, but the base address is held in the PC rather than in another register.
   - Allows the storage of memory operands at a fixed offset from the current instruction.
   ![addressing](./pics/ca/19.png)

## Lecture 6

**Classification of ISAs:**

- The architectural designs of CPUs are:
  * RISC (Reduced Instruction Set Computing).
  * CISC (Complex Instruction Set Computing).

RISC
: a computer that uses only simple instructions, each performing a low-level operation within a single clock cycle; complex operations are built up from several such instructions.

CISC
: a computer in which a single instruction can perform several low-level operations (such as a load from memory, an arithmetic operation, and a memory store), or can carry out multi-step processes or complex addressing modes within one instruction.

**Simple RISC Computer (SRC):**

- 32-bit general-purpose registers.
- PC (Program Counter register): holds the address of the next instruction.
- IR (Instruction Register): the part of the CPU's control unit that holds the instruction currently being executed or decoded.
- 32-bit words (4 bytes) can be fetched or stored.
- Contains 2^32 bytes of memory.
- Memory addresses range from 0 to 2^32 - 1.
![src](./pics/ca/20.png)

### Instruction Formats

- Arithmetic instructions: there are four arithmetic instructions: `add`, `addi`, `sub`, and `neg`.
- Logical and shift instructions: there are nine logical and shift instructions: `and`, `andi`, `or`, `ori`, `not`, `shr`, `sha`, `shl`, and `shc`.
- Miscellaneous instructions: there are two zero-operand instructions: `nop` and `stop`.
- Load and store instructions: there are four load instructions `ld`, `ldr`, `la`, and `lar`, and two store instructions `st` and `str`.
- Branch instructions: there are two branch instructions: `br` and `brl`.
- All instructions are 32 bits long.
- All instructions have a 5-bit opcode field, allowing 32 different instructions.
- The `ra`, `rb`, and `rc` fields are 5-bit fields that specify one of the 32 general-purpose registers.
- Constants are named c1, c2, c3.
- The notation `M[x]` means the value stored at word x in memory.


1. Accessing Memory: The Load and Store Instructions
   - The load and store instructions are the only SRC instructions that access operands in memory.
   ![examples](./pics/ca/21.png)
   *Example:*
   Encode the instruction `ld r22, 24(r4)`.
   The opcode for `ld` = 1.
   *Solution:*
   The instruction means:
```
R[r22] = M[24 + R[r4]];
ld = 1
ra = 22
rb = 4
c  = 24
    1    22     4                24
00001 10110 00100 00000000000011000
```
1. Arithmetic and Logic Instructions:
   - The `neg` (op = 15) instruction takes the 2's complement of the contents of register R[rc] and stores it in register R[ra].
   - The `not` (op = 24) instruction takes the logical (1's) complement of the contents of register R[rc] and stores it in register R[ra].
   * Review: (1's) complement

![Review](./pics/ca/22.png)

* Review: (2's) complement

![Review](./pics/ca/23.png)

   * There are two types of ALU instructions:
     * Register ALU instructions: add, sub, and, or.
     * Immediate-addressing ALU instructions: addi, andi, ori.
```
Add:  adds the values in two registers.
Addi: adds an immediate value (constant) to a register.
```
1. Miscellaneous Instructions

| instruction | opcode | purpose |
|:-----------:|:------:|:----------------------------------:|
| nop | 0 | do nothing; used as a time waster. |
| stop | 31 | halt the machine. |


## Lecture 7

- This lecture covers the fetch, decode, execute, store instruction cycle in more detail.
- General concepts:
  * The Program Counter (PC): points to the next instruction to be executed.
  * Memory Address Register (MAR): the CPU register that stores either the memory address from which data will be fetched, or the address to which data will be sent and stored. In other words, the MAR holds the memory location of data that needs to be accessed.
  * Memory Data Register (MDR), or Memory Buffer Register (MBR): the register of a computer's control unit that contains the data to be stored in computer storage (e.g. RAM), or the data after a fetch from computer storage.
  * Instruction Register (IR), or Current Instruction Register (CIR): the part of a CPU's control unit that holds the instruction currently being executed or decoded.


## Lecture 8

Shift instructions: shift the operand in R[rb] right or left, and place the result in R[ra].
- The amount of the shift is governed by an encoded 5-bit unsigned integer, so shifts from 0 to 31 bits are possible.
- The integer representing the shift count is stored as an immediate value in the 5 least significant bits of the instruction.
- For example: `shr` shifts zeros in from the left as the value is shifted right.

Branch instructions:
![table](./pics/ca/24.png)


## Lecture 9

### Instruction Processing “Cycle”

- Instructions are processed under the direction of a “control unit” step by step.
- Instruction cycle: the sequence of steps to process an instruction.
- Fundamentally, there are six phases:
  1. Fetch
  2. Decode
  3. Evaluate Address
  4. Fetch Operands
  5. Execute
  6. Store Result
- Not all instructions require all six phases.

### Instruction Processing “Cycle” vs. Machine Clock Cycle

1. Single-cycle machine:
   - All six phases of the instruction processing cycle complete in a single machine clock cycle.
   - All state updates are made at the end of an instruction’s execution.
   - Big disadvantage: the slowest instruction determines the cycle time, so the clock cycle time is long.
   ![SCM](./pics/ca/25.png)
2. Multi-cycle machine:
   - All six phases of the instruction processing cycle can take multiple machine clock cycles to complete.
   - In fact, each phase can take multiple clock cycles to complete.
   - Instruction processing is broken into multiple cycles/stages.
   - State updates can be made during an instruction’s execution.
   - Architectural state updates are made only at the end of an instruction’s execution.
   - Advantage over single-cycle: the slowest “stage” determines the cycle time.
   ![MCM](./pics/ca/26.png)

### Single-cycle vs. Multi-cycle: Control & Data

| Machine | Control | Data |
|:------------:|:--------------------------------------------------------------|:-------------------------------------------------------------|
| Single-cycle | Control signals are generated in the same clock cycle as the data signals they control | Everything related to an instruction happens in one clock cycle (serialized processing) |
| Multi-cycle | Control signals needed in the next cycle can be generated in the current cycle | Latency of control processing can be overlapped with latency of datapath operation (more parallelism) |


### Performance of Computer Systems

Response time
: the time between the start and the completion of a task (in time units).

Throughput
: the total amount of tasks done in a given time period (in number of tasks per unit of time).

- The computer user is interested in response time (or execution time).
- The manager of a data processing center is interested in throughput.
- The computer user wants response time to decrease, while the manager wants throughput to increase.

### CPU Time or CPU Execution Time

CPU time (or CPU execution time)
: the time between the start and the end of execution of a given program.

- This time accounts for the time the CPU spends computing the given program, including operating system routines executed on the program’s behalf.
- It does not include time spent waiting for I/O or running other programs.
- CPU time is a true measure of processor/memory performance.

#### Analysis of CPU Time

CPU time depends on the program being executed, including:

- the number of instructions executed,
- the types of instructions executed and their frequency of usage.

### Clock rate

- refers to the frequency at which a chip such as the central processing unit (CPU) is running, and is used as an indicator of the processor's speed.
- is given in Hz (= 1/sec).
- defines the duration of discrete time intervals called clock cycle times or clock cycle periods:
  `clock_cycle_time = 1 / clock_rate (in sec)`

![table of units](./pics/ca/27.png)

*Example:*

A processor having a clock cycle time of 0.25 nsec will have a clock rate of ……
- 1000MHz
- 2000MHz
- 3000MHz
- **4000MHz**

*Solution:*

Clock cycle time C is the reciprocal of the clock rate f:
C = 1 / f
f = 1/C = 1/0.25 ns = 4 GHz = 4000 MHz

**CPU Time Equation:**
```
CPU time = Clock cycles for a program * Clock cycle time
         = Clock cycles for a program / Clock rate
         = Instruction count * CPI / Clock rate
```

Clock cycles for a program
: the total number of clock cycles needed to execute all instructions of a given program.

Instruction count
: the number of instructions executed, sometimes referred to as the instruction path length.

CPI
: the average number of clock cycles per instruction.

> CPI = Clock cycles for a program / Instruction count


Single-cycle microarchitecture performance:
- CPI = 1
- Clock cycle time = long

Multi-cycle microarchitecture performance:
- CPI => different for each instruction
- Average CPI => small
- Clock cycle time => short
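The formulas above can be checked with a small sketch (Python; the instruction count and CPI used below are made-up illustrative numbers, not from the lecture):

```python
def clock_rate_hz(clock_cycle_time_sec):
    """Clock rate is the reciprocal of the clock cycle time."""
    return 1.0 / clock_cycle_time_sec

def cpu_time_sec(instruction_count, cpi, clock_rate):
    """CPU time = instruction count * CPI / clock rate."""
    return instruction_count * cpi / clock_rate

# Lecture example: a 0.25 ns cycle time gives a 4 GHz (4000 MHz) clock rate.
rate = clock_rate_hz(0.25e-9)
print(rate / 1e6, "MHz")  # 4000.0 MHz

# Hypothetical program: 10^9 instructions at an average CPI of 2.
print(cpu_time_sec(1e9, 2.0, rate), "sec")  # 0.5 sec
```

Note how the two microarchitecture styles trade off the terms of the equation: single-cycle fixes CPI at 1 but stretches the clock cycle time, while multi-cycle shortens the clock cycle at the cost of a CPI above 1.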