This short tutorial on linear programming used to be part of the No bullshit guide to linear algebra, but was cut from the main text of the book because it might not be interesting to all readers.
Business students will find this tutorial useful as it gives three points of view:

- algorithmic (the steps of the simplex algorithm)
- graphical (visual representation of the various coordinate systems used during the algorithm)
- numerical (practical computations using the computer algebra system SymPy)
This "feature cut" was done back in June 2015. Here is the commit message:
changeset: 185:3ca7ddf396f7
user:      Ivan Savov
date:      Sun Jun 14 11:35:15 2015 -0400
summary:   Moved Linear Programming out of Applications chapter.
No way I can hold the readers attention for 20 pages with that shit.
Will release it as a bonus free tutorial or something.
Readers interested in a linear algebra text that gets to the point should check out the book where this tutorial was supposed to appear: the No bullshit guide to linear algebra.
Feel free to use this text for your classes or teaching on LP. The license is MIT, a.k.a. do what you want with it.
The TikZ code used to generate the graphs is available upon request.
--------------------------------------------------------------------------------
/problems/problems.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/minireference/linear_programming/c4a78ee8eb7a41f0389e548d3de3dc0e48f5450e/problems/problems.pdf
--------------------------------------------------------------------------------
/problems/problems.tex:
--------------------------------------------------------------------------------
%!TEX root = ../tutorial.tex

\begin{problems}{LPTUT}


%%% LINEAR PROGRAMMING %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{problem}
Use \texttt{SymPy} to reproduce the steps in the following example video
\href{http://youtu.be/XK26I9eoSl8}{\texttt{http://youtu.be/XK26I9eoSl8}}.
Confirm you obtain the same answer.

\begin{answer}$\max_{x,y,z} P(x,y,z) = 708$ at $x=48$, $y=84$, $z=0$.\end{answer}

% \begin{solution}
% \end{solution}
\end{problem}


\begin{problem}
Solve the following linear program:
\begin{align*}
\max_{x,y} g(x,y) \ &=\ x \, + \, 3y,  \\
\intertext{subject to the constraints}
2x + 3y \ &\leq \ 24, \\
x - y \ &\leq \ 6, \\
y \ &\leq \ 6,  \\
x \geq 0, & \quad  y \geq 0.
\end{align*}

\begin{answer}$\max g(x,y) = 21$ and occurs at $x=3$, $y=6$.\end{answer}

\begin{solution}
Use the simplex method if solving the problem by hand.

The code to solve using \texttt{linprog} is as follows:
\begin{verbatim}
from scipy.optimize import linprog
from numpy import array
c = [-1, -3]
D = array([[ 2, 3, 24],
           [ 1,-1,  6],
           [ 0, 1,  6],
           [-1, 0,  0],
           [ 0,-1,  0]])
A = D[:,0:2]
b = D[:,2]
linprog(c, A_ub=A, b_ub=b)
\end{verbatim}
% status: 0
% slack: array([ 0., 9., 0., 3., 6.])
% success: True
% fun: -21.0
% x: array([ 3., 6.])
% message: 'Optimization terminated successfully.'
% nit: 2
\end{solution}
\end{problem}


\end{problems}
--------------------------------------------------------------------------------
/tutorial.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/minireference/linear_programming/c4a78ee8eb7a41f0389e548d3de3dc0e48f5450e/tutorial.pdf
--------------------------------------------------------------------------------
/tutorial.tex:
--------------------------------------------------------------------------------
\documentclass[11pt,oneside]{article}
%\documentclass[oneside,10pt]{book}

\title{ {\huge \sc Linear programming tutorial} }
\author{Ivan Savov}
\usepackage{amsthm,amsmath,amssymb,amsfonts,latexsym}
\usepackage{graphicx}
\usepackage{hyperref}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% STEP 1: Choose the true/false value for DRAFTMODE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{ifthen}
\newboolean{DRAFTMODE}          % if DRAFTMODE=true:
\setboolean{DRAFTMODE}{true}    %   10pt, dblspaced, 8.5'' x 11'' paper



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% STEP 2: done!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\input{00.main.hdr.tex}

%%%% EXERCISES and PROBLEMS %%%%%%%%%%%%%%%%%%%%%%%%%%
%\usepackage{answers}
\usepackage[nosolutionfiles]{answers}   % use when PROOFREADING
\input{00.exercises_problems.hdr.tex}



\begin{document}


\maketitle

\vspace{-1cm}
\noindent

{ \center
git \gitrevisionnumber

}

\vspace{1cm}

\setcounter{tocdepth}{2}
\setcounter{secnumdepth}{2}
\tableofcontents

\vspace{1cm}

\section{Linear programming}
\label{applications:linear_programming}

In the early days of computing,
computers were primarily used to solve optimization problems,
which is why the term ``programming'' is still used to describe such problems today.
\emph{Linear programming} is the study of optimization problems with a linear objective function and linear constraints.
% linear programming, quadratic programming, semidefinite programming.
Optimization problems play an important role in many business applications:
the whole point of a corporation is to constantly optimize profits,
subject to time, energy, and legal constraints.
% their operations within budgetary, time, and production constraints.

Suppose you want to maximize the quantity $g(x,y)$ subject to some constraints on the values $x$ and $y$.
To maximize $g(x,y)$ means to find the values of $x$ and $y$ that make $g(x,y)$ as large as possible.
Let's assume the \emph{objective function} $g(x,y)$ represents your company's revenue,
and the variables $x$ and $y$ correspond to monthly production rates of ``Xapper'' machines and ``Yapper'' machines.
You want to choose the production rates $x$ and $y$ that maximize revenue.
If the revenue from each Xapper machine is \$3000 and the revenue from each Yapper machine is \$2000,
the monthly revenue is described by the function $g(x,y) = 3000x + 2000y$.
% This is the objective function.

Due to the limitations of the current production facilities,
the rates $(x,y)$ are subject to various constraints.
We'll assume each constraint can be written in the form $a_1x+a_2y \leq b$.
%
The maximum number of Xapper machines that can be produced in a month is three, written $x\leq3$.
Similarly, the company can produce at most four Yapper machines, denoted $y \leq 4$.
It takes two employees to produce each Xapper machine and one employee to produce each Yapper machine.
Since the company has a total of seven employees,
the available human resources impose the constraint $2x+y \leq 7$ on the production rates.
Finally, logistic constraints allow for at most five machines to be shipped each month, which we write as $x+y \leq 5$.

This production rate optimization problem can be expressed as the following \emph{linear program}:
\begin{align*}
\max_{x,y} g(x,y) \ &=\ 3000x \, + \, 2000y,  \\   % \qquad [\$], \\
\intertext{subject to the constraints}
x \ &\leq \ 3, \\
y \ &\leq \ 4, \\
2x + y \ &\leq \ 7, \\
x + y \ &\leq \ 5, \\
x \geq 0, & \quad  y \geq 0.
\end{align*}
Each of the inequalities represents one of the real-world production constraints.
We also included the non-negativity constraints $x \geq 0$ and $y \geq 0$ to show
it's impossible to produce a negative number of machines---we're not doing an accounting scam here,
this is a legit Xapper--Yapper business.


At first glance, the problem looks deceptively simple.
We want to find the coordinates $(x,y)$ that maximize the objective function $g(x,y)$,
so we can simply find the direction of maximum growth of $g(x,y)$ and go as far as
possible in that direction.

Rather than attempt to plot the function $g(x,y)$ in three dimensions,
we can visualize the growth of $g(x,y)$ by drawing \emph{level curves} of the function,
which are analogous to the lines shown on topographic maps.
Each line represents some constant height: $g(x,y)=cn$, for a fixed spacing constant $c$ and $n \in \{0,1,2,3, \ldots \}$.
Figure~\ref{fig:linear_programming_level_curves_gxy} shows the level curves
of the objective function $g(x,y)=3000x + 2000y$ at intervals of $3000$.

\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/linear_algebra/linear_programming_level_curves_gxy.pdf}
%TODO: annotate each line with g value
\end{center}
\vspace{-6mm}
\caption{The objective function $g(x,y)=3000x + 2000y$ grows with $x$ and $y$.
The direction of maximum growth is $(3,2)$---if you think of $g(x,y)$
as describing the height of a terrain,
then the vector $(3,2)$ points uphill.
The dashed lines in the graph represent the following \emph{level curves}:
$g(x,y)=3000$, $g(x,y)=6000$, $g(x,y)=9000$, $g(x,y)=12\,000$, and so on.}  % TODO: update here if nums go in fig
\label{fig:linear_programming_level_curves_gxy}
\end{figure}


Linear programming problems are mainly interesting because of the constraints imposed on the feasible region.
%
Each inequality corresponds to a restriction on the possible production rates $(x,y)$.
%
A coordinate pair $(x,y)$ that satisfies all constraints is called a \emph{feasible point}.
The \emph{feasible region} is the set of all feasible points.
We can represent the constraint region graphically by shading out parts of the $xy$-plane,
as shown in Figure~\ref{fig:linear_programming_feasible_region}.


\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/linear_algebra/linear_programming_feasible_region.pdf}
% TODO: add x >0 and y >0 labels in bottom left corner
\end{center}
\vspace{-6mm}
\caption{The \emph{feasible region} for the linear programming problem.
The feasible region is the subset of the $xy$-plane that contains points $(x,y)$ satisfying all the constraints.
}
\label{fig:linear_programming_feasible_region}
\end{figure}

\noindent
\textbf{Which feasible point produces the maximum value of $g(x,y)$?}
This is the question we aim to answer in linear programming.
%
The linear programming problem illustrated in Figure~\ref{fig:linear_programming_feasible_region} is
simple enough that you can solve it just by looking at the graph.
The highest level curve that touches the feasible region is $g(x,y)=12\,000$,
and the feasible point that lies on this level curve is $(2,3)$.
The solution to the optimization problem is:
\[
\max_{(x,y) \textrm{ feasible}} g(x,y) = 12\,000
\qquad
\textrm{and}
\qquad
\mathop{\textrm{argmax}}_{(x,y) \textrm{ feasible}} g(x,y) = (2,3).
\]
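\noindent
If you want to reproduce figures like Figure~\ref{fig:linear_programming_level_curves_gxy} and Figure~\ref{fig:linear_programming_feasible_region} on your own computer, here is a minimal \texttt{Python} sketch (my own addition---the actual figures were generated with TikZ; this version assumes \texttt{numpy} and \texttt{matplotlib} are installed):
\begin{verbatim}
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> X, Y = np.meshgrid(np.linspace(0, 6, 400), np.linspace(0, 6, 400))
>>> feasible = (X <= 3) & (Y <= 4) & (2*X + Y <= 7) & (X + Y <= 5)
>>> plt.imshow(feasible, origin="lower", extent=(0, 6, 0, 6), alpha=0.3)
>>> plt.contour(X, Y, 3000*X + 2000*Y,
...             levels=[3000, 6000, 9000, 12000, 15000])  # level curves
>>> plt.plot(2, 3, "ko")   # the optimal vertex (2,3)
>>> plt.show()
\end{verbatim}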
%
Real-life linear programming problems usually involve hundreds of variables,
so it's not possible to simply ``look'' at the constraint region and find the optimal solution.
% ---unless you can see surfaces and inequalities in 100-dimensions.
% Wow how much coffee did you drink to do that?
We need to develop a systematic approach---an algorithm---which doesn't
depend on our ability to visualize the geometry of high-dimensional surfaces.


\subsection{Simplex algorithm}
\label{applications:simplex_algorithm}


The \emph{simplex algorithm}, invented in 1947,
is a systematic procedure for finding optimal solutions to linear programming problems.
The main idea of the simplex algorithm is to start from one of the corner points of the
feasible region and ``move'' along the sides of the feasible region until we find the maximum.
The reason why this ``sticking to the sides'' strategy works is that maximum solutions to linear programming
problems always occur at the corners of the feasible region.
Therefore, we're sure to find the maximum if we visit all the corners.
Furthermore, in each iteration of the simplex algorithm we always move along an edge where $g(x,y)$ increases.
Thus, by moving from edge to edge in directions where $g(x,y)$ increases,
sooner or later we'll reach the corner point where $g(x,y)$ is maximum.

The steps of the simplex algorithm are as follows
(a short \texttt{Python} sketch of this loop appears right after the list):

\begin{description}[style=unboxed]
\item[INPUTS:] Objective function $g(\vec{x})$ and constraints of the form $\vec{a}_i \cdot \vec{x} \leq b_i$.
\item[SETUP:] Construct the \emph{tableau}:
	\begin{itemize}
	\item Place the constraint equations and slack variables in the first rows.
	\item Place $-g(\vec{x})$ in the last row of the tableau.
	\end{itemize}
\item[INITIALIZATION:] Start the algorithm from the coordinates $\vec{x}_0$ that correspond
to a vertex of the constraint region.
\item[MAIN LOOP:] Repeat while negative numbers exist in the last row of the tableau:
	\begin{description}[style=unboxed]
	\item[Choose pivot variable:] The pivot variable is the one whose column contains the most negative number in the last row of the tableau.
	\item[Choose pivot row:] The pivot row corresponds to the first constraint that will become active when the pivot variable increases.
	\item[Move:] The algorithm ``moves'' from the vertex with coordinates $\vec{x}$ to the vertex with coordinates $\vec{x}^\prime$,
	which is defined as a vertex where the pivot row is active.
	\item[Change of variables:]
	Perform the change of variables from the coordinate system $B$ with origin at $\vec{x}$
	to a coordinate system $B^\prime$ with origin at $\vec{x}^\prime$.
	This involves row operations on the tableau.
	\end{description}
\item[OUTPUT:] When no more negative numbers exist in the last row,
we have reached the optimal solution $\vec{x}^* \equiv \mathop{\textrm{argmax}}_{\vec{x}} g(\vec{x})$.

\end{description}
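\noindent
To make the main loop concrete, here is a bare-bones \texttt{Python} sketch of the whole procedure (my own teaching version, not production code: the function \texttt{simplex} stores the tableau as a list of lists, assumes the problem is given in the standard form described above, and makes no attempt to handle unbounded or degenerate problems):
\begin{verbatim}
>>> from fractions import Fraction
>>> def simplex(T):
...     """Pivot the tableau T (last row = -objective) until optimal."""
...     while min(T[-1][:-1]) < 0:
...         # pivot column = most negative entry in the last row
...         col = T[-1][:-1].index(min(T[-1][:-1]))
...         # pivot row = smallest ratio constant/coefficient (coeff > 0)
...         _, row = min((T[i][-1]/T[i][col], i)
...                      for i in range(len(T)-1) if T[i][col] > 0)
...         T[row] = [v/T[row][col] for v in T[row]]  # normalize pivot row
...         for i in range(len(T)):                   # clear pivot column
...             if i != row:
...                 T[i] = [a - T[i][col]*b for a, b in zip(T[i], T[row])]
...     return T
>>> T = [[Fraction(v) for v in row] for row in [
...     [    1,     0, 1, 0, 0, 0, 3],
...     [    0,     1, 0, 1, 0, 0, 4],
...     [    2,     1, 0, 0, 1, 0, 7],
...     [    1,     1, 0, 0, 0, 1, 5],
...     [-3000, -2000, 0, 0, 0, 0, 0]]]
>>> simplex(T)[-1][-1]    # value of g(x,y) at the optimum
Fraction(12000, 1)
\end{verbatim}
Applied to the Xapper--Yapper tableau, this sketch visits exactly the sequence of tableaus we're about to compute by hand below.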

\noindent
Though not exactly identical to the Gauss--Jordan elimination procedure,
the simplex algorithm is similar to it because it depends on the use of row operations.
For this reason, linear programming and the simplex algorithm are often forced upon
students taking a linear algebra course, especially business students.
I'm not going to lie to you and tell you the simplex algorithm is simple,
but it is very powerful, so you should know it exists
and develop a general intuition about how it works.
And, as with all things corporate-related,
it's worth learning about these things so you'll know the techniques of the enemy.

In the remainder of this section, we'll go through the steps of the simplex algorithm
needed to find the optimal solution to the Xapper-and-Yapper production problem.
We'll analyze the problem from three different angles:
graphically by drawing shifted coordinate systems,
analytically by writing systems of equations,
and computationally by using a new matrix-like structure called a \emph{tableau}.
%which is useful for performing simplex algorithm calculations.
%


\subsubsection{Definitions}

We now introduce some useful terminology used in linear programming:

\begin{itemize}
\item An inequality $a_1x+a_2y \leq b$ is \emph{loose} (or \emph{slack}) for a given $(x,y)$
if there exists a positive constant $s >0$ such that $a_1x+a_2y +s = b$.
\item An inequality $a_1x+a_2y \leq b$ is \emph{tight} for a given $(x,y)$ if the equality condition holds: $a_1x+a_2y = b$.
\item $s$: \emph{slack variable}. Slack variables can be added to any inequality $a_1x+a_2y \leq b$
to transform it into an equality $a_1x+a_2y +s =b$. Note slack variables are always nonnegative: $s \geq 0$.
\item A \emph{vertex} of an $n$-dimensional constraint region is a point where $n$ inequalities are tight.
\item An \emph{edge} of an $n$-dimensional constraint region is a place where $n-1$ inequalities are tight.
\item A \emph{pivot variable} is a variable whose increase leads to an increase in the objective function.
\item A \emph{pivot row} is a row in the tableau that corresponds to the currently active edge during
a ``move'' operation of the simplex algorithm.
\item $\vec{v}$: a vector that represents the current \emph{state} of the linear program.
\item A \emph{tableau} is a matrix that represents the constraints on the feasible region
and the current value of the objective function.
Tableaus allow us to solve linear programming problems using row operations.
\end{itemize}


\subsubsection{Introducing slack variables}

In each step of the simplex algorithm,
we'll keep track of which constraints are \emph{active} (tight) and which are \emph{inactive} (loose).
%
To help with this bookkeeping task,
we introduce a nonnegative \emph{slack variable} $s_i \geq 0$ for each of the inequality constraints.
%
If $s_i > 0$, inequality $i$ is loose (not active),
and if $s_i=0$, inequality $i$ is tight (active).
If we want to remain within the feasible region,
no $s_i$ can become negative,
since $s_i< 0$ implies inequality $i$ is not satisfied.
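\noindent
In code, this bookkeeping is a one-liner. Here is a tiny helper (my own addition, not part of the original tutorial's code) that computes the slack in each constraint at a given point; a point is feasible exactly when all four slacks are nonnegative:
\begin{verbatim}
>>> A = [[1, 0], [0, 1], [2, 1], [1, 1]]   # constraint coefficients
>>> b = [3, 4, 7, 5]                       # constraint constants
>>> def slacks(x, y):
...     return [bi - (a1*x + a2*y) for (a1, a2), bi in zip(A, b)]
>>> slacks(0, 0)   # at the origin, all constraints are maximally slack
[3, 4, 7, 5]
>>> slacks(3, 3)   # infeasible: the last two slacks are negative
[0, 1, -2, -1]
\end{verbatim}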

Introducing the slack variables, the linear program becomes:
\begin{align*}
\qquad \qquad \max_{x,y} g(x,y) &= 3000x + 2000y,   \\
\intertext{subject to the equality constraints:}
x  \ \ \quad  +s_1 \quad \quad \quad  \ &= \ 3,  \\
y  \quad  +s_2  \quad \quad \ &= \ 4,  \\
2x + y  \quad \quad  + s_3 \quad \ &= \ 7,  \\
x + y  \quad \quad \quad  + s_4 \ &= \ 5,  \\
x,y,s_1,s_2,s_3,s_4 \ &\geq \ 0.   \\
\end{align*}

\noindent
This is starting to look a lot like a system of linear equations, right?
Recall that what matters most in a system of linear equations are the \emph{coefficients},
not the variable names.
Previously,
when faced with overwhelming complexity in the form of linear equations with many variables,
we used an augmented matrix to help us focus only on what matters.
We'll use a similar approach again.


\subsubsection{Introducing the tableau}

A \emph{tableau} is a compact representation of a linear programming problem in the form of an array of numbers,
analogous to the augmented matrix used to solve systems of linear equations.
{\footnotesize
\[
\begin{array}{rl}
x  \ \ \quad  +s_1 \quad \quad \quad  \ &= \ 3  \\
y  \quad  +s_2  \quad \quad \ &= \ 4  \\
2x + y  \quad \quad  + s_3 \quad \ &= \ 7  \\
x + y  \quad \quad \quad  + s_4 \ &= \ 5  \\
- 3000x - 2000y  \quad \quad \ \ \, \ \  &= \ \underline{?}
\end{array}
\quad
\Leftrightarrow
\quad
\left[
\begin{array}{rrrrrr|r}
1& 0& 1& 0& 0& 0 \ \ & \ 3  \\
0& 1& 0& 1& 0& 0 \ \ & \ 4  \\
2& 1& 0& 0& 1& 0 \ \ & \ 7  \\
1& 1& 0& 0& 0& 1\ \  & \ 5  \\
\!\!\!-3000& \!\!\!\!-2000& 0& 0& 0& 0 \ \ & \ \underline{?}
\end{array}
\right]\!.
\]}%

\noindent
The first six columns of the tableau correspond to the variables $x$, $y$, $s_1$, $s_2$, $s_3$, $s_4$,
and the last column contains the constants from the constraint equations.
%
In the last row, we use the coefficients of the negative objective function $-g(x,y)$.
This strange initialization of the last row of the tableau is a trick we use to calculate the current value of the objective function $g(x,y)$
in the bottom right corner of the tableau (the entry that contains an underlined question mark in the above tableau).
We defer the explanation of why we use the \emph{negative} of the objective function
in the last row until after we learn about the interpretation of row operations on the tableau.

The numbers in the example problem are deliberately chosen to emphasize the distinction between
the constraint rows and the objective-function row.
Small coefficients are used for the constraints, and large coefficients are used in the objective function.
% (the sides of the region) vs level curves
This way it should be clear that two different types of data are recorded in the tableau.
%
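\noindent
On a computer, the tableau is just an ordinary matrix. Here it is as a \texttt{SymPy} \texttt{Matrix} object---a smaller version of the matrix we'll build in the \texttt{SymPy} section below, which also includes the nonnegativity constraints:
\begin{verbatim}
>>> from sympy import Matrix
>>> M = Matrix([
    [    1,     0, 1, 0, 0, 0, 3],
    [    0,     1, 0, 1, 0, 0, 4],
    [    2,     1, 0, 0, 1, 0, 7],
    [    1,     1, 0, 0, 0, 1, 5],
    [-3000, -2000, 0, 0, 0, 0, 0]])
\end{verbatim}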
Looking back to Figure~\ref{fig:linear_programming_feasible_region}
can help you compare the two scales of the problem:
the linear constraints that delimit the feasible region all fit in a $6\times 6$ coordinate system,
while the distance between the level curves of $g(x,y)$ is $3000$.
%
You can think of the linear programming problem described in Figure~\ref{fig:linear_programming_feasible_region}
as a three-dimensional plot of the plane $z=g(x,y)=3000x+2000y$,
restricted to the points that lie above the feasible region.
%
The scale of the $x$- and $y$-axes in such a plot would be much smaller than the scale of the $z$-axis.
% TODO: produce 3d plot of constraint region.



\subsubsection{Start from a vertex}

In each step of the simplex algorithm,
we want to move from one vertex of the constraint region to another vertex,
moving along one of the sides.
%
For the algorithm to do its thing,
it must start from one of the corners of the constraint region.
%
We can start from the origin $(0,0)$,
which is the vertex formed by the intersection of the nonnegativity constraints $x\geq 0$ and $y \geq 0$.

Given $x=0$ and $y=0$, we can deduce the values of the slack variables:
$s_1=3$, $s_2=4$, $s_3=7$, $s_4=5$.
In other words, all constraints are initially maximally slack.

\subsubsection{State vector}

The simplex algorithm requires two types of state.
First, we must record the coordinates $(x,y)$ of the current position (the current vertex being visited).
Second, we need to keep track of which constraints are tight and which are slack.

Now this is where things get a little weird.
Instead of keeping track of these two types of information separately,
we'll use a six-dimensional \emph{state vector} that represents the current position \emph{in terms of} the variables in the tableau.
%
The state vector that corresponds to starting the simplex algorithm from $(x,y)=(0,0)$ is
\[
\vec{v} = (0,0, 1, 1, 1, 1 ).
\]
If the $i$\textsuperscript{th} entry of $\vec{v}$ is $1$, then the $i$\textsuperscript{th} column of the tableau is used in the current set of equations.
Otherwise, if the $i$\textsuperscript{th} entry of $\vec{v}$ is $0$, then the $i$\textsuperscript{th} column of the tableau
should be ignored.\!\footnote{Readers familiar with programming will recognize $\vec{v}$ serves as a \emph{bitmask}.}

Note that each state vector is tied to a particular tableau and has no meaning on its own.
%
We can understand the select-only-columns-with-one-in-the-state procedure as a matrix multiplication:
{\footnotesize
\[
\left[
\begin{array}{rrrrrr}
1& 0& 1& 0& 0& 0  \\
0& 1& 0& 1& 0& 0  \\
2& 1& 0& 0& 1& 0  \\
1& 1& 0& 0& 0& 1  \\
\!\!\!-3000& \!\!\!\!-2000& 0& 0& 0& 0
\end{array}
\right]
\!\!
\begin{bmatrix}
0 \\ 0 \\ 1 \\ 1 \\ 1 \\ 1
\end{bmatrix}\!
=
\!
\begin{bmatrix}
3 \\ 4 \\ 7 \\ 5 \\ 0
\end{bmatrix}
\quad
\Rightarrow
\ \ \ 
\begin{array}{rl}
s_1 \quad \quad \quad  \ &= 3  \\
\quad s_2  \quad \quad \ &= 4  \\
\quad \quad  s_3 \quad \ &= 7  \\
\quad \quad \quad  s_4 \ &= 5  \\
- 3000(0) - 2000(0) \!\!&= 0.
\end{array}
\]}%
The current problem variables have value $x=0$ and $y=0$,
and each of the constraint equations is maximally slack.
The value of the objective function at this vertex is $g(0,0)=0$.
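\noindent
We can verify in \texttt{SymPy} that the starting values of the six variables satisfy all the constraint equations. Multiplying the first six columns of the tableau \texttt{M} defined earlier by the vector of variable values reproduces the constants column, and the bottom entry confirms $-g(0,0)=0$ (this check is my own addition):
\begin{verbatim}
>>> vals = Matrix([0, 0, 3, 4, 7, 5])   # (x, y, s1, s2, s3, s4)
>>> M[:, 0:6]*vals
Matrix([
[3],
[4],
[7],
[5],
[0]])
\end{verbatim}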
I know what you're thinking.
Surely it would have been simpler to keep track of the position $(x,y)$
and the slackness $(s_1,s_2,s_3,s_4)$ separately,
rather than to invent a binary state vector $\vec{v} \in \{0, 1\}^{6}$
and then depend on matrix multiplication to find the coordinates.
%
% and thus operate on them simultaneously through row operations.
% You'll see things will become clear when we start with the row operations,
% but before we move on let's just look at Figure~\ref{fig:linear_programming_1} to see where we are.

True that.
But the energy we invested to represent the constraints as a tableau
will allow us to perform complex geometrical operations by performing row operations on the tableau.
%
The key benefit of the combined representation is that it treats
problem variables and slack variables on the same footing.

\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/linear_algebra/linear_programming_1.pdf}
\end{center}
\vspace{-6mm}
\caption{The $(x,y)$ coordinates are measured with respect to $(0,0)$.}
\label{fig:linear_programming_1}
\end{figure}



It's now time to start making progress.
Let's make a move. Let's cut some slack!



\subsubsection{Choose the pivot variable}

The simplex algorithm continues while there exist negative numbers in the last row of the tableau.
Recall we initialized the last row of the tableau with the coefficients of $-g(x,y)$
and the initial value $g(0,0)=0$.
%
\[
\left[
\begin{array}{rrrrrr|r}
1& 0& 1& 0& 0& 0 \ \ & \ 3 \ \  \\
0& 1& 0& 1& 0& 0 \ \ & \ 4 \ \ \\
2& 1& 0& 0& 1& 0 \ \ & \ 7 \ \ \\
1& 1& 0& 0& 0& 1\ \  & \ 5 \ \ \\
-3000& -2000& 0& 0& 0& 0 \ \ & \ 0 \ \ 
\end{array}
\right].
\]
Both the $x$ and $y$ columns contain negative numbers.
This means the objective function would increase if we were to increase either the $x$ or the $y$ variable.
%
The coefficient $-3000$ in the last row represents our \emph{incentive} to increase the variable $x$:
each increase of $1$ step in the Xapper production will result in an increase of $3000$ in the value of $g(x,y)$.
Similarly, the coefficient $-2000$ indicates that a unit step in the $y$ direction will increase $g(x,y)$ by $2000$.
It's a bit complicated why we put the \emph{negatives} of the coefficients in the row for the objective function,
but you'll have a chance to convince yourself it all works out nicely when you see the row operations.

Given the options $x$ and $y$,
we choose to increase the $x$ variable since it leads to the biggest gain in profits.
Remember, we're playing the role of the greedy corporate capitalist whose only motivation is to maximize profits.
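\noindent
In code, choosing the pivot variable amounts to finding the most negative entry in the last row of the tableau. With the \texttt{SymPy} matrix \texttt{M} from earlier (again, my own illustration):
\begin{verbatim}
>>> g_row = list(M[M.rows - 1, 0:2])   # coefficients of -g(x,y)
>>> g_row
[-3000, -2000]
>>> g_row.index(min(g_row))            # most negative entry: column of x
0
\end{verbatim}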

\subsubsection{Choose the pivot row}

We've decided to increase the $x$ variable.
The next step is to check how big we can make the $x$ variable before hitting one of the constraints.
%
% To find out we check each of the four constraints to find which will become active
% We say $x$ is the \emph{pivot variable}---the variable

Which constraint do we hit first when we move in the $x$ direction?
In the first equation $x + s_1 = 3$, we could increase $x$ up to $x=3$.
In the second equation $y + s_2 = 0 + s_2 = 4$, the $x$-variable doesn't appear, so it ``allows'' an arbitrary increase in $x$.
The third constraint $2x + y + s_3 = 2x + 0 + s_3 = 7$ allows an increase in $x$ up to $x \leq 7/2=3.5$.
Finally, the fourth equation $x + 0+ s_4 = 5$ imposes the constraint $x\leq 5$.

The equation $x + s_1 = 3$ allows the smallest increase in $x$,
so this equation will become our \emph{pivot row}.
Specifically, pivoting through the equation $x + s_1 = 3$ means we're decreasing $s_1$ from $3$ to $0$
and correspondingly increasing $x$ from $0$ to $3$.
We have thus de-slacked the constraint $x + s_1 = 3$.

Performing the pivot operation corresponds to the following change in the state vector:
\[
\vec{v} = (0,0, 1, 1, 1, 1 )
\qquad \to
\qquad
\vec{v}^{\prime} = (1,0, 0, 1, 1, 1 ).
\]
To understand the effect of this change of state,
let's look at the constraint equations that result from the new state vector:
{\footnotesize
\[
\left[
\begin{array}{rrrrrr}
1& 0& 1& 0& 0& 0 \ \  \\
0& 1& 0& 1& 0& 0 \ \  \\
2& 1& 0& 0& 1& 0 \ \  \\
1& 1& 0& 0& 0& 1 \ \ 
% \\
% -3000& -2000& 0& 0& 0& 0 \ \ 
\end{array}
\right]
\!\!
\begin{bmatrix}
1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 1
\end{bmatrix}
=
\begin{bmatrix}
3 \\ 4 \\ 7 \\ 5 \\ 0
\end{bmatrix}
\qquad
\Rightarrow
\qquad
\begin{array}{rl}
x \quad \quad \quad \quad \ &= \ 3  \\
\quad s_2  \quad \quad \ &= \ 4  \\
\quad \quad  s_3 \quad \ &= \ 7  \\
\quad \quad \quad  s_4 \ &= \ 5 \ .
% - 3000(0) - 2000(0) \quad \ &= \ \underline{0}
\end{array}
\]}%
The effect of the new state $\vec{v}^\prime$ is to increase $x$ from $0$ to $3$,
which leads to a corresponding decrease of $s_1$ from $3$ to $0$.
The values of the six variables are now:
\[
x=3, \quad y=0, \quad s_1=0, \quad s_2=4, \quad s_3=7, \quad s_4=5.
\]
%
Note the change of state vector $\vec{v} \to \vec{v}^\prime$ we performed did not change the
system of equations that describe the constraints of the linear program:
\[
\begin{array}{rl}
x \ + \ \ \ \ \ + \ s_1 \quad \quad \quad  \ &= \ 3  \\
y \ +  \quad  s_2  \quad \quad \ &= \ 4  \\
2x \ + \ y \ + \quad \quad  s_3 \quad \ &= \ 7  \\
\ x \ + \ y \ + \quad \quad \quad  s_4 \ &= \ 5.
\end{array}
\]
%
This system of equations corresponds to the tableau:
\[
\left[
\begin{array}{rrrrrr|r}
1& 0& 1& 0& 0& 0 \ \ & \ 3 \ \  \\
0& 1& 0& 1& 0& 0 \ \ & \ 4 \ \ \\
2& 1& 0& 0& 1& 0 \ \ & \ 7 \ \ \\
1& 1& 0& 0& 0& 1\ \  & \ 5 \ \ \\
-3000& -2000& 0& 0& 0& 0 \ \ & \ 0 \ \ 
\end{array}
\right].
\]
Except for the increase in $x$ and the corresponding decrease in $s_1$,
we haven't done anything to the tableau.
The next step of the simplex algorithm involves performing row operations on the tableau.

\subsubsection{Change of variables}

Knowing $x=3$, we can subtract this equation from the other constraints
to eliminate the variable $x$ from the system of equations.
This is accomplished by performing the following row operations:
\begin{itemize}
\item $R_3 \gets R_3 - 2R_1$
\item $R_4 \gets R_4 - R_1$
\item $R_5 \gets R_5 + 3000R_1$
\end{itemize}
% >>> M.row(2, lambda v,j: v-2*M[0,j] )
% >>> M.row(3, lambda v,j: v-1*M[0,j] )
% >>> M.row(4, lambda v,j: v+3000*M[0,j] )

\noindent
The result is the following tableau:
\[
\left[
\begin{array}{rrrrrr|r}
1& 0& 1& 0& 0& 0& 3 \\
0& 1& 0& 1& 0& 0& 4 \\
0& 1& -2& 0& 1& 0& 1 \\
0& 1& -1& 0& 0& 1& 2 \\
0& -2000& 3000& 0& 0& 0& 9000
\end{array}
\right].
\]
Geometrically speaking,
the effect of the row operations is a change of coordinate system.
The equations in the new tableau are expressed with respect to the
coordinate system $(x',y')$ whose origin is at $(3,0)$.
Figure~\ref{fig:linear_programming_2} illustrates this change of coordinate system:

\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/linear_algebra/linear_programming_2.pdf}
\end{center}
\vspace{-6mm}
\caption{The origin of the original $(x,y)$ coordinate system is at $(0,0)$.
The origin of the new $(x',y')$ coordinate system is at $(3,0)$.}
\label{fig:linear_programming_2}
\end{figure}


The values of the six variables after performing the change of coordinate system are:
\[
x=3, \quad y'=0, \quad s'_1=0, \quad s'_2=4, \quad s'_3=1, \quad s'_4=2,
\]
and the current state of the tableau is
\vspace{1mm}
{\footnotesize
\[
\left[
\begin{array}{rrrrrr|r}
1& 0& 1& 0& 0& 0& 3 \\
0& 1& 0& 1& 0& 0& 4 \\
0& 1& -2& 0& 1& 0& 1 \\
0& 1& -1& 0& 0& 1& 2 \\
0& -2000& 3000& 0& 0& 0& 9000
\end{array}
\right]
\quad
\Leftrightarrow
\quad
\begin{array}{rl}
x \ + \ s'_1 \quad \quad \quad \quad \ &= \ 3  \\
y' \ + \ s'_2  \quad \quad \quad \ &= \ 4  \\
y' \ - \ 2s'_1 \ + \ s'_3 \quad \ &= \ 1  \\
y' \ - \ s'_1 \ \ \quad + \ s'_4 \ &= \ 2.
\end{array}
\]}%
Note $s'_1=0$ in the system of equations, so we could completely ignore this variable and the associated column of the tableau.
Nevertheless, we choose to keep the $s'_1$ column around as a bookkeeping device,
because it's useful to keep track of how many times we have subtracted the first row from the other rows.

We have now completed the first step of the simplex algorithm,
and we continue the procedure by looking for the next pivot.
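\noindent
Before moving on, a quick sanity check with the \texttt{slacks} helper defined earlier: at the new vertex $(3,0)$, the four slack values match what we just read off the tableau.
\begin{verbatim}
>>> slacks(3, 0)
[0, 4, 1, 2]
\end{verbatim}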

\subsubsection{Choose the pivot}

Observe that the second column of the objective-function row of the tableau contains a negative number,
which means the objective function will increase if we increase the variable $y'$.
% advantageous


Having decided to increase $y'$,
we must now determine by how much we can increase $y'$ before we hit one of the constraints.
We can find this out by computing the ratios of the constants in the rightmost column of the tableau
to the $y'$ coefficients of each row.
The first constraint does not contain $y'$ at all, so it allows any increase in $y'$.
The second constraint will become active when $y' = 4$.
The third constraint allows $y'$ to go up to $y'=1$,
and the fourth constraint becomes active when $y'=2$.
%>>> [ M[i,6]/M[i,1] for i in range(0,4) ]
%[oo, 4, 1, 2]
%
The largest increase in $y'$ that satisfies all the constraints is $y'=1$.
We'll use the third row of the tableau as the \emph{pivot row},
which means we de-slack $s'_3$ from $1$ to $0$ and correspondingly increase $y'$ from $0$ to $1$.
After pivoting, the values of the six variables are
\[
x'=3, \quad y'=1, \quad s''_1=0, \quad s''_2=4, \quad s''_3=0, \quad s''_4=2.
\]


\subsubsection{Change of variables}

We can now subtract the equation $y'=1$ from the other $y'$-containing rows.
The required row operations are
\begin{itemize}
\item $R_2 \gets R_2 - R_3$
\item $R_4 \gets R_4 - R_3$
\item $R_5 \gets R_5 + 2000R_3$
\end{itemize}
%>>> M.row(1, lambda v,j: v-1*M[2,j] )
%>>> M.row(3, lambda v,j: v-1*M[2,j] )
%>>> M.row(4, lambda v,j: v+2000*M[2,j] )
After applying these row operations, the resulting tableau is
\[
\left[
\begin{array}{rrrrrr|r}
1& 0& 1& 0& 0& 0& 3 \\
0& 0& 2& 1& -1& 0& 3 \\
0& 1& -2& 0& 1& 0& 1 \\
0& 0& 1& 0& -1& 1& 1 \\
0& 0& -1000& 0& 2000& 0& 11000
\end{array}
\right].
\]

\noindent
The geometrical interpretation of subtracting the equation $y'=1$ from the other constraints
is a change to a new coordinate system $(x'',y'')$ with origin at $(3,1)$,
as illustrated in Figure~\ref{fig:linear_programming_3}.

\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/linear_algebra/linear_programming_3.pdf}
\end{center}
\vspace{-6mm}
\caption{The origin of the $(x'',y'')$ coordinate system is at $(3,1)$.}
\label{fig:linear_programming_3}
\end{figure}



\subsubsection{Choose a pivot}

The third column of the tableau contains a negative number,
which means the objective function will increase if we increase $s''_1$.
We pick $s''_1$ as the next pivot variable.

We now check the different constraints containing $s''_1$ to see which will become
active first when we increase $s''_1$.
The first inequality imposes $s''_1 \leq 3$,
the second imposes $s''_1 \leq 3/2=1.5$,
and the third equation contains a negative amount of $s''_1$ and thus can't be used as a pivot.
The equation in the fourth row, $s''_1 -s''_3 + s''_4 = 1$,
imposes the constraint $s''_1 \leq 1$,
and is the tightest constraint.
Therefore, we'll use the fourth-row equation as the pivot.
%>>> [ M[i,6]/M[i,2] for i in range(0,4) ]
%[3, 3/2, -1/2, 1]

After pivoting, the new values of the variables are
\[
x''=3, \quad y''=1, \quad s''_1=1, \quad s''_2=3, \quad s''_3=0, \quad s''_4=0.
\]
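\noindent
The pivot we just chose will take us to the vertex $(2,3)$---the optimum, as we're about to see. As a sanity check with the \texttt{slacks} helper: at $(2,3)$, the first two constraints are slack and the last two are tight.
\begin{verbatim}
>>> slacks(2, 3)
[1, 1, 0, 0]
\end{verbatim}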
We now carry out the following row operations to eliminate $s''_1$ from the other equations:
\begin{itemize}
\item $R_1 \gets R_1 - R_4$
\item $R_2 \gets R_2 - 2R_4$
\item $R_3 \gets R_3 + 2R_4$
\item $R_5 \gets R_5 + 1000R_4$
\end{itemize}
%>>> M.row(0, lambda v,j: v-1*M[3,j] )
%>>> M.row(1, lambda v,j: v-2*M[3,j] )
%>>> M.row(2, lambda v,j: v+2*M[3,j] )
%>>> M.row(4, lambda v,j: v+1000*M[3,j] )

\noindent
The final tableau is
\[
\left[
\begin{array}{rrrrrr|r}
1& 0& 0& 0& 1& -1& 2 \\
0& 0& 0& 1& 1& -2& 1 \\
0& 1& 0& 0& -1& 2& 3 \\
0& 0& 1& 0& -1& 1& 1 \\
0& 0& 0& 0& 1000& 1000& 12000
\end{array}
\right].
\]
The state variables that correspond to this tableau are
\[
x'''=2, \quad y'''=3, \quad s'''_1=1, \quad s'''_2=1, \quad s'''_3=0, \quad s'''_4=0.
\]
We know this tableau is \emph{final} because there are no more negative numbers in the last row.
This means there are no more directions we can move in that would increase the objective function.
%

\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/linear_algebra/linear_programming_4.pdf}
\end{center}
\vspace{-3mm}
\caption{The $(x''',y''')$ coordinates are measured with respect to the point $(2,3)$.
The simplex algorithm stops because there is no direction it can move in
that would increase the objective function $g(x,y)$.}
\label{fig:linear_programming_4}
\end{figure}



\subsubsection{Regroup}

If you're finding linear programming and the simplex algorithm difficult to comprehend,
you're not alone.
It will take some time to understand the procedure,
but if you solve a few practice problems, you'll get the hang of it.

Linear programming problems with few parameters and few constraints are simple to solve,
but problems that involve hundreds of variables and hundreds of constraints can be very difficult to solve.
Linear programming problems with hundreds and even thousands of variables are common in real-world scenarios.
For this reason, it is very important to approach linear programming problems
in a systematic manner so that we can teach a computer to do the steps for us.
In the next section we enlist the help of \texttt{SymPy} to retrace the steps of the simplex algorithm for the Xapper--Yapper linear program.


\subsection{Using \texttt{SymPy} to solve linear programming problems}

Recall each \texttt{SymPy} \texttt{Matrix} object has a method called \texttt{.row(...)} that
can be used to perform row operations on the matrix.
%
Let's see if we can't make linear programming a little bit more bearable by using \texttt{SymPy}
to perform the row operations on the tableau.
Letting \texttt{SymPy} do the tedious row-operation work for us will
let us focus on the \emph{state} of the algorithm in each step.

Since \texttt{SymPy} will be doing the calculations for us,
it will not hurt if we include the nonnegativity constraints $x\geq0$ and $y\geq 0$ in our analysis.
% See Figure~\ref{fig:linear_programming_feasible_region} (page~\pageref{fig:linear_programming_feasible_region})
% for an illustration of the constraint region.
We rewrite these constraints as $-x \leq 0$ and $-y \leq 0$ in order to conform to the standard convention ($a_1x+a_2y \leq b$)
for expressing constraints.

The full system of constraints that corresponds to the feasible region is
\[
\begin{array}{rl}
x  \ \ \quad  +s_1 \quad \quad \quad \quad \quad  \ &= \ 3  \\
y  \quad  +s_2  \quad \quad \quad \quad \ &= \ 4  \\
2x + y  \quad \quad  + s_3 \quad \quad \quad \ &= \ 7  \\
x + y  \quad \quad \quad  + s_4 \quad \quad \ &= \ 5  \\
-x  \ \ \ \quad \ \  \quad \quad \quad  +s_5 \quad \ &= \ 0  \\
-y \quad \quad \quad \quad \quad  + s_6 \ &= \ 0.  \\
\end{array}
\]

\noindent
We can create the tableau as a \texttt{SymPy} matrix object whose coefficients
correspond to the above system of equations. Use the following command:
\begin{verbatim}
>>> M = Matrix([
    [    1,     0, 1, 0, 0, 0, 0, 0, 3],
    [    0,     1, 0, 1, 0, 0, 0, 0, 4],
    [    2,     1, 0, 0, 1, 0, 0, 0, 7],
    [    1,     1, 0, 0, 0, 1, 0, 0, 5],
    [   -1,     0, 0, 0, 0, 0, 1, 0, 0],
    [    0,    -1, 0, 0, 0, 0, 0, 1, 0],
    [-3000, -2000, 0, 0, 0, 0, 0, 0, 0]])
\end{verbatim}

\noindent
Note constructing the tableau did not require any special command---a tableau is just a regular \texttt{Matrix} object.
The first six rows of the matrix~$M$ correspond to the constraints on the feasible region,
while the last row contains the negatives of the coefficients of the objective function: $-g(x,y)=-3000x-2000y$.


\subsubsection{Vertices and sides}

The simplex algorithm starts from a \emph{vertex} of the feasible region
and moves along the \emph{edges} of the feasible region.
For the two-dimensional Xapper--Yapper production problem,
the vertices of the region are the corners,
and the edges are the sides of a two-dimensional \emph{polygon}.
A polygon is the general math term for describing a subset of the $xy$-plane
delimited by a finite chain of straight line segments.

More generally,
the constraint region of an $n$-dimensional linear programming problem has the shape of an $n$-dimensional \emph{polytope}.
A vertex of the polytope corresponds to a place where $n$ constraint inequalities are tight,
while an edge corresponds to a place where $n-1$ inequalities are tight.
In each step of the simplex algorithm,
we'll keep track of which constraints in the tableau are \emph{active} (tight) and which are \emph{inactive} (loose).

The simplex algorithm must be started from one of the vertices of the feasible region.
For the Xapper--Yapper production problem,
we initialized the tableau from the corner $(0,0)$,
which is the vertex where the two nonnegativity constraints intersect.

\begin{verbatim}
>>> M
[    1,     0, 1, 0, 0, 0, 0, 0, 3]
[    0,     1, 0, 1, 0, 0, 0, 0, 4]
[    2,     1, 0, 0, 1, 0, 0, 0, 7]
[    1,     1, 0, 0, 0, 1, 0, 0, 5]
[   -1,     0, 0, 0, 0, 0, 1, 0, 0]      (active)
[    0,    -1, 0, 0, 0, 0, 0, 1, 0]      (active)
[-3000, -2000, 0, 0, 0, 0, 0, 0, 0]
\end{verbatim}

\noindent
The current \emph{state} of the simplex algorithm is completely specified by the two \texttt{(active)} flags.
We can deduce the value of all state variables from the knowledge that the inequalities
$-x \leq 0$ and $-y \leq 0$ are tight.
The fifth and sixth constraints are active,
which means $s_5=0$ and $s_6=0$,
from which we deduce that $x=0$ and $y=0$.
If $x=0$ and $y=0$,
then the other inequalities are maximally slack, and thus $s_1=3$, $s_2=4$, $s_3=7$, $s_4=5$.


\subsubsection{Choose the pivot 1}
In the first step we choose to increase the $x$-variable since it leads to the bigger gain in $g(x,y)$
($3000$ per unit step),
as compared to an increase in the $y$-direction ($2000$ per unit step).

Next we must choose the pivot row.
How big of a step can we make in the $x$-direction before hitting some constraint?
To calculate the maximum allowed step in each direction,
we divide the current slack available in each inequality by its $x$ coefficient:

\begin{verbatim}
>>> [ M[i,8]/M[i,0] for i in range(0,4) ]
[3, oo, 7/2, 5]
\end{verbatim}

\noindent
The above magical \texttt{Python} invocation computes the largest step in the $x$ direction allowed by each constraint.
Since the step size of the first row (index 0) is the smallest,
we must choose the first row as our pivot.

\subsubsection{Change of variables 1}
Carry out the necessary row operations to eliminate all other numbers in the $x$ column.

\begin{verbatim}
>>> M.row(2, lambda v,j: v-2*M[0,j] )
>>> M.row(3, lambda v,j: v-1*M[0,j] )
>>> M.row(4, lambda v,j: v+1*M[0,j] )
>>> M.row(6, lambda v,j: v+3000*M[0,j] )
>>> M
[1,     0,    1, 0, 0, 0, 0, 0,    3]   (active)
[0,     1,    0, 1, 0, 0, 0, 0,    4]
[0,     1,   -2, 0, 1, 0, 0, 0,    1]
[0,     1,   -1, 0, 0, 1, 0, 0,    2]
[0,     0,    1, 0, 0, 0, 1, 0,    3]
[0,    -1,    0, 0, 0, 0, 0, 1,    0]   (active)
[0, -2000, 3000, 0, 0, 0, 0, 0, 9000]
\end{verbatim}

\noindent
Again, the state of the linear program can be deduced from the information about which constraints are currently active.
Since the first constraint is active ($s_1=0$),
we know $x=3$, and since the sixth inequality hasn't changed, we still have $y=0$.


\subsubsection{Choose the pivot 2}
This time we see there are gains to be had if we increase the $y$-variable,
so we check which constraint will become active first as we increase the $y$-variable.
Note that \texttt{M[i,1]} corresponds to the second column of the \texttt{Matrix} object \texttt{M},
because \texttt{Python} uses $0$-based indexing.

\begin{verbatim}
>>> [ M[i,8]/M[i,1] for i in range(0,4) ]
[oo, 4, 1, 2]
\end{verbatim}

\noindent
The third row (index \texttt{2}) corresponds to the constraint that allows the smallest step
in the $y$-direction, so we must use the third row as our pivot.

\subsubsection{Change of variables 2}
The next step is business as usual: we use a set of row operations to eliminate all the
$y$-coefficients above and below the third row.

\begin{verbatim}
>>> M.row(1, lambda v,j: v-1*M[2,j] )
>>> M.row(3, lambda v,j: v-1*M[2,j] )
>>> M.row(5, lambda v,j: v+1*M[2,j] )
>>> M.row(6, lambda v,j: v+2000*M[2,j] )
>>> M
[1, 0,     1, 0,    0, 0, 0, 0,     3]   (active)
[0, 0,     2, 1,   -1, 0, 0, 0,     3]
[0, 1,    -2, 0,    1, 0, 0, 0,     1]   (active)
[0, 0,     1, 0,   -1, 1, 0, 0,     1]
[0, 0,     1, 0,    0, 0, 1, 0,     3]
[0, 0,    -2, 0,    1, 0, 0, 1,     1]
[0, 0, -1000, 0, 2000, 0, 0, 0, 11000]
\end{verbatim}

\noindent
The first and the third rows have become the active constraints,
which means $x= 3$ and $2x+y= 7$.
The full state of the algorithm is
\[
x=3, \quad y=1, \quad s_1=0, \quad s_2=3, \quad s_3=0, \quad s_4=1.
\]


\bigskip

\subsubsection{Choose the pivot 3}
The simplex algorithm continues because there still exists a negative coefficient
in the objective-function row of the tableau.
The coefficient \texttt{-1000} in the column that corresponds to $s_1$ tells us
we can improve the value of the objective function if we increase the value of the $s_1$ variable.
Increasing $s_1$ from its current value $s_1=0$ means we're making the first inequality ($x\leq3$) slack.

The fourth constraint allows the smallest increase in the $s_1$-direction,
as can be seen from the following calculation:
\begin{verbatim}
>>> [ M[i,8]/M[i,2] for i in range(0,4) ]
[3, 3/2, -1/2, 1]
\end{verbatim}

\noindent
We choose the fourth row as our pivot.

\subsubsection{Change of variables 3}
We can eliminate all the numbers in the $s_1$-column of the tableau
using the following row operations:

\begin{verbatim}
>>> M.row(0, lambda v,j: v-1*M[3,j] )
>>> M.row(1, lambda v,j: v-2*M[3,j] )
>>> M.row(2, lambda v,j: v+2*M[3,j] )
>>> M.row(4, lambda v,j: v-M[3,j] )
>>> M.row(5, lambda v,j: v+2*M[3,j] )
>>> M.row(6, lambda v,j: v+1000*M[3,j] )
>>> M
[1, 0, 0, 0,    1,   -1, 0, 0,     2]
[0, 0, 0, 1,    1,   -2, 0, 0,     1]
[0, 1, 0, 0,   -1,    2, 0, 0,     3]   (active)
[0, 0, 1, 0,   -1,    1, 0, 0,     1]   (active)
[0, 0, 0, 0,    1,   -1, 1, 0,     2]
[0, 0, 0, 0,   -1,    2, 0, 1,     3]
[0, 0, 0, 0, 1000, 1000, 0, 0, 12000]
\end{verbatim}

\noindent
The simplex algorithm now terminates,
since there are no more negative numbers in the last row.
%
The final state of the algorithm is
\[
x=2, \quad y=3, \quad s_1=1, \quad s_2=1, \quad s_3=0, \quad s_4=0.
\]
The value of the objective function at this vertex is $g(2,3)= \$12\,000$.



\subsection{Using a linear program solver}

Using the \texttt{Matrix} row operations provided by \texttt{SymPy} is certainly a fast
method to perform the steps of the simplex algorithm.
But there is an even better approach:
we can outsource the work of solving the linear program
completely to the computer by using the function \texttt{linprog}
that comes with the \texttt{Python} package \texttt{scipy}.

These functions are not available in the web interface at \texttt{live.sympy.org},
so you'll need to install \href{https://www.python.org/downloads/}{\texttt{Python}}
and the packages \href{http://www.scipy.org/install.html}{\texttt{numpy}}
and \href{http://www.scipy.org/install.html}{\texttt{scipy}} on your computer.
Once this is done, solving the linear program is as simple as
setting up the problem description and calling the function \texttt{linprog}:

\begin{verbatim}
>>> from scipy.optimize import linprog
>>> c = [-3000, -2000]        # coefficients of f(x)
>>> A = [[ 1,  0],            # coefficients of the constraints
         [ 0,  1],
         [ 2,  1],
         [ 1,  1],
         [-1,  0],
         [ 0, -1]]
>>> b = [3, 4, 7, 5, 0, 0]    # constraint constants

>>> linprog(c, A_ub=A, b_ub=b)    # find min of f(x) s.t. Ax <= b
  success: True
        x: [ 2, 3]
      fun: -12000
    slack: [ 1, 1, 0, 0, 2, 3]
  message: Optimization terminated successfully.
      nit: 3
\end{verbatim}

\noindent
The function \texttt{linprog} solves the \emph{minimization} problem $\min f(\vec{x})$,
subject to the constraints $A\vec{x} \leq \vec{b}$.
The objective function $f(\vec{x})\equiv \vec{c} \cdot \vec{x}$ is specified in terms of its
vector of coefficients $\vec{c}$.
%
We can't use \texttt{linprog} directly, since the linear program we're trying to solve is a
maximization problem $\max g(\vec{x})$.
To work around this mismatch between what we want to do and what the function can do for us,
we can multiply the objective function by a negative sign.
Finding the maximum of $g(\vec{x})$ is equivalent to finding the minimum of $f(\vec{x}) \equiv -g(\vec{x})$,
since $\max g(\vec{x}) = -\min\left(-g(\vec{x})\right) = -\min f(\vec{x})$.
This is the reason why we specified the coefficients of the objective function as $\vec{c} = (-3000, -2000)$.
We must also keep in mind the ``adaptor'' negative sign when interpreting the result:
the minimum value of $f(\vec{x})$ is $-12\,000$, which means the maximum value of $g(\vec{x})$ is $12\,000$.

For more information about the other options you can supply to \texttt{linprog},
run the command \texttt{help(linprog)} from the \texttt{Python} command prompt.
The package \texttt{scipy} supplies a number of other useful optimization functions,
for quadratic programming and for optimizing nonlinear functions.
It's good if you understand the details of the simplex algorithm
(especially if you'll be asked to solve linear programs by hand on your exam),
but make sure you also know how to check your answers using the computer.

Another useful computer tool is Stefan Waner's linear program solver,
which can be accessed at \href{http://bit.ly/linprog_solver}{\texttt{bit.ly/linprog\_solver}}.
% http://www.zweigmedia.com/RealWorld/simplex.html
This solver not only shows you the solution to the linear program,
but also displays the intermediate tableaus used to obtain the solution.
This is very useful if you want to check your work.
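\noindent
One more \texttt{linprog} detail worth knowing (an aside of mine, not covered above): \texttt{linprog} treats all decision variables as nonnegative by default---the default bounds are \texttt{(0, None)}---so we could have dropped the two rows that encode $-x \leq 0$ and $-y \leq 0$ and obtained the same answer:
\begin{verbatim}
>>> linprog([-3000, -2000],
...         A_ub=[[1, 0], [0, 1], [2, 1], [1, 1]],
...         b_ub=[3, 4, 7, 5])   # same optimum: x = [2, 3], fun = -12000
\end{verbatim}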

\subsection{Duality}

The notion of \emph{duality} is a deep mathematical connection between a given linear program
and another related linear program called the \emph{dual} program.
We'll now briefly discuss the general idea of duality.
I want to expose you to the idea mainly because it gives us a new way of looking at linear programming.

\subsubsection{Shadow prices}

Instead of thinking of the problem in terms of the production rates $x$ and~$y$
and the slack variables $\{ s_i \}$,
we can think of the problem in terms of the \emph{shadow prices} (also known as \emph{opportunity costs})
associated with each of the resources the company uses.
The opportunity cost is an economics concept that describes the profits your company is forgoing
by virtue of choosing to use a given resource in production.
We treat these as ``costs'' because choosing to use your production facilities
means you won't be renting them out, and thus you're missing out on potential profits.
If your company chooses not to use a given resource,
it could instead sell this resource to the market---as part of an outsourcing deal, for example.

The assumption is that a market exists where each type of resource (production facility, labour, and logistics) can be bought or sold.
The \emph{shadow prices} represent the price of each resource in such a market.
The company has the opportunity to rent out its production facilities instead of using them.
Alternatively, the company could increase its production by renting the necessary facilities from the market.
We'll use the variables $\lambda_1, \lambda_2, \lambda_3$, and $\lambda_4$ to describe the shadow prices:

\begin{itemize}
\item $\lambda_1$: the price of one unit of the Xapper-machine-producing facilities
\item $\lambda_2$: the price of one unit of the Yapper-machine-producing facilities
\item $\lambda_3$: the price of one unit of human labour
\item $\lambda_4$: the price of using one shipping dock
\end{itemize}

\noindent
Note these prices are not imposed on the company by the market but are directly tied to the company's production operations,
and the company is required to buy or sell at the ``honest'' price,
after choosing to maximize its profit.
Being ``honest'' means the company should agree to sell resource $i$ at the same price $\lambda_i$
at which it would be willing to buy resource $i$ to increase its revenue.
This implies that the shadow price associated with each non-active resource constraint is zero.
A zero shadow price for a given resource means the company has an excess amount of this resource
and is not interested (at all) in procuring more of it.
Alternatively,
we can say a zero shadow price means the company is willing to give away this resource
to be used by other companies for free,
because the resource is not critical for the company to achieve its optimal revenue.
This is a weird leftie share-selflessly-with-your-fellow-companies market---I know---but bear with me for the moment.


Recall the maximum revenue for the Xapper--Yapper production problem is $\$12\,000$,
and is obtained for the production rates $x=2$ and $y=3$.
Recall that the maximum revenue for the Xapper-Yapper production problem is $\$12\,000$,
obtained at the production rates $x=2$ and $y=3$.
The constraints that are active at the optimal solution are the human resources constraint $2x + y \leq 7$
(there are seven employees, and it takes two employees to produce each Xapper machine and one employee to produce each Yapper machine)
and the availability of shipping docks $x + y \leq 5$ (only five shipping docks are available).

When solving the linear program using the tableau method,
the optimal shadow prices for the constraints of the problem appear as the coefficients in the last row of the final tableau.
Since the first two constraints are not active, their shadow prices are zero: $\lambda_1 = 0$ and $\lambda_2 = 0$.
The shadow price of contracting out an extra employee is $\lambda_3 = \$1000$,
and the shadow price of renting out an extra shipping dock is $\lambda_4 = \$1000$.
%
If, instead of using them yourself, you contract out one of your employees or rent out one of your shipping docks,
the company's revenue will drop by $\$1000$.
Conversely, you can think of the shadow price $\$1000$ as the amount you would be willing to
pay to relax a given constraint on your production problem.

Using the definition of the shadow prices,
we can describe the opportunity cost of producing each type of machine:
\begin{itemize}
\item 	$\lambda_1+2\lambda_3+\lambda_4$: the opportunity cost of producing one Xapper machine
\item	$\lambda_2+\lambda_3+\lambda_4$: the opportunity cost of producing one Yapper machine
\end{itemize}
Since $\lambda_1=0$ and $\lambda_2=0$ at the optimal solution of the production problem,
the opportunity cost of increasing the Xapper production by one unit is $0+2\lambda_3+\lambda_4 = 2(1000) + 1000 = 3000$,
and the opportunity cost of producing an extra Yapper machine is $0+\lambda_3+\lambda_4 = 1000 + 1000 = 2000$.

Observe that, at the optimal solution,
the opportunity cost of producing each type of machine equals the revenue generated when the machine is produced.
% If we value the company's total resources at the shadow prices,
% we find their value is exactly equal to the optimal value of the objective function of the firm's decision problem.

\subsubsection{The dual problem}

Imagine that your Xapper and Yapper company wants to increase its production rate by renting resources from the market.
The \emph{dual problem} is the task of finding the shadow prices $\{\lambda_i\}$ that minimize the production costs.
The original problem involves a maximization of the profit function,
while the dual problem involves a minimization of the cost function:

{\footnotesize
\[
\begin{array}{rl}
{\displaystyle \max_{x,y}} \ 3000x \, + \, 2000y, & \\[1mm]
\textrm{subject to: } \ \ 
x \ &\leq \ 3 \\
y \ &\leq \ 4 \\
2x + y \ &\leq \ 7 \\
x + y \ &\leq \ 5 \\
x \geq 0, \ \ y \geq 0 &
\end{array}
\quad
\Leftrightarrow
\quad
\begin{array}{rl}
{\displaystyle \min_{\lambda_1,\lambda_2,\lambda_3,\lambda_4}} \ 3\lambda_1 \, + \, 4\lambda_2 \, + \, 7\lambda_3 \, + \, 5\lambda_4, & \\[3mm]
\textrm{subject to: } \ \ 
\lambda_1 + 2\lambda_3 + \lambda_4 \ &\geq \ 3000 \\
\lambda_2 + \lambda_3 + \lambda_4 \ &\geq \ 2000 \\
\lambda_1 \geq 0, \ \ \lambda_2 \geq 0, \ \ \lambda_3 \geq 0, \ \ \lambda_4 \geq 0. &
\end{array}
\]}%
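
We can check the dual program numerically with the same \texttt{linprog} function.
This is a minimal sketch: since \texttt{linprog} expects $\leq$ constraints,
we multiply the two $\geq$ constraints by $-1$,
and the expected output in the comments assumes the shadow prices we read off the final tableau above:

\begin{verbatim}
from scipy.optimize import linprog

# Dual problem: minimize 3*l1 + 4*l2 + 7*l3 + 5*l4.
# The dual objective coefficients are the primal constants b.
b = [3, 4, 7, 5]
# Flip each >= constraint to <= by negating both sides:
#   l1 + 2*l3 + l4 >= 3000   becomes   -l1 - 2*l3 - l4 <= -3000
#   l2 +   l3 + l4 >= 2000   becomes   -l2 -   l3 - l4 <= -2000
A_dual = [[-1,  0, -2, -1],
          [ 0, -1, -1, -1]]
rhs = [-3000, -2000]
result = linprog(b, A_ub=A_dual, b_ub=rhs)  # bounds default to l_i >= 0
print(result.x)      # expected: [0. 0. 1000. 1000.]  (the shadow prices)
print(result.fun)    # expected: 12000.0  (same as the primal maximum)
\end{verbatim}

\noindent
The dual minimum matches the primal maximum;
this is no accident, as the general formulation below explains.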

\noindent
Performing the minimization in the dual problem corresponds to the company following an ``honest'' but ``greedy'' strategy.
The constraints in the dual problem describe the ``honesty'' requirement:
the company is willing to forgo producing each type of machine
and instead sell the resources, if it is offered prices that exceed the profit it would obtain from producing the machine.
We assume the company starts with $b_i$ units of the $i$\textsuperscript{th} resource,
and that it may buy or sell this resource at a price determined by an external market.

\subsubsection{General formulation}

Consider the $n$-dimensional linear programming problem of maximizing $g(\vec{x}) \equiv \vec{c} \cdot \vec{x} \equiv \vec{c}^\sfT \vec{x}$,
subject to $\vec{x} \geq \vec{0}$.
The vector of constants $\vec{c} \in \mathbb{R}^n$ contains the coefficients of the objective function.
The constraints of the problem can be expressed as a matrix inequality $A\vec{x} \leq \vec{b}$,
where the matrix $A \in \mathbb{R}^{m\times n}$ contains the coefficients of the $m$ constraints,
and the vector $\vec{b} \in \mathbb{R}^m$ contains the constants of the constraint equations.
We call this the \emph{primal} problem.

To every primal problem $\max_{\vec{x}} \vec{c}^\sfT \vec{x}$ subject to $A\vec{x} \leq \vec{b}$ and $\vec{x} \geq \vec{0}$,
there corresponds a \emph{dual} optimization problem, which is also a linear program:
\[
\begin{array}{rl}
\displaystyle \max_{\vec{x}} \ \vec{c} \cdot \vec{x} & \\[3mm]
\textrm{subject to } \ \ 
A\vec{x} \ \leq \ \vec{b}, & \ \ \vec{x} \geq \vec{0}
\end{array}
\qquad
\Leftrightarrow
\qquad
\begin{array}{rl}
\displaystyle \min_{\vec{\lambda}} \ \vec{b} \cdot \vec{\lambda} & \\[3mm]
\textrm{subject to } \ \ 
\vec{\lambda}^\sfT \!A \ \geq \ \vec{c}^{\,\sfT}, & \ \ \vec{\lambda} \geq \vec{0}.
\end{array}
\]
Note the dual problem involves a minimization,
and its constraint region is obtained from the transpose $A^\sfT$ and greater-than-or-equal inequalities.

The variables of the dual problem $\vec{\lambda}=(\lambda_1, \lambda_2, \ldots, \lambda_m)$
correspond to the coefficients of a linear combination of the $m$ constraint equations.
As a mathematical convention,
when we discuss linear combinations of constraint equations in optimization problems,
we call the coefficients of the linear combination \emph{Lagrange multipliers} and denote them $\lambda_i$ (the Greek letter \emph{lambda}).
The $m$ Lagrange multipliers $\lambda_i$ are complementary to the $m$ slack variables $s_i$.
The slack variable $s_i$ is nonzero when the $i$\textsuperscript{th} constraint is loose,
and zero when the constraint is tight.
Conversely, the Lagrange multiplier $\lambda_i$ is zero when the $i$\textsuperscript{th} constraint is loose,
and nonzero when the constraint is tight.
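
For concreteness, here is the Xapper-Yapper problem written in this general notation
($n=2$ variables and $m=4$ resource constraints):
\[
\vec{c} = \begin{bmatrix} 3000 \\ 2000 \end{bmatrix},
\qquad
A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 2 & 1 \\ 1 & 1 \end{bmatrix},
\qquad
\vec{b} = \begin{bmatrix} 3 \\ 4 \\ 7 \\ 5 \end{bmatrix}.
\]
The dual-feasibility condition $\vec{\lambda}^\sfT\!A \geq \vec{c}^{\,\sfT}$ then expands to the two inequalities
$\lambda_1 + 2\lambda_3 + \lambda_4 \geq 3000$ and $\lambda_2 + \lambda_3 + \lambda_4 \geq 2000$,
which are exactly the constraints of the dual problem we wrote earlier.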

We can understand the dual problem as the search for the smallest value of a linear combination
of the constraint equations that upper-bounds the objective function,
which requires the condition
\[
\vec{c}^{\,\sfT} \ \leq \ \vec{\lambda}^\sfT \!A.
\]
The linear combinations $\vec{\lambda} \geq \vec{0}$ that satisfy this condition are called \emph{dual-feasible}.

Using the transitivity property of inequalities,
we'll now show the function $h(\vec{\lambda}) \equiv \vec{\lambda} \cdot \vec{b}$
is an \emph{upper bound} on the objective function $g(\vec{x}) \equiv \vec{c} \cdot \vec{x}$ of the primal problem.
The following chain of inequalities is true for all primal-feasible $\vec{x}$ and dual-feasible $\vec{\lambda}$:
\[
g(\vec{x})
= \vec{c}^\sfT \vec{x}
\leq \left(\vec{\lambda}^\sfT \!A\right) \vec{x}
\leq \vec{\lambda}^\sfT \vec{b}
= h(\vec{\lambda}).
\]
The first equality follows from the definition of the objective function $g(\vec{x}) = \vec{c} \cdot \vec{x}$.
The first inequality follows from the assumption that $\vec{\lambda}$ is dual-feasible
($\vec{c}^{\,\sfT} \leq \vec{\lambda}^\sfT \!A$), combined with $\vec{x} \geq \vec{0}$.
The second inequality is true because $\vec{x}$ is primal-feasible ($A\vec{x}\leq\vec{b}$) and $\vec{\lambda} \geq \vec{0}$.
The last equality follows from the definition of $h(\vec{\lambda}) = \vec{\lambda} \cdot \vec{b}$.
For example, $\vec{x}=(1,1)$ is primal-feasible with $g(1,1)=5000$,
and $\vec{\lambda}=(0,0,1500,500)$ is dual-feasible with $h(\vec{\lambda})=13\,000$;
indeed $5000 \leq 13\,000$.
%
This chain of inequalities is known as the \thrmname{weak duality theorem}.
It implies that if we find a primal-feasible $\vec{x}^*$ and a dual-feasible $\vec{\lambda}^*$ such that
$\vec{c} \cdot \vec{x}^* = \vec{\lambda}^* \cdot \vec{b}$,
then
\[
g(\vec{x}^*) \ = \ h(\vec{\lambda}^*).
\]
Since $g(\vec{x}^*)$ is equal to its upper bound,
the point $\vec{x}^*$ must be where $g(\vec{x})$ attains its maximum:
$\displaystyle \vec{x}^* = \mathop{\textrm{argmax}}_{\vec{x}} g(\vec{x})$.
Similarly, since $g(\vec{x}) \leq h(\vec{\lambda})$ for all feasible $\vec{x}$ and $\vec{\lambda}$,
and since $g(\vec{x}^*)$ is the maximum value of the primal,
it follows that $\displaystyle \vec{\lambda}^* = \mathop{\textrm{argmin}}_{\vec{\lambda}} h(\vec{\lambda})$.

The dual problem is the search for the minimum value of $h(\vec{\lambda})$.
Studying the dual of a linear programming problem can help us characterize the solution of the primal problem,
even before we have found the actual solution.
Switching from searching for the maximum of the function $g(\vec{x})$ to searching for the minimum of $h(\vec{\lambda})$
sometimes makes the problem easier to solve.

\subsubsection{Complementary slackness conditions}

If a primal variable is positive, its corresponding (complementary) dual constraint holds with equality.
If a dual constraint holds with strict inequality, then the corresponding (complementary) primal variable must be zero.
The shadow price $\lambda_i$ is the instantaneous change, per unit of the constraint, in the objective value of the optimal solution
of an optimization problem obtained by relaxing the constraint.
In other words, it is the marginal utility of relaxing the constraint,
or, equivalently, the marginal cost of strengthening the constraint.

More formally, the shadow price is the value of the Lagrange multiplier at the optimal solution:
the infinitesimal change in the objective function arising from an infinitesimal change in the constraint.
This follows from the fact that, at the optimal solution, the gradient of the objective function
is a linear combination of the constraint function gradients, with weights equal to the Lagrange multipliers.
In the Xapper-Yapper problem, the optimal slacks are $\vec{s}=(1,1,0,0)$ and the optimal shadow prices are
$\vec{\lambda}=(0,0,1000,1000)$, so $\lambda_i s_i = 0$ for every constraint $i$.
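
As a numerical sanity check, here is a short sketch that verifies the complementary slackness
condition and the matching primal and dual objective values,
using the optimal $\vec{x}$ and $\vec{\lambda}$ found above:

\begin{verbatim}
import numpy as np

A = np.array([[1, 0],
              [0, 1],
              [2, 1],
              [1, 1]])
b = np.array([3, 4, 7, 5])
c = np.array([3000, 2000])
x_star = np.array([2, 3])                  # optimal production rates
lam_star = np.array([0, 0, 1000, 1000])    # optimal shadow prices

s = b - A @ x_star
print(s)              # [1 1 0 0]   the slack in each constraint
print(lam_star * s)   # [0 0 0 0]   complementary slackness holds
print(c @ x_star)     # 12000       primal objective g(x*)
print(lam_star @ b)   # 12000       dual objective h(lambda*)
\end{verbatim}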

\paragraph{Quadratic programming}

When the objective function is quadratic and the constraints are linear functions,
the optimization problem is called \emph{quadratic programming}.
A typical quadratic program asks for the minimum of $\frac{1}{2}\vec{x}^\sfT Q \vec{x} + \vec{c}^\sfT \vec{x}$,
subject to $A\vec{x} \leq \vec{b}$, where $Q$ is a symmetric matrix.

\subsection{Practice problems}

\input{problems/problems.tex}

\subsection{Links}

\noindent
[ Linear programming and the Simplex algorithm ] \\
\href{http://en.wikipedia.org/wiki/Linear_programming}{\texttt{http://en.wikipedia.org/wiki/Linear\_programming}} \\
\href{http://en.wikipedia.org/wiki/Simplex_algorithm}{\texttt{http://en.wikipedia.org/wiki/Simplex\_algorithm}}

\medskip
\noindent
[ Simplex method demonstration by patrickJMT on YouTube ] \\
\href{https://youtu.be/yL7JByLlfrw}{\texttt{youtu.be/yL7JByLlfrw}} (part 1), \\
\href{https://youtu.be/vVzjXpwW2xI}{\texttt{youtu.be/vVzjXpwW2xI}} (part 2), \\
\href{https://youtu.be/lPm46c1pfvQ}{\texttt{youtu.be/lPm46c1pfvQ}} (part 3), \\
\href{https://youtu.be/WeK4JjNLSgw}{\texttt{youtu.be/WeK4JjNLSgw}} (part 4).

\medskip
\noindent
[ More details about duality and different types of constraints ] \\
\href{http://cstheory.stackexchange.com/a/16229/7273}{\texttt{http://cstheory.stackexchange.com/a/16229/7273}}

\medskip
\noindent
[ An application of linear programming in game theory ] \\
\href{https://web.archive.org/web/20140309060321/http://blog.alabidan.me/?p=240}{\texttt{https://web.archive.org/web/20140309060321/http://blog.alabidan.me/?p=240}}

\medskip
\noindent
[ Linear programming course notes ] \\
\href{http://www.math.washington.edu/~burke/crs/407/lectures/}{\texttt{http://www.math.washington.edu/\textasciitilde{}burke/crs/407/lectures/}}

\medskip
\noindent
[ Nice visual explanation of linear programming by Tom\'a\v{s} Sl\'ama ] \\
\href{https://www.youtube.com/watch?v=E72DWgKP_1Y}{\texttt{https://www.youtube.com/watch?v=E72DWgKP\char`_1Y}}

\end{document}