├── .gitignore
├── 00_math
├── README.md
├── week1 introduction.pdf
├── week2 Applied LA.pdf
├── week3 Multi_calculus.pdf
├── week4 Multi_optimization.pdf
├── 추가자료1_확률기초 .pdf
├── 추가자료2_확률분포 .pdf
├── 딥러닝과 수학_1강_LA.pdf
├── 딥러닝과 수학_2강_Applied_LA.pdf
├── 딥러닝과 수학_3강_calculus.pdf
├── 딥러닝과 수학_4강_optimization.pdf
├── 딥러닝과 수학_5강_확률통계.pdf
└── 딥러닝과 수학_6강_확률통계.pdf
├── 01_programming
├── .ipynb_checkpoints
│ └── [Week1] How to be pythonic-checkpoint.ipynb
├── Assign_problems.ipynb
├── Assign_solutions.ipynb
├── Groupby_problems.ipynb
├── Groupby_solutions.ipynb
├── Indexing_problems.ipynb
├── Indexing_solutions.ipynb
├── Pandas First Lecture.pdf
├── README.md
├── [Week1] How to be pythonic.ipynb
├── [Week1] Python Advanced Concepts.ipynb
├── [Week1] Python Introduction.ipynb
├── [Week2] Python Data Science Environment - IPython, Jupyter, and Conda.ipynb
├── [Week3] NumPy.ipynb
├── [Week4] 머신 러닝 프로젝트 Workflow.ipynb
├── [Week4] 분류 성능 평가 지표 (Classfication Metrics).ipynb
├── [Week5] 나무 기반 모델(Tree-Based Models).ipynb
├── [Week5] 단순 선형 회귀 (Simple linear regression).ipynb
├── [Week5] 로지스틱 회귀 (Logistic Regression).ipynb
├── [Week6] XGBoost Practice.ipynb
├── [Week6] 클러스터링 (Clustering).ipynb
├── image
│ ├── built-in functions.png
│ ├── if else example.png
│ └── integrated example.png
├── movies.dat
├── ratings.csv
└── users.dat
├── 02_deep_learning101
├── README.md
├── codes
│ ├── 1day
│ │ ├── prac0_score_function.py
│ │ └── prac1_softmax_loss.py
│ ├── 2day
│ │ ├── prac2_simple_two_Layer_net.py
│ │ ├── prac2_train_neuralnet.py
│ │ └── prac3_mlp_mnist.py
│ ├── 3day
│ │ └── prac3_mlp_mnist.py
│ ├── common
│ │ ├── __init__.py
│ │ ├── functions.py
│ │ ├── gradient.py
│ │ ├── layers.py
│ │ ├── multi_layer_net.py
│ │ ├── multi_layer_net_extend.py
│ │ ├── optimizer.py
│ │ ├── trainer.py
│ │ └── util.py
│ └── dataset
│ │ ├── __init__.py
│ │ ├── mnist.pkl
│ │ ├── mnist.py
│ │ ├── t10k-images-idx3-ubyte.gz
│ │ ├── t10k-labels-idx1-ubyte.gz
│ │ ├── train-images-idx3-ubyte.gz
│ │ └── train-labels-idx1-ubyte.gz
└── lectureNotes
│ ├── 01_lossOptmization.pdf
│ ├── 02_Optimization.pdf
│ ├── 03_ConvolutionalNN.pdf
│ ├── 04_TrainingNeuralNetworks.pdf
│ ├── 06_RNN_LSTM.pdf
│ └── 07_wrapUp.pdf
├── 03_deep_image_processing
└── README.md
├── 04_deep_reinforcement_learning
└── README.md
├── 05_tools
├── README.md
├── git.pdf
├── jekyll-remote-theme 사용하기.markdown
├── slack.pdf
├── tensorflow_codes
│ ├── code00_tf.usage.ipynb
│ ├── code01_TF.ipynb
│ ├── code02_tensorboard.ipynb
│ ├── code03_TF_dimension.ipynb
│ ├── code04_tf.Variable.ipynb
│ ├── code05_tf.placeholder.ipynb
│ ├── code06_tf.train.Saver.ipynb
│ ├── code07_tf.cond.ipynb
│ ├── code08_linear_regression.ipynb
│ ├── code09_linear_regression_3rd_order.ipynb
│ ├── code10_mnist_softmax.ipynb
│ ├── code11_mnist_deep.ipynb
│ ├── code12_mnist_deep_slim.ipynb
│ ├── code13_mnist_slim_options.ipynb
│ ├── code14_mnist_slim_arg_scope.ipynb
│ ├── code15_mnist_summary.ipynb
│ └── code16_mnist_dcgan.ipynb
└── tool lecture tf.pdf
├── 06_latex
├── example
│ ├── example01.tex
│ ├── example02.tex
│ ├── example03.tex
│ ├── example04.tex
│ ├── example05.tex
│ ├── example06.tex
│ ├── example07.tex
│ └── example07_bib.bib
├── figures
│ ├── 1_3_1_whitespace.png
│ ├── 1_3_2_special_character.png
│ ├── 1_3_3_latex_command1.png
│ ├── 1_3_3_latex_command2.png
│ ├── 1_3_3_latex_command3.png
│ ├── 1_3_4_comments.png
│ ├── 2_10_emph1.png
│ ├── 2_10_emph2.png
│ ├── 2_11_1_environment.png
│ ├── 2_11_5_verbatim.png
│ ├── 2_11_6_tabular1.png
│ ├── 2_11_6_tabular2.png
│ ├── 2_11_6_tabular3.png
│ ├── 2_11_6_tabular4.png
│ ├── 2_4_2_dash.png
│ ├── 2_4_3_tilde.png
│ ├── 2_4_8_accent_characters1.png
│ ├── 2_4_8_accent_characters2.png
│ ├── 2_7_sturcture.png
│ ├── 2_8_ref.png
│ ├── 2_9_footnote.png
│ ├── 3_10_math_fonts.png
│ ├── 3_1_1_inline.png
│ ├── 3_1_2_displaymath.png
│ ├── 3_1_3_equation.png
│ ├── 3_1_4_math_mode.png
│ ├── 3_2_binding.png
│ ├── 3_3_10_sum_prod.png
│ ├── 3_3_11_bracket.png
│ ├── 3_3_12_big_bracket.png
│ ├── 3_3_1_greek.png
│ ├── 3_3_2_script.png
│ ├── 3_3_3_sqrt.png
│ ├── 3_3_4_underline.png
│ ├── 3_3_5_underbrace.png
│ ├── 3_3_6_derivative.png
│ ├── 3_3_7_dot_product.png
│ ├── 3_3_8_log.png
│ ├── 3_3_9_frac.png
│ ├── 3_4_quad.png
│ ├── 3_5_1_matrix.png
│ ├── 3_5_2_matrix.png
│ ├── 3_5_3_long_equation.png
│ ├── 3_9_boldsymbol.png
│ └── modu.png
└── latex.tex
├── 07.NLP
└── README.md
├── 08.MEDICAL
└── README.md
├── README.md
└── images
├── RevSlider_modulabs_dlc01.png
├── ilguyi.png
├── photo_park.png
├── researchProject1.png
└── researchProject2.png

/.gitignore:
--------------------------------------------------------------------------------
1 | # always ignore some extensions
2 | *.pyc
3 | *.swp
4 |
5 | # OS or Editor folders
6 | .DS_Store
7 | Thumbs.db
8 |
9 | # Data folders
10 | 05_tools/mnist
11 | 05_tools/tensorflow_codes/graphs
12 | 05_tools/tensorflow_codes/.ipynb_checkpoints/
13 |
--------------------------------------------------------------------------------
/00_math/README.md:
--------------------------------------------------------------------------------
1 | # Math
2 |
3 | Mathematics in Deep Learning
4 |
5 | Week 1: Elementary Linear Algebra
6 |
7 | scalar, vector, matrix, tensor, operations, matrix decompositions
8 |
9 | Week 2: Applied Linear Algebra
10 |
11 | linear transformation, eigenvalue, eigenvector, least squares problem, principal component analysis
12 |
13 | Week 3: Multivariate Functions
14 |
15 | function, limit, derivative, integral (Riemann integral), partial derivatives, gradient, Jacobian and Hessian matrices
16 |
17 | Week 4: Optimization
18 |
19 | optimization problem, boundedness, convexity, first-order optimization, second-order optimization
20 |
21 | Week 5: Probability
22 |
23 | elementary logic, set theory, 𝜎-algebra, measure, Borel set, random variable, moments
24 |
25 | Week 6: Probability and Statistics
26 |
27 | functions of random variables, moment generating function, distributions, stochastic processes, Bayes' theorem
28 |
--------------------------------------------------------------------------------
/00_math/week1 introduction.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/week1 introduction.pdf
--------------------------------------------------------------------------------
/00_math/week2 Applied LA.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/week2 Applied LA.pdf
--------------------------------------------------------------------------------
/00_math/week3 Multi_calculus.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/week3 Multi_calculus.pdf
--------------------------------------------------------------------------------
/00_math/week4 Multi_optimization.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/week4 Multi_optimization.pdf
--------------------------------------------------------------------------------
/00_math/추가자료1_확률기초 .pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/추가자료1_확률기초 .pdf
--------------------------------------------------------------------------------
/00_math/추가자료2_확률분포 .pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/추가자료2_확률분포 .pdf
--------------------------------------------------------------------------------
/00_math/딥러닝과 수학_1강_LA.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/딥러닝과 수학_1강_LA.pdf
--------------------------------------------------------------------------------
/00_math/딥러닝과 수학_2강_Applied_LA.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/딥러닝과 수학_2강_Applied_LA.pdf
--------------------------------------------------------------------------------
/00_math/딥러닝과 수학_3강_calculus.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/딥러닝과 수학_3강_calculus.pdf
--------------------------------------------------------------------------------
/00_math/딥러닝과 수학_4강_optimization.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/딥러닝과 수학_4강_optimization.pdf
--------------------------------------------------------------------------------
/00_math/딥러닝과 수학_5강_확률통계.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/딥러닝과 수학_5강_확률통계.pdf
--------------------------------------------------------------------------------
/00_math/딥러닝과 수학_6강_확률통계.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/00_math/딥러닝과 수학_6강_확률통계.pdf
--------------------------------------------------------------------------------
/01_programming/Assign_problems.ipynb:
--------------------------------------------------------------------------------
1 | {
2 |  "cells": [
3 |   {
4 |    "cell_type": "code",
5 |    "execution_count": null,
6 |    "metadata": {
7 |     "collapsed": true,
8 |     "deletable": false,
9 |     "nbgrader": {
10 |      "checksum": "2de6ff757621c04d4acc98e323bc92a0",
11 |      "grade": false,
12 |      "grade_id": "pandas",
13 |      "locked": true,
14 |      "solution": false
15 |     }
16 |    },
17 |    "outputs": [],
18 |    "source": [
19 |     "import pandas as pd"
20 |    ]
21 |   },
22 |   {
23 |    "cell_type": "code",
24 |    "execution_count": null,
25 |    "metadata": {
26 |     "collapsed": true,
27 |     "deletable": false,
28 |     "nbgrader": {
29 |      "checksum": "1ddd177afba5394e715350574986220b",
30 |      "grade": false,
31 |      "grade_id": "users",
32 |      "locked": true,
33 |      "solution": false
34 |     }
35 |    },
36 |    "outputs": [],
37 |    "source": [
38 |     "unames = ['user_id', 'gender', 'age', 'occupation', 'zip'] \n",
39 |     "users = pd.read_table('users.dat', sep='::', header=None, names=unames, engine='python', index_col='user_id')"
40 |    ]
41 |   },
42 |   {
43 |    "cell_type": "code",
44 |    "execution_count": null,
45 |    "metadata": {
46 |     "collapsed": true,
47 |     "deletable": false,
48 |     "nbgrader": {
49 |      "checksum": "1d0ded1670d5cb6cd725eb2df6ac867c",
50 |      "grade": false,
51 |      "grade_id": "movies",
52 |      "locked": true,
53 |      "solution": false
54 |     }
55 |    },
56 |    "outputs": [],
57 |    "source": [
58 |     "mnames = ['movie_id', 'title', 'genres']\n",
59 |     "movies = pd.read_table('movies.dat', sep='::', header=None, names=mnames, engine='python', index_col='movie_id')"
60 |    ]
61 |   },
62 |   {
63 |    "cell_type": "code",
64 |    "execution_count": null,
65 |    "metadata": {
66 |     "collapsed": true,
67 |     "deletable": false,
68 |     "nbgrader": {
69 |      "checksum": "e666eefc8c078e19f23ed7911f66e9a5",
70 |      "grade": false,
71 |      "grade_id": "ratings",
72 |      "locked": true,
73 |      "solution": false
74 |     }
75 |    },
76 |    "outputs": [],
77 |    "source": [
78 |     "ratings = pd.read_table('ratings.csv', sep=',', index_col='id')"
79 |    ]
80 |   },
81 |   {
82 |    "cell_type": "markdown",
83 |    "metadata": {},
84 |    "source": [
85 |     "# assign"
86 |    ]
87 |   },
88 |   {
89 |    "cell_type": "markdown",
90 |    "metadata": {},
91 |    "source": [
92 |     "assign adds a new column without touching the original data. \n",
93 |     "It is usually written as assign(column_name = lambda function). \n",
94 |     "The lambda receives the whole DataFrame as its argument; do whatever you like with it, as long as you end up with a single column."
95 |    ]
96 |   },
97 |   {
98 |    "cell_type": "code",
99 |    "execution_count": null,
100 |    "metadata": {
101 |     "collapsed": false
102 |    },
103 |    "outputs": [],
104 |    "source": [
105 |     "movies.assign(year = lambda x: x['title'].map(lambda x: x[-5:-1])).head()"
106 |    ]
107 |   },
108 |   {
109 |    "cell_type": "markdown",
110 |    "metadata": {},
111 |    "source": [
112 |     "With method chaining you can apply it several times."
113 |    ]
114 |   },
115 |   {
116 |    "cell_type": "code",
117 |    "execution_count": null,
118 |    "metadata": {
119 |     "collapsed": false
120 |    },
121 |    "outputs": [],
122 |    "source": [
123 |     "(movies\n",
124 |     " .assign(year = lambda x: x['title'].map(lambda x: x[-5:-1]))\n",
125 |     " .assign(num_genres = lambda x: x['genres'].map(lambda x: len(x.split('|'))))\n",
126 |     " ).head()"
127 |    ]
128 |   },
129 |   {
130 |    "cell_type": "markdown",
131 |    "metadata": {},
132 |    "source": [
133 |     "Note that movies has not changed at all."
134 |    ]
135 |   },
136 |   {
137 |    "cell_type": "code",
138 |    "execution_count": null,
139 |    "metadata": {
140 |     "collapsed": false
141 |    },
142 |    "outputs": [],
143 |    "source": [
144 |     "movies.head()"
145 |    ]
146 |   },
147 |   {
148 |    "cell_type": "markdown",
149 |    "metadata": {},
150 |    "source": [
151 |     "It is a good idea to store each meaningful step in its own variable."
152 |    ]
153 |   },
154 |   {
155 |    "cell_type": "code",
156 |    "execution_count": null,
157 |    "metadata": {
158 |     "collapsed": false
159 |    },
160 |    "outputs": [],
161 |    "source": [
162 |     "movies_added = (movies\n",
163 |     " .assign(year = lambda x: x['title'].map(lambda x: x[-5:-1]))\n",
164 |     " .assign(num_genres = lambda x: x['genres'].map(lambda x: len(x.split('|'))))\n",
165 |     " )\n",
166 |     "movies_added.head()"
167 |    ]
168 |   },
169 |   {
170 |    "cell_type": "markdown",
171 |    "metadata": {},
172 |    "source": [
173 |     "If you reuse an existing column name, that column is overwritten. This does change the data, so do it only when you really need to."
174 |    ]
175 |   },
176 |   {
177 |    "cell_type": "markdown",
178 |    "metadata": {},
179 |    "source": [
180 |     "Let's convert year in movies_added from string to int."
181 |    ]
182 |   },
183 |   {
184 |    "cell_type": "code",
185 |    "execution_count": null,
186 |    "metadata": {
187 |     "collapsed": false
188 |    },
189 |    "outputs": [],
190 |    "source": [
191 |     "movies_added = (movies_added\n",
192 |     " .assign(year = lambda x: x['year'].astype(int)))\n",
193 |     "movies_added.head()"
194 |    ]
195 |   },
196 |   {
197 |    "cell_type": "markdown",
198 |    "metadata": {},
199 |    "source": [
200 |     "Now we can compute the median of year."
201 |    ]
202 |   },
203 |   {
204 |    "cell_type": "code",
205 |    "execution_count": null,
206 |    "metadata": {
207 |     "collapsed": false
208 |    },
209 |    "outputs": [],
210 |    "source": [
211 |     "movies_added['year'].median()"
212 |    ]
213 |   },
214 |   {
215 |    "cell_type": "markdown",
216 |    "metadata": {},
217 |    "source": [
218 |     "# Q"
219 |    ]
220 |   },
221 |   {
222 |    "cell_type": "markdown",
223 |    "metadata": {},
224 |    "source": [
225 |     "Q. In ratings, rescale the rating column from the 1-5 scale to a 0-100 scale and store the result in a column named 'rating_100'.\n",
226 |     "> hint : (x.rating - 1) * 25"
227 |    ]
228 |   },
229 |   {
230 |    "cell_type": "code",
231 |    "execution_count": null,
232 |    "metadata": {
233 |     "collapsed": false,
234 |     "deletable": false,
235 |     "nbgrader": {
236 |      "checksum": "5aad315dd8c0d451bb74d33ad2420a57",
237 |      "grade": false,
238 |      "grade_id": "A0",
239 |      "locked": false,
240 |      "solution": true
241 |     }
242 |    },
243 |    "outputs": [],
244 |    "source": [
245 |     "# A0 = \n",
246 |     "# YOUR CODE HERE\n",
247 |     "raise NotImplementedError()"
248 |    ]
249 |   },
250 |   {
251 |    "cell_type": "code",
252 |    "execution_count": null,
253 |    "metadata": {
254 |     "collapsed": false,
255 |     "deletable": false,
256 |     "nbgrader": {
257 |      "checksum": "134a22c15f4182ee6cd3ef0cb167158a",
258 |      "grade": true,
259 |      "grade_id": "A0_test",
260 |      "locked": true,
261 |      "points": 1,
262 |      "solution": false
263 |     }
264 |    },
265 |    "outputs": [],
266 |    "source": [
267 |     "assert 'rating_100' in A0.columns\n",
268 |     "assert A0.rating_100.max() == 100\n",
269 |     "assert A0.rating_100.min() == 0\n",
270 |     "assert A0.rating_100.mean() == 64.6"
271 |    ]
272 |   }
273 |  ],
274 |  "metadata": {
275 |   "anaconda-cloud": {},
276 |   "kernelspec": {
277 |    "display_name": "Python [Root]",
278 |    "language": "python",
279 |    "name": "Python [Root]"
280 |   },
281 |   "language_info": {
282 |    "codemirror_mode": {
283 |     "name": "ipython",
284 |     "version": 3
285 |    },
286 |    "file_extension": ".py",
287 |    "mimetype": "text/x-python",
288 |    "name": "python",
289 |    "nbconvert_exporter": "python",
290 |    "pygments_lexer": "ipython3",
291 |    "version": "3.5.2"
292 |   }
293 |  },
294 |  "nbformat": 4,
295 |  "nbformat_minor": 0
296 | }
297 |
--------------------------------------------------------------------------------
/01_programming/Groupby_problems.ipynb:
--------------------------------------------------------------------------------
1 | {
2 |  "cells": [
3 |   {
4 |    "cell_type": "code",
5 |
"execution_count": null, 6 | "metadata": { 7 | "collapsed": true, 8 | "deletable": false, 9 | "nbgrader": { 10 | "checksum": "2de6ff757621c04d4acc98e323bc92a0", 11 | "grade": false, 12 | "grade_id": "pandas", 13 | "locked": true, 14 | "solution": false 15 | } 16 | }, 17 | "outputs": [], 18 | "source": [ 19 | "import pandas as pd" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": { 26 | "collapsed": true, 27 | "deletable": false, 28 | "nbgrader": { 29 | "checksum": "1ddd177afba5394e715350574986220b", 30 | "grade": false, 31 | "grade_id": "users", 32 | "locked": true, 33 | "solution": false 34 | } 35 | }, 36 | "outputs": [], 37 | "source": [ 38 | "unames = ['user_id', 'gender', 'age', 'occupation', 'zip'] \n", 39 | "users = pd.read_table('users.dat', sep='::', header=None, names=unames, engine='python', index_col='user_id')" 40 | ] 41 | }, 42 | { 43 | "cell_type": "code", 44 | "execution_count": null, 45 | "metadata": { 46 | "collapsed": true, 47 | "deletable": false, 48 | "nbgrader": { 49 | "checksum": "1d0ded1670d5cb6cd725eb2df6ac867c", 50 | "grade": false, 51 | "grade_id": "movies", 52 | "locked": true, 53 | "solution": false 54 | } 55 | }, 56 | "outputs": [], 57 | "source": [ 58 | "mnames = ['movie_id', 'title', 'genres']\n", 59 | "movies = pd.read_table('movies.dat', sep='::', header=None, names=mnames, engine='python', index_col='movie_id')" 60 | ] 61 | }, 62 | { 63 | "cell_type": "code", 64 | "execution_count": null, 65 | "metadata": { 66 | "collapsed": true, 67 | "deletable": false, 68 | "nbgrader": { 69 | "checksum": "e666eefc8c078e19f23ed7911f66e9a5", 70 | "grade": false, 71 | "grade_id": "ratings", 72 | "locked": true, 73 | "solution": false 74 | } 75 | }, 76 | "outputs": [], 77 | "source": [ 78 | "ratings = pd.read_table('ratings.csv', sep=',', index_col='id')" 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "metadata": {}, 84 | "source": [ 85 | "---" 86 | ] 87 | }, 88 | { 89 | "cell_type": "markdown", 90 | "metadata": {}, 91 | "source": [ 92 | "# Groupby" 93 | ] 94 | }, 95 | { 96 | "cell_type": "markdown", 97 | "metadata": {}, 98 | "source": [ 99 | "Q. ratings를 movie 별로 묶어서 각 영화마다 ratings의 평균을 구해보세요. 
" 100 | ] 101 | }, 102 | { 103 | "cell_type": "code", 104 | "execution_count": null, 105 | "metadata": { 106 | "collapsed": false, 107 | "deletable": false, 108 | "nbgrader": { 109 | "checksum": "5aad315dd8c0d451bb74d33ad2420a57", 110 | "grade": false, 111 | "grade_id": "A0", 112 | "locked": false, 113 | "solution": true 114 | }, 115 | "scrolled": true 116 | }, 117 | "outputs": [], 118 | "source": [ 119 | "# A0 = \n", 120 | "# YOUR CODE HERE\n", 121 | "raise NotImplementedError()" 122 | ] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "metadata": {}, 127 | "source": [ 128 | "```\n", 129 | "movie\n", 130 | "1 4.333333\n", 131 | "2 2.333333\n", 132 | "3 4.000000\n", 133 | "5 4.000000\n", 134 | "6 3.500000\n", 135 | "Name: rating, dtype: float64\n", 136 | "```" 137 | ] 138 | }, 139 | { 140 | "cell_type": "code", 141 | "execution_count": null, 142 | "metadata": { 143 | "collapsed": true, 144 | "deletable": false, 145 | "nbgrader": { 146 | "checksum": "809403cfda9c05f569c6fcf99f109866", 147 | "grade": true, 148 | "grade_id": "A0_test", 149 | "locked": true, 150 | "points": 1, 151 | "solution": false 152 | } 153 | }, 154 | "outputs": [], 155 | "source": [ 156 | "assert len(A0) == 1770\n", 157 | "assert A0.name == 'rating'\n", 158 | "assert round(A0.mean(), 2) == 3.42" 159 | ] 160 | }, 161 | { 162 | "cell_type": "markdown", 163 | "metadata": {}, 164 | "source": [ 165 | "Q. ratings를 user별로 묶어서 각 유저 별 평점 평균을 구해보세요." 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": null, 171 | "metadata": { 172 | "collapsed": false, 173 | "deletable": false, 174 | "nbgrader": { 175 | "checksum": "06a2cccd1b8104d10227f4af0432c63a", 176 | "grade": false, 177 | "grade_id": "A1", 178 | "locked": false, 179 | "solution": true 180 | } 181 | }, 182 | "outputs": [], 183 | "source": [ 184 | "# A1 = \n", 185 | "# YOUR CODE HERE\n", 186 | "raise NotImplementedError()" 187 | ] 188 | }, 189 | { 190 | "cell_type": "markdown", 191 | "metadata": {}, 192 | "source": [ 193 | "```\n", 194 | "user\n", 195 | "2783 3.0\n", 196 | "2785 3.0\n", 197 | "2786 4.0\n", 198 | "2788 4.0\n", 199 | "2790 4.0\n", 200 | "Name: rating, dtype: float64\n", 201 | "```" 202 | ] 203 | }, 204 | { 205 | "cell_type": "code", 206 | "execution_count": null, 207 | "metadata": { 208 | "collapsed": true, 209 | "deletable": false, 210 | "nbgrader": { 211 | "checksum": "b71ef05bdf9b380c34cab10adde9ac49", 212 | "grade": true, 213 | "grade_id": "A1_test", 214 | "locked": true, 215 | "points": 1, 216 | "solution": false 217 | } 218 | }, 219 | "outputs": [], 220 | "source": [ 221 | "assert len(A1) == 1987\n", 222 | "assert A1.index.name == 'user'\n", 223 | "assert round(A1.mean(), 2) == 3.66" 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "metadata": {}, 229 | "source": [ 230 | "Q. ratings에서 각 rating 점수 별로 몇 번 평가가 이루어졌는지 구해보세요." 
231 | ] 232 | }, 233 | { 234 | "cell_type": "code", 235 | "execution_count": null, 236 | "metadata": { 237 | "collapsed": false, 238 | "deletable": false, 239 | "nbgrader": { 240 | "checksum": "e983b20dfdb0cc6562f21db6dd5dbffc", 241 | "grade": false, 242 | "grade_id": "A2", 243 | "locked": false, 244 | "solution": true 245 | }, 246 | "scrolled": true 247 | }, 248 | "outputs": [], 249 | "source": [ 250 | "# A2 = \n", 251 | "# YOUR CODE HERE\n", 252 | "raise NotImplementedError()" 253 | ] 254 | }, 255 | { 256 | "cell_type": "markdown", 257 | "metadata": {}, 258 | "source": [ 259 | "```\n", 260 | "rating\n", 261 | "1 305\n", 262 | "2 487\n", 263 | "3 1321\n", 264 | "4 1757\n", 265 | "5 1130\n", 266 | "dtype: int64\n", 267 | "```" 268 | ] 269 | }, 270 | { 271 | "cell_type": "code", 272 | "execution_count": null, 273 | "metadata": { 274 | "collapsed": true, 275 | "deletable": false, 276 | "nbgrader": { 277 | "checksum": "58839b946935ac9a3f88426369a577aa", 278 | "grade": true, 279 | "grade_id": "A2_test", 280 | "locked": true, 281 | "points": 1, 282 | "solution": false 283 | } 284 | }, 285 | "outputs": [], 286 | "source": [ 287 | "assert len(A2) == 5\n", 288 | "assert A2.sum() == 5000\n", 289 | "assert A2.index.name == 'rating'" 290 | ] 291 | }, 292 | { 293 | "cell_type": "markdown", 294 | "metadata": {}, 295 | "source": [ 296 | "Q. users를 gender 별로 묶어서 여자와 남자가 각각 몇 명이 있는지 구해보세요." 297 | ] 298 | }, 299 | { 300 | "cell_type": "code", 301 | "execution_count": null, 302 | "metadata": { 303 | "collapsed": false, 304 | "deletable": false, 305 | "nbgrader": { 306 | "checksum": "5c20052daff268dbda89c1289f1da025", 307 | "grade": false, 308 | "grade_id": "A3", 309 | "locked": false, 310 | "solution": true 311 | } 312 | }, 313 | "outputs": [], 314 | "source": [ 315 | "# A3 = \n", 316 | "# YOUR CODE HERE\n", 317 | "raise NotImplementedError()" 318 | ] 319 | }, 320 | { 321 | "cell_type": "markdown", 322 | "metadata": {}, 323 | "source": [ 324 | "```\n", 325 | "gender\n", 326 | "F 1709\n", 327 | "M 4331\n", 328 | "dtype: int64\n", 329 | "```" 330 | ] 331 | }, 332 | { 333 | "cell_type": "code", 334 | "execution_count": null, 335 | "metadata": { 336 | "collapsed": true, 337 | "deletable": false, 338 | "nbgrader": { 339 | "checksum": "b29ea6ef4c8b16242892b085383ee485", 340 | "grade": true, 341 | "grade_id": "A3_test", 342 | "locked": true, 343 | "points": 1, 344 | "solution": false 345 | } 346 | }, 347 | "outputs": [], 348 | "source": [ 349 | "assert A3.index.name == 'gender'\n", 350 | "assert 'F' in A3.index and 'M' in A3.index\n", 351 | "assert A3.sum() == 6040" 352 | ] 353 | }, 354 | { 355 | "cell_type": "markdown", 356 | "metadata": {}, 357 | "source": [ 358 | "Q. users에서 gender 별로 age의 평균이 얼마인지 구해보세요." 
359 | ] 360 | }, 361 | { 362 | "cell_type": "code", 363 | "execution_count": null, 364 | "metadata": { 365 | "collapsed": false, 366 | "deletable": false, 367 | "nbgrader": { 368 | "checksum": "8c7856814f9f4caec90f40ec9a174169", 369 | "grade": false, 370 | "grade_id": "A4", 371 | "locked": false, 372 | "solution": true 373 | } 374 | }, 375 | "outputs": [], 376 | "source": [ 377 | "# A4 = \n", 378 | "# YOUR CODE HERE\n", 379 | "raise NotImplementedError()" 380 | ] 381 | }, 382 | { 383 | "cell_type": "markdown", 384 | "metadata": {}, 385 | "source": [ 386 | "```\n", 387 | "gender\n", 388 | "F 30.859567\n", 389 | "M 30.552297\n", 390 | "Name: age, dtype: float64\n", 391 | "```" 392 | ] 393 | }, 394 | { 395 | "cell_type": "code", 396 | "execution_count": null, 397 | "metadata": { 398 | "collapsed": true, 399 | "deletable": false, 400 | "nbgrader": { 401 | "checksum": "fec821f1b08e66ff0ae89d141f98f288", 402 | "grade": true, 403 | "grade_id": "A4_test", 404 | "locked": true, 405 | "points": 1, 406 | "solution": false 407 | } 408 | }, 409 | "outputs": [], 410 | "source": [ 411 | "assert A4.index.name == 'gender'\n", 412 | "assert 'F' in A4.index and 'M' in A4.index\n", 413 | "assert round(A4.F, 2) == 30.86\n", 414 | "assert round(A4.M, 2) == 30.55" 415 | ] 416 | } 417 | ], 418 | "metadata": { 419 | "anaconda-cloud": {}, 420 | "kernelspec": { 421 | "display_name": "Python [Root]", 422 | "language": "python", 423 | "name": "Python [Root]" 424 | }, 425 | "language_info": { 426 | "codemirror_mode": { 427 | "name": "ipython", 428 | "version": 3 429 | }, 430 | "file_extension": ".py", 431 | "mimetype": "text/x-python", 432 | "name": "python", 433 | "nbconvert_exporter": "python", 434 | "pygments_lexer": "ipython3", 435 | "version": "3.5.2" 436 | } 437 | }, 438 | "nbformat": 4, 439 | "nbformat_minor": 0 440 | } 441 | -------------------------------------------------------------------------------- /01_programming/Groupby_solutions.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": { 7 | "nbgrader": { 8 | "grade": false, 9 | "grade_id": "pandas", 10 | "locked": true, 11 | "solution": false 12 | } 13 | }, 14 | "outputs": [], 15 | "source": [ 16 | "import pandas as pd" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": 2, 22 | "metadata": { 23 | "nbgrader": { 24 | "grade": false, 25 | "grade_id": "users", 26 | "locked": true, 27 | "solution": false 28 | } 29 | }, 30 | "outputs": [], 31 | "source": [ 32 | "unames = ['user_id', 'gender', 'age', 'occupation', 'zip'] \n", 33 | "users = pd.read_table('users.dat', sep='::', header=None, names=unames, engine='python', index_col='user_id')" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": 3, 39 | "metadata": { 40 | "nbgrader": { 41 | "grade": false, 42 | "grade_id": "movies", 43 | "locked": true, 44 | "solution": false 45 | } 46 | }, 47 | "outputs": [], 48 | "source": [ 49 | "mnames = ['movie_id', 'title', 'genres']\n", 50 | "movies = pd.read_table('movies.dat', sep='::', header=None, names=mnames, engine='python', index_col='movie_id')" 51 | ] 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": 4, 56 | "metadata": { 57 | "nbgrader": { 58 | "grade": false, 59 | "grade_id": "ratings", 60 | "locked": true, 61 | "solution": false 62 | } 63 | }, 64 | "outputs": [], 65 | "source": [ 66 | "ratings = pd.read_table('ratings.csv', sep=',', index_col='id')" 67 | ] 68 | }, 69 | { 70 | 
"cell_type": "markdown", 71 | "metadata": {}, 72 | "source": [ 73 | "---" 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "metadata": {}, 79 | "source": [ 80 | "# Groupby" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "Q. ratings를 movie 별로 묶어서 각 영화마다 ratings의 평균을 구해보세요. " 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": 5, 93 | "metadata": { 94 | "nbgrader": { 95 | "grade": false, 96 | "grade_id": "A0", 97 | "locked": false, 98 | "solution": true 99 | }, 100 | "scrolled": true 101 | }, 102 | "outputs": [], 103 | "source": [ 104 | "# A0 = \n", 105 | "### BEGIN SOLUTION\n", 106 | "A0 = ratings.groupby('movie')['rating'].mean()\n", 107 | "### END SOLUTION" 108 | ] 109 | }, 110 | { 111 | "cell_type": "markdown", 112 | "metadata": {}, 113 | "source": [ 114 | "```\n", 115 | "movie\n", 116 | "1 4.333333\n", 117 | "2 2.333333\n", 118 | "3 4.000000\n", 119 | "5 4.000000\n", 120 | "6 3.500000\n", 121 | "Name: rating, dtype: float64\n", 122 | "```" 123 | ] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "execution_count": 6, 128 | "metadata": { 129 | "nbgrader": { 130 | "grade": true, 131 | "grade_id": "A0_test", 132 | "locked": true, 133 | "points": 1, 134 | "solution": false 135 | } 136 | }, 137 | "outputs": [], 138 | "source": [ 139 | "assert len(A0) == 1770\n", 140 | "assert A0.name == 'rating'\n", 141 | "assert round(A0.mean(), 2) == 3.42" 142 | ] 143 | }, 144 | { 145 | "cell_type": "markdown", 146 | "metadata": {}, 147 | "source": [ 148 | "Q. ratings를 user별로 묶어서 각 유저 별 평점 평균을 구해보세요." 149 | ] 150 | }, 151 | { 152 | "cell_type": "code", 153 | "execution_count": 7, 154 | "metadata": { 155 | "nbgrader": { 156 | "grade": false, 157 | "grade_id": "A1", 158 | "locked": false, 159 | "solution": true 160 | } 161 | }, 162 | "outputs": [], 163 | "source": [ 164 | "# A1 = \n", 165 | "### BEGIN SOLUTION\n", 166 | "A1 = ratings.groupby('user')['rating'].mean()\n", 167 | "### END SOLUTION" 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": {}, 173 | "source": [ 174 | "```\n", 175 | "user\n", 176 | "2783 3.0\n", 177 | "2785 3.0\n", 178 | "2786 4.0\n", 179 | "2788 4.0\n", 180 | "2790 4.0\n", 181 | "Name: rating, dtype: float64\n", 182 | "```" 183 | ] 184 | }, 185 | { 186 | "cell_type": "code", 187 | "execution_count": 8, 188 | "metadata": { 189 | "nbgrader": { 190 | "grade": true, 191 | "grade_id": "A1_test", 192 | "locked": true, 193 | "points": 1, 194 | "solution": false 195 | } 196 | }, 197 | "outputs": [], 198 | "source": [ 199 | "assert len(A1) == 1987\n", 200 | "assert A1.index.name == 'user'\n", 201 | "assert round(A1.mean(), 2) == 3.66" 202 | ] 203 | }, 204 | { 205 | "cell_type": "markdown", 206 | "metadata": {}, 207 | "source": [ 208 | "Q. ratings에서 각 rating 점수 별로 몇 번 평가가 이루어졌는지 구해보세요." 
209 | ] 210 | }, 211 | { 212 | "cell_type": "code", 213 | "execution_count": 9, 214 | "metadata": { 215 | "nbgrader": { 216 | "grade": false, 217 | "grade_id": "A2", 218 | "locked": false, 219 | "solution": true 220 | }, 221 | "scrolled": true 222 | }, 223 | "outputs": [], 224 | "source": [ 225 | "# A2 = \n", 226 | "### BEGIN SOLUTION\n", 227 | "A2 = ratings.groupby('rating').size()\n", 228 | "### END SOLUTION" 229 | ] 230 | }, 231 | { 232 | "cell_type": "markdown", 233 | "metadata": {}, 234 | "source": [ 235 | "```\n", 236 | "rating\n", 237 | "1 305\n", 238 | "2 487\n", 239 | "3 1321\n", 240 | "4 1757\n", 241 | "5 1130\n", 242 | "dtype: int64\n", 243 | "```" 244 | ] 245 | }, 246 | { 247 | "cell_type": "code", 248 | "execution_count": 10, 249 | "metadata": { 250 | "nbgrader": { 251 | "grade": true, 252 | "grade_id": "A2_test", 253 | "locked": true, 254 | "points": 1, 255 | "solution": false 256 | } 257 | }, 258 | "outputs": [], 259 | "source": [ 260 | "assert len(A2) == 5\n", 261 | "assert A2.sum() == 5000\n", 262 | "assert A2.index.name == 'rating'" 263 | ] 264 | }, 265 | { 266 | "cell_type": "markdown", 267 | "metadata": {}, 268 | "source": [ 269 | "Q. users를 gender 별로 묶어서 여자와 남자가 각각 몇 명이 있는지 구해보세요." 270 | ] 271 | }, 272 | { 273 | "cell_type": "code", 274 | "execution_count": 11, 275 | "metadata": { 276 | "nbgrader": { 277 | "grade": false, 278 | "grade_id": "A3", 279 | "locked": false, 280 | "solution": true 281 | } 282 | }, 283 | "outputs": [], 284 | "source": [ 285 | "# A3 = \n", 286 | "### BEGIN SOLUTION\n", 287 | "A3 = users.groupby('gender').size()\n", 288 | "### END SOLUTION" 289 | ] 290 | }, 291 | { 292 | "cell_type": "markdown", 293 | "metadata": {}, 294 | "source": [ 295 | "```\n", 296 | "gender\n", 297 | "F 1709\n", 298 | "M 4331\n", 299 | "dtype: int64\n", 300 | "```" 301 | ] 302 | }, 303 | { 304 | "cell_type": "code", 305 | "execution_count": 12, 306 | "metadata": { 307 | "nbgrader": { 308 | "grade": true, 309 | "grade_id": "A3_test", 310 | "locked": true, 311 | "points": 1, 312 | "solution": false 313 | } 314 | }, 315 | "outputs": [], 316 | "source": [ 317 | "assert A3.index.name == 'gender'\n", 318 | "assert 'F' in A3.index and 'M' in A3.index\n", 319 | "assert A3.sum() == 6040" 320 | ] 321 | }, 322 | { 323 | "cell_type": "markdown", 324 | "metadata": {}, 325 | "source": [ 326 | "Q. users에서 gender 별로 age의 평균이 얼마인지 구해보세요." 
327 | ] 328 | }, 329 | { 330 | "cell_type": "code", 331 | "execution_count": 13, 332 | "metadata": { 333 | "nbgrader": { 334 | "grade": false, 335 | "grade_id": "A4", 336 | "locked": false, 337 | "solution": true 338 | } 339 | }, 340 | "outputs": [], 341 | "source": [ 342 | "# A4 = \n", 343 | "### BEGIN SOLUTION\n", 344 | "A4 = users.groupby('gender')['age'].mean()\n", 345 | "### END SOLUTION" 346 | ] 347 | }, 348 | { 349 | "cell_type": "markdown", 350 | "metadata": {}, 351 | "source": [ 352 | "```\n", 353 | "gender\n", 354 | "F 30.859567\n", 355 | "M 30.552297\n", 356 | "Name: age, dtype: float64\n", 357 | "```" 358 | ] 359 | }, 360 | { 361 | "cell_type": "code", 362 | "execution_count": 14, 363 | "metadata": { 364 | "nbgrader": { 365 | "grade": true, 366 | "grade_id": "A4_test", 367 | "locked": true, 368 | "points": 1, 369 | "solution": false 370 | } 371 | }, 372 | "outputs": [], 373 | "source": [ 374 | "assert A4.index.name == 'gender'\n", 375 | "assert 'F' in A4.index and 'M' in A4.index\n", 376 | "assert round(A4.F, 2) == 30.86\n", 377 | "assert round(A4.M, 2) == 30.55" 378 | ] 379 | } 380 | ], 381 | "metadata": { 382 | "anaconda-cloud": {}, 383 | "celltoolbar": "Create Assignment", 384 | "kernelspec": { 385 | "display_name": "Python [conda env:jupyterlab2]", 386 | "language": "python", 387 | "name": "conda-env-jupyterlab2-py" 388 | }, 389 | "language_info": { 390 | "codemirror_mode": { 391 | "name": "ipython", 392 | "version": 3 393 | }, 394 | "file_extension": ".py", 395 | "mimetype": "text/x-python", 396 | "name": "python", 397 | "nbconvert_exporter": "python", 398 | "pygments_lexer": "ipython3", 399 | "version": "3.6.3" 400 | } 401 | }, 402 | "nbformat": 4, 403 | "nbformat_minor": 1 404 | } 405 | -------------------------------------------------------------------------------- /01_programming/Pandas First Lecture.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/01_programming/Pandas First Lecture.pdf -------------------------------------------------------------------------------- /01_programming/README.md: -------------------------------------------------------------------------------- 1 | # Programming & Machine Learning 2 | 3 | 4 | 5 | ## Week 1 6 | 7 | [Python Introduction](http://nbviewer.jupyter.org/github/moduDLC/Lectures/blob/master/01_programming/%5BWeek1%5D%20Python%20Introduction.ipynb) 8 | 9 | [Python Advanced Concepts](http://nbviewer.jupyter.org/github/moduDLC/Lectures/blob/master/01_programming/%5BWeek1%5D%20Python%20Advanced%20Concepts.ipynb) 10 | 11 | [How to be Pythonic](http://nbviewer.jupyter.org/github/moduDLC/Lectures/blob/master/01_programming/%5BWeek1%5D%20How%20to%20be%20pythonic.ipynb) 12 | 13 | -------------------------------------------------------------------------------- /01_programming/image/built-in functions.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/01_programming/image/built-in functions.png -------------------------------------------------------------------------------- /01_programming/image/if else example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/01_programming/image/if else example.png 
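Editor's note: the pandas notebooks above teach assign and groupby separately; the sketch below combines the two patterns on the same MovieLens-style files the notebooks load. It is an illustration only, not a file in this repository, and it assumes movies.dat and ratings.csv (with the columns the notebooks show) sit in the working directory.

```python
import pandas as pd

# load the same tables the exercises use
mnames = ['movie_id', 'title', 'genres']
movies = pd.read_table('movies.dat', sep='::', header=None, names=mnames,
                       engine='python', index_col='movie_id')
ratings = pd.read_table('ratings.csv', sep=',', index_col='id')

# assign: derive new columns without mutating `movies`
movies_added = (movies
    .assign(year=lambda d: d['title'].map(lambda t: t[-5:-1]).astype(int))   # titles end in '(YYYY)'
    .assign(num_genres=lambda d: d['genres'].map(lambda g: len(g.split('|')))))

# groupby: per-movie mean rating, joined back onto the movie table by index
mean_by_movie = ratings.groupby('movie')['rating'].mean().rename('mean_rating')
print(movies_added.join(mean_by_movie).head())
```

As in the notebooks, every step returns a new object; movies itself is never modified.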
--------------------------------------------------------------------------------
/01_programming/image/integrated example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/01_programming/image/integrated example.png
--------------------------------------------------------------------------------
/01_programming/movies.dat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/01_programming/movies.dat
--------------------------------------------------------------------------------
/02_deep_learning101/README.md:
--------------------------------------------------------------------------------
1 | # Deep Learning 101
2 |
3 | hahahahahah
--------------------------------------------------------------------------------
/02_deep_learning101/codes/1day/prac0_score_function.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | #%%
3 | X = np.array([1.0, 0.5])
4 | W = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
5 | B = np.array([0.1, 0.2, 0.3])
6 | #%%
7 | print(X.shape)
8 | print(W.shape)
9 | print(B.shape)
10 | #%%
11 | # matrix multiplication : np.dot
12 | # element-wise multiplication : just *
13 | # matrix add : just +
14 |
15 | # your code
16 | A = np.dot(X, W) + B  # compute XW + B
17 | print(A)
18 |
--------------------------------------------------------------------------------
/02_deep_learning101/codes/1day/prac1_softmax_loss.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | import numpy as np
3 | #%%
4 | X = np.array([1.0, 0.5])
5 | W = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
6 | B = np.array([0.1, 0.2, 0.3])
7 | #%%
8 | print(X.shape)
9 | print(W.shape)
10 | print(B.shape)
11 | #%%
12 | A = np.dot(X, W) + B
13 | print(A)
14 | #%%
15 | def softmax(x):
16 |     x = x - np.max(x)  # guard against overflow
17 |     return np.exp(x) / np.sum(np.exp(x))
18 | #%%
19 | y = softmax(A)
20 | #%%
21 | # correct one-hot vector
22 | t = [0, 0, 1]
23 | #%%
24 | # hint : np.sum, np.log(x + delta) for preventing log(0)
25 | # i.e., add delta so that np.log is never called with 0.
26 | ###################################
27 | #### your code ############
28 | ###################################
29 | def cross_entropy_error(y, t):
30 |     delta = 1e-7
31 |     loss = -np.sum(t * np.log(y + delta))
32 |     return loss
33 |
34 | #%%
35 | loss = cross_entropy_error(y, t)
--------------------------------------------------------------------------------
/02_deep_learning101/codes/2day/prac2_simple_two_Layer_net.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | import sys, os
3 | sys.path.append(os.pardir)  # allow imports from the parent directory
4 | from common.functions import *
5 | from common.gradient import numerical_gradient
6 |
7 |
8 | class TwoLayerNet:
9 |
10 |     def __init__(self, input_size, hidden_size, output_size, weight_init_std=0.01):
11 |         # initialize the weights
12 |         self.params = {}
13 |         self.params['W1'] = weight_init_std * np.random.randn(input_size, hidden_size)
14 |         self.params['b1'] = np.zeros(hidden_size)
15 |         self.params['W2'] = weight_init_std * np.random.randn(hidden_size, output_size)
16 |         self.params['b2'] = np.zeros(output_size)
17 |
18 |     def predict(self, x):
19 |         W1, W2 = self.params['W1'], self.params['W2']
20 |         b1, b2 = self.params['b1'], self.params['b2']
21 |
22 |         a1 = np.dot(x, W1) + b1
23 |         z1 = sigmoid(a1)
24 |         a2 = np.dot(z1, W2) + b2
25 |         y = softmax(a2)
26 |
27 |         return y
28 |
29 |     # x : input data, t : target labels
30 |     def loss(self, x, t):
31 |         y = self.predict(x)
32 |
33 |         return cross_entropy_error(y, t)
34 |
35 |     def accuracy(self, x, t):
36 |         y = self.predict(x)
37 |         y = np.argmax(y, axis=1)
38 |         t = np.argmax(t, axis=1)
39 |
40 |         accuracy = np.sum(y == t) / float(x.shape[0])
41 |         return accuracy
42 |
43 |     # x : input data, t : target labels
44 |     def numerical_gradient(self, x, t):
45 |         loss_W = lambda W: self.loss(x, t)
46 |
47 |         grads = {}
48 |         grads['W1'] = numerical_gradient(loss_W, self.params['W1'])
49 |         grads['b1'] = numerical_gradient(loss_W, self.params['b1'])
50 |         grads['W2'] = numerical_gradient(loss_W, self.params['W2'])
51 |         grads['b2'] = numerical_gradient(loss_W, self.params['b2'])
52 |
53 |         return grads
54 |
55 |     def gradient(self, x, t):
56 |         W1, W2 = self.params['W1'], self.params['W2']
57 |         b1, b2 = self.params['b1'], self.params['b2']
58 |         grads = {}
59 |
60 |         batch_num = x.shape[0]
61 |
62 |         # forward
63 |         a1 = np.dot(x, W1) + b1
64 |         z1 = sigmoid(a1)
65 |         a2 = np.dot(z1, W2) + b2
66 |         y = softmax(a2)
67 |
68 |         # backward
69 |         dy = (y - t) / batch_num
70 |         ##################################
71 |         ######### Write your code ########
72 |         ##################################
73 |         grads['W2'] = None
74 |         grads['b2'] = None
75 |
76 |         dz1 = None
77 |         da1 = None
78 |         grads['W1'] = None
79 |         grads['b1'] = None
80 |
81 |         return grads
82 |
--------------------------------------------------------------------------------
/02_deep_learning101/codes/2day/prac2_train_neuralnet.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | import sys, os
3 | sys.path.append(os.pardir)  # allow imports from the parent directory
4 | import numpy as np
5 | import matplotlib.pyplot as plt
6 | from dataset.mnist import load_mnist
7 | from prac2_simple_two_Layer_net import TwoLayerNet
8 |
9 | # load the data
10 | (x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)
11 |
12 | network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)
13 |
14 | # hyperparameters
15 | iters_num = 10000  # set an appropriate number of iterations
16 | train_size = x_train.shape[0]
17 | batch_size = 100  # mini-batch size
18 | learning_rate = 0.1
19 |
20 | train_loss_list = []
21 | train_acc_list = []
22 | test_acc_list = []
23 |
24 | # iterations per epoch
25 | iter_per_epoch = max(train_size / batch_size, 1)
26 |
27 | for i in range(iters_num):
28 |     # sample a mini-batch
29 |     batch_mask = np.random.choice(train_size, batch_size)
30 |     x_batch = x_train[batch_mask]
31 |     t_batch = t_train[batch_mask]
32 |
33 |     # compute the gradients
34 |     #grad = network.numerical_gradient(x_batch, t_batch)
35 |     grad = network.gradient(x_batch, t_batch)
36 |
37 |     # update the parameters
38 |     for key in ('W1', 'b1', 'W2', 'b2'):
39 |         network.params[key] -= learning_rate * grad[key]
40 |
41 |     # record the training loss
42 |     loss = network.loss(x_batch, t_batch)
43 |     train_loss_list.append(loss)
44 |
45 |     # compute accuracy once per epoch
46 |     if i % iter_per_epoch == 0:
47 |         train_acc = network.accuracy(x_train, t_train)
48 |         test_acc = network.accuracy(x_test, t_test)
49 |         train_acc_list.append(train_acc)
50 |         test_acc_list.append(test_acc)
51 |         print("train acc, test acc | " + str(train_acc) + ", " + str(test_acc))
52 |
53 | # plot the results
54 | markers = {'train': 'o', 'test': 's'}
55 | x = np.arange(len(train_acc_list))
56 | plt.plot(x, train_acc_list, label='train acc')
57 | plt.plot(x, test_acc_list, label='test acc', linestyle='--')
58 | plt.xlabel("epochs")
59 | plt.ylabel("accuracy")
60 | plt.ylim(0, 1.0)
61 | plt.legend(loc='lower right')
62 | plt.show()
63 |
--------------------------------------------------------------------------------
/02_deep_learning101/codes/2day/prac3_mlp_mnist.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | from __future__ import print_function
4 |
5 | import keras
6 | from keras.datasets import mnist
7 | from keras.models import Sequential
8 | from keras.layers import Dense, Dropout
9 |
10 | batch_size = 128
11 | num_classes = 10
12 | epochs = 20
13 |
14 | # the data, shuffled and split between train and test sets
15 | (x_train, y_train), (x_test, y_test) = mnist.load_data()
16 |
17 | x_train = x_train.reshape(60000, 784)
18 | x_test = x_test.reshape(10000, 784)
19 | x_train = x_train.astype('float32')
20 | x_test = x_test.astype('float32')
21 | x_train /= 255
22 | x_test /= 255
23 | print(x_train.shape[0], 'train samples')
24 | print(x_test.shape[0], 'test samples')
25 |
26 | # convert class vectors to binary class matrices
27 | y_train = keras.utils.to_categorical(y_train, num_classes)
28 | y_test = keras.utils.to_categorical(y_test, num_classes)
29 |
30 | model = Sequential()
31 |
32 | #############################################
33 | ################## Code ##################
34 | #############################################
35 | #### Fill in the ? parts according to the comments.
36 | # Dense layer with 512 neurons, activation : 'relu'
37 | model.add(Dense(?, activation='?', input_shape=(784,)))
38 | # 512 neurons, activation : 'relu'
39 | model.add(Dense(?, activation='?'))
40 | # as many neurons as there are classes, activation : 'softmax'
41 | model.add(Dense(?, activation='?'))
42 |
43 | #############################################
44 | ################## End of your Code ##################
45 | #############################################
46 |
47 | model.summary()
48 |
49 | # loss : cross-entropy, optimizer : SGD, metrics : used for evaluation, not for training.
50 | model.compile(loss='categorical_crossentropy',
51 |               optimizer='sgd',
52 |               metrics=['accuracy'])
53 |
54 | # epoch : the number of passes over the entire training data
55 | history = model.fit(x_train, y_train,
56 |                     batch_size=batch_size,
57 |                     epochs=epochs,
58 |                     verbose=1,
59 |                     validation_data=(x_test, y_test))
60 | score = model.evaluate(x_test, y_test, verbose=0)
61 | print('Test loss:', score[0])
62 | print('Test accuracy:', score[1])
63 |
--------------------------------------------------------------------------------
/02_deep_learning101/codes/3day/prac3_mlp_mnist.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | from __future__ import print_function
4 |
5 | import keras
6 | from keras.datasets import mnist
7 | from keras.models import Sequential
8 | from keras.layers import Dense, Dropout
9 |
10 | batch_size = 128
11 | num_classes = 10
12 | epochs = 20
13 |
14 | # the data, shuffled and split between train and test sets
15 | (x_train, y_train), (x_test, y_test) = mnist.load_data()
16 |
17 | x_train = x_train.reshape(60000, 784)
18 | x_test = x_test.reshape(10000, 784)
19 | x_train = x_train.astype('float32')
20 | x_test = x_test.astype('float32')
21 | x_train /= 255
22 | x_test /= 255
23 | print(x_train.shape[0], 'train samples')
24 | print(x_test.shape[0], 'test samples')
25 |
26 | # convert class vectors to binary class matrices
27 | y_train = keras.utils.to_categorical(y_train, num_classes)
28 | y_test = keras.utils.to_categorical(y_test, num_classes)
29 |
30 | model = Sequential()
31 |
32 | #############################################
33 | ################## Code ##################
34 | #############################################
35 | #### Fill in the ? parts according to the comments.
36 | # Dense layer with 512 neurons, activation : 'relu'
37 | model.add(Dense(?, activation='?', input_shape=(784,)))
38 | # 512 neurons, activation : 'relu'
39 | model.add(Dense(?, activation='?'))
40 | # as many neurons as there are classes, activation : 'softmax'
41 | model.add(Dense(?, activation='?'))
42 |
43 | #############################################
44 | ################## End of your Code ##################
45 | #############################################
46 |
47 | model.summary()
48 |
49 | # loss : cross-entropy, optimizer : SGD, metrics : used for evaluation, not for training.
50 | model.compile(loss='categorical_crossentropy',
51 |               optimizer='sgd',
52 |               metrics=['accuracy'])
53 |
54 | # epoch : the number of passes over the entire training data
55 | history = model.fit(x_train, y_train,
56 |                     batch_size=batch_size,
57 |                     epochs=epochs,
58 |                     verbose=1,
59 |                     validation_data=(x_test, y_test))
60 | score = model.evaluate(x_test, y_test, verbose=0)
61 | print('Test loss:', score[0])
62 | print('Test accuracy:', score[1])
63 |
--------------------------------------------------------------------------------
/02_deep_learning101/codes/common/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/codes/common/__init__.py
--------------------------------------------------------------------------------
/02_deep_learning101/codes/common/functions.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | import numpy as np
3 |
4 |
5 | def identity_function(x):
6 |     return x
7 |
8 |
9 | def step_function(x):
10 |     return np.array(x > 0, dtype=int)
11 |
12 |
13 | def sigmoid(x):
14 |     return 1 / (1 + np.exp(-x))
15 |
16 |
17 | def sigmoid_grad(x):
18 |     return (1.0 - sigmoid(x)) * sigmoid(x)
19 |
20 |
21 | def relu(x):
22 |     return np.maximum(0, x)
23 |
24 |
25 | def relu_grad(x):
26 |     grad = np.zeros_like(x)  # np.zeros(x) would raise on an array argument
27 |     grad[x >= 0] = 1
28 |     return grad
29 |
30 |
31 | def softmax(x):
32 |     if x.ndim == 2:
33 |         x = x.T
34 |         x = x - np.max(x, axis=0)
35 |         y = np.exp(x) / np.sum(np.exp(x), axis=0)
36 |         return y.T
37 |
38 |     x = x - np.max(x)  # guard against overflow
39 |     return np.exp(x) / np.sum(np.exp(x))
40 |
41 |
42 | def mean_squared_error(y, t):
43 |     return 0.5 * np.sum((y - t)**2)
44 |
45 |
46 | def cross_entropy_error(y, t):
47 |     if y.ndim == 1:
48 |         t = t.reshape(1, t.size)
49 |         y = y.reshape(1, y.size)
50 |
51 |     # if the labels are one-hot vectors, convert them to class indices
52 |     if t.size == y.size:
53 |         t = t.argmax(axis=1)
54 |
55 |     batch_size = y.shape[0]
56 |     return -np.sum(np.log(y[np.arange(batch_size), t])) / batch_size
57 |
58 |
59 | def softmax_loss(X, t):
60 |     y = softmax(X)
61 |     return cross_entropy_error(y, t)
62 |
--------------------------------------------------------------------------------
/02_deep_learning101/codes/common/gradient.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | import numpy as np
3 |
4 | def _numerical_gradient_1d(f, x):
5 |     h = 1e-4  # 0.0001
6 |     grad = np.zeros_like(x)
7 |
8 |     for idx in range(x.size):
9 |         tmp_val = x[idx]
10 |         x[idx] = float(tmp_val) + h
11 |         fxh1 = f(x)  # f(x+h)
12 |
13 |         x[idx] = tmp_val - h
14 |         fxh2 = f(x)  # f(x-h)
15 |         grad[idx] = (fxh1 - fxh2) / (2*h)
16 |
17 |         x[idx] = tmp_val  # restore the original value
18 |
19 |     return grad
20 |
21 |
22 | def numerical_gradient_2d(f, X):
23 |     if X.ndim == 1:
24 |         return _numerical_gradient_1d(f, X)
25 |     else:
26 |         grad = np.zeros_like(X)
27 |
28 |         for idx, x in enumerate(X):
29 |             grad[idx] = _numerical_gradient_1d(f, x)
30 |
31 |         return grad
32 |
33 |
34 | def numerical_gradient(f, x):
35 |     h = 1e-4  # 0.0001
36 |     grad = np.zeros_like(x)
37 |
38 |     it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
39 |     while not it.finished:
40 |         idx = it.multi_index
41 |         tmp_val = x[idx]
42 |         x[idx] = float(tmp_val) + h
43 |         fxh1 = f(x)  # f(x+h)
44 |
45 |         x[idx] = tmp_val - h
46 |         fxh2 = f(x)  # f(x-h)
47 |         grad[idx] = (fxh1 - fxh2) / (2*h)
48 |
49 |         x[idx] = tmp_val  # restore the original value
50 |         it.iternext()
51 |
52 |     return grad
53 |
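Editor's note: prac2_train_neuralnet.py above keeps the slow network.numerical_gradient call commented out in favor of the analytic gradient(). The standard way to use the two together is a gradient check: evaluate both on a tiny batch and confirm they agree. Below is a minimal sketch, assuming it is run from codes/2day/ (so the sys.path trick used in the practice scripts resolves common/ and dataset/) and that the "Write your code" section of prac2_simple_two_Layer_net.py has been completed.

```python
# coding: utf-8
import sys, os
sys.path.append(os.pardir)  # make common/ and dataset/ importable, as in the practice scripts
import numpy as np
from dataset.mnist import load_mnist
from prac2_simple_two_Layer_net import TwoLayerNet  # assumes the backward-pass blanks are filled in

(x_train, t_train), _ = load_mnist(normalize=True, one_hot_label=True)
network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)

# a tiny batch: numerical_gradient costs two loss evaluations per parameter
x_batch, t_batch = x_train[:3], t_train[:3]

grad_numerical = network.numerical_gradient(x_batch, t_batch)
grad_backprop = network.gradient(x_batch, t_batch)

# mean absolute difference per parameter tensor; roughly 1e-10 to 1e-7 means backprop is correct
for key in grad_numerical:
    diff = np.average(np.abs(grad_backprop[key] - grad_numerical[key]))
    print(key + ': ' + str(diff))
```

If any entry comes out near 1e-2 or larger, the corresponding line of the backward pass in gradient() is wrong.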
-------------------------------------------------------------------------------- /02_deep_learning101/codes/common/layers.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | import numpy as np 3 | from common.functions import * 4 | from common.util import im2col, col2im 5 | 6 | 7 | class Relu: 8 | def __init__(self): 9 | self.mask = None 10 | 11 | def forward(self, x): 12 | self.mask = (x <= 0) 13 | out = x.copy() 14 | out[self.mask] = 0 15 | 16 | return out 17 | 18 | def backward(self, dout): 19 | dout[self.mask] = 0 20 | dx = dout 21 | 22 | return dx 23 | 24 | 25 | class Sigmoid: 26 | def __init__(self): 27 | self.out = None 28 | 29 | def forward(self, x): 30 | out = sigmoid(x) 31 | self.out = out 32 | return out 33 | 34 | def backward(self, dout): 35 | dx = dout * (1.0 - self.out) * self.out 36 | 37 | return dx 38 | 39 | 40 | class Affine: 41 | def __init__(self, W, b): 42 | self.W = W 43 | self.b = b 44 | 45 | self.x = None 46 | self.original_x_shape = None 47 | # 가중치와 편향 매개변수의 미분 48 | self.dW = None 49 | self.db = None 50 | 51 | def forward(self, x): 52 | # 텐서 대응 53 | self.original_x_shape = x.shape 54 | x = x.reshape(x.shape[0], -1) 55 | self.x = x 56 | 57 | out = np.dot(self.x, self.W) + self.b 58 | 59 | return out 60 | 61 | def backward(self, dout): 62 | dx = np.dot(dout, self.W.T) 63 | self.dW = np.dot(self.x.T, dout) 64 | self.db = np.sum(dout, axis=0) 65 | 66 | dx = dx.reshape(*self.original_x_shape) # 입력 데이터 모양 변경(텐서 대응) 67 | return dx 68 | 69 | 70 | class SoftmaxWithLoss: 71 | def __init__(self): 72 | self.loss = None # 손실함수 73 | self.y = None # softmax의 출력 74 | self.t = None # 정답 레이블(원-핫 인코딩 형태) 75 | 76 | def forward(self, x, t): 77 | self.t = t 78 | self.y = softmax(x) 79 | self.loss = cross_entropy_error(self.y, self.t) 80 | 81 | return self.loss 82 | 83 | def backward(self, dout=1): 84 | batch_size = self.t.shape[0] 85 | if self.t.size == self.y.size: # 정답 레이블이 원-핫 인코딩 형태일 때 86 | dx = (self.y - self.t) / batch_size 87 | else: 88 | dx = self.y.copy() 89 | dx[np.arange(batch_size), self.t] -= 1 90 | dx = dx / batch_size 91 | 92 | return dx 93 | 94 | 95 | class Dropout: 96 | """ 97 | http://arxiv.org/abs/1207.0580 98 | """ 99 | def __init__(self, dropout_ratio=0.5): 100 | self.dropout_ratio = dropout_ratio 101 | self.mask = None 102 | 103 | def forward(self, x, train_flg=True): 104 | if train_flg: 105 | self.mask = np.random.rand(*x.shape) > self.dropout_ratio 106 | return x * self.mask 107 | else: 108 | return x * (1.0 - self.dropout_ratio) 109 | 110 | def backward(self, dout): 111 | return dout * self.mask 112 | 113 | 114 | class BatchNormalization: 115 | """ 116 | http://arxiv.org/abs/1502.03167 117 | """ 118 | def __init__(self, gamma, beta, momentum=0.9, running_mean=None, running_var=None): 119 | self.gamma = gamma 120 | self.beta = beta 121 | self.momentum = momentum 122 | self.input_shape = None # 합성곱 계층은 4차원, 완전연결 계층은 2차원 123 | 124 | # 시험할 때 사용할 평균과 분산 125 | self.running_mean = running_mean 126 | self.running_var = running_var 127 | 128 | # backward 시에 사용할 중간 데이터 129 | self.batch_size = None 130 | self.xc = None 131 | self.std = None 132 | self.dgamma = None 133 | self.dbeta = None 134 | 135 | def forward(self, x, train_flg=True): 136 | self.input_shape = x.shape 137 | if x.ndim != 2: 138 | N, C, H, W = x.shape 139 | x = x.reshape(N, -1) 140 | 141 | out = self.__forward(x, train_flg) 142 | 143 | return out.reshape(*self.input_shape) 144 | 145 | def __forward(self, x, train_flg): 146 | if self.running_mean is 
None: 147 | N, D = x.shape 148 | self.running_mean = np.zeros(D) 149 | self.running_var = np.zeros(D) 150 | 151 | if train_flg: 152 | mu = x.mean(axis=0) 153 | xc = x - mu 154 | var = np.mean(xc**2, axis=0) 155 | std = np.sqrt(var + 10e-7) 156 | xn = xc / std 157 | 158 | self.batch_size = x.shape[0] 159 | self.xc = xc 160 | self.xn = xn 161 | self.std = std 162 | self.running_mean = self.momentum * self.running_mean + (1-self.momentum) * mu 163 | self.running_var = self.momentum * self.running_var + (1-self.momentum) * var 164 | else: 165 | xc = x - self.running_mean 166 | xn = xc / ((np.sqrt(self.running_var + 10e-7))) 167 | 168 | out = self.gamma * xn + self.beta 169 | return out 170 | 171 | def backward(self, dout): 172 | if dout.ndim != 2: 173 | N, C, H, W = dout.shape 174 | dout = dout.reshape(N, -1) 175 | 176 | dx = self.__backward(dout) 177 | 178 | dx = dx.reshape(*self.input_shape) 179 | return dx 180 | 181 | def __backward(self, dout): 182 | dbeta = dout.sum(axis=0) 183 | dgamma = np.sum(self.xn * dout, axis=0) 184 | dxn = self.gamma * dout 185 | dxc = dxn / self.std 186 | dstd = -np.sum((dxn * self.xc) / (self.std * self.std), axis=0) 187 | dvar = 0.5 * dstd / self.std 188 | dxc += (2.0 / self.batch_size) * self.xc * dvar 189 | dmu = np.sum(dxc, axis=0) 190 | dx = dxc - dmu / self.batch_size 191 | 192 | self.dgamma = dgamma 193 | self.dbeta = dbeta 194 | 195 | return dx 196 | 197 | 198 | class Convolution: 199 | def __init__(self, W, b, stride=1, pad=0): 200 | self.W = W 201 | self.b = b 202 | self.stride = stride 203 | self.pad = pad 204 | 205 | # 중간 데이터(backward 시 사용) 206 | self.x = None 207 | self.col = None 208 | self.col_W = None 209 | 210 | # 가중치와 편향 매개변수의 기울기 211 | self.dW = None 212 | self.db = None 213 | 214 | def forward(self, x): 215 | FN, C, FH, FW = self.W.shape 216 | N, C, H, W = x.shape 217 | out_h = 1 + int((H + 2*self.pad - FH) / self.stride) 218 | out_w = 1 + int((W + 2*self.pad - FW) / self.stride) 219 | 220 | col = im2col(x, FH, FW, self.stride, self.pad) 221 | col_W = self.W.reshape(FN, -1).T 222 | 223 | out = np.dot(col, col_W) + self.b 224 | out = out.reshape(N, out_h, out_w, -1).transpose(0, 3, 1, 2) 225 | 226 | self.x = x 227 | self.col = col 228 | self.col_W = col_W 229 | 230 | return out 231 | 232 | def backward(self, dout): 233 | FN, C, FH, FW = self.W.shape 234 | dout = dout.transpose(0,2,3,1).reshape(-1, FN) 235 | 236 | self.db = np.sum(dout, axis=0) 237 | self.dW = np.dot(self.col.T, dout) 238 | self.dW = self.dW.transpose(1, 0).reshape(FN, C, FH, FW) 239 | 240 | dcol = np.dot(dout, self.col_W.T) 241 | dx = col2im(dcol, self.x.shape, FH, FW, self.stride, self.pad) 242 | 243 | return dx 244 | 245 | 246 | class Pooling: 247 | def __init__(self, pool_h, pool_w, stride=1, pad=0): 248 | self.pool_h = pool_h 249 | self.pool_w = pool_w 250 | self.stride = stride 251 | self.pad = pad 252 | 253 | self.x = None 254 | self.arg_max = None 255 | 256 | def forward(self, x): 257 | N, C, H, W = x.shape 258 | out_h = int(1 + (H - self.pool_h) / self.stride) 259 | out_w = int(1 + (W - self.pool_w) / self.stride) 260 | 261 | col = im2col(x, self.pool_h, self.pool_w, self.stride, self.pad) 262 | col = col.reshape(-1, self.pool_h*self.pool_w) 263 | 264 | arg_max = np.argmax(col, axis=1) 265 | out = np.max(col, axis=1) 266 | out = out.reshape(N, out_h, out_w, C).transpose(0, 3, 1, 2) 267 | 268 | self.x = x 269 | self.arg_max = arg_max 270 | 271 | return out 272 | 273 | def backward(self, dout): 274 | dout = dout.transpose(0, 2, 3, 1) 275 | 276 | pool_size = self.pool_h * 
self.pool_w 277 | dmax = np.zeros((dout.size, pool_size)) 278 | dmax[np.arange(self.arg_max.size), self.arg_max.flatten()] = dout.flatten() 279 | dmax = dmax.reshape(dout.shape + (pool_size,)) 280 | 281 | dcol = dmax.reshape(dmax.shape[0] * dmax.shape[1] * dmax.shape[2], -1) 282 | dx = col2im(dcol, self.x.shape, self.pool_h, self.pool_w, self.stride, self.pad) 283 | 284 | return dx 285 | -------------------------------------------------------------------------------- /02_deep_learning101/codes/common/multi_layer_net.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | import sys, os 3 | sys.path.append(os.pardir) # 부모 디렉터리의 파일을 가져올 수 있도록 설정 4 | import numpy as np 5 | from collections import OrderedDict 6 | from common.layers import * 7 | from common.gradient import numerical_gradient 8 | 9 | 10 | class MultiLayerNet: 11 | """완전연결 다층 신경망 12 | 13 | Parameters 14 | ---------- 15 | input_size : 입력 크기(MNIST의 경우엔 784) 16 | hidden_size_list : 각 은닉층의 뉴런 수를 담은 리스트(e.g. [100, 100, 100]) 17 | output_size : 출력 크기(MNIST의 경우엔 10) 18 | activation : 활성화 함수 - 'relu' 혹은 'sigmoid' 19 | weight_init_std : 가중치의 표준편차 지정(e.g. 0.01) 20 | 'relu'나 'he'로 지정하면 'He 초깃값'으로 설정 21 | 'sigmoid'나 'xavier'로 지정하면 'Xavier 초깃값'으로 설정 22 | weight_decay_lambda : 가중치 감소(L2 법칙)의 세기 23 | """ 24 | def __init__(self, input_size, hidden_size_list, output_size, 25 | activation='relu', weight_init_std='relu', weight_decay_lambda=0): 26 | self.input_size = input_size 27 | self.output_size = output_size 28 | self.hidden_size_list = hidden_size_list 29 | self.hidden_layer_num = len(hidden_size_list) 30 | self.weight_decay_lambda = weight_decay_lambda 31 | self.params = {} 32 | 33 | # 가중치 초기화 34 | self.__init_weight(weight_init_std) 35 | 36 | # 계층 생성 37 | activation_layer = {'sigmoid': Sigmoid, 'relu': Relu} 38 | self.layers = OrderedDict() 39 | for idx in range(1, self.hidden_layer_num+1): 40 | self.layers['Affine' + str(idx)] = Affine(self.params['W' + str(idx)], 41 | self.params['b' + str(idx)]) 42 | self.layers['Activation_function' + str(idx)] = activation_layer[activation]() 43 | 44 | idx = self.hidden_layer_num + 1 45 | self.layers['Affine' + str(idx)] = Affine(self.params['W' + str(idx)], 46 | self.params['b' + str(idx)]) 47 | 48 | self.last_layer = SoftmaxWithLoss() 49 | 50 | def __init_weight(self, weight_init_std): 51 | """가중치 초기화 52 | 53 | Parameters 54 | ---------- 55 | weight_init_std : 가중치의 표준편차 지정(e.g. 0.01) 56 | 'relu'나 'he'로 지정하면 'He 초깃값'으로 설정 57 | 'sigmoid'나 'xavier'로 지정하면 'Xavier 초깃값'으로 설정 58 | """ 59 | all_size_list = [self.input_size] + self.hidden_size_list + [self.output_size] 60 | for idx in range(1, len(all_size_list)): 61 | scale = weight_init_std 62 | if str(weight_init_std).lower() in ('relu', 'he'): 63 | scale = np.sqrt(2.0 / all_size_list[idx - 1]) # ReLU를 사용할 때의 권장 초깃값 64 | elif str(weight_init_std).lower() in ('sigmoid', 'xavier'): 65 | scale = np.sqrt(1.0 / all_size_list[idx - 1]) # sigmoid를 사용할 때의 권장 초깃값 66 | self.params['W' + str(idx)] = scale * np.random.randn(all_size_list[idx-1], all_size_list[idx]) 67 | self.params['b' + str(idx)] = np.zeros(all_size_list[idx]) 68 | 69 | def predict(self, x): 70 | for layer in self.layers.values(): 71 | x = layer.forward(x) 72 | 73 | return x 74 | 75 | def loss(self, x, t): 76 | """손실 함수를 구한다. 
77 | 78 | Parameters 79 | ---------- 80 | x : 입력 데이터 81 | t : 정답 레이블 82 | 83 | Returns 84 | ------- 85 | 손실 함수의 값 86 | """ 87 | y = self.predict(x) 88 | 89 | weight_decay = 0 90 | for idx in range(1, self.hidden_layer_num + 2): 91 | W = self.params['W' + str(idx)] 92 | weight_decay += 0.5 * self.weight_decay_lambda * np.sum(W ** 2) 93 | 94 | return self.last_layer.forward(y, t) + weight_decay 95 | 96 | def accuracy(self, x, t): 97 | y = self.predict(x) 98 | y = np.argmax(y, axis=1) 99 | if t.ndim != 1 : t = np.argmax(t, axis=1) 100 | 101 | accuracy = np.sum(y == t) / float(x.shape[0]) 102 | return accuracy 103 | 104 | def numerical_gradient(self, x, t): 105 | """기울기를 구한다(수치 미분). 106 | 107 | Parameters 108 | ---------- 109 | x : 입력 데이터 110 | t : 정답 레이블 111 | 112 | Returns 113 | ------- 114 | 각 층의 기울기를 담은 딕셔너리(dictionary) 변수 115 | grads['W1']、grads['W2']、... 각 층의 가중치 116 | grads['b1']、grads['b2']、... 각 층의 편향 117 | """ 118 | loss_W = lambda W: self.loss(x, t) 119 | 120 | grads = {} 121 | for idx in range(1, self.hidden_layer_num+2): 122 | grads['W' + str(idx)] = numerical_gradient(loss_W, self.params['W' + str(idx)]) 123 | grads['b' + str(idx)] = numerical_gradient(loss_W, self.params['b' + str(idx)]) 124 | 125 | return grads 126 | 127 | def gradient(self, x, t): 128 | """기울기를 구한다(오차역전파법). 129 | 130 | Parameters 131 | ---------- 132 | x : 입력 데이터 133 | t : 정답 레이블 134 | 135 | Returns 136 | ------- 137 | 각 층의 기울기를 담은 딕셔너리(dictionary) 변수 138 | grads['W1']、grads['W2']、... 각 층의 가중치 139 | grads['b1']、grads['b2']、... 각 층의 편향 140 | """ 141 | # forward 142 | self.loss(x, t) 143 | 144 | # backward 145 | dout = 1 146 | dout = self.last_layer.backward(dout) 147 | 148 | layers = list(self.layers.values()) 149 | layers.reverse() 150 | for layer in layers: 151 | dout = layer.backward(dout) 152 | 153 | # 결과 저장 154 | grads = {} 155 | for idx in range(1, self.hidden_layer_num+2): 156 | grads['W' + str(idx)] = self.layers['Affine' + str(idx)].dW + self.weight_decay_lambda * self.layers['Affine' + str(idx)].W 157 | grads['b' + str(idx)] = self.layers['Affine' + str(idx)].db 158 | 159 | return grads 160 | -------------------------------------------------------------------------------- /02_deep_learning101/codes/common/multi_layer_net_extend.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | import sys, os 3 | sys.path.append(os.pardir) # 부모 디렉터리의 파일을 가져올 수 있도록 설정 4 | import numpy as np 5 | from collections import OrderedDict 6 | from common.layers import * 7 | from common.gradient import numerical_gradient 8 | 9 | class MultiLayerNetExtend: 10 | """완전 연결 다층 신경망(확장판) 11 | 가중치 감소, 드롭아웃, 배치 정규화 구현 12 | 13 | Parameters 14 | ---------- 15 | input_size : 입력 크기(MNIST의 경우엔 784) 16 | hidden_size_list : 각 은닉층의 뉴런 수를 담은 리스트(e.g. [100, 100, 100]) 17 | output_size : 출력 크기(MNIST의 경우엔 10) 18 | activation : 활성화 함수 - 'relu' 혹은 'sigmoid' 19 | weight_init_std : 가중치의 표준편차 지정(e.g. 
0.01) 20 | 'relu'나 'he'로 지정하면 'He 초깃값'으로 설정 21 | 'sigmoid'나 'xavier'로 지정하면 'Xavier 초깃값'으로 설정 22 | weight_decay_lambda : 가중치 감소(L2 법칙)의 세기 23 | use_dropout : 드롭아웃 사용 여부 24 | dropout_ration : 드롭아웃 비율 25 | use_batchNorm : 배치 정규화 사용 여부 26 | """ 27 | def __init__(self, input_size, hidden_size_list, output_size, 28 | activation='relu', weight_init_std='relu', weight_decay_lambda=0, 29 | use_dropout = False, dropout_ration = 0.5, use_batchnorm=False): 30 | self.input_size = input_size 31 | self.output_size = output_size 32 | self.hidden_size_list = hidden_size_list 33 | self.hidden_layer_num = len(hidden_size_list) 34 | self.use_dropout = use_dropout 35 | self.weight_decay_lambda = weight_decay_lambda 36 | self.use_batchnorm = use_batchnorm 37 | self.params = {} 38 | 39 | # 가중치 초기화 40 | self.__init_weight(weight_init_std) 41 | 42 | # 계층 생성 43 | activation_layer = {'sigmoid': Sigmoid, 'relu': Relu} 44 | self.layers = OrderedDict() 45 | for idx in range(1, self.hidden_layer_num+1): 46 | self.layers['Affine' + str(idx)] = Affine(self.params['W' + str(idx)], 47 | self.params['b' + str(idx)]) 48 | if self.use_batchnorm: 49 | self.params['gamma' + str(idx)] = np.ones(hidden_size_list[idx-1]) 50 | self.params['beta' + str(idx)] = np.zeros(hidden_size_list[idx-1]) 51 | self.layers['BatchNorm' + str(idx)] = BatchNormalization(self.params['gamma' + str(idx)], self.params['beta' + str(idx)]) 52 | 53 | self.layers['Activation_function' + str(idx)] = activation_layer[activation]() 54 | 55 | if self.use_dropout: 56 | self.layers['Dropout' + str(idx)] = Dropout(dropout_ration) 57 | 58 | idx = self.hidden_layer_num + 1 59 | self.layers['Affine' + str(idx)] = Affine(self.params['W' + str(idx)], self.params['b' + str(idx)]) 60 | 61 | self.last_layer = SoftmaxWithLoss() 62 | 63 | def __init_weight(self, weight_init_std): 64 | """가중치 초기화 65 | 66 | Parameters 67 | ---------- 68 | weight_init_std : 가중치의 표준편차 지정(e.g. 0.01) 69 | 'relu'나 'he'로 지정하면 'He 초깃값'으로 설정 70 | 'sigmoid'나 'xavier'로 지정하면 'Xavier 초깃값'으로 설정 71 | """ 72 | all_size_list = [self.input_size] + self.hidden_size_list + [self.output_size] 73 | for idx in range(1, len(all_size_list)): 74 | scale = weight_init_std 75 | if str(weight_init_std).lower() in ('relu', 'he'): 76 | scale = np.sqrt(2.0 / all_size_list[idx - 1]) # ReLUを使う場合に推奨される初期値 77 | elif str(weight_init_std).lower() in ('sigmoid', 'xavier'): 78 | scale = np.sqrt(1.0 / all_size_list[idx - 1]) # sigmoidを使う場合に推奨される初期値 79 | self.params['W' + str(idx)] = scale * np.random.randn(all_size_list[idx-1], all_size_list[idx]) 80 | self.params['b' + str(idx)] = np.zeros(all_size_list[idx]) 81 | 82 | def predict(self, x, train_flg=False): 83 | for key, layer in self.layers.items(): 84 | if "Dropout" in key or "BatchNorm" in key: 85 | x = layer.forward(x, train_flg) 86 | else: 87 | x = layer.forward(x) 88 | 89 | return x 90 | 91 | def loss(self, x, t, train_flg=False): 92 | """손실 함수를 구한다. 
93 | 94 | Parameters 95 | ---------- 96 | x : 입력 데이터 97 | t : 정답 레이블 98 | """ 99 | y = self.predict(x, train_flg) 100 | 101 | weight_decay = 0 102 | for idx in range(1, self.hidden_layer_num + 2): 103 | W = self.params['W' + str(idx)] 104 | weight_decay += 0.5 * self.weight_decay_lambda * np.sum(W**2) 105 | 106 | return self.last_layer.forward(y, t) + weight_decay 107 | 108 | def accuracy(self, X, T): 109 | Y = self.predict(X, train_flg=False) 110 | Y = np.argmax(Y, axis=1) 111 | if T.ndim != 1 : T = np.argmax(T, axis=1) 112 | 113 | accuracy = np.sum(Y == T) / float(X.shape[0]) 114 | return accuracy 115 | 116 | def numerical_gradient(self, X, T): 117 | """기울기를 구한다(수치 미분). 118 | 119 | Parameters 120 | ---------- 121 | x : 입력 데이터 122 | t : 정답 레이블 123 | 124 | Returns 125 | ------- 126 | 각 층의 기울기를 담은 사전(dictionary) 변수 127 | grads['W1']、grads['W2']、... 각 층의 가중치 128 | grads['b1']、grads['b2']、... 각 층의 편향 129 | """ 130 | loss_W = lambda W: self.loss(X, T, train_flg=True) 131 | 132 | grads = {} 133 | for idx in range(1, self.hidden_layer_num+2): 134 | grads['W' + str(idx)] = numerical_gradient(loss_W, self.params['W' + str(idx)]) 135 | grads['b' + str(idx)] = numerical_gradient(loss_W, self.params['b' + str(idx)]) 136 | 137 | if self.use_batchnorm and idx != self.hidden_layer_num+1: 138 | grads['gamma' + str(idx)] = numerical_gradient(loss_W, self.params['gamma' + str(idx)]) 139 | grads['beta' + str(idx)] = numerical_gradient(loss_W, self.params['beta' + str(idx)]) 140 | 141 | return grads 142 | 143 | def gradient(self, x, t): 144 | # forward 145 | self.loss(x, t, train_flg=True) 146 | 147 | # backward 148 | dout = 1 149 | dout = self.last_layer.backward(dout) 150 | 151 | layers = list(self.layers.values()) 152 | layers.reverse() 153 | for layer in layers: 154 | dout = layer.backward(dout) 155 | 156 | # 결과 저장 157 | grads = {} 158 | for idx in range(1, self.hidden_layer_num+2): 159 | grads['W' + str(idx)] = self.layers['Affine' + str(idx)].dW + self.weight_decay_lambda * self.params['W' + str(idx)] 160 | grads['b' + str(idx)] = self.layers['Affine' + str(idx)].db 161 | 162 | if self.use_batchnorm and idx != self.hidden_layer_num+1: 163 | grads['gamma' + str(idx)] = self.layers['BatchNorm' + str(idx)].dgamma 164 | grads['beta' + str(idx)] = self.layers['BatchNorm' + str(idx)].dbeta 165 | 166 | return grads 167 | -------------------------------------------------------------------------------- /02_deep_learning101/codes/common/optimizer.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | import numpy as np 3 | 4 | class SGD: 5 | 6 | """확률적 경사 하강법(Stochastic Gradient Descent)""" 7 | 8 | def __init__(self, lr=0.01): 9 | self.lr = lr 10 | 11 | def update(self, params, grads): 12 | for key in params.keys(): 13 | params[key] -= self.lr * grads[key] 14 | 15 | 16 | class Momentum: 17 | 18 | """모멘텀 SGD""" 19 | 20 | def __init__(self, lr=0.01, momentum=0.9): 21 | self.lr = lr 22 | self.momentum = momentum 23 | self.v = None 24 | 25 | def update(self, params, grads): 26 | if self.v is None: 27 | self.v = {} 28 | for key, val in params.items(): 29 | self.v[key] = np.zeros_like(val) 30 | 31 | for key in params.keys(): 32 | self.v[key] = self.momentum*self.v[key] - self.lr*grads[key] 33 | params[key] += self.v[key] 34 | 35 | 36 | class Nesterov: 37 | 38 | """Nesterov's Accelerated Gradient (http://arxiv.org/abs/1212.0901)""" 39 | # NAG는 모멘텀에서 한 단계 발전한 방법이다. 
(http://newsight.tistory.com/224)
40 | 
41 |     def __init__(self, lr=0.01, momentum=0.9):
42 |         self.lr = lr
43 |         self.momentum = momentum
44 |         self.v = None
45 | 
46 |     def update(self, params, grads):
47 |         if self.v is None:
48 |             self.v = {}
49 |             for key, val in params.items():
50 |                 self.v[key] = np.zeros_like(val)
51 | 
52 |         for key in params.keys():
53 |             self.v[key] *= self.momentum
54 |             self.v[key] -= self.lr * grads[key]
55 |             params[key] += self.momentum * self.momentum * self.v[key]
56 |             params[key] -= (1 + self.momentum) * self.lr * grads[key]
57 | 
58 | 
59 | class AdaGrad:
60 | 
61 |     """AdaGrad"""
62 | 
63 |     def __init__(self, lr=0.01):
64 |         self.lr = lr
65 |         self.h = None
66 | 
67 |     def update(self, params, grads):
68 |         if self.h is None:
69 |             self.h = {}
70 |             for key, val in params.items():
71 |                 self.h[key] = np.zeros_like(val)
72 | 
73 |         for key in params.keys():
74 |             self.h[key] += grads[key] * grads[key]
75 |             params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)
76 | 
77 | 
78 | class RMSprop:
79 | 
80 |     """RMSprop"""
81 | 
82 |     def __init__(self, lr=0.01, decay_rate=0.99):
83 |         self.lr = lr
84 |         self.decay_rate = decay_rate
85 |         self.h = None
86 | 
87 |     def update(self, params, grads):
88 |         if self.h is None:
89 |             self.h = {}
90 |             for key, val in params.items():
91 |                 self.h[key] = np.zeros_like(val)
92 | 
93 |         for key in params.keys():
94 |             self.h[key] *= self.decay_rate
95 |             self.h[key] += (1 - self.decay_rate) * grads[key] * grads[key]
96 |             params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)
97 | 
98 | 
99 | class Adam:
100 | 
101 |     """Adam (http://arxiv.org/abs/1412.6980v8)"""
102 | 
103 |     def __init__(self, lr=0.001, beta1=0.9, beta2=0.999):
104 |         self.lr = lr
105 |         self.beta1 = beta1
106 |         self.beta2 = beta2
107 |         self.iter = 0
108 |         self.m = None
109 |         self.v = None
110 | 
111 |     def update(self, params, grads):
112 |         if self.m is None:
113 |             self.m, self.v = {}, {}
114 |             for key, val in params.items():
115 |                 self.m[key] = np.zeros_like(val)
116 |                 self.v[key] = np.zeros_like(val)
117 | 
118 |         self.iter += 1
119 |         lr_t = self.lr * np.sqrt(1.0 - self.beta2**self.iter) / (1.0 - self.beta1**self.iter)
120 | 
121 |         for key in params.keys():
122 |             #self.m[key] = self.beta1*self.m[key] + (1-self.beta1)*grads[key]
123 |             #self.v[key] = self.beta2*self.v[key] + (1-self.beta2)*(grads[key]**2)
124 |             self.m[key] += (1 - self.beta1) * (grads[key] - self.m[key])
125 |             self.v[key] += (1 - self.beta2) * (grads[key]**2 - self.v[key])
126 | 
127 |             params[key] -= lr_t * self.m[key] / (np.sqrt(self.v[key]) + 1e-7)
128 | 
129 |             #unbias_m += (1 - self.beta1) * (grads[key] - self.m[key]) # correct bias
130 |             #unbias_b += (1 - self.beta2) * (grads[key]*grads[key] - self.v[key]) # correct bias
131 |             #params[key] -= self.lr * unbias_m / (np.sqrt(unbias_b) + 1e-7)
132 | 
--------------------------------------------------------------------------------
/02_deep_learning101/codes/common/trainer.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | import sys, os
3 | sys.path.append(os.pardir)  # 부모 디렉터리의 파일을 가져올 수 있도록 설정
4 | import numpy as np
5 | from common.optimizer import *
6 | 
7 | class Trainer:
8 |     """신경망 훈련을 대신 해주는 클래스
9 |     """
10 |     def __init__(self, network, x_train, t_train, x_test, t_test,
11 |                  epochs=20, mini_batch_size=100,
12 |                  optimizer='SGD', optimizer_param={'lr':0.01}, 
13 |                  evaluate_sample_num_per_epoch=None, verbose=True):
14 |         self.network = network
15 |         self.verbose = verbose
16 |         self.x_train = x_train
17 |         self.t_train = t_train
18 |         self.x_test = x_test
19 |         self.t_test = t_test
20 |         self.epochs = epochs
21 |         self.batch_size = mini_batch_size
22 |         self.evaluate_sample_num_per_epoch = evaluate_sample_num_per_epoch
23 | 
24 |         # optimizer
25 |         optimizer_class_dict = {'sgd':SGD, 'momentum':Momentum, 'nesterov':Nesterov,
26 |                                 'adagrad':AdaGrad, 'rmsprop':RMSprop, 'adam':Adam}
27 |         self.optimizer = optimizer_class_dict[optimizer.lower()](**optimizer_param)
28 | 
29 |         self.train_size = x_train.shape[0]
30 |         self.iter_per_epoch = max(self.train_size / mini_batch_size, 1)
31 |         self.max_iter = int(epochs * self.iter_per_epoch)
32 |         self.current_iter = 0
33 |         self.current_epoch = 0
34 | 
35 |         self.train_loss_list = []
36 |         self.train_acc_list = []
37 |         self.test_acc_list = []
38 | 
39 |     def train_step(self):
40 |         batch_mask = np.random.choice(self.train_size, self.batch_size)
41 |         x_batch = self.x_train[batch_mask]
42 |         t_batch = self.t_train[batch_mask]
43 | 
44 |         grads = self.network.gradient(x_batch, t_batch)
45 |         self.optimizer.update(self.network.params, grads)
46 | 
47 |         loss = self.network.loss(x_batch, t_batch)
48 |         self.train_loss_list.append(loss)
49 |         if self.verbose: print("train loss:" + str(loss))
50 | 
51 |         if self.current_iter % self.iter_per_epoch == 0:
52 |             self.current_epoch += 1
53 | 
54 |             x_train_sample, t_train_sample = self.x_train, self.t_train
55 |             x_test_sample, t_test_sample = self.x_test, self.t_test
56 |             if self.evaluate_sample_num_per_epoch is not None:
57 |                 t = self.evaluate_sample_num_per_epoch
58 |                 x_train_sample, t_train_sample = self.x_train[:t], self.t_train[:t]
59 |                 x_test_sample, t_test_sample = self.x_test[:t], self.t_test[:t]
60 | 
61 |             train_acc = self.network.accuracy(x_train_sample, t_train_sample)
62 |             test_acc = self.network.accuracy(x_test_sample, t_test_sample)
63 |             self.train_acc_list.append(train_acc)
64 |             self.test_acc_list.append(test_acc)
65 | 
66 |             if self.verbose: print("=== epoch:" + str(self.current_epoch) + ", train acc:" + str(train_acc) + ", test acc:" + str(test_acc) + " ===")
67 |         self.current_iter += 1
68 | 
69 |     def train(self):
70 |         for i in range(self.max_iter):
71 |             self.train_step()
72 | 
73 |         test_acc = self.network.accuracy(self.x_test, self.t_test)
74 | 
75 |         if self.verbose:
76 |             print("=============== Final Test Accuracy ===============")
77 |             print("test acc:" + str(test_acc))
78 | 
79 | 
--------------------------------------------------------------------------------
/02_deep_learning101/codes/common/util.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | import numpy as np
3 | 
4 | 
5 | def smooth_curve(x):
6 |     """손실 함수의 그래프를 매끄럽게 하기 위해 사용
7 | 
8 |     참고:http://glowingpython.blogspot.jp/2012/02/convolution-with-numpy.html
9 |     """
10 |     window_len = 11
11 |     s = np.r_[x[window_len-1:0:-1], x, x[-1:-window_len:-1]]
12 |     w = np.kaiser(window_len, 2)
13 |     y = np.convolve(w/w.sum(), s, mode='valid')
14 |     return y[5:len(y)-5]
15 | 
16 | 
17 | def shuffle_dataset(x, t):
18 |     """데이터셋을 뒤섞는다.
19 | 20 | Parameters 21 | ---------- 22 | x : 훈련 데이터 23 | t : 정답 레이블 24 | 25 | Returns 26 | ------- 27 | x, t : 뒤섞은 훈련 데이터와 정답 레이블 28 | """ 29 | permutation = np.random.permutation(x.shape[0]) 30 | x = x[permutation,:] if x.ndim == 2 else x[permutation,:,:,:] 31 | t = t[permutation] 32 | 33 | return x, t 34 | 35 | def conv_output_size(input_size, filter_size, stride=1, pad=0): 36 | return (input_size + 2*pad - filter_size) / stride + 1 37 | 38 | 39 | def im2col(input_data, filter_h, filter_w, stride=1, pad=0): 40 | """다수의 이미지를 입력받아 2차원 배열로 변환한다(평탄화). 41 | 42 | Parameters 43 | ---------- 44 | input_data : 4차원 배열 형태의 입력 데이터(이미지 수, 채널 수, 높이, 너비) 45 | filter_h : 필터의 높이 46 | filter_w : 필터의 너비 47 | stride : 스트라이드 48 | pad : 패딩 49 | 50 | Returns 51 | ------- 52 | col : 2차원 배열 53 | """ 54 | N, C, H, W = input_data.shape 55 | out_h = (H + 2*pad - filter_h)//stride + 1 56 | out_w = (W + 2*pad - filter_w)//stride + 1 57 | 58 | img = np.pad(input_data, [(0,0), (0,0), (pad, pad), (pad, pad)], 'constant') 59 | col = np.zeros((N, C, filter_h, filter_w, out_h, out_w)) 60 | 61 | for y in range(filter_h): 62 | y_max = y + stride*out_h 63 | for x in range(filter_w): 64 | x_max = x + stride*out_w 65 | col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride] 66 | 67 | col = col.transpose(0, 4, 5, 1, 2, 3).reshape(N*out_h*out_w, -1) 68 | return col 69 | 70 | 71 | def col2im(col, input_shape, filter_h, filter_w, stride=1, pad=0): 72 | """(im2col과 반대) 2차원 배열을 입력받아 다수의 이미지 묶음으로 변환한다. 73 | 74 | Parameters 75 | ---------- 76 | col : 2차원 배열(입력 데이터) 77 | input_shape : 원래 이미지 데이터의 형상(예:(10, 1, 28, 28)) 78 | filter_h : 필터의 높이 79 | filter_w : 필터의 너비 80 | stride : 스트라이드 81 | pad : 패딩 82 | 83 | Returns 84 | ------- 85 | img : 변환된 이미지들 86 | """ 87 | N, C, H, W = input_shape 88 | out_h = (H + 2*pad - filter_h)//stride + 1 89 | out_w = (W + 2*pad - filter_w)//stride + 1 90 | col = col.reshape(N, out_h, out_w, C, filter_h, filter_w).transpose(0, 3, 4, 5, 1, 2) 91 | 92 | img = np.zeros((N, C, H + 2*pad + stride - 1, W + 2*pad + stride - 1)) 93 | for y in range(filter_h): 94 | y_max = y + stride*out_h 95 | for x in range(filter_w): 96 | x_max = x + stride*out_w 97 | img[:, :, y:y_max:stride, x:x_max:stride] += col[:, :, y, x, :, :] 98 | 99 | return img[:, :, pad:H + pad, pad:W + pad] 100 | -------------------------------------------------------------------------------- /02_deep_learning101/codes/dataset/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/codes/dataset/__init__.py -------------------------------------------------------------------------------- /02_deep_learning101/codes/dataset/mnist.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/codes/dataset/mnist.pkl -------------------------------------------------------------------------------- /02_deep_learning101/codes/dataset/mnist.py: -------------------------------------------------------------------------------- 1 | try: 2 | import urllib.request 3 | except ImportError: 4 | raise ImportError('You should use Python 3.x') 5 | import os.path 6 | import gzip 7 | import pickle 8 | import os 9 | import numpy as np 10 | 11 | 12 | url_base = 'http://yann.lecun.com/exdb/mnist/' 13 | key_file = { 14 | 'train_img':'train-images-idx3-ubyte.gz', 15 | 
'train_label':'train-labels-idx1-ubyte.gz',
16 |     'test_img':'t10k-images-idx3-ubyte.gz',
17 |     'test_label':'t10k-labels-idx1-ubyte.gz'
18 | }
19 | 
20 | dataset_dir = os.path.dirname(os.path.abspath(__file__))
21 | save_file = dataset_dir + "/mnist.pkl"
22 | 
23 | train_num = 60000
24 | test_num = 10000
25 | img_dim = (1, 28, 28)
26 | img_size = 784
27 | 
28 | 
29 | def _download(file_name):
30 |     file_path = dataset_dir + "/" + file_name
31 | 
32 |     if os.path.exists(file_path):
33 |         return
34 | 
35 |     print("Downloading " + file_name + " ... ")
36 |     urllib.request.urlretrieve(url_base + file_name, file_path)
37 |     print("Done")
38 | 
39 | def download_mnist():
40 |     for v in key_file.values():
41 |         _download(v)
42 | 
43 | def _load_label(file_name):
44 |     file_path = dataset_dir + "/" + file_name
45 | 
46 |     print("Converting " + file_name + " to NumPy Array ...")
47 |     with gzip.open(file_path, 'rb') as f:
48 |         labels = np.frombuffer(f.read(), np.uint8, offset=8)
49 |     print("Done")
50 | 
51 |     return labels
52 | 
53 | def _load_img(file_name):
54 |     file_path = dataset_dir + "/" + file_name
55 | 
56 |     print("Converting " + file_name + " to NumPy Array ...")
57 |     with gzip.open(file_path, 'rb') as f:
58 |         data = np.frombuffer(f.read(), np.uint8, offset=16)
59 |     data = data.reshape(-1, img_size)
60 |     print("Done")
61 | 
62 |     return data
63 | 
64 | def _convert_numpy():
65 |     dataset = {}
66 |     dataset['train_img'] = _load_img(key_file['train_img'])
67 |     dataset['train_label'] = _load_label(key_file['train_label'])
68 |     dataset['test_img'] = _load_img(key_file['test_img'])
69 |     dataset['test_label'] = _load_label(key_file['test_label'])
70 | 
71 |     return dataset
72 | 
73 | def init_mnist():
74 |     download_mnist()
75 |     dataset = _convert_numpy()
76 |     print("Creating pickle file ...")
77 |     with open(save_file, 'wb') as f:
78 |         pickle.dump(dataset, f, -1)
79 |     print("Done!")
80 | 
81 | def _change_one_hot_label(X):
82 |     T = np.zeros((X.size, 10))
83 |     for idx, row in enumerate(T):
84 |         row[X[idx]] = 1
85 | 
86 |     return T
87 | 
88 | 
89 | def load_mnist(normalize=True, flatten=True, one_hot_label=False):
90 |     if not os.path.exists(save_file):
91 |         init_mnist()
92 | 
93 |     with open(save_file, 'rb') as f:
94 |         dataset = pickle.load(f, encoding='latin1')
95 | 
96 |     if normalize:
97 |         for key in ('train_img', 'test_img'):
98 |             dataset[key] = dataset[key].astype(np.float32)
99 |             dataset[key] /= 255.0
100 | 
101 |     if one_hot_label:
102 |         dataset['train_label'] = _change_one_hot_label(dataset['train_label'])
103 |         dataset['test_label'] = _change_one_hot_label(dataset['test_label'])
104 | 
105 |     if not flatten:
106 |         for key in ('train_img', 'test_img'):
107 |             dataset[key] = dataset[key].reshape(-1, 1, 28, 28)
108 | 
109 |     return (dataset['train_img'], dataset['train_label']), (dataset['test_img'], dataset['test_label'])
110 | 
111 | 
112 | if __name__ == '__main__':
113 |     init_mnist()
114 | 
--------------------------------------------------------------------------------
/02_deep_learning101/codes/dataset/t10k-images-idx3-ubyte.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/codes/dataset/t10k-images-idx3-ubyte.gz
--------------------------------------------------------------------------------
/02_deep_learning101/codes/dataset/t10k-labels-idx1-ubyte.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/codes/dataset/t10k-labels-idx1-ubyte.gz
--------------------------------------------------------------------------------
/02_deep_learning101/codes/dataset/train-images-idx3-ubyte.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/codes/dataset/train-images-idx3-ubyte.gz
--------------------------------------------------------------------------------
/02_deep_learning101/codes/dataset/train-labels-idx1-ubyte.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/codes/dataset/train-labels-idx1-ubyte.gz
--------------------------------------------------------------------------------
/02_deep_learning101/lectureNotes/01_lossOptmization.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/lectureNotes/01_lossOptmization.pdf
--------------------------------------------------------------------------------
/02_deep_learning101/lectureNotes/02_Optimization.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/lectureNotes/02_Optimization.pdf
--------------------------------------------------------------------------------
/02_deep_learning101/lectureNotes/03_ConvolutionalNN.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/lectureNotes/03_ConvolutionalNN.pdf
--------------------------------------------------------------------------------
/02_deep_learning101/lectureNotes/04_TrainingNeuralNetworks.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/lectureNotes/04_TrainingNeuralNetworks.pdf
--------------------------------------------------------------------------------
/02_deep_learning101/lectureNotes/06_RNN_LSTM.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/lectureNotes/06_RNN_LSTM.pdf
--------------------------------------------------------------------------------
/02_deep_learning101/lectureNotes/07_wrapUp.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/02_deep_learning101/lectureNotes/07_wrapUp.pdf
--------------------------------------------------------------------------------
/03_deep_image_processing/README.md:
--------------------------------------------------------------------------------
1 | # Deep Image Processing
--------------------------------------------------------------------------------
/04_deep_reinforcement_learning/README.md:
--------------------------------------------------------------------------------
1 | # Deep Reinforcement Learning
-------------------------------------------------------------------------------- /05_tools/README.md: -------------------------------------------------------------------------------- 1 | # Tools -------------------------------------------------------------------------------- /05_tools/git.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/05_tools/git.pdf -------------------------------------------------------------------------------- /05_tools/jekyll-remote-theme 사용하기.markdown: -------------------------------------------------------------------------------- 1 | 2 | .github.io라는 url로 익숙한 GitHub Pages는 개인 블로그, 특히 개발 블로그 용으로 인기가 높습니다. 이 글에서는 가장 간단하게 수준 급의 GitHub Pages로 static 페이지를 호스팅하는 방법을 소개해 보겠습니다. 3 | 4 | 이미 많은 글들이 GitHub Pages를 사용하는 방법을 설명하고 있지만, 이 글에서 소개하는 방법은 다음과 같은 특징이 있습니다. 5 | 6 | - 쉽고 빠르다 : 프로그램 설치가 없습니다. 이 글에서 소개하는 방법으로는 로컬 컴퓨터에 Ruby, Jekyll 등의 프로그램을 설치하지 않고도 블로그를 만들 수 있습니다. 그렇기 때문에 누구나 쉽게 10분 만에 따라할 수 있습니다. 7 | - 수준 급이다 : 수백 개가 넘는 Jekyll theme이 이미 공개되어 있습니다. 이 글의 방법은 그 중 직접 마음에 드는 theme을 선택하고 그대로 내 블로그에 쓸 수 있게 해줍니다. 덕분에 프론트에 대한 고민 없이도 수준 급의 멋있는 블로그를 가질 수 있습니다. 8 | 9 | 그럼 시작해 볼까요? 10 | 11 | # 1. 새 저장소(repository) 만들기 12 | 13 | GitHub에서 새 저장소(repository)를 만듭니다. 이 때 저장소의 이름을 자신의 username 뒤에 `.github.io`가 붙은 이름으로 짓습니다. 이렇게 해야 `yourname.github.io`의 도메인으로 접속할 수 있는 블로그가 됩니다. 14 | 15 | ![새 프로젝트 만들기](https://files.slack.com/files-pri/T25783BPY-F8YCAF664/screenshot_2018-01-26_16.02.44.png?pub_secret=615fd6f28e) 16 | 17 | > 프로젝트 별 페이지 만들기 18 | > 19 | > 프로젝트 별로 페이지를 만들 수도 있습니다. 이 때는 프로젝트 이름이 `yourname.github.io`가 아니어도 상관 없으며 이미 존재하는 프로젝트에 페이지를 만들 수도 있습니다. 다만 이렇게 만든 페이지는 `yourname.github.io/projectname`의 url로 접속하게 됩니다. 20 | 21 | # 2. 마음에 드는 Jekyll 테마 찾기 22 | 23 | GitHub에는 이미 [수백 개의 Jekyll 테마](https://github.com/topics/jekyll-theme)가 공개되어 있습니다. jekyll-theme이라는 topic을 갖고 있는 repository는 모두 공개된 jekyll 테마인데요. 이 중 어떤 테마든 내 블로그에 적용하고 싶은 테마를 선택해 보세요. 24 | 25 | ![공개된 Jekyll 테마들](https://files.slack.com/files-pri/T25783BPY-F8Z1NP17W/screenshot_2018-01-26_16.18.35.png?pub_secret=66a1198973) 26 | 27 | 이 글에서는 테마 리스트에서 가장 위에 보이는 minimal-mistakes라는 테마를 사용해 보겠습니다. 28 | 29 | # 3. `_config.yml` 파일 가져오기 30 | 31 | * 선택한 테마 저장소(repository)에 들어가서 `_config.yml`라는 파일의 내용을 복사합니다. 이때 **Raw**를 누르면 군더더기 없이 파일의 내용만 볼 수 있습니다. 이 파일은 모든 Jekyll 사이트가 갖고 있는 설정 파일입니다. 32 | * 자신이 방금 만든 저장소(repository)에 똑같이 `_config.yml`이라는 이름의 파일을 만들고 복사한 내용을 붙여넣기 합니다. 33 | * 이 파일에서 다음과 같이 한 줄을 추가하고 두 줄을 변경해야 합니다. 34 | 35 | 다음과 같은 한 줄을 추가합니다. 36 | 37 | ```yaml 38 | remote_theme : mmistakes/minimal-mistakes 39 | ``` 40 | 41 | 이는 remote_theme을 지정하는 설정이며 GitHub의 `OWNER/REPOSITORY` 의 형식으로 이루어집니다. 아래는 mmistakes라는 owner의 minimal-mistakes라는 저장소의 Jekyll 테마를 가져오겠다는 뜻입니다. 42 | 43 | 다음과 같이 설정 중에 url과 baseurl 두 줄을 수정합니다. url은 `yourname`을 자신의 username으로 바꾸어야 하며, baseurl은 빈 문자열로 두어야 합니다. 44 | 45 | ```yaml 46 | url : "https://yourname.github.io" # the base hostname & protocol for your site e.g. "https://mmistakes.github.io" 47 | baseurl : "" # the subpath of your site, e.g. "/blog" 48 | ``` 49 | 50 | > 프로젝트 별 페이지 만들기 51 | > 52 | > 프로젝트 별 페이지를 만들 때 baseurl은 `/projectname`처럼 써야 합니다. 이렇게 두어야 `https://yourname.github.io/projectname`의 url로 접속할 수 있습니다. 53 | 54 | 제가 선택한 minimal-mistakes의 원본 `_config.yml` 중 일부분을 가져와 보았습니다. 55 | 56 | ```yaml 57 | # Welcome to Jekyll! 58 | # 59 | # This config file is meant for settings that affect your entire site, values 60 | # which you are expected to set up once and rarely need to edit after that. 
61 | # For technical reasons, this file is *NOT* reloaded automatically when you use 62 | # `jekyll serve`. If you change this file, please restart the server process. 63 | 64 | minimal_mistakes_skin : "default" # "air", "aqua", "contrast", "dark", "dirt", "neon", "mint", "plum", "sunrise" 65 | 66 | # Site Settings 67 | locale : "en-US" 68 | title : "Site Title" 69 | title_separator : "-" 70 | name : "Your Name" 71 | description : "An amazing website." 72 | url : # the base hostname & protocol for your site e.g. "https://mmistakes.github.io" 73 | baseurl : # the subpath of your site, e.g. "/blog" 74 | repository : # GitHub username/repo-name e.g. "mmistakes/minimal-mistakes" 75 | (뒷부분 생략) 76 | ``` 77 | 78 | 수정된 설정 파일은 다음과 같습니다. 79 | 80 | ```yaml 81 | # Welcome to Jekyll! 82 | # 83 | # This config file is meant for settings that affect your entire site, values 84 | # which you are expected to set up once and rarely need to edit after that. 85 | # For technical reasons, this file is *NOT* reloaded automatically when you use 86 | # `jekyll serve`. If you change this file, please restart the server process. 87 | 88 | remote_theme : mmistakes/minimal-mistakes 89 | minimal_mistakes_skin : "default" # "air", "aqua", "contrast", "dark", "dirt", "neon", "mint", "plum", "sunrise" 90 | 91 | # Site Settings 92 | locale : "en-US" 93 | title : "Site Title" 94 | title_separator : "-" 95 | name : "Your Name" 96 | description : "An amazing website." 97 | url : "https://yourname.github.io" # the base hostname & protocol for your site e.g. "https://mmistakes.github.io" 98 | baseurl : "" # the subpath of your site, e.g. "/blog" 99 | repository : # GitHub username/repo-name e.g. "mmistakes/minimal-mistakes" 100 | (뒷부분 생략) 101 | ``` 102 | 103 | 설정 파일에서 이외에도 제목, 저자, 설명, 페이스북 설정 등 원하는 대로 설정을 바꾸셔도 됩니다. 104 | 105 | # 4. index 파일 가져오기 106 | 107 | * 선택한 테마 저장소(repository)에서 `index.html`, `index.md` 등의 파일을 찾습니다. 이 예시에서 사용하는 테마는 `index.html`이라는 이름의 파일로 있습니다. 이 파일이 Jekyll이 사이트를 생성할 때 가장 처음 보여주는 페이지입니다. 108 | * `index` 파일과 동일한 이름과 동일한 내용의 파일을 자신의 저장소(repository)에서도 만듭니다. 이 예시에서 `index.html`의 내용은 다음과 같이 몇 줄 되지 않습니다. 109 | 110 | ```markdown 111 | --- 112 | layout: home 113 | author_profile: true 114 | --- 115 | ``` 116 | 117 | ![index.html 파일 생성하기](https://files.slack.com/files-pri/T25783BPY-F90EHDNK0/screenshot_2018-01-27_14.52.30.png?pub_secret=93cc25e567) 118 | 119 | # 5. `_posts`에 새 글 쓰기 120 | 121 | * 자신의 GitHub 저장소에서 **Create new file**을 눌러 새 파일을 생성합니다. 122 | * 파일 이름에 `_posts/2018-01-26-first-post.md`을 입력합니다. 이 때 `/` 앞의 `_posts`는 폴더 이름으로 인식되며 GitHub에서 자동으로 폴더를 생성합니다. Jekyll은 `_posts` 아래의 markdown 글들을 블로그 포스트로 인식하고 블로그에서 보여주는데요. 파일 이름은 일반적으로 `YYYY-MM-DD-name-of-post.md` 형식으로 짓습니다. 123 | * 파일 내용 맨 처음에는 다음과 같이 글의 제목과 날짜 등을 적어줍니다. 이 형식은 [Front matter](https://jekyllrb.com/docs/frontmatter/)라고 불리며, Jekyll에서 글의 메타 데이터를 확인하는 방법입니다. 테마에 따라서 여기에 들어가는 정보가 조금씩 다를 수 있으니 각 테마 별로 예시를 확인해 주세요. 124 | 125 | ```markdown 126 | --- 127 | title: "Welcome to Jekyll!" 128 | date: 2017-10-20 08:26:28 -0400 129 | categories: jekyll update 130 | --- 131 | ``` 132 | 133 | * 이 뒤의 내용은 무엇을 쓰셔도 좋습니다. 다음은 제가 사용한 예시 파일의 전문입니다. 134 | 135 | ```markdown 136 | --- 137 | title: "Welcome to Jekyll!" 138 | date: 2017-10-20 08:26:28 -0400 139 | categories: jekyll update 140 | --- 141 | You’ll find this post in your `_posts` directory. Go ahead and edit it and re-build the site to see your changes. 
You can rebuild the site in many different ways, but the most common way is to run `jekyll serve`, which launches a web server and auto-regenerates your site when a file is updated. 142 | 143 | To add new posts, simply add a file in the `_posts` directory that follows the convention `YYYY-MM-DD-name-of-post.ext` and includes the necessary front matter. Take a look at the source for this post to get an idea about how it works. 144 | 145 | Jekyll also offers powerful support for code snippets: 146 | 147 | ​```python 148 | def print_hi(name): 149 | print("hello", name) 150 | print_hi('Tom') 151 | ​``` 152 | 153 | Check out the [Jekyll docs][jekyll-docs] for more info on how to get the most out of Jekyll. File all bugs/feature requests at [Jekyll’s GitHub repo][jekyll-gh]. If you have questions, you can ask them on [Jekyll Talk][jekyll-talk]. 154 | 155 | [jekyll-docs]: https://jekyllrb.com/docs/home 156 | [jekyll-gh]: https://github.com/jekyll/jekyll 157 | [jekyll-talk]: https://talk.jekyllrb.com/ 158 | ``` 159 | 160 | ![새 포스트 만들기](https://files.slack.com/files-pri/T25783BPY-F8Z2HJ9PC/screenshot_2018-01-26_17.06.25.png?pub_secret=c19592b80f) 161 | 162 | # 확인하기 163 | 164 | `yourname.github.io`로 접속하여 GitHub Pages가 잘 만들어졌는지 확인할 수 있습니다. 165 | 166 | ![만들어진 minimal-mistakes 테마의 GitHub Pages 블로그](https://files.slack.com/files-pri/T25783BPY-F8YCMJQHW/screenshot_2018-01-26_16.52.30.png?pub_secret=04ae040a08) 167 | 168 | > 프로젝트 별 페이지 만들기 169 | > 170 | > 프로젝트 별 페이지를 만들 때는 한가지 과정이 더 필요합니다. 바로 GitHub 설정에서 Gihub Pages를 활성화하는 것입니다. (`yourname.github.io`라는 이름의 저장소는 이 설정이 자동으로 활성화되어 있습니다.) 171 | > 172 | > GitHub 저장소 메뉴 중에 **Settings**에 들어간 뒤 스크롤을 내리면 GitHub Pages 설정을 볼 수 있습니다. 여기에서 **Source**를 **master branch**로 설정한 뒤 Save를 누르면 GitHub Pages 설정이 완료되고, 접속할 수 있는 url이 나타납니다. 이 url로 들어가 GitHub Pages가 제대로 작동하는지 확인할 수 있습니다. url은 `https://yourname.github.io/projectname`의 형식으로 보이게 됩니다. 173 | 174 | ![GitHub Pages 활성화하기](https://files.slack.com/files-pri/T25783BPY-F8ZJAPDD2/screenshot_2018-01-27_15.02.43.png?pub_secret=b0da820a8c) 175 | 176 | # 마치며 177 | 178 | 이 글에서 소개드린 방법의 핵심은 [jekyll remote theme](https://github.com/benbalter/jekyll-remote-theme)이었습니다. Jekyll remote theme은 GitHub에 공개되어 있는 어떤 Jekyll 테마든지 가져와서 쓸 수 있도록 하는 Jekyll의 플러그인입니다. [Jekyll remote theme이 GitHub Pages에 내장](https://github.com/blog/2464-use-any-theme-with-github-pages)되어 있었기 때문에 이 글에서 소개한 방법이 가능했던 것입니다. 179 | 180 | 물론 이 방법으로는 블로그의 css 등을 커스터마이징하는 것은 불가능합니다. 커스터마이징을 하기 위해서는 자신만의 `_layouts`, `_includes`, `_sass`, `assets` 등의 폴더들을 만들어야 합니다. (다른 말로 하면, jekyll remote theme은 이 폴더들의 내용을 지정한 테마에서 가져옵니다.) 또한 자신의 로컬 컴퓨터에 Jekyll 환경을 설치하고 빌드해 보아야 합니다. 자세한 내용은 [Jekyll의 공식 웹사이트](https://jekyllrb.com)를 참고해야 합니다. 181 | 182 | 커스터마이징을 하지 않고 이미 공개된 테마를 그대로 쓰고 싶다면 remote theme은 최소한의 설정으로 원하는 Jekyll 테마를 사용할 수 있는 쉽고 빠른 방법입니다. 
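공개된 테마를 로컬에서 직접 빌드해 보는 과정을 간단히 정리하면 다음과 같습니다. 아래는 Ruby와 RubyGems가 이미 설치되어 있다고 가정한 최소 예시 스케치이며, `yourname`은 예시용 자리표시자입니다. `remote_theme` 등 테마 구성에 따라 추가로 필요한 gem(예: jekyll-remote-theme)이 있을 수 있으니, 정확한 설정은 위에서 소개한 Jekyll 공식 웹사이트를 참고해 주세요.

```shell
# Jekyll 설치 (Ruby/RubyGems가 설치되어 있다고 가정)
$ gem install bundler jekyll

# 자신의 GitHub Pages 저장소를 로컬로 가져오기 (yourname은 예시용 자리표시자)
$ git clone https://github.com/yourname/yourname.github.io.git
$ cd yourname.github.io

# 로컬 서버 실행 후 웹브라우저에서 http://localhost:4000 으로 접속해 확인
$ jekyll serve
```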
-------------------------------------------------------------------------------- /05_tools/slack.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/05_tools/slack.pdf -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code01_TF.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# TensorFlow 기본 구조 이해" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import tensorflow as tf\n", 17 | "print(tf.__version__)" 18 | ] 19 | }, 20 | { 21 | "cell_type": "markdown", 22 | "metadata": {}, 23 | "source": [ 24 | "### 그래프 생성\n", 25 | "\n", 26 | "* 변수를 생성하고 계산을 한다는 것은 그래프를 만드는 과정\n", 27 | "* `a`를 출력하면 **계산된 값**이 나오지 않음" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": null, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "a = tf.add(3, 5)" 37 | ] 38 | }, 39 | { 40 | "cell_type": "code", 41 | "execution_count": null, 42 | "metadata": {}, 43 | "outputs": [], 44 | "source": [ 45 | "print(a)" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "### Session 실행\n", 53 | "\n", 54 | "* 위의 과정은 그래프 형태만 만들어놓음\n", 55 | "* 실제 계산은 `tf.Session()`을 실행하여 계산함\n", 56 | "* 마치 파이프에 물(데이터)을 흘려보내는 것과 비슷함\n", 57 | "* `tf.Session()`을 열면 TF default로 GPU 메모리를 다 잡아버림\n", 58 | " * 그것을 방지하기 위해 `gpu_options`을 다음과 같이 준다\n", 59 | "* GPU 사용량 확인하는 명령어\n", 60 | " * `nvidia-smi`\n", 61 | " * `watch`라는 명령어와 함께 쓰면 계속 갱신하면서 메모리 변화를 볼 수 있음\n", 62 | " * `watch -n 1 nvidia-smi`" 63 | ] 64 | }, 65 | { 66 | "cell_type": "code", 67 | "execution_count": null, 68 | "metadata": {}, 69 | "outputs": [], 70 | "source": [ 71 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))\n", 72 | "#sess = tf.Session()\n", 73 | "sess = tf.Session(config=sess_config)\n", 74 | "print(sess.run(a))\n", 75 | "sess.close()" 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "metadata": {}, 81 | "source": [ 82 | "### `Session`을 `with` 구문으로\n", 83 | "\n", 84 | "* `session`을 열면 `sess.close()`로 명시적으로 닫아줘야 한다.\n", 85 | "* `with` 구문이 끝나면 알아서 `session`이 닫힌다." 
86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": null, 91 | "metadata": {}, 92 | "outputs": [], 93 | "source": [ 94 | "with tf.Session(config=sess_config) as sess:\n", 95 | " print(sess.run(a))" 96 | ] 97 | }, 98 | { 99 | "cell_type": "markdown", 100 | "metadata": {}, 101 | "source": [ 102 | "### `sess.run()`대신 `eval()`을 쓸 수 있다" 103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "execution_count": null, 108 | "metadata": {}, 109 | "outputs": [], 110 | "source": [ 111 | "with tf.Session(config=sess_config) as sess:\n", 112 | " print(a.eval())" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "### `tf.InteractiveSession()`" 120 | ] 121 | }, 122 | { 123 | "cell_type": "code", 124 | "execution_count": null, 125 | "metadata": {}, 126 | "outputs": [], 127 | "source": [ 128 | "#sess = tf.Session(config=sess_config)\n", 129 | "sess = tf.InteractiveSession(config=sess_config)\n", 130 | "print(a.eval())\n", 131 | "sess.close()" 132 | ] 133 | }, 134 | { 135 | "cell_type": "markdown", 136 | "metadata": {}, 137 | "source": [ 138 | "### 약간 더 복잡한 계산" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": null, 144 | "metadata": {}, 145 | "outputs": [], 146 | "source": [ 147 | "x = 2\n", 148 | "y = 3\n", 149 | "w = tf.add(x, y)\n", 150 | "z = tf.multiply(x, y)\n", 151 | "p = tf.pow(z, w)\n", 152 | "with tf.Session(config=sess_config) as sess:\n", 153 | " print(sess.run(p))" 154 | ] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "metadata": {}, 159 | "source": [ 160 | "### `Tensor`변수와 일반 `python` 변수" 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": null, 166 | "metadata": {}, 167 | "outputs": [], 168 | "source": [ 169 | "import numpy as np\n", 170 | "\n", 171 | "x = 2\n", 172 | "y = 3\n", 173 | "w = x + y\n", 174 | "z = x * y\n", 175 | "p = np.power(z, w)\n", 176 | "with tf.Session(config=sess_config) as sess:\n", 177 | " print(sess.run(p)) # p는 sess.run()에 넣는 Tensor가 아니고 일반 python 변수이기 때문에 에러가 난다" 178 | ] 179 | }, 180 | { 181 | "cell_type": "code", 182 | "execution_count": null, 183 | "metadata": {}, 184 | "outputs": [], 185 | "source": [ 186 | "x = tf.constant(2)\n", 187 | "y = tf.constant(3)\n", 188 | "w = x + y\n", 189 | "z = x * y\n", 190 | "p = tf.pow(z, w)\n", 191 | "with tf.Session(config=sess_config) as sess:\n", 192 | " print(sess.run(p))" 193 | ] 194 | } 195 | ], 196 | "metadata": { 197 | "kernelspec": { 198 | "display_name": "Python 3", 199 | "language": "python", 200 | "name": "python3" 201 | }, 202 | "language_info": { 203 | "codemirror_mode": { 204 | "name": "ipython", 205 | "version": 3 206 | }, 207 | "file_extension": ".py", 208 | "mimetype": "text/x-python", 209 | "name": "python", 210 | "nbconvert_exporter": "python", 211 | "pygments_lexer": "ipython3", 212 | "version": "3.6.2" 213 | } 214 | }, 215 | "nbformat": 4, 216 | "nbformat_minor": 2 217 | } 218 | -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code02_tensorboard.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Tensorboard 사용법" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import tensorflow as tf\n", 17 | "\n", 18 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))" 19 | ] 20 | }, 
21 | { 22 | "cell_type": "markdown", 23 | "metadata": {}, 24 | "source": [ 25 | "### `tf.summary.FileWriter`를 사용하여 그래프를 그림\n", 26 | "\n", 27 | "* [`tf.summary.FileWriter`](https://www.tensorflow.org/api_docs/python/tf/summary/FileWriter)\n", 28 | "* 그래프를 확인할 때는 아래의 명령어를 입력함\n", 29 | "\n", 30 | "```shell\n", 31 | "$ tensorboard --logdir graphs\n", 32 | "```\n", 33 | "\n", 34 | "* 그리고 웹브라우저를 열어 `localhost:6006`을 입력함" 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": null, 40 | "metadata": {}, 41 | "outputs": [], 42 | "source": [ 43 | "a = tf.constant(2)\n", 44 | "b = tf.constant(3)\n", 45 | "x = tf.add(a, b)\n", 46 | "\n", 47 | "with tf.Session(config=sess_config) as sess:\n", 48 | " # add this line to use TensorBoard.\n", 49 | " writer = tf.summary.FileWriter(\"./graphs/code02_1\", sess.graph)\n", 50 | " print(sess.run(x))\n", 51 | "writer.close() # close the writer when you’re done using it" 52 | ] 53 | }, 54 | { 55 | "cell_type": "markdown", 56 | "metadata": {}, 57 | "source": [ 58 | "### 같은 내용을 한번 더 실행해보자\n", 59 | "\n", 60 | "* 실행 후 `tensorboard`를 열어서 그래프 모양을 확인해보자" 61 | ] 62 | }, 63 | { 64 | "cell_type": "code", 65 | "execution_count": null, 66 | "metadata": {}, 67 | "outputs": [], 68 | "source": [ 69 | "a = tf.constant(2)\n", 70 | "b = tf.constant(3)\n", 71 | "x = tf.add(a, b)\n", 72 | "\n", 73 | "with tf.Session(config=sess_config) as sess:\n", 74 | " # add this line to use TensorBoard.\n", 75 | " writer = tf.summary.FileWriter(\"./graphs/code02_2\", sess.graph)\n", 76 | " print(sess.run(x))\n", 77 | "writer.close() # close the writer when you’re done using it" 78 | ] 79 | }, 80 | { 81 | "cell_type": "markdown", 82 | "metadata": {}, 83 | "source": [ 84 | "### 같은 내용을 한번 더 실행해보자\n", 85 | "\n", 86 | "* 이번엔 위에 만들었던 그래프들을 지우고 다시 그래프를 그려보자\n", 87 | "* 실행 후 `tensorboard`를 열어서 그래프 모양을 확인해보자" 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": null, 93 | "metadata": {}, 94 | "outputs": [], 95 | "source": [ 96 | "# Only necessary if you use IDLE or a jupyter notebook\n", 97 | "tf.reset_default_graph()\n", 98 | "\n", 99 | "a = tf.constant(2)\n", 100 | "b = tf.constant(3)\n", 101 | "x = tf.add(a, b)\n", 102 | "\n", 103 | "with tf.Session(config=sess_config) as sess:\n", 104 | " # add this line to use TensorBoard.\n", 105 | " writer = tf.summary.FileWriter(\"./graphs/code02_3\", sess.graph)\n", 106 | " print(sess.run(x))\n", 107 | "writer.close() # close the writer when you’re done using it" 108 | ] 109 | }, 110 | { 111 | "cell_type": "markdown", 112 | "metadata": {}, 113 | "source": [ 114 | "### Explicitly Name\n", 115 | "\n", 116 | "* 명시적으로 변수에 이름을 정하지 않으면 tensorflow 내부적으로 tensorflow 기본이름을 붙여준다.\n", 117 | " * `Const`, `Const_1`, `Const_2`, 이런식으로 같은 타입의 변수들은 자동으로 숫자가 붙는다." 
118 | ] 119 | }, 120 | { 121 | "cell_type": "code", 122 | "execution_count": null, 123 | "metadata": {}, 124 | "outputs": [], 125 | "source": [ 126 | "# Only necessary if you use IDLE or a jupyter notebook\n", 127 | "tf.reset_default_graph()\n", 128 | "\n", 129 | "a = tf.constant(2, name='a')\n", 130 | "b = tf.constant(3, name='b')\n", 131 | "x = tf.add(a, b, name='add')\n", 132 | "with tf.Session() as sess:\n", 133 | " # add this line to use TensorBoard.\n", 134 | " writer = tf.summary.FileWriter(\"./graphs/code02_4\", sess.graph)\n", 135 | " print(sess.run(x))\n", 136 | "writer.close() # close the writer when you’re done using it" 137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": {}, 142 | "source": [ 143 | "## 직접 실습" 144 | ] 145 | }, 146 | { 147 | "cell_type": "markdown", 148 | "metadata": {}, 149 | "source": [ 150 | "### 본인이 직접 연산을 구현하고 Tensorboard로 확인 할 것" 151 | ] 152 | }, 153 | { 154 | "cell_type": "code", 155 | "execution_count": null, 156 | "metadata": {}, 157 | "outputs": [], 158 | "source": [ 159 | "# TODO" 160 | ] 161 | } 162 | ], 163 | "metadata": { 164 | "kernelspec": { 165 | "display_name": "Python 3", 166 | "language": "python", 167 | "name": "python3" 168 | }, 169 | "language_info": { 170 | "codemirror_mode": { 171 | "name": "ipython", 172 | "version": 3 173 | }, 174 | "file_extension": ".py", 175 | "mimetype": "text/x-python", 176 | "name": "python", 177 | "nbconvert_exporter": "python", 178 | "pygments_lexer": "ipython3", 179 | "version": "3.6.2" 180 | } 181 | }, 182 | "nbformat": 4, 183 | "nbformat_minor": 2 184 | } 185 | -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code03_TF_dimension.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# TensorFlow Dimension" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import tensorflow as tf\n", 17 | "\n", 18 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))" 19 | ] 20 | }, 21 | { 22 | "cell_type": "markdown", 23 | "metadata": {}, 24 | "source": [ 25 | "### Dimension과 자주 쓰이는 Constant Value Tensors API를 함께 알아보자\n", 26 | "\n", 27 | "* [Constant Value Tensors](https://www.tensorflow.org/api_guides/python/constant_op#Constant_Value_Tensors)\n", 28 | " * `tf.constant`\n", 29 | " * `tf.zeros`\n", 30 | " * `tf.zeros_like`\n", 31 | " * `tf.ones`\n", 32 | " * `tf.ones_like`\n", 33 | " * `tf.fill`\n", 34 | "* Sequences Tensors\n", 35 | " * `tf.linspace`\n", 36 | " * `tf.range`" 37 | ] 38 | }, 39 | { 40 | "cell_type": "code", 41 | "execution_count": null, 42 | "metadata": {}, 43 | "outputs": [], 44 | "source": [ 45 | "# rank 0 tensor: scalar\n", 46 | "a = tf.constant(3)" 47 | ] 48 | }, 49 | { 50 | "cell_type": "code", 51 | "execution_count": null, 52 | "metadata": {}, 53 | "outputs": [], 54 | "source": [ 55 | "# rank 1 tensor: vector\n", 56 | "b = tf.zeros([2])" 57 | ] 58 | }, 59 | { 60 | "cell_type": "code", 61 | "execution_count": null, 62 | "metadata": {}, 63 | "outputs": [], 64 | "source": [ 65 | "# rank 2 tensor: matrix\n", 66 | "c = tf.ones([2, 3])" 67 | ] 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": null, 72 | "metadata": {}, 73 | "outputs": [], 74 | "source": [ 75 | "# rank 3 tensor: 3-tensor\n", 76 | "d = tf.fill([2, 3, 4], 3)" 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | 
"execution_count": null, 82 | "metadata": {}, 83 | "outputs": [], 84 | "source": [ 85 | "# ones_like\n", 86 | "e = tf.ones_like(d)" 87 | ] 88 | }, 89 | { 90 | "cell_type": "code", 91 | "execution_count": null, 92 | "metadata": {}, 93 | "outputs": [], 94 | "source": [ 95 | "# linespace\n", 96 | "f = tf.linspace(1.0, 5.0, 4)" 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "execution_count": null, 102 | "metadata": {}, 103 | "outputs": [], 104 | "source": [ 105 | "print(a)\n", 106 | "print(b)\n", 107 | "print(c)\n", 108 | "print(d)\n", 109 | "print(e)\n", 110 | "print(f)" 111 | ] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "execution_count": null, 116 | "metadata": {}, 117 | "outputs": [], 118 | "source": [ 119 | "print(a.shape)\n", 120 | "print(b.shape)\n", 121 | "print(c.shape)\n", 122 | "print(d.shape)\n", 123 | "print(e.shape)\n", 124 | "print(f.shape)" 125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": null, 130 | "metadata": {}, 131 | "outputs": [], 132 | "source": [ 133 | "print(a.name)\n", 134 | "print(b.name)\n", 135 | "print(c.name)\n", 136 | "print(d.name)\n", 137 | "print(e.name)\n", 138 | "print(f.name)" 139 | ] 140 | }, 141 | { 142 | "cell_type": "markdown", 143 | "metadata": {}, 144 | "source": [ 145 | "### `tf.Session()`을 사용하여 값을 출력해보자" 146 | ] 147 | }, 148 | { 149 | "cell_type": "code", 150 | "execution_count": null, 151 | "metadata": {}, 152 | "outputs": [], 153 | "source": [ 154 | "with tf.Session(config=sess_config) as sess:\n", 155 | " print(sess.run(a), '\\n')\n", 156 | " print(sess.run(b), '\\n')\n", 157 | " print(sess.run(c), '\\n')\n", 158 | " print(sess.run(d), '\\n')\n", 159 | " print(sess.run(e), '\\n')\n", 160 | " print(sess.run(f))" 161 | ] 162 | }, 163 | { 164 | "cell_type": "markdown", 165 | "metadata": {}, 166 | "source": [ 167 | "### 여러 변수들 session을 이용하여 한번에 실행" 168 | ] 169 | }, 170 | { 171 | "cell_type": "code", 172 | "execution_count": null, 173 | "metadata": {}, 174 | "outputs": [], 175 | "source": [ 176 | "with tf.Session(config=sess_config) as sess:\n", 177 | " u, v, w, x, y, z = sess.run([a, b, c, d, e, f])\n", 178 | " #print(u, v, w, x, y, z)\n", 179 | " print(u, '\\n')\n", 180 | " print(v, '\\n')\n", 181 | " print(w, '\\n')\n", 182 | " print(x, '\\n')\n", 183 | " print(y, '\\n')\n", 184 | " print(z)" 185 | ] 186 | }, 187 | { 188 | "cell_type": "markdown", 189 | "metadata": {}, 190 | "source": [ 191 | "### 자주 쓰는 Random Tensors 들도 사용해보자\n", 192 | "\n", 193 | "* [Random Tensors](https://www.tensorflow.org/api_guides/python/constant_op#Random_Tensors)\n", 194 | " * `tf.random_normal`\n", 195 | " * `tf.truncated_normal`\n", 196 | " * `tf.random_uniform`\n", 197 | " * `tf.random_shuffle`\n", 198 | " * `tf.random_crop`\n", 199 | " * `tf.multinomial`\n", 200 | " * `tf.random_gamma`\n", 201 | " * `tf.set_random_seed`" 202 | ] 203 | }, 204 | { 205 | "cell_type": "code", 206 | "execution_count": null, 207 | "metadata": {}, 208 | "outputs": [], 209 | "source": [ 210 | "tf.set_random_seed(219)" 211 | ] 212 | }, 213 | { 214 | "cell_type": "code", 215 | "execution_count": null, 216 | "metadata": {}, 217 | "outputs": [], 218 | "source": [ 219 | "g = tf.random_normal([2, 3])\n", 220 | "h = tf.random_uniform([2, 3, 4])" 221 | ] 222 | }, 223 | { 224 | "cell_type": "code", 225 | "execution_count": null, 226 | "metadata": {}, 227 | "outputs": [], 228 | "source": [ 229 | "print(g)\n", 230 | "print(h)" 231 | ] 232 | }, 233 | { 234 | "cell_type": "code", 235 | "execution_count": null, 236 | "metadata": {}, 237 | "outputs": [], 238 | 
"source": [ 239 | "print(g.shape)\n", 240 | "print(h.shape)" 241 | ] 242 | }, 243 | { 244 | "cell_type": "code", 245 | "execution_count": null, 246 | "metadata": {}, 247 | "outputs": [], 248 | "source": [ 249 | "print(g.name)\n", 250 | "print(h.name)" 251 | ] 252 | }, 253 | { 254 | "cell_type": "markdown", 255 | "metadata": {}, 256 | "source": [ 257 | "### `tf.Session()`을 사용하여 값을 출력해보자" 258 | ] 259 | }, 260 | { 261 | "cell_type": "code", 262 | "execution_count": null, 263 | "metadata": {}, 264 | "outputs": [], 265 | "source": [ 266 | "with tf.Session(config=sess_config) as sess:\n", 267 | " p, q = sess.run([g, h])\n", 268 | " print(p, '\\n')\n", 269 | " print('--------')\n", 270 | " print(q, '\\n')" 271 | ] 272 | }, 273 | { 274 | "cell_type": "markdown", 275 | "metadata": {}, 276 | "source": [ 277 | "## 직접 실습" 278 | ] 279 | }, 280 | { 281 | "cell_type": "markdown", 282 | "metadata": {}, 283 | "source": [ 284 | "### 2D Tensor (matrix) 더하기 및 곱하기\n", 285 | "\n", 286 | "* 2 x 2 random matrix 생성\n", 287 | " * x = [ [1, 2], [3, 4] ]\n", 288 | " * y = [ [5, 6], [7, 8] ]\n", 289 | "* elementwise 더하기\n", 290 | "* elementwise 곱하기\n", 291 | "* matrix 곱하기" 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": null, 297 | "metadata": {}, 298 | "outputs": [], 299 | "source": [ 300 | "# TODO\n", 301 | "x = tf.convert_to_tensor([ [1, 2], [3, 4] ])\n", 302 | "y = tf.convert_to_tensor([ [5, 6], [7, 8] ])\n", 303 | "# z: elementwise 더하기\n", 304 | "z = x + y\n", 305 | "# w: elementwise 곱하기\n", 306 | "w = x * y\n", 307 | "# v: matrix 곱하기\n", 308 | "v = tf.matmul(x, y)\n", 309 | "\n", 310 | "with tf.Session(config=sess_config) as sess:\n", 311 | " print(sess.run(x))\n", 312 | " print(sess.run(y))\n", 313 | " print(sess.run(z))\n", 314 | " print(sess.run(w))\n", 315 | " print(sess.run(v))" 316 | ] 317 | }, 318 | { 319 | "cell_type": "markdown", 320 | "metadata": {}, 321 | "source": [ 322 | "##### output\n", 323 | "```\n", 324 | "[[1 2]\n", 325 | " [3 4]]\n", 326 | "[[5 6]\n", 327 | " [7 8]]\n", 328 | "[[ 6 8]\n", 329 | " [10 12]]\n", 330 | "[[ 5 12]\n", 331 | " [21 32]]\n", 332 | "[[19 22]\n", 333 | " [43 50]]\n", 334 | "```" 335 | ] 336 | }, 337 | { 338 | "cell_type": "markdown", 339 | "metadata": {}, 340 | "source": [ 341 | "### 3D Tensor 단면 자르기\n", 342 | "* range를 이용하여 3 x 2 x 2 = 12 element list 생성\n", 343 | "* tf.convert_to_tensor 및 tf.reshape을 통해 3D Tensor 변환" 344 | ] 345 | }, 346 | { 347 | "cell_type": "code", 348 | "execution_count": null, 349 | "metadata": {}, 350 | "outputs": [], 351 | "source": [ 352 | "# TODO\n", 353 | "#x = list(range(12))\n", 354 | "#x = tf.convert_to_tensor(x)\n", 355 | "x = tf.range(12)\n", 356 | "x = tf.reshape(x, [3, 2, 2])\n", 357 | "\n", 358 | "# index를 이용하여 slice\n", 359 | "y = x[0,:,:] + x[2,:,:]\n", 360 | "\n", 361 | "with tf.Session(config=sess_config) as sess:\n", 362 | " print(sess.run(x), '\\n')\n", 363 | " print(sess.run(x[0,:,:]), '\\n')\n", 364 | " print(sess.run(x[2,:,:]), '\\n')\n", 365 | " print(sess.run(y))" 366 | ] 367 | }, 368 | { 369 | "cell_type": "markdown", 370 | "metadata": {}, 371 | "source": [ 372 | "##### output\n", 373 | "```\n", 374 | "[[[ 0 1]\n", 375 | " [ 2 3]]\n", 376 | "\n", 377 | " [[ 4 5]\n", 378 | " [ 6 7]]\n", 379 | "\n", 380 | " [[ 8 9]\n", 381 | " [10 11]]]\n", 382 | "\n", 383 | "\n", 384 | "[[0 1]\n", 385 | " [2 3]]\n", 386 | "[[ 8 9]\n", 387 | " [10 11]]\n", 388 | "[[ 8 10]\n", 389 | " [12 14]]\n", 390 | "```" 391 | ] 392 | }, 393 | { 394 | "cell_type": "code", 395 | "execution_count": null, 396 | "metadata": {}, 
397 | "outputs": [], 398 | "source": [] 399 | } 400 | ], 401 | "metadata": { 402 | "kernelspec": { 403 | "display_name": "Python 3", 404 | "language": "python", 405 | "name": "python3" 406 | }, 407 | "language_info": { 408 | "codemirror_mode": { 409 | "name": "ipython", 410 | "version": 3 411 | }, 412 | "file_extension": ".py", 413 | "mimetype": "text/x-python", 414 | "name": "python", 415 | "nbconvert_exporter": "python", 416 | "pygments_lexer": "ipython3", 417 | "version": "3.6.2" 418 | } 419 | }, 420 | "nbformat": 4, 421 | "nbformat_minor": 2 422 | } 423 | -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code04_tf.Variable.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# tf.Variable\n", 8 | "\n", 9 | "* Useful for defining trainable parameters such as the weights of a neural network, or any value that must change while the code runs\n", 10 | " * cf) tf.constant: its value does not change when run through a session (immutable)\n", 11 | "* A `tf.Variable` must be run through an **initializer** before use" 12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": null, 17 | "metadata": {}, 18 | "outputs": [], 19 | "source": [ 20 | "import tensorflow as tf\n", 21 | "\n", 22 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))\n", 23 | "\n", 24 | "tf.set_random_seed(219)" 25 | ] 26 | }, 27 | { 28 | "cell_type": "code", 29 | "execution_count": null, 30 | "metadata": {}, 31 | "outputs": [], 32 | "source": [ 33 | "a = tf.Variable(2, name='scalar')\n", 34 | "b = tf.Variable([2, 3], name='vector')\n", 35 | "c = tf.Variable([[0, 1], [2, 3]], name='matrix')" 36 | ] 37 | }, 38 | { 39 | "cell_type": "code", 40 | "execution_count": null, 41 | "metadata": {}, 42 | "outputs": [], 43 | "source": [ 44 | "x = a + a\n", 45 | "y = a + b\n", 46 | "print(x)\n", 47 | "print(y)" 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": null, 53 | "metadata": {}, 54 | "outputs": [], 55 | "source": [ 56 | "with tf.Session(config=sess_config) as sess:\n", 57 | " print(sess.run(x))\n", 58 | " print(sess.run(y))\n", 59 | " # error: the variables have not been initialized" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "metadata": {}, 65 | "source": [ 66 | "### Initialize\n", 67 | "\n", 68 | "* Usually all `tf.Variable`s are initialized at once with `tf.global_variables_initializer()`.\n", 69 | " * You can also initialize variables individually by passing each variable as an argument."
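, "\n", "If you are unsure what still needs initializing, you can also ask the session directly (a minimal sketch; `tf.report_uninitialized_variables` is part of the TF 1.x API used in these notebooks):\n", "```python\n", "with tf.Session(config=sess_config) as sess:\n", "    # prints the names of variables that do not have a value yet\n", "    print(sess.run(tf.report_uninitialized_variables()))\n", "```"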
70 | ] 71 | }, 72 | { 73 | "cell_type": "markdown", 74 | "metadata": {}, 75 | "source": [ 76 | "#### Initialize all variables" 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": null, 82 | "metadata": {}, 83 | "outputs": [], 84 | "source": [ 85 | "init_op = tf.global_variables_initializer()\n", 86 | "with tf.Session(config=sess_config) as sess:\n", 87 | " sess.run(init_op)\n", 88 | " print(sess.run(x))\n", 89 | " print(sess.run(y))" 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "metadata": {}, 95 | "source": [ 96 | "#### Initialize selected variables\n", 97 | "\n", 98 | "* Initialize variables `a` and `b`, but not `c`\n", 99 | " * `a`, `b`: fine\n", 100 | " * `c`: error" 101 | ] 102 | }, 103 | { 104 | "cell_type": "code", 105 | "execution_count": null, 106 | "metadata": {}, 107 | "outputs": [], 108 | "source": [ 109 | "# Initialize only a subset of variables\n", 110 | "init_ab = tf.variables_initializer([a, b], name=\"init_ab\")\n", 111 | "with tf.Session(config=sess_config) as sess:\n", 112 | " sess.run(init_ab)\n", 113 | " print(sess.run(a))\n", 114 | " print(sess.run(b))\n", 115 | " print(sess.run(c)) # error: a and b are initialized, but c is not" 116 | ] 117 | }, 118 | { 119 | "cell_type": "code", 120 | "execution_count": null, 121 | "metadata": {}, 122 | "outputs": [], 123 | "source": [ 124 | "# Initialize a single variable\n", 125 | "W = tf.Variable(tf.zeros([3, 2]))\n", 126 | "with tf.Session(config=sess_config) as sess:\n", 127 | " sess.run(W.initializer)\n", 128 | " print(sess.run(W))" 129 | ] 130 | }, 131 | { 132 | "cell_type": "markdown", 133 | "metadata": {}, 134 | "source": [ 135 | "### `tf.Variable.eval()`\n", 136 | "\n", 137 | "* Inside a `with` block you can evaluate a `Tensor` directly instead of calling `sess.run()`." 138 | ] 139 | }, 140 | { 141 | "cell_type": "code", 142 | "execution_count": null, 143 | "metadata": {}, 144 | "outputs": [], 145 | "source": [ 146 | "# Initialize a single variable\n", 147 | "W = tf.Variable(tf.random_normal([3, 2]))\n", 148 | "with tf.Session(config=sess_config) as sess:\n", 149 | " sess.run(W.initializer)\n", 150 | " print(W.eval())" 151 | ] 152 | }, 153 | { 154 | "cell_type": "markdown", 155 | "metadata": {}, 156 | "source": [ 157 | "### `tf.Variable.assign()`" 158 | ] 159 | }, 160 | { 161 | "cell_type": "code", 162 | "execution_count": null, 163 | "metadata": {}, 164 | "outputs": [], 165 | "source": [ 166 | "W = tf.Variable(10)\n", 167 | "W.assign(100)\n", 168 | "with tf.Session(config=sess_config) as sess:\n", 169 | " sess.run(W.initializer)\n", 170 | " print(W.eval())" 171 | ] 172 | }, 173 | { 174 | "cell_type": "code", 175 | "execution_count": null, 176 | "metadata": {}, 177 | "outputs": [], 178 | "source": [ 179 | "W = tf.Variable(10)\n", 180 | "assign_op = W.assign(100)\n", 181 | "with tf.Session(config=sess_config) as sess:\n", 182 | " sess.run(W.initializer)\n", 183 | " sess.run(assign_op)\n", 184 | " print(W.eval())" 185 | ] 186 | }, 187 | { 188 | "cell_type": "code", 189 | "execution_count": null, 190 | "metadata": {}, 191 | "outputs": [], 192 | "source": [ 193 | "# create a variable whose original value is 2\n", 194 | "my_var = tf.Variable(2, name=\"my_var\")\n", 195 | "\n", 196 | "# assign a * 2 to a and call that op a_times_two\n", 197 | "my_var_times_two = my_var.assign(2 * my_var)\n", 198 | "\n", 199 | "with tf.Session(config=sess_config) as sess:\n", 200 | " sess.run(my_var.initializer)\n", 201 | " print(sess.run(my_var_times_two)) # >> 4\n", 202 | " print(sess.run(my_var_times_two)) # >> 8\n", 203 | " print(sess.run(my_var_times_two)) # >> 16" 204 | ] 205 | }, 206 | { 207 |
"cell_type": "markdown", 208 | "metadata": {}, 209 | "source": [ 210 | "## Two Sessions\n", 211 | "\n", 212 | "* `tf.Session()`을 동시에 두개를 돌려보자\n", 213 | "* 같은 변수 `W`가 서로 다른 Session에서 각각 다른 값을 가지고 있다" 214 | ] 215 | }, 216 | { 217 | "cell_type": "code", 218 | "execution_count": null, 219 | "metadata": {}, 220 | "outputs": [], 221 | "source": [ 222 | "W = tf.Variable(10)\n", 223 | "\n", 224 | "sess1 = tf.Session(config=sess_config)\n", 225 | "sess2 = tf.Session(config=sess_config)\n", 226 | "\n", 227 | "sess1.run(W.initializer)\n", 228 | "sess2.run(W.initializer)\n", 229 | "\n", 230 | "print(sess1.run(W.assign_add(10))) # >> 20\n", 231 | "print(sess2.run(W.assign_sub(2))) # >> 8\n", 232 | "\n", 233 | "print(sess1.run(W.assign_add(100))) # >> 120\n", 234 | "print(sess2.run(W.assign_sub(50))) # >> -42\n", 235 | "\n", 236 | "sess1.close()\n", 237 | "sess2.close()" 238 | ] 239 | } 240 | ], 241 | "metadata": { 242 | "kernelspec": { 243 | "display_name": "Python 3", 244 | "language": "python", 245 | "name": "python3" 246 | }, 247 | "language_info": { 248 | "codemirror_mode": { 249 | "name": "ipython", 250 | "version": 3 251 | }, 252 | "file_extension": ".py", 253 | "mimetype": "text/x-python", 254 | "name": "python", 255 | "nbconvert_exporter": "python", 256 | "pygments_lexer": "ipython3", 257 | "version": "3.6.2" 258 | } 259 | }, 260 | "nbformat": 4, 261 | "nbformat_minor": 2 262 | } 263 | -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code05_tf.placeholder.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# tf.placeholder\n", 8 | "\n", 9 | "* `tf.Session()` 을 실행 할 때 외부에서 값을 넣어줌\n", 10 | "* 학습데이터 또는 추론(inference) 할 때의 개별 데이터처럼 그래프 외부에서 값을 넣어주는 형태로 만들 필요가 있을 때 유용함" 11 | ] 12 | }, 13 | { 14 | "cell_type": "code", 15 | "execution_count": null, 16 | "metadata": {}, 17 | "outputs": [], 18 | "source": [ 19 | "import tensorflow as tf\n", 20 | "\n", 21 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))\n", 22 | "\n", 23 | "tf.set_random_seed(219)" 24 | ] 25 | }, 26 | { 27 | "cell_type": "markdown", 28 | "metadata": {}, 29 | "source": [ 30 | "### `tf.placeholder`\n", 31 | "\n", 32 | "* `tf.Session()`을 열어서 실행할 때 값을 넣어줘야 한다.\n", 33 | "* 아래 예제처럼 그냥 Session을 실행하면 error가 생긴다." 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": null, 39 | "metadata": {}, 40 | "outputs": [], 41 | "source": [ 42 | "# create a placeholder\n", 43 | "a = tf.placeholder(tf.float32, shape=[3])\n", 44 | "\n", 45 | "# create a constant of type\n", 46 | "b = tf.constant([5, 5, 5], tf.float32)\n", 47 | "\n", 48 | "# use the placeholder as you would a constant or a variable\n", 49 | "c = a + b # Short for tf.add(a, b)\n", 50 | "\n", 51 | "with tf.Session(config=sess_config) as sess:\n", 52 | " print(sess.run(c)) # Error because a doesn’t have any" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "metadata": {}, 58 | "source": [ 59 | "### `tf.placeholder` 올바른 예제\n", 60 | "\n", 61 | "* `sess.run`할 때 `feed_dict`이라는 인자를 사용하여 placeholder `a`의 실제 값을 넣어준다." 
62 | ] 63 | }, 64 | { 65 | "cell_type": "code", 66 | "execution_count": null, 67 | "metadata": {}, 68 | "outputs": [], 69 | "source": [ 70 | "# create a placeholder\n", 71 | "a = tf.placeholder(tf.float32, shape=[3])\n", 72 | "\n", 73 | "# create a constant of type tf.float32\n", 74 | "b = tf.constant([5, 5, 5], tf.float32)\n", 75 | "\n", 76 | "# use the placeholder as you would a constant or a variable\n", 77 | "c = a + b # Short for tf.add(a, b)\n", 78 | "\n", 79 | "with tf.Session(config=sess_config) as sess:\n", 80 | " print(sess.run(c, feed_dict={a: [1, 2, 3]}))" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "### Normal Loading" 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": null, 93 | "metadata": {}, 94 | "outputs": [], 95 | "source": [ 96 | "# Only necessary if you use IDLE or a jupyter notebook\n", 97 | "tf.reset_default_graph()\n", 98 | "\n", 99 | "x = tf.Variable(10, name='x')\n", 100 | "y = tf.Variable(20, name='y')\n", 101 | "z = tf.add(x, y) # you create the add node once, before executing the graph\n", 102 | "\n", 103 | "with tf.Session(config=sess_config) as sess:\n", 104 | " sess.run(tf.global_variables_initializer())\n", 105 | " writer = tf.summary.FileWriter('./graphs/code05_normal', sess.graph)\n", 106 | " for _ in range(10):\n", 107 | " print(sess.run(z))\n", 108 | " writer.close()\n", 109 | " print('\\n')\n", 110 | "\n", 111 | " for node in tf.get_default_graph().as_graph_def().node:\n", 112 | " print(node.name)" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "### Lazy Loading" 120 | ] 121 | }, 122 | { 123 | "cell_type": "code", 124 | "execution_count": null, 125 | "metadata": {}, 126 | "outputs": [], 127 | "source": [ 128 | "# Only necessary if you use IDLE or a jupyter notebook\n", 129 | "tf.reset_default_graph()\n", 130 | "\n", 131 | "x = tf.Variable(10, name='x')\n", 132 | "y = tf.Variable(20, name='y')\n", 133 | "\n", 134 | "with tf.Session(config=sess_config) as sess:\n", 135 | " sess.run(tf.global_variables_initializer())\n", 136 | " writer = tf.summary.FileWriter('./graphs/code05_lazy', sess.graph)\n", 137 | " for _ in range(10):\n", 138 | " print(sess.run(tf.add(x, y)))\n", 139 | " writer.close()\n", 140 | " print('\\n')\n", 141 | " \n", 142 | " for node in tf.get_default_graph().as_graph_def().node:\n", 143 | " print(node.name)" 144 | ] 145 | }, 146 | { 147 | "cell_type": "markdown", 148 | "metadata": {}, 149 | "source": [ 150 | "## Hands-on practice" 151 | ] 152 | }, 153 | { 154 | "cell_type": "markdown", 155 | "metadata": {}, 156 | "source": [ 157 | "### Build a linear operator + ReLU using tf.Variable and tf.placeholder\n", 158 | "\n", 159 | "```\n", 160 | "w = tf.Variable() >> 2\n", 161 | "b = tf.Variable() >> -3\n", 162 | "x = tf.placeholder() >> np.random.normal\n", 163 | "z = w * x + b\n", 164 | "a = relu(z)\n", 165 | "```\n", 166 | "\n", 167 | "### Save the graph with a summary writer and inspect it in TensorBoard" 168 | ] 169 | }, 170 | { 171 | "cell_type": "code", 172 | "execution_count": null, 173 | "metadata": {}, 174 | "outputs": [], 175 | "source": [ 176 | "import numpy as np\n", 177 | "\n", 178 | "np.random.seed(219)\n", 179 | "\n", 180 | "# Only necessary if you use IDLE or a jupyter notebook\n", 181 | "tf.reset_default_graph()\n", 182 | "\n", 183 | "# TODO\n", 184 | "w = tf.Variable(2., name='w')\n", 185 | "b = tf.Variable(-3., name='b')\n", 186 | "x = tf.placeholder(tf.float32, shape=[1], name='x')\n", 187 | "z = w * x + b\n", 188 | "a = tf.maximum(z, 0)\n", 189 |
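"# note: tf.maximum broadcasts the scalar 0 across z, i.e. an elementwise ReLU (equivalent to tf.nn.relu below)\n",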
"#a = tf.nn.relu(z)\n", 190 | "print(w)\n", 191 | "print(b)\n", 192 | "print(x)\n", 193 | "print(z)\n", 194 | "print(a)\n", 195 | "\n", 196 | "with tf.Session(config=sess_config) as sess:\n", 197 | " sess.run(tf.global_variables_initializer())\n", 198 | " writer = tf.summary.FileWriter('./graphs/code05_linear', sess.graph)\n", 199 | " rnd = np.random.normal([1])\n", 200 | " print(sess.run([z, a], feed_dict={x: rnd}))\n", 201 | "writer.close() # close the writer when you’re done using it" 202 | ] 203 | }, 204 | { 205 | "cell_type": "markdown", 206 | "metadata": {}, 207 | "source": [ 208 | "##### output\n", 209 | "```\n", 210 | "\n", 211 | "\n", 212 | "Tensor(\"x:0\", shape=(1,), dtype=float32)\n", 213 | "Tensor(\"add:0\", shape=(1,), dtype=float32)\n", 214 | "Tensor(\"Maximum:0\", shape=(1,), dtype=float32)\n", 215 | "[array([-2.1427393], dtype=float32), array([ 0.], dtype=float32)]\n", 216 | "```" 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "execution_count": null, 222 | "metadata": {}, 223 | "outputs": [], 224 | "source": [] 225 | } 226 | ], 227 | "metadata": { 228 | "kernelspec": { 229 | "display_name": "Python 3", 230 | "language": "python", 231 | "name": "python3" 232 | }, 233 | "language_info": { 234 | "codemirror_mode": { 235 | "name": "ipython", 236 | "version": 3 237 | }, 238 | "file_extension": ".py", 239 | "mimetype": "text/x-python", 240 | "name": "python", 241 | "nbconvert_exporter": "python", 242 | "pygments_lexer": "ipython3", 243 | "version": "3.6.2" 244 | } 245 | }, 246 | "nbformat": 4, 247 | "nbformat_minor": 2 248 | } 249 | -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code06_tf.train.Saver.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# tf.train.Saver\n", 8 | "\n", 9 | "* 변수를 저장할 때 쓴다" 10 | ] 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": null, 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "import tensorflow as tf\n", 19 | "\n", 20 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))" 21 | ] 22 | }, 23 | { 24 | "cell_type": "markdown", 25 | "metadata": {}, 26 | "source": [ 27 | "### 변수선언" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": null, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "a = tf.Variable(5)\n", 37 | "b = tf.Variable(4, name=\"my_variable\")\n", 38 | "x = tf.add(a, b, name=\"add\")\n", 39 | "\n", 40 | "# set the value of a to 3\n", 41 | "op = tf.assign(a, 3) " 42 | ] 43 | }, 44 | { 45 | "cell_type": "markdown", 46 | "metadata": {}, 47 | "source": [ 48 | "### saver를 이용한 변수 값 저장" 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": null, 54 | "metadata": {}, 55 | "outputs": [], 56 | "source": [ 57 | "# create saver object\n", 58 | "saver = tf.train.Saver()\n", 59 | "\n", 60 | "with tf.Session(config=sess_config) as sess:\n", 61 | " sess.run(tf.global_variables_initializer())\n", 62 | "\n", 63 | " sess.run(op)\n", 64 | "\n", 65 | " print (\"a:\", sess.run(a))\n", 66 | " print (\"my_variable:\", sess.run(b))\n", 67 | "\n", 68 | " # use saver object to save variables\n", 69 | " # within the context of the current session \n", 70 | " saver.save(sess, \"graphs/code06/my_model.ckpt\")\n", 71 | " \n", 72 | " print(a.name)\n", 73 | " print(b.name)" 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "metadata": {}, 79 | "source": [ 
80 | "### saver를 이용하여 모델 load 하기\n", 81 | "\n", 82 | "* `saver.save`는 그래프 자체를 저장하지 않는다.\n", 83 | "* 변수의 값만 저장할 뿐이다.\n", 84 | "* 따라서 `saver.restore`를 하기전에 그래프 구조를 만들어줘야 한다.\n", 85 | "* 위의 예제에서 save할 때는 `a`와 `b`를 저장하였으나 로드 할때는 `c`와 `d`를 만들어서 로드한다.\n", 86 | "* 중요한 것은 변수의 tensorflow로 지정한 이름이다.\n", 87 | " * python 변수 이름 형태인 `a`, `b`, `c`, `d`가 아니다.\n", 88 | "* `name=my_variable`형태로 저장된 변수의 값을 불러와서 새로운 `c`와 `d` 라는 변수에 넣었다.\n", 89 | " * 저장할 때 `a`와 `b`라는 변수 이름은 중요하지 않다." 90 | ] 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": null, 95 | "metadata": {}, 96 | "outputs": [], 97 | "source": [ 98 | "# Only necessary if you use IDLE or a jupyter notebook\n", 99 | "tf.reset_default_graph()\n", 100 | "\n", 101 | "# make a dummy variable\n", 102 | "# the value is arbitrary, here just zero\n", 103 | "# but the shape must the the same as in the saved model\n", 104 | "c = tf.Variable(0)\n", 105 | "d = tf.Variable(0, name=\"my_variable\")\n", 106 | "y = tf.add(c, d, name='add')\n", 107 | "\n", 108 | "saver = tf.train.Saver()\n", 109 | "\n", 110 | "with tf.Session(config=sess_config) as sess:\n", 111 | "\n", 112 | " # use saver object to load variables from the saved model\n", 113 | " saver.restore(sess, \"graphs/code06/my_model.ckpt\")\n", 114 | "\n", 115 | " print (\"c:\", sess.run(c))\n", 116 | " print (\"my_variable:\", sess.run(d))\n", 117 | " \n", 118 | " print(c.name)\n", 119 | " print(d.name)" 120 | ] 121 | } 122 | ], 123 | "metadata": { 124 | "kernelspec": { 125 | "display_name": "Python 3", 126 | "language": "python", 127 | "name": "python3" 128 | }, 129 | "language_info": { 130 | "codemirror_mode": { 131 | "name": "ipython", 132 | "version": 3 133 | }, 134 | "file_extension": ".py", 135 | "mimetype": "text/x-python", 136 | "name": "python", 137 | "nbconvert_exporter": "python", 138 | "pygments_lexer": "ipython3", 139 | "version": "3.6.2" 140 | } 141 | }, 142 | "nbformat": 4, 143 | "nbformat_minor": 2 144 | } 145 | -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code07_tf.cond.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import numpy as np\n", 10 | "import matplotlib.pyplot as plt\n", 11 | "import tensorflow as tf\n", 12 | "\n", 13 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))\n", 14 | "\n", 15 | "tf.set_random_seed(219)\n", 16 | "np.random.seed(219)" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "## A frequent mistake" 24 | ] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": null, 29 | "metadata": {}, 30 | "outputs": [], 31 | "source": [ 32 | "def what_is_x():\n", 33 | " if np.random.rand() < 0.5:\n", 34 | " x = tf.constant(10)\n", 35 | " else:\n", 36 | " x = tf.constant(20)\n", 37 | " \n", 38 | " return x" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "### Nomral loading" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": null, 51 | "metadata": {}, 52 | "outputs": [], 53 | "source": [ 54 | "tf.reset_default_graph()\n", 55 | "x = what_is_x()\n", 56 | "with tf.Session(config=sess_config) as sess:\n", 57 | " for i in range(10):\n", 58 | " print(sess.run(x))" 59 | ] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": {}, 64 | "source": [ 65 | "### Lazy loading" 66 | ] 67 | }, 
68 | { 69 | "cell_type": "code", 70 | "execution_count": null, 71 | "metadata": {}, 72 | "outputs": [], 73 | "source": [ 74 | "tf.reset_default_graph()\n", 75 | "with tf.Session(config=sess_config) as sess:\n", 76 | " for i in range(10):\n", 77 | " print(sess.run(what_is_x()))" 78 | ] 79 | }, 80 | { 81 | "cell_type": "markdown", 82 | "metadata": {}, 83 | "source": [ 84 | "## How to solve" 85 | ] 86 | }, 87 | { 88 | "cell_type": "code", 89 | "execution_count": null, 90 | "metadata": {}, 91 | "outputs": [], 92 | "source": [ 93 | "def what_is_x2():\n", 94 | " def f1(): return tf.constant(10)\n", 95 | " def f2(): return tf.constant(20)\n", 96 | " x = tf.cond(tf.less(tf.random_uniform([1]), 0.5)[0], f1, f2)\n", 97 | " \n", 98 | " return x" 99 | ] 100 | }, 101 | { 102 | "cell_type": "code", 103 | "execution_count": null, 104 | "metadata": {}, 105 | "outputs": [], 106 | "source": [ 107 | "tf.reset_default_graph()\n", 108 | "x = what_is_x2()\n", 109 | "with tf.Session(config=sess_config) as sess:\n", 110 | " for i in range(10):\n", 111 | " print(sess.run(x))" 112 | ] 113 | } 114 | ], 115 | "metadata": { 116 | "kernelspec": { 117 | "display_name": "Python 3", 118 | "language": "python", 119 | "name": "python3" 120 | }, 121 | "language_info": { 122 | "codemirror_mode": { 123 | "name": "ipython", 124 | "version": 3 125 | }, 126 | "file_extension": ".py", 127 | "mimetype": "text/x-python", 128 | "name": "python", 129 | "nbconvert_exporter": "python", 130 | "pygments_lexer": "ipython3", 131 | "version": "3.6.2" 132 | } 133 | }, 134 | "nbformat": 4, 135 | "nbformat_minor": 2 136 | } 137 | -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code08_linear_regression.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Linear Regression\n", 8 | "\n", 9 | "* A regression technique that models the linear relationship between y and one or more independent (explanatory) variables X. With a single explanatory variable it is called simple linear regression; with two or more, multiple linear regression. [Reference: Wikipedia](https://ko.wikipedia.org/wiki/선형_회귀)\n", 10 | "\n", 11 | "$$y_{\\textrm{pred}} = \\boldsymbol{W}^{\\top}\\boldsymbol{x} + b$$\n", 12 | "\n", 13 | "* $\\boldsymbol{x} = [x_{1}, x_{2}, \\cdots, x_{d}]$\n", 14 | "* $\\boldsymbol{W} = [w_{1}, w_{2}, \\cdots, w_{d}]$\n", 15 | "* Loss function: $\\mathcal{L} = \\sum^{N} (y_{\\textrm{pred}} - y)^{2}$" 16 | ] 17 | }, 18 | { 19 | "cell_type": "code", 20 | "execution_count": null, 21 | "metadata": {}, 22 | "outputs": [], 23 | "source": [ 24 | "import numpy as np\n", 25 | "import matplotlib.pyplot as plt\n", 26 | "import tensorflow as tf\n", 27 | "\n", 28 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))\n", 29 | "\n", 30 | "tf.set_random_seed(219)\n", 31 | "np.random.seed(219)" 32 | ] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "metadata": {}, 37 | "source": [ 38 | "## Phase 1. 
Build a model" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "### Make data" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": null, 51 | "metadata": {}, 52 | "outputs": [], 53 | "source": [ 54 | "a = 3\n", 55 | "b = -3\n", 56 | "data_X = np.random.uniform(low=0, high=5, size=200)\n", 57 | "data_y = a * data_X + b + np.random.normal(0, 2, 200)\n", 58 | "\n", 59 | "plt.plot(data_X, data_y, 'ro')\n", 60 | "plt.axhline(0, color='black', lw=1)\n", 61 | "plt.axvline(0, color='black', lw=1)\n", 62 | "plt.show()" 63 | ] 64 | }, 65 | { 66 | "cell_type": "markdown", 67 | "metadata": {}, 68 | "source": [ 69 | "### Create placeholders for inputs and labels" 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": null, 75 | "metadata": {}, 76 | "outputs": [], 77 | "source": [ 78 | "# X: inputs\n", 79 | "X = tf.placeholder(tf.float32, name='X')\n", 80 | "# y: labels\n", 81 | "y = tf.placeholder(tf.float32, name='y')" 82 | ] 83 | }, 84 | { 85 | "cell_type": "markdown", 86 | "metadata": {}, 87 | "source": [ 88 | "### Create weight and bias" 89 | ] 90 | }, 91 | { 92 | "cell_type": "code", 93 | "execution_count": null, 94 | "metadata": {}, 95 | "outputs": [], 96 | "source": [ 97 | "# create Variables\n", 98 | "W = tf.Variable(tf.random_uniform([1]), name=\"weights\")\n", 99 | "b = tf.Variable(tf.random_normal([1]), name=\"bias\")" 100 | ] 101 | }, 102 | { 103 | "cell_type": "markdown", 104 | "metadata": {}, 105 | "source": [ 106 | "### Build a model: $y = Wx + b$" 107 | ] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "execution_count": null, 112 | "metadata": {}, 113 | "outputs": [], 114 | "source": [ 115 | "y_pred = W * X + b" 116 | ] 117 | }, 118 | { 119 | "cell_type": "markdown", 120 | "metadata": {}, 121 | "source": [ 122 | "### Define loss function" 123 | ] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "execution_count": null, 128 | "metadata": {}, 129 | "outputs": [], 130 | "source": [ 131 | "loss = tf.square(y_pred - y, name=\"loss\")" 132 | ] 133 | }, 134 | { 135 | "cell_type": "markdown", 136 | "metadata": {}, 137 | "source": [ 138 | "### Create a optimizer" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": null, 144 | "metadata": {}, 145 | "outputs": [], 146 | "source": [ 147 | "train_op = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)" 148 | ] 149 | }, 150 | { 151 | "cell_type": "markdown", 152 | "metadata": {}, 153 | "source": [ 154 | "## Phase 2. 
Train a model" 155 | ] 156 | }, 157 | { 158 | "cell_type": "markdown", 159 | "metadata": {}, 160 | "source": [ 161 | "### Train a model" 162 | ] 163 | }, 164 | { 165 | "cell_type": "code", 166 | "execution_count": null, 167 | "metadata": {}, 168 | "outputs": [], 169 | "source": [ 170 | "with tf.Session(config=sess_config) as sess:\n", 171 | " # Initialize all variables\n", 172 | " sess.run(tf.global_variables_initializer())\n", 173 | " \n", 174 | " writer = tf.summary.FileWriter('graphs/code08_linear_reg', sess.graph)\n", 175 | " \n", 176 | " # train the model\n", 177 | " max_epoch = 100\n", 178 | " for epoch in range(max_epoch+1):\n", 179 | " total_loss = 0.0\n", 180 | " shuffle_index = np.random.permutation(len(data_X))\n", 181 | " for i in shuffle_index:\n", 182 | " x_ = data_X[i]\n", 183 | " y_ = data_y[i]\n", 184 | " _, loss_ = sess.run([train_op, loss],\n", 185 | " feed_dict={X: x_,\n", 186 | " y: y_})\n", 187 | " total_loss += loss_\n", 188 | " total_loss /= len(data_X)\n", 189 | " if epoch % 10 == 0:\n", 190 | " print('Epoch %d: total_loss: %f' % (epoch, total_loss))\n", 191 | " \n", 192 | " writer.close()\n", 193 | " W_, b_ = sess.run([W, b])" 194 | ] 195 | }, 196 | { 197 | "cell_type": "markdown", 198 | "metadata": {}, 199 | "source": [ 200 | "### Print the results: W and b\n", 201 | "\n", 202 | "* 정답 W = 3, b = -3" 203 | ] 204 | }, 205 | { 206 | "cell_type": "code", 207 | "execution_count": null, 208 | "metadata": {}, 209 | "outputs": [], 210 | "source": [ 211 | "print(W_, b_)" 212 | ] 213 | }, 214 | { 215 | "cell_type": "markdown", 216 | "metadata": {}, 217 | "source": [ 218 | "### Plot the results" 219 | ] 220 | }, 221 | { 222 | "cell_type": "code", 223 | "execution_count": null, 224 | "metadata": {}, 225 | "outputs": [], 226 | "source": [ 227 | "plt.plot(data_X, data_y, 'ro', label='Real data')\n", 228 | "plt.plot(data_X, W_ * data_X + b_, lw=5, label='model')\n", 229 | "plt.axhline(0, color='black', lw=1)\n", 230 | "plt.axvline(0, color='black', lw=1)\n", 231 | "plt.legend()\n", 232 | "plt.show()" 233 | ] 234 | } 235 | ], 236 | "metadata": { 237 | "kernelspec": { 238 | "display_name": "Python 3", 239 | "language": "python", 240 | "name": "python3" 241 | }, 242 | "language_info": { 243 | "codemirror_mode": { 244 | "name": "ipython", 245 | "version": 3 246 | }, 247 | "file_extension": ".py", 248 | "mimetype": "text/x-python", 249 | "name": "python", 250 | "nbconvert_exporter": "python", 251 | "pygments_lexer": "ipython3", 252 | "version": "3.6.2" 253 | } 254 | }, 255 | "nbformat": 4, 256 | "nbformat_minor": 2 257 | } 258 | -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code09_linear_regression_3rd_order.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Linear Regression\n", 8 | "\n", 9 | "* y와 한 개 이상의 독립 변수 (또는 설명 변수) X와의 선형 상관 관계를 모델링하는 회귀분석 기법이다. 한 개의 설명 변수에 기반한 경우에는 단순 선형 회귀, 둘 이상의 설명 변수에 기반한 경우에는 다중 선형 회귀라고 한다. 
[Reference: Wikipedia](https://ko.wikipedia.org/wiki/선형_회귀)\n", 10 | "\n", 11 | "$$y_{\\textrm{pred}} = \\boldsymbol{W}^{\\top}\\boldsymbol{x} + b$$\n", 12 | "\n", 13 | "* $\\boldsymbol{x} = [x_{1}, x_{2}, \\cdots, x_{d}]$\n", 14 | "* $\\boldsymbol{W} = [w_{1}, w_{2}, \\cdots, w_{d}]$\n", 15 | "* Loss function: $\\mathcal{L} = \\sum^{N} (y_{\\textrm{pred}} - y)^{2}$" 16 | ] 17 | }, 18 | { 19 | "cell_type": "code", 20 | "execution_count": null, 21 | "metadata": {}, 22 | "outputs": [], 23 | "source": [ 24 | "import numpy as np\n", 25 | "import matplotlib.pyplot as plt\n", 26 | "import tensorflow as tf\n", 27 | "\n", 28 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))\n", 29 | "\n", 30 | "tf.set_random_seed(219)\n", 31 | "np.random.seed(219)" 32 | ] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "metadata": {}, 37 | "source": [ 38 | "## Phase 1. Build a model" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "### Make data" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": null, 51 | "metadata": {}, 52 | "outputs": [], 53 | "source": [ 54 | "a = 1\n", 55 | "b = -4\n", 56 | "c = 2\n", 57 | "d = -1\n", 58 | "data_X = np.random.uniform(low=-1, high=4, size=200)\n", 59 | "data_y = a * data_X**3 + b * data_X**2 + c * data_X + d + np.random.normal(0, 1, 200)\n", 60 | "\n", 61 | "plt.plot(data_X, data_y, 'ro')\n", 62 | "plt.axhline(0, color='black', lw=1)\n", 63 | "plt.axvline(0, color='black', lw=1)\n", 64 | "plt.show()" 65 | ] 66 | }, 67 | { 68 | "cell_type": "markdown", 69 | "metadata": {}, 70 | "source": [ 71 | "### Create placeholders for inputs and labels" 72 | ] 73 | }, 74 | { 75 | "cell_type": "code", 76 | "execution_count": null, 77 | "metadata": {}, 78 | "outputs": [], 79 | "source": [ 80 | "# X: inputs\n", 81 | "X = tf.placeholder(tf.float32, shape=[3], name='X')\n", 82 | "# y: labels\n", 83 | "y = tf.placeholder(tf.float32, name='y')" 84 | ] 85 | }, 86 | { 87 | "cell_type": "markdown", 88 | "metadata": {}, 89 | "source": [ 90 | "### Create weight and bias" 91 | ] 92 | }, 93 | { 94 | "cell_type": "code", 95 | "execution_count": null, 96 | "metadata": {}, 97 | "outputs": [], 98 | "source": [ 99 | "# create Variables\n", 100 | "W = tf.Variable(tf.random_normal(shape=[3]), name=\"weights\")\n", 101 | "b = tf.Variable(tf.random_normal([1]), name=\"bias\")" 102 | ] 103 | }, 104 | { 105 | "cell_type": "markdown", 106 | "metadata": {}, 107 | "source": [ 108 | "### Build a model: $y = \\boldsymbol{W} \\boldsymbol{x} + b$" 109 | ] 110 | }, 111 | { 112 | "cell_type": "code", 113 | "execution_count": null, 114 | "metadata": {}, 115 | "outputs": [], 116 | "source": [ 117 | "y_pred = tf.reduce_sum(W * X) + b" 118 | ] 119 | }, 120 | { 121 | "cell_type": "markdown", 122 | "metadata": {}, 123 | "source": [ 124 | "### Define loss function" 125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": null, 130 | "metadata": {}, 131 | "outputs": [], 132 | "source": [ 133 | "loss = tf.square(y_pred - y, name=\"loss\")" 134 | ] 135 | }, 136 | { 137 | "cell_type": "markdown", 138 | "metadata": {}, 139 | "source": [ 140 | "### Create an optimizer" 141 | ] 142 | }, 143 | { 144 | "cell_type": "code", 145 | "execution_count": null, 146 | "metadata": {}, 147 | "outputs": [], 148 | "source": [ 149 | "train_op = tf.train.GradientDescentOptimizer(learning_rate=0.0001).minimize(loss)" 150 | ] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": {}, 155 | "source": [ 156 | "## Phase 2. 
Train a model" 157 | ] 158 | }, 159 | { 160 | "cell_type": "markdown", 161 | "metadata": {}, 162 | "source": [ 163 | "### Train a model" 164 | ] 165 | }, 166 | { 167 | "cell_type": "code", 168 | "execution_count": null, 169 | "metadata": {}, 170 | "outputs": [], 171 | "source": [ 172 | "with tf.Session(config=sess_config) as sess:\n", 173 | " # Initialize all variables\n", 174 | " sess.run(tf.global_variables_initializer())\n", 175 | " \n", 176 | " writer = tf.summary.FileWriter('graphs/code09_linear_reg_3', sess.graph)\n", 177 | " \n", 178 | " # train the model\n", 179 | " max_epoch = 100\n", 180 | " for epoch in range(max_epoch+1):\n", 181 | " total_loss = 0.0\n", 182 | " shuffle_index = np.random.permutation(len(data_X))\n", 183 | " for i in shuffle_index:\n", 184 | " x_ = data_X[i]\n", 185 | " y_ = data_y[i]\n", 186 | " feed_X = [x_**3, x_**2, x_]\n", 187 | " _, loss_ = sess.run([train_op, loss],\n", 188 | " feed_dict={X: feed_X,\n", 189 | " y: y_})\n", 190 | " total_loss += loss_\n", 191 | " total_loss /= len(data_X)\n", 192 | " if epoch % 10 == 0:\n", 193 | " print('Epoch %d: total_loss: %f' % (epoch, total_loss))\n", 194 | " \n", 195 | " writer.close()\n", 196 | " W_, b_ = sess.run([W, b])" 197 | ] 198 | }, 199 | { 200 | "cell_type": "markdown", 201 | "metadata": {}, 202 | "source": [ 203 | "### Print the results: W and b" 204 | ] 205 | }, 206 | { 207 | "cell_type": "code", 208 | "execution_count": null, 209 | "metadata": {}, 210 | "outputs": [], 211 | "source": [ 212 | "#a = 1\n", 213 | "#b = -4\n", 214 | "#c = 2\n", 215 | "#d = -1\n", 216 | "print(W_, b_)" 217 | ] 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "metadata": {}, 222 | "source": [ 223 | "### Plot the results" 224 | ] 225 | }, 226 | { 227 | "cell_type": "code", 228 | "execution_count": null, 229 | "metadata": {}, 230 | "outputs": [], 231 | "source": [ 232 | "plt.plot(data_X, data_y, 'ro', label='Real data')\n", 233 | "data_X.sort()\n", 234 | "plt.plot(data_X, W_[0] * data_X**3 + W_[1] * data_X**2 + W_[0] * data_X + d, lw=5, label='model')\n", 235 | "\n", 236 | "plt.axhline(0, color='black', lw=1)\n", 237 | "plt.axvline(0, color='black', lw=1)\n", 238 | "plt.legend()\n", 239 | "plt.show()" 240 | ] 241 | }, 242 | { 243 | "cell_type": "markdown", 244 | "metadata": {}, 245 | "source": [ 246 | "## 직접 실습\n", 247 | "\n", 248 | "* 여러가지 hyper-parameter들을 바꿔가면서 accuracy를 높혀보자" 249 | ] 250 | } 251 | ], 252 | "metadata": { 253 | "kernelspec": { 254 | "display_name": "Python 3", 255 | "language": "python", 256 | "name": "python3" 257 | }, 258 | "language_info": { 259 | "codemirror_mode": { 260 | "name": "ipython", 261 | "version": 3 262 | }, 263 | "file_extension": ".py", 264 | "mimetype": "text/x-python", 265 | "name": "python", 266 | "nbconvert_exporter": "python", 267 | "pygments_lexer": "ipython3", 268 | "version": "3.6.2" 269 | } 270 | }, 271 | "nbformat": 4, 272 | "nbformat_minor": 2 273 | } 274 | -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code10_mnist_softmax.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# MNIST softmax\n", 8 | "\n", 9 | "* MNIST data를 가지고 softmax classifier를 만들어보자.\n", 10 | " * [참고: TensorFlow.org](https://www.tensorflow.org/get_started/mnist/beginners)\n", 11 | " * [소스: mnist_softmax.py](https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/examples/tutorials/mnist/mnist_softmax.py)" 12 | ] 
13 | }, 14 | { 15 | "cell_type": "markdown", 16 | "metadata": {}, 17 | "source": [ 18 | "### Import modules" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "\"\"\"A very simple MNIST classifier.\n", 28 | "See extensive documentation at\n", 29 | "https://www.tensorflow.org/get_started/mnist/beginners\n", 30 | "\"\"\"\n", 31 | "from __future__ import absolute_import\n", 32 | "from __future__ import division\n", 33 | "from __future__ import print_function\n", 34 | "\n", 35 | "from tensorflow.examples.tutorials.mnist import input_data\n", 36 | "\n", 37 | "import tensorflow as tf\n", 38 | "\n", 39 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))\n", 40 | "\n", 41 | "tf.set_random_seed(219)" 42 | ] 43 | }, 44 | { 45 | "cell_type": "markdown", 46 | "metadata": {}, 47 | "source": [ 48 | "### Import data" 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": null, 54 | "metadata": {}, 55 | "outputs": [], 56 | "source": [ 57 | "data_dir = '../mnist'\n", 58 | "mnist = input_data.read_data_sets(data_dir, one_hot=True)" 59 | ] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": {}, 64 | "source": [ 65 | "### Show the MNIST" 66 | ] 67 | }, 68 | { 69 | "cell_type": "code", 70 | "execution_count": null, 71 | "metadata": {}, 72 | "outputs": [], 73 | "source": [ 74 | "import numpy as np\n", 75 | "import matplotlib.pyplot as plt\n", 76 | "\n", 77 | "index = 100\n", 78 | "print(\"label = \", np.argmax(mnist.train.labels[index]))\n", 79 | "plt.imshow(mnist.train.images[index].reshape(28, 28), cmap='gray')\n", 80 | "plt.show()" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "### Create the model" 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": null, 93 | "metadata": {}, 94 | "outputs": [], 95 | "source": [ 96 | "x = tf.placeholder(tf.float32, [None, 784])\n", 97 | "W = tf.Variable(tf.zeros([784, 10]))\n", 98 | "b = tf.Variable(tf.zeros([10]))\n", 99 | "y = tf.matmul(x, W) + b" 100 | ] 101 | }, 102 | { 103 | "cell_type": "markdown", 104 | "metadata": {}, 105 | "source": [ 106 | "### Define loss and optimizer\n", 107 | "\n", 108 | "* [`tf.nn.softmax_cross_entropy_with_logits`](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits)" 109 | ] 110 | }, 111 | { 112 | "cell_type": "code", 113 | "execution_count": null, 114 | "metadata": {}, 115 | "outputs": [], 116 | "source": [ 117 | "y_ = tf.placeholder(tf.float32, [None, 10])\n", 118 | "# The raw formulation of cross-entropy,\n", 119 | "#\n", 120 | "# tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),\n", 121 | "# reduction_indices=[1]))\n", 122 | "#\n", 123 | "# can be numerically unstable.\n", 124 | "#\n", 125 | "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n", 126 | "# outputs of 'y', and then average across the batch.\n", 127 | "cross_entropy = tf.reduce_mean(\n", 128 | " tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n", 129 | "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)" 130 | ] 131 | }, 132 | { 133 | "cell_type": "markdown", 134 | "metadata": {}, 135 | "source": [ 136 | "### tf.InteractiveSession() and train" 137 | ] 138 | }, 139 | { 140 | "cell_type": "code", 141 | "execution_count": null, 142 | "metadata": {}, 143 | "outputs": [], 144 | "source": [ 145 | "sess = tf.InteractiveSession(config=sess_config)\n", 146 | 
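"# InteractiveSession registers itself as the default session, so run()/eval() need no explicit session argument\n",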
"tf.global_variables_initializer().run()\n", 147 | "# Train\n", 148 | "max_step = 100\n", 149 | "for step in range(max_step+1):\n", 150 | " batch_xs, batch_ys = mnist.train.next_batch(32)\n", 151 | " _, loss = sess.run([train_step, cross_entropy], feed_dict={x: batch_xs, y_: batch_ys})\n", 152 | " if step % 10 == 0:\n", 153 | " print(\"step: %d, loss: %g\" % (step, loss))" 154 | ] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "metadata": {}, 159 | "source": [ 160 | "### Test trained model\n", 161 | "\n", 162 | "* test accuracy: 0.8731" 163 | ] 164 | }, 165 | { 166 | "cell_type": "code", 167 | "execution_count": null, 168 | "metadata": {}, 169 | "outputs": [], 170 | "source": [ 171 | "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n", 172 | "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n", 173 | "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n", 174 | " y_: mnist.test.labels}))" 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": null, 180 | "metadata": {}, 181 | "outputs": [], 182 | "source": [ 183 | "import numpy as np\n", 184 | "import matplotlib.pyplot as plt\n", 185 | "%matplotlib inline\n", 186 | "\n", 187 | "test_batch_size = 16\n", 188 | "batch_xs, batch_ys = mnist.test.next_batch(test_batch_size)\n", 189 | "y_pred = sess.run(y, feed_dict={x: batch_xs})\n", 190 | "\n", 191 | "fig = plt.figure(figsize=(16, 10))\n", 192 | "for i, (px, py) in enumerate(zip(batch_xs, y_pred)):\n", 193 | " p = fig.add_subplot(4, 8, i+1)\n", 194 | " p.set_title(\"y_pred: {}\".format(np.argmax(py)))\n", 195 | " p.imshow(px.reshape(28, 28), cmap='gray')\n", 196 | " p.axis('off')" 197 | ] 198 | }, 199 | { 200 | "cell_type": "markdown", 201 | "metadata": {}, 202 | "source": [ 203 | "## 직접 실습\n", 204 | "\n", 205 | "* 여러가지 hyper-parameter들을 바꿔가면서 accuracy를 높혀보자" 206 | ] 207 | } 208 | ], 209 | "metadata": { 210 | "kernelspec": { 211 | "display_name": "Python 3", 212 | "language": "python", 213 | "name": "python3" 214 | }, 215 | "language_info": { 216 | "codemirror_mode": { 217 | "name": "ipython", 218 | "version": 3 219 | }, 220 | "file_extension": ".py", 221 | "mimetype": "text/x-python", 222 | "name": "python", 223 | "nbconvert_exporter": "python", 224 | "pygments_lexer": "ipython3", 225 | "version": "3.6.2" 226 | } 227 | }, 228 | "nbformat": 4, 229 | "nbformat_minor": 2 230 | } 231 | -------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code11_mnist_deep.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# MNIST convolutional neural networks\n", 8 | "\n", 9 | "* MNIST data를 가지고 **convolutional neural networks**를 만들어보자.\n", 10 | " * [참고: TensorFlow.org](https://www.tensorflow.org/get_started/mnist/pros)\n", 11 | " * [소스: mnist_deep.py](https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/examples/tutorials/mnist/mnist_deep.py)" 12 | ] 13 | }, 14 | { 15 | "cell_type": "markdown", 16 | "metadata": {}, 17 | "source": [ 18 | "### Import modules" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "\"\"\"A deep MNIST classifier using convolutional layers.\n", 28 | "See extensive documentation at\n", 29 | "https://www.tensorflow.org/get_started/mnist/pros\n", 30 | "\"\"\"\n", 31 | "from __future__ import absolute_import\n", 32 | "from __future__ import 
division\n", 33 | "from __future__ import print_function\n", 34 | "\n", 35 | "from tensorflow.examples.tutorials.mnist import input_data\n", 36 | "\n", 37 | "import tensorflow as tf\n", 38 | "\n", 39 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))\n", 40 | "\n", 41 | "tf.set_random_seed(219)" 42 | ] 43 | }, 44 | { 45 | "cell_type": "markdown", 46 | "metadata": {}, 47 | "source": [ 48 | "### Import data" 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": null, 54 | "metadata": {}, 55 | "outputs": [], 56 | "source": [ 57 | "data_dir = '../mnist'\n", 58 | "mnist = input_data.read_data_sets(data_dir, one_hot=True)" 59 | ] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": {}, 64 | "source": [ 65 | "### Show the MNIST" 66 | ] 67 | }, 68 | { 69 | "cell_type": "code", 70 | "execution_count": null, 71 | "metadata": {}, 72 | "outputs": [], 73 | "source": [ 74 | "import numpy as np\n", 75 | "import matplotlib.pyplot as plt\n", 76 | "\n", 77 | "index = 3000\n", 78 | "print(\"label = \", np.argmax(mnist.train.labels[index]))\n", 79 | "plt.imshow(mnist.train.images[index].reshape(28, 28), cmap='gray')\n", 80 | "plt.show()" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "## Create the model" 88 | ] 89 | }, 90 | { 91 | "cell_type": "markdown", 92 | "metadata": {}, 93 | "source": [ 94 | "#### Weight Initialization" 95 | ] 96 | }, 97 | { 98 | "cell_type": "code", 99 | "execution_count": null, 100 | "metadata": {}, 101 | "outputs": [], 102 | "source": [ 103 | "def weight_variable(shape):\n", 104 | " \"\"\"weight_variable generates a weight variable of a given shape.\"\"\"\n", 105 | " initial = tf.truncated_normal(shape, stddev=0.1)\n", 106 | " return tf.Variable(initial)\n", 107 | "\n", 108 | "\n", 109 | "def bias_variable(shape):\n", 110 | " \"\"\"bias_variable generates a bias variable of a given shape.\"\"\"\n", 111 | " initial = tf.constant(0.1, shape=shape)\n", 112 | " return tf.Variable(initial)" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "#### Convolution and Pooling\n", 120 | "\n", 121 | "* [`tf.nn.conv2d`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d)\n", 122 | "* [`tf.nn.max_pool`](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool)" 123 | ] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "execution_count": null, 128 | "metadata": {}, 129 | "outputs": [], 130 | "source": [ 131 | "def conv2d(x, W):\n", 132 | " \"\"\"conv2d returns a 2d convolution layer with full stride.\"\"\"\n", 133 | " return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')\n", 134 | "\n", 135 | "def max_pool_2x2(x):\n", 136 | " \"\"\"max_pool_2x2 downsamples a feature map by 2X.\"\"\"\n", 137 | " return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],\n", 138 | " strides=[1, 2, 2, 1], padding='SAME')" 139 | ] 140 | }, 141 | { 142 | "cell_type": "markdown", 143 | "metadata": {}, 144 | "source": [ 145 | "#### Create the model" 146 | ] 147 | }, 148 | { 149 | "cell_type": "code", 150 | "execution_count": null, 151 | "metadata": {}, 152 | "outputs": [], 153 | "source": [ 154 | "def deepnn(x):\n", 155 | " \"\"\"deepnn builds the graph for a deep net for classifying digits.\n", 156 | " Args:\n", 157 | " x: an input tensor with the dimensions (N_examples, 784), where 784 is the\n", 158 | " number of pixels in a standard MNIST image.\n", 159 | " Returns:\n", 160 | " A tuple (y, keep_prob). 
y is a tensor of shape (N_examples, 10), with values\n", 161 | " equal to the logits of classifying the digit into one of 10 classes (the\n", 162 | " digits 0-9). keep_prob is a scalar placeholder for the probability of\n", 163 | " dropout.\n", 164 | " \"\"\"\n", 165 | " # Reshape to use within a convolutional neural net.\n", 166 | " # Last dimension is for \"features\" - there is only one here, since images are\n", 167 | " # grayscale -- it would be 3 for an RGB image, 4 for RGBA, etc.\n", 168 | " with tf.name_scope('reshape'):\n", 169 | " x_image = tf.reshape(x, [-1, 28, 28, 1])\n", 170 | "\n", 171 | " # First convolutional layer - maps one grayscale image to 32 feature maps.\n", 172 | " with tf.name_scope('conv1'):\n", 173 | " W_conv1 = weight_variable([5, 5, 1, 32])\n", 174 | " b_conv1 = bias_variable([32])\n", 175 | " h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)\n", 176 | "\n", 177 | " # Pooling layer - downsamples by 2X.\n", 178 | " with tf.name_scope('pool1'):\n", 179 | " h_pool1 = max_pool_2x2(h_conv1)\n", 180 | "\n", 181 | " # Second convolutional layer -- maps 32 feature maps to 64.\n", 182 | " with tf.name_scope('conv2'):\n", 183 | " W_conv2 = weight_variable([5, 5, 32, 64])\n", 184 | " b_conv2 = bias_variable([64])\n", 185 | " h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)\n", 186 | "\n", 187 | " # Second pooling layer.\n", 188 | " with tf.name_scope('pool2'):\n", 189 | " h_pool2 = max_pool_2x2(h_conv2)\n", 190 | "\n", 191 | " # Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image\n", 192 | " # is down to 7x7x64 feature maps -- maps this to 1024 features.\n", 193 | " with tf.name_scope('fc1'):\n", 194 | " W_fc1 = weight_variable([7 * 7 * 64, 1024])\n", 195 | " b_fc1 = bias_variable([1024])\n", 196 | "\n", 197 | " h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])\n", 198 | " h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)\n", 199 | "\n", 200 | " # Dropout - controls the complexity of the model, prevents co-adaptation of\n", 201 | " # features.\n", 202 | " with tf.name_scope('dropout'):\n", 203 | " keep_prob = tf.placeholder(tf.float32)\n", 204 | " h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n", 205 | "\n", 206 | " # Map the 1024 features to 10 classes, one for each digit\n", 207 | " with tf.name_scope('fc2'):\n", 208 | " W_fc2 = weight_variable([1024, 10])\n", 209 | " b_fc2 = bias_variable([10])\n", 210 | "\n", 211 | " y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2\n", 212 | " return y_conv, keep_prob" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "### Create the placeholder" 220 | ] 221 | }, 222 | { 223 | "cell_type": "code", 224 | "execution_count": null, 225 | "metadata": {}, 226 | "outputs": [], 227 | "source": [ 228 | "x = tf.placeholder(tf.float32, [None, 784])\n", 229 | "y_ = tf.placeholder(tf.float32, [None, 10])" 230 | ] 231 | }, 232 | { 233 | "cell_type": "markdown", 234 | "metadata": {}, 235 | "source": [ 236 | "### Build the model and define loss and optimizer" 237 | ] 238 | }, 239 | { 240 | "cell_type": "code", 241 | "execution_count": null, 242 | "metadata": {}, 243 | "outputs": [], 244 | "source": [ 245 | "# Build the graph for the deep net\n", 246 | "y_conv, keep_prob = deepnn(x)\n", 247 | "\n", 248 | "with tf.name_scope('loss'):\n", 249 | " cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_,\n", 250 | " logits=y_conv)\n", 251 | "cross_entropy = tf.reduce_mean(cross_entropy)\n", 252 | "\n", 253 | "with tf.name_scope('adam_optimizer'):\n", 
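" # minimize() adds both the gradient computation and the parameter-update ops under this name scope\n",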
254 | " train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)\n", 255 | "\n", 256 | "with tf.name_scope('accuracy'):\n", 257 | " correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))\n", 258 | " correct_prediction = tf.cast(correct_prediction, tf.float32)\n", 259 | "accuracy = tf.reduce_mean(correct_prediction)\n", 260 | "\n", 261 | "graph_location = 'graphs/code11_mnist_deep'\n", 262 | "print('Saving graph to: %s' % graph_location)\n", 263 | "train_writer = tf.summary.FileWriter(graph_location)\n", 264 | "train_writer.add_graph(tf.get_default_graph())" 265 | ] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "metadata": {}, 270 | "source": [ 271 | "### tf.InteractiveSession() and train" 272 | ] 273 | }, 274 | { 275 | "cell_type": "code", 276 | "execution_count": null, 277 | "metadata": { 278 | "scrolled": true 279 | }, 280 | "outputs": [], 281 | "source": [ 282 | "sess = tf.InteractiveSession(config=sess_config)\n", 283 | "sess.run(tf.global_variables_initializer())\n", 284 | "\n", 285 | "max_step = 100\n", 286 | "for i in range(max_step+1):\n", 287 | " batch = mnist.train.next_batch(32)\n", 288 | " _, ce = sess.run([train_step, cross_entropy], feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})\n", 289 | " if i % 10 == 0:\n", 290 | " train_accuracy = sess.run(accuracy, feed_dict={x: batch[0],\n", 291 | " y_: batch[1],\n", 292 | " keep_prob: 1.0})\n", 293 | " print('step %d, training accuracy %g, cross_entropy %g' % (i, train_accuracy, ce))" 294 | ] 295 | }, 296 | { 297 | "cell_type": "markdown", 298 | "metadata": {}, 299 | "source": [ 300 | "### Test trained model\n", 301 | "\n", 302 | "* test accuracy: 0.8518" 303 | ] 304 | }, 305 | { 306 | "cell_type": "code", 307 | "execution_count": null, 308 | "metadata": {}, 309 | "outputs": [], 310 | "source": [ 311 | "print('test accuracy %g' % sess.run(accuracy, feed_dict={\n", 312 | " x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))" 313 | ] 314 | }, 315 | { 316 | "cell_type": "markdown", 317 | "metadata": {}, 318 | "source": [ 319 | "### test images inference" 320 | ] 321 | }, 322 | { 323 | "cell_type": "code", 324 | "execution_count": null, 325 | "metadata": {}, 326 | "outputs": [], 327 | "source": [ 328 | "import numpy as np\n", 329 | "import matplotlib.pyplot as plt\n", 330 | "%matplotlib inline\n", 331 | "\n", 332 | "test_batch_size = 16\n", 333 | "batch_xs, _ = mnist.test.next_batch(test_batch_size)\n", 334 | "y_pred = sess.run(y_conv, feed_dict={x: batch_xs, keep_prob: 1.0})\n", 335 | "\n", 336 | "fig = plt.figure(figsize=(16, 10))\n", 337 | "for i, (px, py) in enumerate(zip(batch_xs, y_pred)):\n", 338 | " p = fig.add_subplot(4, 8, i+1)\n", 339 | " p.set_title(\"y_pred: {}\".format(np.argmax(py)))\n", 340 | " p.imshow(px.reshape(28, 28), cmap='gray')\n", 341 | " p.axis('off')" 342 | ] 343 | }, 344 | { 345 | "cell_type": "markdown", 346 | "metadata": {}, 347 | "source": [ 348 | "## Hands-on practice\n", 349 | "\n", 350 | "* Try changing various hyper-parameters to improve the accuracy" 351 | ] 352 | } 353 | ], 354 | "metadata": { 355 | "kernelspec": { 356 | "display_name": "Python 3", 357 | "language": "python", 358 | "name": "python3" 359 | }, 360 | "language_info": { 361 | "codemirror_mode": { 362 | "name": "ipython", 363 | "version": 3 364 | }, 365 | "file_extension": ".py", 366 | "mimetype": "text/x-python", 367 | "name": "python", 368 | "nbconvert_exporter": "python", 369 | "pygments_lexer": "ipython3", 370 | "version": "3.6.2" 371 | } 372 | }, 373 | "nbformat": 4, 374 | "nbformat_minor": 2 375 | } 376 | 
-------------------------------------------------------------------------------- /05_tools/tensorflow_codes/code12_mnist_deep_slim.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# MNIST convolutional neural networks with slim\n", 8 | "\n", 9 | "* Let's build **convolutional neural networks** on the MNIST data.\n", 10 | " * [Reference: TensorFlow.org](https://www.tensorflow.org/get_started/mnist/pros)\n", 11 | " * [Source: mnist_deep.py](https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/examples/tutorials/mnist/mnist_deep.py), modified to use slim\n", 12 | " * [`tf.contrib.slim` reference](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim)" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "metadata": {}, 18 | "source": [ 19 | "### Import modules" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": {}, 26 | "outputs": [], 27 | "source": [ 28 | "\"\"\"A deep MNIST classifier using convolutional layers.\n", 29 | "See extensive documentation at\n", 30 | "https://www.tensorflow.org/get_started/mnist/pros\n", 31 | "\"\"\"\n", 32 | "from __future__ import absolute_import\n", 33 | "from __future__ import division\n", 34 | "from __future__ import print_function\n", 35 | "\n", 36 | "from tensorflow.examples.tutorials.mnist import input_data\n", 37 | "\n", 38 | "import tensorflow as tf\n", 39 | "\n", 40 | "slim = tf.contrib.slim\n", 41 | "\n", 42 | "tf.set_random_seed(219)" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "### Import data" 50 | ] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": null, 55 | "metadata": {}, 56 | "outputs": [], 57 | "source": [ 58 | "data_dir = '../mnist'\n", 59 | "mnist = input_data.read_data_sets(data_dir, one_hot=True)" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "metadata": {}, 65 | "source": [ 66 | "### Show the MNIST" 67 | ] 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": null, 72 | "metadata": {}, 73 | "outputs": [], 74 | "source": [ 75 | "import numpy as np\n", 76 | "import matplotlib.pyplot as plt\n", 77 | "\n", 78 | "index = 999\n", 79 | "print(\"label = \", np.argmax(mnist.train.labels[index]))\n", 80 | "plt.imshow(mnist.train.images[index].reshape(28, 28), cmap='gray')\n", 81 | "plt.show()" 82 | ] 83 | }, 84 | { 85 | "cell_type": "markdown", 86 | "metadata": {}, 87 | "source": [ 88 | "### Create the model\n", 89 | "\n", 90 | "* [`tf.contrib.layers`](https://www.tensorflow.org/api_guides/python/contrib.layers)\n", 91 | "* [`tf.contrib.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/conv2d)\n", 92 | "* [`tf.contrib.layers.max_pool2d`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/max_pool2d)" 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": null, 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "def deepnn_slim(x):\n", 102 | " \"\"\"deepnn builds the graph for a deep net for classifying digits.\n", 103 | " Args:\n", 104 | " x: an input tensor with the dimensions (N_examples, 784), where 784 is the\n", 105 | " number of pixels in a standard MNIST image.\n", 106 | " Returns:\n", 107 | " A tuple (y, keep_prob). y is a tensor of shape (N_examples, 10), with values\n", 108 | " equal to the logits of classifying the digit into one of 10 classes (the\n", 109 | " digits 0-9). 
keep_prob is a scalar placeholder for the probability of\n", 110 | " dropout.\n", 111 | " \"\"\"\n", 112 | " # Reshape to use within a convolutional neural net.\n", 113 | " # Last dimension is for \"features\" - there is only one here, since images are\n", 114 | " # grayscale -- it would be 3 for an RGB image, 4 for RGBA, etc.\n", 115 | " with tf.name_scope('reshape'):\n", 116 | " x_image = tf.reshape(x, [-1, 28, 28, 1])\n", 117 | "\n", 118 | " # First convolutional layer - maps one grayscale image to 32 feature maps.\n", 119 | " h_conv1 = slim.conv2d(x_image, 32, [5, 5], scope='conv1')\n", 120 | "\n", 121 | " # Pooling layer - downsamples by 2X.\n", 122 | " h_pool1 = slim.max_pool2d(h_conv1, [2, 2], scope='pool1')\n", 123 | "\n", 124 | " # Second convolutional layer -- maps 32 feature maps to 64.\n", 125 | " h_conv2 = slim.conv2d(h_pool1, 64, [5, 5], scope='conv2')\n", 126 | "\n", 127 | " # Second pooling layer.\n", 128 | " h_pool2 = slim.max_pool2d(h_conv2, [2, 2], scope='pool2')\n", 129 | "\n", 130 | " # Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image\n", 131 | " # is down to 7x7x64 feature maps -- maps this to 1024 features.\n", 132 | " h_pool2_flat = slim.flatten(h_pool2, scope='flatten')\n", 133 | " h_fc1 = slim.fully_connected(h_pool2_flat, 1024, scope='fc1')\n", 134 | "\n", 135 | " # Dropout - controls the complexity of the model, prevents co-adaptation of\n", 136 | " # features.\n", 137 | " keep_prob = tf.placeholder(tf.float32)\n", 138 | " h_fc1_drop = slim.dropout(h_fc1, keep_prob, scope='dropout')\n", 139 | "\n", 140 | " # Map the 1024 features to 10 classes, one for each digit\n", 141 | " y_conv = slim.fully_connected(h_fc1_drop, 10, activation_fn=None, scope='fc2')\n", 142 | " \n", 143 | " return y_conv, keep_prob" 144 | ] 145 | }, 146 | { 147 | "cell_type": "markdown", 148 | "metadata": {}, 149 | "source": [ 150 | "### Create the placeholder" 151 | ] 152 | }, 153 | { 154 | "cell_type": "code", 155 | "execution_count": null, 156 | "metadata": {}, 157 | "outputs": [], 158 | "source": [ 159 | "x = tf.placeholder(tf.float32, [None, 784])\n", 160 | "y_ = tf.placeholder(tf.float32, [None, 10])" 161 | ] 162 | }, 163 | { 164 | "cell_type": "markdown", 165 | "metadata": {}, 166 | "source": [ 167 | "### Build the model and define loss and optimizer\n", 168 | "\n", 169 | "* [`tf.losses.softmax_cross_entropy`](https://www.tensorflow.org/api_docs/python/tf/losses/softmax_cross_entropy)" 170 | ] 171 | }, 172 | { 173 | "cell_type": "code", 174 | "execution_count": null, 175 | "metadata": {}, 176 | "outputs": [], 177 | "source": [ 178 | "# Build the graph for the deep net\n", 179 | "y_conv, keep_prob = deepnn_slim(x)\n", 180 | "\n", 181 | "#with tf.name_scope('loss'):\n", 182 | "# cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_,\n", 183 | "# logits=y_conv)\n", 184 | "#cross_entropy = tf.reduce_mean(cross_entropy)\n", 185 | "cross_entropy = tf.losses.softmax_cross_entropy(onehot_labels=y_,\n", 186 | " logits=y_conv,\n", 187 | " scope='loss')\n", 188 | "\n", 189 | "with tf.name_scope('adam_optimizer'):\n", 190 | " train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)\n", 191 | "\n", 192 | "with tf.name_scope('accuracy'):\n", 193 | " correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))\n", 194 | " correct_prediction = tf.cast(correct_prediction, tf.float32)\n", 195 | "accuracy = tf.reduce_mean(correct_prediction)\n", 196 | "\n", 197 | "graph_location = 'graphs/code12_mnist_slim'\n", 198 | "print('Saving graph to: %s' % 
graph_location)\n", 199 | "train_writer = tf.summary.FileWriter(graph_location)\n", 200 | "train_writer.add_graph(tf.get_default_graph())" 201 | ] 202 | }, 203 | { 204 | "cell_type": "markdown", 205 | "metadata": {}, 206 | "source": [ 207 | "### tf.Session() and train" 208 | ] 209 | }, 210 | { 211 | "cell_type": "code", 212 | "execution_count": null, 213 | "metadata": {}, 214 | "outputs": [], 215 | "source": [ 216 | "sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))\n", 217 | "sess = tf.Session(config=sess_config)\n", 218 | "sess.run(tf.global_variables_initializer())\n", 219 | "\n", 220 | "max_step = 100\n", 221 | "for step in range(max_step+1):\n", 222 | " batch = mnist.train.next_batch(32)\n", 223 | " feed_dict_train = {x: batch[0],\n", 224 | " y_: batch[1],\n", 225 | " keep_prob: 0.5}\n", 226 | " feed_dict_eval = {x: batch[0],\n", 227 | " y_: batch[1],\n", 228 | " keep_prob: 1.0}\n", 229 | " _, ce = sess.run([train_step, cross_entropy], feed_dict=feed_dict_train)\n", 230 | " if step % 10 == 0:\n", 231 | " train_accuracy = sess.run(accuracy, feed_dict=feed_dict_eval)\n", 232 | " print('step %d, training accuracy %g, cross_entropy %g' % (step, train_accuracy, ce))" 233 | ] 234 | }, 235 | { 236 | "cell_type": "markdown", 237 | "metadata": {}, 238 | "source": [ 239 | "### Test trained model\n", 240 | "\n", 241 | "* test accuracy: 0.8308" 242 | ] 243 | }, 244 | { 245 | "cell_type": "code", 246 | "execution_count": null, 247 | "metadata": {}, 248 | "outputs": [], 249 | "source": [ 250 | "print('test accuracy %g' % sess.run(accuracy, feed_dict={\n", 251 | " x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))" 252 | ] 253 | }, 254 | { 255 | "cell_type": "markdown", 256 | "metadata": {}, 257 | "source": [ 258 | "### Test image inference" 259 | ] 260 | }, 261 | { 262 | "cell_type": "code", 263 | "execution_count": null, 264 | "metadata": {}, 265 | "outputs": [], 266 | "source": [ 267 | "import numpy as np\n", 268 | "import matplotlib.pyplot as plt\n", 269 | "%matplotlib inline\n", 270 | "\n", 271 | "test_batch_size = 16\n", 272 | "batch_xs, _ = mnist.test.next_batch(test_batch_size)\n", 273 | "y_pred = sess.run(y_conv, feed_dict={x: batch_xs, keep_prob: 1.0})\n", 274 | "\n", 275 | "fig = plt.figure(figsize=(16, 10))\n", 276 | "for i, (px, py) in enumerate(zip(batch_xs, y_pred)):\n", 277 | " p = fig.add_subplot(4, 8, i+1)\n", 278 | " p.set_title(\"y_pred: {}\".format(np.argmax(py)))\n", 279 | " p.imshow(px.reshape(28, 28), cmap='gray')\n", 280 | " p.axis('off')" 281 | ] 282 | }, 283 | { 284 | "cell_type": "markdown", 285 | "metadata": {}, 286 | "source": [ 287 | "## Hands-on practice\n", 288 | "\n", 289 | "* Try changing various hyper-parameters to raise the accuracy (see the tuning sketch right after this notebook)" 290 | ] 291 | } 292 | ], 293 | "metadata": { 294 | "kernelspec": { 295 | "display_name": "Python 3", 296 | "language": "python", 297 | "name": "python3" 298 | }, 299 | "language_info": { 300 | "codemirror_mode": { 301 | "name": "ipython", 302 | "version": 3 303 | }, 304 | "file_extension": ".py", 305 | "mimetype": "text/x-python", 306 | "name": "python", 307 | "nbconvert_exporter": "python", 308 | "pygments_lexer": "ipython3", 309 | "version": "3.6.2" 310 | } 311 | }, 312 | "nbformat": 4, 313 | "nbformat_minor": 2 314 | } 315 | 
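The closing "Hands-on practice" cell of `code12_mnist_deep_slim.ipynb` asks you to vary hyper-parameters until the accuracy beats the short 100-step run above. Below is a minimal sketch of such a sweep, not part of the original notebook: it assumes TensorFlow 1.x and reuses the `deepnn_slim` function and the `mnist` dataset object defined in the notebook; the `train_once` helper and the candidate learning rates and batch sizes are illustrative choices only.

```python
import tensorflow as tf

def train_once(learning_rate, batch_size, keep_prob_value, max_step=1000):
    """Rebuild the notebook's graph with the given hyper-parameters and
    return the resulting test accuracy."""
    tf.reset_default_graph()  # fresh graph so slim's variable scopes don't collide
    tf.set_random_seed(219)
    x = tf.placeholder(tf.float32, [None, 784])
    y_ = tf.placeholder(tf.float32, [None, 10])
    y_conv, keep_prob = deepnn_slim(x)  # model definition from the notebook above
    loss = tf.losses.softmax_cross_entropy(onehot_labels=y_, logits=y_conv)
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)
    correct = tf.cast(tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1)), tf.float32)
    accuracy = tf.reduce_mean(correct)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(max_step):
            xs, ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: xs, y_: ys, keep_prob: keep_prob_value})
        return sess.run(accuracy, feed_dict={x: mnist.test.images,
                                             y_: mnist.test.labels,
                                             keep_prob: 1.0})

# Illustrative grid; training longer (max_step) matters at least as much as
# the exact learning rate when trying to beat the 0.8308 baseline above.
for lr in (1e-3, 1e-4):
    for bs in (32, 128):
        acc = train_once(lr, bs, keep_prob_value=0.5)
        print('lr=%g, batch_size=%d -> test accuracy %.4f' % (lr, bs, acc))
```

Resetting the default graph before each configuration keeps the runs independent; without it, a second call to `deepnn_slim` would fail because the `conv1`/`fc1` variable scopes already exist.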
-------------------------------------------------------------------------------- /05_tools/tool lecture tf.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/05_tools/tool lecture tf.pdf -------------------------------------------------------------------------------- /06_latex/example/example01.tex: -------------------------------------------------------------------------------- 1 | \documentclass{article} 2 | \begin{document} 3 | Hello, \LaTeX. 4 | \end{document} 5 | -------------------------------------------------------------------------------- /06_latex/example/example02.tex: -------------------------------------------------------------------------------- 1 | \documentclass[a4paper,11pt]{article} 2 | 3 | % define the title 4 | \author{H.~Partl} 5 | \title{Minimalism} 6 | 7 | \begin{document} 8 | % generates the title 9 | \maketitle 10 | 11 | % insert the table of contents 12 | \tableofcontents 13 | 14 | \section{Some Interesting Words} 15 | Well, and here begins my lovely article. 16 | 17 | \section{Good Bye World} 18 | \ldots{} and here it ends. 19 | 20 | \end{document} 21 | -------------------------------------------------------------------------------- /06_latex/example/example03.tex: -------------------------------------------------------------------------------- 1 | \documentclass{article} 2 | 3 | \begin{document} 4 | 5 | "In" its most general form, convolution is an operation on two functions 6 | of a real-valued argument. 7 | ''To motivate 'the' definition'' of convolution, we start with examples of 8 | two functions we might use. 9 | 10 | Suppose we are tracking the location of a spaceship with a laser sensor. 11 | Our laser sensor provides a single output $x(t)$, the position of the 12 | spaceship at time $t$. 13 | Both $x$ and $t$ are real-valued, i.e., we can get a different reading 14 | from the laser sensor at any instant in time.\\ 15 | Now suppose that our laser sensor is somewhat noisy. 16 | To obtain a less noisy estimate of the spaceship’s position, 17 | we would like to average together several measurements. 18 | Of course, more recent measurements are more relevant, so we will 19 | want this to be a weighted average that gives more weight to recent 20 | measurements. 21 | \newpage 22 | We can do this with a weighting function $w(a)$, where a is the age of 23 | a measurement. 24 | If we apply such a weighted average operation at every moment, we obtain 25 | a new function $s$ providing a smoothed estimate of the position 26 | of the spaceship: 27 | 28 | \end{document} 29 | -------------------------------------------------------------------------------- /06_latex/example/example04.tex: -------------------------------------------------------------------------------- 1 | \documentclass[a4paper,11pt]{article} 2 | 3 | \usepackage{kotex} 4 | 5 | % define the title 6 | \author{Il Gu Yi} 7 | \title{Structure of text} 8 | \date{\today} 9 | 10 | \begin{document} 11 | % generates the title 12 | \maketitle 13 | 14 | % insert the table of contents 15 | \tableofcontents 16 | 17 | \section{나는 section} 18 | 그냥 아무말이나 써야겠다. 19 | 그냥 아무말이나 써야겠다. 20 | 그냥 아무말이나 써야겠다. 21 | 22 | \subsection{나는 subsection} 23 | \subsubsection{나는 subsubsection} 24 | 그냥 아무말이나 써야겠다. 25 | 그냥 아무말이나 써야겠다. 26 | 그냥 아무말이나 써야겠다. 27 | 28 | 29 | \section{나도 section} 30 | \subsection{나도 subsection} 31 | \subsubsection{나도 subsubsection} 32 | 그냥 아무말이나 써야겠다. 33 | 그냥 아무말이나 써야겠다. 34 | 그냥 아무말이나 써야겠다. 35 | 36 | \subsubsection*{나도 subsubsection} 37 | 그냥 아무말이나 써야겠다. 38 | 그냥 아무말이나 써야겠다. 39 | 그냥 아무말이나 써야겠다. 
40 | 41 | 42 | \paragraph{나는 paragraph} 43 | blar blar blar blar blar blar blar blar 44 | blar blar blar blar blar blar blar blar 45 | blar blar blar blar blar blar blar blar 46 | 47 | \subparagraph{나는 subparagraph} 48 | blar blar blar blar blar blar blar blar 49 | blar blar blar blar blar blar blar blar 50 | blar blar blar blar blar blar blar blar 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | \end{document} 69 | -------------------------------------------------------------------------------- /06_latex/example/example05.tex: -------------------------------------------------------------------------------- 1 | \documentclass[a4paper,11pt]{article} 2 | 3 | \usepackage{kotex} 4 | 5 | % define the title 6 | \author{Il Gu Yi} 7 | \title{Structure of text} 8 | \date{\today} 9 | 10 | \begin{document} 11 | % generates the title 12 | \maketitle 13 | 14 | % insert the table of contents 15 | \tableofcontents 16 | 17 | \newpage 18 | \section{나는 section} 19 | \label{sec:this} 20 | 그냥 아무말이나 써야겠다. 21 | 그냥 아무말이나 써야겠다. 22 | 이번 section~\ref{sec:this}은 그냥 아무말이나 써야겠다. 23 | 그냥 아무말이나 써야겠다. 24 | 25 | 26 | \subsection{나는 subsection} 27 | \subsubsection{나는 subsubsection} 28 | 그냥 아무말이나 써야겠다. 29 | 그냥 아무말이나 써야겠다. 30 | 그냥 아무말이나 써야겠다. 31 | 32 | 33 | \newpage 34 | \section{나도 section} 35 | \subsection{나도 subsection} 36 | \subsubsection{나도 subsubsection} 37 | 그냥 아무말이나 써야겠다. 38 | 그냥 아무말이나 써야겠다. 39 | 이전 section~\ref{sec:this}은 page~\pageref{sec:this} 에 있다. 40 | 그냥 아무말이나 써야겠다. 41 | 42 | \subsubsection*{나도 subsubsection} 43 | 그냥 아무말이나 써야겠다. 44 | 그냥 아무말이나 써야겠다. 45 | 그냥 아무말이나 써야겠다. 46 | 47 | 48 | \paragraph{나는 paragraph} 49 | blar blar blar blar blar blar blar blar 50 | blar blar blar blar blar blar blar blar 51 | blar blar blar blar blar blar blar blar 52 | 53 | \subparagraph{나는 subparagraph} 54 | blar blar blar blar blar blar blar blar 55 | blar blar blar blar blar blar blar blar 56 | blar blar blar blar blar blar blar blar 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | \end{document} 75 | -------------------------------------------------------------------------------- /06_latex/example/example06.tex: -------------------------------------------------------------------------------- 1 | \documentclass[a4paper,11pt]{article} 2 | 3 | \usepackage{kotex} 4 | 5 | % define the title 6 | \author{Il Gu Yi} 7 | \title{Structure of text} 8 | \date{\today} 9 | 10 | \begin{document} 11 | % generates the title 12 | \maketitle 13 | 14 | % insert the table of contents 15 | \tableofcontents 16 | 17 | \newpage 18 | \section{나는 section} 19 | \label{sec:this} 20 | 그냥 아무말이나 써야겠다. 21 | 이번 section~\ref{sec:this}은 그냥 아무말이나 써야겠다. 22 | 그냥 아무말이나 써야겠다. 23 | 24 | \flushleft 25 | \begin{enumerate} 26 | \item You can mix the list environments to your taste: 27 | \begin{itemize} 28 | \item But it might start to look silly. 29 | \item[-] With a dash. 30 | \end{itemize} 31 | \item Therefore remember: 32 | \begin{description} 33 | \item[Stupid] things will not 34 | become smart because they are in a list. 35 | \item[Smart] things, though, can be 36 | presented beautifully in a list. 37 | \end{description} 38 | \end{enumerate} 39 | 40 | 41 | 42 | \newpage 43 | \section{나도 section} 44 | \subsection{나도 subsection} 45 | \subsubsection{나도 subsubsection} 46 | 그냥 아무말이나 써야겠다. 47 | 그냥 아무말이나 써야겠다. 48 | 이전 section~\ref{sec:this}은 page~\pageref{sec:this} 에 있다. 49 | 그냥 아무말이나 써야겠다. 50 | 51 | \subsubsection*{나도 subsubsection} 52 | 그냥 아무말이나 써야겠다. 53 | 그냥 아무말이나 써야겠다. 54 | 그냥 아무말이나 써야겠다. 
55 | 56 | 57 | \paragraph{나는 paragraph} 58 | blar blar blar blar blar blar blar blar 59 | blar blar blar blar blar blar blar blar 60 | blar blar blar blar blar blar blar blar 61 | 62 | \subparagraph{나는 subparagraph} 63 | blar blar blar blar blar blar blar blar 64 | blar blar blar blar blar blar blar blar 65 | blar blar blar blar blar blar blar blar 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 | 79 | 80 | 81 | 82 | 83 | \end{document} 84 | -------------------------------------------------------------------------------- /06_latex/example/example07.tex: -------------------------------------------------------------------------------- 1 | \documentclass[a4paper,11pt]{article} 2 | 3 | \usepackage{kotex} 4 | 5 | % define the title 6 | \author{Il Gu Yi} 7 | \title{Structure of text} 8 | \date{\today} 9 | 10 | \begin{document} 11 | % generates the title 12 | \maketitle 13 | 14 | % insert the table of contents 15 | \tableofcontents 16 | 17 | \newpage 18 | \section{나는 section} 19 | \label{sec:this} 20 | 그냥 아무말이나 써야겠다. 21 | 이번 section~\ref{sec:this}은 그냥 아무말이나 써야겠다. 22 | 그냥 아무말이나 써야겠다. 23 | 24 | \flushleft 25 | \begin{enumerate} 26 | \item You can mix the list environments to your taste: 27 | \begin{itemize} 28 | \item But it might start to look silly. 29 | \item[-] With a dash. 30 | \end{itemize} 31 | \item Therefore remember: 32 | \begin{description} 33 | \item[Stupid] things will not 34 | become smart because they are in a list. 35 | \item[Smart] things, though, can be 36 | presented beautifully in a list. 37 | \end{description} 38 | \end{enumerate} 39 | 40 | 41 | 42 | \newpage 43 | \section{나도 section} 44 | \subsection{나도 subsection} 45 | \subsubsection{나도 subsubsection} 46 | 그냥 아무말이나 써야겠다. 47 | 그냥 아무말이나 써야겠다. 48 | 이전 section~\ref{sec:this}은 page~\pageref{sec:this} 에 있다. 49 | 그냥 아무말이나 써야겠다. 50 | 51 | \subsubsection*{나도 subsubsection} 52 | 그냥 아무말이나 써야겠다. 53 | 그냥 아무말이나 써야겠다. 54 | 그냥 아무말이나 써야겠다. 55 | 56 | 57 | \subsubsection{reference cite 연습} 58 | Ian Goodfellow~\cite{Goodfellow2015}는 너무 재밌어. 59 | Sutton~\cite{Sutton2018} 아저씨 책은 너무 어려워 ㅠㅠ. 60 | PRML~\cite{Bishop2006}은 기본이지. 61 | 62 | 63 | 64 | \paragraph{나는 paragraph} 65 | blar blar blar blar blar blar blar blar 66 | blar blar blar blar blar blar blar blar 67 | blar blar blar blar blar blar blar blar 68 | 69 | \subparagraph{나는 subparagraph} 70 | blar blar blar blar blar blar blar blar 71 | blar blar blar blar blar blar blar blar 72 | blar blar blar blar blar blar blar blar 73 | 74 | 75 | \newpage 76 | \bibliographystyle{unsrt} 77 | \bibliography{example07_bib} 78 | 79 | 80 | 81 | \end{document} 82 | -------------------------------------------------------------------------------- /06_latex/example/example07_bib.bib: -------------------------------------------------------------------------------- 1 | @article{Newman1999, 2 | abstract = {This book provides an introduction to Monte Carlo simulations in classical statistical physics and is aimed both at students beginning work in the field and at more experienced researchers who wish to learn more about Monte Carlo methods. It includes methods for both equilibrium and out of equilibrium systems, and discusses in detail such common algorithms as the Metropolis and heat-bath algorithms, as well as more sophisticated ones such as continuous time Monte Carlo, cluster algorithms, multigrid methods, entropic sampling and simulated tempering. 
Data analysis techniques are also explained starting with straightforward measurement and error-estimation techniques and progressing to topics such as the single and multiple histogram methods and finite size scaling. The last few chapters of the book are devoted to implementation issues, including lattice representations, efficient implementation of data structures, multispin coding, parallelization of Monte Carlo algorithms, and random number generation. The book also includes example programs which show how to apply these techniques to a variety of well-known models.}, 3 | author = {Newman, Mej and Barkema, Gt}, 4 | file = {:Users/ilguyi/Dropbox/앱/Mendeley/1999/Newman, Barkema/1999 - Newman, Barkema - Monte Carlo Methods in Statistical Physics.pdf:pdf}, 5 | isbn = {978-0-19-851797-9}, 6 | pages = {496}, 7 | title = {{Monte Carlo Methods in Statistical Physics}}, 8 | url = {http://global.oup.com/academic/product/monte-carlo-methods-in-statistical-physics-9780198517979;jsessionid=6390B822300F1BB12A0B0173949B9515?cc=us{\&}lang=en{\&}}, 9 | year = {1999} 10 | } 11 | 12 | @book{Sutton2018, 13 | author = {Sutton, Richard S. and Barto, Andrew G.}, 14 | file = {:Users/ilguyi/Dropbox/앱/Mendeley/2018/Sutton, Barto/2018 - Sutton, Barto - Reinforcement Learning An Introduction.pdf:pdf}, 15 | isbn = {9780262193986}, 16 | title = {{Reinforcement Learning: An Introduction}}, 17 | year = {2018} 18 | } 19 | 20 | @book{Bishop2006, 21 | abstract = {The dramatic growth in practical applications for machine learning over the last ten years has been accompanied by many important developments in the underlying algorithms and techniques. For example, Bayesian methods have grown from a specialist niche to become mainstream, while graphical models have emerged as a general framework for describing and applying probabilistic techniques. The practical applicability of Bayesian methods has been greatly enhanced by the development of a range of approximate inference algorithms such as variational Bayes and expectation propagation, while new models based on kernels have had a significant impact on both algorithms and applications. This completely new textbook reflects these recent developments while providing a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first-year PhD students, as well as researchers and practitioners. No previous knowledge of pattern recognition or machine learning concepts is assumed. Familiarity with multivariate calculus and basic linear algebra is required, and some experience in the use of probabilities would be helpful though not essential as the book includes a self-contained introduction to basic probability theory. The book is suitable for courses on machine learning, statistics, computer science, signal processing, computer vision, data mining, and bioinformatics. Extensive support is provided for course instructors, including more than 400 exercises, graded according to difficulty. Example solutions for a subset of the exercises are available from the book web site, while solutions for the remainder can be obtained by instructors from the publisher. The book is supported by a great deal of additional material, and the reader is encouraged to visit the book web site for the latest information. 
A forthcoming companion volume will deal with practical aspects of pattern recognition and machine learning, and will include free software implementations of the key algorithms along with example data sets and demonstration programs. Christopher Bishop is Assistant Director at Microsoft Research Cambridge, and also holds a Chair in Computer Science at the University of Edinburgh. He is a Fellow of Darwin College Cambridge, and was recently elected Fellow of the Royal Academy of Engineering. The author's previous textbook "Neural Networks for Pattern Recognition" has been widely adopted.}, 22 | archivePrefix = {arXiv}, 23 | arxivId = {0-387-31073-8}, 24 | author = {Bishop, Christopher M CM Christopher M.}, 25 | booktitle = {Pattern Recognition}, 26 | doi = {10.1117/1.2819119}, 27 | eprint = {0-387-31073-8}, 28 | file = {:Users/ilguyi/Dropbox/앱/Mendeley/2006/Bishop/2006 - Bishop - Pattern Recognition and Machine Learning.pdf:pdf}, 29 | isbn = {978-0387310732}, 30 | issn = {10179909}, 31 | number = {4}, 32 | pages = {738}, 33 | pmid = {8943268}, 34 | title = {{Pattern Recognition and Machine Learning}}, 35 | url = {http://soic.iupui.edu/syllabi/semesters/4142/INFO{\_}B529{\_}Liu{\_}s.pdf{\%}5Cnhttp://www.library.wisc.edu/selectedtocs/bg0137.pdf{\%}5Cnhttp://www.library.wisc.edu/selectedtocs/bg0137.pdf}, 36 | volume = {4}, 37 | year = {2006} 38 | } 39 | 40 | @book{James2000, 41 | abstract = {3'-Azido-2',3'-dideoxythymidine (AZT, 1, zidovudine, RetrovirTM) is used to treat patients with human immunodeficiency virus (HIV) infection. AZT, after conversion to AZT-5'-triphosphate (AZT-TP) by cellular enzymes, inhibits HIV-reverse transcriptase (HIV-RT). The major clinical limitations of AZT are due to clinical toxicities that include bone marrow suppression, hepatic abnormalities and myopathy, absolute dependence on host cell kinase-mediated activation which leads to low activity, limited brain uptake, a short half-life of about one hour in plasma that dictates frequent administration to maintain therapeutic drug levels, low potential for metabolic activation and/or high susceptibility to catabolism, and the rapid development of resistance by HIV-1. These limitations have prompted the development of strategies for designing prodrugs of AZT. A variety of 5'-O-substituted prodrugs of AZT constitute the subject of this review. The drug-design rationale on which these approaches are based is that the ester conjugate will be converted by hydrolysis and/or enzymatic cleavage to AZT or its 5{\&}prime;-monophosphate (AZT-MP). 
Most prodrug derivatives of AZT have been prepared by derivatization of AZT at its 5'-O position to provide two prominent classes of compounds that encompass: A) 5'-O-carboxylic esters derived from 1) cyclic 5'-O-carboxylic acids such as steroidal 17b-carboxylic acids, 1-adamantanecarboxylic acid, bicyclam carboxylic acid derivatives, O-acetylsalicylic acid, and carbohydrate derivatives, 2) amino acids, 3) 1, 4-dihydro-1-methyl-3-pyridinylcarboxylic acid, 4) aliphatic fatty acid analogs such as myristic acid containing a heteroatom, or without a heteroatom such as stearic acid, and 5) long chain polyunsaturated fatty acid analogs such as retinoic acid, and B) masked phosphates such as 1) phosphodiesters that include monoalkyl or monoaryl phosphate, carbohydrate, ether lipid, ester lipid, and foscarnet derivatives, 2) a variety of phosphotriesters that include dialkylphosphotriesters, diarylphosphotriesters, glycolate and lactate phosphotriesters, phosphotriester approaches using simultaneous enzymatic and chemical hydrolysis of bis(4-acyloxybenzyl) esters, bis(S-acyl-2-thioethyl) (SATE) esters, cyclosaligenyl prodrugs, glycosyl phosphotriesters, and steroidal phosphotriesters, 3) phosphoramidate derivatives, 4) dinucleoside phosphate derivatives that possess a second anti-HIV moiety such as AZT-P-ddA, AZT-P-ddI, AZTP2AZT, AZTP2ACV), and 5) 5'-hydrogen phosphonate and 5'-methylene phosphonate derivatives of AZT. In these prodrugs, the conjugating moiety is linked to AZT via a 5'-O-ester or 5'-O-phosphate group. 5'-O-Substituted AZT prodrugs have been designed with the objectives of improving anti-HIV activity, enhancing blood-brain barrier penetration, modifying pharmacokinetic properties to increase plasma half-life and improving drug delivery with respect to site-specific targeting or drug localization. Bypassing the first phosphorylation step, regulating transport and conferring sustained release of AZT prolong its duration of action, decrease toxicity and improve patient acceptability. The properties of these prodrugs and their anti-HIV activities are now reviewed.}, 42 | archivePrefix = {arXiv}, 43 | arxivId = {arXiv:1011.1669v3}, 44 | author = {James, Gareth and Witten, Daniela and Hastie, Trevor and Tibshirani, Robert}, 45 | booktitle = {Current medicinal chemistry}, 46 | doi = {10.1007/978-1-4614-7138-7}, 47 | eprint = {arXiv:1011.1669v3}, 48 | file = {:Users/ilguyi/Dropbox/앱/Mendeley/2000/James et al/2000 - James et al. - An introduction to Statistical Learning.pdf:pdf}, 49 | isbn = {978-1-4614-7137-0}, 50 | issn = {0929-8673}, 51 | number = {10}, 52 | pages = {995--1039}, 53 | pmid = {10911016}, 54 | title = {{An introduction to Statistical Learning}}, 55 | volume = {7}, 56 | year = {2000} 57 | } 58 | 59 | @article{Hastie2001, 60 | abstract = {During the past decade there has been an explosion in computation and information technology. With it has come a vast amount of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. 
While the approach is statistical, the emphasis is on concepts rather than mathematics.}, 61 | archivePrefix = {arXiv}, 62 | arxivId = {1010.3003}, 63 | author = {Hastie, Trevor and Tibshirani, Robert and Friedman, Jerome}, 64 | doi = {10.1198/jasa.2004.s339}, 65 | eprint = {1010.3003}, 66 | file = {:Users/ilguyi/Dropbox/앱/Mendeley/2001/Hastie, Tibshirani, Friedman/2001 - Hastie, Tibshirani, Friedman - The Elements of Statistical Learning.pdf:pdf}, 67 | isbn = {978-0-387-84857-0}, 68 | issn = {03436993}, 69 | journal = {The Mathematical Intelligencer}, 70 | keywords = {inger series in statistics}, 71 | number = {2}, 72 | pages = {83--85}, 73 | pmid = {21196786}, 74 | title = {{The Elements of Statistical Learning}}, 75 | url = {http://www.springerlink.com/index/D7X7KX6772HQ2135.pdf{\%}255Cnhttp://www-stat.stanford.edu/{~}tibs/book/preface.ps}, 76 | volume = {27}, 77 | year = {2001} 78 | } 79 | 80 | @book{NikhilKetkar2017, 81 | author = {{Nikhil Ketkar}}, 82 | file = {:Users/ilguyi/Dropbox/앱/Mendeley/2017/Nikhil Ketkar/2017 - Nikhil Ketkar - Deep Learning with Python.pdf:pdf}, 83 | isbn = {9781617294433}, 84 | title = {{Deep Learning with Python}}, 85 | year = {2017} 86 | } 87 | 88 | @article{Nie2014, 89 | author = {Nie, Jian-yun and Eds, Yansong Feng}, 90 | file = {:Users/ilguyi/Dropbox/앱/Mendeley/2014/Nie, Eds/2014 - Nie, Eds - Natural Language Processing.pdf:pdf}, 91 | isbn = {9783662459232}, 92 | title = {{Natural Language Processing}}, 93 | year = {2014} 94 | } 95 | 96 | @article{Barber2010, 97 | abstract = {Machine learning methods extract value from vast data sets quickly and with modest resources. They are established tools in a wide range of industrial applications, including search engines, DNA sequencing, stock market analysis, and robot locomotion, and their use is spreading rapidly. People who know the methods have their choice of rewarding jobs. This hands-on text opens these opportunities to computer science students with modest mathematical backgrounds. It is designed for final-year undergraduates and master's students with limited background in linear algebra and calculus. Comprehensive and coherent, it develops everything from basic reasoning to advanced techniques within the framework of graphical models. Students learn more than a menu of techniques, they develop analytical and problem-solving skills that equip them for the real world. Numerous examples and exercises, both computer based and theoretical, are included in every chapter. 
Resources for students and instructors, including a MATLAB toolbox, are available online.}, 98 | archivePrefix = {arXiv}, 99 | arxivId = {arXiv:1011.1669v3}, 100 | author = {Barber, David}, 101 | doi = {10.1017/CBO9780511804779}, 102 | eprint = {arXiv:1011.1669v3}, 103 | file = {:Users/ilguyi/Dropbox/앱/Mendeley/2010/Barber/2010 - Barber - Bayesian Reasoning And Machine Learning.pdf:pdf}, 104 | isbn = {9780521518147}, 105 | issn = {9780521518147}, 106 | keywords = {Computational,Information-Theoretic Learning with,Learning/Statistics {\&} Optimisation,Theory {\&} Algorithms}, 107 | pages = {610}, 108 | pmid = {16931139}, 109 | title = {{Bayesian Reasoning And Machine Learning}}, 110 | year = {2010} 111 | } 112 | 113 | @article{Goodfellow2015, 114 | abstract = {www.deeplearningbook.org}, 115 | author = {Goodfellow, Ian and Bengio, Yoshua and Courville, Aaron}, 116 | file = {:Users/ilguyi/Dropbox/앱/Mendeley/2015/Goodfellow, Bengio, Courville/2015 - Goodfellow, Bengio, Courville - Deep Learning.pdf:pdf}, 117 | journal = {deeplearningbook}, 118 | title = {{Deep Learning}}, 119 | year = {2015} 120 | } 121 | -------------------------------------------------------------------------------- /06_latex/figures/1_3_1_whitespace.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/1_3_1_whitespace.png -------------------------------------------------------------------------------- /06_latex/figures/1_3_2_special_character.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/1_3_2_special_character.png -------------------------------------------------------------------------------- /06_latex/figures/1_3_3_latex_command1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/1_3_3_latex_command1.png -------------------------------------------------------------------------------- /06_latex/figures/1_3_3_latex_command2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/1_3_3_latex_command2.png -------------------------------------------------------------------------------- /06_latex/figures/1_3_3_latex_command3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/1_3_3_latex_command3.png -------------------------------------------------------------------------------- /06_latex/figures/1_3_4_comments.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/1_3_4_comments.png -------------------------------------------------------------------------------- /06_latex/figures/2_10_emph1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_10_emph1.png -------------------------------------------------------------------------------- 
/06_latex/figures/2_10_emph2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_10_emph2.png -------------------------------------------------------------------------------- /06_latex/figures/2_11_1_environment.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_11_1_environment.png -------------------------------------------------------------------------------- /06_latex/figures/2_11_5_verbatim.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_11_5_verbatim.png -------------------------------------------------------------------------------- /06_latex/figures/2_11_6_tabular1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_11_6_tabular1.png -------------------------------------------------------------------------------- /06_latex/figures/2_11_6_tabular2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_11_6_tabular2.png -------------------------------------------------------------------------------- /06_latex/figures/2_11_6_tabular3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_11_6_tabular3.png -------------------------------------------------------------------------------- /06_latex/figures/2_11_6_tabular4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_11_6_tabular4.png -------------------------------------------------------------------------------- /06_latex/figures/2_4_2_dash.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_4_2_dash.png -------------------------------------------------------------------------------- /06_latex/figures/2_4_3_tilde.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_4_3_tilde.png -------------------------------------------------------------------------------- /06_latex/figures/2_4_8_accent_characters1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_4_8_accent_characters1.png -------------------------------------------------------------------------------- /06_latex/figures/2_4_8_accent_characters2.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_4_8_accent_characters2.png -------------------------------------------------------------------------------- /06_latex/figures/2_7_sturcture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_7_sturcture.png -------------------------------------------------------------------------------- /06_latex/figures/2_8_ref.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_8_ref.png -------------------------------------------------------------------------------- /06_latex/figures/2_9_footnote.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/2_9_footnote.png -------------------------------------------------------------------------------- /06_latex/figures/3_10_math_fonts.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_10_math_fonts.png -------------------------------------------------------------------------------- /06_latex/figures/3_1_1_inline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_1_1_inline.png -------------------------------------------------------------------------------- /06_latex/figures/3_1_2_displaymath.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_1_2_displaymath.png -------------------------------------------------------------------------------- /06_latex/figures/3_1_3_equation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_1_3_equation.png -------------------------------------------------------------------------------- /06_latex/figures/3_1_4_math_mode.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_1_4_math_mode.png -------------------------------------------------------------------------------- /06_latex/figures/3_2_binding.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_2_binding.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_10_sum_prod.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_10_sum_prod.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_11_bracket.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_11_bracket.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_12_big_bracket.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_12_big_bracket.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_1_greek.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_1_greek.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_2_script.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_2_script.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_3_sqrt.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_3_sqrt.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_4_underline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_4_underline.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_5_underbrace.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_5_underbrace.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_6_derivative.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_6_derivative.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_7_dot_product.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_7_dot_product.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_8_log.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_8_log.png -------------------------------------------------------------------------------- /06_latex/figures/3_3_9_frac.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_3_9_frac.png 
-------------------------------------------------------------------------------- /06_latex/figures/3_4_quad.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_4_quad.png -------------------------------------------------------------------------------- /06_latex/figures/3_5_1_matrix.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_5_1_matrix.png -------------------------------------------------------------------------------- /06_latex/figures/3_5_2_matrix.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_5_2_matrix.png -------------------------------------------------------------------------------- /06_latex/figures/3_5_3_long_equation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_5_3_long_equation.png -------------------------------------------------------------------------------- /06_latex/figures/3_9_boldsymbol.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/3_9_boldsymbol.png -------------------------------------------------------------------------------- /06_latex/figures/modu.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/06_latex/figures/modu.png -------------------------------------------------------------------------------- /07.NLP/README.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/07.NLP/README.md -------------------------------------------------------------------------------- /08.MEDICAL/README.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/08.MEDICAL/README.md -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Deep Learning College (DLC) Lectures 2 | 3 | Hello! This is the home for the DLC (Deep Learning College) lectures. 4 | 5 | [TOC] 6 | 7 | ## Schedule of the 3-Month Basic Course 8 | 9 | | Week | Monday | Wednesday | 10 | | :---------------------- | ------------------------------------------------------------ | :----------------------------------------------------------- | 11 | | Week 1 (Aug 13, 16) | Orientation | Core course: Deep Learning Basics-1 | 12 | | Week 2 (Aug 20, 22) | Core course: Deep Learning Basics-2 | Core course: Deep Learning Basics-3 | 13 | | Week 3 (Aug 27, 29) | Core course: Math-1 | Core course: Math-2 | 14 | | Week 4 (Sep 3, 5) | Core course: Math-3 | Core course: Deep Learning Basics-4 | 15 | | Week 5 (Sep 10, 12) | Core course: Math-4 | Core course: Deep Learning Basics-5 | 16 | | Week 6 (Sep 17, 19) | Core course: Math-5 | Core course: Math-6 | 17 | | Week 7 (Oct 1, 4) | Core course: Deep Learning Basics-6 | Core course: Deep Learning Basics-7 | 18 | | Week 8 (Oct 8, 10) | Major course - 1 (medical imaging, NLP)<br>Special lecture: Prof. 정지훈 (medical imaging) | Major course - 2 (medical imaging, NLP)<br>Special lecture: VUNO (medical imaging) | 19 | | Week 9 (Oct 15, 17) | Major course - 3 (medical imaging, NLP)<br>Special lecture: DoAI | Major course - 4 (medical imaging, NLP) | 20 | | Week 10 (Oct 22, 24) | Major course - 5 (medical imaging, NLP) | Major course - 6 (medical imaging, NLP) | 21 | | Week 11 (Oct 29, 31) | Major course - 7 (medical imaging, NLP) | Major course - 8 (medical imaging, NLP) | 22 | 23 | 24 | 25 | ## DLC Curriculum 26 | 27 | - Once stage 1, the 3-month **basic course**, is complete, you move on to stage 2, the **research project** stage. 28 | - The stage-2 research project period is divided into a **team project** and an **individual project**. 29 | - **Team project stage (4 months)**: each team selects a topic, carries out the team project, and finally writes a short paper. 30 | - The team meets offline once a week. 31 | 32 | ![researchProject1](./images/researchProject1.png) 33 | 34 | - **Individual project stage (5 months)**: you select an individual project topic and then write a short paper about the project. 35 | - The group meets offline once a week. 36 | 37 | ![researchProject2](./images/researchProject2.png) 38 | 39 | 40 | 41 | ## Instructors for the 3-Month Basic Course 42 | 43 | | Course | Instructor | Photo | 44 | | --------------- | ------------------------------------------------------------ | --------------------------------------------------------- | 45 | | Deep Learning Math | - **이주희**<br>- Currently: Research Professor, Institute of Mathematical Sciences, Ewha Womans University<br>- Member of the Modulabs *IoT Lab*<br>- Member of the Modulabs *제멋대로 딥러닝* lab | | 46 | | | | | 47 | | Understanding Deep Learning | - **박은수**<br>- Currently: Research Director, Modulabs<br>- Taught "Convolutional Neural Network" (Inha University)<br>- Taught "Python Basics" (Inha University)<br>- Taught "Hands-on! Practical Deep Learning with Keras" (Samsung Multicampus) | | 48 | | | | | 49 | | Deep Learning Image Processing | - **박은수**<br>- Same background as for Understanding Deep Learning above. | | 50 | | *Course assistant* | - **이일구 (Il Gu Yi)**<br>- Currently: Research Scientist, Modulabs<br>- Taught "Hands-on! Deep Learning with TensorFlow" (Samsung Multicampus)<br>- Member of the Modulabs DeepLAB paper-reading group | | 51 | 52 | 53 | ![RevSlider_modulabs_dlc01](./images/RevSlider_modulabs_dlc01.png) 54 | -------------------------------------------------------------------------------- /images/RevSlider_modulabs_dlc01.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/images/RevSlider_modulabs_dlc01.png -------------------------------------------------------------------------------- /images/ilguyi.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/images/ilguyi.png -------------------------------------------------------------------------------- /images/photo_park.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/images/photo_park.png -------------------------------------------------------------------------------- /images/researchProject1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/images/researchProject1.png -------------------------------------------------------------------------------- /images/researchProject2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/moduDLC/Lectures/261b34d64811eecaf73673801a0eb990c32a9118/images/researchProject2.png --------------------------------------------------------------------------------