├── 01_probability_theory
│   ├── 01_random_variables.pdf
│   ├── 02. expected_value.pdf
│   ├── 03_probability_distribution_and_deeplearning.pdf
│   ├── 04_parameters_in_deep_learning.pdf
│   ├── 05_odds_logit_sigmoid_and_softmax.pdf
│   ├── 06_sampling_representation_and_monte_carlo_approximation.pdf
│   ├── 07_stochastic.pdf
│   ├── 07_stochastic.py
│   ├── 08_01_Bayesian_vs_frequentist.pdf
│   ├── 08_02_Bayes_theorem_example_and_simulation.pdf
│   ├── 08_02_Bayes_theorem_simulation.py
│   ├── 08_03_Bayes_theorem_applying_into_deeplearning.pdf
│   ├── 09_Maximum_Likelihood_Estimation_(MLE).pdf
│   ├── 10_Maximum_A_Posterior_(MAP).pdf
│   └── 11_bayesian_neural_networks.pdf
├── 02_linear_algebra
│   ├── 01_orientation_and_motivations.pdf
│   ├── 02_intro_definition_notation.pdf
│   ├── 03_matrix_operation.pdf
│   ├── 04_determinant_and_inverse_matrix.pdf
│   ├── 05_vector_introduction.pdf
│   ├── 06_vector_addtion_scalar_mulitplication.pdf
│   ├── 07_vector_product.pdf
│   ├── 08_vector_vector_space.pdf
│   ├── 09_vector_linear_combination_and_span.pdf
│   ├── 10_vector_join_vector_and_matrix_linear_transformation.pdf
│   ├── 11_vector_transformation_with_matrix.pdf
│   ├── 12_vector_matrix_composition.pdf
│   ├── 13_vector_determinant_with_figures.pdf
│   ├── 14_from_vector_to_tensor.pdf
│   ├── 15_linear_empathy.pdf
│   ├── 16_eigenvector_eigenvalue.pdf
│   ├── 17_dimension_reduction_expansion.pdf
│   ├── 18_rank_in_matrix.pdf
│   ├── 19_information_compression_expansion.pdf
│   └── 20_adjourning.pdf
├── 03_differentiation
│   ├── 01_orientation.pdf
│   ├── 02_learning_in_deeplearning.pdf
│   ├── 03_principal_of_differentiation.pdf
│   ├── 04_partial_differentiation.pdf
│   ├── 04_partial_differentiation_law_of_exponents1.jpg
│   ├── 05_gradient_descent.pdf
│   ├── 06_chain_rule.pdf
│   ├── 07_matrix_differentiation.pdf
│   ├── 07_matrix_differentiation_excercise_slide.pdf
│   ├── 07_matrix_differentiation_excercise_toy_example.py
│   ├── 08_back_propagation.pdf
│   ├── 09_gradient_vanishing.pdf
│   └── 10_adjourning.pdf
├── 04_set_theory
│   └── 01_intepretation_of_number_set.pdf
├── 05_information_theory
│   ├── 01_orientation.pdf
│   ├── 02_information.pdf
│   ├── 03_1_information_entropy_theory.pdf
│   ├── 03_2_information_entropy_practice.pdf
│   ├── 04_entropy_in_deeplearning.pdf
│   ├── 05_entropy_loss.pdf
│   ├── 06_KL_divergence.pdf
│   ├── 07_adjourning.pdf
│   ├── entropy_loss.py
│   └── rolling_dices.py
└── README.md

/01_probability_theory/01_random_variables.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/01_random_variables.pdf
--------------------------------------------------------------------------------
/01_probability_theory/02. expected_value.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/02. expected_value.pdf
--------------------------------------------------------------------------------
/01_probability_theory/03_probability_distribution_and_deeplearning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/03_probability_distribution_and_deeplearning.pdf
--------------------------------------------------------------------------------
/01_probability_theory/04_parameters_in_deep_learning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/04_parameters_in_deep_learning.pdf
--------------------------------------------------------------------------------
/01_probability_theory/05_odds_logit_sigmoid_and_softmax.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/05_odds_logit_sigmoid_and_softmax.pdf
--------------------------------------------------------------------------------
/01_probability_theory/06_sampling_representation_and_monte_carlo_approximation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/06_sampling_representation_and_monte_carlo_approximation.pdf
--------------------------------------------------------------------------------
/01_probability_theory/07_stochastic.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/07_stochastic.pdf
--------------------------------------------------------------------------------
/01_probability_theory/07_stochastic.py:
--------------------------------------------------------------------------------
import random
import math


def rolling_die_1():
    '''Return one integer between 1 and 6'''
    return random.randint(1, 6)


def rolling_die_2():
    '''Return a randomly chosen integer between 1 and 6'''
    return random.choice([1, 2, 3, 4, 5, 6])


def run_rolling(test_num=10):
    '''Roll the die test_num times'''
    result = []
    for _ in range(test_num):
        result.append(rolling_die_2())
    print(result)


def die_prob_estimator(target: str, trials: int):
    '''Estimate the probability of rolling a desired sequence of faces'''
    hit_count = 0
    for _ in range(trials):
        result = ''
        for _ in range(len(target)):
            result += str(rolling_die_2())
        if target == result:
            hit_count += 1
    prob_estimation = round(hit_count/trials, 6)
    print(f'Estimated Probability of {target}: {prob_estimation}')


def same_birthday_prob(
    num_people: int,
    num_same: int
    ) -> bool:
    '''Check whether num_same or more people share a birthday'''
    possible_date = range(365)  # Feb 29 is ignored (365-day year)
    birthdays = [0] * 365
    for _ in range(num_people):
        birth_date = random.choice(possible_date)
        birthdays[birth_date] += 1
    return max(birthdays) >= num_same  # return boolean


def birthday_estimator(
    num_people: int,
    num_same: int,
    trials: int) -> float:
    '''Estimate the probability that a birthday is shared among num_people'''
    hit_count = 0
    for _ in range(trials):
        if same_birthday_prob(num_people, num_same):
            hit_count += 1
    return hit_count/trials


def run_birthday_simulation(
    num_same: int,
    peoples: list) -> None:
    '''Run the shared-birthday simulation'''
    for people in peoples:
        print('---'*10)
        estimated_probability = round(birthday_estimator(people, num_same, 1000), 3)
        # Closed-form answer (valid for num_same == 2 only)
        numerator = math.factorial(365)
        denominator = (365**people)*math.factorial(365-people)
        theoretical_prob = round(1 - numerator/denominator, 3)
        print(f'Probability of a shared birthday among {people} people: \n\
            simulation: {estimated_probability}\ttheoretical: {theoretical_prob}')


if __name__=='__main__':
    # die_prob_estimator('11111', 1000000)
    run_birthday_simulation(2, [10, 20, 40, 100])
    # run_birthday_simulation(3, [10, 20, 40, 100])
    # run_birthday_simulation(4, [10, 20, 40, 100])
--------------------------------------------------------------------------------
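The theoretical value printed by `run_birthday_simulation` above is the classic birthday-problem result: with 365 equally likely birthdays, the probability that at least two of $n$ people share a birthday is

$$P(n) = 1 - \frac{365!}{365^{\,n}\,(365-n)!},$$

where the fraction is the probability that all $n$ birthdays are distinct; this is exactly what `numerator/denominator` computes. Note that this closed form covers only `num_same = 2`; for three or more shared birthdays the script has only the simulated estimate to report.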
/01_probability_theory/08_01_Bayesian_vs_frequentist.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/08_01_Bayesian_vs_frequentist.pdf
--------------------------------------------------------------------------------
/01_probability_theory/08_02_Bayes_theorem_example_and_simulation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/08_02_Bayes_theorem_example_and_simulation.pdf
--------------------------------------------------------------------------------
/01_probability_theory/08_02_Bayes_theorem_simulation.py:
--------------------------------------------------------------------------------
'''Simulation of Bayesian Inference'''

import random
from typing import Union

from scipy.special import comb


def experiment(n: int = 8, x: int = 3) -> Union[None, str]:
    '''Bayesian experiment'''
    # Bob's per-game win probability (a random draw)
    prob_bob_win = random.random()

    # Likelihood of reaching the current score, Alice 5 : Bob 3
    prob_current_status = comb(n, x) * prob_bob_win**x * (1-prob_bob_win)**(n-x)

    # Check whether the Alice-5, Bob-3 state actually occurred
    test_result = None  # outcome of this experiment
    prob_test = random.random()  # random draw used to test whether the state occurred
    # If the draw falls below the likelihood -> treat the state as having occurred
    if prob_test < prob_current_status:
        # Probability that Bob wins three games in a row
        prob_bob_three_in_a_row = prob_bob_win ** x
        # Check whether Bob actually won
        if random.random() < prob_bob_three_in_a_row:
            test_result = 'Bob'
        else:
            test_result = 'Alice'

    return test_result


def run_simulation(trials: int = 100000):
    '''Conduct the simulation for the Bayesianist'''
    result = [experiment() for _ in range(trials)]
    bob_wins = result.count('Bob')
    alice_wins = result.count('Alice')
    print(f'# Experiment: {trials}')
    print(f'Bayesian prob : {bob_wins/(bob_wins + alice_wins)*100:.2f}%')
    print('Frequentist prob: 5.3%')  # (3/8)**3: Bob must win three straight games


if __name__=='__main__':
    run_simulation()
--------------------------------------------------------------------------------
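The simulation above estimates the Bayesian answer to the Alice-and-Bob game by rejection sampling, so its output fluctuates from run to run. The same quantity has a closed form: with a uniform prior on Bob's per-game win probability $p$, observing 3 wins in 8 games yields a Beta(4, 6) posterior, under which the probability that Bob wins three straight games is $E[p^3] = \frac{4 \cdot 5 \cdot 6}{10 \cdot 11 \cdot 12} = \frac{1}{11} \approx 9.1\%$. A minimal analytic cross-check (this snippet is illustrative and not part of the repository):

```python
# Posterior over Bob's per-game win probability p after Alice 5 : Bob 3,
# starting from a uniform prior: Beta(3 + 1, 5 + 1) = Beta(4, 6).
alpha, beta = 4, 6

# E[p^3] under Beta(a, b) = prod over i in 0..2 of (a + i) / (a + b + i)
bayes_prob = 1.0
for i in range(3):
    bayes_prob *= (alpha + i) / (alpha + beta + i)

freq_prob = (3 / 8) ** 3  # frequentist plug-in estimate

print(f'Bayesian prob : {bayes_prob * 100:.2f}%')   # 9.09%
print(f'Frequentist prob: {freq_prob * 100:.2f}%')  # 5.27%
```

Running `run_simulation()` should land close to the 9.09% figure.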
/01_probability_theory/08_03_Bayes_theorem_applying_into_deeplearning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/08_03_Bayes_theorem_applying_into_deeplearning.pdf
--------------------------------------------------------------------------------
/01_probability_theory/09_Maximum_Likelihood_Estimation_(MLE).pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/09_Maximum_Likelihood_Estimation_(MLE).pdf
--------------------------------------------------------------------------------
/01_probability_theory/10_Maximum_A_Posterior_(MAP).pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/10_Maximum_A_Posterior_(MAP).pdf
--------------------------------------------------------------------------------
/01_probability_theory/11_bayesian_neural_networks.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/01_probability_theory/11_bayesian_neural_networks.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/01_orientation_and_motivations.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/01_orientation_and_motivations.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/02_intro_definition_notation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/02_intro_definition_notation.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/03_matrix_operation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/03_matrix_operation.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/04_determinant_and_inverse_matrix.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/04_determinant_and_inverse_matrix.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/05_vector_introduction.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/05_vector_introduction.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/06_vector_addtion_scalar_mulitplication.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/06_vector_addtion_scalar_mulitplication.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/07_vector_product.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/07_vector_product.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/08_vector_vector_space.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/08_vector_vector_space.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/09_vector_linear_combination_and_span.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/09_vector_linear_combination_and_span.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/10_vector_join_vector_and_matrix_linear_transformation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/10_vector_join_vector_and_matrix_linear_transformation.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/11_vector_transformation_with_matrix.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/11_vector_transformation_with_matrix.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/12_vector_matrix_composition.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/12_vector_matrix_composition.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/13_vector_determinant_with_figures.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/13_vector_determinant_with_figures.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/14_from_vector_to_tensor.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/14_from_vector_to_tensor.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/15_linear_empathy.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/15_linear_empathy.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/16_eigenvector_eigenvalue.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/16_eigenvector_eigenvalue.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/17_dimension_reduction_expansion.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/17_dimension_reduction_expansion.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/18_rank_in_matrix.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/18_rank_in_matrix.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/19_information_compression_expansion.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/19_information_compression_expansion.pdf
--------------------------------------------------------------------------------
/02_linear_algebra/20_adjourning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/02_linear_algebra/20_adjourning.pdf
--------------------------------------------------------------------------------
/03_differentiation/01_orientation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/01_orientation.pdf
--------------------------------------------------------------------------------
/03_differentiation/02_learning_in_deeplearning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/02_learning_in_deeplearning.pdf
--------------------------------------------------------------------------------
/03_differentiation/03_principal_of_differentiation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/03_principal_of_differentiation.pdf
--------------------------------------------------------------------------------
/03_differentiation/04_partial_differentiation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/04_partial_differentiation.pdf
--------------------------------------------------------------------------------
/03_differentiation/04_partial_differentiation_law_of_exponents1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/04_partial_differentiation_law_of_exponents1.jpg
--------------------------------------------------------------------------------
/03_differentiation/05_gradient_descent.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/05_gradient_descent.pdf
--------------------------------------------------------------------------------
/03_differentiation/06_chain_rule.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/06_chain_rule.pdf
--------------------------------------------------------------------------------
/03_differentiation/07_matrix_differentiation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/07_matrix_differentiation.pdf
--------------------------------------------------------------------------------
/03_differentiation/07_matrix_differentiation_excercise_slide.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/07_matrix_differentiation_excercise_slide.pdf
--------------------------------------------------------------------------------
/03_differentiation/07_matrix_differentiation_excercise_toy_example.py:
--------------------------------------------------------------------------------
'''소프트웨어 꼰대강의 (Software Kkondae Lectures) - partial derivatives in a linear system (practice)'''

import torch
from torch import nn

class Network(nn.Module):
    '''Toy Network'''
    def __init__(self, input_size, hidden_size, output_size) -> None:
        super(Network, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        '''Forward Propagation'''
        x = self.fc1(x)
        x = self.relu(x)
        return self.fc2(x)


class Train():
    '''Toy Train'''
    def __init__(self) -> None:
        self.model = Network(4, 5, 2)

    def run(self,):
        '''Run network'''
        inputs = torch.randn(1, 4)
        output = self.model(inputs)

        # Compute the loss (sum of squared errors)
        label = torch.tensor([1.0, 0.0])
        loss = torch.sum((output - label)**2)
        print(f'Predict (Y_hat): {output}')
        print(f'Loss: {loss}')

        loss.backward()

        # Collect the gradients that backward() wrote into each parameter
        gradients = [param.grad for param in self.model.parameters()]

        print('Gradients Test')
        for idx, grad in enumerate(gradients):
            print('----' * 20)
            print(f'Layer {idx}, shape: {grad.shape}')
            print(grad)

if __name__=='__main__':
    train = Train()
    train.run()
--------------------------------------------------------------------------------
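The gradients printed by the toy example can be sanity-checked against the definition of the derivative. Below is a finite-difference probe of a single weight, written against the same `Network`/`Train` classes above (this helper is a sketch, not part of the repository):

```python
import torch

def grad_check(model, inputs, label, eps=1e-4):
    '''Compare autograd's gradient for one weight of fc1 against the
    central difference (L(w+eps) - L(w-eps)) / (2*eps).'''
    def loss_fn():
        return torch.sum((model(inputs) - label) ** 2)

    # Gradient computed by autograd
    model.zero_grad()
    loss_fn().backward()
    autograd_val = model.fc1.weight.grad[0, 0].item()

    # Gradient approximated by perturbing the same weight
    with torch.no_grad():  # modify the weight without recording a graph
        w = model.fc1.weight
        original = w[0, 0].item()
        w[0, 0] = original + eps
        loss_plus = loss_fn().item()
        w[0, 0] = original - eps
        loss_minus = loss_fn().item()
        w[0, 0] = original  # restore the weight
    numeric_val = (loss_plus - loss_minus) / (2 * eps)

    print(f'autograd: {autograd_val:.6f}  numeric: {numeric_val:.6f}')
```

The two numbers should agree to several decimal places, except in the rare case where the perturbation flips a ReLU unit across its kink.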
/03_differentiation/08_back_propagation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/08_back_propagation.pdf
--------------------------------------------------------------------------------
/03_differentiation/09_gradient_vanishing.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/09_gradient_vanishing.pdf
--------------------------------------------------------------------------------
/03_differentiation/10_adjourning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/03_differentiation/10_adjourning.pdf
--------------------------------------------------------------------------------
/04_set_theory/01_intepretation_of_number_set.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/04_set_theory/01_intepretation_of_number_set.pdf
--------------------------------------------------------------------------------
/05_information_theory/01_orientation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/05_information_theory/01_orientation.pdf
--------------------------------------------------------------------------------
/05_information_theory/02_information.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/05_information_theory/02_information.pdf
--------------------------------------------------------------------------------
/05_information_theory/03_1_information_entropy_theory.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/05_information_theory/03_1_information_entropy_theory.pdf
--------------------------------------------------------------------------------
/05_information_theory/03_2_information_entropy_practice.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/05_information_theory/03_2_information_entropy_practice.pdf
--------------------------------------------------------------------------------
/05_information_theory/04_entropy_in_deeplearning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/05_information_theory/04_entropy_in_deeplearning.pdf
--------------------------------------------------------------------------------
/05_information_theory/05_entropy_loss.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/05_information_theory/05_entropy_loss.pdf
--------------------------------------------------------------------------------
/05_information_theory/06_KL_divergence.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/05_information_theory/06_KL_divergence.pdf
--------------------------------------------------------------------------------
/05_information_theory/07_adjourning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kafa46/deeplearning_math/3be000044ebe7d857dfa6b36f2ffe468820afac2/05_information_theory/07_adjourning.pdf
--------------------------------------------------------------------------------
/05_information_theory/entropy_loss.py:
--------------------------------------------------------------------------------
'''소프트웨어 꼰대강의 (Software Kkondae Lectures)
Practice for cross entropy loss
'''

from typing import List
import numpy as np
import matplotlib.pyplot as plt

class BCELoss():
    '''Class: Binary Cross Entropy Loss'''
    def __init__(self, label: bool = True) -> None:
        self.label = label
        # Keep p strictly inside (0, 1) so log(p) and log(1-p) stay finite
        self.p = np.linspace(start=0.001, stop=0.999, num=1000)

    def get_cross_entropy(self) -> list:
        '''Return the entropy curve for the given label (True or False).
        Entropy = -p * log(p) -> -log(p)     for label True
                              -> -log(1 - p) for label False
        For either label, the log term is multiplied by 1 (the true probability).
        '''
        return -1*np.log2(self.p) if self.label else -1*np.log2(1-self.p)

    def get_squared_error(self) -> list:
        '''Return the squared error curve:
        (y - y_hat)^2, written out in expanded form
        '''
        if self.label:
            squared_error = 1 + (-2*self.p) + np.square(self.p)
        else:
            squared_error = 1 + (-2*(1-self.p)) + np.square((1-self.p))
        return squared_error

    def plot_data(
        self,
        x: np.ndarray,
        y: np.ndarray,
        title: str = None,
        y_label: str = None,
        ) -> None:
        '''Plot the given x and y values'''
        plt.xlabel('Probability')
        y_label = y_label if y_label else 'Result'
        plt.ylabel(y_label)
        title = title if title else 'no-title'
        plt.title(title)
        plt.plot(x, y)
        plt.savefig(f'{title}.png', dpi=300)
        plt.show()

    def combine_plots(
        self,
        ys: List[tuple],
        title: str = None,
        y_label: str = None
        ) -> None:
        '''Overlay several curves (datasets) on a single figure.
        Ex. ys = [('True', array), ('False', array), ...]
        '''
        plt.xlabel('Probability')
        y_label = y_label if y_label else '-plog(p)'
        plt.ylabel(y_label)
        if title:
            plt.title(title)
        for label, y in ys:
            plt.plot(self.p, y, label=label)
        plt.legend()
        plt.savefig(f'{title}.png', dpi=300)
        plt.show()


def main() -> None:
    '''main function'''

    # Handle the label True and label False cases
    for label in [True, False]:
        bce = BCELoss(label)
        bce.plot_data(
            x=bce.p,
            y=bce.get_cross_entropy(),
            title=f'Label_{bce.label}, log(p)',
            y_label='-plog(p)'
        )

    # Combined cross entropy plot
    ys_entropy = [
        (True, BCELoss(True).get_cross_entropy()),
        (False, BCELoss(False).get_cross_entropy()),
    ]
    BCELoss().combine_plots(ys=ys_entropy, title='BCE_combined')

    # Compare squared error and entropy for label True
    ys_squared_error = [
        ('entropy (True)', BCELoss(True).get_cross_entropy()),
        ('Squared Error (True)', BCELoss(True).get_squared_error()),
    ]
    BCELoss().combine_plots(ys=ys_squared_error, title='BCE+SE_combined_True', y_label='-plog(p)/SE')

    # Compare squared error and entropy for label False
    ys_squared_error = [
        ('entropy (False)', BCELoss(False).get_cross_entropy()),
        ('Squared Error (False)', BCELoss(False).get_squared_error()),
    ]
    BCELoss().combine_plots(ys=ys_squared_error, title='BCE+SE_combined_False', y_label='-plog(p)/SE')

    # Compare every squared error and entropy curve together
    ys_squared_error = [
        ('entropy (True)', BCELoss(True).get_cross_entropy()),
        ('Squared Error (True)', BCELoss(True).get_squared_error()),
        ('entropy (False)', BCELoss(False).get_cross_entropy()),
        ('Squared Error (False)', BCELoss(False).get_squared_error()),
    ]
    BCELoss().combine_plots(ys=ys_squared_error, title='BCE+SE_combined_all', y_label='-plog(p)/SE')


if __name__=='__main__':
    main()
    # bce = BCELoss(True)
    # ce = bce.get_cross_entropy()
    # se = bce.get_squared_error()
    # bce.plot_data(bce.p, se, title='bce_true', y_label='-plog(p)')
    # print(se)
--------------------------------------------------------------------------------
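The plots produced by `entropy_loss.py` contrast the shapes of the two losses; their derivatives explain why cross entropy is preferred for classification. For a true label of 1, the cross-entropy slope $\left|\frac{d}{dp}(-\log_2 p)\right| = \frac{1}{p \ln 2}$ grows without bound as the prediction $p \to 0$, while the squared-error slope $|-2(1-p)|$ never exceeds 2, so a confidently wrong prediction receives a far stronger corrective signal under cross entropy. A quick numeric comparison of those two formulas:

```python
import numpy as np

# Gradient magnitudes w.r.t. p for a true label of 1,
# at increasingly wrong (small) predicted probabilities
for p in [0.5, 0.1, 0.01, 0.001]:
    ce_grad = 1 / (p * np.log(2))  # |d/dp of -log2(p)|
    se_grad = 2 * (1 - p)          # |d/dp of (1 - p)^2|
    print(f'p={p:<6}  |CE grad| = {ce_grad:10.2f}   |SE grad| = {se_grad:.2f}')
```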
/05_information_theory/rolling_dices.py:
--------------------------------------------------------------------------------
'''Entropy exercise
Random variable X: the sum of the faces shown when n dice are rolled
'''

from itertools import product  # permutations with repetition
from functools import reduce
from math import log2
import matplotlib.pyplot as plt

OUTCOMES = [1, 2, 3, 4, 5, 6]


def get_permutations(num_dices: int = 2) -> list:
    '''All possible outcome permutations'''
    result = [x for x in product(OUTCOMES, repeat=num_dices)]
    return result


def get_probability(target_value: int = None, target_prob: float = None) -> dict:
    '''Probability of each die face'''
    if not target_value:
        equal_prob = 1/len(OUTCOMES)
        prob_dic = {value: equal_prob for value in OUTCOMES}
        return prob_dic
    elif not target_prob:
        # A target face was given without a probability for it
        print(f'No probability (target_prob) was given for the chosen face "{target_value}".')
        print('Falling back to equal probabilities for every face.')
        equal_prob = 1/len(OUTCOMES)
        prob_dic = {value: equal_prob for value in OUTCOMES}
        return prob_dic
    else:
        probability = (1.0 - target_prob)/(len(OUTCOMES)-1)
        prob_dic = {value: probability for value in OUTCOMES if value != target_value}
        prob_dic[target_value] = target_prob
        return prob_dic


def get_outcome_prob(outcome_set: set, prob_dic: dict) -> float:
    '''Probability of the given outcome'''
    prob_list = []
    for x in outcome_set:
        prob_list.append(prob_dic[x])
    prob_of_outcome = reduce(lambda x, y: x*y, prob_list)
    return prob_of_outcome


def compute_entropy(num_dices, target_value, target_prob) -> float:
    '''Compute the entropy of the resulting distribution'''
    outcome_set = get_permutations(num_dices)
    prob_dic = get_probability(target_value, target_prob)

    # Values of the random variable (sum of the faces)
    rand_var_list = [sum(outcome) for outcome in outcome_set]

    # Initialize the probability of each random-variable value
    rand_var_dic = {x: 0.0 for x in rand_var_list}

    # Accumulate the probability of each random-variable value
    for outcome in outcome_set:
        p = get_outcome_prob(outcome, prob_dic)
        random_var = sum(outcome)
        rand_var_dic[random_var] += p

    # Entropy of the random variable's distribution
    entropy = 0.0
    for p in rand_var_dic.values():
        if p > 0.0:  # skip zero-probability values; lim p*log(p) -> 0
            entropy += -1.0 * p * log2(p)

    # Visualize the distribution of the random variable
    x = rand_var_dic.keys()
    y = rand_var_dic.values()
    plt.bar(x, y)
    plt.xlabel('Random variable (sum of values)')
    plt.ylabel('Probability')
    target_prob = 'Equal' if not target_prob else target_prob
    plt.title(f'#Dices: {num_dices}, Tgt_val: {target_value}, Tgt_prob: {target_prob}, Entropy: {entropy: .2f}')
    plt.show()


if __name__=='__main__':
    num_dices = input('Number of dice to roll: ')
    num_dices = int(num_dices) if num_dices else 2

    target_value = input('Die face whose probability you want to fix (blank for a fair die): ')
    target_value = int(target_value) if target_value else None

    target_prob = input('Probability of that face (blank for equal probabilities): ')
    target_prob = float(target_prob) if target_prob else None

    # Compute the entropy
    compute_entropy(num_dices, target_value, target_prob)
--------------------------------------------------------------------------------
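As a reference point for what `compute_entropy` reports: a single fair die has entropy $\log_2 6 \approx 2.585$ bits, and the sum of two fair dice works out to about 3.27 bits — more possible outcomes, yet well below the $\log_2 11 \approx 3.46$ bits a uniform distribution over the 11 sums would give, because the distribution of sums is peaked around 7. A closed-form check using the same modules the script imports:

```python
from itertools import product
from math import log2

# Distribution of the sum of two fair dice
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

entropy = -sum((c / 36) * log2(c / 36) for c in counts.values())
print(f'Entropy of the sum of two fair dice: {entropy:.4f} bits')  # ~3.2744
print(f'Entropy of a single fair die:        {log2(6):.4f} bits')  # ~2.5850
```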
/README.md:
--------------------------------------------------------------------------------
<a name="top"></a>
# 소프트웨어 꼰대 강의 - Deep Learning Math


Hello! I am Professor Ki-seop Noh (노기섭), who runs the 소프트웨어 꼰대강의 (Software Kkondae Lectures) channel.
This repository provides materials on the mathematics behind deep learning. Anyone who studies deep learning eventually runs into an unavoidable barrier: the mathematical background it requires. While studying deep learning myself, I often wished someone had written materials that explain these topics in plain language. Based on the way I came to understand them, I want to share the mathematical knowledge that deep learning builds on.

Professor's homepage: [(click me)](https://prof.acin.kr/)

If you have any questions or spot any errors, please contact [kafa46@cju.ac.kr](mailto:kafa46@cju.ac.kr).


## Deep Learning Math Series Outline
- [Chapter 1. Probability (확률)](#prob)
- [Chapter 2. Linear Algebra (선형대수)](#linear)
- [Chapter 3. Differentiation (미분)](#diff)
- [Chapter 4. Information Theory (정보이론)](#info)
- [Bonus. Set Theory (Cartesian products)](#set)
<a name="prob"></a>
### Chapter 1. Probability (확률)

|Area|Topic|Youtube|PPT|Codes|
|---|---|---|---|---|
|Probability|01. Random Variables... what are they?|[click](https://youtu.be/iTxTGBOhzCA)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/01_random_variables.pdf)|none|
|Probability|02. Expected Value... what is it?|[click](https://youtu.be/nvHyIScyQxs)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/02.%20expected_value.pdf)|none|
|Probability|03. How are probability distributions related to deep learning?|[click](https://youtu.be/qpbIKg21mvI)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/03_probability_distribution_and_deeplearning.pdf)|none|
|Probability|04. Parameters and deep learning|[click](https://youtu.be/PobNLp279-w)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/04_parameters_in_deep_learning.pdf)|none|
|Probability|05. Odds, logit, sigmoid, and softmax explained|[click](https://youtu.be/V0uyiu6X4Zs)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/05_odds_logit_sigmoid_and_softmax.pdf)|none|
|Probability|06. Understanding sampling notation and Monte Carlo approximation|[click](https://youtu.be/nw_tVBCw0Z8)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/06_sampling_representation_and_monte_carlo_approximation.pdf)|none|
|Probability|07. The Stochastic Approach|[click](https://youtu.be/LBT41oKsHWg)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/07_stochastic.pdf)|none|
|Probability|08. Stochastic: practice and analysis|[click](https://youtu.be/k6xog4ZNnT0)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/07_stochastic.pdf)|[codes](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/07_stochastic.py)|
|Probability|09. Please, just explain what a stochastic process is!|[click](https://youtu.be/HtJ-q8tc5qQ)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/07_stochastic.pdf)|none|
|Probability|10. Why is the word "stochastic" in Stochastic Gradient Descent?|[click](https://youtu.be/DEQhCJ0nav4)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/07_stochastic.pdf)|none|
|Probability|11. Bayesian vs. Frequentist|[click](https://youtu.be/Kmw1pCsAqfM)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/08_01_Bayesian_vs_frequentist.pdf)|none|
|Probability|12. Bayes' theorem: worked example (rolling billiard balls)|[click](https://youtu.be/xdMor6957E0)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/08_02_Bayes_theorem_example_and_simulation.pdf)|none|
|Probability|13. Bayes' theorem: simulating the example (Python)|[click](https://youtu.be/7nyj0DvUluI)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/08_02_Bayes_theorem_example_and_simulation.pdf)|[codes](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/08_02_Bayes_theorem_simulation.py)|
|Probability|14. Interpreting Bayes' theorem applied to deep learning|[click](https://youtu.be/YvWqPQhliaI)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/08_03_Bayes_theorem_applying_into_deeplearning.pdf)|none|
|Probability|15. Maximum Likelihood Estimation (MLE): a deep dive!|[click](https://youtu.be/vzNRLY_hLlM)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/09_Maximum_Likelihood_Estimation_(MLE).pdf)|none|
|Probability|16. Maximum A Posteriori (MAP): a deep dive!|[click](https://youtu.be/H342QehYSqo)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/10_Maximum_A_Posterior_(MAP).pdf)|none|
|Probability|17. Bayesian Neural Networks in depth (Bayesian inference)|[click](https://youtu.be/126JfX_kJTU)|[link](https://github.com/kafa46/deeplearning_math/blob/master/01_probability_theory/11_bayesian_neural_networks.pdf)|none|

[Back to top](#top)
<a name="linear"></a>
### Chapter 2. Linear Algebra (선형대수)

|Area|Topic|Youtube|PPT|Codes|
|---|---|---|---|---|
|Linear algebra|01. Linear algebra in deep learning! Orientation (motivation)|[click](https://youtu.be/Si2QxZEz8Po)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/01_orientation_and_motivations.pdf)|none|
|Linear algebra|02. Matrices: history, definition, notation, and terminology|[click](https://youtu.be/ToWPEh1neCY)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/02_intro_definition_notation.pdf)|none|
|Linear algebra|03. Basic matrix operations (addition and multiplication), Gauss-Jordan elimination|[click](https://youtu.be/hj3PBuvW5TA)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/03_matrix_operation.pdf)|none|
|Linear algebra|04. Determinants and inverse matrices in depth|[click](https://youtu.be/rfeEx1saFVE)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/04_determinant_and_inverse_matrix.pdf)|none|
|Linear algebra|05. Introduction to vectors (history, definition and notation, types, bases, norms)|[click](https://youtu.be/MKzejgqrW6Q)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/05_vector_introduction.pdf)|none|
|Linear algebra|06. Vector addition/subtraction and scalar multiplication|[click](https://youtu.be/S4B4CKURlEs)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/06_vector_addtion_scalar_mulitplication.pdf)|none|
|Linear algebra|07. Vector products (dot product, cosine similarity, cross product)|[click](https://youtu.be/VophYxpve0k)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/07_vector_product.pdf)|none|
|Linear algebra|08. Vector spaces and vector subspaces|[click](https://youtu.be/6EjSnqXGwHQ)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/08_vector_vector_space.pdf)|none|
|Linear algebra|09. Linear combinations and span|[click](https://youtu.be/HTXay7LuSlY)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/09_vector_linear_combination_and_span.pdf)|none|
|Linear algebra|10. Linear transformations: where vectors meet matrices|[click](https://youtu.be/dlFRj45ckXE)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/10_vector_join_vector_and_matrix_linear_transformation.pdf)|none|
|Linear algebra|11. Linear transformations with matrices|[click](https://youtu.be/1E02Md0o-Vc)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/11_vector_transformation_with_matrix.pdf)|none|
|Linear algebra|12. What multiplying two matrices means (composing linear transformations)|[click](https://youtu.be/EXMWzuZHbfo)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/12_vector_matrix_composition.pdf)|none|
|Linear algebra|13. Understanding determinants through pictures|[click](https://youtu.be/6qdZygiry_E)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/13_vector_determinant_with_figures.pdf)|none|
|Linear algebra|14. From vectors to tensors: a deeper look at tensors|[click](https://youtu.be/pPIFauuiwEU)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/14_from_vector_to_tensor.pdf)|none|
|Linear algebra|15. Interpreting between linear systems (spaces): linear empathy|[click](https://youtu.be/JQ1k8axkbFY)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/15_linear_empathy.pdf)|none|
|Linear algebra|16. Understanding eigenvalues and eigenvectors|[click](https://youtu.be/FDEIHuBanwM)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/16_eigenvector_eigenvalue.pdf)|none|
|Linear algebra|17. Dimension reduction & expansion|[click](https://youtu.be/uAVlPBC8TGE)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/17_dimension_reduction_expansion.pdf)|none|
|Linear algebra|18. Rank of a matrix|[click](https://youtu.be/ORSP-Rd2NcU)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/18_rank_in_matrix.pdf)|none|
|Linear algebra|19. Linear transformation recap: information compression and expansion|[click](https://youtu.be/2vgpAgsqSEc)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/19_information_compression_expansion.pdf)|none|
|Linear algebra|20. Wrapping up linear algebra: closing remarks and thanks (adjourning)|[click](https://youtu.be/3uDd-xaipoU)|[link](https://github.com/kafa46/deeplearning_math/blob/master/02_linear_algebra/20_adjourning.pdf)|none|

[Back to top](#top)

<a name="diff"></a>
### Chapter 3. Differentiation (미분)


|Area|Topic|Youtube|PPT|Codes|
|---|---|---|---|---|
|Differentiation|01. Kicking off the differentiation series: orientation|[click](https://youtu.be/j1D1jY71Wjg)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/01_orientation.pdf)|none|
|Differentiation|02. How does deep learning learn knowledge from data?|[click](https://youtu.be/wHRAvJehL20)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/02_learning_in_deeplearning.pdf)|none|
|Differentiation|03. The principles of differentiation: worth covering at least once|[click](https://youtu.be/C8yzd1UOEq4)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/03_principal_of_differentiation.pdf)|none|
|Differentiation|04. Partial derivatives in deep learning: concept and computation|[click](https://youtu.be/_8chLG-JFDo)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/04_partial_differentiation.pdf)|none|
|Differentiation|05. The gradient descent algorithm: concept, interpretation, and learning|[click](https://youtu.be/QU99KdtT81I)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/05_gradient_descent.pdf)|none|
|Differentiation|06. The chain rule: concept and computation|[click](https://youtu.be/4aWZePsJ0Ro)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/06_chain_rule.pdf)|none|
|Differentiation|07-01. Partial derivatives in linear systems|[click](https://youtu.be/o3IeCWtHxG4)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/07_matrix_differentiation.pdf)|none|
|Differentiation|07-02. Partial derivatives in linear systems (practice with a toy example)|[click](https://youtu.be/_cAIzjWQ3Bg)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/07_matrix_differentiation_excercise_slide.pdf)|[codes](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/07_matrix_differentiation_excercise_toy_example.py)|
|Differentiation|08. Back-propagation and computation graphs|[click](https://youtu.be/T4Im-qIl0Xc)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/08_back_propagation.pdf)|none|
|Differentiation|09. A quick look at activation functions; root causes of vanishing gradients and remedies|[click](https://youtu.be/dYdHdoYVzxc)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/09_gradient_vanishing.pdf)|none|
|Differentiation|10. Wrapping up the whole deep learning math series: thanks (adjourning)|[click](https://youtu.be/KX8C9EU4lGc)|[link](https://github.com/kafa46/deeplearning_math/blob/master/03_differentiation/10_adjourning.pdf)|none|


[Back to top](#top)

<a name="info"></a>
### Chapter 4. Information Theory (정보이론)


|Area|Topic|Youtube|PPT|Codes|
|---|---|---|---|---|
|Information theory|01. Orientation: how information theory relates to deep learning, learning goals, course overview|[click](https://youtu.be/3_6O1ErlCJs)|[link](https://github.com/kafa46/deeplearning_math/blob/master/05_information_theory/01_orientation.pdf)|none|
|Information theory|02. What is 'information' in information theory?|[click](https://youtu.be/03usOd5Uwa0)|[link](https://github.com/kafa46/deeplearning_math/blob/master/05_information_theory/02_information.pdf)|none|
|Information theory|03-1. Information entropy: concept, notation, and computation - theory|[click](https://youtu.be/5ikry4jIDkA)|[link](https://github.com/kafa46/deeplearning_math/blob/master/05_information_theory/03_1_information_entropy_theory.pdf)|none|
|Information theory|03-2. Information entropy: concept, notation, and computation - practice|[click](https://youtu.be/Gz1mKx5L7K8)|[link](https://github.com/kafa46/deeplearning_math/blob/master/05_information_theory/03_2_information_entropy_practice.pdf)|[codes](https://github.com/kafa46/deeplearning_math/blob/master/05_information_theory/rolling_dices.py)|
|Information theory|04. Entropy in deep learning|[click](https://youtu.be/EwME3JH_EaM)|[link](https://github.com/kafa46/deeplearning_math/blob/master/05_information_theory/04_entropy_in_deeplearning.pdf)|none|
|Information theory|05-1. Entropy loss (binary cross entropy, cross entropy): deep dive|[click](https://youtu.be/t6WIGyz0tD8)|[link](https://github.com/kafa46/deeplearning_math/blob/master/05_information_theory/05_entropy_loss.pdf)|none|
|Information theory|05-2. Cross entropy loss: practice|[click](https://youtu.be/HJBsq19pcbY)|none|[codes](https://github.com/kafa46/deeplearning_math/blob/master/05_information_theory/entropy_loss.py)|
|Information theory|06. KL divergence (Kullback-Leibler divergence) in deep learning|[click](https://youtu.be/bE9dSiW6_h4)|[link](https://github.com/kafa46/deeplearning_math/blob/master/05_information_theory/06_KL_divergence.pdf)|none|
|Information theory|07. Wrapping up the information theory series (summary, recap, closing remarks)|[click](https://youtu.be/6pRJnQTCoYE)|[link](https://github.com/kafa46/deeplearning_math/blob/master/05_information_theory/07_adjourning.pdf)|none|


[Back to top](#top)
<a name="set"></a>
### Bonus. Number Sets (집합론)

|Area|Topic|Youtube|PPT|Codes|
|---|---|---|---|---|
|Sets|$R_{2,3}$ vs. $R^{2\times3}$: what on earth is the difference?|[click](https://youtu.be/m7dSzu-G_Mk)|[link](https://github.com/kafa46/deeplearning_math/blob/master/04_set_theory/01_intepretation_of_number_set.pdf)|none|

[Back to top](#top)
--------------------------------------------------------------------------------