├── LICENSE
├── Tensorflow Fundamentals
│   ├── 03 Advanced TensorFlow Concepts
│   │   ├── 04 Deployment
│   │   │   ├── README.md
│   │   │   └── deployment.py
│   │   ├── 03 Custom Extensions
│   │   │   ├── README.md
│   │   │   └── custom_extensions.py
│   │   ├── 02 Advanced Architectures
│   │   │   ├── README.md
│   │   │   └── advanced_architectures.py
│   │   └── 01 Distributed Training
│   │       ├── README.md
│   │       └── distributed_training.py
│   ├── 01 Core TensorFlow Foundations
│   │   ├── 01 Tensors and Operations
│   │   │   ├── README.md
│   │   │   └── tensors_and_operations.py
│   │   ├── 02 Automatic Differentiation
│   │   │   ├── README.md
│   │   │   └── automatic_differentiation.py
│   │   ├── 03 Neural Networks (tf.keras)
│   │   │   ├── README.md
│   │   │   └── neural_networks_keras.py
│   │   ├── 04 Datasets and Data Loading
│   │   │   ├── README.md
│   │   │   └── datasets_and_data_loading.py
│   │   └── 05 Training Pipeline
│   │       ├── README.md
│   │       └── training_pipeline.py
│   ├── 02 Intermediate TensorFlow Concepts
│   │   ├── 03 Optimization
│   │   │   ├── README.md
│   │   │   └── optimization.py
│   │   ├── 01 Model Architectures
│   │   │   ├── README.md
│   │   │   └── model_architectures.py
│   │   └── 02 Customization
│   │       ├── README.md
│   │       └── customization.py
│   └── 04 Specialized TensorFlow Libraries
│       ├── README.md
│       └── specialized_libraries.py
├── README.md
└── Tensorflow Interview Questions
    └── README.md

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2025 rohanmistry231

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/03 Advanced TensorFlow Concepts/04 Deployment/README.md:
--------------------------------------------------------------------------------
# Deployment (`tensorflow`)

## 📖 Introduction
Deploying TensorFlow models enables production use. This guide covers model export (SavedModel), serving (TensorFlow Serving), and edge deployment (TensorFlow Lite, TensorFlow.js), with practical examples and interview insights.

## 🎯 Learning Objectives
- Export models as SavedModel for serving.
- Serve models with TensorFlow Serving.
- Deploy models on edge devices with TensorFlow Lite/JS.

## 🔑 Key Concepts
- **SavedModel**: Standard TensorFlow format for export.
- **TensorFlow Serving**: Scalable serving via REST/gRPC.
- **TensorFlow Lite**: Lightweight models for mobile/edge.
- **TensorFlow.js**: Browser-based inference.
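Once a model is served (see the walkthrough below), the REST endpoint can be queried from any HTTP client. A minimal sketch using the `requests` library — the port, the model name `mnist_cnn`, and the input shape are assumptions matching the Docker command shown in `deployment.py`:

```python
import json
import numpy as np
import requests

# One dummy MNIST-shaped image; a real client would send preprocessed data.
batch = np.zeros((1, 28, 28, 1), dtype=np.float32)
payload = json.dumps({"instances": batch.tolist()})
response = requests.post(
    "http://localhost:8501/v1/models/mnist_cnn:predict",
    data=payload,
    headers={"Content-Type": "application/json"},
)
predictions = response.json()["predictions"]  # one softmax vector per instance
print(np.argmax(predictions[0]))
```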
## 📝 Example Walkthrough
The `deployment.py` file demonstrates:
1. **Model**: Training a CNN on MNIST.
2. **SavedModel**: Exporting the model.
3. **TensorFlow Serving**: Instructions for serving.
4. **TensorFlow Lite/JS**: Converting and evaluating.
5. **Visualization**: Visualizing predictions.

Example code:
```python
import tensorflow as tf
model = tf.keras.Sequential([...])
model.save("saved_model/mnist_cnn")
```

## 🛠️ Practical Tasks
1. Export a trained model as SavedModel.
2. Set up TensorFlow Serving for the model.
3. Convert a model to TensorFlow Lite and evaluate it.
4. Prepare a model for TensorFlow.js deployment.
5. Visualize predictions from a deployed model.

## 💡 Interview Tips
- **Common Questions**:
  - What is the SavedModel format?
  - How does TensorFlow Serving handle requests?
  - Why use TensorFlow Lite for edge devices?
- **Tips**:
  - Explain SavedModel’s portability.
  - Highlight TensorFlow Lite’s quantization benefits.
  - Be ready to describe a deployment pipeline.

## 📚 Resources
- [TensorFlow SavedModel Guide](https://www.tensorflow.org/guide/saved_model)
- [TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving)
- [TensorFlow Lite](https://www.tensorflow.org/lite)
- [TensorFlow.js](https://www.tensorflow.org/js)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/03 Advanced TensorFlow Concepts/03 Custom Extensions/README.md:
--------------------------------------------------------------------------------
# Custom Extensions (`tensorflow`)

## 📖 Introduction
Custom extensions in TensorFlow allow tailored functionality. This guide covers custom gradient functions, TensorFlow Addons-inspired losses, and custom optimizers, with practical examples and interview insights.

## 🎯 Learning Objectives
- Implement custom gradient functions with `@tf.custom_gradient`.
- Use advanced losses inspired by TensorFlow Addons.
- Create custom optimizers for specialized training.

## 🔑 Key Concepts
- **Custom Gradients**: Define custom backpropagation logic.
- **TensorFlow Addons**: Advanced losses/metrics (emulated here due to deprecation).
- **Custom Optimizers**: Extend `tf.keras.optimizers.Optimizer` for unique updates.

## 📝 Example Walkthrough
The `custom_extensions.py` file demonstrates:
1. **Custom Gradient**: Clipping operation with custom gradients.
2. **Focal Loss**: Emulating Addons-style loss for MNIST.
3. **Custom Optimizer**: Momentum-based optimizer.
4. **Visualization**: Comparing model accuracy.

Example code:
```python
import tensorflow as tf

@tf.custom_gradient
def clip_by_value(x, clip_min, clip_max):
    y = tf.clip_by_value(x, clip_min, clip_max)
    def grad(dy):
        # Pass gradients through only where the input was not clipped.
        return dy * tf.where((x >= clip_min) & (x <= clip_max), 1.0, 0.0), None, None
    return y, grad
```
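As a quick sanity check on the example above, the masked gradient can be verified directly with `tf.GradientTape` — a minimal sketch reusing the `clip_by_value` function just defined:

```python
import tensorflow as tf

x = tf.constant([-2.0, 0.5, 3.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    loss = tf.reduce_sum(clip_by_value(x, 0.0, 1.0))
print(tape.gradient(loss, x).numpy())  # [0. 1. 0.]: gradient flows only where unclipped
```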
## 🛠️ Practical Tasks
1. Implement a custom gradient for a non-linear operation.
2. Create a focal loss for MNIST classification.
3. Build a custom optimizer with momentum and test it.
4. Compare performance of custom vs. standard optimizers.

## 💡 Interview Tips
- **Common Questions**:
  - How do you implement a custom gradient?
  - What is the purpose of a focal loss?
  - How would you design a custom optimizer?
- **Tips**:
  - Explain the `@tf.custom_gradient` structure.
  - Highlight focal loss for imbalanced data.
  - Be ready to code a custom optimizer.

## 📚 Resources
- [TensorFlow Custom Gradients](https://www.tensorflow.org/guide/autodiff)
- [TensorFlow Optimizers](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/03 Advanced TensorFlow Concepts/02 Advanced Architectures/README.md:
--------------------------------------------------------------------------------
# Advanced Architectures (`tensorflow`)

## 📖 Introduction
Advanced architectures push TensorFlow’s capabilities for complex tasks. This guide covers Transformers (Vision Transformers), Generative Models (VAEs), and Reinforcement Learning (TF-Agents), with practical examples and interview insights.

## 🎯 Learning Objectives
- Understand Vision Transformers for image tasks.
- Implement VAEs for generative modeling.
- Apply TF-Agents for reinforcement learning.

## 🔑 Key Concepts
- **Vision Transformers**: Patch-based attention for images.
- **VAEs**: Encoder-decoder with latent space for generation.
- **Reinforcement Learning**: DQN with TF-Agents for decision-making.

## 📝 Example Walkthrough
The `advanced_architectures.py` file demonstrates:
1. **Vision Transformer**: Simplified ViT for CIFAR-10.
2. **VAE**: Generating CIFAR-10 images.
3. **DQN**: Training on CartPole with TF-Agents.
4. **Visualization**: Comparing original and generated images.

Example code:
```python
import tensorflow as tf

class ViT(tf.keras.Model):
    def __init__(self, num_classes, patch_size, num_patches, d_model, num_heads):
        super().__init__()
        self.transformer = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=d_model)
        self.dense = tf.keras.layers.Dense(num_classes, activation='softmax')
```

## 🛠️ Practical Tasks
1. Train a Vision Transformer on CIFAR-10.
2. Build a VAE for image generation and visualize outputs.
3. Implement a DQN agent for CartPole and evaluate rewards.
4. Experiment with Transformer hyperparameters (e.g., num_heads).

## 💡 Interview Tips
- **Common Questions**:
  - How do Transformers differ from CNNs?
  - What is the role of the latent space in VAEs?
  - How does DQN balance exploration and exploitation?
- **Tips**:
  - Explain ViT’s patch embedding process.
  - Highlight VAE’s reconstruction and KL losses.
  - Be ready to code a simple RL agent.

## 📚 Resources
- [TensorFlow Transformers Guide](https://www.tensorflow.org/text/tutorials/transformer)
- [TensorFlow Agents](https://www.tensorflow.org/agents)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/03 Advanced TensorFlow Concepts/01 Distributed Training/README.md:
--------------------------------------------------------------------------------
# Distributed Training (`tensorflow`)

## 📖 Introduction
Distributed training scales TensorFlow models across multiple devices (GPUs/TPUs).
This guide covers Data Parallelism (`MirroredStrategy`), Multi-GPU/TPU Training (`TPUStrategy`), and Distributed Datasets, with practical examples and interview insights.

## 🎯 Learning Objectives
- Understand data parallelism with `MirroredStrategy`.
- Implement multi-GPU/TPU training with `TPUStrategy`.
- Optimize datasets for distributed training.

## 🔑 Key Concepts
- **Data Parallelism**: `MirroredStrategy` replicates the model across GPUs with synchronized gradients.
- **Multi-GPU/TPU Training**: `TPUStrategy` leverages TPUs for high-performance training.
- **Distributed Datasets**: Shard datasets across devices for efficient processing.

## 📝 Example Walkthrough
The `distributed_training.py` file demonstrates:
1. **Dataset**: Loading and preprocessing CIFAR-10.
2. **MirroredStrategy**: Training a CNN with data parallelism.
3. **TPUStrategy**: Training with TPU support (fallback to GPUs).
4. **Distributed Datasets**: Custom training loop with distributed datasets.
5. **Visualization**: Comparing accuracy across strategies.

Example code:
```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([...])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```

## 🛠️ Practical Tasks
1. Train a CNN on CIFAR-10 using `MirroredStrategy`.
2. Adapt the model for `TPUStrategy` in a cloud environment (e.g., Colab).
3. Create a distributed dataset and implement a custom training loop.
4. Compare training speed and accuracy across strategies.

## 💡 Interview Tips
- **Common Questions**:
  - How does `MirroredStrategy` synchronize gradients?
  - What are the benefits of `TPUStrategy`?
  - How do you shard datasets for distributed training?
- **Tips**:
  - Explain gradient aggregation in data parallelism.
  - Highlight the TPU’s matrix multiplication efficiency.
  - Be ready to code a model with `MirroredStrategy`.

## 📚 Resources
- [TensorFlow Distributed Training Guide](https://www.tensorflow.org/guide/distributed_training)
- [TensorFlow TPU Guide](https://www.tensorflow.org/guide/tpu)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/01 Core TensorFlow Foundations/01 Tensors and Operations/README.md:
--------------------------------------------------------------------------------
# Tensors and Operations (`tensorflow`)

## 📖 Introduction
Tensors are the core data structure in TensorFlow, representing multi-dimensional arrays. This guide covers tensor creation, attributes, operations, CPU/GPU interoperability, and NumPy integration, with practical examples and interview insights.

## 🎯 Learning Objectives
- Understand TensorFlow tensors and their properties.
- Master tensor creation (`tf.constant`, `tf.zeros`, `tf.random`) and manipulation.
- Perform operations like indexing, reshaping, matrix multiplication, and broadcasting.
- Explore CPU/GPU interoperability and NumPy integration.

## 🔑 Key Concepts
- **Tensor Creation**: Use `tf.constant`, `tf.zeros`, `tf.ones`, and `tf.random` to create tensors.
- **Attributes**: Shape, dtype, and device define tensor properties.
- **Operations**: Indexing, reshaping, matrix multiplication (`tf.matmul`), and broadcasting.
- **CPU/GPU Interoperability**: TensorFlow manages device placement (`tf.device`).
- **NumPy Integration**: Seamless conversion between tensors and NumPy arrays.
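Broadcasting is the concept that trips people up most often; a minimal sketch of the rule (shapes are aligned from the right, and size-1 dimensions are stretched to match):

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0, 3.0]])         # shape (1, 3)
b = tf.constant([[10.0], [20.0], [30.0]])  # shape (3, 1)
result = a + b
print(result.shape)   # (3, 3): both size-1 dimensions were stretched
print(result.numpy()) # [[11. 12. 13.], [21. 22. 23.], [31. 32. 33.]]
```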
## 📝 Example Walkthrough
The `tensors_and_operations.py` file demonstrates:
1. **Tensor Creation**: Creating constant, zero, and random tensors.
2. **Attributes**: Inspecting shape, dtype, and device.
3. **Operations**: Indexing, reshaping, matrix multiplication, and broadcasting.
4. **Interoperability**: Running operations on CPU/GPU and converting to/from NumPy.
5. **Visualization**: Plotting a tensor as a heatmap.

Example code:
```python
import tensorflow as tf
const_tensor = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)
matmul_result = tf.matmul(const_tensor, const_tensor)
```

## 🛠️ Practical Tasks
1. Create a 2x3 tensor using `tf.random.normal` and print its shape and dtype.
2. Reshape a 4x4 tensor into a 2x8 tensor and verify the result.
3. Perform matrix multiplication on two 3x3 tensors and check the output.
4. Convert a NumPy array to a TensorFlow tensor, perform an operation, and convert back.
5. Run a matrix multiplication on CPU and GPU (if available) using `tf.device`.

## 💡 Interview Tips
- **Common Questions**:
  - What is a TensorFlow tensor, and how does it differ from a NumPy array?
  - How does broadcasting work in TensorFlow?
  - Why is device placement important in TensorFlow?
- **Tips**:
  - Explain tensor attributes (shape, dtype, device) clearly.
  - Highlight broadcasting’s role in handling shape mismatches.
  - Be ready to code a tensor manipulation task (e.g., extract diagonal, compute sum).

## 📚 Resources
- [TensorFlow Core Guide](https://www.tensorflow.org/guide/tensor)
- [TensorFlow API Documentation](https://www.tensorflow.org/api_docs/python/tf)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/01 Core TensorFlow Foundations/02 Automatic Differentiation/README.md:
--------------------------------------------------------------------------------
# Automatic Differentiation (`tensorflow`)

## 📖 Introduction
Automatic differentiation is a cornerstone of TensorFlow, enabling gradient-based optimization for machine learning models. This guide covers computational graphs, gradient computation with `tf.GradientTape`, gradient application using `optimizer.apply_gradients`, and no-gradient contexts with `tf.stop_gradient`, with practical examples and interview insights.

## 🎯 Learning Objectives
- Understand computational graphs and their role in differentiation.
- Master `tf.GradientTape` for gradient computation.
- Apply gradients using optimizers for model training.
- Use `tf.stop_gradient` to control gradient flow.

## 🔑 Key Concepts
- **Computational Graphs**: Track operations for automatic differentiation.
- **tf.GradientTape**: Records operations dynamically to compute gradients.
- **Gradient Application**: Optimizers (e.g., SGD, Adam) update variables using gradients.
- **tf.stop_gradient**: Prevents gradients from flowing through specified tensors.
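The last concept is the easiest to see in isolation; a minimal sketch of how `tf.stop_gradient` removes a branch from the gradient computation:

```python
import tensorflow as tf

x = tf.Variable(2.0)
with tf.GradientTape() as tape:
    y = x * x                    # gradient flows: dy/dx = 2x
    z = tf.stop_gradient(x * x)  # treated as a constant by the tape
    loss = y + z
print(tape.gradient(loss, x).numpy())  # 4.0, not 8.0: the z branch contributes nothing
```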
## 📝 Example Walkthrough
The `automatic_differentiation.py` file demonstrates:
1. **Computational Graphs**: Computing gradients for a polynomial.
2. **Gradient Computation**: Gradients for a linear layer using `tf.GradientTape`.
3. **Higher-Order Gradients**: Second derivatives with nested tapes.
4. **Gradient Application**: Optimizing a linear regression model.
5. **No-Gradient Context**: Using `tf.stop_gradient` to block gradient flow.
6. **Visualization**: Plotting loss curves and model fits.

Example code:
```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x**2
dy_dx = tape.gradient(y, x)
```

## 🛠️ Practical Tasks
1. Compute the gradient of `y = x^3 + 2x` at `x = 2` using `tf.GradientTape`.
2. Train a linear regression model with `tf.GradientTape` and the Adam optimizer.
3. Use `tf.stop_gradient` to block gradients for a portion of a computation graph.
4. Compute second-order gradients for `y = x^4` and verify the result.
5. Debug a case where gradients are `None` due to non-differentiable operations.

## 💡 Interview Tips
- **Common Questions**:
  - How does `tf.GradientTape` work in TensorFlow?
  - What causes gradients to be `None`, and how do you debug it?
  - When would you use `tf.stop_gradient`?
- **Tips**:
  - Explain the dynamic computation graph in `tf.GradientTape`.
  - Highlight common gradient issues (e.g., non-differentiable ops like `tf.cast`).
  - Be ready to code a gradient computation for a simple function or a training loop.

## 📚 Resources
- [TensorFlow Automatic Differentiation Guide](https://www.tensorflow.org/guide/autodiff)
- [TensorFlow API Documentation](https://www.tensorflow.org/api_docs/python/tf/GradientTape)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/02 Intermediate TensorFlow Concepts/03 Optimization/README.md:
--------------------------------------------------------------------------------
# Optimization (`tensorflow`)

## 📖 Introduction
Optimization in TensorFlow enhances model performance and efficiency. This guide covers hyperparameter tuning (learning rate, batch size), regularization (dropout, L2), mixed precision training (`tf.keras.mixed_precision`), and model quantization, with practical examples and interview insights.

## 🎯 Learning Objectives
- Understand hyperparameter tuning for learning rate and batch size.
- Apply regularization techniques like dropout and L2.
- Implement mixed precision training for faster computation.
- Perform model quantization for efficient deployment.

## 🔑 Key Concepts
- **Hyperparameter Tuning**: Optimize learning rate and batch size for better accuracy.
- **Regularization**: Use dropout and L2 to prevent overfitting.
- **Mixed Precision Training**: Use `mixed_float16` for faster training on GPUs.
- **Model Quantization**: Reduce model size and latency with TensorFlow Lite.

## 📝 Example Walkthrough
The `optimization.py` file demonstrates:
1. **Dataset**: Loading and preprocessing CIFAR-10.
2. **Hyperparameter Tuning**: Testing learning rates and batch sizes.
3. **Regularization**: Applying dropout and L2 to a CNN.
4. **Mixed Precision**: Training with the `mixed_float16` policy.
5. **Quantization**: Converting a model to TensorFlow Lite.
6. **Visualization**: Comparing accuracy and visualizing tuning results.

Example code:
```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_global_policy(policy)
model = tf.keras.Sequential([...])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
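Quantization (step 5 of the walkthrough) uses the TensorFlow Lite converter; a minimal sketch of post-training dynamic-range quantization, assuming a trained Keras `model` like the one above:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights to int8
tflite_quant_model = converter.convert()              # typically ~4x smaller
with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_quant_model)
```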
## 🛠️ Practical Tasks
1. Tune learning rate and batch size for a CNN on CIFAR-10 and select the best combination.
2. Add dropout and L2 regularization to a model and compare performance.
3. Train a model with mixed precision and measure training time savings.
4. Quantize a trained model with TensorFlow Lite and evaluate its accuracy.
5. Visualize hyperparameter tuning results using a scatter plot.

## 💡 Interview Tips
- **Common Questions**:
  - How do you choose an optimal learning rate?
  - What is the benefit of mixed precision training?
  - Why would you quantize a model?
- **Tips**:
  - Explain dropout’s role in reducing overfitting.
  - Highlight mixed precision’s speed and memory benefits.
  - Be ready to code a model with regularization or quantization.

## 📚 Resources
- [TensorFlow Optimization Guide](https://www.tensorflow.org/guide/keras/training_with_built_in_methods)
- [TensorFlow Mixed Precision Guide](https://www.tensorflow.org/guide/mixed_precision)
- [TensorFlow Lite Guide](https://www.tensorflow.org/lite)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/02 Intermediate TensorFlow Concepts/01 Model Architectures/README.md:
--------------------------------------------------------------------------------
# Model Architectures (`tensorflow`)

## 📖 Introduction
TensorFlow supports a variety of neural network architectures tailored to specific tasks. This guide covers Feedforward Neural Networks (FNNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs, LSTMs, GRUs), and Transfer Learning with `tf.keras.applications`, with practical examples and interview insights.

## 🎯 Learning Objectives
- Understand the structure and use cases of FNNs, CNNs, RNNs, and transfer learning.
- Master building and training these architectures with `tf.keras`.
- Apply transfer learning using pre-trained models.
- Compare architectures for different data types and tasks.

## 🔑 Key Concepts
- **FNNs**: Fully connected layers for tabular data or simple tasks.
- **CNNs**: Convolutional and pooling layers for image data.
- **RNNs (LSTMs, GRUs)**: Recurrent layers for sequential or time-series data.
- **Transfer Learning**: Use pre-trained models (e.g., MobileNetV2) for efficient training.

## 📝 Example Walkthrough
The `model_architectures.py` file demonstrates:
1. **FNN**: Classification on MNIST with Dense layers.
2. **CNN**: Classification on MNIST with Conv2D and MaxPooling2D.
3. **RNNs**: LSTM and GRU for synthetic sequence classification.
4. **Transfer Learning**: Fine-tuning MobileNetV2 on CIFAR-10.
5. **Visualization**: Comparing validation accuracy and visualizing CNN predictions.

Example code:
```python
import tensorflow as tf

cnn_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax')
])
cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
```
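Transfer learning (step 4) mainly amounts to freezing a pre-trained backbone and training a new head; a minimal sketch for CIFAR-10 — the 96x96 resize and the omission of MobileNetV2’s own input rescaling are simplifying assumptions, and `layers.Resizing` requires TF 2.6+:

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights='imagenet')
base.trainable = False  # freeze the pre-trained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Resizing(96, 96),  # CIFAR-10 images are 32x32
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
```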
## 🛠️ Practical Tasks
1. Build an FNN for MNIST classification and evaluate its accuracy.
2. Create a CNN for MNIST with additional Conv2D layers and compare performance.
3. Train an LSTM or GRU on a synthetic sequence dataset for binary classification.
4. Fine-tune a pre-trained MobileNetV2 model on CIFAR-10 and evaluate improvements.
5. Visualize predictions and compare parameter counts across architectures.

## 💡 Interview Tips
- **Common Questions**:
  - When would you use a CNN over an FNN?
  - What are the differences between LSTMs and GRUs?
  - How does transfer learning improve training efficiency?
- **Tips**:
  - Explain a CNN’s spatial feature extraction vs. an FNN’s fully connected layers.
  - Highlight the LSTM’s memory cells for long-term dependencies.
  - Be ready to code a simple CNN or fine-tune a pre-trained model.

## 📚 Resources
- [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras)
- [TensorFlow Applications Documentation](https://www.tensorflow.org/api_docs/python/tf/keras/applications)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
- [TensorFlow Official Tutorials](https://www.tensorflow.org/tutorials)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/04 Specialized TensorFlow Libraries/README.md:
--------------------------------------------------------------------------------
# Specialized TensorFlow Libraries (`tensorflow`)

## 📖 Introduction
TensorFlow’s specialized libraries streamline data handling, model reuse, and deployment. This guide covers **TensorFlow Datasets**, **TensorFlow Hub**, **Keras**, **TensorFlow Lite**, and **TensorFlow.js**, with practical examples and interview insights.

## 🎯 Learning Objectives
- Load and preprocess datasets with TensorFlow Datasets.
- Use pre-trained models from TensorFlow Hub for transfer learning.
- Build models rapidly with Keras.
- Deploy models on edge devices with TensorFlow Lite.
- Prepare models for browser-based inference with TensorFlow.js.

## 🔑 Key Concepts
- **TensorFlow Datasets**: Curated, ready-to-use datasets with `tfds.load`.
- **TensorFlow Hub**: Pre-trained models for transfer learning via `hub.KerasLayer`.
- **Keras**: High-level API for quick model prototyping.
- **TensorFlow Lite**: Lightweight models for mobile and edge devices.
- **TensorFlow.js**: JavaScript-based ML for browser environments.

## 📝 Example Walkthrough
The `specialized_libraries.py` file demonstrates:
1. **TensorFlow Datasets**: Loading CIFAR-10 with preprocessing.
2. **TensorFlow Hub**: Transfer learning with MobileNetV2.
3. **Keras**: Building a CNN for CIFAR-10.
4. **TensorFlow Lite**: Converting and evaluating a model.
5. **TensorFlow.js**: Instructions for browser deployment.
6. **Visualization**: Dataset samples and model predictions.

Example code:
```python
import tensorflow as tf
import tensorflow_datasets as tfds

ds, info = tfds.load('cifar10', with_info=True, as_supervised=True)
train_ds = ds['train'].map(lambda x, y: (x / 255.0, tf.one_hot(y, 10))).batch(32)
```
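TensorFlow Hub plugs into the same Keras workflow through `hub.KerasLayer`; a minimal sketch — the handle below is the published MobileNetV2 feature-vector module, but treat the exact URL and version as an assumption to verify on tfhub.dev:

```python
import tensorflow as tf
import tensorflow_hub as hub

feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
    trainable=False)  # keep the pre-trained weights frozen

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    feature_extractor,
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```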
## 🛠️ Practical Tasks
1. Load a dataset (e.g., CIFAR-10) using TensorFlow Datasets and preprocess it.
2. Use a TensorFlow Hub model for transfer learning on CIFAR-10.
3. Build and train a Keras model for image classification.
4. Convert a Keras model to TensorFlow Lite and evaluate its accuracy.
5. Prepare a model for TensorFlow.js and outline browser deployment steps.
6. Combine TensorFlow Datasets, Hub, and Keras in a transfer learning workflow.

## 💡 Interview Tips
- **Common Questions**:
  - How does TensorFlow Datasets simplify data preprocessing?
  - What are the benefits of using TensorFlow Hub for transfer learning?
  - Why choose TensorFlow Lite over a full TensorFlow model for edge devices?
- **Tips**:
  - Explain `tfds.load` and its preprocessing pipeline.
  - Highlight Keras’s prototyping speed and TensorFlow Lite’s efficiency.
  - Be ready to code a transfer learning model with TensorFlow Hub or convert a model to TensorFlow Lite.

## 📚 Resources
- [TensorFlow Datasets Guide](https://www.tensorflow.org/datasets)
- [TensorFlow Hub Guide](https://www.tensorflow.org/hub)
- [Keras Guide](https://www.tensorflow.org/guide/keras)
- [TensorFlow Lite Guide](https://www.tensorflow.org/lite)
- [TensorFlow.js Guide](https://www.tensorflow.org/js)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/01 Core TensorFlow Foundations/03 Neural Networks (tf.keras)/README.md:
--------------------------------------------------------------------------------
# Neural Networks (`tf.keras`)

## 📖 Introduction
`tf.keras` is TensorFlow’s high-level API for building and training neural networks. This guide covers defining models (`tf.keras.Sequential`, `tf.keras.Model`), layers (Dense, Convolutional, Pooling, Normalization), activations (ReLU, Sigmoid, Softmax), loss functions (MSE, Categorical Crossentropy), optimizers (SGD, Adam, RMSprop), and learning rate schedules, with practical examples and interview insights.

## 🎯 Learning Objectives
- Understand `tf.keras` for building neural networks.
- Master model definition with the `Sequential` and `Model` APIs.
- Apply layers, activations, loss functions, and optimizers.
- Implement learning rate schedules for improved training.

## 🔑 Key Concepts
- **Model Definition**: `tf.keras.Sequential` for linear stacks, `tf.keras.Model` for custom architectures.
- **Layers**: Dense (fully connected), Conv2D (convolutional), MaxPooling2D (pooling), BatchNormalization.
- **Activations**: ReLU (non-linearity), Sigmoid (binary), Softmax (multi-class).
- **Loss Functions**: MSE (regression), Categorical Crossentropy (classification).
- **Optimizers**: SGD (gradient descent), Adam (adaptive), RMSprop (momentum-based).
- **Learning Rate Schedules**: Adjust the learning rate dynamically (e.g., exponential decay).
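Learning rate schedules are the one item above without a snippet in the walkthrough below; a minimal sketch using `ExponentialDecay` (the decay numbers are illustrative):

```python
import tensorflow as tf

# Start at 1e-3 and multiply the learning rate by 0.96 every 1,000 steps.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.96)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```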
## 📝 Example Walkthrough
The `neural_networks_keras.py` file demonstrates:
1. **Sequential Model**: Regression with Dense layers.
2. **Custom Model**: Classification using `tf.keras.Model` subclassing.
3. **CNN**: Image classification with Conv2D, Pooling, and BatchNormalization.
4. **Activations and Losses**: ReLU, Sigmoid, Softmax, MSE, and Crossentropy.
5. **Optimizers**: Comparing SGD, Adam, and RMSprop.
6. **Learning Rate Schedules**: Exponential decay for regression.
7. **Visualization**: Plotting training loss curves.

Example code:
```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(5,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
```

## 🛠️ Practical Tasks
1. Build a `Sequential` model for regression on synthetic data and evaluate MSE.
2. Create a custom `tf.keras.Model` for multi-class classification and train it.
3. Design a CNN with Conv2D, MaxPooling2D, and BatchNormalization for image data.
4. Compare SGD, Adam, and RMSprop on a regression task.
5. Implement an exponential decay learning rate schedule and plot the loss curve.

## 💡 Interview Tips
- **Common Questions**:
  - What is the difference between the `Sequential` and `Model` APIs?
  - When would you use ReLU vs. Sigmoid?
  - How does Adam differ from SGD?
- **Tips**:
  - Explain the role of activations in introducing non-linearity.
  - Highlight Adam’s adaptive learning rate for faster convergence.
  - Be ready to code a simple neural network or CNN architecture.

## 📚 Resources
- [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras)
- [TensorFlow API Documentation](https://www.tensorflow.org/api_docs/python/tf/keras)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/01 Core TensorFlow Foundations/04 Datasets and Data Loading/README.md:
--------------------------------------------------------------------------------
# Datasets and Data Loading (`tensorflow`)

## 📖 Introduction
Efficient data loading and preprocessing are essential for machine learning pipelines. This guide covers built-in datasets (`tf.keras.datasets`), TensorFlow Datasets (`tfds.load`), data pipelines (`tf.data.Dataset`, map, batch, shuffle), preprocessing (`tf.keras.preprocessing`), and handling large datasets, with practical examples and interview insights.

## 🎯 Learning Objectives
- Understand TensorFlow’s dataset loading mechanisms.
- Master `tf.data.Dataset` for efficient data pipelines.
- Apply preprocessing and data augmentation with `tf.keras.preprocessing`.
- Handle large datasets with optimized pipelines.

## 🔑 Key Concepts
- **Built-in Datasets**: `tf.keras.datasets` provides datasets like MNIST.
- **TensorFlow Datasets**: `tfds.load` accesses curated datasets (e.g., CIFAR-10).
- **Data Pipeline**: `tf.data.Dataset` supports transformations (map, batch, shuffle, prefetch).
- **Preprocessing**: `tf.keras.preprocessing` for data augmentation (e.g., rotation, zoom).
- **Large Datasets**: Optimize pipelines with caching, prefetching, and parallel processing.
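Augmentation can also live inside the `tf.data` pipeline itself via `map`; a minimal sketch using `tf.image` ops (an alternative to `tf.keras.preprocessing`):

```python
import tensorflow as tf

def augment(image, label):
    # Stochastic augmentation applied on the fly, per epoch.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    return image, label

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
train_ds = (
    tf.data.Dataset.from_tensor_slices((x_train.astype('float32') / 255.0, y_train))
    .shuffle(10_000)
    .map(augment, num_parallel_calls=tf.data.AUTOTUNE)  # parallelized across cores
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)  # overlap preprocessing with training
)
```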
## 📝 Example Walkthrough
The `datasets_and_data_loading.py` file demonstrates:
1. **Built-in Datasets**: Loading and normalizing MNIST.
2. **TensorFlow Datasets**: Loading CIFAR-10 with `tfds.load` (if installed).
3. **Data Pipeline**: Creating a `tf.data.Dataset` pipeline for MNIST with shuffle, batch, and augmentation.
4. **Preprocessing**: Applying `tf.keras.preprocessing` for image augmentation.
5. **Large Datasets**: Building an efficient pipeline for synthetic data.
6. **Visualization**: Comparing original vs. augmented images and plotting training progress.

Example code:
```python
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
```

## 🛠️ Practical Tasks
1. Load MNIST using `tf.keras.datasets` and normalize the data.
2. Create a `tf.data.Dataset` pipeline with shuffle, batch, and a custom preprocessing function.
3. Apply data augmentation (e.g., rotation, flip) using `tf.keras.preprocessing`.
4. Build an optimized pipeline for a large synthetic dataset with caching and prefetching.
5. Train a CNN using a `tf.data.Dataset` pipeline and evaluate its performance.

## 💡 Interview Tips
- **Common Questions**:
  - How does `tf.data.Dataset` improve training efficiency?
  - What is the purpose of prefetching and caching?
  - When would you use `tf.keras.preprocessing` for augmentation?
- **Tips**:
  - Explain the roles of `shuffle`, `batch`, and `prefetch` in pipelines.
  - Highlight optimization techniques like `cache()` for small datasets.
  - Be ready to code a data pipeline with preprocessing and augmentation.

## 📚 Resources
- [TensorFlow Data Guide](https://www.tensorflow.org/guide/data)
- [TensorFlow Datasets Documentation](https://www.tensorflow.org/datasets)
- [TensorFlow Keras Preprocessing](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/01 Core TensorFlow Foundations/05 Training Pipeline/README.md:
--------------------------------------------------------------------------------
# Training Pipeline (`tensorflow`)

## 📖 Introduction
A training pipeline orchestrates model training, evaluation, checkpointing, and monitoring in TensorFlow. This guide covers training/evaluation loops, model checkpointing (`model.save`, `model.load`), GPU/TPU training (`tf.device`), and monitoring with TensorBoard, with practical examples and interview insights.

## 🎯 Learning Objectives
- Understand TensorFlow’s training and evaluation loops.
- Master model checkpointing for saving and loading models.
- Implement GPU/TPU training with `tf.device`.
- Monitor training with TensorBoard for performance analysis.

## 🔑 Key Concepts
- **Training/Evaluation Loops**: Custom loops using `tf.GradientTape` for fine-grained control.
- **Model Checkpointing**: Save and load models with `model.save` and `model.load`.
- **GPU/TPU Training**: Use `tf.device` to leverage hardware accelerators.
- **TensorBoard**: Visualize metrics (loss, accuracy) during training.
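When training with `model.fit` rather than a custom loop, checkpointing and TensorBoard logging are wired in through callbacks; a minimal sketch (the file paths are placeholders):

```python
import tensorflow as tf

callbacks = [
    # Keep only the best weights as measured by validation accuracy.
    tf.keras.callbacks.ModelCheckpoint(
        'best_model.h5', monitor='val_accuracy', save_best_only=True),
    # Write summaries viewable with `tensorboard --logdir logs`.
    tf.keras.callbacks.TensorBoard(log_dir='logs'),
]
# model.fit(train_ds, validation_data=test_ds, epochs=5, callbacks=callbacks)
```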
## 📝 Example Walkthrough
The `training_pipeline.py` file demonstrates:
1. **Dataset**: Loading and preprocessing MNIST with `tf.data.Dataset`.
2. **Model**: Building a CNN for digit classification.
3. **Training Loop**: Custom loop with `tf.GradientTape` and TensorBoard logging.
4. **Checkpointing**: Saving and loading the model.
5. **GPU/TPU Training**: Training on available hardware using `tf.device`.
6. **TensorBoard**: Logging metrics for visualization.
7. **Visualization**: Plotting training loss and accuracy.

Example code:
```python
import tensorflow as tf

model = tf.keras.Sequential([...])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
```

## 🛠️ Practical Tasks
1. Implement a custom training loop for a CNN on MNIST using `tf.GradientTape`.
2. Save and load a trained model using `model.save` and `model.load`.
3. Train a model on GPU/CPU using `tf.device` and verify device placement.
4. Set up TensorBoard to monitor loss and accuracy during training.
5. Use the `ModelCheckpoint` callback to save the best model based on validation accuracy.

## 💡 Interview Tips
- **Common Questions**:
  - How does a custom training loop differ from `model.fit`?
  - What is the purpose of the `ModelCheckpoint` callback?
  - How do you ensure efficient GPU training in TensorFlow?
- **Tips**:
  - Explain the role of `tf.GradientTape` in custom loops.
  - Highlight TensorBoard’s utility for debugging and optimization.
  - Be ready to code a training loop or set up TensorBoard logging.

## 📚 Resources
- [TensorFlow Training Guide](https://www.tensorflow.org/guide/keras/training_with_built_in_methods)
- [TensorFlow TensorBoard Guide](https://www.tensorflow.org/tensorboard)
- [TensorFlow API Documentation](https://www.tensorflow.org/api_docs/python/tf)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/02 Intermediate TensorFlow Concepts/02 Customization/README.md:
--------------------------------------------------------------------------------
# Customization (`tensorflow`)

## 📖 Introduction
TensorFlow’s customization capabilities enable flexible model design and optimization. This guide covers custom layers and loss functions, the Functional and Subclassing APIs, and debugging gradient issues, with practical examples and interview insights.

## 🎯 Learning Objectives
- Understand how to create custom layers and loss functions.
- Master the Functional and Subclassing APIs for complex models.
- Learn to debug gradient issues in TensorFlow.

## 🔑 Key Concepts
- **Custom Layers**: Extend `tf.keras.layers.Layer` for specialized operations.
- **Custom Loss Functions**: Define task-specific loss functions.
- **Functional API**: Build models with flexible, non-sequential architectures.
- **Subclassing API**: Create fully customizable models via class inheritance.
- **Gradient Debugging**: Identify and fix issues like `None` gradients.
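A custom loss is usually the smallest of these customizations; a minimal sketch of a class-weighted crossentropy in the spirit of the walkthrough’s custom loss below (the weight values are illustrative):

```python
import tensorflow as tf

def weighted_categorical_crossentropy(class_weights):
    weights = tf.constant(class_weights, dtype=tf.float32)
    def loss_fn(y_true, y_pred):
        # Per-example crossentropy, scaled by the weight of the true class.
        ce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
        return ce * tf.reduce_sum(y_true * weights, axis=-1)
    return loss_fn

# Example: weight class 9 five times as heavily as the rest.
# model.compile(optimizer='adam', loss=weighted_categorical_crossentropy([1.0] * 9 + [5.0]))
```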
## 📝 Example Walkthrough
The `customization.py` file demonstrates:
1. **Custom Layer**: A `ScaledDense` layer with a learnable scaling factor.
2. **Custom Loss**: Weighted categorical crossentropy for class imbalance.
3. **Functional API**: A CNN with explicit input-output connections.
4. **Subclassing API**: A custom CNN with modular layers.
5. **Gradient Debugging**: Handling non-differentiable ops and disconnected graphs.
6. **Visualization**: Comparing model accuracy and gradient norms.

Example code:
```python
import tensorflow as tf

class ScaledDense(tf.keras.layers.Layer):
    def __init__(self, units, activation=None):
        super().__init__()
        self.units = units
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        self.dense = tf.keras.layers.Dense(self.units)
        self.scale = self.add_weight('scale', shape=(), initializer='ones', trainable=True)

    def call(self, inputs):
        x = self.dense(inputs)
        x = x * self.scale
        return self.activation(x) if self.activation else x
```

## 🛠️ Practical Tasks
1. Create a custom layer that applies a polynomial transformation and test it on MNIST.
2. Implement a custom loss function that penalizes specific classes and train a model.
3. Build a model using the Functional API with multiple branches.
4. Use the Subclassing API to create a CNN with custom logic.
5. Debug a `None` gradient issue caused by a non-differentiable operation.

## 💡 Interview Tips
- **Common Questions**:
  - How do you implement a custom layer in TensorFlow?
  - What causes `None` gradients, and how do you debug them?
  - When would you use the Functional API over Subclassing?
- **Tips**:
  - Explain the `build` and `call` methods in custom layers.
  - Highlight common gradient issues (e.g., `tf.cast`, `tf.stop_gradient`).
  - Be ready to code a custom layer or debug a gradient issue.

## 📚 Resources
- [TensorFlow Custom Layers Guide](https://www.tensorflow.org/guide/keras/custom_layers_and_models)
- [TensorFlow Functional API Guide](https://www.tensorflow.org/guide/keras/functional_api)
- [TensorFlow Gradient Debugging](https://www.tensorflow.org/guide/autodiff)
- [Kaggle: TensorFlow Tutorials](https://www.kaggle.com/learn/intro-to-deep-learning)
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/03 Advanced TensorFlow Concepts/04 Deployment/deployment.py:
--------------------------------------------------------------------------------
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt  # needed for the prediction plots below
from tensorflow.keras.datasets import mnist
import os

# %% [1. Introduction to Deployment]
# Deployment involves exporting, serving, and running TensorFlow models in production.
# Covers SavedModel, TensorFlow Serving, and TensorFlow Lite/JS.

print("TensorFlow version:", tf.__version__)

# %% [2. Preparing the Dataset and Model]
# Load and preprocess MNIST dataset.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
x_train = x_train[..., np.newaxis]
x_test = x_test[..., np.newaxis]
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32).prefetch(tf.data.AUTOTUNE)
print("\nMNIST Dataset:")
print("Train Shape:", x_train.shape, "Test Shape:", x_test.shape)

# Train a simple CNN
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1)
# evaluate() returns plain Python floats, so use the round() builtin.
print("\nModel Test Accuracy:", round(model.evaluate(test_ds, verbose=0)[1], 4))

# %% [3. Model Export: SavedModel]
# Export the model as SavedModel.
saved_model_path = "saved_model/mnist_cnn"
model.save(saved_model_path)
print("\nSavedModel Exported to:", saved_model_path)

# Load and test SavedModel
loaded_model = tf.keras.models.load_model(saved_model_path)
print("Loaded SavedModel Test Accuracy:", round(loaded_model.evaluate(test_ds, verbose=0)[1], 4))

# %% [4. Serving with TensorFlow Serving]
# Note: TensorFlow Serving requires separate installation and setup.
# Instructions for serving the SavedModel.
print("\nTensorFlow Serving Instructions:")
print("1. Install TensorFlow Serving: `docker pull tensorflow/serving`")
print(f"2. Serve model: `docker run -p 8501:8501 --mount type=bind,source={os.path.abspath(saved_model_path)},target=/models/mnist_cnn -e MODEL_NAME=mnist_cnn -t tensorflow/serving`")
print("3. Query model: Use REST API at http://localhost:8501/v1/models/mnist_cnn:predict")

# %% [5. Edge Deployment: TensorFlow Lite]
# Convert model to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
tflite_path = "mnist_cnn.tflite"
with open(tflite_path, 'wb') as f:
    f.write(tflite_model)
print("\nTensorFlow Lite Model Saved to:", tflite_path)

# Evaluate TFLite model
interpreter = tf.lite.Interpreter(model_path=tflite_path)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']
correct = 0
total = 0
for x, y in test_ds.unbatch().take(100):
    x = x.numpy()[np.newaxis, ...]
    interpreter.set_tensor(input_index, x)
    interpreter.invoke()
    pred = interpreter.get_tensor(output_index)
    if np.argmax(pred) == y:
        correct += 1
    total += 1
print("TFLite Model Accuracy (Subset):", round(correct / total, 4))

# %% [6. Edge Deployment: TensorFlow.js]
# Note: TensorFlow.js conversion requires the `tensorflowjs` package.
print("\nTensorFlow.js Conversion Instructions:")
print("1. Install: `pip install tensorflowjs`")
print(f"2. Convert: `tensorflowjs_converter --input_format=tf_saved_model {saved_model_path} tfjs_model`")
print("3. Use in browser: Load `tfjs_model/model.json` with TensorFlow.js")
# %% [7. Visualizing Predictions]
# Visualize predictions from the original model.
predictions = model.predict(x_test[:5])
plt.figure(figsize=(15, 3))
for i in range(5):
    plt.subplot(1, 5, i + 1)
    plt.imshow(x_test[i, :, :, 0], cmap='gray')
    plt.title(f"Pred: {np.argmax(predictions[i])}\nTrue: {y_test[i]}")
    plt.axis('off')
plt.savefig('deployment_predictions.png')

# %% [8. Interview Scenario: Model Deployment]
# Discuss deploying a model for production.
print("\nInterview Scenario: Model Deployment")
print("1. SavedModel: Standard format for TensorFlow Serving.")
print("2. TensorFlow Serving: Scalable REST/gRPC API for production.")
print("3. TensorFlow Lite: Lightweight for mobile/edge devices.")
print("4. TensorFlow.js: Browser-based inference with WebGL.")
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/01 Core TensorFlow Foundations/01 Tensors and Operations/tensors_and_operations.py:
--------------------------------------------------------------------------------
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt  # moved to the top with the other imports

# %% [1. Introduction to Tensors and Operations]
# Tensors are multi-dimensional arrays, the core data structure in TensorFlow.
# TensorFlow provides functions for tensor creation, manipulation, and operations.

print("TensorFlow version:", tf.__version__)

# %% [2. Tensor Creation]
# Create tensors using tf.constant, tf.zeros, tf.ones, and tf.random.
# tf.constant: Creates a tensor from a fixed value.
const_tensor = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)
print("\nConstant Tensor:")
print(const_tensor)

# tf.zeros: Creates a tensor filled with zeros.
zeros_tensor = tf.zeros([2, 3], dtype=tf.int32)
print("\nZeros Tensor:")
print(zeros_tensor)

# tf.random: Creates a tensor with random values.
random_tensor = tf.random.normal([2, 2], mean=0.0, stddev=1.0, seed=42)
print("\nRandom Normal Tensor:")
print(random_tensor)

# %% [3. Tensor Attributes]
# Tensors have attributes: shape, dtype, and device.
print("\nTensor Attributes for const_tensor:")
print("Shape:", const_tensor.shape)
print("Data Type:", const_tensor.dtype)
print("Device:", const_tensor.device)

# %% [4. Indexing and Slicing]
# Access tensor elements using indexing and slicing.
tensor = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print("\nOriginal Tensor:")
print(tensor)
print("First Row:", tensor[0].numpy())
print("Element at [1, 2]:", tensor[1, 2].numpy())
print("Slice [0:2, 1:3]:")
print(tensor[0:2, 1:3].numpy())

# %% [5. Reshaping]
# Reshape tensors using tf.reshape.
reshaped_tensor = tf.reshape(tensor, [1, 9])
print("\nReshaped Tensor (1x9):")
print(reshaped_tensor)
reshaped_tensor_2 = tf.reshape(tensor, [9, 1])
print("Reshaped Tensor (9x1):")
print(reshaped_tensor_2)

# %% [6. Matrix Multiplication]
# Perform matrix multiplication using tf.matmul.
matrix_a = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)
matrix_b = tf.constant([[5, 6], [7, 8]], dtype=tf.float32)
matmul_result = tf.matmul(matrix_a, matrix_b)
print("\nMatrix A:")
print(matrix_a)
print("Matrix B:")
print(matrix_b)
print("Matrix Multiplication (A @ B):")
print(matmul_result)

# %% [7. Broadcasting]
# Broadcasting allows operations on tensors of different shapes.
scalar = tf.constant(2.0, dtype=tf.float32)
broadcast_result = matrix_a * scalar
print("\nBroadcasting (Matrix A * Scalar):")
print(broadcast_result)

# Example with different shapes
tensor_1 = tf.constant([[1, 2, 3]], dtype=tf.float32)
tensor_2 = tf.constant([[4], [5], [6]], dtype=tf.float32)
broadcast_sum = tensor_1 + tensor_2
print("Broadcasting (1x3 + 3x1):")
print(broadcast_sum)

# %% [8. CPU/GPU Interoperability]
# TensorFlow automatically places tensors on GPU if available, or CPU otherwise.
# Explicitly place operations on CPU or GPU using tf.device.
with tf.device('/CPU:0'):
    cpu_tensor = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)
    cpu_result = tf.matmul(cpu_tensor, cpu_tensor)
print("\nCPU Tensor Device:", cpu_tensor.device)
print("CPU Matmul Result:")
print(cpu_result)

if tf.config.list_physical_devices('GPU'):
    with tf.device('/GPU:0'):
        gpu_tensor = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)
        gpu_result = tf.matmul(gpu_tensor, gpu_tensor)
    print("GPU Tensor Device:", gpu_tensor.device)
    print("GPU Matmul Result:")
    print(gpu_result)
else:
    print("No GPU available, skipping GPU test.")

# %% [9. NumPy Integration]
# Convert between TensorFlow tensors and NumPy arrays.
numpy_array = np.array([[1, 2], [3, 4]])
tensor_from_numpy = tf.convert_to_tensor(numpy_array, dtype=tf.float32)
print("\nTensor from NumPy Array:")
print(tensor_from_numpy)

# Convert tensor back to NumPy
numpy_from_tensor = tensor_from_numpy.numpy()
print("NumPy Array from Tensor:")
print(numpy_from_tensor)

# Perform NumPy-style operations
numpy_result = np.matmul(numpy_array, numpy_array)
tensor_result = tf.matmul(tensor_from_numpy, tensor_from_numpy)
print("NumPy Matmul Result:")
print(numpy_result)
print("TensorFlow Matmul Result:")
print(tensor_result)

# %% [10. Interview Scenario: Tensor Manipulation]
# Demonstrate a practical tensor manipulation task.
# Task: Create a 3x3 tensor, extract its diagonal, and compute its sum.
interview_tensor = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=tf.float32)
diagonal = tf.linalg.diag_part(interview_tensor)
diagonal_sum = tf.reduce_sum(diagonal)
print("\nInterview Task:")
print("Original Tensor:")
print(interview_tensor)
print("Diagonal:", diagonal.numpy())
print("Sum of Diagonal:", diagonal_sum.numpy())

# Visualize tensor as a heatmap (matplotlib is imported at the top).
plt.figure()
plt.imshow(interview_tensor, cmap='viridis')
plt.colorbar()
plt.title('Tensor Heatmap')
plt.savefig('tensor_heatmap.png')
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/03 Advanced TensorFlow Concepts/01 Distributed Training/distributed_training.py:
--------------------------------------------------------------------------------
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import cifar10

# %% [1. Introduction to Distributed Training]
# Distributed training scales TensorFlow models across multiple GPUs/TPUs.
# Covers Data Parallelism, Multi-GPU/TPU Training, and Distributed Datasets.

print("TensorFlow version:", tf.__version__)

# %% [2. Preparing the Dataset]
# Load and preprocess CIFAR-10 dataset.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)
print("\nCIFAR-10 Dataset:")
print("Train Shape:", x_train.shape, "Test Shape:", x_test.shape)

# Create tf.data.Dataset pipelines
batch_size = 64
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1000).batch(batch_size).prefetch(tf.data.AUTOTUNE)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size).prefetch(tf.data.AUTOTUNE)

# %% [3. Data Parallelism with MirroredStrategy]
# Use MirroredStrategy for data parallelism across GPUs.
mirrored_strategy = tf.distribute.MirroredStrategy()
print("\nMirroredStrategy Devices:", mirrored_strategy.num_replicas_in_sync)

# Define and compile model within strategy scope
with mirrored_strategy.scope():
    model_mirrored = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model_mirrored.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

print("\nMirroredStrategy Model Summary:")
model_mirrored.summary()
mirrored_history = model_mirrored.fit(train_ds, epochs=5, validation_data=test_ds, verbose=1)
# History values are plain Python floats, so use the round() builtin.
print("MirroredStrategy Test Accuracy:", round(mirrored_history.history['val_accuracy'][-1], 4))

# %% [4. Multi-GPU/TPU Training with TPUStrategy]
# Use TPUStrategy (fallback to MirroredStrategy if TPU unavailable).
48 | # %% [4. Multi-GPU/TPU Training with TPUStrategy]
49 | # Use TPUStrategy (fallback to MirroredStrategy if no TPU is available).
50 | try:
51 |     resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
52 |     tf.config.experimental_connect_to_cluster(resolver)
53 |     tf.tpu.experimental.initialize_tpu_system(resolver)
54 |     tpu_strategy = tf.distribute.TPUStrategy(resolver)
55 |     print("\nTPUStrategy Initialized")
56 | except ValueError:
57 |     tpu_strategy = mirrored_strategy
58 |     print("\nTPU Unavailable, Using MirroredStrategy")
59 | 
60 | with tpu_strategy.scope():
61 |     model_tpu = tf.keras.Sequential([
62 |         tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
63 |         tf.keras.layers.MaxPooling2D((2, 2)),
64 |         tf.keras.layers.Flatten(),
65 |         tf.keras.layers.Dense(128, activation='relu'),
66 |         tf.keras.layers.Dense(10, activation='softmax')
67 |     ])
68 |     model_tpu.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
69 | 
70 | print("\nTPUStrategy Model Summary:")
71 | model_tpu.summary()
72 | tpu_history = model_tpu.fit(train_ds, epochs=5, validation_data=test_ds, verbose=1)
73 | print("TPUStrategy Test Accuracy:", round(tpu_history.history['val_accuracy'][-1], 4))
74 | 
75 | # %% [5. Distributed Datasets]
76 | # Shard the dataset across replicas for a custom training loop.
77 | dist_train_ds = mirrored_strategy.experimental_distribute_dataset(train_ds)
78 | dist_test_ds = mirrored_strategy.experimental_distribute_dataset(test_ds)
79 | print("\nDistributed Dataset Created")
80 | 
81 | # Custom training loop for the distributed dataset
82 | @tf.function
83 | def dist_train_step(inputs):
84 |     def step_fn(inputs):
85 |         x, y = inputs
86 |         with tf.GradientTape() as tape:
87 |             logits = model_mirrored(x, training=True)
88 |             per_example_loss = tf.keras.losses.categorical_crossentropy(y, logits)
89 |             # Scale by the global batch size: each replica only sees a shard of
90 |             # the batch, and gradients are summed across replicas on apply.
91 |             loss = tf.nn.compute_average_loss(per_example_loss, global_batch_size=batch_size)
92 |         gradients = tape.gradient(loss, model_mirrored.trainable_variables)
93 |         model_mirrored.optimizer.apply_gradients(zip(gradients, model_mirrored.trainable_variables))
94 |         return loss
95 |     per_replica_losses = mirrored_strategy.run(step_fn, args=(inputs,))
96 |     # Per-replica losses are already scaled, so SUM recovers the global average.
97 |     return mirrored_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
98 | 
99 | print("\nCustom Distributed Training Loop:")
100 | for epoch in range(2):
101 |     total_loss = 0.0
102 |     num_batches = 0
103 |     for inputs in dist_train_ds:
104 |         total_loss += dist_train_step(inputs)
105 |         num_batches += 1
106 |     print(f"Epoch {epoch + 1}, Average Loss: {(total_loss / num_batches).numpy():.4f}")
107 | 
108 | # %% [6. Visualizing Training Progress]
109 | # Plot validation accuracy for MirroredStrategy and TPUStrategy.
110 | plt.figure()
111 | plt.plot(mirrored_history.history['val_accuracy'], label='MirroredStrategy')
112 | plt.plot(tpu_history.history['val_accuracy'], label='TPUStrategy')
113 | plt.xlabel('Epoch')
114 | plt.ylabel('Validation Accuracy')
115 | plt.title('Distributed Training Comparison')
116 | plt.legend()
117 | plt.savefig('distributed_training_comparison.png')
118 | 
119 | # %% [7. Interview Scenario: Scaling Training]
120 | # Discuss strategies for scaling training to multiple devices.
121 | print("\nInterview Scenario: Scaling Training")
122 | print("1. MirroredStrategy: Synchronous data parallelism for multi-GPU setups.")
123 | print("2. TPUStrategy: Optimized for TPU clusters in cloud environments.")
124 | print("3. Distributed Datasets: Use experimental_distribute_dataset for efficient data sharding.")
125 | print("Key: Ensure the model and data pipeline are created inside the strategy scope.")
-------------------------------------------------------------------------------- /Tensorflow Fundamentals/03 Advanced TensorFlow Concepts/03 Custom Extensions/custom_extensions.py: --------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | from tensorflow.keras.datasets import mnist
5 | 
6 | # %% [1. Introduction to Custom Extensions]
7 | # Custom extensions enhance TensorFlow with custom gradients, Addons-style losses, and optimizers.
8 | # This file demonstrates custom gradient functions, an Addons-inspired loss, and a custom optimizer.
9 | 
10 | print("TensorFlow version:", tf.__version__)
11 | 
12 | # %% [2. Preparing the Dataset]
13 | # Load and preprocess the MNIST dataset.
14 | (x_train, y_train), (x_test, y_test) = mnist.load_data()
15 | x_train = x_train.astype('float32') / 255.0
16 | x_test = x_test.astype('float32') / 255.0
17 | x_train = x_train[..., np.newaxis]
18 | x_test = x_test[..., np.newaxis]
19 | train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
20 | test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32).prefetch(tf.data.AUTOTUNE)
21 | print("\nMNIST Dataset:")
22 | print("Train Shape:", x_train.shape, "Test Shape:", x_test.shape)
23 | 
24 | # %% [3. Custom Gradient Functions]
25 | # Define a custom gradient for a clipping operation.
26 | @tf.custom_gradient
27 | def clip_by_value(x, clip_min, clip_max):
28 |     y = tf.clip_by_value(x, clip_min, clip_max)
29 |     def grad(dy):
30 |         # Pass gradients through only where x was inside the clipping range.
31 |         return dy * tf.where((x >= clip_min) & (x <= clip_max), 1.0, 0.0), None, None
32 |     return y, grad
33 | 
34 | # Test the custom gradient in a model
35 | model_custom_grad = tf.keras.Sequential([
36 |     tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
37 |     tf.keras.layers.Dense(64, activation='relu'),
38 |     tf.keras.layers.Dense(10, activation='softmax')
39 | ])
40 | optimizer = tf.keras.optimizers.Adam()
41 | @tf.function
42 | def train_step(x, y):
43 |     with tf.GradientTape() as tape:
44 |         logits = model_custom_grad(x)
45 |         logits = clip_by_value(logits, 1e-7, 1.0 - 1e-7)
46 |         loss = tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(y, logits))  # Reduce to a scalar for logging
47 |     gradients = tape.gradient(loss, model_custom_grad.trainable_variables)
48 |     optimizer.apply_gradients(zip(gradients, model_custom_grad.trainable_variables))
49 |     return loss
50 | 
51 | print("\nCustom Gradient Training:")
52 | for epoch in range(3):
53 |     for x, y in train_ds:
54 |         loss = train_step(x, y)
55 |     print(f"Epoch {epoch + 1}, Loss: {loss.numpy():.4f}")
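# %% [Addendum: Verifying the Custom Gradient]
# Editorial addition, not part of the original file: a quick probe that the
# custom clip gradient is 1.0 inside [clip_min, clip_max] and 0.0 outside.
x_probe = tf.constant([-1.0, 0.5, 2.0])
with tf.GradientTape() as probe_tape:
    probe_tape.watch(x_probe)
    y_probe = clip_by_value(x_probe, 0.0, 1.0)
print("Probe gradients:", probe_tape.gradient(y_probe, x_probe).numpy())  # expect [0. 1. 0.]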
56 | # %% [4. TensorFlow Addons]
57 | # Note: TensorFlow Addons is deprecated; use Keras 3 or alternatives for advanced metrics/losses.
58 | # Example: Custom loss inspired by Addons (e.g., focal loss).
59 | class FocalLoss(tf.keras.losses.Loss):
60 |     def __init__(self, gamma=2.0, alpha=0.25):
61 |         super().__init__()
62 |         self.gamma = gamma
63 |         self.alpha = alpha
64 | 
65 |     def call(self, y_true, y_pred):
66 |         # Labels arrive as sparse integer ids; one-hot them to match y_pred (10 MNIST classes).
67 |         y_true = tf.one_hot(tf.cast(tf.reshape(y_true, [-1]), tf.int32), depth=10)
68 |         y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
69 |         ce = -y_true * tf.math.log(y_pred)
70 |         weight = self.alpha * y_true * tf.pow(1.0 - y_pred, self.gamma)
71 |         return tf.reduce_mean(weight * ce)
72 | 
73 | model_addons = tf.keras.Sequential([
74 |     tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
75 |     tf.keras.layers.Dense(64, activation='relu'),
76 |     tf.keras.layers.Dense(10, activation='softmax')
77 | ])
78 | model_addons.compile(optimizer='adam', loss=FocalLoss(), metrics=['accuracy'])
79 | print("\nFocal Loss Model Summary:")
80 | model_addons.summary()
81 | addons_history = model_addons.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1)
82 | print("Focal Loss Test Accuracy:", round(addons_history.history['val_accuracy'][-1], 4))
83 | 
84 | # %% [5. Custom Optimizers]
85 | # Define a custom optimizer with momentum.
86 | # Note: this subclass uses the legacy Optimizer API (_create_slots/_resource_apply_dense).
87 | # On TF >= 2.11 / Keras 3 the base class expects build()/update_step() instead;
88 | # a modern-API sketch is included after the visualization cell below.
89 | class CustomMomentumOptimizer(tf.keras.optimizers.Optimizer):
90 |     def __init__(self, learning_rate=0.01, momentum=0.9, name="CustomMomentum"):
91 |         super().__init__(name=name)
92 |         self.learning_rate = learning_rate
93 |         self.momentum = momentum
94 | 
95 |     def _create_slots(self, var_list):
96 |         for var in var_list:
97 |             self.add_slot(var, 'velocity', initializer='zeros')
98 | 
99 |     def _resource_apply_dense(self, grad, var, apply_state=None):
100 |         velocity = self.get_slot(var, 'velocity')
101 |         velocity_t = velocity * self.momentum - self.learning_rate * grad
102 |         var_t = var + velocity_t
103 |         velocity.assign(velocity_t)
104 |         var.assign(var_t)
105 |         return tf.no_op()
106 | 
107 | model_custom_opt = tf.keras.Sequential([
108 |     tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
109 |     tf.keras.layers.Dense(64, activation='relu'),
110 |     tf.keras.layers.Dense(10, activation='softmax')
111 | ])
112 | model_custom_opt.compile(optimizer=CustomMomentumOptimizer(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
113 | print("\nCustom Optimizer Model Summary:")
114 | model_custom_opt.summary()
115 | custom_opt_history = model_custom_opt.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1)
116 | print("Custom Optimizer Test Accuracy:", round(custom_opt_history.history['val_accuracy'][-1], 4))
117 | 
118 | # %% [6. Visualizing Training Progress]
119 | # Plot validation accuracy for both models.
120 | plt.figure()
121 | plt.plot(addons_history.history['val_accuracy'], label='Focal Loss')
122 | plt.plot(custom_opt_history.history['val_accuracy'], label='Custom Optimizer')
123 | plt.xlabel('Epoch')
124 | plt.ylabel('Validation Accuracy')
125 | plt.title('Custom Extensions Comparison')
126 | plt.legend()
127 | plt.savefig('custom_extensions_comparison.png')
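# %% [Addendum: Momentum with the Modern Optimizer API]
# Hedged sketch, not part of the original file: on TF >= 2.11 / Keras 3 the
# Optimizer base class replaced _create_slots()/_resource_apply_dense() with
# build()/update_step(). Exact signatures differ slightly between versions,
# so treat this as an illustration rather than drop-in code.
class MomentumV3(tf.keras.optimizers.Optimizer):
    def __init__(self, learning_rate=0.01, momentum=0.9, name="momentum_v3", **kwargs):
        super().__init__(learning_rate=learning_rate, name=name, **kwargs)
        self.momentum = momentum

    def build(self, var_list):
        if self.built:
            return
        super().build(var_list)
        # One velocity slot per trainable variable.
        self.velocities = [
            self.add_variable_from_reference(v, "velocity") for v in var_list
        ]

    def update_step(self, gradient, variable, learning_rate):
        velocity = self.velocities[self._get_variable_index(variable)]
        velocity.assign(self.momentum * velocity - learning_rate * gradient)
        variable.assign_add(velocity)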
128 | # %% [7. Interview Scenario: Custom Gradient]
129 | # Discuss implementing a custom gradient for a non-standard operation.
130 | print("\nInterview Scenario: Custom Gradient")
131 | print("Use @tf.custom_gradient to define forward and backward passes.")
132 | print("Example: Clip operation with gradients only in the valid range.")
133 | print("Key: Ensure the gradient function matches the operation's logic.")
-------------------------------------------------------------------------------- /Tensorflow Fundamentals/01 Core TensorFlow Foundations/02 Automatic Differentiation/automatic_differentiation.py: --------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | 
5 | # %% [1. Introduction to Automatic Differentiation]
6 | # Automatic differentiation computes gradients for optimization in TensorFlow.
7 | # Key components: computational graphs, tf.GradientTape, optimizer.apply_gradients, and tf.stop_gradient.
8 | 
9 | print("TensorFlow version:", tf.__version__)
10 | 
11 | # %% [2. Computational Graphs]
12 | # TensorFlow builds computational graphs to track operations for gradient computation.
13 | # tf.GradientTape records operations dynamically for automatic differentiation.
14 | x = tf.constant(3.0)
15 | with tf.GradientTape() as tape:
16 |     tape.watch(x)  # Constants are not watched automatically
17 |     y = x**2 + 2*x + 1  # Polynomial: y = x^2 + 2x + 1
18 | dy_dx = tape.gradient(y, x)
19 | print("\nComputational Graph Example:")
20 | print("Function: y = x^2 + 2x + 1, x =", x.numpy())
21 | print("Gradient dy/dx =", dy_dx.numpy())  # Expected: 2x + 2 = 8 at x=3
22 | 
23 | # %% [3. Gradient Computation with tf.GradientTape]
24 | # Compute gradients for a simple neural network layer: y = Wx + b.
25 | W = tf.Variable([[1.0, 2.0], [3.0, 4.0]])
26 | b = tf.Variable([[1.0], [1.0]])  # Column vector keeps y = Wx + b at shape (2, 1); a flat (2,) bias would broadcast to (2, 2)
27 | x = tf.constant([[1.0], [2.0]])
28 | with tf.GradientTape() as tape:
29 |     y = tf.matmul(W, x) + b  # Linear transformation
30 |     loss = tf.reduce_sum(y**2)  # Dummy loss: sum of squared outputs
31 | grad_W, grad_b = tape.gradient(loss, [W, b])
32 | print("\nGradient Computation (Linear Layer):")
33 | print("W:\n", W.numpy())
34 | print("b:", b.numpy())
35 | print("x:\n", x.numpy())
36 | print("Loss:", loss.numpy())
37 | print("Gradient w.r.t. W:\n", grad_W.numpy())
38 | print("Gradient w.r.t. b:", grad_b.numpy())
39 | 
40 | # %% [4. Higher-Order Gradients]
41 | # Compute second-order gradients (e.g., Hessian) using nested tapes.
42 | x = tf.constant(2.0)
43 | with tf.GradientTape() as outer_tape:
44 |     outer_tape.watch(x)  # The outer tape must also watch the constant, or d2y/dx2 is None
45 |     with tf.GradientTape() as inner_tape:
46 |         inner_tape.watch(x)
47 |         y = x**3  # Function: y = x^3
48 |     dy_dx = inner_tape.gradient(y, x)  # First derivative: 3x^2
49 | d2y_dx2 = outer_tape.gradient(dy_dx, x)  # Second derivative: 6x
50 | print("\nHigher-Order Gradients:")
51 | print("Function: y = x^3, x =", x.numpy())
52 | print("First Derivative (dy/dx):", dy_dx.numpy())  # Expected: 3x^2 = 12 at x=2
53 | print("Second Derivative (d2y/dx2):", d2y_dx2.numpy())  # Expected: 6x = 12 at x=2
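# %% [Addendum: Persistent Tapes]
# Editorial addition, not in the original file: a tape can normally compute
# gradients only once; persistent=True allows several gradient() calls.
x_p = tf.constant(2.0)
with tf.GradientTape(persistent=True) as p_tape:
    p_tape.watch(x_p)
    y_p = x_p**2
    z_p = x_p**3
print("dy/dx:", p_tape.gradient(y_p, x_p).numpy())  # 4.0
print("dz/dx:", p_tape.gradient(z_p, x_p).numpy())  # 12.0
del p_tape  # Release the tape's resources once done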
54 | # %% [5. Gradient Application with Optimizer]
55 | # Use an optimizer to update variables based on gradients.
56 | W = tf.Variable([[1.0, 2.0]], name='W')
57 | b = tf.Variable([0.0], name='b')
58 | x = tf.constant([[1.0, 2.0]])
59 | y_true = tf.constant([5.0])
60 | optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
61 | for step in range(3):  # Simulate 3 optimization steps
62 |     with tf.GradientTape() as tape:
63 |         y_pred = tf.matmul(W, x, transpose_b=True) + b  # y = Wx + b
64 |         loss = tf.reduce_mean((y_pred - y_true)**2)  # MSE loss
65 |     grad_W, grad_b = tape.gradient(loss, [W, b])
66 |     optimizer.apply_gradients(zip([grad_W, grad_b], [W, b]))
67 |     print(f"\nStep {step + 1} - Loss: {loss.numpy():.4f}, W: {W.numpy().flatten()}, b: {b.numpy()}")
68 | 
69 | # %% [6. No-Gradient Context with tf.stop_gradient]
70 | # tf.stop_gradient prevents gradients from flowing through a tensor.
71 | x = tf.constant(2.0)
72 | with tf.GradientTape() as tape:
73 |     tape.watch(x)
74 |     y = x**2  # y = x^2
75 |     z = tf.stop_gradient(y)  # Treat y as a constant
76 |     w = z * x  # w = y * x
77 | dw_dx = tape.gradient(w, x)
78 | print("\nNo-Gradient Context:")
79 | print("Function: w = (x^2) * x, with x^2 stopped")
80 | print("Gradient dw/dx:", dw_dx.numpy())  # Expected: y = x^2 = 4 at x=2
81 | 
82 | # %% [7. Practical Application: Linear Regression]
83 | # Train a linear regression model using tf.GradientTape.
84 | np.random.seed(42)
85 | X = np.random.rand(100, 1).astype(np.float32)
86 | y = 3 * X + 2 + np.random.normal(0, 0.1, (100, 1)).astype(np.float32)
87 | W = tf.Variable([[0.0]], name='weight')
88 | b = tf.Variable([0.0], name='bias')
89 | optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)
90 | losses = []
91 | for epoch in range(50):
92 |     with tf.GradientTape() as tape:
93 |         y_pred = tf.matmul(X, W) + b
94 |         loss = tf.reduce_mean(tf.square(y_pred - y))
95 |     grad_W, grad_b = tape.gradient(loss, [W, b])
96 |     optimizer.apply_gradients(zip([grad_W, grad_b], [W, b]))
97 |     losses.append(loss.numpy())
98 |     if epoch % 10 == 0:
99 |         print(f"Epoch {epoch}, Loss: {loss.numpy():.4f}")
100 | print("\nLearned Parameters: W =", W.numpy().flatten(), "b =", b.numpy())
101 | 
102 | # %% [8. Visualizing Training Progress]
103 | # Plot loss curve for linear regression.
104 | plt.figure()
105 | plt.plot(losses)
106 | plt.xlabel('Epoch')
107 | plt.ylabel('Loss')
108 | plt.title('Linear Regression Loss Curve')
109 | plt.savefig('loss_curve.png')
110 | 
111 | # Plot predictions
112 | plt.figure()
113 | plt.scatter(X, y, label='Data')
114 | plt.plot(X, (tf.matmul(X, W) + b).numpy(), color='red', label='Fit')
115 | plt.xlabel('X')
116 | plt.ylabel('y')
117 | plt.title('Linear Regression Fit')
118 | plt.legend()
119 | plt.savefig('linear_fit.png')
120 | 
121 | # %% [9. Interview Scenario: Gradient Debugging]
122 | # Debug a case where gradients are None due to non-differentiable operations.
123 | x = tf.Variable(1.0)
124 | with tf.GradientTape() as tape:
125 |     y = tf.cast(x, tf.int32)  # Non-differentiable operation
126 |     loss = y**2
127 | grad = tape.gradient(loss, x)
128 | print("\nGradient Debugging:")
129 | print("Operation: y = cast(x to int), loss = y^2")
130 | print("Gradient:", grad)  # Expected: None due to the non-differentiable cast
131 | print("Fix: Ensure operations are differentiable (e.g., use float operations).")
132 | 
133 | # %% [10. Custom Gradient Computation]
134 | # Compute gradients for a custom function: f(x) = sin(x) + x^2.
135 | x = tf.Variable(1.0) 136 | with tf.GradientTape() as tape: 137 | y = tf.sin(x) + x**2 # f(x) = sin(x) + x^2 138 | dy_dx = tape.gradient(y, x) 139 | print("\nCustom Gradient:") 140 | print("Function: f(x) = sin(x) + x^2, x =", x.numpy()) 141 | print("Gradient: df/dx =", dy_dx.numpy()) # Expected: cos(x) + 2x = cos(1) + 2 -------------------------------------------------------------------------------- /Tensorflow Fundamentals/01 Core TensorFlow Foundations/03 Neural Networks (tf.keras)/neural_networks_keras.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | import numpy as np 3 | import matplotlib.pyplot as plt 4 | from sklearn.datasets import make_regression, make_classification 5 | from sklearn.model_selection import train_test_split 6 | from sklearn.preprocessing import StandardScaler 7 | 8 | # %% [1. Introduction to Neural Networks with tf.keras] 9 | # tf.keras is TensorFlow's high-level API for building and training neural networks. 10 | # Covers model definition, layers, activations, losses, optimizers, and learning rate schedules. 11 | 12 | print("TensorFlow version:", tf.__version__) 13 | 14 | # %% [2. Defining Models with tf.keras.Sequential] 15 | # tf.keras.Sequential creates a linear stack of layers for simple models. 16 | # Example: Regression model for synthetic data. 17 | X_reg, y_reg = make_regression(n_samples=1000, n_features=5, noise=10, random_state=42) 18 | X_reg_train, X_reg_test, y_reg_train, y_reg_test = train_test_split(X_reg, y_reg, test_size=0.2, random_state=42) 19 | scaler = StandardScaler() 20 | X_reg_train = scaler.fit_transform(X_reg_train) 21 | X_reg_test = scaler.transform(X_reg_test) 22 | 23 | seq_model = tf.keras.Sequential([ 24 | tf.keras.layers.Dense(64, activation='relu', input_shape=(5,)), 25 | tf.keras.layers.Dense(32, activation='relu'), 26 | tf.keras.layers.Dense(1) # No activation for regression 27 | ]) 28 | seq_model.compile(optimizer='adam', loss='mse') 29 | print("\nSequential Model Summary:") 30 | seq_model.summary() 31 | 32 | # Train the model 33 | history_seq = seq_model.fit(X_reg_train, y_reg_train, epochs=20, batch_size=32, validation_split=0.2, verbose=0) 34 | print("Sequential Model Final Validation Loss:", history_seq.history['val_loss'][-1].round(4)) 35 | 36 | # %% [3. Defining Models with tf.keras.Model] 37 | # tf.keras.Model allows custom models via subclassing for complex architectures. 38 | # Example: Classification model for synthetic data. 
39 | X_clf, y_clf = make_classification(n_samples=1000, n_features=10, n_classes=3, n_informative=8, random_state=42) 40 | X_clf_train, X_clf_test, y_clf_train, y_clf_test = train_test_split(X_clf, y_clf, test_size=0.2, random_state=42) 41 | X_clf_train = scaler.fit_transform(X_clf_train) 42 | X_clf_test = scaler.transform(X_clf_test) 43 | y_clf_train_cat = tf.keras.utils.to_categorical(y_clf_train) 44 | y_clf_test_cat = tf.keras.utils.to_categorical(y_clf_test) 45 | 46 | class CustomModel(tf.keras.Model): 47 | def __init__(self): 48 | super(CustomModel, self).__init__() 49 | self.dense1 = tf.keras.layers.Dense(128, activation='relu') 50 | self.dense2 = tf.keras.layers.Dense(64, activation='relu') 51 | self.dense3 = tf.keras.layers.Dense(3, activation='softmax') 52 | 53 | def call(self, inputs): 54 | x = self.dense1(inputs) 55 | x = self.dense2(x) 56 | return self.dense3(x) 57 | 58 | custom_model = CustomModel() 59 | custom_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) 60 | print("\nCustom Model Training:") 61 | custom_model.fit(X_clf_train, y_clf_train_cat, epochs=10, batch_size=32, validation_split=0.2, verbose=0) 62 | loss, acc = custom_model.evaluate(X_clf_test, y_clf_test_cat, verbose=0) 63 | print("Custom Model Test Loss:", loss.round(4), "Test Accuracy:", acc.round(4)) 64 | 65 | # %% [4. Layers: Dense, Convolutional, Pooling, Normalization] 66 | # Example: CNN for synthetic image-like data (simplified). 67 | X_img = np.random.rand(100, 28, 28, 1).astype(np.float32) # 100 samples, 28x28x1 68 | y_img = np.random.randint(0, 2, 100) # Binary classification 69 | X_img_train, X_img_test, y_img_train, y_img_test = train_test_split(X_img, y_img, test_size=0.2, random_state=42) 70 | 71 | cnn_model = tf.keras.Sequential([ 72 | tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)), 73 | tf.keras.layers.BatchNormalization(), 74 | tf.keras.layers.MaxPooling2D((2, 2)), 75 | tf.keras.layers.Conv2D(32, (3, 3), activation='relu'), 76 | tf.keras.layers.Flatten(), 77 | tf.keras.layers.Dense(64, activation='relu'), 78 | tf.keras.layers.Dense(1, activation='sigmoid') 79 | ]) 80 | cnn_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 81 | print("\nCNN Model Summary:") 82 | cnn_model.summary() 83 | cnn_model.fit(X_img_train, y_img_train, epochs=5, batch_size=16, validation_split=0.2, verbose=0) 84 | cnn_loss, cnn_acc = cnn_model.evaluate(X_img_test, y_img_test, verbose=0) 85 | print("CNN Model Test Loss:", cnn_loss.round(4), "Test Accuracy:", cnn_acc.round(4)) 86 | 87 | # %% [5. Activations: ReLU, Sigmoid, Softmax] 88 | # Demonstrate activation functions in a small network. 89 | act_model = tf.keras.Sequential([ 90 | tf.keras.layers.Dense(16, activation='relu', input_shape=(5,)), # ReLU 91 | tf.keras.layers.Dense(8, activation='sigmoid'), # Sigmoid 92 | tf.keras.layers.Dense(3, activation='softmax') # Softmax 93 | ]) 94 | print("\nActivation Functions Model Summary:") 95 | act_model.summary() 96 | 97 | # %% [6. Loss Functions: MSE, Categorical Crossentropy] 98 | # MSE for regression (used in seq_model). 99 | # Categorical Crossentropy for classification (used in custom_model). 100 | print("\nLoss Functions Used:") 101 | print("MSE for Regression (Sequential Model):", history_seq.history['loss'][-1].round(4)) 102 | print("Categorical Crossentropy for Classification (Custom Model):", loss.round(4)) 103 | 104 | # %% [7. Optimizers: SGD, Adam, RMSprop] 105 | # Compare optimizers on the regression task. 
106 | optimizers = { 107 | 'SGD': tf.keras.optimizers.SGD(learning_rate=0.01), 108 | 'Adam': tf.keras.optimizers.Adam(learning_rate=0.001), 109 | 'RMSprop': tf.keras.optimizers.RMSprop(learning_rate=0.001) 110 | } 111 | results = {} 112 | for name, opt in optimizers.items(): 113 | model = tf.keras.Sequential([ 114 | tf.keras.layers.Dense(32, activation='relu', input_shape=(5,)), 115 | tf.keras.layers.Dense(1) 116 | ]) 117 | model.compile(optimizer=opt, loss='mse') 118 | history = model.fit(X_reg_train, y_reg_train, epochs=10, batch_size=32, validation_split=0.2, verbose=0) 119 | results[name] = history.history['val_loss'][-1] 120 | print("\nOptimizer Comparison (Validation Loss):") 121 | for name, val_loss in results.items(): 122 | print(f"{name}: {val_loss:.4f}") 123 | 124 | # %% [8. Learning Rate Schedules] 125 | # Use a decaying learning rate schedule for the regression task. 126 | initial_lr = 0.1 127 | lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay( 128 | initial_lr, decay_steps=100, decay_rate=0.9, staircase=True 129 | ) 130 | model_lr = tf.keras.Sequential([ 131 | tf.keras.layers.Dense(32, activation='relu', input_shape=(5,)), 132 | tf.keras.layers.Dense(1) 133 | ]) 134 | model_lr.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule), loss='mse') 135 | history_lr = model_lr.fit(X_reg_train, y_reg_train, epochs=20, batch_size=32, validation_split=0.2, verbose=0) 136 | print("\nLearning Rate Schedule Model Final Validation Loss:", history_lr.history['val_loss'][-1].round(4)) 137 | 138 | # %% [9. Visualizing Training Progress] 139 | # Plot loss curves for Sequential and Learning Rate Schedule models. 140 | plt.figure() 141 | plt.plot(history_seq.history['loss'], label='Sequential (Adam)') 142 | plt.plot(history_lr.history['loss'], label='Learning Rate Schedule') 143 | plt.xlabel('Epoch') 144 | plt.ylabel('Loss') 145 | plt.title('Training Loss Curves') 146 | plt.legend() 147 | plt.savefig('loss_curves.png') 148 | 149 | # %% [10. Interview Scenario: Model Design] 150 | # Design a CNN for a small image classification task. 
151 | interview_cnn = tf.keras.Sequential([
152 |     tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
153 |     tf.keras.layers.MaxPooling2D((2, 2)),
154 |     tf.keras.layers.BatchNormalization(),
155 |     tf.keras.layers.Flatten(),
156 |     tf.keras.layers.Dense(64, activation='relu'),
157 |     tf.keras.layers.Dense(10, activation='softmax')
158 | ])
159 | interview_cnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
160 | print("\nInterview Scenario: CNN Model Summary:")
161 | interview_cnn.summary()
162 | print("Explanation: Conv2D extracts features, MaxPooling reduces dimensions, BatchNorm stabilizes training, Softmax outputs class probabilities.")
-------------------------------------------------------------------------------- /Tensorflow Fundamentals/03 Advanced TensorFlow Concepts/02 Advanced Architectures/advanced_architectures.py: --------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | from tensorflow.keras.datasets import cifar10
5 | # TF-Agents ships as the tf_agents module (pip install tf-agents), not tensorflow_agents.
6 | from tf_agents.environments import suite_gym, tf_py_environment
7 | from tf_agents.agents.dqn import dqn_agent
8 | from tf_agents.networks import q_network
9 | from tf_agents.policies import random_tf_policy
10 | from tf_agents.replay_buffers import tf_uniform_replay_buffer
11 | from tf_agents.trajectories import trajectory
12 | from tf_agents.utils import common
13 | 
14 | # %% [1. Introduction to Advanced Architectures]
15 | # Advanced architectures include Transformers, Generative Models, Graph Neural Networks, and Reinforcement Learning.
16 | # This file demonstrates a Vision Transformer, a VAE, and a DQN built with TF-Agents.
17 | 
18 | print("TensorFlow version:", tf.__version__)
19 | 
20 | # %% [2. Preparing Datasets]
21 | # Load CIFAR-10 for the Vision Transformer and the VAE.
22 | (x_train, y_train), (x_test, y_test) = cifar10.load_data()
23 | x_train = x_train.astype('float32') / 255.0
24 | x_test = x_test.astype('float32') / 255.0
25 | y_train = tf.keras.utils.to_categorical(y_train, 10)
26 | y_test = tf.keras.utils.to_categorical(y_test, 10)
27 | print("\nCIFAR-10 Dataset:")
28 | print("Train Shape:", x_train.shape, "Test Shape:", x_test.shape)
29 | 
30 | train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
31 | test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32).prefetch(tf.data.AUTOTUNE)
32 | 
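# %% [Addendum: Sanity-Checking the Pipeline]
# Editorial addition, not in the original file: confirm batch shapes and
# one-hot labels before feeding the models below.
for images, labels in train_ds.take(1):
    print("Batch images:", images.shape)  # expected (32, 32, 32, 3)
    print("Batch labels:", labels.shape)  # expected (32, 10), one-hot
print("Batches per epoch:", tf.data.experimental.cardinality(train_ds).numpy())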
33 | # %% [3. Vision Transformer (ViT)]
34 | # Simplified Vision Transformer for CIFAR-10.
35 | class PatchExtractor(tf.keras.layers.Layer):
36 |     def __init__(self, patch_size):
37 |         super().__init__()
38 |         self.patch_size = patch_size
39 | 
40 |     def call(self, images):
41 |         patches = tf.image.extract_patches(
42 |             images=images, sizes=[1, self.patch_size, self.patch_size, 1],
43 |             strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding='VALID')
44 |         return tf.reshape(patches, [tf.shape(images)[0], -1, patches.shape[-1]])
45 | class ViT(tf.keras.Model):
46 |     def __init__(self, num_classes, patch_size, num_patches, d_model, num_heads):
47 |         super().__init__()
48 |         self.patch_extractor = PatchExtractor(patch_size)
49 |         self.projection = tf.keras.layers.Dense(d_model)  # Project raw patch_size*patch_size*3 features to d_model so patches match the CLS token
50 |         self.pos_embedding = self.add_weight(name='pos_embedding', shape=(1, num_patches + 1, d_model))
51 |         self.cls_token = self.add_weight(name='cls_token', shape=(1, 1, d_model))
52 |         self.transformer = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=d_model)
53 |         self.dense = tf.keras.layers.Dense(num_classes, activation='softmax')
54 | 
55 |     def call(self, inputs):
56 |         patches = self.projection(self.patch_extractor(inputs))
57 |         batch_size = tf.shape(inputs)[0]
58 |         cls_tokens = tf.repeat(self.cls_token, batch_size, axis=0)
59 |         x = tf.concat([cls_tokens, patches], axis=1)
60 |         x += self.pos_embedding
61 |         x = self.transformer(x, x)
62 |         x = x[:, 0, :]  # Classify from the CLS token
63 |         return self.dense(x)
64 | 
65 | vit_model = ViT(num_classes=10, patch_size=8, num_patches=(32//8)**2, d_model=64, num_heads=4)
66 | vit_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
67 | print("\nVision Transformer Training:")
68 | vit_history = vit_model.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1)
69 | print("ViT Test Accuracy:", round(vit_history.history['val_accuracy'][-1], 4))
70 | 
71 | # %% [4. Generative Model: Variational Autoencoder (VAE)]
72 | # VAE for generating CIFAR-10-like images.
73 | class VAE(tf.keras.Model): 74 | def __init__(self): 75 | super().__init__() 76 | self.encoder = tf.keras.Sequential([ 77 | tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)), 78 | tf.keras.layers.Flatten(), 79 | tf.keras.layers.Dense(128, activation='relu'), 80 | tf.keras.layers.Dense(16 + 16) # Mean + log variance 81 | ]) 82 | self.decoder = tf.keras.Sequential([ 83 | tf.keras.layers.Dense(128, activation='relu', input_shape=(16,)), 84 | tf.keras.layers.Dense(32 * 32 * 32, activation='relu'), 85 | tf.keras.layers.Reshape((32, 32, 32)), 86 | tf.keras.layers.Conv2DTranspose(3, (3, 3), activation='sigmoid', padding='same') 87 | ]) 88 | 89 | def call(self, inputs): 90 | mean, logvar = tf.split(self.encoder(inputs), num_or_size_splits=2, axis=1) 91 | epsilon = tf.random.normal(tf.shape(mean)) 92 | z = mean + tf.exp(0.5 * logvar) * epsilon 93 | return self.decoder(z) 94 | 95 | vae_model = VAE() 96 | vae_optimizer = tf.keras.optimizers.Adam() 97 | @tf.function 98 | def vae_loss(x, x_recon): 99 | mean, logvar = tf.split(vae_model.encoder(x), num_or_size_splits=2, axis=1) 100 | recon_loss = tf.reduce_mean(tf.keras.losses.binary_crossentropy(x, x_recon)) 101 | kl_loss = -0.5 * tf.reduce_mean(1 + logvar - tf.square(mean) - tf.exp(logvar)) 102 | return recon_loss + kl_loss 103 | 104 | @tf.function 105 | def train_vae_step(x): 106 | with tf.GradientTape() as tape: 107 | x_recon = vae_model(x) 108 | loss = vae_loss(x, x_recon) 109 | gradients = tape.gradient(loss, vae_model.trainable_variables) 110 | vae_optimizer.apply_gradients(zip(gradients, vae_model.trainable_variables)) 111 | return loss 112 | 113 | print("\nVAE Training:") 114 | for epoch in range(3): 115 | for x, _ in train_ds: 116 | loss = train_vae_step(x) 117 | print(f"Epoch {epoch + 1}, Loss: {loss.numpy():.4f}") 118 | 119 | # Generate and save sample images 120 | generated = vae_model(x_test[:5]) 121 | plt.figure() 122 | for i in range(5): 123 | plt.subplot(2, 5, i + 1) 124 | plt.imshow(x_test[i]) 125 | plt.title("Original") 126 | plt.axis('off') 127 | plt.subplot(2, 5, i + 6) 128 | plt.imshow(generated[i]) 129 | plt.title("Generated") 130 | plt.axis('off') 131 | plt.savefig('vae_samples.png') 132 | 133 | # %% [5. Reinforcement Learning with TF-Agents] 134 | # DQN for CartPole environment. 
135 | # Load the environment separately for training and evaluation so they do not share state.
136 | train_env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v0'))
137 | eval_env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v0'))
138 | 
139 | q_net = q_network.QNetwork(
140 |     train_env.observation_spec(),
141 |     train_env.action_spec(),
142 |     fc_layer_params=(100,)
143 | )
144 | agent = dqn_agent.DqnAgent(
145 |     train_env.time_step_spec(),
146 |     train_env.action_spec(),
147 |     q_network=q_net,
148 |     optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
149 |     td_errors_loss_fn=common.element_wise_squared_loss,
150 |     train_step_counter=tf.Variable(0)
151 | )
152 | agent.initialize()
153 | 
154 | replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
155 |     data_spec=agent.collect_data_spec,
156 |     batch_size=train_env.batch_size,
157 |     max_length=1000
158 | )
159 | collect_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(), train_env.action_spec())
160 | def collect_step(env, policy):
161 |     time_step = env.current_time_step()
162 |     action_step = policy.action(time_step)
163 |     next_time_step = env.step(action_step.action)
164 |     traj = trajectory.from_transition(time_step, action_step, next_time_step)  # from tf_agents.trajectories
165 |     replay_buffer.add_batch(traj)
166 | 
167 | print("\nDQN Training on CartPole:")
168 | for _ in range(100):
169 |     collect_step(train_env, collect_policy)
170 | dataset = replay_buffer.as_dataset(num_parallel_calls=3, sample_batch_size=64, num_steps=2).prefetch(3)
171 | iterator = iter(dataset)
172 | for _ in range(1000):
173 |     trajectories, _ = next(iterator)
174 |     agent.train(trajectories)
175 | 
176 | # Evaluate DQN
177 | total_reward = 0
178 | for _ in range(5):
179 |     time_step = eval_env.reset()
180 |     episode_reward = 0
181 |     while not time_step.is_last():
182 |         action_step = agent.policy.action(time_step)
183 |         time_step = eval_env.step(action_step.action)
184 |         episode_reward += time_step.reward
185 |     total_reward += episode_reward
186 | print("Average Reward:", (total_reward / 5).numpy())
-------------------------------------------------------------------------------- /Tensorflow Fundamentals/01 Core TensorFlow Foundations/04 Datasets and Data Loading/datasets_and_data_loading.py: --------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | try:
5 |     import tensorflow_datasets as tfds
6 | except ImportError:
7 |     tfds = None
8 | from sklearn.preprocessing import StandardScaler
9 | 
10 | # %% [1. Introduction to Datasets and Data Loading]
11 | # Efficient data loading and preprocessing are critical for ML training.
12 | # TensorFlow provides tf.keras.datasets, tfds.load, tf.data.Dataset, and tf.keras.preprocessing.
13 | 
14 | print("TensorFlow version:", tf.__version__)
15 | 
16 | # %% [2. Built-in Datasets with tf.keras.datasets]
17 | # Load the MNIST dataset from tf.keras.datasets.
18 | (x_train_mnist, y_train_mnist), (x_test_mnist, y_test_mnist) = tf.keras.datasets.mnist.load_data()
19 | print("\nMNIST Dataset:")
20 | print("Train Shape:", x_train_mnist.shape, "Test Shape:", x_test_mnist.shape)
21 | print("Label Example:", y_train_mnist[:5])
22 | 
23 | # Normalize pixel values to [0, 1]
24 | x_train_mnist = x_train_mnist.astype('float32') / 255.0
25 | x_test_mnist = x_test_mnist.astype('float32') / 255.0
26 | print("Normalized Train Data (first sample, first row):", x_train_mnist[0, 0, :5])
27 | 
28 | # %% [3. TensorFlow Datasets with tfds.load]
29 | # Load CIFAR-10 dataset using tensorflow-datasets (if installed).
30 | if tfds is not None: 31 | ds_cifar, info = tfds.load('cifar10', with_info=True, as_supervised=True) 32 | ds_cifar_train = ds_cifar['train'] 33 | ds_cifar_test = ds_cifar['test'] 34 | print("\nCIFAR-10 Dataset Info:") 35 | print("Features:", info.features) 36 | print("Number of Training Examples:", info.splits['train'].num_examples) 37 | 38 | # Example: Extract one batch 39 | for image, label in ds_cifar_train.take(1): 40 | print("Sample Image Shape:", image.shape, "Label:", label.numpy()) 41 | else: 42 | print("\ntensorflow-datasets not installed. Install with: pip install tensorflow-datasets") 43 | ds_cifar_train = None 44 | 45 | # %% [4. Data Pipeline with tf.data.Dataset] 46 | # Create a tf.data.Dataset pipeline for MNIST. 47 | mnist_train_ds = tf.data.Dataset.from_tensor_slices((x_train_mnist, y_train_mnist)) 48 | 49 | # Apply transformations: shuffle, batch, and preprocess 50 | def preprocess_mnist(image, label): 51 | image = tf.image.random_brightness(image, max_delta=0.1) # Data augmentation 52 | image = tf.expand_dims(image, axis=-1) # Add channel dimension: (28, 28) -> (28, 28, 1) 53 | label = tf.cast(label, tf.int32) 54 | return image, label 55 | 56 | mnist_train_ds = (mnist_train_ds 57 | .map(preprocess_mnist, num_parallel_calls=tf.data.AUTOTUNE) 58 | .shuffle(buffer_size=1000) 59 | .batch(batch_size=32) 60 | .prefetch(tf.data.AUTOTUNE)) 61 | print("\nMNIST tf.data.Dataset Pipeline Created:") 62 | for image, label in mnist_train_ds.take(1): 63 | print("Batch Shape:", image.shape, "Label Shape:", label.shape) 64 | 65 | # %% [5. Preprocessing with tf.keras.preprocessing] 66 | # Use tf.keras.preprocessing for data augmentation on MNIST. 67 | data_augmentation = tf.keras.Sequential([ 68 | tf.keras.layers.RandomRotation(0.1), 69 | tf.keras.layers.RandomZoom(0.1), 70 | tf.keras.layers.RandomTranslation(0.1, 0.1) 71 | ]) 72 | # Apply augmentation to a sample image 73 | sample_image = x_train_mnist[0:1][..., np.newaxis] # Shape: (1, 28, 28, 1) 74 | augmented_image = data_augmentation(sample_image) 75 | print("\nAugmented Image Shape:", augmented_image.shape) 76 | 77 | # Visualize original vs. augmented image 78 | plt.figure(figsize=(8, 4)) 79 | plt.subplot(1, 2, 1) 80 | plt.imshow(sample_image[0, :, :, 0], cmap='gray') 81 | plt.title('Original Image') 82 | plt.subplot(1, 2, 2) 83 | plt.imshow(augmented_image[0, :, :, 0], cmap='gray') 84 | plt.title('Augmented Image') 85 | plt.savefig('augmentation_comparison.png') 86 | 87 | # %% [6. Handling Large Datasets] 88 | # Simulate a large dataset with synthetic data and create an efficient pipeline. 89 | np.random.seed(42) 90 | large_x = np.random.rand(100000, 10).astype(np.float32) 91 | large_y = np.random.randint(0, 2, 100000).astype(np.int32) 92 | large_ds = tf.data.Dataset.from_tensor_slices((large_x, large_y)) 93 | 94 | # Preprocessing function 95 | def preprocess_large(x, y): 96 | x = tf.cast(x, tf.float32) 97 | x = (x - tf.reduce_mean(x, axis=0)) / tf.math.reduce_std(x, axis=0) # Standardize 98 | y = tf.cast(y, tf.int32) 99 | return x, y 100 | 101 | # Efficient pipeline for large dataset 102 | large_ds = (large_ds 103 | .map(preprocess_large, num_parallel_calls=tf.data.AUTOTUNE) 104 | .shuffle(buffer_size=10000) 105 | .batch(batch_size=64) 106 | .prefetch(tf.data.AUTOTUNE)) 107 | print("\nLarge Dataset Pipeline:") 108 | for x, y in large_ds.take(1): 109 | print("Batch Shape:", x.shape, "Label Shape:", y.shape) 110 | print("Standardized Features (first sample, first 5):", x[0, :5].numpy().round(4)) 111 | 112 | # %% [7. 
Practical Application: Training a Model with tf.data] 113 | # Train a simple CNN on the MNIST pipeline. 114 | cnn_model = tf.keras.Sequential([ 115 | tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)), 116 | tf.keras.layers.MaxPooling2D((2, 2)), 117 | tf.keras.layers.Flatten(), 118 | tf.keras.layers.Dense(64, activation='relu'), 119 | tf.keras.layers.Dense(10, activation='softmax') 120 | ]) 121 | cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) 122 | print("\nCNN Training on MNIST Pipeline:") 123 | history = cnn_model.fit(mnist_train_ds, epochs=5, validation_data=(x_test_mnist[..., np.newaxis], y_test_mnist), verbose=1) 124 | print("Final Validation Accuracy:", history.history['val_accuracy'][-1].round(4)) 125 | 126 | # %% [8. Visualizing Training Progress] 127 | # Plot training and validation accuracy. 128 | plt.figure() 129 | plt.plot(history.history['accuracy'], label='Train Accuracy') 130 | plt.plot(history.history['val_accuracy'], label='Validation Accuracy') 131 | plt.xlabel('Epoch') 132 | plt.ylabel('Accuracy') 133 | plt.title('MNIST CNN Training Progress') 134 | plt.legend() 135 | plt.savefig('mnist_training_progress.png') 136 | 137 | # %% [9. Interview Scenario: Optimizing Data Pipelines] 138 | # Optimize a pipeline for a large image dataset. 139 | def optimized_pipeline(dataset, batch_size=32): 140 | def preprocess(image, label): 141 | image = tf.cast(image, tf.float32) / 255.0 142 | image = tf.image.random_flip_left_right(image) 143 | label = tf.cast(label, tf.int32) 144 | return image, label 145 | return (dataset 146 | .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE) 147 | .cache() # Cache in memory for small datasets 148 | .shuffle(buffer_size=1000) 149 | .batch(batch_size) 150 | .prefetch(tf.data.AUTOTUNE)) 151 | 152 | print("\nInterview Scenario: Optimized Pipeline") 153 | print("Key Optimizations: cache(), prefetch(), parallel map, appropriate shuffle buffer.") 154 | if tfds is not None: 155 | sample_ds = tfds.load('cifar10', split='train', as_supervised=True) 156 | optimized_ds = optimized_pipeline(sample_ds) 157 | for image, label in optimized_ds.take(1): 158 | print("Optimized Batch Shape:", image.shape, "Label Shape:", label.shape) 159 | 160 | # %% [10. Custom Preprocessing Function] 161 | # Create a custom preprocessing function for a regression dataset. 162 | np.random.seed(42) 163 | X_reg = np.random.rand(1000, 5).astype(np.float32) 164 | y_reg = np.sum(X_reg, axis=1) + np.random.normal(0, 0.1, 1000).astype(np.float32) 165 | reg_ds = tf.data.Dataset.from_tensor_slices((X_reg, y_reg)) 166 | 167 | def custom_preprocess(x, y): 168 | x = tf.cast(x, tf.float32) 169 | x = (x - tf.reduce_mean(x)) / tf.math.reduce_std(x) # Normalize 170 | y = tf.cast(y, tf.float32) 171 | return x, y 172 | 173 | reg_ds = (reg_ds 174 | .map(custom_preprocess, num_parallel_calls=tf.data.AUTOTUNE) 175 | .shuffle(buffer_size=100) 176 | .batch(batch_size=16) 177 | .prefetch(tf.data.AUTOTUNE)) 178 | print("\nCustom Preprocessing for Regression Dataset:") 179 | for x, y in reg_ds.take(1): 180 | print("Batch Shape:", x.shape, "Label Shape:", y.shape) 181 | print("Normalized Features (first sample, first 5):", x[0, :5].numpy().round(4)) -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # 🔥 TensorFlow Interview Preparation 2 | 3 |
4 | Badges: TensorFlow · NumPy · Keras · TensorFlow Datasets · TensorFlow Hub · TensorFlow Lite
10 | 
11 | Your comprehensive guide to mastering TensorFlow for AI/ML research and industry applications
12 | 
13 | 14 | --- 15 | 16 | ## 📖 Introduction 17 | 18 | Welcome to the TensorFlow Mastery Roadmap! 🚀 This repository is your ultimate guide to conquering TensorFlow, a powerful open-source framework for machine learning and AI. Designed for hands-on learning and interview preparation, it covers everything from tensors to advanced model deployment, empowering you to excel in AI/ML projects and technical interviews with confidence. 19 | 20 | ## 🌟 What’s Inside? 21 | 22 | - **Core TensorFlow Foundations**: Master tensors, Keras API, neural networks, and data pipelines. 23 | - **Intermediate Techniques**: Build CNNs, RNNs, and leverage transfer learning. 24 | - **Advanced Concepts**: Explore Transformers, GANs, distributed training, and edge deployment. 25 | - **Specialized Libraries**: Dive into `TensorFlow Datasets`, `TensorFlow Hub`, `Keras`, and `TensorFlow Lite`. 26 | - **Hands-on Projects**: Tackle beginner-to-advanced projects to solidify your skills. 27 | - **Best Practices**: Learn optimization, debugging, and production-ready workflows. 28 | 29 | ## 🔍 Who Is This For? 30 | 31 | - Data Scientists aiming to build scalable ML models. 32 | - Machine Learning Engineers preparing for technical interviews. 33 | - AI Researchers exploring advanced architectures. 34 | - Software Engineers transitioning to deep learning roles. 35 | - Anyone passionate about TensorFlow and AI innovation. 36 | 37 | ## 🗺️ Comprehensive Learning Roadmap 38 | 39 | --- 40 | 41 | ### 📚 Prerequisites 42 | 43 | - **Python Proficiency**: Core Python (data structures, OOP, file handling). 44 | - **Mathematics for ML**: 45 | - Linear Algebra (vectors, matrices, eigenvalues) 46 | - Calculus (gradients, optimization) 47 | - Probability & Statistics (distributions, Bayes’ theorem) 48 | - **Machine Learning Basics**: 49 | - Supervised/Unsupervised Learning 50 | - Regression, Classification, Clustering 51 | - Bias-Variance, Evaluation Metrics 52 | - **NumPy**: Arrays, broadcasting, and mathematical operations. 
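
A quick way to confirm these prerequisites are in place (a small addition, not part of the original roadmap) is a one-off environment check:

```python
# Minimal environment check: verify TensorFlow and NumPy import and list devices.
import tensorflow as tf
import numpy as np

print("TensorFlow:", tf.__version__)
print("NumPy:", np.__version__)
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices('GPU'))
```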
53 | 54 | --- 55 | 56 | ### 🏗️ Core TensorFlow Foundations 57 | 58 | #### 🧮 Tensors and Operations 59 | - Tensor Creation (`tf.constant`, `tf.zeros`, `tf.random`) 60 | - Attributes (shape, `dtype`, `device`) 61 | - Operations (indexing, reshaping, matrix multiplication, broadcasting) 62 | - CPU/GPU Interoperability 63 | - NumPy Integration 64 | 65 | #### 🔢 Automatic Differentiation 66 | - Computational Graphs 67 | - Gradient Computation (`tf.GradientTape`) 68 | - Gradient Application (`optimizer.apply_gradients`) 69 | - No-Gradient Context (`tf.stop_gradient`) 70 | 71 | #### 🛠️ Neural Networks (`tf.keras`) 72 | - Defining Models (`tf.keras.Sequential`, `tf.keras.Model`) 73 | - Layers: Dense, Convolutional, Pooling, Normalization 74 | - Activations: ReLU, Sigmoid, Softmax 75 | - Loss Functions: MSE, Categorical Crossentropy 76 | - Optimizers: SGD, Adam, RMSprop 77 | - Learning Rate Schedules 78 | 79 | #### 📂 Datasets and Data Loading 80 | - Built-in Datasets (`tf.keras.datasets`) 81 | - TensorFlow Datasets (`tfds.load`) 82 | - Data Pipeline (`tf.data.Dataset`, map, batch, shuffle) 83 | - Preprocessing (`tf.keras.preprocessing`) 84 | - Handling Large Datasets 85 | 86 | #### 🔄 Training Pipeline 87 | - Training/Evaluation Loops 88 | - Model Checkpointing (`model.save`, `model.load`) 89 | - GPU/TPU Training (`tf.device`) 90 | - Monitoring with TensorBoard 91 | 92 | --- 93 | 94 | ### 🧩 Intermediate TensorFlow Concepts 95 | 96 | #### 🏋️ Model Architectures 97 | - Feedforward Neural Networks (FNNs) 98 | - Convolutional Neural Networks (CNNs) 99 | - Recurrent Neural Networks (RNNs, LSTMs, GRUs) 100 | - Transfer Learning (`tf.keras.applications`) 101 | 102 | #### ⚙️ Customization 103 | - Custom Layers and Loss Functions 104 | - Functional and Subclassing APIs 105 | - Debugging Gradient Issues 106 | 107 | #### 📈 Optimization 108 | - Hyperparameter Tuning (learning rate, batch size) 109 | - Regularization (dropout, L2) 110 | - Mixed Precision Training (`tf.keras.mixed_precision`) 111 | - Model Quantization 112 | 113 | --- 114 | 115 | ### 🚀 Advanced TensorFlow Concepts 116 | 117 | #### 🌐 Distributed Training 118 | - Data Parallelism (`tf.distribute.MirroredStrategy`) 119 | - Multi-GPU/TPU Training (`tf.distribute.TPUStrategy`) 120 | - Distributed Datasets 121 | 122 | #### 🧠 Advanced Architectures 123 | - Transformers (BERT, Vision Transformers) 124 | - Generative Models (VAEs, GANs) 125 | - Graph Neural Networks 126 | - Reinforcement Learning (TF-Agents) 127 | 128 | #### 🛠️ Custom Extensions 129 | - Custom Gradient Functions 130 | - TensorFlow Addons 131 | - Custom Optimizers 132 | 133 | #### 📦 Deployment 134 | - Model Export (SavedModel, ONNX) 135 | - Serving (TensorFlow Serving, FastAPI) 136 | - Edge Deployment (TensorFlow Lite, TensorFlow.js) 137 | 138 | --- 139 | 140 | ### 🧬 Specialized TensorFlow Libraries 141 | 142 | - **TensorFlow Datasets**: Curated datasets for ML tasks 143 | - **TensorFlow Hub**: Pretrained models for transfer learning 144 | - **Keras**: High-level API for rapid prototyping 145 | - **TensorFlow Lite**: Lightweight models for mobile/edge devices 146 | - **TensorFlow.js**: ML in the browser 147 | 148 | --- 149 | 150 | ### ⚠️ Best Practices 151 | 152 | - Modular Code Organization 153 | - Version Control with Git 154 | - Unit Testing for Models 155 | - Experiment Tracking (TensorBoard, MLflow) 156 | - Reproducible Research (random seeds, versioning) 157 | 158 | --- 159 | 160 | ## 💡 Why Master TensorFlow? 
161 | 162 | TensorFlow is a leading framework for machine learning, and here’s why: 163 | 1. **Scalability**: Seamless transition from research to production. 164 | 2. **Ecosystem**: Rich libraries for datasets, pretrained models, and edge deployment. 165 | 3. **Industry Adoption**: Powers AI at Google, Airbnb, and more. 166 | 4. **Versatility**: Supports mobile, web, and enterprise applications. 167 | 5. **Community**: Active support on X, forums, and GitHub. 168 | 169 | This roadmap is your guide to mastering TensorFlow for AI/ML careers—let’s ignite your machine learning journey! 🔥 170 | 171 | ## 📆 Study Plan 172 | 173 | - **Month 1-2**: Tensors, Keras, neural networks, data pipelines 174 | - **Month 3-4**: CNNs, RNNs, transfer learning, intermediate projects 175 | - **Month 5-6**: Transformers, GANs, distributed training 176 | - **Month 7+**: Deployment, custom extensions, advanced projects 177 | 178 | ## 🛠️ Projects 179 | 180 | - **Beginner**: Linear Regression, MNIST/CIFAR-10 Classification 181 | - **Intermediate**: Object Detection (SSD, Faster R-CNN), Sentiment Analysis 182 | - **Advanced**: BERT Fine-tuning, GANs, Distributed Training 183 | 184 | ## 📚 Resources 185 | 186 | - **Official Docs**: [tensorflow.org](https://tensorflow.org) 187 | - **Tutorials**: TensorFlow Tutorials, Coursera 188 | - **Books**: 189 | - *Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow* by Aurélien Géron 190 | - *TensorFlow for Deep Learning* by Bharath Ramsundar 191 | - **Communities**: TensorFlow Forums, X (#TensorFlow), r/TensorFlow 192 | 193 | ## 🤝 Contributions 194 | 195 | Want to enhance this roadmap? 🌟 196 | 1. Fork the repository. 197 | 2. Create a feature branch (`git checkout -b feature/amazing-addition`). 198 | 3. Commit changes (`git commit -m 'Add awesome content'`). 199 | 4. Push to the branch (`git push origin feature/amazing-addition`). 200 | 5. Open a Pull Request. 201 | 202 | --- 203 | 204 |
205 | Happy Learning and Best of Luck in Your AI/ML Journey! ✨
206 | 
-------------------------------------------------------------------------------- /Tensorflow Fundamentals/01 Core TensorFlow Foundations/05 Training Pipeline/training_pipeline.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | import numpy as np 3 | import matplotlib.pyplot as plt 4 | import os 5 | from datetime import datetime 6 | 7 | # %% [1. Introduction to Training Pipeline] 8 | # A training pipeline manages model training, evaluation, checkpointing, and monitoring. 9 | # TensorFlow supports training/evaluation loops, model.save/load, GPU/TPU training, and TensorBoard. 10 | 11 | print("TensorFlow version:", tf.__version__) 12 | 13 | # %% [2. Preparing the Dataset] 14 | # Load and preprocess MNIST dataset. 15 | (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() 16 | x_train = x_train.astype('float32') / 255.0 17 | x_test = x_test.astype('float32') / 255.0 18 | x_train = x_train[..., np.newaxis] # Shape: (60000, 28, 28, 1) 19 | x_test = x_test[..., np.newaxis] # Shape: (10000, 28, 28, 1) 20 | print("\nMNIST Dataset:") 21 | print("Train Shape:", x_train.shape, "Test Shape:", x_test.shape) 22 | 23 | # Create tf.data.Dataset pipelines 24 | train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE) 25 | test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32).prefetch(tf.data.AUTOTUNE) 26 | print("Dataset Pipelines Created") 27 | 28 | # %% [3. Defining the Model] 29 | # Create a CNN model using tf.keras.Sequential. 30 | model = tf.keras.Sequential([ 31 | tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)), 32 | tf.keras.layers.MaxPooling2D((2, 2)), 33 | tf.keras.layers.Conv2D(64, (3, 3), activation='relu'), 34 | tf.keras.layers.MaxPooling2D((2, 2)), 35 | tf.keras.layers.Flatten(), 36 | tf.keras.layers.Dense(128, activation='relu'), 37 | tf.keras.layers.Dense(10, activation='softmax') 38 | ]) 39 | print("\nModel Summary:") 40 | model.summary() 41 | 42 | # %% [4. Training/Evaluation Loops] 43 | # Compile and train the model with a custom training loop. 
44 | optimizer = tf.keras.optimizers.Adam(learning_rate=0.001) 45 | loss_fn = tf.keras.losses.SparseCategoricalCrossentropy() 46 | train_acc_metric = tf.keras.metrics.SparseCategoricalAccuracy() 47 | val_acc_metric = tf.keras.metrics.SparseCategoricalAccuracy() 48 | 49 | @tf.function 50 | def train_step(x, y): 51 | with tf.GradientTape() as tape: 52 | logits = model(x, training=True) 53 | loss = loss_fn(y, logits) 54 | gradients = tape.gradient(loss, model.trainable_variables) 55 | optimizer.apply_gradients(zip(gradients, model.trainable_variables)) 56 | train_acc_metric.update_state(y, logits) 57 | return loss 58 | 59 | @tf.function 60 | def test_step(x, y): 61 | logits = model(x, training=False) 62 | val_acc_metric.update_state(y, logits) 63 | 64 | # Training loop 65 | epochs = 5 66 | log_dir = "logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S") 67 | summary_writer = tf.summary.create_file_writer(log_dir) 68 | history = {'loss': [], 'accuracy': [], 'val_accuracy': []} 69 | 70 | for epoch in range(epochs): 71 | print(f"\nEpoch {epoch + 1}/{epochs}") 72 | train_loss = 0.0 73 | train_acc_metric.reset_states() 74 | val_acc_metric.reset_states() 75 | 76 | # Training 77 | for step, (x_batch, y_batch) in enumerate(train_ds): 78 | loss = train_step(x_batch, y_batch) 79 | train_loss += loss 80 | if step % 200 == 0: 81 | print(f"Step {step}, Loss: {loss.numpy():.4f}, Accuracy: {train_acc_metric.result().numpy():.4f}") 82 | 83 | # Evaluation 84 | for x_batch, y_batch in test_ds: 85 | test_step(x_batch, y_batch) 86 | 87 | # Log metrics to TensorBoard 88 | with summary_writer.as_default(): 89 | tf.summary.scalar('loss', train_loss / (step + 1), step=epoch) 90 | tf.summary.scalar('accuracy', train_acc_metric.result(), step=epoch) 91 | tf.summary.scalar('val_accuracy', val_acc_metric.result(), step=epoch) 92 | 93 | history['loss'].append(train_loss.numpy() / (step + 1)) 94 | history['accuracy'].append(train_acc_metric.result().numpy()) 95 | history['val_accuracy'].append(val_acc_metric.result().numpy()) 96 | print(f"Epoch {epoch + 1}, Loss: {history['loss'][-1]:.4f}, Accuracy: {history['accuracy'][-1]:.4f}, Val Accuracy: {history['val_accuracy'][-1]:.4f}") 97 | 98 | # %% [5. Model Checkpointing] 99 | # Save and load the model using model.save and model.load. 100 | checkpoint_dir = "checkpoints/mnist_cnn" 101 | os.makedirs(checkpoint_dir, exist_ok=True) 102 | model.save(os.path.join(checkpoint_dir, "model")) 103 | print("\nModel Saved to:", checkpoint_dir) 104 | 105 | # Load and evaluate the saved model 106 | loaded_model = tf.keras.models.load_model(os.path.join(checkpoint_dir, "model")) 107 | loaded_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) 108 | loss, acc = loaded_model.evaluate(test_ds, verbose=0) 109 | print("Loaded Model Test Loss:", loss.round(4), "Test Accuracy:", acc.round(4)) 110 | 111 | # %% [6. GPU/TPU Training with tf.device] 112 | # Train on GPU if available, otherwise CPU. 
113 | device = '/CPU:0' 114 | if tf.config.list_physical_devices('GPU'): 115 | device = '/GPU:0' 116 | print("\nTraining on GPU") 117 | elif tf.config.list_physical_devices('TPU'): 118 | device = '/TPU:0' 119 | print("\nTraining on TPU (Note: Typically requires cloud environment like Colab)") 120 | else: 121 | print("\nTraining on CPU") 122 | 123 | # Re-train a small model on the selected device for demonstration 124 | small_model = tf.keras.Sequential([ 125 | tf.keras.layers.Flatten(input_shape=(28, 28, 1)), 126 | tf.keras.layers.Dense(64, activation='relu'), 127 | tf.keras.layers.Dense(10, activation='softmax') 128 | ]) 129 | small_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) 130 | 131 | with tf.device(device): 132 | small_model.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1) 133 | print("Device Training Completed") 134 | 135 | # %% [7. Monitoring with TensorBoard] 136 | # TensorBoard logs are saved in log_dir. 137 | print("\nTensorBoard Monitoring:") 138 | print(f"Run: tensorboard --logdir {log_dir}") 139 | print("Then open http://localhost:6006 in your browser to view metrics.") 140 | 141 | # %% [8. Visualizing Training Progress] 142 | # Plot training and validation accuracy. 143 | plt.figure() 144 | plt.plot(history['accuracy'], label='Train Accuracy') 145 | plt.plot(history['val_accuracy'], label='Validation Accuracy') 146 | plt.xlabel('Epoch') 147 | plt.ylabel('Accuracy') 148 | plt.title('MNIST CNN Training Progress') 149 | plt.legend() 150 | plt.savefig('training_progress.png') 151 | 152 | # Plot training loss 153 | plt.figure() 154 | plt.plot(history['loss'], label='Train Loss') 155 | plt.xlabel('Epoch') 156 | plt.ylabel('Loss') 157 | plt.title('MNIST CNN Training Loss') 158 | plt.legend() 159 | plt.savefig('training_loss.png') 160 | 161 | # %% [9. Interview Scenario: Custom Training Loop] 162 | # Implement a custom training loop with gradient clipping. 163 | optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, clipnorm=1.0) # Gradient clipping 164 | model_clip = tf.keras.Sequential([ 165 | tf.keras.layers.Flatten(input_shape=(28, 28, 1)), 166 | tf.keras.layers.Dense(64, activation='relu'), 167 | tf.keras.layers.Dense(10, activation='softmax') 168 | ]) 169 | 170 | @tf.function 171 | def train_step_clip(x, y): 172 | with tf.GradientTape() as tape: 173 | logits = model_clip(x, training=True) 174 | loss = loss_fn(y, logits) 175 | gradients = tape.gradient(loss, model_clip.trainable_variables) 176 | optimizer.apply_gradients(zip(gradients, model_clip.trainable_variables)) 177 | train_acc_metric.update_state(y, logits) 178 | return loss 179 | 180 | print("\nInterview Scenario: Custom Training Loop with Gradient Clipping") 181 | for epoch in range(2): # Short loop for demonstration 182 | train_acc_metric.reset_states() 183 | for x_batch, y_batch in train_ds: 184 | loss = train_step_clip(x_batch, y_batch) 185 | print(f"Epoch {epoch + 1}, Loss: {loss.numpy():.4f}, Accuracy: {train_acc_metric.result().numpy():.4f}") 186 | 187 | # %% [10. Practical Application: Checkpoint Callback] 188 | # Use ModelCheckpoint callback to save the best model during training. 
189 | checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
190 |     filepath=os.path.join(checkpoint_dir, "best_model"),
191 |     save_best_only=True,
192 |     monitor='val_accuracy',
193 |     mode='max'
194 | )
195 | model.fit(train_ds, epochs=3, validation_data=test_ds, callbacks=[checkpoint_callback], verbose=1)
196 | print("\nBest Model Saved with ModelCheckpoint Callback")
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/04 Specialized TensorFlow Libraries/specialized_libraries.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | try:
5 |     import tensorflow_datasets as tfds
6 | except ImportError:
7 |     tfds = None
8 | try:
9 |     import tensorflow_hub as hub
10 | except ImportError:
11 |     hub = None
12 | import os
13 | 
14 | # %% [1. Introduction to Specialized TensorFlow Libraries]
15 | # TensorFlow offers specialized libraries for data, models, and deployment.
16 | # Covers TensorFlow Datasets, TensorFlow Hub, Keras, TensorFlow Lite, and TensorFlow.js.
17 | 
18 | print("TensorFlow version:", tf.__version__)
19 | 
20 | # %% [2. TensorFlow Datasets]
21 | # Load CIFAR-10 dataset using TensorFlow Datasets.
22 | if tfds is None:
23 |     print("\nTensorFlow Datasets not installed. Please install: `pip install tensorflow-datasets`")
24 |     # Fall back to tf.keras.datasets with the same preprocessing as the tfds branch,
25 |     # so the one-hot labels and the train_ds/test_ds pipelines used below still exist.
26 |     (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
27 |     x_train = x_train.astype('float32') / 255.0
28 |     x_test = x_test.astype('float32') / 255.0
29 |     y_train = tf.keras.utils.to_categorical(y_train, 10)
30 |     y_test = tf.keras.utils.to_categorical(y_test, 10)
31 |     train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
32 |     test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32).prefetch(tf.data.AUTOTUNE)
33 | else:
34 |     ds, info = tfds.load('cifar10', with_info=True, as_supervised=True)
35 |     train_ds = ds['train']
36 |     test_ds = ds['test']
37 |     def preprocess(image, label):
38 |         image = tf.cast(image, tf.float32) / 255.0
39 |         label = tf.one_hot(label, 10)
40 |         return image, label
41 |     train_ds = train_ds.map(preprocess).shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
42 |     test_ds = test_ds.map(preprocess).batch(32).prefetch(tf.data.AUTOTUNE)
43 |     # unbatch() before re-batching; otherwise next(iter(...)) yields a batch of 32-element batches
44 |     x_train, y_train = next(iter(train_ds.unbatch().batch(50000)))
45 |     x_test, y_test = next(iter(test_ds.unbatch().batch(10000)))
46 |     x_train, y_train = x_train.numpy(), y_train.numpy()
47 |     x_test, y_test = x_test.numpy(), y_test.numpy()
48 | 
49 | print("\nCIFAR-10 Dataset:")
50 | print("Train Shape:", x_train.shape, "Test Shape:", x_test.shape)
51 | 
52 | # Visualize dataset samples
53 | plt.figure(figsize=(10, 2))
54 | for i in range(5):
55 |     plt.subplot(1, 5, i + 1)
56 |     plt.imshow(x_train[i])
57 |     plt.title(f"Class: {np.argmax(y_train[i])}")
58 |     plt.axis('off')
59 | plt.savefig('cifar10_samples.png')
60 | 
61 | # %% [3. TensorFlow Hub]
62 | # Use a pre-trained MobileNetV2 from TensorFlow Hub for transfer learning.
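# ---------------------------------------------------------------------------
# (Added note.) hub.KerasLayer wraps a SavedModel published on tfhub.dev as an
# ordinary Keras layer: trainable=False freezes it for feature extraction,
# while trainable=True fine-tunes the pre-trained weights along with the head.
# ---------------------------------------------------------------------------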
63 | if hub is None:
64 |     print("\nTensorFlow Hub not installed. Please install: `pip install tensorflow-hub`")
65 |     base_model = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3), include_top=False, weights='imagenet')
66 | else:
67 |     hub_url = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/5"
68 |     base_model = hub.KerasLayer(hub_url, input_shape=(96, 96, 3), trainable=False)
69 | 
70 | # Resize images for MobileNetV2
71 | x_train_resized = tf.image.resize(x_train, [96, 96]).numpy()
72 | x_test_resized = tf.image.resize(x_test, [96, 96]).numpy()
73 | 
74 | # Build model with the base layer. The Hub feature-vector layer already returns a
75 | # pooled (batch, features) tensor, so 2D pooling is only needed in the fallback,
76 | # whose include_top=False output is still 4-D.
77 | if hub is None:
78 |     hub_model = tf.keras.Sequential([
79 |         base_model,
80 |         tf.keras.layers.GlobalAveragePooling2D(),
81 |         tf.keras.layers.Dense(10, activation='softmax')
82 |     ])
83 | else:
84 |     hub_model = tf.keras.Sequential([
85 |         base_model,
86 |         tf.keras.layers.Dense(10, activation='softmax')
87 |     ])
88 | hub_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
89 | print("\nTensorFlow Hub Model Summary:")
90 | hub_model.summary()
91 | hub_history = hub_model.fit(x_train_resized, y_train, epochs=3, batch_size=32,
92 |                             validation_data=(x_test_resized, y_test), verbose=1)
93 | print("TensorFlow Hub Test Accuracy:", round(hub_history.history['val_accuracy'][-1], 4))
94 | 
95 | # %% [4. Keras]
96 | # Build a CNN using Keras high-level API for rapid prototyping.
97 | keras_model = tf.keras.Sequential([
98 |     tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
99 |     tf.keras.layers.MaxPooling2D((2, 2)),
100 |     tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
101 |     tf.keras.layers.MaxPooling2D((2, 2)),
102 |     tf.keras.layers.Flatten(),
103 |     tf.keras.layers.Dense(128, activation='relu'),
104 |     tf.keras.layers.Dense(10, activation='softmax')
105 | ])
106 | keras_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
107 | print("\nKeras Model Summary:")
108 | keras_model.summary()
109 | keras_history = keras_model.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1)
110 | print("Keras Test Accuracy:", round(keras_history.history['val_accuracy'][-1], 4))
111 | 
112 | # %% [5. TensorFlow Lite]
113 | # Convert Keras model to TensorFlow Lite for edge deployment.
114 | converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
115 | tflite_model = converter.convert()
116 | tflite_path = "cifar10_cnn.tflite"
117 | with open(tflite_path, 'wb') as f:
118 |     f.write(tflite_model)
119 | print("\nTensorFlow Lite Model Saved to:", tflite_path)
120 | 
121 | # Evaluate TFLite model
122 | interpreter = tf.lite.Interpreter(model_path=tflite_path)
123 | interpreter.allocate_tensors()
124 | input_index = interpreter.get_input_details()[0]['index']
125 | output_index = interpreter.get_output_details()[0]['index']
126 | correct = 0
127 | total = 0
128 | for x, y in test_ds.unbatch().take(100):
129 |     x = x.numpy()[np.newaxis, ...]
130 |     interpreter.set_tensor(input_index, x)
131 |     interpreter.invoke()
132 |     pred = interpreter.get_tensor(output_index)
133 |     if np.argmax(pred) == np.argmax(y):
134 |         correct += 1
135 |     total += 1
136 | print("TensorFlow Lite Accuracy (Subset):", round(correct / total, 4))
137 | 
138 | # %% [6. TensorFlow.js]
139 | # Provide instructions for converting the Keras model to TensorFlow.js.
140 | print("\nTensorFlow.js Conversion Instructions:")
141 | print("1. Install: `pip install tensorflowjs`")
142 | print("2. Save the Keras model: `keras_model.save('cifar10_cnn.h5')`")
143 | print("3. Convert: `tensorflowjs_converter --input_format=keras cifar10_cnn.h5 tfjs_model`")
144 | print("4. Use in browser: Load `tfjs_model/model.json` with TensorFlow.js")
145 | print("Note: The converter reads a Keras model, not the .tflite file; requires the tensorflowjs package and a JavaScript environment.")
146 | 
147 | # %% [7. Visualizing Predictions]
148 | # Visualize predictions from Keras model.
149 | predictions = keras_model.predict(x_test[:5])
150 | plt.figure(figsize=(15, 3))
151 | for i in range(5):
152 |     plt.subplot(1, 5, i + 1)
153 |     plt.imshow(x_test[i])
154 |     plt.title(f"Pred: {np.argmax(predictions[i])}\nTrue: {np.argmax(y_test[i])}")
155 |     plt.axis('off')
156 | plt.savefig('keras_predictions.png')
157 | 
158 | # %% [8. Interview Scenario: Library Selection]
159 | # Discuss choosing TensorFlow libraries for a project.
160 | print("\nInterview Scenario: Library Selection")
161 | print("1. TensorFlow Datasets: For curated, preprocessed datasets.")
162 | print("2. TensorFlow Hub: For quick transfer learning with pre-trained models.")
163 | print("3. Keras: For rapid prototyping and simple model building.")
164 | print("4. TensorFlow Lite: For mobile/edge deployment with low latency.")
165 | print("5. TensorFlow.js: For browser-based ML with WebGL acceleration.")
166 | 
167 | # %% [9. Practical Application: Combined Workflow]
168 | # Combine TensorFlow Datasets, Hub, and Keras for transfer learning.
169 | if hub and tfds:
170 |     ds = tfds.load('cifar10', split='train', as_supervised=True)
171 |     def preprocess_hub(image, label):
172 |         image = tf.cast(image, tf.float32) / 255.0
173 |         image = tf.image.resize(image, [96, 96])
174 |         label = tf.one_hot(label, 10)
175 |         return image, label
176 |     hub_ds = ds.map(preprocess_hub).shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
177 |     # The earlier test_ds yields 32x32 images; build a resized 96x96 pipeline for validation.
178 |     test_hub_ds = tfds.load('cifar10', split='test', as_supervised=True).map(preprocess_hub).batch(32).prefetch(tf.data.AUTOTUNE)
179 |     combined_model = tf.keras.Sequential([
180 |         hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/5",
181 |                        input_shape=(96, 96, 3), trainable=False),
182 |         tf.keras.layers.Dense(10, activation='softmax')
183 |     ])
184 |     combined_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
185 |     print("\nCombined Workflow Model Summary:")
186 |     combined_model.summary()
187 |     combined_history = combined_model.fit(hub_ds, epochs=3, validation_data=test_hub_ds, verbose=1)
188 |     print("Combined Workflow Test Accuracy:", round(combined_history.history['val_accuracy'][-1], 4))
189 | else:
190 |     print("\nCombined Workflow Skipped: Requires tensorflow-datasets and tensorflow-hub")
191 | 
192 | # %% [10. Visualizing Training Progress]
193 | # Plot validation accuracy for Keras and Hub models.
194 | plt.figure()
195 | plt.plot(keras_history.history['val_accuracy'], label='Keras CNN')
196 | plt.plot(hub_history.history['val_accuracy'], label='TensorFlow Hub')
197 | if hub and tfds:
198 |     plt.plot(combined_history.history['val_accuracy'], label='Combined Workflow')
199 | plt.xlabel('Epoch')
200 | plt.ylabel('Validation Accuracy')
201 | plt.title('Specialized Libraries Comparison')
202 | plt.legend()
203 | plt.savefig('specialized_libraries_comparison.png')
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/02 Intermediate TensorFlow Concepts/03 Optimization/optimization.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | from tensorflow.keras.datasets import cifar10
5 | from tensorflow.keras import mixed_precision  # `import tensorflow.keras.x as y` breaks on some TF releases
6 | import os
7 | 
8 | # %% [1. Introduction to Optimization]
9 | # Optimization in TensorFlow involves tuning hyperparameters, applying regularization,
10 | # using mixed precision training, and model quantization for performance and efficiency.
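# ---------------------------------------------------------------------------
# (Added note, a hedged sketch; assumes the separate `keras_tuner` package.)
# Section 4 below runs a manual grid search; KerasTuner automates the same idea:
#
#     import keras_tuner as kt
#     def build_model(hp):
#         model = create_cnn_model()  # defined in section 3 below
#         lr = hp.Choice('learning_rate', [1e-3, 1e-4])
#         model.compile(optimizer=tf.keras.optimizers.Adam(lr),
#                       loss='categorical_crossentropy', metrics=['accuracy'])
#         return model
#     tuner = kt.RandomSearch(build_model, objective='val_accuracy', max_trials=4)
#     tuner.search(train_ds, epochs=5, validation_data=test_ds)
# ---------------------------------------------------------------------------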
11 | 12 | print("TensorFlow version:", tf.__version__) 13 | 14 | # %% [2. Preparing the Dataset] 15 | # Load and preprocess CIFAR-10 dataset. 16 | (x_train, y_train), (x_test, y_test) = cifar10.load_data() 17 | x_train = x_train.astype('float32') / 255.0 18 | x_test = x_test.astype('float32') / 255.0 19 | y_train = tf.keras.utils.to_categorical(y_train, 10) 20 | y_test = tf.keras.utils.to_categorical(y_test, 10) 21 | print("\nCIFAR-10 Dataset:") 22 | print("Train Shape:", x_train.shape, "Test Shape:", x_test.shape) 23 | 24 | # Create tf.data.Dataset pipelines 25 | train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE) 26 | test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32).prefetch(tf.data.AUTOTUNE) 27 | print("Dataset Pipelines Created") 28 | 29 | # %% [3. Base Model Definition] 30 | # Define a CNN model for CIFAR-10 classification. 31 | def create_cnn_model(dropout_rate=0.0, l2_lambda=0.0): 32 | model = tf.keras.Sequential([ 33 | tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3), 34 | kernel_regularizer=tf.keras.regularizers.l2(l2_lambda)), 35 | tf.keras.layers.MaxPooling2D((2, 2)), 36 | tf.keras.layers.Conv2D(64, (3, 3), activation='relu', 37 | kernel_regularizer=tf.keras.regularizers.l2(l2_lambda)), 38 | tf.keras.layers.MaxPooling2D((2, 2)), 39 | tf.keras.layers.Flatten(), 40 | tf.keras.layers.Dense(128, activation='relu', 41 | kernel_regularizer=tf.keras.regularizers.l2(l2_lambda)), 42 | tf.keras.layers.Dropout(dropout_rate), 43 | tf.keras.layers.Dense(10, activation='softmax') 44 | ]) 45 | return model 46 | 47 | # %% [4. Hyperparameter Tuning: Learning Rate and Batch Size] 48 | # Manually tune learning rate and batch size. 49 | learning_rates = [0.001, 0.0001] 50 | batch_sizes = [32, 64] 51 | tuning_results = [] 52 | 53 | for lr in learning_rates: 54 | for bs in batch_sizes: 55 | print(f"\nTuning: Learning Rate = {lr}, Batch Size = {bs}") 56 | train_ds_tune = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1000).batch(bs).prefetch(tf.data.AUTOTUNE) 57 | model = create_cnn_model() 58 | model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr), 59 | loss='categorical_crossentropy', metrics=['accuracy']) 60 | history = model.fit(train_ds_tune, epochs=5, validation_data=test_ds, verbose=0) 61 | val_acc = history.history['val_accuracy'][-1] 62 | tuning_results.append({'lr': lr, 'bs': bs, 'val_acc': val_acc}) 63 | print(f"Validation Accuracy: {val_acc:.4f}") 64 | 65 | # Select best hyperparameters 66 | best_result = max(tuning_results, key=lambda x: x['val_acc']) 67 | print("\nBest Hyperparameters:", best_result) 68 | 69 | # %% [5. Regularization: Dropout and L2] 70 | # Train model with dropout and L2 regularization using best hyperparameters. 71 | dropout_rate = 0.3 72 | l2_lambda = 0.01 73 | model_reg = create_cnn_model(dropout_rate=dropout_rate, l2_lambda=l2_lambda) 74 | model_reg.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=best_result['lr']), 75 | loss='categorical_crossentropy', metrics=['accuracy']) 76 | print("\nRegularized Model Summary:") 77 | model_reg.summary() 78 | reg_history = model_reg.fit(train_ds, epochs=5, validation_data=test_ds, verbose=1) 79 | print("Regularized Model Test Accuracy:", reg_history.history['val_accuracy'][-1].round(4)) 80 | 81 | # %% [6. Mixed Precision Training] 82 | # Enable mixed precision training for faster computation. 
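# ---------------------------------------------------------------------------
# (Added note, hedged.) float16 has a narrow exponent range, so small gradients
# can underflow. model.fit applies dynamic loss scaling automatically under
# mixed_float16; a custom training loop would wrap the optimizer itself:
#
#     opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam())
#     with tf.GradientTape() as tape:
#         loss = loss_fn(y, model(x, training=True))
#         scaled_loss = opt.get_scaled_loss(loss)
#     scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
#     grads = opt.get_unscaled_gradients(scaled_grads)
#     opt.apply_gradients(zip(grads, model.trainable_variables))
# ---------------------------------------------------------------------------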
83 | policy = mixed_precision.Policy('mixed_float16')
84 | mixed_precision.set_global_policy(policy)
85 | print("\nMixed Precision Policy:", policy.name)
86 | 
87 | # Train model with mixed precision
88 | model_mp = create_cnn_model(dropout_rate=dropout_rate, l2_lambda=l2_lambda)
89 | # The last layer computes in float16 under this policy; append a float32 cast so the
90 | # softmax output and loss stay numerically stable (a tensor's dtype cannot be reassigned).
91 | model_mp.add(tf.keras.layers.Activation('linear', dtype='float32'))
92 | model_mp.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=best_result['lr']),
93 |                  loss='categorical_crossentropy', metrics=['accuracy'])
94 | print("\nMixed Precision Model Summary:")
95 | model_mp.summary()
96 | mp_history = model_mp.fit(train_ds, epochs=5, validation_data=test_ds, verbose=1)
97 | print("Mixed Precision Test Accuracy:", round(mp_history.history['val_accuracy'][-1], 4))
98 | 
99 | # Reset policy to default
100 | mixed_precision.set_global_policy('float32')
101 | 
102 | # %% [7. Model Quantization]
103 | # Quantize the regularized model for deployment using TensorFlow Lite.
104 | converter = tf.lite.TFLiteConverter.from_keras_model(model_reg)
105 | converter.optimizations = [tf.lite.Optimize.DEFAULT]
106 | tflite_model = converter.convert()
107 | 
108 | # Save quantized model
109 | quantized_model_path = 'quantized_model.tflite'
110 | with open(quantized_model_path, 'wb') as f:
111 |     f.write(tflite_model)
112 | print("\nQuantized Model Saved to:", quantized_model_path)
113 | 
114 | # Evaluate quantized model (simplified evaluation)
115 | interpreter = tf.lite.Interpreter(model_path=quantized_model_path)
116 | interpreter.allocate_tensors()
117 | input_index = interpreter.get_input_details()[0]['index']
118 | output_index = interpreter.get_output_details()[0]['index']
119 | correct = 0
120 | total = 0
121 | for x, y in test_ds.unbatch().take(100):  # Evaluate on subset
122 |     x = x.numpy()[np.newaxis, ...]
123 |     interpreter.set_tensor(input_index, x)
124 |     interpreter.invoke()
125 |     pred = interpreter.get_tensor(output_index)
126 |     if np.argmax(pred) == np.argmax(y):
127 |         correct += 1
128 |     total += 1
129 | quantized_acc = correct / total
130 | print("Quantized Model Accuracy (Subset):", round(quantized_acc, 4))
131 | 
132 | # %% [8. Visualizing Training Progress]
133 | # Plot validation accuracy for regularized and mixed precision models.
134 | plt.figure()
135 | plt.plot(reg_history.history['val_accuracy'], label='Regularized')
136 | plt.plot(mp_history.history['val_accuracy'], label='Mixed Precision')
137 | plt.xlabel('Epoch')
138 | plt.ylabel('Validation Accuracy')
139 | plt.title('Optimization Comparison')
140 | plt.legend()
141 | plt.savefig('optimization_comparison.png')
142 | 
143 | # Plot hyperparameter tuning results (one scatter call so the colormap spans all points)
144 | plt.figure()
145 | lrs = [r['lr'] for r in tuning_results]
146 | bss = [r['bs'] for r in tuning_results]
147 | accs = [r['val_acc'] for r in tuning_results]
148 | plt.scatter(lrs, bss, s=100, c=accs, cmap='viridis')
149 | plt.xscale('log')
150 | plt.xlabel('Learning Rate')
151 | plt.ylabel('Batch Size')
152 | plt.title('Hyperparameter Tuning Results')
153 | plt.colorbar(label='Validation Accuracy')
154 | plt.savefig('hyperparameter_tuning.png')
155 | 
156 | # %% [9. Interview Scenario: Optimization Strategy]
157 | # Discuss optimization strategies for a high-accuracy, efficient model.
158 | print("\nInterview Scenario: Optimization Strategy")
159 | print("1. Hyperparameter Tuning: Test learning rates (e.g., 1e-3, 1e-4) and batch sizes (e.g., 32, 64).")
160 | print("2. Regularization: Use dropout (0.3) and L2 (0.01) to prevent overfitting.")
161 | print("3. Mixed Precision: Enable mixed_float16 for faster training on GPUs.")
162 | print("4. Quantization: Apply post-training quantization for efficient deployment.")
163 | print("Tools: KerasTuner for automated hyperparameter tuning.")
164 | 
165 | # %% [10. Practical Application: Combined Optimization]
166 | # Train a model with all optimizations combined.
167 | # Set the mixed precision policy BEFORE building the model; layers built earlier stay float32.
168 | policy = mixed_precision.Policy('mixed_float16')
169 | mixed_precision.set_global_policy(policy)
170 | model_combined = create_cnn_model(dropout_rate=dropout_rate, l2_lambda=l2_lambda)
171 | model_combined.add(tf.keras.layers.Activation('linear', dtype='float32'))  # keep the output float32
172 | model_combined.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=best_result['lr']),
173 |                        loss='categorical_crossentropy', metrics=['accuracy'])
174 | print("\nCombined Optimization Model Summary:")
175 | model_combined.summary()
176 | combined_history = model_combined.fit(train_ds, epochs=5, validation_data=test_ds, verbose=1)
177 | print("Combined Optimization Test Accuracy:", round(combined_history.history['val_accuracy'][-1], 4))
178 | 
179 | # Save and quantize combined model
180 | converter = tf.lite.TFLiteConverter.from_keras_model(model_combined)
181 | converter.optimizations = [tf.lite.Optimize.DEFAULT]
182 | tflite_combined_model = converter.convert()
183 | with open('quantized_combined_model.tflite', 'wb') as f:
184 |     f.write(tflite_combined_model)
185 | print("Quantized Combined Model Saved to: quantized_combined_model.tflite")
186 | 
187 | # Reset policy
188 | mixed_precision.set_global_policy('float32')
--------------------------------------------------------------------------------
/Tensorflow Fundamentals/02 Intermediate TensorFlow Concepts/01 Model Architectures/model_architectures.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | from sklearn.preprocessing import StandardScaler  # note: unused in this file
5 | from tensorflow.keras.datasets import mnist, cifar10
6 | 
7 | # %% [1. Introduction to Model Architectures]
8 | # TensorFlow supports various neural network architectures for different tasks.
9 | # This file covers Feedforward Neural Networks (FNNs), Convolutional Neural Networks (CNNs),
10 | # Recurrent Neural Networks (RNNs, LSTMs, GRUs), and Transfer Learning.
11 | 
12 | print("TensorFlow version:", tf.__version__)
13 | 
14 | # %% [2. Feedforward Neural Networks (FNNs)]
15 | # FNNs are fully connected networks for tasks like regression or classification.
16 | # Example: FNN for MNIST digit classification.
17 | (x_train_mnist, y_train_mnist), (x_test_mnist, y_test_mnist) = mnist.load_data()
18 | x_train_mnist = x_train_mnist.astype('float32') / 255.0
19 | x_test_mnist = x_test_mnist.astype('float32') / 255.0
20 | x_train_mnist = x_train_mnist.reshape(-1, 28 * 28)  # Flatten: (28, 28) -> (784,)
21 | x_test_mnist = x_test_mnist.reshape(-1, 28 * 28)
22 | 
23 | fnn_model = tf.keras.Sequential([
24 |     tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
25 |     tf.keras.layers.Dense(64, activation='relu'),
26 |     tf.keras.layers.Dense(10, activation='softmax')
27 | ])
28 | fnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
29 | print("\nFNN Model Summary:")
30 | fnn_model.summary()
31 | fnn_history = fnn_model.fit(x_train_mnist, y_train_mnist, epochs=5, batch_size=32,
32 |                             validation_data=(x_test_mnist, y_test_mnist), verbose=1)
33 | print("FNN Test Accuracy:", round(fnn_history.history['val_accuracy'][-1], 4))
34 | 
35 | # %% [3. 
Convolutional Neural Networks (CNNs)] 36 | # CNNs are designed for image data, using convolutional and pooling layers. 37 | # Example: CNN for MNIST digit classification. 38 | x_train_mnist_2d = x_train_mnist.reshape(-1, 28, 28, 1) # Reshape: (784,) -> (28, 28, 1) 39 | x_test_mnist_2d = x_test_mnist.reshape(-1, 28, 28, 1) 40 | 41 | cnn_model = tf.keras.Sequential([ 42 | tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)), 43 | tf.keras.layers.MaxPooling2D((2, 2)), 44 | tf.keras.layers.Conv2D(64, (3, 3), activation='relu'), 45 | tf.keras.layers.MaxPooling2D((2, 2)), 46 | tf.keras.layers.Flatten(), 47 | tf.keras.layers.Dense(128, activation='relu'), 48 | tf.keras.layers.Dense(10, activation='softmax') 49 | ]) 50 | cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) 51 | print("\nCNN Model Summary:") 52 | cnn_model.summary() 53 | cnn_history = cnn_model.fit(x_train_mnist_2d, y_train_mnist, epochs=5, batch_size=32, 54 | validation_data=(x_test_mnist_2d, y_test_mnist), verbose=1) 55 | print("CNN Test Accuracy:", cnn_history.history['val_accuracy'][-1].round(4)) 56 | 57 | # %% [4. Recurrent Neural Networks (RNNs, LSTMs, GRUs)] 58 | # RNNs are designed for sequential data, with LSTMs and GRUs handling long-term dependencies. 59 | # Example: Synthetic sequence classification (predict next value in a noisy sine wave). 60 | np.random.seed(42) 61 | t = np.linspace(0, 100, 1000) 62 | x_seq = np.sin(0.1 * t) + np.random.normal(0, 0.1, 1000) 63 | y_seq = (x_seq[1:] > x_seq[:-1]).astype(np.int32) # 1 if next value increases, 0 otherwise 64 | x_seq = x_seq[:-1] 65 | sequence_length = 10 66 | x_seq_data = np.array([x_seq[i:i+sequence_length] for i in range(len(x_seq) - sequence_length)]) 67 | y_seq_data = y_seq[sequence_length:] 68 | 69 | x_seq_train, x_seq_test = x_seq_data[:800], x_seq_data[800:] 70 | y_seq_train, y_seq_test = y_seq_data[:800], y_seq_data[800:] 71 | x_seq_train = x_seq_train[..., np.newaxis] # Shape: (800, 10, 1) 72 | x_seq_test = x_seq_test[..., np.newaxis] # Shape: (189, 10, 1) 73 | 74 | # LSTM Model 75 | lstm_model = tf.keras.Sequential([ 76 | tf.keras.layers.LSTM(32, return_sequences=False, input_shape=(sequence_length, 1)), 77 | tf.keras.layers.Dense(16, activation='relu'), 78 | tf.keras.layers.Dense(1, activation='sigmoid') 79 | ]) 80 | lstm_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 81 | print("\nLSTM Model Summary:") 82 | lstm_model.summary() 83 | lstm_history = lstm_model.fit(x_seq_train, y_seq_train, epochs=5, batch_size=16, 84 | validation_data=(x_seq_test, y_seq_test), verbose=1) 85 | print("LSTM Test Accuracy:", lstm_history.history['val_accuracy'][-1].round(4)) 86 | 87 | # GRU Model 88 | gru_model = tf.keras.Sequential([ 89 | tf.keras.layers.GRU(32, return_sequences=False, input_shape=(sequence_length, 1)), 90 | tf.keras.layers.Dense(16, activation='relu'), 91 | tf.keras.layers.Dense(1, activation='sigmoid') 92 | ]) 93 | gru_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 94 | print("\nGRU Model Summary:") 95 | gru_model.summary() 96 | gru_history = gru_model.fit(x_seq_train, y_seq_train, epochs=5, batch_size=16, 97 | validation_data=(x_seq_test, y_seq_test), verbose=1) 98 | print("GRU Test Accuracy:", gru_history.history['val_accuracy'][-1].round(4)) 99 | 100 | # %% [5. Transfer Learning with tf.keras.applications] 101 | # Transfer learning uses pre-trained models for new tasks. 102 | # Example: Fine-tune MobileNetV2 on CIFAR-10. 
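# ---------------------------------------------------------------------------
# (Added note, hedged.) The ImageNet MobileNetV2 weights expect inputs scaled
# to [-1, 1]; the example below feeds [0, 1] images, which still trains but is
# not the canonical preprocessing. The standard transform would be:
#
#     x = tf.keras.applications.mobilenet_v2.preprocess_input(x * 255.0)
# ---------------------------------------------------------------------------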
103 | (x_train_cifar, y_train_cifar), (x_test_cifar, y_test_cifar) = cifar10.load_data() 104 | x_train_cifar = x_train_cifar.astype('float32') / 255.0 105 | x_test_cifar = x_test_cifar.astype('float32') / 255.0 106 | y_train_cifar = tf.keras.utils.to_categorical(y_train_cifar, 10) 107 | y_test_cifar = tf.keras.utils.to_categorical(y_test_cifar, 10) 108 | 109 | # Preprocess input for MobileNetV2 (resize to 96x96) 110 | x_train_cifar_resized = tf.image.resize(x_train_cifar, [96, 96]) 111 | x_test_cifar_resized = tf.image.resize(x_test_cifar, [96, 96]) 112 | 113 | base_model = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3), include_top=False, weights='imagenet') 114 | base_model.trainable = False # Freeze base model 115 | transfer_model = tf.keras.Sequential([ 116 | base_model, 117 | tf.keras.layers.GlobalAveragePooling2D(), 118 | tf.keras.layers.Dense(128, activation='relu'), 119 | tf.keras.layers.Dense(10, activation='softmax') 120 | ]) 121 | transfer_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) 122 | print("\nTransfer Learning Model Summary:") 123 | transfer_model.summary() 124 | transfer_history = transfer_model.fit(x_train_cifar_resized, y_train_cifar, epochs=5, batch_size=32, 125 | validation_data=(x_test_cifar_resized, y_test_cifar), verbose=1) 126 | print("Transfer Learning Test Accuracy:", transfer_history.history['val_accuracy'][-1].round(4)) 127 | 128 | # %% [6. Visualizing Training Progress] 129 | # Plot validation accuracy for all models. 130 | plt.figure() 131 | plt.plot(fnn_history.history['val_accuracy'], label='FNN') 132 | plt.plot(cnn_history.history['val_accuracy'], label='CNN') 133 | plt.plot(lstm_history.history['val_accuracy'], label='LSTM') 134 | plt.plot(gru_history.history['val_accuracy'], label='GRU') 135 | plt.plot(transfer_history.history['val_accuracy'], label='Transfer (MobileNetV2)') 136 | plt.xlabel('Epoch') 137 | plt.ylabel('Validation Accuracy') 138 | plt.title('Model Architecture Comparison') 139 | plt.legend() 140 | plt.savefig('model_comparison.png') 141 | 142 | # %% [7. Practical Application: Fine-Tuning Transfer Learning] 143 | # Fine-tune MobileNetV2 by unfreezing some layers. 144 | base_model.trainable = True 145 | for layer in base_model.layers[:100]: # Freeze first 100 layers 146 | layer.trainable = False 147 | transfer_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5), 148 | loss='categorical_crossentropy', metrics=['accuracy']) 149 | fine_tune_history = transfer_model.fit(x_train_cifar_resized, y_train_cifar, epochs=3, batch_size=32, 150 | validation_data=(x_test_cifar_resized, y_test_cifar), verbose=1) 151 | print("\nFine-Tuned Transfer Learning Test Accuracy:", fine_tune_history.history['val_accuracy'][-1].round(4)) 152 | 153 | # %% [8. Interview Scenario: Model Selection] 154 | # Discuss choosing architectures for specific tasks. 155 | print("\nInterview Scenario: Model Selection") 156 | print("FNN: Suitable for tabular data, simple classification/regression.") 157 | print("CNN: Ideal for image data, leveraging spatial hierarchies.") 158 | print("LSTM/GRU: Best for sequential/time-series data, handling long-term dependencies.") 159 | print("Transfer Learning: Efficient for image tasks with limited data, using pre-trained models.") 160 | 161 | # %% [9. Visualizing Predictions] 162 | # Visualize CNN predictions on MNIST test set. 
163 | predictions = cnn_model.predict(x_test_mnist_2d[:5]) 164 | plt.figure(figsize=(15, 3)) 165 | for i in range(5): 166 | plt.subplot(1, 5, i + 1) 167 | plt.imshow(x_test_mnist_2d[i, :, :, 0], cmap='gray') 168 | plt.title(f"Pred: {np.argmax(predictions[i])}\nTrue: {y_test_mnist[i]}") 169 | plt.axis('off') 170 | plt.savefig('cnn_predictions.png') 171 | 172 | # %% [10. Comparing Model Parameters] 173 | # Compare parameter counts for all models. 174 | print("\nModel Parameter Counts:") 175 | print("FNN Parameters:", fnn_model.count_params()) 176 | print("CNN Parameters:", cnn_model.count_params()) 177 | print("LSTM Parameters:", lstm_model.count_params()) 178 | print("GRU Parameters:", gru_model.count_params()) 179 | print("Transfer Learning Parameters:", transfer_model.count_params()) -------------------------------------------------------------------------------- /Tensorflow Fundamentals/02 Intermediate TensorFlow Concepts/02 Customization/customization.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | import numpy as np 3 | import matplotlib.pyplot as plt 4 | from tensorflow.keras.datasets import mnist 5 | 6 | # %% [1. Introduction to Customization] 7 | # TensorFlow allows customization via custom layers, loss functions, and model APIs. 8 | # This file covers custom layers/losses, Functional/Subclassing APIs, and gradient debugging. 9 | 10 | print("TensorFlow version:", tf.__version__) 11 | 12 | # %% [2. Preparing the Dataset] 13 | # Load and preprocess MNIST dataset. 14 | (x_train, y_train), (x_test, y_test) = mnist.load_data() 15 | x_train = x_train.astype('float32') / 255.0 16 | x_test = x_test.astype('float32') / 255.0 17 | x_train = x_train[..., np.newaxis] # Shape: (60000, 28, 28, 1) 18 | x_test = x_test[..., np.newaxis] # Shape: (10000, 28, 28, 1) 19 | print("\nMNIST Dataset:") 20 | print("Train Shape:", x_train.shape, "Test Shape:", x_test.shape) 21 | 22 | # Create tf.data.Dataset pipelines 23 | train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE) 24 | test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32).prefetch(tf.data.AUTOTUNE) 25 | print("Dataset Pipelines Created") 26 | 27 | # %% [3. Custom Layers] 28 | # Define a custom layer that applies a learnable scaling factor to a Dense layer. 
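# ---------------------------------------------------------------------------
# (Added note, a hedged sketch.) For a custom layer to round-trip through
# model.save()/tf.keras.models.load_model, it should also expose its constructor
# arguments via get_config, roughly:
#
#     def get_config(self):
#         config = super().get_config()
#         config.update({'units': self.units,
#                        'activation': tf.keras.activations.serialize(self.activation)})
#         return config
# ---------------------------------------------------------------------------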
29 | class ScaledDense(tf.keras.layers.Layer): 30 | def __init__(self, units, activation=None): 31 | super(ScaledDense, self).__init__() 32 | self.units = units 33 | self.activation = tf.keras.activations.get(activation) 34 | 35 | def build(self, input_shape): 36 | self.dense = tf.keras.layers.Dense(self.units) 37 | self.scale = self.add_weight('scale', shape=(), initializer='ones', trainable=True) 38 | 39 | def call(self, inputs): 40 | x = self.dense(inputs) 41 | x = x * self.scale 42 | if self.activation is not None: 43 | x = self.activation(x) 44 | return x 45 | 46 | # Test custom layer in a simple model 47 | custom_layer_model = tf.keras.Sequential([ 48 | tf.keras.layers.Flatten(input_shape=(28, 28, 1)), 49 | ScaledDense(64, activation='relu'), 50 | tf.keras.layers.Dense(10, activation='softmax') 51 | ]) 52 | custom_layer_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) 53 | print("\nCustom Layer Model Summary:") 54 | custom_layer_model.summary() 55 | custom_layer_history = custom_layer_model.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1) 56 | print("Custom Layer Test Accuracy:", custom_layer_history.history['val_accuracy'][-1].round(4)) 57 | 58 | # %% [4. Custom Loss Functions] 59 | # Define a custom loss function: weighted categorical crossentropy. 60 | class WeightedCategoricalCrossentropy(tf.keras.losses.Loss): 61 | def __init__(self, class_weights): 62 | super().__init__() 63 | self.class_weights = tf.constant(class_weights, dtype=tf.float32) 64 | 65 | def call(self, y_true, y_pred): 66 | y_true = tf.cast(y_true, tf.int32) 67 | y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7) 68 | cross_entropy = -tf.reduce_sum( 69 | tf.one_hot(y_true, depth=tf.shape(y_pred)[-1]) * tf.math.log(y_pred), axis=-1) 70 | weights = tf.gather(self.class_weights, y_true) 71 | return tf.reduce_mean(cross_entropy * weights) 72 | 73 | # Test custom loss (emphasize class 0) 74 | class_weights = [2.0] + [1.0] * 9 # Weight class 0 higher 75 | custom_loss = WeightedCategoricalCrossentropy(class_weights) 76 | custom_loss_model = tf.keras.Sequential([ 77 | tf.keras.layers.Flatten(input_shape=(28, 28, 1)), 78 | tf.keras.layers.Dense(64, activation='relu'), 79 | tf.keras.layers.Dense(10, activation='softmax') 80 | ]) 81 | custom_loss_model.compile(optimizer='adam', loss=custom_loss, metrics=['accuracy']) 82 | print("\nCustom Loss Model Summary:") 83 | custom_loss_model.summary() 84 | custom_loss_history = custom_loss_model.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1) 85 | print("Custom Loss Test Accuracy:", custom_loss_history.history['val_accuracy'][-1].round(4)) 86 | 87 | # %% [5. Functional API] 88 | # Build a model using the Functional API for more complex architectures. 
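# ---------------------------------------------------------------------------
# (Added note, illustrative only; not used below.) Unlike Sequential, the
# Functional API can also express multi-input/multi-output graphs, e.g.:
#
#     a = tf.keras.Input(shape=(16,))
#     b = tf.keras.Input(shape=(16,))
#     merged = tf.keras.layers.concatenate([a, b])
#     out = tf.keras.layers.Dense(1)(merged)
#     multi_model = tf.keras.Model(inputs=[a, b], outputs=out)
# ---------------------------------------------------------------------------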
89 | inputs = tf.keras.Input(shape=(28, 28, 1))
90 | x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(inputs)
91 | x = tf.keras.layers.MaxPooling2D((2, 2))(x)
92 | x = tf.keras.layers.Flatten()(x)
93 | x = tf.keras.layers.Dense(64, activation='relu')(x)
94 | outputs = tf.keras.layers.Dense(10, activation='softmax')(x)
95 | functional_model = tf.keras.Model(inputs, outputs)
96 | functional_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
97 | print("\nFunctional API Model Summary:")
98 | functional_model.summary()
99 | functional_history = functional_model.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1)
100 | print("Functional API Test Accuracy:", round(functional_history.history['val_accuracy'][-1], 4))
101 | 
102 | # %% [6. Subclassing API]
103 | # Define a custom model using the Subclassing API for maximum flexibility.
104 | class CustomCNN(tf.keras.Model):
105 |     def __init__(self):
106 |         super(CustomCNN, self).__init__()
107 |         self.conv1 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')
108 |         self.pool1 = tf.keras.layers.MaxPooling2D((2, 2))
109 |         self.flatten = tf.keras.layers.Flatten()
110 |         self.dense1 = tf.keras.layers.Dense(64, activation='relu')
111 |         self.dense2 = tf.keras.layers.Dense(10, activation='softmax')
112 | 
113 |     def call(self, inputs, training=False):
114 |         x = self.conv1(inputs)
115 |         x = self.pool1(x)
116 |         x = self.flatten(x)
117 |         x = self.dense1(x)
118 |         return self.dense2(x)
119 | 
120 | subclass_model = CustomCNN()
121 | subclass_model.build(input_shape=(None, 28, 28, 1))  # build first; summary() fails on an unbuilt subclassed model
122 | subclass_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
123 | print("\nSubclassing API Model Summary:")
124 | subclass_model.summary()
125 | subclass_history = subclass_model.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1)
126 | print("Subclassing API Test Accuracy:", round(subclass_history.history['val_accuracy'][-1], 4))
127 | 
128 | # %% [7. Debugging Gradient Issues]
129 | # Demonstrate common gradient issues. Case 1: A non-differentiable operation (tf.cast to int).
130 | x = tf.Variable(1.0)
131 | with tf.GradientTape() as tape:
132 |     y = tf.cast(x, tf.int32)  # Non-differentiable
133 |     loss = tf.square(y)
134 | grad = tape.gradient(loss, x)
135 | print("\nGradient Debugging - Non-Differentiable Operation:")
136 | print("Operation: y = cast(x to int), loss = y^2")
137 | print("Gradient:", grad)  # Expected: None
138 | print("Fix: Avoid non-differentiable ops (e.g., use float operations).")
139 | 
140 | # Case 2: Disconnected graph (no dependency).
141 | x = tf.Variable(1.0)
142 | with tf.GradientTape() as tape:
143 |     y = tf.stop_gradient(x)  # Blocks gradient flow
144 |     loss = tf.square(y)
145 | grad = tape.gradient(loss, x)
146 | print("\nGradient Debugging - Disconnected Graph:")
147 | print("Operation: y = stop_gradient(x), loss = y^2")
148 | print("Gradient:", grad)  # Expected: None
149 | print("Fix: Ensure variables are part of the computational graph.")
150 | 
151 | # Case 3: Monitor gradient norms.
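# (Added note.) tf.linalg.global_norm(gradients) returns the same combined norm
# that the loop below computes manually with sqrt(sum(norm(g)**2)).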
152 | optimizer = tf.keras.optimizers.Adam() 153 | gradient_norms = [] 154 | for epoch in range(2): 155 | with tf.GradientTape() as tape: 156 | logits = subclass_model(x_train[:32], training=True) 157 | loss = tf.keras.losses.sparse_categorical_crossentropy(y_train[:32], logits) 158 | gradients = tape.gradient(loss, subclass_model.trainable_variables) 159 | optimizer.apply_gradients(zip(gradients, subclass_model.trainable_variables)) 160 | grad_norm = tf.sqrt(sum(tf.norm(g) ** 2 for g in gradients if g is not None)) 161 | gradient_norms.append(grad_norm.numpy()) 162 | print("\nGradient Norms:", gradient_norms) 163 | 164 | # %% [8. Visualizing Training Progress] 165 | # Plot validation accuracy for all models. 166 | plt.figure() 167 | plt.plot(custom_layer_history.history['val_accuracy'], label='Custom Layer') 168 | plt.plot(custom_loss_history.history['val_accuracy'], label='Custom Loss') 169 | plt.plot(functional_history.history['val_accuracy'], label='Functional API') 170 | plt.plot(subclass_history.history['val_accuracy'], label='Subclassing API') 171 | plt.xlabel('Epoch') 172 | plt.ylabel('Validation Accuracy') 173 | plt.title('Customization Model Comparison') 174 | plt.legend() 175 | plt.savefig('customization_comparison.png') 176 | 177 | # Plot gradient norms 178 | plt.figure() 179 | plt.plot(gradient_norms, label='Gradient Norm') 180 | plt.xlabel('Epoch') 181 | plt.ylabel('Gradient Norm') 182 | plt.title('Gradient Norm During Training') 183 | plt.legend() 184 | plt.savefig('gradient_norms.png') 185 | 186 | # %% [9. Interview Scenario: Custom Layer Implementation] 187 | # Implement a custom layer with a learnable polynomial transformation. 188 | class PolynomialLayer(tf.keras.layers.Layer): 189 | def __init__(self, degree): 190 | super(PolynomialLayer, self).__init__() 191 | self.degree = degree 192 | 193 | def build(self, input_shape): 194 | self.coefficients = self.add_weight('coefficients', shape=(self.degree + 1,), 195 | initializer='random_normal', trainable=True) 196 | 197 | def call(self, inputs): 198 | x = inputs 199 | result = 0 200 | for i in range(self.degree + 1): 201 | result += self.coefficients[i] * tf.pow(x, float(i)) 202 | return result 203 | 204 | # Test polynomial layer 205 | poly_model = tf.keras.Sequential([ 206 | tf.keras.layers.Flatten(input_shape=(28, 28, 1)), 207 | PolynomialLayer(degree=2), 208 | tf.keras.layers.Dense(10, activation='softmax') 209 | ]) 210 | poly_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) 211 | print("\nInterview Scenario: Polynomial Layer Model Summary:") 212 | poly_model.summary() 213 | poly_history = poly_model.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1) 214 | print("Polynomial Layer Test Accuracy:", poly_history.history['val_accuracy'][-1].round(4)) 215 | 216 | # %% [10. Practical Application: Combining Customizations] 217 | # Combine custom layer, loss, and Subclassing API for MNIST classification. 
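# ---------------------------------------------------------------------------
# (Added note, hedged; the save path here is hypothetical.) A model built from
# custom pieces must be reloaded with those classes supplied explicitly:
#
#     restored = tf.keras.models.load_model(
#         'combined_model',
#         custom_objects={'ScaledDense': ScaledDense,
#                         'WeightedCategoricalCrossentropy': WeightedCategoricalCrossentropy})
# ---------------------------------------------------------------------------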
218 | class CustomCombinedModel(tf.keras.Model):
219 |     def __init__(self):
220 |         super(CustomCombinedModel, self).__init__()
221 |         self.conv1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')
222 |         self.flatten = tf.keras.layers.Flatten()
223 |         self.scaled_dense = ScaledDense(32, activation='relu')
224 |         self.dense = tf.keras.layers.Dense(10, activation='softmax')
225 | 
226 |     def call(self, inputs):
227 |         x = self.conv1(inputs)
228 |         x = self.flatten(x)
229 |         x = self.scaled_dense(x)
230 |         return self.dense(x)
231 | 
232 | combined_model = CustomCombinedModel()
233 | combined_model.build(input_shape=(None, 28, 28, 1))  # build first; summary() fails on an unbuilt subclassed model
234 | combined_model.compile(optimizer='adam', loss=custom_loss, metrics=['accuracy'])
235 | print("\nCombined Custom Model Summary:")
236 | combined_model.summary()
237 | combined_history = combined_model.fit(train_ds, epochs=3, validation_data=test_ds, verbose=1)
238 | print("Combined Custom Model Test Accuracy:", round(combined_history.history['val_accuracy'][-1], 4))
--------------------------------------------------------------------------------
/Tensorflow Interview Questions/README.md:
--------------------------------------------------------------------------------
1 | # TensorFlow Interview Questions for AI/ML Roles
2 | 
3 | This README provides 170 TensorFlow interview questions tailored for AI/ML roles, focusing on deep learning with TensorFlow in Python. The questions cover **core TensorFlow concepts** (e.g., tensors, neural networks, training, optimization, deployment) and their applications in AI/ML tasks like image classification, natural language processing, and generative modeling. Questions are categorized by topic and divided into **Basic**, **Intermediate**, and **Advanced** levels to support candidates preparing for roles requiring TensorFlow in deep learning workflows.
4 | 
5 | ## Tensor Operations
6 | 
7 | ### Basic
8 | 1. **What is TensorFlow, and why is it used in AI/ML?**
9 |    TensorFlow is an open-source deep learning framework for building, training, and deploying neural networks.
10 |    ```python
11 |    import tensorflow as tf
12 |    tensor = tf.constant([1, 2, 3])
13 |    ```
14 | 
15 | 2. **How do you create a TensorFlow tensor from a Python list?**
16 |    Converts lists to tensors for computation.
17 |    ```python
18 |    list_data = [1, 2, 3]
19 |    tensor = tf.constant(list_data)
20 |    ```
21 | 
22 | 3. **How do you create a tensor with zeros or ones in TensorFlow?**
23 |    Initializes constant-filled tensors (e.g., initial states or masks).
24 |    ```python
25 |    zeros = tf.zeros((2, 3))
26 |    ones = tf.ones((2, 3))
27 |    ```
28 | 
29 | 4. **What is the role of `tf.range` in TensorFlow?**
30 |    Creates tensors with a range of values.
31 |    ```python
32 |    tensor = tf.range(0, 10, delta=2)
33 |    ```
34 | 
35 | 5. **How do you create a tensor with random values in TensorFlow?**
36 |    Generates random data for testing.
37 |    ```python
38 |    random_tensor = tf.random.uniform((2, 3))
39 |    ```
40 | 
41 | 6. **How do you reshape a TensorFlow tensor?**
42 |    Changes tensor dimensions for model inputs.
43 |    ```python
44 |    tensor = tf.constant([1, 2, 3, 4, 5, 6])
45 |    reshaped = tf.reshape(tensor, (2, 3))
46 |    ```
47 | 
48 | #### Intermediate
49 | 7. **Write a function to create a 2D TensorFlow tensor with a given shape.**
50 |    Initializes tensors dynamically.
51 |    ```python
52 |    def create_2d_tensor(rows, cols, fill=0):
53 |        return tf.fill((rows, cols), fill)
54 |    ```
55 | 
56 | 8. **How do you create a tensor with evenly spaced values in TensorFlow?**
57 |    Uses `linspace` for uniform intervals.
58 |    ```python
59 |    tensor = tf.linspace(0.0, 10.0, 5)
60 |    ```
61 | 
62 | 9. **Write a function to initialize a tensor with random integers in TensorFlow.**
63 |    Generates integer tensors for simulations.
64 |    ```python
65 |    def random_int_tensor(shape, low, high):
66 |        return tf.random.uniform(shape, minval=low, maxval=high, dtype=tf.int32)
67 |    ```
68 | 
69 | 10. **How do you convert a NumPy array to a TensorFlow tensor?**
70 |     Bridges NumPy and TensorFlow for data integration.
71 |     ```python
72 |     import numpy as np
73 |     array = np.array([1, 2, 3])
74 |     tensor = tf.convert_to_tensor(array)
75 |     ```
76 | 
77 | 11. **Write a function to visualize a TensorFlow tensor as a heatmap.**
78 |     Displays tensor values graphically.
79 |     ```python
80 |     import matplotlib.pyplot as plt
81 |     def plot_tensor_heatmap(tensor):
82 |         plt.imshow(tensor.numpy(), cmap='viridis')
83 |         plt.colorbar()
84 |         plt.savefig('tensor_heatmap.png')
85 |     ```
86 | 
87 | 12. **How do you perform element-wise operations on TensorFlow tensors?**
88 |     Applies operations across elements.
89 |     ```python
90 |     tensor1 = tf.constant([1, 2, 3])
91 |     tensor2 = tf.constant([4, 5, 6])
92 |     result = tensor1 + tensor2
93 |     ```
94 | 
95 | #### Advanced
96 | 13. **Write a function to create a tensor with a custom pattern in TensorFlow.**
97 |     Generates structured tensors.
98 |     ```python
99 |     def custom_pattern_tensor(shape, pattern='checkerboard'):
100 |         tensor = tf.zeros(shape)
101 |         if pattern == 'checkerboard':
102 |             rows = tf.range(shape[0])[:, tf.newaxis]  # column of row indices
103 |             cols = tf.range(shape[1])[tf.newaxis, :]  # row of column indices
104 |             tensor = tf.cast((rows + cols) % 2, tf.float32)  # broadcast to a 0/1 checkerboard
105 |         return tensor
106 |     ```
107 | 
108 | 14. **How do you optimize tensor creation for large datasets in TensorFlow?**
109 |     Uses efficient initialization methods.
110 |     ```python
111 |     large_tensor = tf.zeros((10000, 10000), dtype=tf.float32)
112 |     ```
113 | 
114 | 15. **Write a function to create a block tensor in TensorFlow.**
115 |     Constructs a block-diagonal tensor from square sub-tensors.
116 |     ```python
117 |     def block_tensor(blocks):
118 |         return tf.linalg.LinearOperatorBlockDiag([tf.linalg.LinearOperatorFullMatrix(b) for b in blocks]).to_dense()
119 |     ```
120 | 
121 | 16. **How do you handle memory-efficient tensor creation in TensorFlow?**
122 |     Uses sparse tensors or low-precision dtypes.
123 |     ```python
124 |     sparse_tensor = tf.sparse.SparseTensor(indices=[[0, 1], [1, 0]], values=[1, 2], dense_shape=[1000, 1000])
125 |     ```
126 | 
127 | 17. **Write a function to pad a TensorFlow tensor.**
128 |     Adds padding for convolutional tasks.
129 |     ```python
130 |     def pad_tensor(tensor, paddings):
131 |         return tf.pad(tensor, paddings)
132 |     ```
133 | 
134 | 18. **How do you create a tensor with a specific device (CPU/GPU) in TensorFlow?**
135 |     Controls computation location.
136 |     ```python
137 |     with tf.device('/GPU:0'):
138 |         tensor = tf.constant([1, 2, 3])
139 |     ```
140 | 
141 | ## Neural Network Basics
142 | 
143 | ### Basic
144 | 19. **How do you define a simple neural network in TensorFlow?**
145 |     Builds a basic model using Keras.
146 |     ```python
147 |     from tensorflow.keras import models, layers
148 |     model = models.Sequential([
149 |         layers.Dense(2, input_shape=(10,))
150 |     ])
151 |     ```
152 | 
153 | 20. **What is the role of `tf.keras.Model` in TensorFlow?**
154 |     Base class for neural networks.
155 |     ```python
156 |     model = models.Sequential()
157 |     ```
158 | 
159 | 21. **How do you initialize model parameters in TensorFlow?**
160 |     Sets weights and biases.
161 |     ```python
162 |     model = models.Sequential([
163 |         layers.Dense(10, kernel_initializer='glorot_uniform')
164 |     ])
165 |     ```
166 | 
167 | 22. 
**How do you compute a forward pass in TensorFlow?** 168 | Processes input through the model. 169 | ```python 170 | x = tf.random.uniform((1, 10)) 171 | output = model(x) 172 | ``` 173 | 174 | 23. **What is the role of activation functions in TensorFlow?** 175 | Introduces non-linearity. 176 | ```python 177 | output = tf.keras.activations.relu(tf.constant([-1, 0, 1])) 178 | ``` 179 | 180 | 24. **How do you visualize model predictions?** 181 | Plots output distributions. 182 | ```python 183 | import matplotlib.pyplot as plt 184 | def plot_predictions(outputs): 185 | plt.hist(outputs.numpy(), bins=20) 186 | plt.savefig('predictions_hist.png') 187 | ``` 188 | 189 | #### Intermediate 190 | 25. **Write a function to define a multi-layer perceptron (MLP) in TensorFlow.** 191 | Builds a customizable MLP. 192 | ```python 193 | def create_mlp(input_dim, hidden_dims, output_dim): 194 | model = models.Sequential() 195 | model.add(layers.Input(shape=(input_dim,))) 196 | for dim in hidden_dims: 197 | model.add(layers.Dense(dim, activation='relu')) 198 | model.add(layers.Dense(output_dim)) 199 | return model 200 | ``` 201 | 202 | 26. **How do you implement a convolutional neural network (CNN) in TensorFlow?** 203 | Processes image data. 204 | ```python 205 | model = models.Sequential([ 206 | layers.Conv2D(16, 3, activation='relu', input_shape=(28, 28, 1)), 207 | layers.Flatten(), 208 | layers.Dense(10) 209 | ]) 210 | ``` 211 | 212 | 27. **Write a function to add dropout to a TensorFlow model.** 213 | Prevents overfitting. 214 | ```python 215 | def add_dropout(model, rate=0.5): 216 | new_model = models.Sequential() 217 | for layer in model.layers: 218 | new_model.add(layer) 219 | if isinstance(layer, layers.Dense): 220 | new_model.add(layers.Dropout(rate)) 221 | return new_model 222 | ``` 223 | 224 | 28. **How do you implement batch normalization in TensorFlow?** 225 | Stabilizes training. 226 | ```python 227 | model = models.Sequential([ 228 | layers.Dense(10, input_shape=(10,)), 229 | layers.BatchNormalization() 230 | ]) 231 | ``` 232 | 233 | 29. **Write a function to visualize model architecture.** 234 | Displays layer structure. 235 | ```python 236 | from tensorflow.keras.utils import plot_model 237 | def visualize_model(model): 238 | plot_model(model, to_file='model_architecture.png') 239 | ``` 240 | 241 | 30. **How do you handle gradient computation in TensorFlow?** 242 | Enables backpropagation. 243 | ```python 244 | x = tf.Variable([1.0, 2.0]) 245 | with tf.GradientTape() as tape: 246 | y = tf.reduce_sum(x) 247 | grads = tape.gradient(y, x) 248 | ``` 249 | 250 | #### Advanced 251 | 31. **Write a function to implement a custom neural network layer in TensorFlow.** 252 | Defines specialized operations. 253 | ```python 254 | class CustomLayer(layers.Layer): 255 | def __init__(self, units): 256 | super().__init__() 257 | self.units = units 258 | def build(self, input_shape): 259 | self.w = self.add_weight(shape=(input_shape[-1], self.units), initializer='random_normal') 260 | def call(self, inputs): 261 | return tf.matmul(inputs, self.w) 262 | ``` 263 | 264 | 32. **How do you optimize neural network memory usage in TensorFlow?** 265 | Uses mixed precision training. 266 | ```python 267 | from tensorflow.keras.mixed_precision import set_global_policy 268 | set_global_policy('mixed_float16') 269 | ``` 270 | 271 | 33. **Write a function to implement a residual network (ResNet) block in TensorFlow.** 272 | Enhances deep network training. 
273 |     ```python
274 |     def res_block(x, filters):
275 |         shortcut = x
276 |         x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
277 |         x = layers.Conv2D(filters, 3, padding='same')(x)
278 |         x = layers.Add()([x, shortcut])
279 |         return layers.Activation('relu')(x)
280 |     ```
281 | 
282 | 34. **How do you implement attention mechanisms in TensorFlow?**
283 |     Enhances model focus on relevant data.
284 |     ```python
285 |     class AttentionLayer(layers.Layer):
286 |         def __init__(self, units):
287 |             super().__init__()
288 |             self.query = layers.Dense(units)
289 |             self.key = layers.Dense(units)
290 |             self.value = layers.Dense(units)
291 |         def call(self, inputs):
292 |             q, k, v = self.query(inputs), self.key(inputs), self.value(inputs)
293 |             scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(tf.cast(k.shape[-1], tf.float32))
294 |             return tf.matmul(tf.nn.softmax(scores), v)
295 |     ```
296 | 
297 | 35. **Write a function to handle dynamic network architectures in TensorFlow.**
298 |     Builds flexible models.
299 |     ```python
300 |     def dynamic_model(layer_sizes):
301 |         model = models.Sequential()
302 |         for i in range(len(layer_sizes) - 1):
303 |             model.add(layers.Dense(layer_sizes[i+1], activation='relu'))
304 |         return model
305 |     ```
306 | 
307 | 36. **How do you implement a transformer model in TensorFlow?**
308 |     Supports NLP and vision tasks.
309 |     ```python
310 |     from tensorflow.keras.layers import MultiHeadAttention
311 |     def transformer_block(x, heads, d_model):
312 |         attn = MultiHeadAttention(num_heads=heads, key_dim=d_model)(x, x)
313 |         return layers.Add()([x, attn])  # residual connection around the attention output
314 |     ```
315 | 
316 | ## Training and Optimization
317 | 
318 | ### Basic
319 | 37. **How do you define a loss function in TensorFlow?**
320 |     Measures model error.
321 |     ```python
322 |     loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
323 |     ```
324 | 
325 | 38. **How do you set up an optimizer in TensorFlow?**
326 |     Updates model parameters.
327 |     ```python
328 |     optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
329 |     ```
330 | 
331 | 39. **How do you compile a TensorFlow model?**
332 |     Configures training settings.
333 |     ```python
334 |     model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
335 |     ```
336 | 
337 | 40. **How do you perform a training step in TensorFlow?**
338 |     Executes forward and backward passes.
339 |     ```python
340 |     def train_step(model, inputs, targets, loss_fn, optimizer):
341 |         with tf.GradientTape() as tape:
342 |             predictions = model(inputs, training=True)
343 |             loss = loss_fn(targets, predictions)
344 |         grads = tape.gradient(loss, model.trainable_variables)
345 |         optimizer.apply_gradients(zip(grads, model.trainable_variables))
346 |         return loss
347 |     ```
348 | 
349 | 41. **How do you move data to GPU in TensorFlow?**
350 |     Accelerates computation.
351 |     ```python
352 |     with tf.device('/GPU:0'):
353 |         model.fit(X_train, y_train)
354 |     ```
355 | 
356 | 42. **How do you visualize training loss in TensorFlow?**
357 |     Plots loss curves.
358 |     ```python
359 |     import matplotlib.pyplot as plt
360 |     def plot_loss(history):
361 |         plt.plot(history.history['loss'])
362 |         plt.savefig('loss_curve.png')
363 |     ```
364 | 
365 | #### Intermediate
366 | 43. **Write a function to implement a training loop in TensorFlow.**
367 |     Trains model over epochs.
368 | ```python 369 | def train_model(model, dataset, loss_fn, optimizer, epochs): 370 | for epoch in range(epochs): 371 | epoch_loss = 0 372 | for inputs, targets in dataset: 373 | loss = train_step(model, inputs, targets, loss_fn, optimizer) 374 | epoch_loss += loss 375 | print(f"Epoch {epoch+1}, Loss: {epoch_loss.numpy()}") 376 | ``` 377 | 378 | 44. **How do you implement learning rate scheduling in TensorFlow?** 379 | Adjusts learning rate dynamically. 380 | ```python 381 | lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(0.01, decay_steps=100, decay_rate=0.9) 382 | optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule) 383 | ``` 384 | 385 | 45. **Write a function to evaluate a TensorFlow model.** 386 | Computes validation metrics. 387 | ```python 388 | def evaluate_model(model, dataset, loss_fn): 389 | total_loss = 0 390 | for inputs, targets in dataset: 391 | predictions = model(inputs, training=False) 392 | total_loss += loss_fn(targets, predictions) 393 | return total_loss.numpy() / len(dataset) 394 | ``` 395 | 396 | 46. **How do you implement early stopping in TensorFlow?** 397 | Halts training on stagnation. 398 | ```python 399 | early_stopping = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True) 400 | model.fit(X_train, y_train, callbacks=[early_stopping]) 401 | ``` 402 | 403 | 47. **Write a function to save and load a TensorFlow model.** 404 | Persists trained models. 405 | ```python 406 | def save_model(model, path): 407 | model.save(path) 408 | def load_model(path): 409 | return tf.keras.models.load_model(path) 410 | ``` 411 | 412 | 48. **How do you implement data augmentation in TensorFlow?** 413 | Enhances training data. 414 | ```python 415 | data_augmentation = tf.keras.Sequential([ 416 | layers.RandomFlip('horizontal'), 417 | layers.RandomRotation(0.1) 418 | ]) 419 | ``` 420 | 421 | #### Advanced 422 | 49. **Write a function to implement gradient clipping in TensorFlow.** 423 | Stabilizes training. 424 | ```python 425 | def clip_gradients(grads, max_norm): 426 | return tf.clip_by_global_norm(grads, max_norm)[0] 427 | ``` 428 | 429 | 50. **How do you optimize training for large datasets in TensorFlow?** 430 | Uses distributed training or mixed precision. 431 | ```python 432 | strategy = tf.distribute.MirroredStrategy() 433 | with strategy.scope(): 434 | model = create_mlp(10, [64], 2) 435 | model.compile(optimizer='adam', loss='sparse_categorical_crossentropy') 436 | ``` 437 | 438 | 51. **Write a function to implement custom loss functions in TensorFlow.** 439 | Defines specialized losses. 440 | ```python 441 | def custom_loss(y_true, y_pred): 442 | return tf.reduce_mean(tf.square(y_true - y_pred)) 443 | ``` 444 | 445 | 52. **How do you implement adversarial training in TensorFlow?** 446 | Enhances model robustness. 447 | ```python 448 | def adversarial_step(model, inputs, targets, loss_fn, optimizer, epsilon=0.1): 449 | with tf.GradientTape() as tape: 450 | tape.watch(inputs) 451 | predictions = model(inputs, training=True) 452 | loss = loss_fn(targets, predictions) 453 | grad = tape.gradient(loss, inputs) 454 | adv_inputs = inputs + epsilon * tf.sign(grad) 455 | return train_step(model, adv_inputs, targets, loss_fn, optimizer) 456 | ``` 457 | 458 | 53. **Write a function to implement curriculum learning in TensorFlow.** 459 | Adjusts training difficulty. 
460 |     ```python
461 |     def curriculum_train(model, dataset, loss_fn, optimizer, difficulty):
462 |         easy_data = [(x, y) for x, y in dataset if tf.math.reduce_std(x) < difficulty]
463 |         return train_model(model, easy_data, loss_fn, optimizer, epochs=1)
464 |     ```
465 | 
466 | 54. **How do you implement distributed training in TensorFlow?**
467 |     Scales across multiple GPUs.
468 |     ```python
469 |     strategy = tf.distribute.MirroredStrategy()
470 |     with strategy.scope():
471 |         model = create_mlp(10, [64], 2)
472 |     ```
473 | 
474 | ## Data Loading and Preprocessing
475 | 
476 | ### Basic
477 | 55. **How do you create a dataset in TensorFlow?**
478 |     Defines data access.
479 |     ```python
480 |     dataset = tf.data.Dataset.from_tensor_slices((X, y))
481 |     ```
482 | 
483 | 56. **How do you create a batched dataset in TensorFlow?**
484 |     Batches and shuffles data.
485 |     ```python
486 |     dataset = dataset.shuffle(1000).batch(32)
487 |     ```
488 | 
489 | 57. **How do you preprocess images in TensorFlow?**
490 |     Applies transformations for vision tasks.
491 |     ```python
492 |     def preprocess_image(image):
493 |         return tf.image.resize(image, [64, 64]) / 255.0
494 |     ```
495 | 
496 | 58. **How do you load a standard dataset in TensorFlow?**
497 |     Uses TensorFlow datasets.
498 |     ```python
499 |     import tensorflow_datasets as tfds
500 |     dataset, _ = tfds.load('mnist', split='train', with_info=True)
501 |     ```
502 | 
503 | 59. **How do you visualize dataset samples in TensorFlow?**
504 |     Plots data examples.
505 |     ```python
506 |     import matplotlib.pyplot as plt
507 |     def plot_samples(dataset):
508 |         for image, _ in dataset.take(1):
509 |             plt.imshow(image.numpy())
510 |             plt.savefig('sample_image.png')
511 |     ```
512 | 
513 | 60. **How do you handle imbalanced datasets in TensorFlow?**
514 |     Attaches per-sample weights that `fit` applies to the loss.
515 |     ```python
516 |     sample_weights = tf.where(y_train == 0, 2.0, 1.0)  # up-weight the rare class
517 |     dataset = tf.data.Dataset.from_tensor_slices((X, y_train, sample_weights))
518 |     ```
519 | 
520 | #### Intermediate
521 | 61. **Write a function to create a dataset with augmentation in TensorFlow.**
522 |     Enhances data variety.
523 |     ```python
524 |     def create_augmented_dataset(images, labels):
525 |         dataset = tf.data.Dataset.from_tensor_slices((images, labels))
526 |         dataset = dataset.map(lambda x, y: (data_augmentation(x, training=True), y))
527 |         return dataset.batch(32)
528 |     ```
529 | 
530 | 62. **How do you implement data normalization in TensorFlow?**
531 |     Scales data for training.
532 |     ```python
533 |     normalization_layer = layers.Normalization()
534 |     normalization_layer.adapt(X_train)
535 |     ```
536 | 
537 | 63. **Write a function to split a dataset into train/test sets in TensorFlow.**
538 |     Prepares data for evaluation.
539 |     ```python
540 |     def split_dataset(dataset, train_ratio=0.8):
541 |         train_size = int(train_ratio * len(dataset))
542 |         train_dataset = dataset.take(train_size)
543 |         test_dataset = dataset.skip(train_size)
544 |         return train_dataset, test_dataset
545 |     ```
546 | 
547 | 64. **How do you optimize data loading in TensorFlow?**
548 |     Uses prefetching and caching.
549 |     ```python
550 |     dataset = dataset.cache().prefetch(tf.data.AUTOTUNE)
551 |     ```
552 | 
553 | 65. **Write a function to create a dataset with custom preprocessing.**
554 |     Handles complex data transformations.
555 |     ```python
556 |     def custom_preprocess(x, y):
557 |         x = tf.cast(x, tf.float32) / 255.0
558 |         return x, y
559 |     dataset = dataset.map(custom_preprocess)
560 |     ```
561 | 
562 | 66. 
66. **How do you handle large datasets in TensorFlow?**
Streams data from disk with TFRecords instead of loading everything into memory.
```python
def create_tfrecord_dataset(file_path):
    # Records still need decoding, e.g. with tf.io.parse_single_example
    return tf.data.TFRecordDataset(file_path)
```

### Advanced

67. **Write a function to implement dataset caching in TensorFlow.**
Speeds up data access.
```python
def cache_dataset(dataset, cache_file='cache'):
    return dataset.cache(cache_file)
```

68. **How do you implement distributed data loading in TensorFlow?**
Shards data across replicas.
```python
strategy = tf.distribute.MirroredStrategy()
dataset = strategy.experimental_distribute_dataset(dataset)
```

69. **Write a function to preprocess text data for NLP in TensorFlow.**
Tokenizes and pads text sequences.
```python
from tensorflow.keras.preprocessing.text import Tokenizer
def preprocess_text(texts, max_length=128):
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(texts)
    sequences = tokenizer.texts_to_sequences(texts)
    return tf.keras.preprocessing.sequence.pad_sequences(sequences, maxlen=max_length)
```

70. **How do you implement data pipelines with TensorFlow?**
Chains preprocessing steps.
```python
def create_pipeline(dataset):
    return dataset.map(preprocess_image).batch(32).prefetch(tf.data.AUTOTUNE)
```

71. **Write a function to handle multi-modal data in TensorFlow.**
Processes images and text together.
```python
def multi_modal_dataset(images, texts, labels):
    dataset = tf.data.Dataset.from_tensor_slices(({'image': images, 'text': texts}, labels))
    return dataset.map(lambda x, y: ({'image': preprocess_image(x['image']), 'text': x['text']}, y))
```

72. **How do you optimize data preprocessing for real-time inference?**
Uses cheap transformations such as nearest-neighbor resizing.
```python
def preprocess_for_inference(image):
    return tf.image.resize(image, [64, 64], method='nearest') / 255.0
```

## Model Deployment and Inference

### Basic

73. **How do you perform inference with a TensorFlow model?**
Generates predictions.
```python
def inference(model, inputs):
    return model.predict(inputs)
```

74. **How do you save a trained TensorFlow model for deployment?**
Persists the architecture and weights.
```python
model.save('model')
```

75. **How do you load a TensorFlow model for inference?**
Restores model state.
```python
model = tf.keras.models.load_model('model')
```

76. **What is TensorFlow SavedModel format?**
The standard, language-neutral format for deployment.
```python
tf.saved_model.save(model, 'saved_model')
```

77. **How do you optimize a model for inference in TensorFlow?**
Uses TensorFlow Lite conversion with quantization.
```python
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```

78. **How do you visualize inference results in TensorFlow?**
Plots class probabilities.
```python
import matplotlib.pyplot as plt
def plot_inference(outputs):
    plt.bar(range(len(outputs[0])), tf.nn.softmax(outputs[0]).numpy())
    plt.savefig('inference_plot.png')
```
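To sanity-check the `tflite_model` produced in question 77, it can be run with the TFLite interpreter; a minimal sketch using a dummy input:
```python
import numpy as np
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Dummy input with the shape/dtype the converted model expects
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])
```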
### Intermediate

79. **Write a function to perform batch inference in TensorFlow.**
Processes multiple inputs.
```python
def batch_inference(model, dataset):
    results = []
    for inputs, _ in dataset:
        results.extend(model.predict(inputs))
    return results
```

80. **How do you deploy a TensorFlow model with TensorFlow Serving?**
Exports to a version-numbered directory that TensorFlow Serving watches and serves over REST/gRPC.
```python
tf.saved_model.save(model, 'saved_model/1')
```

81. **Write a function to implement real-time inference in TensorFlow.**
Processes streaming data.
```python
def real_time_inference(model, input_stream):
    for item in input_stream:
        yield model.predict(item)
```

82. **How do you optimize inference for mobile devices in TensorFlow?**
Uses TensorFlow Lite.
```python
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```

83. **Write a function to serve a TensorFlow model with FastAPI.**
Exposes the model via an HTTP API.
```python
from fastapi import FastAPI
app = FastAPI()

@app.post('/predict')
async def predict(data: list):
    inputs = tf.constant(data, dtype=tf.float32)
    return {'prediction': model.predict(inputs).tolist()}
```

84. **How do you handle model versioning in TensorFlow?**
Tracks model iterations.
```python
def save_versioned_model(model, version):
    tf.saved_model.save(model, f'saved_model/{version}')
```

### Advanced

85. **Write a function to implement model quantization in TensorFlow.**
Shrinks the model; float16 quantization is shown here (full int8 additionally requires a representative dataset).
```python
def quantize_model(model):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    return converter.convert()
```

86. **How do you deploy TensorFlow models in a distributed environment?**
Uses TensorFlow Serving clusters behind a load balancer.
```python
tf.saved_model.save(model, 'saved_model/1')
```

87. **Write a function to implement model pruning in TensorFlow.**
Zeroes out low-magnitude weights during training.
```python
import tensorflow_model_optimization as tfmot
def prune_model(model):
    pruning_params = {
        'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(
            initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000)
    }
    # Training the returned model requires the UpdatePruningStep callback
    return tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
```

88. **How do you implement A/B testing for TensorFlow models?**
Compares model performance on the same data.
```python
def ab_test(model_a, model_b, dataset):
    metrics_a = evaluate_model(model_a, dataset, loss_fn)
    metrics_b = evaluate_model(model_b, dataset, loss_fn)
    return {'model_a': metrics_a, 'model_b': metrics_b}
```

89. **Write a function to monitor inference performance in TensorFlow.**
Tracks latency and throughput.
```python
import time
def monitor_inference(model, dataset):
    start = time.time()
    results = batch_inference(model, dataset)
    elapsed = time.time() - start
    return {'latency': elapsed / len(results), 'throughput': len(results) / elapsed}
```
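One caveat worth stating in an interview: the first `predict` call traces and compiles the computation graph, so timings should be taken after a warm-up. A sketch building on the helper above:
```python
def monitor_inference_warm(model, dataset, warmup_batches=2):
    for inputs, _ in dataset.take(warmup_batches):
        model.predict(inputs)  # warm-up runs, excluded from timing
    return monitor_inference(model, dataset)
```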
90. **How do you implement model explainability in TensorFlow?**
Visualizes which input regions drive a prediction.
```python
from tf_explain.core.grad_cam import GradCAM
def explain_model(model, sample):
    explainer = GradCAM()
    grid = explainer.explain((sample, None), model, class_index=0)
    return grid
```

## Debugging and Error Handling

### Basic

91. **How do you debug TensorFlow tensor operations?**
Logs tensor shapes and values.
```python
def debug_tensor(tensor):
    print(f"Shape: {tensor.shape}, Values: {tensor[:5]}")
    return tensor
```

92. **What is a try-except block in TensorFlow applications?**
Handles runtime errors gracefully.
```python
try:
    output = model.predict(inputs)
except tf.errors.InvalidArgumentError as e:
    print(f"Error: {e}")
```

93. **How do you validate TensorFlow model inputs?**
Ensures correct shapes and types.
```python
def validate_input(tensor, expected_shape):
    if tensor.shape != expected_shape:
        raise ValueError(f"Expected shape {expected_shape}, got {tensor.shape}")
    return tensor
```

94. **How do you handle NaN values in TensorFlow tensors?**
Detects and replaces NaNs.
```python
tensor = tf.where(tf.math.is_nan(tensor), tf.zeros_like(tensor), tensor)
```

95. **What is the role of logging in TensorFlow debugging?**
Tracks errors and operations.
```python
import logging
logging.basicConfig(filename='tensorflow.log', level=logging.INFO)
logging.info("Starting TensorFlow operation")
```

96. **How do you handle GPU memory errors in TensorFlow?**
Checks current allocation before running memory-hungry operations.
```python
def safe_operation(tensor):
    info = tf.config.experimental.get_memory_info('GPU:0')
    if info['current'] > 0.9 * info['peak']:
        raise MemoryError("GPU memory limit reached")
    return tensor * 2
```

### Intermediate

97. **Write a function to retry TensorFlow operations on failure.**
Handles transient errors.
```python
def retry_operation(func, tensor, max_attempts=3):
    for attempt in range(max_attempts):
        try:
            return func(tensor)
        except Exception as e:
            if attempt == max_attempts - 1:
                raise
            print(f"Attempt {attempt+1} failed: {e}")
```

98. **How do you debug TensorFlow model outputs?**
Inspects intermediate results.
```python
def debug_model(model, inputs):
    output = model(inputs)
    print(f"Output shape: {output.shape}, Values: {output[:5]}")
    return output
```

99. **Write a function to validate TensorFlow model parameters.**
Checks weights for NaNs.
```python
def validate_params(model):
    for layer in model.layers:
        weights = layer.get_weights()
        if any(tf.math.is_nan(w).numpy().any() for w in weights):
            raise ValueError("NaN in weights")
    return model
```

100. **How do you profile TensorFlow operation performance?**
Measures execution time.
```python
import time
def profile_operation(model, inputs):
    start = time.time()
    output = model(inputs)
    print(f"Operation took {time.time() - start}s")
    return output
```
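Profiling often shows eager-mode overhead dominating small operations; wrapping the call in `tf.function` compiles it to a graph. A sketch comparing the two (the first compiled call is excluded because it pays the one-time tracing cost):
```python
import time

@tf.function
def compiled_call(model, inputs):
    return model(inputs)

def compare_eager_vs_graph(model, inputs):
    compiled_call(model, inputs)  # trace/compile once, not timed
    start = time.time(); model(inputs); eager = time.time() - start
    start = time.time(); compiled_call(model, inputs); graph = time.time() - start
    print(f"eager: {eager:.6f}s, graph: {graph:.6f}s")
```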
101. **Write a function to handle numerical instability in TensorFlow.**
Clips values away from zero and infinity.
```python
def safe_computation(tensor, epsilon=1e-8):
    return tf.clip_by_value(tensor, epsilon, 1/epsilon)
```

102. **How do you debug TensorFlow training loops?**
Logs per-batch metrics.
```python
def debug_training(model, dataset, loss_fn, optimizer):
    losses = []
    for inputs, targets in dataset:
        loss = train_step(model, inputs, targets, loss_fn, optimizer)
        print(f"Batch loss: {loss.numpy()}")
        losses.append(loss)
    return losses
```

### Advanced

103. **Write a function to implement a custom TensorFlow error handler.**
Logs specific errors.
```python
import logging
def custom_error_handler(operation, tensor):
    logging.basicConfig(filename='tensorflow.log', level=logging.ERROR)
    try:
        return operation(tensor)
    except Exception as e:
        logging.error(f"Operation error: {e}")
        raise
```

104. **How do you implement circuit breakers in TensorFlow applications?**
Prevents cascading failures.
```python
from pybreaker import CircuitBreaker
breaker = CircuitBreaker(fail_max=3, reset_timeout=60)

@breaker
def safe_training(model, inputs, targets, loss_fn, optimizer):
    return train_step(model, inputs, targets, loss_fn, optimizer)
```

105. **Write a function to detect gradient explosions in TensorFlow.**
Checks the global gradient norm against a threshold.
```python
def detect_explosion(model, inputs, targets, loss_fn, threshold=10.0):
    with tf.GradientTape() as tape:
        predictions = model(inputs)
        loss = loss_fn(targets, predictions)
    grads = tape.gradient(loss, model.trainable_variables)
    total_norm = tf.linalg.global_norm(grads)
    if total_norm > threshold:
        print("Warning: Gradient explosion detected")
```

106. **How do you implement logging for distributed TensorFlow training?**
Centralizes logs for debugging.
```python
import logging.handlers
def setup_distributed_logging():
    handler = logging.handlers.SocketHandler('log-server', 9090)
    logging.getLogger().addHandler(handler)
    logging.info("TensorFlow training started")
```

107. **Write a function to handle version compatibility in TensorFlow.**
Checks the library version numerically rather than by string comparison.
```python
import tensorflow as tf
def check_tensorflow_version():
    if int(tf.__version__.split('.')[0]) < 2:
        raise ValueError("Unsupported TensorFlow version")
```

108. **How do you debug TensorFlow performance bottlenecks?**
Profiles training with the TensorFlow Profiler; traces are inspected in TensorBoard.
```python
def debug_bottlenecks(model, inputs):
    with tf.profiler.experimental.Profile('logdir'):
        model(inputs)
```

## Visualization and Interpretation

### Basic

109. **How do you visualize TensorFlow tensor distributions?**
Plots histograms for analysis.
```python
import matplotlib.pyplot as plt
def plot_tensor_dist(tensor):
    plt.hist(tensor.numpy().flatten(), bins=20)
    plt.savefig('tensor_dist.png')
```
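An alternative sketch: log the distribution to TensorBoard instead of a static image (`logdir` is an assumed output directory, `tensor` an assumed value to inspect).
```python
# Write a histogram summary viewable in TensorBoard's Histograms tab
writer = tf.summary.create_file_writer('logdir')
with writer.as_default():
    tf.summary.histogram('tensor_values', tensor, step=0)
```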
110. **How do you create a scatter plot with TensorFlow outputs?**
Visualizes predictions against targets.
```python
import matplotlib.pyplot as plt
def plot_scatter(outputs, targets):
    plt.scatter(outputs.numpy(), targets.numpy())
    plt.savefig('scatter_plot.png')
```

111. **How do you visualize training metrics in TensorFlow?**
Plots loss or accuracy curves.
```python
import matplotlib.pyplot as plt
def plot_metrics(history):
    plt.plot(history.history['accuracy'])
    plt.savefig('metrics_plot.png')
```

112. **How do you visualize model feature maps in TensorFlow?**
Shows convolutional outputs.
```python
import matplotlib.pyplot as plt
def plot_feature_maps(model, inputs):
    # Assumes the first layer is convolutional
    feature_model = tf.keras.Model(inputs=model.input, outputs=model.layers[0].output)
    features = feature_model.predict(inputs)
    plt.imshow(features[0, :, :, 0], cmap='gray')
    plt.savefig('feature_map.png')
```

113. **How do you create a confusion matrix in TensorFlow?**
Evaluates classification performance.
```python
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
def plot_confusion_matrix(outputs, targets):
    cm = confusion_matrix(targets, tf.argmax(outputs, axis=1).numpy())
    sns.heatmap(cm, annot=True)
    plt.savefig('confusion_matrix.png')
```

114. **How do you visualize gradient flow in TensorFlow?**
Checks for vanishing/exploding gradients layer by layer.
```python
import matplotlib.pyplot as plt
def plot_grad_flow(model, inputs, targets, loss_fn):
    with tf.GradientTape() as tape:
        predictions = model(inputs)
        loss = loss_fn(targets, predictions)
    grads = tape.gradient(loss, model.trainable_variables)
    plt.plot([tf.norm(g).numpy() for g in grads])
    plt.savefig('grad_flow.png')
```

### Intermediate

115. **Write a function to visualize model predictions over time.**
Plots temporal trends.
```python
import matplotlib.pyplot as plt
def plot_time_series(outputs):
    plt.plot(outputs.numpy())
    plt.savefig('time_series_plot.png')
```

116. **How do you visualize attention weights in TensorFlow?**
Shows where the model focuses.
```python
import matplotlib.pyplot as plt
def plot_attention(attention_weights):
    plt.imshow(attention_weights.numpy(), cmap='hot')
    plt.colorbar()
    plt.savefig('attention_plot.png')
```

117. **Write a function to visualize model uncertainty.**
Plots confidence intervals.
```python
import matplotlib.pyplot as plt
def plot_uncertainty(outputs, std):
    mean = tf.reduce_mean(outputs, axis=0).numpy()
    std = std.numpy()
    plt.plot(mean)
    plt.fill_between(range(len(mean)), mean - std, mean + std, alpha=0.2)
    plt.savefig('uncertainty_plot.png')
```

118. **How do you visualize embedding spaces in TensorFlow?**
Projects high-dimensional data to 2D.
```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
def plot_embeddings(embeddings):
    tsne = TSNE(n_components=2)
    reduced = tsne.fit_transform(embeddings.numpy())
    plt.scatter(reduced[:, 0], reduced[:, 1])
    plt.savefig('embedding_plot.png')
```
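A small extension (sketch; `labels` is an assumed integer vector aligned with the embeddings) that colors points by class, which usually makes the clusters far easier to read:
```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
def plot_embeddings_labeled(embeddings, labels):
    reduced = TSNE(n_components=2).fit_transform(embeddings.numpy())
    plt.scatter(reduced[:, 0], reduced[:, 1], c=labels, cmap='tab10', s=5)
    plt.colorbar()
    plt.savefig('embedding_labeled_plot.png')
```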
119. **Write a function to visualize model performance metrics.**
Plots accuracy or loss.
```python
import matplotlib.pyplot as plt
def plot_performance(metrics, metric_name):
    plt.plot(metrics)
    plt.title(metric_name)
    plt.savefig(f'{metric_name}_plot.png')
```

120. **How do you visualize data augmentation effects in TensorFlow?**
Compares original and augmented data side by side.
```python
import matplotlib.pyplot as plt
def plot_augmentation(original, augmented):
    plt.subplot(1, 2, 1)
    plt.imshow(original.numpy())
    plt.subplot(1, 2, 2)
    plt.imshow(augmented.numpy())
    plt.savefig('augmentation_plot.png')
```

### Advanced

121. **Write a function to visualize model interpretability with Grad-CAM.**
Highlights important regions.
```python
from tf_explain.core.grad_cam import GradCAM
import matplotlib.pyplot as plt
def plot_grad_cam(model, sample):
    explainer = GradCAM()
    grid = explainer.explain((sample, None), model, class_index=0)
    plt.imshow(grid, cmap='jet')
    plt.savefig('grad_cam_plot.png')
```

122. **How do you implement a dashboard for TensorFlow metrics?**
Displays real-time training stats.
```python
from fastapi import FastAPI
app = FastAPI()
metrics = []

@app.get('/metrics')
async def get_metrics():
    return {'metrics': metrics}
```

123. **Write a function to visualize data drift in TensorFlow.**
Tracks dataset changes.
```python
import matplotlib.pyplot as plt
def plot_data_drift(old_data, new_data):
    plt.hist(old_data.numpy(), alpha=0.5, label='Old')
    plt.hist(new_data.numpy(), alpha=0.5, label='New')
    plt.legend()
    plt.savefig('data_drift_plot.png')
```

124. **How do you visualize model robustness in TensorFlow?**
Plots performance under increasing input perturbations.
```python
import matplotlib.pyplot as plt
def plot_robustness(outputs, noise_levels):
    # outputs: one accuracy tensor per noise level
    accuracies = [tf.reduce_mean(o).numpy() for o in outputs]
    plt.plot(noise_levels, accuracies)
    plt.savefig('robustness_plot.png')
```

125. **Write a function to visualize multi-modal model outputs.**
Plots image and text predictions together.
```python
import matplotlib.pyplot as plt
def plot_multi_modal(image_output, text_output):
    plt.subplot(1, 2, 1)
    plt.imshow(image_output.numpy())
    plt.subplot(1, 2, 2)
    plt.bar(range(len(text_output)), text_output.numpy())
    plt.savefig('multi_modal_plot.png')
```

126. **How do you visualize model fairness in TensorFlow?**
Plots group-wise metrics.
```python
import matplotlib.pyplot as plt
def plot_fairness(outputs, groups):
    unique_groups = tf.unique(groups)[0]
    group_metrics = [tf.reduce_mean(tf.boolean_mask(outputs, groups == g)).numpy()
                     for g in unique_groups]
    plt.bar(unique_groups.numpy(), group_metrics)
    plt.savefig('fairness_plot.png')
```

## Best Practices and Optimization

### Basic

127. **What are best practices for TensorFlow code organization?**
Modularizes model and training code; a usage sketch follows below.
```python
def build_model():
    model = models.Sequential([layers.Dense(10)])
    model.compile(optimizer='adam', loss='mse')
    return model

def train(model, dataset):
    model.fit(dataset, epochs=1)
```
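A thin entry point keeps the pieces testable (sketch; `X` and `y` are assumed training arrays):
```python
if __name__ == '__main__':
    dataset = tf.data.Dataset.from_tensor_slices((X, y)).batch(32)
    model = build_model()
    train(model, dataset)
```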
128. **How do you ensure reproducibility in TensorFlow?**
Sets random seeds.
```python
import tensorflow as tf
tf.random.set_seed(42)
```

129. **What is caching in TensorFlow pipelines?**
Stores intermediate results so later epochs skip recomputation.
```python
dataset = dataset.cache()
```

130. **How do you handle large-scale TensorFlow models?**
Uses distribution strategies; `MirroredStrategy` replicates the model across GPUs and splits each batch.
```python
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = build_model()
```

131. **What is the role of environment configuration in TensorFlow?**
Manages settings securely.
```python
import os
os.environ['TF_MODEL_PATH'] = 'model'
```

132. **How do you document TensorFlow code?**
Uses docstrings for clarity.
```python
def train_model(model, dataset):
    """Trains a TensorFlow model over a dataset."""
    return model.fit(dataset, epochs=1)
```

### Intermediate

133. **Write a function to optimize TensorFlow memory usage.**
Enables mixed precision, which roughly halves activation memory.
```python
def optimize_memory():
    # Set the policy before building the model so layers use float16 compute
    tf.keras.mixed_precision.set_global_policy('mixed_float16')
```

134. **How do you implement unit tests for TensorFlow code?**
Validates model behavior.
```python
import unittest
class TestTensorFlow(unittest.TestCase):
    def test_model_output(self):
        model = build_model()
        x = tf.random.uniform((1, 10))
        output = model(x)
        self.assertEqual(output.shape, (1, 10))
```

135. **Write a function to create reusable TensorFlow templates.**
Standardizes model building.
```python
def model_template(input_dim, output_dim):
    return models.Sequential([
        layers.Dense(64, input_shape=(input_dim,), activation='relu'),
        layers.Dense(output_dim)
    ])
```

136. **How do you optimize TensorFlow for batch processing?**
Processes data in chunks.
```python
def batch_process(model, dataset):
    results = []
    for batch in dataset:
        results.extend(model.predict(batch[0]))
    return results
```

137. **Write a function to handle TensorFlow configuration.**
Centralizes settings.
```python
def configure_tensorflow():
    return {'device': '/GPU:0', 'dtype': tf.float32}
```

138. **How do you ensure TensorFlow pipeline consistency?**
Standardizes versions and settings.
```python
import tensorflow as tf
def check_tensorflow_env():
    print(f"TensorFlow version: {tf.__version__}")
```

### Advanced

139. **Write a function to implement TensorFlow pipeline caching.**
Reuses processed data.
```python
def cache_data(dataset, cache_path='cache'):
    return dataset.cache(cache_path)
```

140. **How do you optimize TensorFlow for high-throughput processing?**
Runs inference calls in parallel; see the batched alternative sketched below.
```python
from joblib import Parallel, delayed
def high_throughput_inference(model, inputs):
    # Threads, not processes: a Keras model cannot be pickled cheaply
    return Parallel(n_jobs=-1, prefer='threads')(delayed(model.predict)(x) for x in inputs)
```
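In practice, batching through a single `predict` call is usually faster than per-sample parallelism, since it amortizes dispatch overhead and keeps the accelerator saturated; a sketch:
```python
def high_throughput_inference_batched(model, inputs, batch_size=256):
    dataset = tf.data.Dataset.from_tensor_slices(inputs).batch(batch_size)
    return model.predict(dataset)
```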
141. **Write a function to implement TensorFlow pipeline versioning.**
Tracks changes in workflows.
```python
import json
def version_pipeline(config, version):
    with open(f'tensorflow_pipeline_v{version}.json', 'w') as f:
        json.dump(config, f)
```

142. **How do you implement TensorFlow pipeline monitoring?**
Logs performance metrics.
```python
import logging
import time
def monitored_training(model, dataset):
    logging.basicConfig(filename='tensorflow.log', level=logging.INFO)
    start = time.time()
    history = model.fit(dataset, epochs=1)
    logging.info(f"Training took {time.time() - start}s")
    return history
```

143. **Write a function to handle TensorFlow scalability.**
Streams batches instead of materializing the full dataset in memory.
```python
def scalable_training(model, dataset, max_batches=1000):
    for x, y in dataset.take(max_batches):
        model.train_on_batch(x, y)
```

144. **How do you implement TensorFlow pipeline automation?**
Scripts end-to-end workflows.
```python
def automate_pipeline(data, labels):
    dataset = tf.data.Dataset.from_tensor_slices((data, labels)).batch(32)
    model = build_model()
    model.fit(dataset, epochs=5)
    model.save('model')
    return model
```

## Ethical Considerations in TensorFlow

### Basic

145. **What are ethical concerns in TensorFlow applications?**
Includes bias in models and energy consumption.
```python
def check_model_bias(outputs, groups):
    return [tf.reduce_mean(tf.boolean_mask(outputs, groups == g)).numpy()
            for g in tf.unique(groups)[0]]
```

146. **How do you detect bias in TensorFlow model predictions?**
Analyzes group disparities.
```python
def detect_bias(outputs, groups):
    return {g.numpy(): tf.reduce_mean(tf.boolean_mask(outputs, groups == g)).numpy()
            for g in tf.unique(groups)[0]}
```

147. **What is data privacy in TensorFlow, and how is it ensured?**
Protects sensitive data; additive noise is one simple measure.
```python
def anonymize_data(data):
    return data + tf.random.normal(tf.shape(data), mean=0, stddev=0.1)
```

148. **How do you ensure fairness in TensorFlow models?**
Balances predictions across groups via per-example weights.
```python
def fair_training(model, dataset, weight_fn):
    # weight_fn maps a label to its sample weight
    dataset = dataset.map(lambda x, y: (x, y, weight_fn(y)))
    return model.fit(dataset, epochs=1)
```

149. **What is explainability in TensorFlow applications?**
Clarifies model decisions.
```python
def explain_predictions(model, sample):
    grid = explain_model(model, sample)
    print(f"Feature importance: {grid}")
    return grid
```

150. **How do you visualize TensorFlow model bias?**
Plots group-wise predictions.
```python
import matplotlib.pyplot as plt
def plot_bias(outputs, groups):
    unique_groups = tf.unique(groups)[0]
    group_means = [tf.reduce_mean(tf.boolean_mask(outputs, groups == g)).numpy()
                   for g in unique_groups]
    plt.bar(unique_groups.numpy(), group_means)
    plt.savefig('bias_plot.png')
```

### Intermediate

151. **Write a function to mitigate bias in TensorFlow models.**
Reweights or resamples data.
```python
def mitigate_bias(dataset, weight_fn):
    # Upweight underrepresented groups, e.g. weight_fn = lambda y: tf.where(y == 0, 1.0, 2.0)
    return dataset.map(lambda x, y: (x, y, weight_fn(y)))
```
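Keras can also apply the reweighting for you via `class_weight`; a sketch deriving weights inversely proportional to class frequency (`X_train` and `y_train` are assumed arrays with integer labels):
```python
import numpy as np
counts = np.bincount(y_train)
# Weight each class by total / (num_classes * class_count)
class_weight = {cls: len(y_train) / (len(counts) * c) for cls, c in enumerate(counts)}
model.fit(X_train, y_train, class_weight=class_weight, epochs=1)
```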
152. **How do you implement differential privacy in TensorFlow?**
Clips per-example gradients and adds calibrated noise via TensorFlow Privacy (exact import paths vary by version).
```python
from tensorflow_privacy import DPKerasAdamOptimizer
optimizer = DPKerasAdamOptimizer(l2_norm_clip=1.0, noise_multiplier=1.0,
                                 num_microbatches=1, learning_rate=0.01)
```

153. **Write a function to assess model fairness in TensorFlow.**
Computes per-group accuracy (assumes `outputs` are predicted labels).
```python
def fairness_metrics(outputs, groups, targets):
    return {g.numpy(): tf.reduce_mean(tf.cast(
                tf.boolean_mask(outputs, groups == g) == tf.boolean_mask(targets, groups == g),
                tf.float32)).numpy()
            for g in tf.unique(groups)[0]}
```

154. **How do you ensure energy-efficient TensorFlow training?**
Uses mixed precision to cut compute and memory cost.
```python
def efficient_training(model, dataset):
    tf.keras.mixed_precision.set_global_policy('mixed_float16')
    return model.fit(dataset, epochs=1)
```

155. **Write a function to audit TensorFlow model decisions.**
Logs predictions and inputs.
```python
import logging
def audit_predictions(model, inputs, outputs):
    logging.basicConfig(filename='audit.log', level=logging.INFO)
    for i, o in zip(inputs, outputs):
        logging.info(f"Input: {i.numpy().tolist()}, Output: {o.numpy().tolist()}")
```

156. **How do you visualize fairness metrics in TensorFlow?**
Plots group-wise performance.
```python
import matplotlib.pyplot as plt
def plot_fairness_metrics(metrics):
    plt.bar(list(metrics.keys()), list(metrics.values()))
    plt.savefig('fairness_metrics_plot.png')
```

### Advanced

157. **Write a function to implement fairness-aware training in TensorFlow.**
Uses adversarial debiasing: an adversary learns to predict the group from the model's outputs, and the model is penalized when it succeeds.
```python
def fairness_training(model, adv_model, dataset, loss_fn, optimizer, adv_optimizer):
    for inputs, targets, groups in dataset:
        # 1) Train the adversary to recover the group from model outputs
        with tf.GradientTape() as adv_tape:
            outputs = model(inputs, training=True)
            adv_loss = loss_fn(groups, adv_model(outputs, training=True))
        adv_grads = adv_tape.gradient(adv_loss, adv_model.trainable_variables)
        adv_optimizer.apply_gradients(zip(adv_grads, adv_model.trainable_variables))
        # 2) Train the model to fit targets while fooling the adversary
        with tf.GradientTape() as tape:
            outputs = model(inputs, training=True)
            adv_loss = loss_fn(groups, adv_model(outputs, training=True))
            loss = loss_fn(targets, outputs) - adv_loss
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

158. **How do you implement privacy-preserving inference in TensorFlow?**
Adds input noise as a lightweight stand-in; fully private inference requires encrypted-computation libraries.
```python
def private_inference(model, inputs):
    noisy = inputs + tf.random.normal(tf.shape(inputs), mean=0, stddev=0.1)
    return model.predict(noisy)
```

159. **Write a function to monitor ethical risks in TensorFlow models.**
Tracks bias and fairness metrics.
```python
import logging
def monitor_ethics(outputs, groups, targets):
    logging.basicConfig(filename='ethics.log', level=logging.INFO)
    metrics = fairness_metrics(outputs, groups, targets)
    logging.info(f"Fairness metrics: {metrics}")
    return metrics
```

160. **How do you implement explainable AI with TensorFlow?**
Uses attribution methods.
```python
from tf_explain.core.integrated_gradients import IntegratedGradients
def explainable_model(model, sample):
    explainer = IntegratedGradients()
    grid = explainer.explain((sample, None), model, class_index=0)
    return grid
```
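A short usage sketch rendering the attribution map returned above (`sample` is an assumed single preprocessed example):
```python
import matplotlib.pyplot as plt
grid = explainable_model(model, sample)
plt.imshow(grid, cmap='viridis')
plt.colorbar()
plt.savefig('integrated_gradients_plot.png')
```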
161. **Write a function to ensure regulatory compliance in TensorFlow.**
Logs model metadata.
```python
import json
def log_compliance(model, metadata):
    with open('compliance.json', 'w') as f:
        json.dump({'model': str(model), 'metadata': metadata}, f)
```

162. **How do you implement ethical model evaluation in TensorFlow?**
Assesses fairness and robustness on a labeled evaluation set.
```python
def ethical_evaluation(model, dataset, outputs, groups, targets, loss_fn):
    fairness = fairness_metrics(outputs, groups, targets)
    robustness = evaluate_model(model, dataset, loss_fn)
    return {'fairness': fairness, 'robustness': robustness}
```

## Integration with Other Libraries

### Basic

163. **How do you integrate TensorFlow with NumPy?**
Converts between tensors and arrays.
```python
import numpy as np
array = np.array([1, 2, 3])
tensor = tf.convert_to_tensor(array)
```

164. **How do you integrate TensorFlow with Pandas?**
Prepares DataFrame data for models.
```python
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3]})
tensor = tf.convert_to_tensor(df['A'].values)
```

165. **How do you use TensorFlow with Matplotlib?**
Visualizes model outputs.
```python
import matplotlib.pyplot as plt
def plot_data(tensor):
    plt.plot(tensor.numpy())
    plt.savefig('data_plot.png')
```

166. **How do you integrate TensorFlow with Scikit-learn?**
Combines ML and DL workflows.
```python
from sklearn.metrics import accuracy_score
def evaluate_with_sklearn(model, dataset):
    preds, targets = [], []
    for inputs, labels in dataset:
        preds.extend(tf.argmax(model(inputs, training=False), axis=1).numpy())
        targets.extend(labels.numpy())
    return accuracy_score(targets, preds)
```

167. **How do you use TensorFlow with Keras?**
Leverages the Keras API for simplicity.
```python
from tensorflow.keras import models, layers
model = models.Sequential([layers.Dense(10)])
```

168. **How do you integrate TensorFlow with Hugging Face Transformers?**
Uses pre-trained NLP models.
```python
from transformers import TFBertModel
model = TFBertModel.from_pretrained('bert-base-uncased')
```

### Intermediate

169. **Write a function to integrate TensorFlow with Pandas for preprocessing.**
Converts DataFrames to tensors.
```python
def preprocess_with_pandas(df, columns):
    return tf.convert_to_tensor(df[columns].values, dtype=tf.float32)
```

170. **How do you integrate TensorFlow with Dask for large-scale data?**
Processes big data partition by partition.
```python
import dask.dataframe as dd
def dask_to_tensorflow(pdf, columns, npartitions=4):
    ddf = dd.from_pandas(pdf, npartitions=npartitions)
    tensors = [tf.convert_to_tensor(part.compute()[columns].values, dtype=tf.float32)
               for part in ddf.partitions]
    return tf.concat(tensors, axis=0)
```
--------------------------------------------------------------------------------