├── .DS_Store
├── .gitattributes
├── .gitignore
├── .idea
    ├── $CACHE_FILE$
    ├── $PRODUCT_WORKSPACE_FILE$
    ├── OUCML.iml
    ├── codeStyles
    │   └── Project.xml
    ├── csv-plugin.xml
    ├── dbnavigator.xml
    ├── dictionaries
    ├── inspectionProfiles
    │   └── profiles_settings.xml
    ├── misc.xml
    ├── modules.xml
    ├── vcs.xml
    └── workspace.xml
├── Attention
    └── test
├── AutoML
    ├── README.md
    └── darts-master
    │   ├── LICENSE
    │   ├── README.md
    │   ├── cnn
    │       ├── architect.py
    │       ├── genotypes.py
    │       ├── model.py
    │       ├── model_search.py
    │       ├── operations.py
    │       ├── test.py
    │       ├── test_imagenet.py
    │       ├── train.py
    │       ├── train_imagenet.py
    │       ├── train_search.py
    │       ├── utils.py
    │       └── visualize.py
    │   ├── img
    │       ├── cifar10.png
    │       ├── darts.png
    │       ├── imagenet.png
    │       ├── progress_convolutional.gif
    │       ├── progress_convolutional_normal.gif
    │       ├── progress_convolutional_reduce.gif
    │       ├── progress_recurrent.gif
    │       └── ptb.png
    │   └── rnn
    │       ├── architect.py
    │       ├── data.py
    │       ├── genotypes.py
    │       ├── model.py
    │       ├── model_search.py
    │       ├── test.py
    │       ├── train.py
    │       ├── train_search.py
    │       ├── utils.py
    │       └── visualize.py
├── BOOK
    ├──  OpenCV中文版.pdf
    ├── CUDA编程入门.pdf
    ├── C和指针.pdf
    ├── David Beazley:Python Cookbook_2013 (第3版).pdf
    ├── Dive-into-DL-PyTorch.pdf
    ├── Dive-into-DL-Pytorch.md
    ├── Jakub Langr, Vladimir Bok - GANs in Action_ Deep learning with Generative Adversarial Networks (2019, Manning Publications).epub
    ├── Machine Learning Books you should read in 2020.md
    ├── batch-markdown-to-pdf.py
    ├── deep_learning.pdf
    ├── pytorch速查表.pdf
    ├── template.tex
    └── 《Python Cookbook》第三版中文v1.0.2.pdf
├── GAN
    ├──  Wasserstein_GAN
    │   ├── README.md
    │   ├── WGAN-tensorflow-master
    │   │   ├── README.md
    │   │   ├── WGAN.ipynb
    │   │   ├── generate_from_ckpt.ipynb
    │   │   ├── get_svhn.py
    │   │   └── load_svhn.py
    │   ├── wgan
    │   │   ├── WGAN_CIFAR10.py
    │   │   ├── images
    │   │   │   └── .gitignore
    │   │   ├── readme.md
    │   │   ├── saved_model
    │   │   │   └── .gitignore
    │   │   ├── wgan.py
    │   │   ├── wgan_fashion_mnist.py
    │   │   └── wgan_mnist.py
    │   └── wgan_gp
    │   │   ├── images
    │   │       ├── fashion_mnist_29700.png
    │   │       ├── fashion_mnist_29800.png
    │   │       ├── fashion_mnist_29900.png
    │   │       ├── mnist_0.png
    │   │       ├── mnist_100.png
    │   │       ├── mnist_200.png
    │   │       ├── mnist_29600.png
    │   │       ├── mnist_29700.png
    │   │       ├── mnist_29800.png
    │   │       └── mnist_29900.png
    │   │   ├── readme.md
    │   │   └── wgan_gp.py
    ├── ACGAN-PyTorch-master
    │   ├── LICENSE
    │   ├── README.md
    │   ├── __pycache__
    │   │   ├── folder.cpython-36.pyc
    │   │   ├── network.cpython-36.pyc
    │   │   └── utils.cpython-36.pyc
    │   ├── fake_samples_epoch_482.png
    │   ├── fake_samples_epoch_498.png
    │   ├── figs
    │   │   ├── architecture.png
    │   │   ├── fake_samples_epoch_470.png
    │   │   └── fake_samples_epoch_499.png
    │   ├── folder.py
    │   ├── main.py
    │   ├── network.py
    │   ├── real_samples.png
    │   └── utils.py
    ├── DEHAZE-AOE_torch
    │   ├── README.md
    │   ├── dataloader.py
    │   ├── dehaze.py
    │   ├── net.py
    │   ├── results
    │   │   ├── 3921554452543_.pic.jpg
    │   │   └── WechatIMG391.jpeg
    │   ├── samples
    │   │   └── dummyText.txt
    │   ├── snapshots
    │   │   └── dehazer.pth
    │   ├── test_images
    │   │   ├── 3921554452543_.pic.jpg
    │   │   └── WechatIMG391.jpeg
    │   └── train.py
    ├── README.md
    ├── Self_attention_GAN_tensorflow
    │   ├── 1803.08664.pdf
    │   ├── LICENSE
    │   ├── README.md
    │   ├── SAGAN.py
    │   ├── SAGAN_train_08_07500.png
    │   ├── SAGAN_train_08_09000.png
    │   ├── __pycache__
    │   │   ├── SAGAN.cpython-36.pyc
    │   │   ├── ops.cpython-36.pyc
    │   │   └── utils.cpython-36.pyc
    │   ├── assests
    │   │   ├── celebA.png
    │   │   ├── framework.PNG
    │   │   └── result_.png
    │   ├── download.py
    │   ├── main.py
    │   ├── ops.py
    │   ├── readme_cy.md
    │   ├── readme_cy.md.tmp.html
    │   ├── results
    │   │   ├── SAGAN_celebA_hinge_128_128_True
    │   │   │   ├── SAGAN_test_0.png
    │   │   │   ├── SAGAN_test_1.png
    │   │   │   ├── SAGAN_test_2.png
    │   │   │   ├── SAGAN_test_3.png
    │   │   │   ├── SAGAN_test_4.png
    │   │   │   ├── SAGAN_test_5.png
    │   │   │   ├── SAGAN_test_6.png
    │   │   │   ├── SAGAN_test_7.png
    │   │   │   ├── SAGAN_test_8.png
    │   │   │   └── SAGAN_test_9.png
    │   │   └── SAGAN_cifar10_hinge_128_128_True
    │   │   │   ├── SAGAN_test_0.png
    │   │   │   ├── SAGAN_test_1.png
    │   │   │   ├── SAGAN_test_2.png
    │   │   │   ├── SAGAN_test_3.png
    │   │   │   ├── SAGAN_test_4.png
    │   │   │   ├── SAGAN_test_5.png
    │   │   │   ├── SAGAN_test_6.png
    │   │   │   ├── SAGAN_test_7.png
    │   │   │   ├── SAGAN_test_8.png
    │   │   │   └── SAGAN_test_9.png
    │   ├── samples
    │   │   └── SAGAN_img_align_celeba_wgan-gp_64_128_True
    │   │   │   ├── SAGAN_train_08_09000.png
    │   │   │   └── SAGAN_train_08_09500.png
    │   └── utils.py
    ├── data
    │   ├── download_cyclegan_dataset.sh
    │   └── download_pix2pix_dataset.sh
    ├── datadownloader
    │   ├── README.md
    │   ├── __pycache__
    │   │   ├── model.cpython-36.pyc
    │   │   └── utils.cpython-36.pyc
    │   ├── dataload.py
    │   ├── download.py
    │   ├── main.py
    │   ├── model.py
    │   └── utils.py
    ├── ebgan-master
    │   ├── CHANGELOG.md
    │   ├── LICENSE
    │   ├── README.md
    │   ├── mnist_ebgan_generate.py
    │   ├── mnist_ebgan_train.py
    │   ├── model.py
    │   └── png
    │   │   ├── sample.png
    │   │   └── sample_with_pt.png
    ├── self_attention _gan
    │   ├── 44310_D.pth
    │   ├── 44310_G.pth
    │   ├── README.md
    │   ├── __pycache__
    │   │   ├── data_loader.cpython-36.pyc
    │   │   ├── parameter.cpython-36.pyc
    │   │   ├── sagan_models.cpython-36.pyc
    │   │   ├── spectral.cpython-36.pyc
    │   │   ├── trainer.cpython-36.pyc
    │   │   └── utils.cpython-36.pyc
    │   ├── data_loader.py
    │   ├── download.sh
    │   ├── image
    │   │   ├── attn_gf1.png
    │   │   ├── attn_gf2.png
    │   │   ├── main_model.PNG
    │   │   ├── sagan_attn.png
    │   │   ├── sagan_celeb.png
    │   │   ├── sagan_lsun.png
    │   │   └── unnamed
    │   ├── main.py
    │   ├── models
    │   │   └── sagan_celeb
    │   │   │   ├── 44310_D.pth
    │   │   │   └── 44310_G.pth
    │   ├── parameter.py
    │   ├── parameter.pyc
    │   ├── readcy.md
    │   ├── sagan_models.py
    │   ├── sample
    │   │   └── sagan_celeb
    │   │   │   ├── 45800_fake.png
    │   │   │   ├── 45900_fake.png
    │   │   │   └── 46000_fake.png
    │   ├── spectral.py
    │   ├── trainer.py
    │   ├── trainer.pyc
    │   └── utils.py
    ├── srgan_celebA
    │   ├── README.md
    │   ├── __pycache__
    │   │   └── data_loader.cpython-36.pyc
    │   ├── data_loader.py
    │   ├── images
    │   │   └── celebA
    │   │   │   ├── 4950.png
    │   │   │   ├── 4950_lowres0.png
    │   │   │   ├── 4950_lowres1.png
    │   │   │   ├── 5000.png
    │   │   │   ├── 5000_lowres0.png
    │   │   │   └── 5000_lowres1.png
    │   └── srgan.py
    ├── wgan_gp
    │   ├── images
    │   │   ├── fashion_mnist_29700.png
    │   │   ├── fashion_mnist_29800.png
    │   │   ├── fashion_mnist_29900.png
    │   │   ├── mnist_0.png
    │   │   ├── mnist_100.png
    │   │   ├── mnist_200.png
    │   │   ├── mnist_29600.png
    │   │   ├── mnist_29700.png
    │   │   ├── mnist_29800.png
    │   │   └── mnist_29900.png
    │   ├── readme.md
    │   └── wgan_gp.py
    ├── 人脸还原SCFEGAN
    │   ├── LICENSE
    │   ├── README.md
    │   ├── __pycache__
    │   │   ├── model.cpython-36.pyc
    │   │   └── ops.cpython-36.pyc
    │   ├── ckpt
    │   │   └── SC-FEGAN.ckpt.index
    │   ├── demo.py
    │   ├── demo.yaml
    │   ├── imgs
    │   │   ├── GUI.gif
    │   │   ├── earring.jpg
    │   │   ├── face_edit.jpg
    │   │   ├── restoration.jpg
    │   │   ├── restoration2.jpg
    │   │   └── teaser.jpg
    │   ├── model.py
    │   ├── ops.py
    │   ├── tmp.jpg
    │   ├── ui
    │   │   ├── __pycache__
    │   │   │   ├── mouse_event.cpython-36.pyc
    │   │   │   └── ui.cpython-36.pyc
    │   │   ├── mouse_event.py
    │   │   └── ui.py
    │   └── utils
    │   │   ├── __pycache__
    │   │       └── config.cpython-36.pyc
    │   │   └── config.py
    └── 生成对抗网络综述.md
├── ML
    ├── nndl-book.pdf
    ├── train.csv
    └── 神经网络与深度学习-3小时.pdf
├── One_Day_One_GAN
    ├── Readme.md
    ├── day1
    │   ├── 1406.2661.pdf
    │   ├── PPT.md
    │   ├── PPT.pdf
    │   ├── Readme.md
    │   ├── gan
    │   │   ├── gan.py
    │   │   ├── images
    │   │   │   └── .gitignore
    │   │   ├── pytorch_show_1.ipynb
    │   │   └── saved_model
    │   │   │   └── .gitignore
    │   └── gan_t
    │   │   └── gan.py
    ├── day10
    │   ├── infoGAN.pdf
    │   ├── infogan
    │   │   ├── images
    │   │   │   └── .gitignore
    │   │   ├── infogan.py
    │   │   └── saved_model
    │   │   │   └── .gitignore
    │   ├── md2all.html
    │   ├── md2all.md
    │   ├── md2zh.py
    │   ├── readme.docx
    │   ├── readme.md
    │   ├── zhihu.md
    │   └── zhihu2.docx
    ├── day11
    │   ├── pytorch_show_1.ipynb
    │   └── readme.md
    ├── day12
    │   └── began
    │   │   └── began.py
    ├── day13
    │   ├── 2.md
    │   ├── cyclegan
    │   │   ├── cyclegan.py
    │   │   ├── data
    │   │   │   ├── download_cyclegan_dataset.sh
    │   │   │   └── download_pix2pix_dataset.sh
    │   │   ├── datasets.py
    │   │   ├── horse2zebra.gif
    │   │   ├── models.py
    │   │   ├── test.py
    │   │   └── utils.py
    │   ├── readme.md
    │   └── to_zhihu.py
    ├── day14
    │   └── pix2pix
    │   │   ├── datasets.py
    │   │   ├── models.py
    │   │   └── pix2pix.py
    ├── day15
    │   ├── MNIST Convolutional VAE with Label Input.ipynb
    │   ├── MNIST+Letters Convolutional VAE with Label Input.ipynb
    │   ├── VAE 报告.md
    │   ├── VAE 报告.pdf
    │   ├── data
    │   │   ├── processed
    │   │   │   ├── test.pt
    │   │   │   └── training.pt
    │   │   └── raw
    │   │   │   ├── t10k-images-idx3-ubyte
    │   │   │   ├── t10k-labels-idx1-ubyte
    │   │   │   ├── train-images-idx3-ubyte
    │   │   │   └── train-labels-idx1-ubyte
    │   ├── model.py
    │   ├── requirements.txt
    │   ├── vae.py
    │   ├── vae_cnn_mnist.h5
    │   └── 编码的应用--VAE.md
    ├── day16
    │   ├── 1904.09709.pdf
    │   ├── md2zh.py
    │   ├── readme.md
    │   ├── stgan_slides.md
    │   └── stgan_slides.pdf
    ├── day17
    │   └── readme.md
    ├── day18
    │   ├── Self-Supervised GANs via Auxiliary Rotation Loss.pdf
    │   ├── Self-Supervised-GANs-master
    │   │   ├── Model.py
    │   │   ├── README.md
    │   │   ├── download.py
    │   │   ├── main.py
    │   │   ├── ops.py
    │   │   └── utils.py
    │   └── readme.md
    ├── day19
    │   ├── # 联合子带学习的CliqueNet在小波域上的图像超分辨复原.md
    │   └── readme.md
    ├── day2
    │   ├── 1511.06434.pdf
    │   ├── dcgan
    │   │   ├── dcgan.py
    │   │   ├── images
    │   │   │   └── .gitignore
    │   │   └── saved_model
    │   │   │   └── .gitignore
    │   └── readme.md
    ├── day20
    │   └── readme.md
    ├── day21
    │   ├── 深入理解风格迁移三部曲(一)--UNIT.md
    │   └── 深入理解风格迁移三部曲(一)--UNIT.pdf
    ├── day22
    │   └── 深入理解风格迁移三部曲(二)--MUNIT.md
    ├── day23
    │   ├── FUNIT_zhihu.md
    │   └── 深入理解风格迁移三部曲(三)--FUNIT.md
    ├── day24
    │   ├── Advanced Topics in GANs.md
    │   └── GAN-Coding Implementation.md
    ├── day3
    │   ├── 1411.1784.pdf
    │   ├── cgan
    │   │   ├── cgan.py
    │   │   ├── images
    │   │   │   └── .gitignore
    │   │   └── saved_model
    │   │   │   └── .gitignore
    │   └── readme.md
    ├── day4
    │   ├── 1610.09585.pdf
    │   ├── acgan
    │   │   ├── acgan.py
    │   │   ├── images
    │   │   │   └── .gitignore
    │   │   └── saved_model
    │   │   │   └── .gitignore
    │   └── readme.md
    ├── day5
    │   ├── 1701.07875.pdf
    │   ├── readme.md
    │   └── wgan
    │   │   ├── images
    │   │       └── .gitignore
    │   │   ├── saved_model
    │   │       └── .gitignore
    │   │   └── wgan.py
    ├── day6
    │   ├── 1609.04802.pdf
    │   ├── readme.md
    │   ├── srgan
    │   │   ├── Zoom_To_Learn_CVPR2019.pdf
    │   │   ├── data_loader.py
    │   │   ├── images
    │   │   │   └── .gitignore
    │   │   ├── saved_model
    │   │   │   └── .gitignore
    │   │   └── srgan.py
    │   └── srgan_pytorch
    │   │   ├── PSNR&&SSIM.md
    │   │   ├── PSRN_SSIM.py
    │   │   ├── datasets.py
    │   │   ├── esrgan.py
    │   │   ├── models.py
    │   │   ├── spectral.py
    │   │   └── test_on_image.py
    ├── day7
    │   ├── 1809.00219 (1).pdf
    │   ├── esrgan
    │   │   ├── datasets.py
    │   │   ├── esrgan.py
    │   │   ├── models.py
    │   │   ├── test.py
    │   │   └── test_on_image.py
    │   ├── esrgan_slides.md
    │   ├── esrgan_slides.pdf
    │   └── readme.md
    ├── day8
    │   ├── 1711.10098.pdf
    │   ├── Attentive GAN.docx
    │   └── readme.md
    └── day9
    │   ├── WechatIMG399.jpeg
    │   ├── WechatIMG400.jpeg
    │   └── readme.md
├── README.md
├── Regularization
    ├── Cutout-master
    │   ├── README.md
    │   ├── images
    │   │   └── cutout_on_cifar10.jpg
    │   ├── model
    │   │   ├── resnet.py
    │   │   └── wide_resnet.py
    │   ├── shake-shake
    │   │   ├── README.md
    │   │   ├── cifar10.lua
    │   │   ├── cifar100.lua
    │   │   └── transforms.lua
    │   ├── train.py
    │   └── util
    │   │   ├── cutout.py
    │   │   └── misc.py
    └── README.md
├── md2zh.py
├── paper_of_NLP
    ├── attention_is_all_your_need.pdf
    └── bert.pdf
├── results
    ├── 3921554452543_.pic.jpg
    └── WechatIMG391.jpeg
├── 代码技巧汇总
    ├── 1. 仿射变换(affine transformation).md
    ├── 10 Python Tips and Tricks You Should Learn Today.md
    ├── Active Learning with PyTorch.md
    ├── CVPR 2020: The Top Object Detection Papers.md
    ├── From GAN to WGAN.md
    ├── GMMN 网络,GAN变体.md
    ├── GNN综述.md
    ├── Get started with PyTorch, Cloud TPUs, and Colab.md
    ├── ICCV_GAN_SOTA.md
    ├── Image-to-Image papers.md
    ├── Image-to-Image 的论文汇总(含 GitHub 代码).md
    ├── Isolating Sources of Disentanglement in VAEs.md
    ├── Latex simbols table.pdf
    ├── ML's loss function conclusion.md
    ├── PSNR&&SSIM.md
    ├── Perceptual GAN for Small Object Detection阅读笔记.md
    ├── Protecting networks against adversial attacks.pdf
    ├── PyTorch常用代码段整理合集.md
    ├── ROI pooling&ROIAlign.md
    ├── SPP.md
    ├── Spectral Normalization 谱归一化.md
    ├── TF_GAN_util.py
    ├── Tensor to img && imge to tensor.md
    ├── acm-book.pdf
    ├── distribution_show.md
    ├── how to write research proposal.md
    ├── imaaug 数据增强大杀器.md
    ├── img2Latex simplified document.md
    ├── linux 技巧.md
    ├── linux显卡驱动修复.md
    ├── opencv-python极速入门.md
    ├── pair_dataset.py
    ├── python图像数据增强——imgaug.md
    ├── python编程技巧.md
    ├── pytorch_psnr_ssim.py
    ├── pytorch加速训练大数据.md
    ├── pytorch学习率设置.md
    ├── pytorch常用代码.md
    ├── pytorch网络可视化.md
    ├── pytorch解冻.md
    ├── pytorch训练.md
    ├── x 1import PIL.md
    ├── 【论文解读】图像视频去噪中的Deformable Kernels.md
    ├── 不均衡样本loss.md
    ├── 为什么要使得AI System具备可解释性呢?.md
    ├── 判别器训练.md
    ├── 动手推导Self-attention-译文.md
    ├── 动手推导Self-attention.md
    ├── 图像数据增强.md
    ├── 图片处理.md
    ├── 图的基本概念.md
    ├── 密集目标检测论文及代码.md
    ├── 常见排序python.md
    ├── 数据增强.md
    ├── 最便捷的神经网络可视化工具之一--Flashtorch.md
    ├── 深入理解风格迁移三部曲(二)--MUNIT.md
    ├── 生成对抗网络-styleGAN&styleGAN_v2.md
    ├── 目标检测前世今生.md
    ├── 知乎导入公式.md
    ├── 矩阵总结.md
    ├── 网络FLOPS计算.md
    ├── 论文写作.md
    ├── 论文神经网络示意图.md
    ├── 超分辨率baseline.md
    ├── 超分辨率代码数据集合集.md
    ├── 超分辨率技术(Super-Resolution, SR)是指从观测到的低分辨率图像重建出相应的高分辨率图像,在监控设备、卫星图像和医学影像等领域都有重要的应用价值。.md
    ├── 超分辨率方向综述.md
    ├── 超分辨率的损失函数总结.md
    ├── 送给研一入学的你们—炼丹师入门手册.md
    ├── 送给研一入学的你们—炼丹师入门手册.pdf
    ├── 防御对抗性样本攻击.md
    └── 风格迁移汇总.md
├── 代码速查表
    ├── O().png
    ├── README.md
    ├── TF.png
    ├── bokeh.png
    ├── data vis.jpeg
    ├── datastruct.png
    ├── df.jpeg
    ├── df2.jpeg
    ├── gg.jpeg
    ├── keras.jpeg
    ├── liner.png
    ├── matplot.png
    ├── np.png
    ├── pd.png
    ├── sci.png
    ├── scikit.png
    ├── scipy.png
    ├── sk.png
    ├── sort.png
    ├── spark.jpeg
    ├── 网络.png
    └── 这是一张机器&深度学习代码速查表.pdf
└── 论文推荐
    ├── Marcus
        ├── SR
        │   ├── AIM 2019 Challenge on Video Extreme Super-Resolution- Methods and Results.pdf
        │   ├── Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation.pdf
        │   └── SinGAN- Learning a Generative Model from a Single Natural Image.pdf
        ├── paper_of_GAN
        │   ├── 08358814.pdf
        │   ├── 1603.08155v1.pdf
        │   ├── 1609.04802v5.pdf
        │   ├── 1610.04490.pdf
        │   ├── 1611.04076.pdf
        │   ├── 1611.07004.pdf
        │   ├── 1703.10593.pdf
        │   ├── 1704.00028.pdf
        │   ├── 1706.04983.pdf
        │   ├── 1706.08224v2 (1).pdf
        │   ├── 1710.04026v2.pdf
        │   ├── 1710.10196.pdf
        │   ├── 1711.07064.pdf
        │   ├── 1711.11585.pdf
        │   ├── 1802.05957.pdf
        │   ├── 1803.04189.pdf
        │   ├── 1804.02815.pdf
        │   ├── 1804.02900v2.pdf
        │   ├── 1807.00734.pdf
        │   ├── 1807.04720.pdf
        │   ├── 1809.02983.pdf
        │   ├── 1903.02271v1.pdf
        │   ├── 1903.09814v2.pdf
        │   ├── 1904.04514.pdf
        │   ├── 1904.08118v3.pdf
        │   ├── 1905.01723.pdf
        │   ├── 1906.01529.pdf
        │   ├── 1907.10107.pdf
        │   ├── 1908.03826.pdf
        │   ├── 1909.11573.pdf
        │   ├── 1909.11856.pdf
        │   ├── 2019-05-07.pdf
        │   ├── A Deep Journey into Super-resolution-A Survey.pdf
        │   ├── Bau_et_al_Semantic_Photo_Manipulation_preprint.pdf
        │   ├── Bayesian Generative Active Deep learning.pdf
        │   ├── CVPR2019-Filter Pruning via Geometric Median.pdf
        │   ├── Expectation-Maximization Attention Networks for Semantic Segmentation.pdf
        │   ├── Generating Classification Weights with GNN Denoising Autoencoders for Few-Shot Learning.pdf
        │   ├── Generative Adversarial Networks_A Survey and Taxonomy.pdf
        │   ├── Han_umd_0117E_19307.pdf
        │   ├── Li_Perceptual_Generative_Adversarial_CVPR_2017_paper.pdf
        │   ├── Lifelong GAN Continual Learning for Conditional Image Generation .pdf
        │   ├── MODE REGULARIZED GENERATIVE ADVERSARIAL.pdf
        │   ├── NIPS2018-Discrimination-aware Channel Pruning.pdf
        │   ├── Non-local Neural Networks.pdf
        │   ├── Zhaoyi_Yan_Shift-Net_Image_Inpainting_ECCV_2018_paper.pdf
        │   ├── Zheng_Looking_for_the_Devil_in_the_Details_Learning_Trilinear_Attention_CVPR_2019_paper.pdf
        │   ├── deblur_cvpr19.pdf
        │   ├── funit-190708162302.pdf
        │   └── paper.pdf
        ├── 写给一位陌生人的一封信.jpg
        ├── 写给一位陌生人的一封信.md
        ├── 写给一位陌生人的一封信.pdf
        └── 深入理解风格迁移三部曲(三)--FUNIT.md
    ├── bookmarks_2019_4_18.html
    ├── readme.md
    ├── 工作报告
        ├── Denoise_underwater_实验.docx
        ├── Denoise_underwater_实验.html
        ├── Denoise_underwater_实验.md
        ├── Denoise_underwater_实验.pdf
        ├── Denoise_实验_1126.md
        ├── Dive-into-DL-Pytorch.pdf
        ├── IJCAI.md
        ├── RSR_补充实验.pdf
        ├── TIP_补充实验.md
        ├── TIP_补充实验.pdf
        ├── 人工智能发展的现状与反思3.pdf
        ├── 周报模板.md
        ├── 大三上期末总复习.md
        ├── 定制类.md
        ├── 工作报告10.1-MARCUS.md
        ├── 工作报告10.14-MARCUS.md
        ├── 工作报告10.21-MARCUS.md
        ├── 工作报告10.8-MARCUS.md
        ├── 工作报告11.27.md
        ├── 工作报告9.16-MARCU.md
        ├── 工作报告9.16-MARCU.pdf
        ├── 工作报告9.2-MARCUS.md
        ├── 工作报告9.2-MARCUS.pdf
        ├── 工作报告9.23-MARCUS.md
        ├── 工作报告9.23-MARCUS.pdf
        ├── 工作报告9.9-MARCUS.md
        ├── 手写一个 LIST.md
        └── 生成对抗网络GAN如果只训练一个网络会有效果么?.md
    └── 报告
        ├── 1905.01723.pdf
        ├── FUNIT_zhihu.md
        ├── Shaham_SinGAN_Learning_a_Generative_Model_From_a_Single_Natural_Image_ICCV_2019_paper.pdf
        ├── introduction_GAN.md
        ├── introduction_GAN.pdf
        ├── weekly_slides.md
        ├── weekly_slides.pdf
        ├── 深入理解风格迁移三部曲(三)--FUNIT.md
        └── 深入理解风格迁移三部曲(二)--MUNIT_tmp.html


/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/.DS_Store


--------------------------------------------------------------------------------
/.gitattributes:
--------------------------------------------------------------------------------
1 | *.html linguist-language=py
2 | *.md linguist-language=py
3 | *.ipynb linguist-language=py
4 | 


--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
 1 | 
 2 | **DS_Store
 3 | GAN/Self_attention_GAN_tensorflow/samples/SAGAN_cifar10_hinge_128_128_True/SAGAN_epoch09_visualize.png
 4 | *.png
 5 | *.png
 6 | *.png
 7 | GAN/Self_attention_GAN_tensorflow/.DS_Store
 8 | .DS_Store
 9 | GAN/Self_attention_GAN_tensorflow/samples/SAGAN_cifar10_hinge_128_128_True/SAGAN_epoch09_visualize.png
10 | GAN/Self_attention_GAN_tensorflow/samples/SAGAN_cifar10_hinge_128_128_True/SAGAN_train_00_00500.png
11 | GAN/ESRGAN/
12 | GAN/SC-FEGAN/
13 | GAN/人脸还原SCFEGAN/ckpt/SC-FEGAN.ckpt.data-00000-of-00001
14 | GAN/.DS_Store
15 | .DS_Store
16 | .DS_Store
17 | .DS_Store
18 | .DS_Store
19 | .DS_Store
20 | .DS_Store
21 | 论文推荐/工作报告/CVPR实验.md
22 | 论文推荐/工作报告/CVPR实验.pdf
23 | .DS_Store
24 | 论文推荐/工作报告/Denoise_underwater_实验.pdf
25 | 论文推荐/工作报告/Denoise_underwater_实验.pdf
26 | .DS_Store
27 | .DS_Store
28 | .idea/deployment.xml
29 | .idea/misc.xml
30 | .idea/OUCML.iml
31 | .idea/vcs.xml
32 | .idea/webServers.xml
33 | .idea/workspace.xml
34 | *.xml
35 | .idea/workspace.xml
36 | .DS_Store
37 | 论文推荐/工作报告/AAAI2020实验_v1.md
38 | 论文推荐/工作报告/AAAI2020实验_v2.md
39 | 


--------------------------------------------------------------------------------
/.idea/$CACHE_FILE$:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <project version="4">
3 |   <component name="NodePackageJsonFileManager">
4 |     <packageJsonPaths />
5 |   </component>
6 | </project>


--------------------------------------------------------------------------------
/.idea/$PRODUCT_WORKSPACE_FILE$:
--------------------------------------------------------------------------------
 1 | <?xml version="1.0" encoding="UTF-8"?>
 2 | <project version="4">
 3 |   <component name="masterDetails">
 4 |     <states>
 5 |       <state key="ScopeChooserConfigurable.UI">
 6 |         <settings>
 7 |           <splitter-proportions>
 8 |             <option name="proportions">
 9 |               <list>
10 |                 <option value="0.25156444" />
11 |               </list>
12 |             </option>
13 |           </splitter-proportions>
14 |         </settings>
15 |       </state>
16 |     </states>
17 |   </component>
18 | </project>


--------------------------------------------------------------------------------
/.idea/OUCML.iml:
--------------------------------------------------------------------------------
 1 | <?xml version="1.0" encoding="UTF-8"?>
 2 | <module type="PYTHON_MODULE" version="4">
 3 |   <component name="NewModuleRootManager">
 4 |     <content url="file://$MODULE_DIR$" />
 5 |     <orderEntry type="jdk" jdkName="Python 3.7 (OUCML)" jdkType="Python SDK" />
 6 |     <orderEntry type="sourceFolder" forTests="false" />
 7 |   </component>
 8 |   <component name="ReSTService">
 9 |     <option name="workdir" value="$MODULE_DIR$" />
10 |     <option name="DOC_DIR" value="$MODULE_DIR$" />
11 |   </component>
12 | </module>


--------------------------------------------------------------------------------
/.idea/codeStyles/Project.xml:
--------------------------------------------------------------------------------
 1 | <component name="ProjectCodeStyleConfiguration">
 2 |   <code_scheme name="Project" version="173">
 3 |     <DBN-PSQL>
 4 |       <case-options enabled="true">
 5 |         <option name="KEYWORD_CASE" value="lower" />
 6 |         <option name="FUNCTION_CASE" value="lower" />
 7 |         <option name="PARAMETER_CASE" value="lower" />
 8 |         <option name="DATATYPE_CASE" value="lower" />
 9 |         <option name="OBJECT_CASE" value="preserve" />
10 |       </case-options>
11 |       <formatting-settings enabled="false" />
12 |     </DBN-PSQL>
13 |     <DBN-SQL>
14 |       <case-options enabled="true">
15 |         <option name="KEYWORD_CASE" value="lower" />
16 |         <option name="FUNCTION_CASE" value="lower" />
17 |         <option name="PARAMETER_CASE" value="lower" />
18 |         <option name="DATATYPE_CASE" value="lower" />
19 |         <option name="OBJECT_CASE" value="preserve" />
20 |       </case-options>
21 |       <formatting-settings enabled="false">
22 |         <option name="STATEMENT_SPACING" value="one_line" />
23 |         <option name="CLAUSE_CHOP_DOWN" value="chop_down_if_statement_long" />
24 |         <option name="ITERATION_ELEMENTS_WRAPPING" value="chop_down_if_not_single" />
25 |       </formatting-settings>
26 |     </DBN-SQL>
27 |     <DBN-PSQL>
28 |       <case-options enabled="true">
29 |         <option name="KEYWORD_CASE" value="lower" />
30 |         <option name="FUNCTION_CASE" value="lower" />
31 |         <option name="PARAMETER_CASE" value="lower" />
32 |         <option name="DATATYPE_CASE" value="lower" />
33 |         <option name="OBJECT_CASE" value="preserve" />
34 |       </case-options>
35 |       <formatting-settings enabled="false" />
36 |     </DBN-PSQL>
37 |     <DBN-SQL>
38 |       <case-options enabled="true">
39 |         <option name="KEYWORD_CASE" value="lower" />
40 |         <option name="FUNCTION_CASE" value="lower" />
41 |         <option name="PARAMETER_CASE" value="lower" />
42 |         <option name="DATATYPE_CASE" value="lower" />
43 |         <option name="OBJECT_CASE" value="preserve" />
44 |       </case-options>
45 |       <formatting-settings enabled="false">
46 |         <option name="STATEMENT_SPACING" value="one_line" />
47 |         <option name="CLAUSE_CHOP_DOWN" value="chop_down_if_statement_long" />
48 |         <option name="ITERATION_ELEMENTS_WRAPPING" value="chop_down_if_not_single" />
49 |       </formatting-settings>
50 |     </DBN-SQL>
51 |   </code_scheme>
52 | </component>


--------------------------------------------------------------------------------
/.idea/csv-plugin.xml:
--------------------------------------------------------------------------------
 1 | <?xml version="1.0" encoding="UTF-8"?>
 2 | <project version="4">
 3 |   <component name="CsvFileAttributes">
 4 |     <option name="attributeMap">
 5 |       <map>
 6 |         <entry key="&lt;30c491c8-610a-4c82-a44d-207325070239&gt;/UnfreezeGAN_dense_SN_v2/statistics/srf_4_test_results.csv">
 7 |           <value>
 8 |             <Attribute>
 9 |               <option name="separator" value="&#9;" />
10 |             </Attribute>
11 |           </value>
12 |         </entry>
13 |       </map>
14 |     </option>
15 |   </component>
16 | </project>


--------------------------------------------------------------------------------
/.idea/dictionaries:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <project version="4">
3 |   <component name="ProjectDictionaryState">
4 |     <dictionary name="Macbook" />
5 |   </component>
6 | </project>


--------------------------------------------------------------------------------
/.idea/inspectionProfiles/profiles_settings.xml:
--------------------------------------------------------------------------------
1 | <component name="InspectionProjectProfileManager">
2 |   <settings>
3 |     <option name="PROJECT_PROFILE" />
4 |   </settings>
5 | </component>


--------------------------------------------------------------------------------
/.idea/misc.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <project version="4">
3 |   <component name="JavaScriptSettings">
4 |     <option name="languageLevel" value="ES6" />
5 |   </component>
6 |   <component name="ProjectRootManager" version="2" project-jdk-name="Python 3.7 (OUCML)" project-jdk-type="Python SDK" />
7 | </project>


--------------------------------------------------------------------------------
/.idea/modules.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <project version="4">
3 |   <component name="ProjectModuleManager">
4 |     <modules>
5 |       <module fileurl="file://$PROJECT_DIR$/.idea/OUCML.iml" filepath="$PROJECT_DIR$/.idea/OUCML.iml" />
6 |     </modules>
7 |   </component>
8 | </project>


--------------------------------------------------------------------------------
/.idea/vcs.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <project version="4">
3 |   <component name="VcsDirectoryMappings">
4 |     <mapping directory="$PROJECT_DIR$" vcs="Git" />
5 |   </component>
6 | </project>


--------------------------------------------------------------------------------
/Attention/test:
--------------------------------------------------------------------------------
1 | test
2 | 


--------------------------------------------------------------------------------
/AutoML/README.md:
--------------------------------------------------------------------------------
1 | # Differentiable Architecture Search
2 | Code accompanying the paper
3 | > [DARTS: Differentiable Architecture Search](https://arxiv.org/abs/1806.09055)\
4 | > Hanxiao Liu, Karen Simonyan, Yiming Yang.\
5 | > _arXiv:1806.09055_.\
6 | > Code: https://github.com/quark0/darts
7 | 
8 | 


--------------------------------------------------------------------------------
/AutoML/darts-master/cnn/visualize.py:
--------------------------------------------------------------------------------
 1 | import sys
 2 | import genotypes
 3 | from graphviz import Digraph
 4 | 
 5 | 
 6 | def plot(genotype, filename):
 7 |   g = Digraph(
 8 |       format='pdf',
 9 |       edge_attr=dict(fontsize='20', fontname="times"),
10 |       node_attr=dict(style='filled', shape='rect', align='center', fontsize='20', height='0.5', width='0.5', penwidth='2', fontname="times"),
11 |       engine='dot')
12 |   g.body.extend(['rankdir=LR'])
13 | 
14 |   g.node("c_{k-2}", fillcolor='darkseagreen2')
15 |   g.node("c_{k-1}", fillcolor='darkseagreen2')
16 |   assert len(genotype) % 2 == 0
17 |   steps = len(genotype) // 2
18 | 
19 |   for i in range(steps):
20 |     g.node(str(i), fillcolor='lightblue')
21 | 
22 |   for i in range(steps):
23 |     for k in [2*i, 2*i + 1]:
24 |       op, j = genotype[k]
25 |       if j == 0:
26 |         u = "c_{k-2}"
27 |       elif j == 1:
28 |         u = "c_{k-1}"
29 |       else:
30 |         u = str(j-2)
31 |       v = str(i)
32 |       g.edge(u, v, label=op, fillcolor="gray")
33 | 
34 |   g.node("c_{k}", fillcolor='palegoldenrod')
35 |   for i in range(steps):
36 |     g.edge(str(i), "c_{k}", fillcolor="gray")
37 | 
38 |   g.render(filename, view=True)
39 | 
40 | 
41 | if __name__ == '__main__':
42 |   if len(sys.argv) != 2:
43 |     print("usage:\n python {} ARCH_NAME".format(sys.argv[0]))
44 |     sys.exit(1)
45 | 
46 |   genotype_name = sys.argv[1]
47 |   try:
48 |     genotype = eval('genotypes.{}'.format(genotype_name))
49 |   except AttributeError:
50 |     print("{} is not specified in genotypes.py".format(genotype_name)) 
51 |     sys.exit(1)
52 | 
53 |   plot(genotype.normal, "normal")
54 |   plot(genotype.reduce, "reduction")
55 | 
56 | 


--------------------------------------------------------------------------------
/AutoML/darts-master/img/cifar10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/AutoML/darts-master/img/cifar10.png


--------------------------------------------------------------------------------
/AutoML/darts-master/img/darts.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/AutoML/darts-master/img/darts.png


--------------------------------------------------------------------------------
/AutoML/darts-master/img/imagenet.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/AutoML/darts-master/img/imagenet.png


--------------------------------------------------------------------------------
/AutoML/darts-master/img/progress_convolutional.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/AutoML/darts-master/img/progress_convolutional.gif


--------------------------------------------------------------------------------
/AutoML/darts-master/img/progress_convolutional_normal.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/AutoML/darts-master/img/progress_convolutional_normal.gif


--------------------------------------------------------------------------------
/AutoML/darts-master/img/progress_convolutional_reduce.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/AutoML/darts-master/img/progress_convolutional_reduce.gif


--------------------------------------------------------------------------------
/AutoML/darts-master/img/progress_recurrent.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/AutoML/darts-master/img/progress_recurrent.gif


--------------------------------------------------------------------------------
/AutoML/darts-master/img/ptb.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/AutoML/darts-master/img/ptb.png


--------------------------------------------------------------------------------
/AutoML/darts-master/rnn/genotypes.py:
--------------------------------------------------------------------------------
 1 | from collections import namedtuple
 2 | 
 3 | Genotype = namedtuple('Genotype', 'recurrent concat')
 4 | 
 5 | PRIMITIVES = [
 6 |     'none',
 7 |     'tanh',
 8 |     'relu',
 9 |     'sigmoid',
10 |     'identity'
11 | ]
12 | STEPS = 8
13 | CONCAT = 8
14 | 
15 | ENAS = Genotype(
16 |     recurrent = [
17 |         ('tanh', 0),
18 |         ('tanh', 1),
19 |         ('relu', 1),
20 |         ('tanh', 3),
21 |         ('tanh', 3),
22 |         ('relu', 3),
23 |         ('relu', 4),
24 |         ('relu', 7),
25 |         ('relu', 8),
26 |         ('relu', 8),
27 |         ('relu', 8),
28 |     ],
29 |     concat = [2, 5, 6, 9, 10, 11]
30 | )
31 | 
32 | DARTS_V1 = Genotype(recurrent=[('relu', 0), ('relu', 1), ('tanh', 2), ('relu', 3), ('relu', 4), ('identity', 1), ('relu', 5), ('relu', 1)], concat=range(1, 9))
33 | DARTS_V2 = Genotype(recurrent=[('sigmoid', 0), ('relu', 1), ('relu', 1), ('identity', 1), ('tanh', 2), ('sigmoid', 5), ('tanh', 3), ('relu', 5)], concat=range(1, 9))
34 | 
35 | DARTS = DARTS_V2
36 | 
37 | 


--------------------------------------------------------------------------------
/AutoML/darts-master/rnn/visualize.py:
--------------------------------------------------------------------------------
 1 | import sys
 2 | import genotypes
 3 | from graphviz import Digraph
 4 | 
 5 | 
 6 | def plot(genotype, filename):
 7 |   g = Digraph(
 8 |       format='pdf',
 9 |       edge_attr=dict(fontsize='20', fontname="times"),
10 |       node_attr=dict(style='filled', shape='rect', align='center', fontsize='20', height='0.5', width='0.5', penwidth='2', fontname="times"),
11 |       engine='dot')
12 |   g.body.extend(['rankdir=LR'])
13 | 
14 |   g.node("x_{t}", fillcolor='darkseagreen2')
15 |   g.node("h_{t-1}", fillcolor='darkseagreen2')
16 |   g.node("0", fillcolor='lightblue')
17 |   g.edge("x_{t}", "0", fillcolor="gray")
18 |   g.edge("h_{t-1}", "0", fillcolor="gray")
19 |   steps = len(genotype)
20 | 
21 |   for i in range(1, steps + 1):
22 |     g.node(str(i), fillcolor='lightblue')
23 | 
24 |   for i, (op, j) in enumerate(genotype):
25 |     g.edge(str(j), str(i + 1), label=op, fillcolor="gray")
26 | 
27 |   g.node("h_{t}", fillcolor='palegoldenrod')
28 |   for i in range(1, steps + 1):
29 |     g.edge(str(i), "h_{t}", fillcolor="gray")
30 | 
31 |   g.render(filename, view=True)
32 | 
33 | 
34 | if __name__ == '__main__':
35 |   if len(sys.argv) != 2:
36 |     print("usage:\n python {} ARCH_NAME".format(sys.argv[0]))
37 |     sys.exit(1)
38 | 
39 |   genotype_name = sys.argv[1]
40 |   try:
41 |     genotype = eval('genotypes.{}'.format(genotype_name))
42 |   except AttributeError:
43 |     print("{} is not specified in genotypes.py".format(genotype_name)) 
44 |     sys.exit(1)
45 | 
46 |   plot(genotype.recurrent, "recurrent")
47 | 
48 | 


--------------------------------------------------------------------------------
/BOOK/ OpenCV中文版.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/BOOK/ OpenCV中文版.pdf


--------------------------------------------------------------------------------
/BOOK/CUDA编程入门.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/BOOK/CUDA编程入门.pdf


--------------------------------------------------------------------------------
/BOOK/C和指针.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/BOOK/C和指针.pdf


--------------------------------------------------------------------------------
/BOOK/David Beazley:Python Cookbook_2013 (第3版).pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/BOOK/David Beazley:Python Cookbook_2013 (第3版).pdf


--------------------------------------------------------------------------------
/BOOK/Dive-into-DL-PyTorch.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/BOOK/Dive-into-DL-PyTorch.pdf


--------------------------------------------------------------------------------
/BOOK/Jakub Langr, Vladimir Bok - GANs in Action_ Deep learning with Generative Adversarial Networks (2019, Manning Publications).epub:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/BOOK/Jakub Langr, Vladimir Bok - GANs in Action_ Deep learning with Generative Adversarial Networks (2019, Manning Publications).epub


--------------------------------------------------------------------------------
/BOOK/batch-markdown-to-pdf.py:
--------------------------------------------------------------------------------
 1 | from pathlib import Path
 2 | import os
 3 | import glob
 4 | work_dir = Path.cwd()
 5 | 
 6 | export_pdf_dir = work_dir / 'pdf2'
 7 | print(export_pdf_dir)
 8 | if not export_pdf_dir.exists():
 9 |     export_pdf_dir.mkdir()
10 | #file=open('fix.md','w')  
11 | ##write characters into the file
12 | ##file.write('test\n')  
13 | #for md_file in list(sorted(glob.glob('./*/*.md'))):
14 | #    for line in open(md_file):  
15 | #        file.writelines(line)  
16 | #    file.write('\n')
17 | #pdf_file = "./Dive-into-DL-PyTorch.pdf"   
18 | #cmd = "pandoc  -N --template=template2.tex --variable mainfont='PingFang SC' --variable sansfont='Helvetica' --variable monofont='Menlo' --variable fontsize=12pt --variable version=2.0 '{}' --latex-engine=xelatex --toc -o '{}' ".format("fix.md", pdf_file)
19 | #os.system(cmd)
20 |    
21 | for md_file in list(sorted(glob.glob('./*/*.md'))):
22 |     print(md_file)
23 |     md_file_name = md_file
24 |     zhanjie = md_file_name.split("/")[-2]  # "zhanjie" = chapter: the parent directory name
25 |     print(zhanjie)
26 |     pdf_file_name = md_file_name.replace('.md', '.pdf')
27 |     pdf_file = export_pdf_dir/pdf_file_name
28 |     os.makedirs(str(export_pdf_dir/zhanjie),exist_ok=True)
29 |     print(pdf_file)
30 |     cmd = "pandoc '{}' -o '{}' -s --highlight-style pygments  --latex-engine=xelatex -V mainfont='PingFang SC' --template=template.tex".format(md_file, pdf_file)
31 |     os.system(cmd)


--------------------------------------------------------------------------------
/BOOK/deep_learning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/BOOK/deep_learning.pdf


--------------------------------------------------------------------------------
/BOOK/pytorch速查表.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/BOOK/pytorch速查表.pdf


--------------------------------------------------------------------------------
/BOOK/《Python Cookbook》第三版中文v1.0.2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/BOOK/《Python Cookbook》第三版中文v1.0.2.pdf


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/README.md:
--------------------------------------------------------------------------------
 1 | # Wasserstein GAN
 2 | 
 3 | This is a TensorFlow implementation of WGAN on MNIST and SVHN.
 4 | 
 5 | ## Requirement
 6 | 
 7 | tensorflow==1.0.0+
 8 | 
 9 | numpy
10 | 
11 | matplotlib
12 | 
13 | cv2
14 | 
15 | ## Usage
16 | 
17 | Train: Use WGAN.ipynb, set the parameters in the second cell, and choose the dataset you want to run on. You can use TensorBoard to visualize the training.
18 | 
19 | Generation: Use generate_from_ckpt.ipynb and set `ckpt_dir` in the second cell. Don't forget to change the dataset type accordingly.
20 | 
21 | ## Note
22 | 
23 | 1. All data is downloaded automatically; the SVHN script is adapted from [this](https://github.com/openai/improved-gan/blob/master/mnist_svhn_cifar10/svhn_data.py).
24 | 
25 | 2. By default, all parameters are set to the values the original paper recommends. `Diters` is the number of critic updates per generator step; in the [original PyTorch version](https://github.com/martinarjovsky/WassersteinGAN) it was set to 5 except when iterstep < 25 or iterstep % 500 == 0. Since the critic is free to be fully optimized, it seems reasonable to give it more updates at the beginning and every 500 steps, so I borrowed this schedule without tuning (a sketch of the schedule follows this list). The learning rates for the generator and critic are both 5e-5; since the gradient norms stay relatively high during training (around 1e3), I suggest no drastic changes to the learning rates.
26 | 
27 | 3. The MLP version can take longer to generate sharp images.
28 | 
29 | 4. In this implementation, the critic loss is `tf.reduce_mean(fake_logit - true_logit)` and the generator loss is `tf.reduce_mean(-fake_logit)`. The whole system still works if you add a `-` in front of both; it doesn't matter. Recall that the critic loss in duality form is ![](https://ww2.sinaimg.cn/large/006tKfTcly1fcewxqyfvwj307i00kglj.jpg), and the set ![](https://ww2.sinaimg.cn/large/006tKfTcly1fcex6p638ij302200o3yd.jpg) is symmetric about the sign. Substituting $f$ with $-f$ gives ![](https://ww3.sinaimg.cn/large/006tKfTcly1fcewyhols5j307g00odfr.jpg), the negation of the original form (the duality form is written out after this list). The original PyTorch implementation takes the second form, this implementation takes the first, and both work equally well. You might want to add the `-` and try it out.
30 | 
31 | 5. Set the device you want to run on in the code: search for `tf.device` and change it accordingly. It runs on gpu:0 by default.
32 | 
33 | 6. Improved WGAN (with gradient penalty) is added, but somehow the gradient norm is already close to 1, so the squared-gradient penalty doesn't do much. I couldn't figure out why.
34 | 
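For reference, a minimal sketch of the `Diters` schedule described in note 2. The burst value of 100 comes from the upstream WassersteinGAN PyTorch code rather than this repo, so treat it as an assumption:

```python
def critic_iters(iterstep, base_iters=5):
    """Number of critic updates to run for one generator step (schedule from note 2)."""
    # Train the critic harder at the very start and periodically thereafter,
    # since the critic is meant to be trained close to optimality.
    if iterstep < 25 or iterstep % 500 == 0:
        return 100  # burst value borrowed from the upstream PyTorch code (assumption)
    return base_iters
```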

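The duality-form critic loss that note 4 refers to (the linked images), written out from the WGAN paper:

$$
W(\mathbb{P}_r, \mathbb{P}_\theta) = \sup_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim \mathbb{P}_r}[f(x)] - \mathbb{E}_{x \sim \mathbb{P}_\theta}[f(x)]
$$

Since the set $\{f : \|f\|_L \le 1\}$ is closed under $f \mapsto -f$, replacing $f$ with $-f$ negates the objective, which is why either sign convention trains equally well.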

--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/WGAN-tensorflow-master/README.md:
--------------------------------------------------------------------------------
 1 | # Wasserstein GAN
 2 | 
 3 | This is a TensorFlow implementation of WGAN on MNIST and SVHN.
 4 | 
 5 | ## Requirement
 6 | 
 7 | tensorflow==1.0.0+
 8 | 
 9 | numpy
10 | 
11 | matplotlib
12 | 
13 | cv2
14 | 
15 | ## Usage
16 | 
17 | Train: Use WGAN.ipynb, set the parameters in the second cell, and choose the dataset you want to run on. You can use TensorBoard to visualize the training.
18 | 
19 | Generation: Use generate_from_ckpt.ipynb and set `ckpt_dir` in the second cell. Don't forget to change the dataset type accordingly.
20 | 
21 | ## Note
22 | 
23 | 1. All data is downloaded automatically; the SVHN script is adapted from [this](https://github.com/openai/improved-gan/blob/master/mnist_svhn_cifar10/svhn_data.py).
24 | 
25 | 2. By default, all parameters are set to the values the original paper recommends. `Diters` is the number of critic updates per generator step; in the [original PyTorch version](https://github.com/martinarjovsky/WassersteinGAN) it was set to 5 except when iterstep < 25 or iterstep % 500 == 0. Since the critic is free to be fully optimized, it seems reasonable to give it more updates at the beginning and every 500 steps, so I borrowed this schedule without tuning. The learning rates for the generator and critic are both 5e-5; since the gradient norms stay relatively high during training (around 1e3), I suggest no drastic changes to the learning rates.
26 | 
27 | 3. The MLP version can take longer to generate sharp images.
28 | 
29 | 4. In this implementation, the critic loss is `tf.reduce_mean(fake_logit - true_logit)` and the generator loss is `tf.reduce_mean(-fake_logit)`. The whole system still works if you add a `-` in front of both; it doesn't matter. Recall that the critic loss in duality form is ![](https://ww2.sinaimg.cn/large/006tKfTcly1fcewxqyfvwj307i00kglj.jpg), and the set ![](https://ww2.sinaimg.cn/large/006tKfTcly1fcex6p638ij302200o3yd.jpg) is symmetric about the sign. Substituting $f$ with $-f$ gives ![](https://ww3.sinaimg.cn/large/006tKfTcly1fcewyhols5j307g00odfr.jpg), the negation of the original form. The original PyTorch implementation takes the second form, this implementation takes the first, and both work equally well. You might want to add the `-` and try it out.
30 | 
31 | 5. Set the device you want to run on in the code: search for `tf.device` and change it accordingly. It runs on gpu:0 by default.
32 | 
33 | 6. Improved WGAN (with gradient penalty) is added, but somehow the gradient norm is already close to 1, so the squared-gradient penalty doesn't do much. I couldn't figure out why.
34 | 


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/WGAN-tensorflow-master/get_svhn.py:
--------------------------------------------------------------------------------
 1 | import sys
 2 | import os
 3 | from six.moves import urllib
 4 | from scipy.io import loadmat
 5 | import numpy as np
 6 | 
 7 | def dense_to_one_hot(labels_dense, num_classes):
 8 |   """Convert class labels from scalars to one-hot vectors."""
 9 |   num_labels = labels_dense.shape[0]
10 |   index_offset = np.arange(num_labels) * num_classes
11 |   labels_one_hot = np.zeros((num_labels, num_classes))
12 |   labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
13 |   return labels_one_hot
14 | 
15 | 
16 | def maybe_download(data_dir):
17 |     new_data_dir = os.path.join(data_dir, 'svhn')
18 |     if not os.path.exists(new_data_dir):
19 |         os.makedirs(new_data_dir)
20 |         def _progress(count, block_size, total_size):
21 |             sys.stdout.write('\r>> Downloading %.1f%%' % (float(count * block_size) / float(total_size) * 100.0))
22 |             sys.stdout.flush()
23 |         filepath, _ = urllib.request.urlretrieve('http://ufldl.stanford.edu/housenumbers/train_32x32.mat', new_data_dir+'/train_32x32.mat', _progress)
24 |         filepath, _ = urllib.request.urlretrieve('http://ufldl.stanford.edu/housenumbers/test_32x32.mat', new_data_dir+'/test_32x32.mat', _progress)
25 | 
26 | def load(data_dir, subset='train'):
27 |     maybe_download(data_dir)
28 |     if subset=='train':
29 |         train_data = loadmat(os.path.join(data_dir, 'svhn') + '/train_32x32.mat')
30 |         trainx = train_data['X']
31 |         trainy = train_data['y'].flatten()
32 |         trainy[trainy==10] = 0
33 |         trainx = trainx.transpose((3, 0, 1, 2))
34 |         trainy = dense_to_one_hot(trainy, 10)
35 |         return trainx, trainy
36 |     elif subset=='test':
37 |         test_data = loadmat(os.path.join(data_dir, 'svhn') + '/test_32x32.mat')
38 |         testx = test_data['X']
39 |         testy = test_data['y'].flatten()
40 |         testy[testy==10] = 0
41 |         testx = testx.transpose((3, 0, 1, 2))
42 |         testy = dense_to_one_hot(testy, 10)
43 |         return testx, testy
44 |     else:
45 |         raise NotImplementedError('subset should be either train or test')
46 | 
47 | def main():
48 |     # maybe_download('./')
49 |     tx, ty = load('./')
50 |     print(tx.shape)
51 | 
52 | 
53 | if __name__ == '__main__':
54 |     main()


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan/images/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan/readme.md:
--------------------------------------------------------------------------------
 1 | ## A common display error
 2 | 
 3 | Before `import matplotlib.pyplot as plt`, add
 4 | `import matplotlib  
 5 | matplotlib.use('Agg')`  
 6 | and the error goes away (a minimal sketch follows).
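For reference, a minimal self-contained version of the fix above; the plotted data is arbitrary:

```python
import matplotlib
matplotlib.use('Agg')  # select the non-interactive backend before pyplot is imported
import matplotlib.pyplot as plt

# Agg renders figures to files rather than a display,
# so this also runs on a headless server.
plt.plot([0, 1, 2], [0, 1, 4])
plt.savefig('check.png')
```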
 7 | ### Run
 8 | 	python wgan_mnist.py
 9 | ### Results are under the `/images` directory,
10 | 
11 | with results for each of the `mnist`, `fashion_mnist`, and <strike>CIFAR10</strike> datasets.  
12 | (Forget it, the mnist results are already this bad, so I won't bother running cifar.)  
13 | (Fine, I ran it anyway, all the better to diss Wasserstein with later.)  
14 | The wgan_gp results are still better.
15 | ### Wasserstein loss  
16 | It is actually super easy to understand:
17 | the Wasserstein setup labels real samples with ground_truth = -1 and fakes with 1,
18 | and the loss is just the prediction multiplied by the ground truth (sketched below).
19 | 
20 | 	wloss=wasserstein_loss
21 | 	valid = -np.ones((batch_size, 1))
22 | 	fake = np.ones((batch_size, 1))
23 | 
24 | 	def wasserstein_loss(self, y_true, y_pred):
25 | 		return K.mean(y_true * y_pred)
26 | By comparison, the original standard GAN uses loss='binary_crossentropy':
27 | 
28 | 	valid = np.ones((batch_size, 1))
29 | 	fake = np.zeros((batch_size, 1))
30 | 	d_loss_real = self.discriminator.train_on_batch(imgs, valid)
31 | 	d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
32 | 	d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
33 | 
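To make the label convention concrete, a minimal self-contained sketch of the Wasserstein loss above; the `tensorflow.keras` import path is an assumption (the snippets here only assume a Keras backend bound to `K`):

```python
import numpy as np
from tensorflow.keras import backend as K  # assumed import path for the Keras backend

def wasserstein_loss(y_true, y_pred):
    # Labels are -1 for real and +1 for fake, so minimizing the mean product
    # pushes critic scores up on real samples and down on generated ones.
    return K.mean(y_true * y_pred)

batch_size = 32
valid = -np.ones((batch_size, 1))  # labels for real images
fake = np.ones((batch_size, 1))    # labels for generated images
```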


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan/saved_model/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/images/fashion_mnist_29700.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ Wasserstein_GAN/wgan_gp/images/fashion_mnist_29700.png


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/images/fashion_mnist_29800.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ Wasserstein_GAN/wgan_gp/images/fashion_mnist_29800.png


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/images/fashion_mnist_29900.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ Wasserstein_GAN/wgan_gp/images/fashion_mnist_29900.png


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_0.png


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_100.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_100.png


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_200.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_200.png


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_29600.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_29600.png


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_29700.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_29700.png


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_29800.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_29800.png


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_29900.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ Wasserstein_GAN/wgan_gp/images/mnist_29900.png


--------------------------------------------------------------------------------
/GAN/ Wasserstein_GAN/wgan_gp/readme.md:
--------------------------------------------------------------------------------
 1 | ### Fixed the display problem
 2 | ### A merge that removes the white borders
 3 | import numpy as np
 4 | def merge(images, size):
 5 | 	h, w = images.shape[1], images.shape[2]
 6 | 	img = np.zeros((h * size[0], w * size[1]))
 7 | 	for idx, image in enumerate(images):
 8 | 		i = idx % size[1]
 9 | 		j = idx // size[1]
10 | 		img[j*h:j*h+h, i*w:i*w+w] = image
11 | 	return img
12 | ### Combining several images into one
13 | from scipy.misc import *
14 | r, c = 10, 10
15 | noise = np.random.normal(0, 1, (r * c, self.latent_dim))
16 | gen_imgs = self.generator.predict(noise)
17 | 
18 | # Rescale images to 0 - 1 (the generator outputs values in [-1, 1])
19 | gen_imgs = 0.5 * gen_imgs + 0.5
20 | gen_imgs = gen_imgs.reshape(-1, 28, 28)
21 | gen_imgs = merge(gen_imgs[:49], [7, 7])
22 | imsave("images/mnist_%d.png" % epoch, gen_imgs)
23 | ### Run: python wgan.py
24 | ### Results
25 | 29995 [D loss: -1.087117] [G loss: 4.016634]
26 | 29996 [D loss: -0.511691] [G loss: 3.625752]
27 | 29997 [D loss: -0.533835] [G loss: 4.005987]
28 | 29998 [D loss: -0.423012] [G loss: 3.547036]
29 | 29999 [D loss: 0.091400] [G loss: 4.133564]
30 | Known issues: gradients explode very easily, and the W-ACGAN reproduction currently fails.
31 | 
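
For reference, a tiny demo of what `merge` above produces (synthetic data and a hypothetical filename; `imageio` stands in for the removed `scipy.misc.imsave`):

```python
import numpy as np
import imageio  # stand-in for the removed scipy.misc.imsave

imgs = np.random.rand(49, 28, 28)             # 49 fake 28x28 samples in [0, 1]
grid = merge(imgs, [7, 7])                    # one 196x196 image, no white borders
imageio.imwrite("merge_demo.png", (grid * 255).astype(np.uint8))
```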


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/LICENSE:
--------------------------------------------------------------------------------
 1 | MIT License
 2 | 
 3 | Copyright (c) 2017 Te-Lin Wu
 4 | 
 5 | Permission is hereby granted, free of charge, to any person obtaining a copy
 6 | of this software and associated documentation files (the "Software"), to deal
 7 | in the Software without restriction, including without limitation the rights
 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/__pycache__/folder.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ACGAN-PyTorch-master/__pycache__/folder.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/__pycache__/network.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ACGAN-PyTorch-master/__pycache__/network.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/__pycache__/utils.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ACGAN-PyTorch-master/__pycache__/utils.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/fake_samples_epoch_482.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ACGAN-PyTorch-master/fake_samples_epoch_482.png


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/fake_samples_epoch_498.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ACGAN-PyTorch-master/fake_samples_epoch_498.png


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/figs/architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ACGAN-PyTorch-master/figs/architecture.png


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/figs/fake_samples_epoch_470.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ACGAN-PyTorch-master/figs/fake_samples_epoch_470.png


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/figs/fake_samples_epoch_499.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ACGAN-PyTorch-master/figs/fake_samples_epoch_499.png


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/real_samples.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ACGAN-PyTorch-master/real_samples.png


--------------------------------------------------------------------------------
/GAN/ACGAN-PyTorch-master/utils.py:
--------------------------------------------------------------------------------
 1 | # custom weights initialization called on netG and netD
 2 | def weights_init(m):
 3 |     classname = m.__class__.__name__
 4 |     if classname.find('Conv') != -1:
 5 |         m.weight.data.normal_(0.0, 0.02)
 6 |     elif classname.find('BatchNorm') != -1:
 7 |         m.weight.data.normal_(1.0, 0.02)
 8 |         m.bias.data.fill_(0)
 9 | 
10 | # compute the current classification accuracy
11 | def compute_acc(preds, labels):
12 |     correct = 0
13 |     preds_ = preds.data.max(1)[1]
14 |     correct = preds_.eq(labels.data).cpu().sum()
15 |     acc = float(correct) / float(len(labels.data)) * 100.0
16 |     return acc
17 | 
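
A minimal usage sketch (synthetic tensors; the tiny `netD` here is a placeholder, not the repo's network): `weights_init` is meant to be passed to `nn.Module.apply`, and `compute_acc` expects raw class scores.

```python
import torch
import torch.nn as nn
from utils import weights_init, compute_acc

netD = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU())
netD.apply(weights_init)            # recursively initializes every Conv/BatchNorm layer

preds = torch.randn(8, 10)          # scores for 8 samples over 10 classes
labels = torch.randint(0, 10, (8,))
print(compute_acc(preds, labels))   # classification accuracy in percent
```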


--------------------------------------------------------------------------------
/GAN/DEHAZE-AOE_torch/README.md:
--------------------------------------------------------------------------------
 1 | # PyTorch-Image-Dehazing
 2 | PyTorch implementation of some single image dehazing networks. 
 3 | 
 4 | Currently Implemented:
 5 | **AOD-Net**: An extremely lightweight model (< 10 KB). Results are good.
 6 | 
 7 | 
 8 | **Prerequisites:**
 9 | 1. Python 3 
10 | 2. PyTorch 0.4
11 | 
12 | **Preparation:**
13 | 1. Create folder "data".
14 | 2. Download and extract the dataset into "data" from the original author's project page. (https://sites.google.com/site/boyilics/website-builder/project-page). 
15 | 
16 | **Training:**
17 | 1. Run train.py. The script will automatically dump some validation results into the "samples" folder after every epoch. The model snapshots are dumped in the "snapshots" folder. 
18 | 
19 | **Testing:**
20 | 1. Run dehaze.py. The script takes images in the "test_images" folder and dumps the dehazed images into the "results" folder. A pre-trained snapshot has been provided in the snapshots folder.
21 | 
22 | **Evaluation:**
23 | WIP.  
24 | Some qualitative results are shown below:
25 | 
26 | ![Alt text](results/man.png?raw=true "Title")  
27 | ![Alt text](results/guogong.png?raw=true "Title")  
28 | ![Alt text](results/test4.jpg?raw=true "Title")  
29 | ![Alt text](results/test9.jpg?raw=true "Title")  
30 | ![Alt text](results/test13.jpg?raw=true "Title")  
31 | ![Alt text](results/test15.jpg?raw=true "Title")
32 | 


--------------------------------------------------------------------------------
/GAN/DEHAZE-AOE_torch/dehaze.py:
--------------------------------------------------------------------------------
 1 | import torch
 2 | import torch.nn as nn
 3 | import torchvision
 4 | import torch.backends.cudnn as cudnn
 5 | import torch.optim
 6 | import os
 7 | import sys
 8 | import argparse
 9 | import time
10 | import dataloader
11 | import net
12 | import numpy as np
13 | from torchvision import transforms
14 | from PIL import Image
15 | import glob
16 | 
17 | 
18 | def dehaze_image(dehaze_net, image_path):
19 | 
20 | 	data_hazy = Image.open(image_path)
21 | 	data_hazy = (np.asarray(data_hazy) / 255.0)
22 | 
23 | 	data_hazy = torch.from_numpy(data_hazy).float()
24 | 	data_hazy = data_hazy.permute(2, 0, 1)      # HWC -> CHW
25 | 	data_hazy = data_hazy.cuda().unsqueeze(0)   # add a batch dimension
26 | 
27 | 	with torch.no_grad():
28 | 		clean_image = dehaze_net(data_hazy)
29 | 	# save the hazy input and the dehazed output side by side
30 | 	torchvision.utils.save_image(torch.cat((data_hazy, clean_image), 0), "results/" + image_path.split("/")[-1])
31 | 
32 | 
33 | if __name__ == '__main__':
34 | 
35 | 	# build and load the network once, rather than once per image
36 | 	dehaze_net = net.dehaze_net().cuda()
37 | 	dehaze_net.load_state_dict(torch.load('snapshots/dehazer.pth'))
38 | 	dehaze_net.eval()
39 | 
40 | 	test_list = glob.glob("test_images/*")
41 | 
42 | 	for image in test_list:
43 | 		dehaze_image(dehaze_net, image)
44 | 		print(image, "done!")
45 | 


--------------------------------------------------------------------------------
/GAN/DEHAZE-AOE_torch/net.py:
--------------------------------------------------------------------------------
 1 | import torch
 2 | import torch.nn as nn
 3 | import math
 4 | 
 5 | class dehaze_net(nn.Module):
 6 | 
 7 | 	def __init__(self):
 8 | 		super(dehaze_net, self).__init__()
 9 | 
10 | 		self.relu = nn.ReLU(inplace=True)
11 | 	
12 | 		self.e_conv1 = nn.Conv2d(3,3,1,1,0,bias=True) 
13 | 		self.e_conv2 = nn.Conv2d(3,3,3,1,1,bias=True) 
14 | 		self.e_conv3 = nn.Conv2d(6,3,5,1,2,bias=True) 
15 | 		self.e_conv4 = nn.Conv2d(6,3,7,1,3,bias=True) 
16 | 		self.e_conv5 = nn.Conv2d(12,3,3,1,1,bias=True) 
17 | 		
18 | 	def forward(self, x):
19 | 		source = []
20 | 		source.append(x)
21 | 
22 | 		x1 = self.relu(self.e_conv1(x))
23 | 		x2 = self.relu(self.e_conv2(x1))
24 | 
25 | 		concat1 = torch.cat((x1,x2), 1)
26 | 		x3 = self.relu(self.e_conv3(concat1))
27 | 
28 | 		concat2 = torch.cat((x2, x3), 1)
29 | 		x4 = self.relu(self.e_conv4(concat2))
30 | 
31 | 		concat3 = torch.cat((x1,x2,x3,x4),1)
32 | 		x5 = self.relu(self.e_conv5(concat3))
33 | 
34 | 		clean_image = self.relu((x5 * x) - x5 + 1)  # AOD-Net: J(x) = K(x)*I(x) - K(x) + b, with b = 1
35 | 		
36 | 		return clean_image
37 | 
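
A quick shape check (CPU, random input; an illustration, not part of the repo). Every convolution above is stride 1 with padding matched to its kernel size, so the output keeps the input's spatial size:

```python
import torch
from net import dehaze_net

model = dehaze_net()
hazy = torch.rand(1, 3, 256, 256)   # fake RGB batch
clean = model(hazy)
print(clean.shape)                  # torch.Size([1, 3, 256, 256])
```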


--------------------------------------------------------------------------------
/GAN/DEHAZE-AOE_torch/results/3921554452543_.pic.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/DEHAZE-AOE_torch/results/3921554452543_.pic.jpg


--------------------------------------------------------------------------------
/GAN/DEHAZE-AOE_torch/results/WechatIMG391.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/DEHAZE-AOE_torch/results/WechatIMG391.jpeg


--------------------------------------------------------------------------------
/GAN/DEHAZE-AOE_torch/samples/dummyText.txt:
--------------------------------------------------------------------------------
1 | dummy text.
2 | 


--------------------------------------------------------------------------------
/GAN/DEHAZE-AOE_torch/snapshots/dehazer.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/DEHAZE-AOE_torch/snapshots/dehazer.pth


--------------------------------------------------------------------------------
/GAN/DEHAZE-AOE_torch/test_images/3921554452543_.pic.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/DEHAZE-AOE_torch/test_images/3921554452543_.pic.jpg


--------------------------------------------------------------------------------
/GAN/DEHAZE-AOE_torch/test_images/WechatIMG391.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/DEHAZE-AOE_torch/test_images/WechatIMG391.jpeg


--------------------------------------------------------------------------------
/GAN/README.md:
--------------------------------------------------------------------------------
1 | This is a file used to save GAN code that has been run in the server.
2 | 


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/1803.08664.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/1803.08664.pdf


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/LICENSE:
--------------------------------------------------------------------------------
 1 | MIT License
 2 | 
 3 | Copyright (c) 2018 Junho Kim (1993.01.12)
 4 | 
 5 | Permission is hereby granted, free of charge, to any person obtaining a copy
 6 | of this software and associated documentation files (the "Software"), to deal
 7 | in the Software without restriction, including without limitation the rights
 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/README.md:
--------------------------------------------------------------------------------
 1 | ## celebA
 2 | First, rm dataset/celebA.
 3 | Then try python download.py celebA ...... and you'll find the download no longer works.
 4 | It turns out the data can be fetched directly from Kaggle instead:
 5 | register a Kaggle account, install the kaggle CLI, then set up the Kaggle API json following the tutorial on GitHub.
 6 | >>kaggle datasets download -d jessicali9530/celeba-dataset
 7 | 
 8 | Unzip the img_align_celeba folder and put it under the dataset directory.
 9 | 
10 | ```
11 | hx@hx-b412:~$ python main.py --phase train --dataset img_align_celeba  --gan_type wgan-gp --img 64
12 | 
13 | ```
14 | 
15 | ## cifar10
16 | Runs out of the box:
17 | python main.py --phase train --dataset cifar10 --gan_type wgan-gp --img 32
18 | 
19 | ---
20 | 
21 | # Self-Attention-GAN-Tensorflow
22 | Simple Tensorflow implementation of ["Self-Attention Generative Adversarial Networks" (SAGAN)](https://arxiv.org/pdf/1805.08318.pdf)
23 | 
24 | 
25 | ## Requirements
26 | * Tensorflow 1.8
27 | * Python 3.6
28 | 
29 | ## Summary
30 | ### Framework
31 | ![framework](./assests/framework.PNG)
32 | 
33 | ### Code
34 | ```python
35 |     def attention(self, x, ch):
36 |       f = conv(x, ch // 8, kernel=1, stride=1, sn=self.sn, scope='f_conv') # [bs, h, w, c']
37 |       g = conv(x, ch // 8, kernel=1, stride=1, sn=self.sn, scope='g_conv') # [bs, h, w, c']
38 |       h = conv(x, ch, kernel=1, stride=1, sn=self.sn, scope='h_conv') # [bs, h, w, c]
39 | 
40 |       # N = h * w
41 |       s = tf.matmul(hw_flatten(g), hw_flatten(f), transpose_b=True) # [bs, N, N]
42 | 
43 |       beta = tf.nn.softmax(s, axis=-1)  # attention map
44 | 
45 |       o = tf.matmul(beta, hw_flatten(h)) # [bs, N, C]
46 |       gamma = tf.get_variable("gamma", [1], initializer=tf.constant_initializer(0.0))
47 | 
48 |       o = tf.reshape(o, shape=x.shape) # [bs, h, w, C]
49 |       x = gamma * o + x
50 | 
51 |       return x
52 | ```
53 | ## Usage
54 | ### dataset
55 | 
56 | ```bash
57 | > python download.py celebA
58 | ```
59 | 
60 | * `mnist` and `cifar10` are loaded via keras
61 | * For `your dataset`, put images like this:
62 | 
63 | ```
64 | ├── dataset
65 |    └── YOUR_DATASET_NAME
66 |        ├── xxx.jpg (name, format doesn't matter)
67 |        ├── yyy.png
68 |        └── ...
69 | ```
70 | 
71 | ### train
72 | * python main.py --phase train --dataset celebA --gan_type hinge
73 | 
74 | ### test
75 | * python main.py --phase test --dataset celebA --gan_type hinge
76 | 
77 | ## Results
78 | ### ImageNet
79 | <div align="">
80 |    <img src="./assests/result_.png" width="420">
81 | </div>
82 | 
83 | ### CelebA (100K iteration, hinge loss)
84 | ![celebA](./assests/celebA.png)
85 | 
86 | ## Author
87 | Junho Kim
88 | 
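
As a shape walk-through of the attention block above (NumPy sketch; `hw_flatten` is assumed to reshape `[bs, h, w, c]` to `[bs, h*w, c]`, as in the repo's ops.py):

```python
import numpy as np

def hw_flatten(x):                        # [bs, h, w, c] -> [bs, h*w, c]
    return x.reshape(x.shape[0], -1, x.shape[-1])

bs, h, w, c = 2, 8, 8, 64
f = np.random.rand(bs, h, w, c // 8)      # "key" features   (f_conv)
g = np.random.rand(bs, h, w, c // 8)      # "query" features (g_conv)
v = np.random.rand(bs, h, w, c)           # "value" features (h_conv)

s = hw_flatten(g) @ hw_flatten(f).transpose(0, 2, 1)      # [bs, N, N], N = h*w
beta = np.exp(s) / np.exp(s).sum(axis=-1, keepdims=True)  # softmax attention map
o = (beta @ hw_flatten(v)).reshape(bs, h, w, c)           # back to [bs, h, w, c]
print(o.shape)                                            # (2, 8, 8, 64)
```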


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/SAGAN_train_08_07500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/SAGAN_train_08_07500.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/SAGAN_train_08_09000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/SAGAN_train_08_09000.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/__pycache__/SAGAN.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/__pycache__/SAGAN.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/__pycache__/ops.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/__pycache__/ops.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/__pycache__/utils.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/__pycache__/utils.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/assests/celebA.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/assests/celebA.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/assests/framework.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/assests/framework.PNG


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/assests/result_.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/assests/result_.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/download.py:
--------------------------------------------------------------------------------
 1 | import os
 2 | import zipfile
 3 | import argparse
 4 | import requests
 5 | 
 6 | from tqdm import tqdm
 7 | 
 8 | parser = argparse.ArgumentParser(description='Download dataset for SAGAN')
 9 | parser.add_argument('dataset', metavar='N', type=str, nargs='+', choices=['celebA'],
10 |                     help='name of dataset to download [celebA]')
11 | 
12 | 
13 | def download_file_from_google_drive(id, destination):
14 |     URL = "https://docs.google.com/uc?export=download"
15 |     session = requests.Session()
16 | 
17 |     response = session.get(URL, params={'id': id}, stream=True)
18 |     token = get_confirm_token(response)
19 | 
20 |     if token:
21 |         params = {'id': id, 'confirm': token}
22 |         response = session.get(URL, params=params, stream=True)
23 | 
24 |     save_response_content(response, destination)
25 | 
26 | 
27 | def get_confirm_token(response):
28 |     for key, value in response.cookies.items():
29 |         if key.startswith('download_warning'):
30 |             return value
31 |     return None
32 | 
33 | 
34 | def save_response_content(response, destination, chunk_size=32 * 1024):
35 |     total_chunks = int(response.headers.get('content-length', 0)) // chunk_size + 1
36 |     with open(destination, "wb") as f:
37 |         for chunk in tqdm(response.iter_content(chunk_size), total=total_chunks,
38 |                           unit='chunk', desc=destination):  # progress in chunks, not bytes
39 |             if chunk:  # filter out keep-alive new chunks
40 |                 f.write(chunk)
41 | 
42 | 
43 | def download_celeb_a(dirpath):
44 |     data_dir = 'celebA'
45 |     if os.path.exists(os.path.join(dirpath, data_dir)):
46 |         print('Found Celeb-A - skip')
47 |         return
48 | 
49 |     filename, drive_id = "img_align_celeba.zip", "0B7EVK8r0v71pZjFTYXZWM3FlRnM"
50 |     save_path = os.path.join(dirpath, filename)
51 | 
52 |     if os.path.exists(save_path):
53 |         print('[*] {} already exists'.format(save_path))
54 |     else:
55 |         download_file_from_google_drive(drive_id, save_path)
56 | 
57 |     zip_dir = ''
58 |     with zipfile.ZipFile(save_path) as zf:
59 |         zip_dir = zf.namelist()[0]
60 |         zf.extractall(dirpath)
61 |     os.remove(save_path)
62 |     os.rename(os.path.join(dirpath, zip_dir), os.path.join(dirpath, data_dir))
63 | 
64 | 
65 | def prepare_data_dir(path='./dataset'):
66 |     if not os.path.exists(path):
67 |         os.mkdir(path)
68 | 
69 | 
70 | if __name__ == '__main__':
71 |     args = parser.parse_args()
72 |     prepare_data_dir()
73 | 
74 |     if any(name in args.dataset for name in ['CelebA', 'celebA']):
75 |         download_celeb_a('./dataset')
76 | 


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/readme_cy.md:
--------------------------------------------------------------------------------
 1 | ## celebA
 2 | First, rm dataset/celebA.
 3 | Then try python download.py celebA ...... and you'll find the download no longer works.
 4 | It turns out the data can be fetched directly from Kaggle instead:
 5 | register a Kaggle account, install the kaggle CLI, then set up the Kaggle API json following the tutorial on GitHub.
 6 | >>kaggle datasets download -d jessicali9530/celeba-dataset
 7 | 
 8 | Unzip the img_align_celeba folder and put it under the dataset directory.
 9 | 
10 | ```
11 | hx@hx-b412:~$ python main.py --phase train --dataset img_align_celeba  --gan_type wgan-gp --img 64
12 | 
13 | ```
14 | 
15 | ## cifar10
16 | Runs out of the box:
17 | python main.py --phase train --dataset cifar10 --gan_type wgan-gp --img 32


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_0.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_1.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_2.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_3.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_4.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_5.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_6.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_7.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_8.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_9.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_celebA_hinge_128_128_True/SAGAN_test_9.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_0.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_1.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_2.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_3.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_4.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_5.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_6.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_7.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_8.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_9.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/results/SAGAN_cifar10_hinge_128_128_True/SAGAN_test_9.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/samples/SAGAN_img_align_celeba_wgan-gp_64_128_True/SAGAN_train_08_09000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/samples/SAGAN_img_align_celeba_wgan-gp_64_128_True/SAGAN_train_08_09000.png


--------------------------------------------------------------------------------
/GAN/Self_attention_GAN_tensorflow/samples/SAGAN_img_align_celeba_wgan-gp_64_128_True/SAGAN_train_08_09500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/Self_attention_GAN_tensorflow/samples/SAGAN_img_align_celeba_wgan-gp_64_128_True/SAGAN_train_08_09500.png


--------------------------------------------------------------------------------
/GAN/data/download_cyclegan_dataset.sh:
--------------------------------------------------------------------------------
 1 | #!/bin/bash
 2 | 
 3 | FILE=$1
 4 | 
 5 | if [[ $FILE != "ae_photos" && $FILE != "apple2orange" && $FILE != "summer2winter_yosemite" && $FILE != "horse2zebra" && $FILE != "monet2photo" && $FILE != "cezanne2photo" && $FILE != "ukiyoe2photo" && $FILE != "vangogh2photo" && $FILE != "maps" && $FILE != "cityscapes" && $FILE != "facades" && $FILE != "iphone2dslr_flower" ]]; then
 6 |     echo "Available datasets are: apple2orange, summer2winter_yosemite, horse2zebra, monet2photo, cezanne2photo, ukiyoe2photo, vangogh2photo, maps, cityscapes, facades, iphone2dslr_flower, ae_photos"
 7 |     exit 1
 8 | fi
 9 | 
10 | URL=https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/$FILE.zip
11 | ZIP_FILE=./$FILE.zip
12 | TARGET_DIR=./$FILE
13 | wget -N $URL -O $ZIP_FILE
14 | unzip $ZIP_FILE -d .
15 | rm $ZIP_FILE
16 | 
17 | # Adapt to the project's expected directory hierarchy
18 | mkdir -p "$TARGET_DIR/train" "$TARGET_DIR/test"
19 | mv "$TARGET_DIR/trainA" "$TARGET_DIR/train/A"
20 | mv "$TARGET_DIR/trainB" "$TARGET_DIR/train/B"
21 | mv "$TARGET_DIR/testA" "$TARGET_DIR/test/A"
22 | mv "$TARGET_DIR/testB" "$TARGET_DIR/test/B"
23 | 


--------------------------------------------------------------------------------
/GAN/data/download_pix2pix_dataset.sh:
--------------------------------------------------------------------------------
 1 | FILE=$1
 2 | 
 3 | if [[ $FILE != "cityscapes" && $FILE != "night2day" && $FILE != "edges2handbags" && $FILE != "edges2shoes" && $FILE != "facades" && $FILE != "maps" ]]; then
 4 |   echo "Available datasets are cityscapes, night2day, edges2handbags, edges2shoes, facades, maps"
 5 |   exit 1
 6 | fi
 7 | 
 8 | if [[ $FILE == "cityscapes" ]]; then
 9 |     echo "Due to license issue, we cannot provide the Cityscapes dataset from our repository. Please download the Cityscapes dataset from https://cityscapes-dataset.com, and use the script ./datasets/prepare_cityscapes_dataset.py."
10 |     echo "You need to download gtFine_trainvaltest.zip and leftImg8bit_trainvaltest.zip. For further instruction, please read ./datasets/prepare_cityscapes_dataset.py"
11 |     exit 1
12 | fi
13 | 
14 | echo "Specified [$FILE]"
15 | 
16 | URL=http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/$FILE.tar.gz
17 | TAR_FILE=$FILE.tar.gz
18 | TARGET_DIR=$FILE/
19 | wget -N $URL -O $TAR_FILE
20 | mkdir -p $TARGET_DIR
21 | tar -zxvf $TAR_FILE -C ./
22 | rm $TAR_FILE
23 | 


--------------------------------------------------------------------------------
/GAN/datadownloader/README.md:
--------------------------------------------------------------------------------
 1 | # DCGAN in TensorFlow
 2 | 
 3 | TensorFlow / TensorLayer implementation of [Deep Convolutional Generative Adversarial Networks](http://arxiv.org/abs/1511.06434), a stabilized variant of Generative Adversarial Networks.
 4 | 
 5 | Looking for text-to-image synthesis? [click here](https://github.com/zsdonghao/text-to-image)
 6 | 
 7 | ![alt tag](img/DCGAN.png)
 8 | 
 9 | * [Brandon Amos](http://bamos.github.io/) wrote an excellent [blog post](http://bamos.github.io/2016/08/09/deep-completion/) and [image completion code](https://github.com/bamos/dcgan-completion.tensorflow) based on this repo.
10 | * *To avoid fast convergence of the D (discriminator) network, the G (generator) network is updated twice for each D network update, which differs from the original paper.*
11 | 
12 | 
13 | ## Prerequisites
14 | 
15 | - Python 2.7 or Python 3.3+
16 | - [TensorFlow==1.10.0+](https://www.tensorflow.org/)
17 | - [TensorLayer==1.10.1+](https://github.com/tensorlayer/tensorlayer)
18 | 
19 | 
20 | ## Usage
21 | 
22 | First, download images to `data/celebA`:
23 | 
24 |     $ python download.py celebA		[202599 face images]
25 | 
26 | Second, train the GAN:
27 | 
28 |     $ python main.py
29 | 
30 | ## Result on celebA
31 | 
32 | 
33 | <a href="http://tensorlayer.readthedocs.io">
34 | <div align="center">
35 | 	<img src="img/result.png" width="90%" height="90%"/>
36 | </div>
37 | </a>


--------------------------------------------------------------------------------
/GAN/datadownloader/__pycache__/model.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/datadownloader/__pycache__/model.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/datadownloader/__pycache__/utils.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/datadownloader/__pycache__/utils.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/datadownloader/dataload.py:
--------------------------------------------------------------------------------
 1 | # -*- coding: utf-8 -*-
 2 | """
 3 | Created on Thu Nov  1 17:28:30 2018
 4 | 
 5 | @author: qd
 6 | """
 7 | 
 8 | from __future__ import print_function, division
 9 | 
10 | import argparse
11 | import json
12 | import subprocess
13 | import urllib.request
14 | from os.path import join
15 | 
16 | __author__ = 'Fisher Yu'
17 | __email__ = 'fy@cs.princeton.edu'
18 | __license__ = 'MIT'
19 | 
20 | 
21 | def list_categories(tag):
22 |     url = 'http://lsun.cs.princeton.edu/htbin/list.cgi?tag=' + tag
23 |     f = urllib.request.urlopen(url)
24 |     return json.loads(f.read())
25 | 
26 | 
27 | def download(out_dir, category, set_name, tag):
28 |     url = 'http://lsun.cs.princeton.edu/htbin/download.cgi?tag={tag}' \
29 |           '&category={category}&set={set_name}'.format(**locals())
30 |     if set_name == 'test':
31 |         out_name = 'test_lmdb.zip'
32 |     else:
33 |         out_name = '{category}_{set_name}_lmdb.zip'.format(**locals())
34 |     out_path = join(out_dir, out_name)
35 |     cmd = ['curl', url, '-o', out_path]
36 |     print('Downloading', category, set_name, 'set')
37 |     subprocess.call(cmd)
38 | 
39 | 
40 | def main():
41 |     parser = argparse.ArgumentParser()
42 |     parser.add_argument('--tag', type=str, default='latest')
43 |     parser.add_argument('-o', '--out_dir', default='')
44 |     parser.add_argument('-c', '--category', default=None)
45 |     args = parser.parse_args()
46 | 
47 |     categories = list_categories(args.tag)
48 |     if args.category is None:
49 |         print('Downloading', len(categories), 'categories')
50 |         for category in categories:
51 |             download(args.out_dir, category, 'train', args.tag)
52 |             download(args.out_dir, category, 'val', args.tag)
53 |         download(args.out_dir, '', 'test', args.tag)
54 |     else:
55 |         if args.category == 'test':
56 |             download(args.out_dir, '', 'test', args.tag)
57 |         elif args.category not in categories:
58 |             print('Error:', args.category, "doesn't exist in",
59 |                   args.tag, 'LSUN release')
60 |         else:
61 |             download(args.out_dir, args.category, 'train', args.tag)
62 |             download(args.out_dir, args.category, 'val', args.tag)
63 | 
64 | 
65 | if __name__ == '__main__':
66 |     main()


--------------------------------------------------------------------------------
/GAN/datadownloader/utils.py:
--------------------------------------------------------------------------------
 1 | import imageio as io
 2 | import numpy as np
 3 | import scipy.misc
 4 | 
 5 | 
 6 | def center_crop(x, crop_h, crop_w=None, resize_w=64):
 7 |     if crop_w is None:
 8 |         crop_w = crop_h
 9 |     h, w = x.shape[:2]
10 |     j = int(round((h - crop_h)/2.))
11 |     i = int(round((w - crop_w)/2.))
12 |     return scipy.misc.imresize(x[j:j+crop_h, i:i+crop_w],
13 |                                [resize_w, resize_w])
14 | 
15 | def merge(images, size):
16 |     h, w = images.shape[1], images.shape[2]
17 |     img = np.zeros((h * size[0], w * size[1], 3))
18 |     for idx, image in enumerate(images):
19 |         i = idx % size[1]
20 |         j = idx // size[1]
21 |         img[j * h: j * h + h, i * w: i * w + w, :] = image
22 |     return img
23 | 
24 | def transform(image, npx=64, is_crop=True, resize_w=64):
25 |     if is_crop:
26 |         cropped_image = center_crop(image, npx, resize_w=resize_w)
27 |     else:
28 |         cropped_image = image
29 |     return (np.array(cropped_image) / 127.5) - 1.
30 | 
31 | def inverse_transform(images):
32 |     return (images + 1.) / 2.
33 | 
34 | def imread(path, is_grayscale = False):
35 |     if (is_grayscale):
36 |         return io.imread(path).astype(np.float).flatten()
37 |     else:
38 |         return io.imread(path).astype(np.float)
39 | 
40 | def imsave(images, size, path):
41 |     return io.imsave(path, merge(images, size))
42 | 
43 | def get_image(image_path, image_size, is_crop=True, resize_w=64, is_grayscale = False):
44 |     return transform(imread(image_path, is_grayscale), image_size, is_crop, resize_w)
45 | 
46 | def save_images(images, size, image_path):
47 |     return imsave(inverse_transform(images), size, image_path)
48 | 
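
Note that `scipy.misc.imresize` was removed in SciPy 1.3, so `center_crop` above only runs against an older SciPy. A rough drop-in replacement using Pillow (an assumption on my part, not code from the repo):

```python
import numpy as np
from PIL import Image

def imresize(arr, size):
    """Approximate stand-in for the removed scipy.misc.imresize."""
    img = Image.fromarray(np.uint8(np.clip(arr, 0, 255)))
    # PIL expects (width, height); size is [height, width] as called above
    return np.asarray(img.resize((size[1], size[0]), Image.BILINEAR))
```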


--------------------------------------------------------------------------------
/GAN/ebgan-master/CHANGELOG.md:
--------------------------------------------------------------------------------
 1 | ## 0.0.0.2 ( 2017-04-04 )
 2 | 
 3 | Features:
 4 | 
 5 | Refactored:
 6 | 
 7 |     - adapted to tensorflow 1.0.0
 8 |     - split modeling stub from train.py to model.py


--------------------------------------------------------------------------------
/GAN/ebgan-master/LICENSE:
--------------------------------------------------------------------------------
 1 | MIT License
 2 | 
 3 | Copyright (c) 2016 Namju Kim
 4 | 
 5 | Permission is hereby granted, free of charge, to any person obtaining a copy
 6 | of this software and associated documentation files (the "Software"), to deal
 7 | in the Software without restriction, including without limitation the rights
 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 


--------------------------------------------------------------------------------
/GAN/ebgan-master/README.md:
--------------------------------------------------------------------------------
 1 | # EBGAN
 2 | A tensorflow implementation of Junbo et al.'s Energy-Based Generative Adversarial Network (EBGAN) paper.
 3 | ( See : [https://arxiv.org/pdf/1609.03126v2.pdf](https://arxiv.org/pdf/1609.03126v2.pdf) )
 4 | My implementation differs somewhat from the original paper; for example, I've used convolutional layers
 5 | in both the generator and the discriminator instead of fully connected layers.
 6 | I don't think this is important or that it will make a big difference in the final result.
 7 | 
 8 | ## Version
 9 | 
10 | Current Version : __***0.0.0.2***__
11 | 
12 | ## Dependencies ( VERSION MUST BE MATCHED EXACTLY! )
13 | 
14 | 1. tensorflow == 1.0.0 
15 | 1. sugartensor == 1.0.0.2
16 | 
17 | ## Training the network
18 | 
19 | Execute
20 | <pre><code>
21 | python mnist_ebgan_train.py
22 | </code></pre>
23 | to train the network. The resulting ckpt files and log files are written to the 'asset/train' directory.
24 | Launch `tensorboard --logdir asset/train/log` to monitor the training process.
25 | 
26 | 
27 | ## Generating image
28 |  
29 | Execute
30 | <pre><code>
31 | python mnist_ebgan_generate.py
32 | </code></pre>
33 | to generate a sample image. The 'sample.png' file will be written to the 'asset/train' directory.
34 | 
35 | ## Generated image sample
36 | 
37 | This image was generated by EBGAN network.
38 | <p align="center">
39 |   <img src="https://raw.githubusercontent.com/buriburisuri/ebgan/master/png/sample.png" width="1024"/>
40 | </p>  
41 | 
42 | ## Other resources
43 | 
44 | 1. [Original GAN tensorflow implementation](https://github.com/buriburisuri/sugartensor/blob/master/sugartensor/example/mnist_gan.py)
45 | 1. [InfoGAN tensorflow implementation](https://github.com/buriburisuri/sugartensor/blob/master/sugartensor/example/mnist_info_gan.py)
46 | 1. [Supervised InfoGAN tensorflow implementation](https://github.com/buriburisuri/supervised_infogan)
47 | 
48 | # Authors
49 | Namju Kim (buriburisuri@gmail.com) at Jamonglabs Co., Ltd.
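
For orientation, a scalar sketch of the objectives this repo trains with (see mnist_ebgan_train.py below), where D(.) is the autoencoder discriminator's reconstruction error acting as an energy and m is the margin:

```python
# L_D = D(x) + max(0, m - D(G(z)))   -- push real energy down, fake energy up to the margin
# L_G = D(G(z)) + pt_weight * PT     -- plus the pull-away regularizer on generator samples
def d_loss(mse_real, mse_fake, margin=1.0):
    return mse_real + max(0.0, margin - mse_fake)

def g_loss(mse_fake, pt, pt_weight=0.1):
    return mse_fake + pt_weight * pt
```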


--------------------------------------------------------------------------------
/GAN/ebgan-master/mnist_ebgan_generate.py:
--------------------------------------------------------------------------------
 1 | import sugartensor as tf
 2 | import matplotlib
 3 | matplotlib.use('Agg')
 4 | import matplotlib.pyplot as plt
 5 | from model import *
 6 | 
 7 | 
 8 | __author__ = 'namju.kim@kakaobrain.com'
 9 | 
10 | 
11 | # set log level to debug
12 | tf.sg_verbosity(10)
13 | 
14 | #
15 | # hyper parameters
16 | #
17 | 
18 | batch_size = 100
19 | 
20 | 
21 | # random uniform seed
22 | z = tf.random_uniform((batch_size, z_dim))
23 | 
24 | # generator
25 | gen = generator(z)
26 | 
27 | #
28 | # draw samples
29 | #
30 | 
31 | with tf.Session() as sess:
32 | 
33 |     tf.sg_init(sess)
34 | 
35 |     # restore parameters
36 |     tf.sg_restore(sess, tf.train.latest_checkpoint('asset/train'), category='generator')
37 | 
38 |     # run generator
39 |     imgs = sess.run(gen.sg_squeeze())
40 | 
41 |     # plot result
42 |     _, ax = plt.subplots(10, 10, sharex=True, sharey=True)
43 |     for i in range(10):
44 |         for j in range(10):
45 |             ax[i][j].imshow(imgs[i * 10 + j], 'gray')
46 |             ax[i][j].set_axis_off()
47 |     plt.savefig('asset/train/sample.png', dpi=600)
48 |     tf.sg_info('Sample image saved to "asset/train/sample.png"')
49 |     plt.close()
50 | 


--------------------------------------------------------------------------------
/GAN/ebgan-master/mnist_ebgan_train.py:
--------------------------------------------------------------------------------
 1 | import sugartensor as tf
 2 | import numpy as np
 3 | from model import *
 4 | 
 5 | 
 6 | __author__ = 'namju.kim@kakaobrain.com'
 7 | 
 8 | 
 9 | # set log level to debug
10 | tf.sg_verbosity(10)
11 | 
12 | #
13 | # hyper parameters
14 | #
15 | 
16 | batch_size = 128   # batch size
17 | 
18 | #
19 | # inputs
20 | #
21 | 
22 | # MNIST input tensor ( with QueueRunner )
23 | data = tf.sg_data.Mnist(batch_size=batch_size)
24 | 
25 | # input images
26 | x = data.train.image
27 | 
28 | # random uniform seed
29 | z = tf.random_uniform((batch_size, z_dim))
30 | 
31 | #
32 | # Computational graph
33 | #
34 | 
35 | # generator
36 | gen = generator(z)
37 | 
38 | # add image summary
39 | tf.sg_summary_image(x, name='real')
40 | tf.sg_summary_image(gen, name='fake')
41 | 
42 | # discriminator
43 | disc_real = discriminator(x)
44 | disc_fake = discriminator(gen)
45 | 
46 | #
47 | # pull-away term ( PT ) regularizer
48 | #
49 | 
50 | sample = gen.sg_flatten()
51 | nom = tf.matmul(sample, tf.transpose(sample, perm=[1, 0]))
52 | denom = tf.reduce_sum(tf.square(sample), reduction_indices=[1], keep_dims=True)
53 | pt = tf.square(nom/denom)
54 | pt -= tf.diag(tf.diag_part(pt))
55 | pt = tf.reduce_sum(pt) / (batch_size * (batch_size - 1))
56 | 
57 | 
58 | #
59 | # loss & train ops
60 | #
61 | 
62 | # mean squared errors
63 | mse_real = tf.reduce_mean(tf.square(disc_real - x), reduction_indices=[1, 2, 3])
64 | mse_fake = tf.reduce_mean(tf.square(disc_fake - gen), reduction_indices=[1, 2, 3])
65 | 
66 | # discriminator loss
67 | loss_disc = mse_real + tf.maximum(margin - mse_fake, 0)
68 | # generator loss + PT regularizer
69 | loss_gen = mse_fake + pt * pt_weight
70 | 
71 | train_disc = tf.sg_optim(loss_disc, lr=0.001, category='discriminator')  # discriminator train ops
72 | train_gen = tf.sg_optim(loss_gen, lr=0.001, category='generator')  # generator train ops
73 | 
74 | # add summary
75 | tf.sg_summary_loss(loss_disc, name='disc')
76 | tf.sg_summary_loss(loss_gen, name='gen')
77 | 
78 | 
79 | #
80 | # training
81 | #
82 | 
83 | # def alternate training func
84 | @tf.sg_train_func
85 | def alt_train(sess, opt):
86 |     l_disc = sess.run([loss_disc, train_disc])[0]  # training discriminator
87 |     l_gen = sess.run([loss_gen, train_gen])[0]     # training generator
88 |     return np.mean(l_disc) + np.mean(l_gen)
89 | 
90 | # do training
91 | alt_train(log_interval=10, max_ep=30, ep_size=data.train.num_batch)
92 | 
93 | 
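
For comparison, the pull-away term in the EBGAN paper is the mean squared cosine similarity over distinct pairs of generator samples; the script above appears to normalize by the squared norm of one sample only, a slight variant. A NumPy sketch of the paper's version:

```python
import numpy as np

def pulling_away_term(samples):
    """PT from the EBGAN paper: mean squared cosine similarity over distinct pairs."""
    s = samples / np.linalg.norm(samples, axis=1, keepdims=True)  # unit-norm rows
    cos2 = (s @ s.T) ** 2                                         # squared cosine similarities
    n = len(samples)
    return (cos2.sum() - n) / (n * (n - 1))                       # drop the diagonal (all ones)

print(pulling_away_term(np.random.randn(128, 50)))                # small when samples are near-orthogonal
```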


--------------------------------------------------------------------------------
/GAN/ebgan-master/model.py:
--------------------------------------------------------------------------------
 1 | import sugartensor as tf
 2 | 
 3 | #
 4 | # hyper parameters
 5 | #
 6 | 
 7 | z_dim = 50        # noise dimension
 8 | margin = 1        # max-margin for hinge loss
 9 | pt_weight = 0.1   # PT regularizer's weight
10 | 
11 | 
12 | #
13 | # create generator
14 | #
15 | 
16 | def generator(x):
17 | 
18 |     reuse = len([t for t in tf.global_variables() if t.name.startswith('generator')]) > 0
19 |     with tf.sg_context(name='generator', size=4, stride=2, act='leaky_relu', bn=True, reuse=reuse):
20 | 
21 |         # generator network
22 |         res = (x.sg_dense(dim=1024, name='fc_1')
23 |                .sg_dense(dim=7*7*128, name='fc_2')
24 |                .sg_reshape(shape=(-1, 7, 7, 128))
25 |                .sg_upconv(dim=64, name='conv_1')
26 |                .sg_upconv(dim=1, act='sigmoid', bn=False, name='conv_2'))
27 |     return res
28 | 
29 | 
30 | #
31 | # create discriminator
32 | #
33 | 
34 | def discriminator(x):
35 | 
36 |     reuse = len([t for t in tf.global_variables() if t.name.startswith('discriminator')]) > 0
37 |     with tf.sg_context(name='discriminator', size=4, stride=2, act='leaky_relu', bn=True, reuse=reuse):
38 |         res = (x.sg_conv(dim=64, name='conv_1')        # encoder half
39 |                 .sg_conv(dim=128, name='conv_2')
40 |                 .sg_upconv(dim=64, name='conv_3')      # decoder half: D is an autoencoder (EBGAN energy = reconstruction error)
41 |                 .sg_upconv(dim=1, act='linear', name='conv_4'))
42 | 
43 |     return res
44 | 


--------------------------------------------------------------------------------
/GAN/ebgan-master/png/sample.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ebgan-master/png/sample.png


--------------------------------------------------------------------------------
/GAN/ebgan-master/png/sample_with_pt.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/ebgan-master/png/sample_with_pt.png


--------------------------------------------------------------------------------
/GAN/self_attention _gan/44310_D.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/44310_D.pth


--------------------------------------------------------------------------------
/GAN/self_attention _gan/44310_G.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/44310_G.pth


--------------------------------------------------------------------------------
/GAN/self_attention _gan/__pycache__/data_loader.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/__pycache__/data_loader.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/self_attention _gan/__pycache__/parameter.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/__pycache__/parameter.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/self_attention _gan/__pycache__/sagan_models.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/__pycache__/sagan_models.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/self_attention _gan/__pycache__/spectral.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/__pycache__/spectral.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/self_attention _gan/__pycache__/trainer.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/__pycache__/trainer.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/self_attention _gan/__pycache__/utils.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/__pycache__/utils.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/self_attention _gan/data_loader.py:
--------------------------------------------------------------------------------
 1 | import torch
 2 | import torchvision.datasets as dsets
 3 | from torchvision import transforms
 4 | 
 5 | 
 6 | class Data_Loader():
 7 |     def __init__(self, train, dataset, image_path, image_size, batch_size, shuf=True):
 8 |         self.dataset = dataset
 9 |         self.path = image_path
10 |         self.imsize = image_size
11 |         self.batch = batch_size
12 |         self.shuf = shuf
13 |         self.train = train
14 | 
15 |     def transform(self, resize, totensor, normalize, centercrop):
16 |         options = []
17 |         if centercrop:
18 |             options.append(transforms.CenterCrop(160))
19 |         if resize:
20 |             options.append(transforms.Scale((self.imsize, self.imsize)))  # Scale was renamed Resize in torchvision >= 0.2
21 |         if totensor:
22 |             options.append(transforms.ToTensor())
23 |         if normalize:
24 |             options.append(transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)))
25 |         transform = transforms.Compose(options)
26 |         return transform
27 | 
28 |     def load_lsun(self, classes='church_outdoor_train'):
29 |         transform = self.transform(True, True, True, False)  # renamed from `transforms` to avoid shadowing the module
30 |         dataset = dsets.LSUN(self.path, classes=[classes], transform=transform)
31 |         return dataset
32 | 
33 |     def load_celeb(self):
34 |         transform = self.transform(True, True, True, True)
35 |         # NOTE: the path is hardcoded here, so self.path is ignored
36 |         dataset = dsets.ImageFolder('data/CelebA', transform=transform)
37 |         return dataset
38 | 
39 | 
40 |     def loader(self):
41 |         if self.dataset == 'lsun':
42 |             dataset = self.load_lsun()
43 |         elif self.dataset == 'celeb':
44 |             dataset = self.load_celeb()
45 | 
46 |         loader = torch.utils.data.DataLoader(dataset=dataset,
47 |                                               batch_size=self.batch,
48 |                                               shuffle=self.shuf,
49 |                                               num_workers=2,
50 |                                               drop_last=True)
51 |         return loader
52 | 
53 | 
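
Hypothetical usage, for reference (the parameter values are illustrative):

```python
from data_loader import Data_Loader

# Build a CelebA loader at 64x64 resolution; assumes images live under data/CelebA.
data_loader = Data_Loader(train=True, dataset='celeb', image_path='./data',
                          image_size=64, batch_size=64)
loader = data_loader.loader()
images, labels = next(iter(loader))   # images: (64, 3, 64, 64), normalized to [-1, 1]
```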


--------------------------------------------------------------------------------
/GAN/self_attention _gan/download.sh:
--------------------------------------------------------------------------------
 1 | FILE=$1
 2 | 
 3 | if [ "$FILE" == 'CelebA' ]
 4 | then
 5 |     URL=https://www.dropbox.com/s/3e5cmqgplchz85o/CelebA_nocrop.zip?dl=0
 6 |     ZIP_FILE=./data/CelebA.zip
 7 | 
 8 | elif [ "$FILE" == 'LSUN' ]
 9 | then
10 |     URL=https://www.dropbox.com/s/zt7d2hchrw7cp9p/church_outdoor_train_lmdb.zip?dl=0
11 |     ZIP_FILE=./data/church_outdoor_train_lmdb.zip
12 | else
13 |     echo "Available datasets are: CelebA and LSUN"
14 |     exit 1
15 | fi
16 | 
17 | mkdir -p ./data/
18 | wget -N "$URL" -O "$ZIP_FILE"
19 | unzip "$ZIP_FILE" -d ./data/
20 | 
21 | if [ "$FILE" == 'CelebA' ]
22 | then
23 |     mv ./data/CelebA_nocrop ./data/CelebA
24 | fi
25 | 
26 | rm "$ZIP_FILE"
27 | 


--------------------------------------------------------------------------------
/GAN/self_attention _gan/image/attn_gf1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/image/attn_gf1.png


--------------------------------------------------------------------------------
/GAN/self_attention _gan/image/attn_gf2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/image/attn_gf2.png


--------------------------------------------------------------------------------
/GAN/self_attention _gan/image/main_model.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/image/main_model.PNG


--------------------------------------------------------------------------------
/GAN/self_attention _gan/image/sagan_attn.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/image/sagan_attn.png


--------------------------------------------------------------------------------
/GAN/self_attention _gan/image/sagan_celeb.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/image/sagan_celeb.png


--------------------------------------------------------------------------------
/GAN/self_attention _gan/image/sagan_lsun.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/image/sagan_lsun.png


--------------------------------------------------------------------------------
/GAN/self_attention _gan/image/unnamed:
--------------------------------------------------------------------------------
1 |  
2 | 


--------------------------------------------------------------------------------
/GAN/self_attention _gan/main.py:
--------------------------------------------------------------------------------
 1 | 
 2 | from parameter import *
 3 | from trainer import Trainer
 4 | # from tester import Tester
 5 | from data_loader import Data_Loader
 6 | from torch.backends import cudnn
 7 | from utils import make_folder
 8 | import os
 9 | os.environ["CUDA_VISIBLE_DEVICES"] = "0"
10 | def main(config):
11 |     # For fast training
12 |     cudnn.benchmark = True
13 | 
14 | 
15 |     # Data loader
16 |     data_loader = Data_Loader(config.train, config.dataset, config.image_path, config.imsize,
17 |                              config.batch_size, shuf=config.train)
18 | 
19 |     # Create directories if not exist
20 |     make_folder(config.model_save_path, config.version)
21 |     make_folder(config.sample_path, config.version)
22 |     make_folder(config.log_path, config.version)
23 |     make_folder(config.attn_path, config.version)
24 | 
25 | 
26 |     if config.train:
27 |         if config.model=='sagan':
28 |             trainer = Trainer(data_loader.loader(), config)
29 |         elif config.model == 'qgan':
30 |             trainer = qgan_trainer(data_loader.loader(), config)  # NOTE: qgan_trainer is never imported in this file
31 |         trainer.train()
32 |     else:
33 |         tester = Tester(data_loader.loader(), config)  # NOTE: requires the commented-out `from tester import Tester`
34 |         tester.test()
35 | 
36 | if __name__ == '__main__':
37 |     config = get_parameters()
38 |     print(config)
39 |     main(config)


--------------------------------------------------------------------------------
/GAN/self_attention _gan/models/sagan_celeb/44310_D.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/models/sagan_celeb/44310_D.pth


--------------------------------------------------------------------------------
/GAN/self_attention _gan/models/sagan_celeb/44310_G.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/models/sagan_celeb/44310_G.pth


--------------------------------------------------------------------------------
/GAN/self_attention _gan/parameter.py:
--------------------------------------------------------------------------------
 1 | import argparse
 2 | 
 3 | def str2bool(v):
 4 |     return v.lower() == 'true'  # `in ('true')` tested substring membership, so 't', 'ru', etc. also passed
 5 | 
 6 | def get_parameters():
 7 | 
 8 |     parser = argparse.ArgumentParser()
 9 | 
10 |     # Model hyper-parameters
11 |     parser.add_argument('--model', type=str, default='sagan', choices=['sagan', 'qgan'])
12 |     parser.add_argument('--adv_loss', type=str, default='wgan-gp', choices=['wgan-gp', 'hinge'])
13 |     parser.add_argument('--imsize', type=int, default=32)
14 |     parser.add_argument('--g_num', type=int, default=5)
15 |     parser.add_argument('--z_dim', type=int, default=128)
16 |     parser.add_argument('--g_conv_dim', type=int, default=64)
17 |     parser.add_argument('--d_conv_dim', type=int, default=64)
18 |     parser.add_argument('--lambda_gp', type=float, default=10)
19 |     parser.add_argument('--version', type=str, default='sagan_1')
20 | 
21 |     # Training setting
22 |     parser.add_argument('--total_step', type=int, default=1000000, help='how many times to update the generator')
23 |     parser.add_argument('--d_iters', type=int, default=5)  # was type=float; the count of D steps per G step is an integer
24 |     parser.add_argument('--batch_size', type=int, default=64)
25 |     parser.add_argument('--num_workers', type=int, default=2)
26 |     parser.add_argument('--g_lr', type=float, default=0.0001)
27 |     parser.add_argument('--d_lr', type=float, default=0.0004)
28 |     parser.add_argument('--lr_decay', type=float, default=0.95)
29 |     parser.add_argument('--beta1', type=float, default=0.0)
30 |     parser.add_argument('--beta2', type=float, default=0.9)
31 | 
32 |     # using pretrained
33 |     parser.add_argument('--pretrained_model', type=int, default=None)
34 | 
35 |     # Misc
36 |     parser.add_argument('--train', type=str2bool, default=True)
37 |     parser.add_argument('--parallel', type=str2bool, default=False)
38 |     parser.add_argument('--dataset', type=str, default='celeb', choices=['lsun', 'celeb'])  # old default 'cifar' was not a valid choice and broke loader()
39 |     parser.add_argument('--use_tensorboard', type=str2bool, default=False)
40 | 
41 |     # Path
42 |     parser.add_argument('--image_path', type=str, default='./data')
43 |     parser.add_argument('--log_path', type=str, default='./logs')
44 |     parser.add_argument('--model_save_path', type=str, default='./models')
45 |     parser.add_argument('--sample_path', type=str, default='./samples')
46 |     parser.add_argument('--attn_path', type=str, default='./attn')
47 | 
48 |     # Step size
49 |     parser.add_argument('--log_step', type=int, default=10)
50 |     parser.add_argument('--sample_step', type=int, default=100)
51 |     parser.add_argument('--model_save_step', type=float, default=1.0)
52 | 
53 | 
54 |     return parser.parse_args()


--------------------------------------------------------------------------------
/GAN/self_attention _gan/parameter.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/parameter.pyc


--------------------------------------------------------------------------------
/GAN/self_attention _gan/readcy.md:
--------------------------------------------------------------------------------
1 | ## Error:
2 | 	AttributeError: module 'torchvision.transforms' has no attribute 'Resize'
3 | ### Solution:
4 | 	Upgrade torchvision (`Resize` replaced the deprecated `Scale` in torchvision 0.2); on older versions, fall back to `transforms.Scale`.


--------------------------------------------------------------------------------
/GAN/self_attention _gan/sample/sagan_celeb/45800_fake.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/sample/sagan_celeb/45800_fake.png


--------------------------------------------------------------------------------
/GAN/self_attention _gan/sample/sagan_celeb/45900_fake.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/sample/sagan_celeb/45900_fake.png


--------------------------------------------------------------------------------
/GAN/self_attention _gan/sample/sagan_celeb/46000_fake.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/sample/sagan_celeb/46000_fake.png


--------------------------------------------------------------------------------
/GAN/self_attention _gan/spectral.py:
--------------------------------------------------------------------------------
 1 | import torch
 2 | from torch.optim.optimizer import Optimizer, required
 3 | 
 4 | from torch.autograd import Variable
 5 | import torch.nn.functional as F
 6 | from torch import nn
 7 | from torch import Tensor
 8 | from torch.nn import Parameter
 9 | 
10 | def l2normalize(v, eps=1e-12):
11 |     return v / (v.norm() + eps)
12 | 
13 | 
14 | class SpectralNorm(nn.Module):
15 |     def __init__(self, module, name='weight', power_iterations=1):
16 |         super(SpectralNorm, self).__init__()
17 |         self.module = module
18 |         self.name = name
19 |         self.power_iterations = power_iterations
20 |         if not self._made_params():
21 |             self._make_params()
22 | 
23 |     def _update_u_v(self):
24 |         u = getattr(self.module, self.name + "_u")
25 |         v = getattr(self.module, self.name + "_v")
26 |         w = getattr(self.module, self.name + "_bar")
27 | 
28 |         height = w.data.shape[0]
29 |         for _ in range(self.power_iterations):
30 |             v.data = l2normalize(torch.mv(torch.t(w.view(height,-1).data), u.data))
31 |             u.data = l2normalize(torch.mv(w.view(height,-1).data, v.data))
32 | 
33 |         # sigma = torch.dot(u.data, torch.mv(w.view(height,-1).data, v.data))
34 |         sigma = u.dot(w.view(height, -1).mv(v))
35 |         setattr(self.module, self.name, w / sigma.expand_as(w))
36 | 
37 |     def _made_params(self):
38 |         try:
39 |             u = getattr(self.module, self.name + "_u")
40 |             v = getattr(self.module, self.name + "_v")
41 |             w = getattr(self.module, self.name + "_bar")
42 |             return True
43 |         except AttributeError:
44 |             return False
45 | 
46 | 
47 |     def _make_params(self):
48 |         w = getattr(self.module, self.name)
49 | 
50 |         height = w.data.shape[0]
51 |         width = w.view(height, -1).data.shape[1]
52 | 
53 |         u = Parameter(w.data.new(height).normal_(0, 1), requires_grad=False)
54 |         v = Parameter(w.data.new(width).normal_(0, 1), requires_grad=False)
55 |         u.data = l2normalize(u.data)
56 |         v.data = l2normalize(v.data)
57 |         w_bar = Parameter(w.data)
58 | 
59 |         del self.module._parameters[self.name]
60 | 
61 |         self.module.register_parameter(self.name + "_u", u)
62 |         self.module.register_parameter(self.name + "_v", v)
63 |         self.module.register_parameter(self.name + "_bar", w_bar)
64 | 
65 | 
66 |     def forward(self, *args):
67 |         self._update_u_v()
68 |         return self.module.forward(*args)
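
A minimal usage sketch: `SpectralNorm` wraps any module carrying a `weight` parameter; each forward pass runs one power-iteration step (v <- normalize(W^T u), u <- normalize(W v)), estimates sigma ~ u^T W v, and rescales the weight by 1/sigma before delegating to the wrapped module.

```python
import torch
import torch.nn as nn
from spectral import SpectralNorm

conv = SpectralNorm(nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1))
x = torch.randn(8, 3, 32, 32)
y = conv(x)      # one power-iteration step, then the normalized convolution
print(y.shape)   # torch.Size([8, 64, 16, 16])
```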


--------------------------------------------------------------------------------
/GAN/self_attention _gan/trainer.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/self_attention _gan/trainer.pyc


--------------------------------------------------------------------------------
/GAN/self_attention _gan/utils.py:
--------------------------------------------------------------------------------
 1 | import os
 2 | import torch
 3 | from torch.autograd import Variable
 4 | 
 5 | 
 6 | def make_folder(path, version):
 7 |         if not os.path.exists(os.path.join(path, version)):
 8 |             os.makedirs(os.path.join(path, version))
 9 | 
10 | 
11 | def tensor2var(x, grad=False):
12 |     if torch.cuda.is_available():
13 |         x = x.cuda()
14 |     return Variable(x, requires_grad=grad)
15 | 
16 | def var2tensor(x):
17 |     return x.data.cpu()
18 | 
19 | def var2numpy(x):
20 |     return x.data.cpu().numpy()
21 | 
22 | def denorm(x):
23 |     out = (x + 1) / 2
24 |     return out.clamp_(0, 1)
25 | 
26 | 
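
`denorm` maps tanh-range generator outputs back to [0, 1] for saving; a tiny usage sketch:

```python
import torch
from utils import denorm

x = torch.tanh(torch.randn(4, 3, 64, 64))   # generator-style output in [-1, 1]
imgs = denorm(x)                            # now in [0, 1], ready for save_image
```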


--------------------------------------------------------------------------------
/GAN/srgan_celebA/README.md:
--------------------------------------------------------------------------------
 1 | ### A super-resolution GAN (SRGAN) applied to the celebA dataset
 2 | We rely on pretrained VGG weights.
 3 | Before using it, put the 'celebA' dataset under 'datasets/'.
 4 | 
 5 | Then run
 6 |  ``` 
 7 |  python srgan.py
 8 |  ```
 9 | Note: if you need border-free images for computing IS and FID, just comment out the model-saving call at the very end and uncomment the annotated code below it.
10 | SRGAN: https://arxiv.org/pdf/1609.04802
11 | 


--------------------------------------------------------------------------------
/GAN/srgan_celebA/__pycache__/data_loader.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/srgan_celebA/__pycache__/data_loader.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/srgan_celebA/data_loader.py:
--------------------------------------------------------------------------------
 1 | import scipy
 2 | from glob import glob
 3 | import numpy as np
 4 | import matplotlib.pyplot as plt
 5 | import matplotlib
 6 | matplotlib.use('Agg')
 7 | class DataLoader():
 8 |     def __init__(self, dataset_name, img_res=(128, 128)):
 9 |         self.dataset_name = dataset_name
10 |         self.img_res = img_res
11 | 
12 |     def load_data(self, batch_size=1, is_testing=False):
13 |         data_type = "train" if not is_testing else "test"
14 |         
15 |         path = glob('./datasets/%s/*' % (self.dataset_name))
16 | 
17 |         batch_images = np.random.choice(path, size=batch_size)
18 | 
19 |         imgs_hr = []
20 |         imgs_lr = []
21 |         for img_path in batch_images:
22 |             img = self.imread(img_path)
23 | 
24 |             h, w = self.img_res
25 |             low_h, low_w = int(h / 4), int(w / 4)
26 | 
27 |             img_hr = scipy.misc.imresize(img, self.img_res)
28 |             img_lr = scipy.misc.imresize(img, (low_h, low_w))
29 | 
30 |             # If training => do random flip
31 |             if not is_testing and np.random.random() < 0.5:
32 |                 img_hr = np.fliplr(img_hr)
33 |                 img_lr = np.fliplr(img_lr)
34 | 
35 |             imgs_hr.append(img_hr)
36 |             imgs_lr.append(img_lr)
37 | 
38 |         imgs_hr = np.array(imgs_hr) / 127.5 - 1.
39 |         imgs_lr = np.array(imgs_lr) / 127.5 - 1.
40 | 
41 |         return imgs_hr, imgs_lr
42 | 
43 | 
44 |     def imread(self, path):
45 |         return scipy.misc.imread(path, mode='RGB').astype(np.float)
46 | 
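
`scipy.misc.imread`/`imresize` were removed in SciPy >= 1.2, so this loader fails on modern environments; a drop-in sketch using Pillow instead (an assumption of this note, not part of the repo):

```python
import numpy as np
from PIL import Image

def imread(path):
    # Read as an RGB float array, matching scipy.misc.imread(path, mode='RGB').
    return np.asarray(Image.open(path).convert('RGB'), dtype=float)

def imresize(img, size):
    # size is (height, width); PIL's resize expects (width, height).
    return np.asarray(Image.fromarray(img.astype(np.uint8)).resize((size[1], size[0])))
```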


--------------------------------------------------------------------------------
/GAN/srgan_celebA/images/celebA/4950.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/srgan_celebA/images/celebA/4950.png


--------------------------------------------------------------------------------
/GAN/srgan_celebA/images/celebA/4950_lowres0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/srgan_celebA/images/celebA/4950_lowres0.png


--------------------------------------------------------------------------------
/GAN/srgan_celebA/images/celebA/4950_lowres1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/srgan_celebA/images/celebA/4950_lowres1.png


--------------------------------------------------------------------------------
/GAN/srgan_celebA/images/celebA/5000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/srgan_celebA/images/celebA/5000.png


--------------------------------------------------------------------------------
/GAN/srgan_celebA/images/celebA/5000_lowres0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/srgan_celebA/images/celebA/5000_lowres0.png


--------------------------------------------------------------------------------
/GAN/srgan_celebA/images/celebA/5000_lowres1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/srgan_celebA/images/celebA/5000_lowres1.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/images/fashion_mnist_29700.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/wgan_gp/images/fashion_mnist_29700.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/images/fashion_mnist_29800.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/wgan_gp/images/fashion_mnist_29800.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/images/fashion_mnist_29900.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/wgan_gp/images/fashion_mnist_29900.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/images/mnist_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/wgan_gp/images/mnist_0.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/images/mnist_100.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/wgan_gp/images/mnist_100.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/images/mnist_200.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/wgan_gp/images/mnist_200.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/images/mnist_29600.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/wgan_gp/images/mnist_29600.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/images/mnist_29700.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/wgan_gp/images/mnist_29700.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/images/mnist_29800.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/wgan_gp/images/mnist_29800.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/images/mnist_29900.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/wgan_gp/images/mnist_29900.png


--------------------------------------------------------------------------------
/GAN/wgan_gp/readme.md:
--------------------------------------------------------------------------------
 1 | Fixed the display problem.
 2 | 
 3 | A merge() that strips the white borders:
 4 | ```python
 5 | def merge(images, size):
 6 | 	h, w= images.shape[1], images.shape[2]
 7 | 	img = np.zeros((h * size[0], w * size[1]))
 8 | 	for idx, image in enumerate(images):
 9 | 		i = idx % size[1]
10 | 		j = idx // size[1]
11 | 		img[j*h:j*h+h, i*w:i*w+w] = image
12 | 	return img
13 | ```
14 | Combining several images into one:
15 | ```python
16 | from scipy.misc import *
17 | r, c = 10, 10
18 | noise = np.random.normal(0, 1, (r * c, self.latent_dim))
19 | gen_imgs = self.generator.predict(noise)
20 | ```
21 | 
22 | Rescaling images to 0 - 1 and saving a 7x7 grid:
23 | ```python
24 | gen_imgs = 0.5 * gen_imgs + 0.5  # tanh output in [-1, 1] -> [0, 1]; `0.5 * x + 1` would land in [0.5, 1.5]
25 | gen_imgs=gen_imgs.reshape(-1,28,28)
26 | gen_imgs = merge(gen_imgs[:49], [7,7])
27 | imsave("images/mnist_%d.png" % epoch,gen_imgs)
28 | ```
29 | Run `python wgan.py`
30 | 
31 | Results:
32 | ```
33 | 29995 [D loss: -1.087117] [G loss: 4.016634]
34 | 29996 [D loss: -0.511691] [G loss: 3.625752]
35 | 29997 [D loss: -0.533835] [G loss: 4.005987]
36 | 29998 [D loss: -0.423012] [G loss: 3.547036]
37 | 29999 [D loss: 0.091400] [G loss: 4.133564]
38 | ```
39 | Known issue: gradients explode very easily; the W-ACGAN reproduction currently fails.
40 | 


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/__pycache__/model.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/__pycache__/model.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/__pycache__/ops.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/__pycache__/ops.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/ckpt/SC-FEGAN.ckpt.index:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/ckpt/SC-FEGAN.ckpt.index


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/demo.yaml:
--------------------------------------------------------------------------------
1 | INPUT_SIZE: 512
2 | BATCH_SIZE: 1
3 | 
4 | GPU_NUM: 0
5 | 
6 | # directories
7 | CKPT_DIR: './ckpt/SC-FEGAN.ckpt'
8 | 


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/imgs/GUI.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/imgs/GUI.gif


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/imgs/earring.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/imgs/earring.jpg


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/imgs/face_edit.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/imgs/face_edit.jpg


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/imgs/restoration.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/imgs/restoration.jpg


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/imgs/restoration2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/imgs/restoration2.jpg


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/imgs/teaser.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/imgs/teaser.jpg


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/ops.py:
--------------------------------------------------------------------------------
 1 | import cv2
 2 | import numpy as np
 3 | import tensorflow as tf
 4 | from tensorflow.contrib.framework.python.ops import add_arg_scope
 5 | 
 6 | @add_arg_scope
 7 | def gate_conv(x_in, cnum, ksize, stride=1, rate=1, name='conv',
 8 |              padding='SAME', activation='leaky_relu', use_lrn=True,training=True):
 9 |     assert padding in ['SYMMETRIC', 'SAME', 'REFLECT']  # 'REFELECT' was a typo; tf.pad only accepts 'REFLECT'
10 |     if padding == 'SYMMETRIC' or padding == 'REFLECT':
11 |         p = int(rate*(ksize-1)/2)
12 |         x_in = tf.pad(x_in, [[0,0], [p, p], [p, p], [0,0]], mode=padding)  # pad the tensor both convs below consume (was assigned to `x` and discarded)
13 |         padding = 'VALID'
14 |     x = tf.layers.conv2d(
15 |         x_in, cnum, ksize, stride, dilation_rate=rate,
16 |         activation=None, padding=padding, name=name)
17 |     if use_lrn:
18 |         x = tf.nn.lrn(x, bias=0.00005)
19 |     if activation=='leaky_relu':
20 |         x = tf.nn.leaky_relu(x)
21 | 
22 |     g = tf.layers.conv2d(
23 |         x_in, cnum, ksize, stride, dilation_rate=rate,
24 |         activation=tf.nn.sigmoid, padding=padding, name=name+'_g')
25 | 
26 |     x = tf.multiply(x,g)
27 |     return x, g
28 | 
29 | @add_arg_scope
30 | def gate_deconv(input_, output_shape, k_h=5, k_w=5, d_h=2, d_w=2, stddev=0.02,
31 |        name="deconv", training=True):
32 |     with tf.variable_scope(name):
33 |         # filter : [height, width, output_channels, in_channels]
34 |         w = tf.get_variable('w', [k_h, k_w, output_shape[-1], input_.get_shape()[-1]],
35 |                   initializer=tf.random_normal_initializer(stddev=stddev))
36 | 
37 |         deconv = tf.nn.conv2d_transpose(input_, w, output_shape=output_shape,
38 |                     strides=[1, d_h, d_w, 1])
39 | 
40 |         biases = tf.get_variable('biases1', [output_shape[-1]], initializer=tf.constant_initializer(0.0))
41 |         deconv = tf.reshape(tf.nn.bias_add(deconv, biases), deconv.get_shape())
42 |         deconv = tf.nn.leaky_relu(deconv)
43 | 
44 |         g = tf.nn.conv2d_transpose(input_, w, output_shape=output_shape,
45 |                     strides=[1, d_h, d_w, 1])
46 |         b = tf.get_variable('biases2', [output_shape[-1]], initializer=tf.constant_initializer(0.0))
47 |         g = tf.reshape(tf.nn.bias_add(g, b), deconv.get_shape())
48 |         g = tf.nn.sigmoid(g)  # gate comes from the gating branch; sigmoid(deconv) discarded g entirely
49 | 
50 |         deconv = tf.multiply(g,deconv)
51 | 
52 |         return deconv, g
53 | 
54 | 
55 | 
56 | 
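
A minimal graph-mode sketch of calling `gate_conv` (TF 1.x assumed, as in the rest of this file; the shapes are illustrative):

```python
import tensorflow as tf
from ops import gate_conv

# NHWC input, e.g. image, mask, and sketch channels concatenated.
x = tf.placeholder(tf.float32, [1, 512, 512, 9])
feat, gate = gate_conv(x, cnum=64, ksize=7, stride=2, name='gconv1')
# feat: (1, 256, 256, 64) gated features; gate: the matching sigmoid mask.
```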


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/tmp.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/tmp.jpg


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/ui/__pycache__/mouse_event.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/ui/__pycache__/mouse_event.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/ui/__pycache__/ui.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/ui/__pycache__/ui.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/utils/__pycache__/config.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/GAN/人脸还原SCFEGAN/utils/__pycache__/config.cpython-36.pyc


--------------------------------------------------------------------------------
/GAN/人脸还原SCFEGAN/utils/config.py:
--------------------------------------------------------------------------------
 1 | import argparse
 2 | import yaml
 3 | import os
 4 | import logging
 5 | 
 6 | logger = logging.getLogger()
 7 | 
 8 | class Config(object):
 9 |     def __init__(self, filename=None):
10 |         assert os.path.exists(filename), "ERROR: Config File doesn't exist."
11 |         try:
12 |             with open(filename, 'r') as f:
13 |                 self._cfg_dict = yaml.safe_load(f)  # yaml.load without a Loader is deprecated and unsafe
14 |         # parent of IOError, OSError *and* WindowsError where available
15 |         except EnvironmentError:
16 |             logger.error('Please check the file with name of "%s"', filename)
17 |         logger.info(' APP CONFIG '.center(80, '-'))
18 |         logger.info(''.center(80, '-'))
19 | 
20 |     def __getattr__(self, name):
21 |         value = self._cfg_dict[name]
22 |         if isinstance(value, dict):
23 |             value = DictAsMember(value)
24 |         return value
25 | 
26 | 
27 | # DictAsMember is referenced above but missing from this file; a minimal
28 | # definition exposing dict keys as attributes makes the file self-contained.
29 | class DictAsMember(dict):
30 |     def __getattr__(self, name):
31 |         value = self[name]
32 |         if isinstance(value, dict):
33 |             value = DictAsMember(value)
34 |         return value
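
Hypothetical usage with the `demo.yaml` shipped in this folder (assuming this file is on the import path):

```python
from config import Config

config = Config('demo.yaml')
print(config.INPUT_SIZE)   # 512
print(config.CKPT_DIR)     # './ckpt/SC-FEGAN.ckpt'
```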


--------------------------------------------------------------------------------
/ML/nndl-book.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/ML/nndl-book.pdf


--------------------------------------------------------------------------------
/ML/神经网络与深度学习-3小时.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/ML/神经网络与深度学习-3小时.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day1/1406.2661.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day1/1406.2661.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day1/PPT.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day1/PPT.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day1/gan/images/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day1/gan/saved_model/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day10/infoGAN.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day10/infoGAN.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day10/infogan/images/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day10/infogan/saved_model/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day10/md2zh.py:
--------------------------------------------------------------------------------
 1 | 
 2 | import re
 3 | def repl(m):
 4 | 	inner_word = m.group(1)
 5 | 	return '<br><br>$$' + inner_word + '$$<br><br>'
 6 | with open('readme.md', 'r') as f_read:
 7 | 	text = f_read.readlines()
 8 | 	for k, item in enumerate(text):
 9 | 		text[k] = re.sub(r'\$\$(.*?)\$\$', repl, item)
10 | 	
11 | 	with open('zhihu.md', 'w') as f_write:
12 | 		f_write.writelines(text)


--------------------------------------------------------------------------------
/One_Day_One_GAN/day10/readme.docx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day10/readme.docx


--------------------------------------------------------------------------------
/One_Day_One_GAN/day10/zhihu2.docx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day10/zhihu2.docx


--------------------------------------------------------------------------------
/One_Day_One_GAN/day13/cyclegan/data/download_cyclegan_dataset.sh:
--------------------------------------------------------------------------------
 1 | #!/bin/bash
 2 | 
 3 | FILE=$1
 4 | 
 5 | if [[ $FILE != "ae_photos" && $FILE != "apple2orange" && $FILE != "summer2winter_yosemite" &&  $FILE != "horse2zebra" && $FILE != "monet2photo" && $FILE != "cezanne2photo" && $FILE != "ukiyoe2photo" && $FILE != "vangogh2photo" && $FILE != "maps" && $FILE != "cityscapes" && $FILE != "facades" && $FILE != "iphone2dslr_flower" && $FILE != "ae_photos" ]]; then
 6 |     echo "Available datasets are: apple2orange, summer2winter_yosemite, horse2zebra, monet2photo, cezanne2photo, ukiyoe2photo, vangogh2photo, maps, cityscapes, facades, iphone2dslr_flower, ae_photos"
 7 |     exit 1
 8 | fi
 9 | 
10 | URL=https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/$FILE.zip
11 | ZIP_FILE=./$FILE.zip
12 | TARGET_DIR=./$FILE
13 | wget -N $URL -O $ZIP_FILE
14 | unzip $ZIP_FILE -d .
15 | rm $ZIP_FILE
16 | 
17 | # Adapt to the project's expected directory hierarchy
18 | mkdir -p "$TARGET_DIR/train" "$TARGET_DIR/test"
19 | mv "$TARGET_DIR/trainA" "$TARGET_DIR/train/A"
20 | mv "$TARGET_DIR/trainB" "$TARGET_DIR/train/B"
21 | mv "$TARGET_DIR/testA" "$TARGET_DIR/test/A"
22 | mv "$TARGET_DIR/testB" "$TARGET_DIR/test/B"
23 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day13/cyclegan/data/download_pix2pix_dataset.sh:
--------------------------------------------------------------------------------
1 | FILE=$1
2 | URL=https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/$FILE.tar.gz
3 | TAR_FILE=./$FILE.tar.gz
4 | TARGET_DIR=./$FILE/
5 | wget -N $URL -O $TAR_FILE
6 | mkdir $TARGET_DIR
7 | tar -zxvf $TAR_FILE -C ./
8 | rm $TAR_FILE
9 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day13/cyclegan/datasets.py:
--------------------------------------------------------------------------------
 1 | import glob
 2 | import random
 3 | import os
 4 | 
 5 | from torch.utils.data import Dataset
 6 | from PIL import Image
 7 | import torchvision.transforms as transforms
 8 | 
 9 | 
10 | def to_rgb(image):
11 |     rgb_image = Image.new("RGB", image.size)
12 |     rgb_image.paste(image)
13 |     return rgb_image
14 | 
15 | 
16 | class ImageDataset(Dataset):
17 |     def __init__(self, root, transforms_=None, unaligned=False, mode="train"):
18 |         self.transform = transforms.Compose(transforms_)
19 |         self.unaligned = unaligned
20 | 
21 |         self.files_A = sorted(glob.glob(os.path.join(root, "%s/A" % mode) + "/*.*"))
22 |         self.files_B = sorted(glob.glob(os.path.join(root, "%s/B" % mode) + "/*.*"))
23 | 
24 |     def __getitem__(self, index):
25 |         image_A = Image.open(self.files_A[index % len(self.files_A)])
26 | 
27 |         if self.unaligned:
28 |             image_B = Image.open(self.files_B[random.randint(0, len(self.files_B) - 1)])
29 |         else:
30 |             image_B = Image.open(self.files_B[index % len(self.files_B)])
31 | 
32 |         # Convert grayscale images to rgb
33 |         if image_A.mode != "RGB":
34 |             image_A = to_rgb(image_A)
35 |         if image_B.mode != "RGB":
36 |             image_B = to_rgb(image_B)
37 | 
38 |         item_A = self.transform(image_A)
39 |         item_B = self.transform(image_B)
40 |         return {"A": item_A, "B": item_B}
41 | 
42 |     def __len__(self):
43 |         return max(len(self.files_A), len(self.files_B))
44 | 
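
Hypothetical usage, assuming a dataset downloaded by the script above into ./horse2zebra:

```python
from PIL import Image
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from datasets import ImageDataset

transforms_ = [
    transforms.Resize((256, 256), Image.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]
dataloader = DataLoader(
    ImageDataset("./horse2zebra", transforms_=transforms_, unaligned=True),
    batch_size=1, shuffle=True)
batch = next(iter(dataloader))   # {"A": (1, 3, 256, 256), "B": (1, 3, 256, 256)}
```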


--------------------------------------------------------------------------------
/One_Day_One_GAN/day13/cyclegan/horse2zebra.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day13/cyclegan/horse2zebra.gif


--------------------------------------------------------------------------------
/One_Day_One_GAN/day13/cyclegan/test.py:
--------------------------------------------------------------------------------
 1 | from models import *
 2 | from datasets  import  *
 3 | import torch
 4 | from torch.autograd import Variable
 5 | import argparse
 6 | import os
 7 | from torchvision.utils import save_image
 8 | from PIL import Image
 9 | import glob
10 | parser = argparse.ArgumentParser()
11 | parser.add_argument("--image_path", type=str, required=True, help="Path to image")
12 | parser.add_argument("--checkpoint_model", type=str, required=True, help="Path to checkpoint model")
13 | parser.add_argument("--netG", type=str, default='W2M', help="The network structure of the generator")
14 | parser.add_argument("--img_height", type=int, default=256, help="size of image height")
15 | parser.add_argument("--img_width", type=int, default=256, help="size of image width")
16 | parser.add_argument("--channels", type=int, default=3, help="number of image channels")
17 | parser.add_argument("--n_residual_blocks", type=int, default=9, help="number of residual blocks in generator")
18 | 
19 | opt = parser.parse_args()
20 | print(opt)
21 | 
22 | os.makedirs("images/outputs", exist_ok=True)
23 | 
24 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
25 | 
26 | input_shape = (opt.channels, opt.img_height, opt.img_width)
27 | 
28 | 
29 | generator = GeneratorResNet(input_shape, opt.n_residual_blocks).to(device)
30 | 
31 | 
32 | generator.load_state_dict(torch.load(opt.checkpoint_model))
33 | 
34 | generator.eval()
35 | 
36 | transform = transforms.Compose([
37 |     transforms.Resize((512,512), Image.BICUBIC),  # NOTE: fixed 512x512; --img_height/--img_width are not used here
38 |     transforms.ToTensor(),
39 |     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
40 | )
41 | 
42 | # Prepare input
43 | for image_path in glob.glob(opt.image_path+'*.*'):
44 |     print(image_path)
45 |     image_tensor = Variable(transform(Image.open(image_path))).to(device).unsqueeze(0)
46 |     print(image_tensor.shape)
47 | 
48 |     # Translate image with the generator
49 |     with torch.no_grad():
50 |         sr_image = generator(image_tensor).cpu()
51 |     # Save input and translated output side by side
52 |     fn = image_path.split("/")[-1]
53 |     img_grid = torch.cat((image_tensor.cpu(),sr_image), 3)
54 |     save_image(img_grid, f"images/outputs/{fn}",nrow=1,normalize=True)
55 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day13/cyclegan/utils.py:
--------------------------------------------------------------------------------
 1 | import random
 2 | import time
 3 | import datetime
 4 | import sys
 5 | 
 6 | from torch.autograd import Variable
 7 | import torch
 8 | import numpy as np
 9 | 
10 | from torchvision.utils import save_image
11 | 
12 | 
13 | class ReplayBuffer:
14 |     def __init__(self, max_size=50):
15 |         assert max_size > 0, "Empty buffer or trying to create a black hole. Be careful."
16 |         self.max_size = max_size
17 |         self.data = []
18 | 
19 |     def push_and_pop(self, data):
20 |         to_return = []
21 |         for element in data.data:
22 |             element = torch.unsqueeze(element, 0)
23 |             if len(self.data) < self.max_size:
24 |                 self.data.append(element)
25 |                 to_return.append(element)
26 |             else:
27 |                 if random.uniform(0, 1) > 0.5:
28 |                     i = random.randint(0, self.max_size - 1)
29 |                     to_return.append(self.data[i].clone())
30 |                     self.data[i] = element
31 |                 else:
32 |                     to_return.append(element)
33 |         return Variable(torch.cat(to_return))
34 | 
35 | 
36 | class LambdaLR:
37 |     def __init__(self, n_epochs, offset, decay_start_epoch):
38 |         assert (n_epochs - decay_start_epoch) > 0, "Decay must start before the training session ends!"
39 |         self.n_epochs = n_epochs
40 |         self.offset = offset
41 |         self.decay_start_epoch = decay_start_epoch
42 | 
43 |     def step(self, epoch):
44 |         return 1.0 - max(0, epoch + self.offset - self.decay_start_epoch) / (self.n_epochs - self.decay_start_epoch)
45 | 
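
`push_and_pop` implements the generated-image history trick from CycleGAN training: with probability 0.5 the caller gets a previously stored sample instead of the newest one, which keeps the discriminator from overfitting to the latest generator. A self-contained sketch (random tensors stand in for generator output):

```python
import torch
from utils import ReplayBuffer

buffer = ReplayBuffer(max_size=50)
for step in range(3):
    fake = torch.randn(4, 3, 256, 256)    # stand-in for a batch of generated images
    mixed = buffer.push_and_pop(fake)     # newest samples, or replayed older ones
    print(step, mixed.shape)              # torch.Size([4, 3, 256, 256])
```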


--------------------------------------------------------------------------------
/One_Day_One_GAN/day13/to_zhihu.py:
--------------------------------------------------------------------------------
 1 | import re
 2 | def repl(m):
 3 | 	inner_word = m.group(1)
 4 | 	return '<br><br>$$' + inner_word + '$$<br><br>'
 5 | with open('readme.md', 'r') as f_read:
 6 | 	text = f_read.readlines()
 7 | 	for k, item in enumerate(text):
 8 | 		text[k] = re.sub(r'\$\$(.*?)\$\$', repl, item)
 9 | 	
10 | 	with open('2.md', 'w') as f_write:
11 | 		f_write.writelines(text)


--------------------------------------------------------------------------------
/One_Day_One_GAN/day14/pix2pix/datasets.py:
--------------------------------------------------------------------------------
 1 | import glob
 2 | import random
 3 | import os
 4 | import numpy as np
 5 | 
 6 | from torch.utils.data import Dataset
 7 | from PIL import Image
 8 | import torchvision.transforms as transforms
 9 | 
10 | 
11 | class ImageDataset(Dataset):
12 |     def __init__(self, root, transforms_=None, mode="train"):
13 |         self.transform = transforms.Compose(transforms_)
14 | 
15 |         self.files = sorted(glob.glob(os.path.join(root, mode) + "/*.*"))
16 |         if mode == "train":
17 |             self.files.extend(sorted(glob.glob(os.path.join(root, "test") + "/*.*")))
18 | 
19 |     def __getitem__(self, index):
20 | 
21 |         img = Image.open(self.files[index % len(self.files)])
22 |         w, h = img.size
23 |         img_A = img.crop((0, 0, w / 2, h))
24 |         img_B = img.crop((w / 2, 0, w, h))
25 | 
26 |         if np.random.random() < 0.5:
27 |             img_A = Image.fromarray(np.array(img_A)[:, ::-1, :], "RGB")
28 |             img_B = Image.fromarray(np.array(img_B)[:, ::-1, :], "RGB")
29 | 
30 |         img_A = self.transform(img_A)
31 |         img_B = self.transform(img_B)
32 | 
33 |         return {"A": img_A, "B": img_B}
34 | 
35 |     def __len__(self):
36 |         return len(self.files)
37 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day15/VAE 报告.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day15/VAE 报告.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day15/data/processed/test.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day15/data/processed/test.pt


--------------------------------------------------------------------------------
/One_Day_One_GAN/day15/data/processed/training.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day15/data/processed/training.pt


--------------------------------------------------------------------------------
/One_Day_One_GAN/day15/data/raw/t10k-images-idx3-ubyte:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day15/data/raw/t10k-images-idx3-ubyte


--------------------------------------------------------------------------------
/One_Day_One_GAN/day15/data/raw/t10k-labels-idx1-ubyte:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day15/data/raw/t10k-labels-idx1-ubyte


--------------------------------------------------------------------------------
/One_Day_One_GAN/day15/data/raw/train-images-idx3-ubyte:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day15/data/raw/train-images-idx3-ubyte


--------------------------------------------------------------------------------
/One_Day_One_GAN/day15/data/raw/train-labels-idx1-ubyte:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day15/data/raw/train-labels-idx1-ubyte


--------------------------------------------------------------------------------
/One_Day_One_GAN/day15/requirements.txt:
--------------------------------------------------------------------------------
 1 | numpy
 2 | scipy
 3 | matplotlib
 4 | seaborn
 5 | pandas
 6 | keras
 7 | tensorflow
 8 | pydot
 9 | ipywidgets
10 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day15/vae_cnn_mnist.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day15/vae_cnn_mnist.h5


--------------------------------------------------------------------------------
/One_Day_One_GAN/day16/1904.09709.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day16/1904.09709.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day16/md2zh.py:
--------------------------------------------------------------------------------
 1 | 
 2 | import re
 3 | def repl(m):
 4 | 	inner_word = m.group(1)
 5 | 	return '<br><br>$$' + inner_word + '$$<br><br>'
 6 | with open('readme.md', 'r') as f_read:
 7 | 	text = f_read.readlines()
 8 | 	for k, item in enumerate(text):
 9 | 		text[k] = re.sub(r'\$\$(.*?)\$\$', repl, item)
10 | 	
11 | 	with open('zhihu.md', 'w') as f_write:
12 | 		f_write.writelines(text)


--------------------------------------------------------------------------------
/One_Day_One_GAN/day16/stgan_slides.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day16/stgan_slides.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day17/readme.md:
--------------------------------------------------------------------------------
 1 | # Self-Supervised Generative Adversarial Networks 
 2 | 
 3 | ## Abstract
 4 | 
 5 | cGANs are at the forefront of natural image synthesis. The main drawback of such models is their need for labeled data. In this work, two popular unsupervised learning techniques, adversarial training and self-supervision, are used to close the gap between conditional and unconditional GANs. The role of self-supervision is to encourage the discriminator to learn meaningful feature representations that are not forgotten during training.
 6 | 
 7 | ## A Key Issue: Discriminator Forgetting
 8 | 
 9 | The core objective of a traditional GAN is:
10 | 
11 | ![1554727641089](C:\Users\pc\AppData\Roaming\Typora\typora-user-images\1554727641089.png)
12 | 
13 | Whenever the parameters of the generator G change, the discriminator changes as well, which means the discriminator is performing non-stationary online learning.
14 | 
15 | In online learning of non-convex functions, neural networks have been shown to forget previous tasks. In the context of GANs, learning different levels of detail, structure, and texture can be regarded as different tasks. One source of training instability is therefore that the discriminator has no incentive to maintain a useful representation of the data beyond whatever currently helps classification.
16 | 
17 | The authors design an experiment to demonstrate this feature forgetting, shown in panel (a) below:
18 | 
19 | ![1554730058072](C:\Users\pc\AppData\Roaming\Typora\typora-user-images\1554730058072.png)
20 | 
21 | A classifier is trained one-vs-all on the ten CIFAR-10 classes, 1k iterations per task, cycling back to the first task after 10k. The plot shows accuracy dropping sharply at every task switch. After 10k iterations the task cycle repeats and the accuracy matches the first cycle, i.e. no useful information is carried across tasks. With self-supervision added in (b), the accuracy instead improves steadily.
22 | 
23 | The situation for GANs is similar, as shown below:
24 | 
25 | ![1554730758943](C:\Users\pc\AppData\Roaming\Typora\typora-user-images\1554730758943.png)
26 | 
27 | During training, the accuracy of the unconditional GAN increases and then decreases, indicating that information about the classes is acquired and later forgotten. This forgetting correlates with training instability. Adding self-supervision prevents the discriminator's representation from forgetting these classes.
28 | 
29 | ## The Self-Supervised GAN
30 | 
31 | Motivated by the key challenge of discriminator forgetting, the goal is to equip the discriminator with a mechanism for learning useful representations that does not depend on the quality of the current generator. To this end, the authors leverage recent advances in self-supervised representation learning. The main idea behind self-supervision is to train a model on a pretext task, such as predicting the rotation angle or the relative position of image patches, and then extract representations from the resulting network.
32 | 
33 | The authors apply a state-of-the-art self-supervision method based on image rotation: images are rotated, and the rotation angle becomes the artificial label. The self-supervised task is then to predict an image's rotation angle. Intuitively, this loss pushes the classifier to learn image representations that are useful for detecting rotation angles and that transfer to the image classification task.
34 | 
35 | ![1554731701345](C:\Users\pc\AppData\Roaming\Typora\typora-user-images\1554731701345.png)
36 | 
37 | With self-supervision added, the loss functions become:
38 | 
39 | ![1554731800972](C:\Users\pc\AppData\Roaming\Typora\typora-user-images\1554731800972.png)
40 | 
41 | ![1554732560588](C:\Users\pc\AppData\Roaming\Typora\typora-user-images\1554732560588.png)
42 | 
43 | The generator and discriminator are adversarial on the true-vs-fake prediction loss, but collaborative on the rotation task. The generator is not conditioned on rotation: it only generates "upright" images, which are subsequently rotated and fed to the discriminator. The discriminator, in turn, is trained to detect rotation angles on real data only; in other words, its parameters are updated from the rotation loss on real images only. The generator is encouraged to produce images whose rotation is detectable because they share features with the real images used for rotation classification.
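
A concrete illustration (a minimal sketch, not code from the paper) of how the rotation pretext labels can be built:

```python
import numpy as np

def rotate_batch(images):
    # images: (N, H, W, C) batch of "upright" images.
    # Returns 4N rotated copies with labels in {0, 1, 2, 3} for 0/90/180/270 degrees.
    rotated = np.concatenate([np.rot90(images, k=k, axes=(1, 2)) for k in range(4)], axis=0)
    labels = np.repeat(np.arange(4), len(images))
    return rotated, labels
```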
44 | 
45 | ## Experiments
46 | 
47 | ![1554732962702](C:\Users\pc\AppData\Roaming\Typora\typora-user-images\1554732962702.png)
48 | 
49 | 
50 | 
51 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day18/Self-Supervised GANs via Auxiliary Rotation Loss.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day18/Self-Supervised GANs via Auxiliary Rotation Loss.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day18/Self-Supervised-GANs-master/README.md:
--------------------------------------------------------------------------------
 1 | # Self-Supervised-GANs
 2 | 
 3 | The TensorFlow implementation of [[Self-Supervised Generative Adversarial Networks]](https://arxiv.org/pdf/1811.11212.pdf)
 4 | 
 5 | ## Network Architecture
 6 | 
 7 | <p align="center">
 8 |   <img src="/img/net.png">
 9 | </p>
10 | 
11 | ## Experiments(Our results on cifar10)
12 | 
13 | SN_GAN: spectral norm gan
14 | 
15 | SN_GAN_SS: spectral norm gan with self-supervised learning
16 | 
17 | <p align="center">
18 |   <img src="/img/fig_is.png">
19 | </p>
20 | 
21 | <p align="center">
22 |   <img src="/img/fig_fid.png">
23 | </p>
24 | 
25 | ## Reference code
26 | 
27 | [Sparsely_Grouped_GAN](https://github.com/zhangqianhui/Sparsely_Grouped_GAN)
28 | 
29 | [DCGAN tensorflow](https://github.com/carpedm20/DCGAN-tensorflow)
30 | 
31 | [Spectral Norm tensorflow](https://github.com/taki0112/Spectral_Normalization-Tensorflow)
32 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day18/Self-Supervised-GANs-master/main.py:
--------------------------------------------------------------------------------
 1 | import tensorflow as tf
 2 | from utils import mkdir_p
 3 | from utils import Cifar, STL
 4 | from Model import SSGAN
 5 | import os
 6 | 
 7 | os.environ['CUDA_VISIBLE_DEVICES']='1'
 8 | 
 9 | flags = tf.app.flags
10 | flags.DEFINE_integer("OPER_FLAG", 0, "flag of operation")
11 | flags.DEFINE_boolean("sn", True, "whether using spectral normalization")
12 | flags.DEFINE_integer("n_dis", 1, "the number of D training for every g")
13 | flags.DEFINE_integer("iter_power", 1, "the iteration of power")
14 | flags.DEFINE_float("beta1", 0.0, "the beta1 for adam method")
15 | flags.DEFINE_float("beta2", 0.9, "the beta2 for adam method")
16 | flags.DEFINE_float("weight_rotation_loss_d", 1.0, "weight for rotation loss of D")
17 | flags.DEFINE_float("weight_rotation_loss_g", 0.5, "weight for rotation loss for G")
18 | flags.DEFINE_integer("num_rotation", 4, "0, 90, 180, 270")
19 | flags.DEFINE_integer("loss_type", 3, "0: wgan; 1: va; 2: -log(d(x)); 3: hinge loss")
20 | flags.DEFINE_boolean("resnet", True, "whether using resnet architecture")
21 | flags.DEFINE_boolean("is_adam", True, "using adam")
22 | flags.DEFINE_boolean("ssup", False, "whether using self-supervised learning")
23 | flags.DEFINE_integer("max_iters", 20000, "max iterations of networks")
24 | flags.DEFINE_integer("batch_size", 128, "number of a batch")
25 | flags.DEFINE_integer("sample_size", 128, "size of sample")
26 | flags.DEFINE_float("learning_rate", 0.0002, "lr for g and d")
27 | flags.DEFINE_integer("image_size", 32, "resolution of image; 32 for cifar")
28 | flags.DEFINE_integer("dataset", 0, "0:cifar10; 1: stl")
29 | flags.DEFINE_string("log_dir", "./output_w/log/", "path of log")
30 | flags.DEFINE_string("model_path", "./output_w/model/", "path of model")
31 | flags.DEFINE_string("sample_path", "./output_w/sample/", "path of sample")
32 | 
33 | FLAGS = flags.FLAGS
34 | if __name__ == "__main__":
35 | 
36 |     mkdir_p([FLAGS.log_dir, FLAGS.model_path, FLAGS.sample_path])
37 | 
38 |     if FLAGS.dataset == 0:
39 |         m_ob = Cifar(batch_size=FLAGS.batch_size)
40 |     elif FLAGS.dataset == 1:
41 |         m_ob = STL(batch_size=FLAGS.batch_size)
42 | 
43 |     ssgan = SSGAN(flags=FLAGS, data=m_ob)
44 | 
45 |     if FLAGS.OPER_FLAG == 0:
46 |         ssgan._init_inception()
47 |         ssgan.build_model_GAN()
48 |         ssgan.train()
49 | 
50 |     if FLAGS.OPER_FLAG == 1:
51 |         ssgan.build_model_GAN()
52 |         ssgan.test2()
53 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day2/1511.06434.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day2/1511.06434.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day2/dcgan/images/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day2/dcgan/saved_model/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day21/深入理解风格迁移三部曲(一)--UNIT.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day21/深入理解风格迁移三部曲(一)--UNIT.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day3/1411.1784.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day3/1411.1784.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day3/cgan/images/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day3/cgan/saved_model/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day4/1610.09585.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day4/1610.09585.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day4/acgan/images/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day4/acgan/saved_model/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day5/1701.07875.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day5/1701.07875.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day5/wgan/images/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day5/wgan/saved_model/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day6/1609.04802.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day6/1609.04802.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day6/srgan/Zoom_To_Learn_CVPR2019.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day6/srgan/Zoom_To_Learn_CVPR2019.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day6/srgan/data_loader.py:
--------------------------------------------------------------------------------
 1 | import scipy
 2 | from glob import glob
 3 | import numpy as np
 4 | import matplotlib.pyplot as plt
 5 | 
 6 | class DataLoader():
 7 |     def __init__(self, dataset_name, img_res=(128, 128)):
 8 |         self.dataset_name = dataset_name
 9 |         self.img_res = img_res
10 | 
11 |     def load_data(self, batch_size=1, is_testing=False):
12 |         data_type = "train" if not is_testing else "test"
13 |         
14 |         path = glob('./datasets/%s/*' % (self.dataset_name))
15 | 
16 |         batch_images = np.random.choice(path, size=batch_size)
17 | 
18 |         imgs_hr = []
19 |         imgs_lr = []
20 |         for img_path in batch_images:
21 |             img = self.imread(img_path)
22 | 
23 |             h, w = self.img_res
24 |             low_h, low_w = int(h / 4), int(w / 4)
25 | 
26 |             img_hr = scipy.misc.imresize(img, self.img_res)
27 |             img_lr = scipy.misc.imresize(img, (low_h, low_w))
28 | 
29 |             # If training => do random flip
30 |             if not is_testing and np.random.random() < 0.5:
31 |                 img_hr = np.fliplr(img_hr)
32 |                 img_lr = np.fliplr(img_lr)
33 | 
34 |             imgs_hr.append(img_hr)
35 |             imgs_lr.append(img_lr)
36 | 
37 |         imgs_hr = np.array(imgs_hr) / 127.5 - 1.
38 |         imgs_lr = np.array(imgs_lr) / 127.5 - 1.
39 | 
40 |         return imgs_hr, imgs_lr
41 | 
42 | 
43 |     def imread(self, path):
44 |         return scipy.misc.imread(path, mode='RGB').astype(np.float)
45 | dataset_name = 'train'
46 | data_loader = DataLoader(dataset_name=dataset_name,)


--------------------------------------------------------------------------------
/One_Day_One_GAN/day6/srgan/images/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day6/srgan/saved_model/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 | !.gitignore


--------------------------------------------------------------------------------
/One_Day_One_GAN/day6/srgan_pytorch/datasets.py:
--------------------------------------------------------------------------------
 1 | import glob
 2 | import random
 3 | import os
 4 | import numpy as np
 5 | 
 6 | import torch
 7 | from torch.utils.data import Dataset
 8 | from PIL import Image
 9 | import torchvision.transforms as transforms
10 | 
11 | # Normalization parameters for pre-trained PyTorch models
12 | mean = np.array([0.485, 0.456, 0.406])
13 | std = np.array([0.229, 0.224, 0.225])
14 | 
15 | def denormalize(tensors):
16 |     """ Denormalizes image tensors using mean and std """
17 |     for c in range(3):
18 |         tensors[:, c].mul_(std[c]).add_(mean[c])
19 |     return torch.clamp(tensors, 0, 255)
20 | 
21 | class ImageDataset(Dataset):
22 |     def __init__(self, root, hr_shape):
23 |         hr_height, hr_width = hr_shape
24 |         # Transforms for low resolution images and high resolution images
25 |         self.lr_transform = transforms.Compose(
26 |             [
27 |                 transforms.Resize((hr_height // 4, hr_height // 4), Image.BICUBIC),
28 |                 transforms.ToTensor(),
29 |                 transforms.Normalize(mean, std),
30 |             ]
31 |         )
32 |         self.hr_transform = transforms.Compose(
33 |             [
34 |                 transforms.Resize((hr_height, hr_height), Image.BICUBIC),
35 |                 transforms.ToTensor(),
36 |                 transforms.Normalize(mean, std),
37 |             ]
38 |         )
39 | 
40 |         self.files = sorted(glob.glob(root + "/*.*"))
41 |         print(root)
42 | 
43 |     def __getitem__(self, index):
44 |         img = Image.open(self.files[index % len(self.files)])
45 |         img_lr = self.lr_transform(img)
46 |         img_hr = self.hr_transform(img)
47 | 
48 |         return {"lr": img_lr, "hr": img_hr}
49 | 
50 |     def __len__(self):
51 |         return len(self.files)
52 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day6/srgan_pytorch/spectral.py:
--------------------------------------------------------------------------------
 1 | import torch
 2 | from torch.optim.optimizer import Optimizer, required
 3 | 
 4 | from torch.autograd import Variable
 5 | import torch.nn.functional as F
 6 | from torch import nn
 7 | from torch import Tensor
 8 | from torch.nn import Parameter
 9 | 
10 | def l2normalize(v, eps=1e-12):
11 |     return v / (v.norm() + eps)
12 | 
13 | 
14 | class SpectralNorm(nn.Module):
15 |     def __init__(self, module, name='weight', power_iterations=1):
16 |         super(SpectralNorm, self).__init__()
17 |         self.module = module
18 |         self.name = name
19 |         self.power_iterations = power_iterations
20 |         if not self._made_params():
21 |             self._make_params()
22 | 
23 |     def _update_u_v(self):
24 |         u = getattr(self.module, self.name + "_u")
25 |         v = getattr(self.module, self.name + "_v")
26 |         w = getattr(self.module, self.name + "_bar")
27 | 
28 |         height = w.data.shape[0]
29 |         for _ in range(self.power_iterations):
30 |             v.data = l2normalize(torch.mv(torch.t(w.view(height,-1).data), u.data))
31 |             u.data = l2normalize(torch.mv(w.view(height,-1).data, v.data))
32 | 
33 |         # sigma = torch.dot(u.data, torch.mv(w.view(height,-1).data, v.data))
34 |         sigma = u.dot(w.view(height, -1).mv(v))
35 |         setattr(self.module, self.name, w / sigma.expand_as(w))
36 | 
37 |     def _made_params(self):
38 |         try:
39 |             u = getattr(self.module, self.name + "_u")
40 |             v = getattr(self.module, self.name + "_v")
41 |             w = getattr(self.module, self.name + "_bar")
42 |             return True
43 |         except AttributeError:
44 |             return False
45 | 
46 | 
47 |     def _make_params(self):
48 |         w = getattr(self.module, self.name)
49 | 
50 |         height = w.data.shape[0]
51 |         width = w.view(height, -1).data.shape[1]
52 | 
53 |         u = Parameter(w.data.new(height).normal_(0, 1), requires_grad=False)
54 |         v = Parameter(w.data.new(width).normal_(0, 1), requires_grad=False)
55 |         u.data = l2normalize(u.data)
56 |         v.data = l2normalize(v.data)
57 |         w_bar = Parameter(w.data)
58 | 
59 |         del self.module._parameters[self.name]
60 | 
61 |         self.module.register_parameter(self.name + "_u", u)
62 |         self.module.register_parameter(self.name + "_v", v)
63 |         self.module.register_parameter(self.name + "_bar", w_bar)
64 | 
65 | 
66 |     def forward(self, *args):
67 |         self._update_u_v()
68 |         return self.module.forward(*args)
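For reference, a minimal usage sketch (illustrative, not part of the repo; it assumes this file is importable as `spectral`):

```python
import torch
from torch import nn
# assumes this file is on the path: from spectral import SpectralNorm

conv = SpectralNorm(nn.Conv2d(3, 64, kernel_size=3, padding=1))
x = torch.randn(4, 3, 32, 32)
y = conv(x)  # forward() runs one power-iteration step, then applies the sigma-normalized weight
print(y.shape)  # torch.Size([4, 64, 32, 32])
```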


--------------------------------------------------------------------------------
/One_Day_One_GAN/day7/1809.00219 (1).pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day7/1809.00219 (1).pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day7/esrgan/datasets.py:
--------------------------------------------------------------------------------
 1 | import glob
 2 | import random
 3 | import os
 4 | import numpy as np
 5 | 
 6 | import torch
 7 | from torch.utils.data import Dataset
 8 | from PIL import Image
 9 | import torchvision.transforms as transforms
10 | 
11 | # Normalization parameters for pre-trained PyTorch models
12 | mean = np.array([0.485, 0.456, 0.406])
13 | std = np.array([0.229, 0.224, 0.225])
14 | 
15 | 
16 | class ImageDataset(Dataset):
17 |     def __init__(self, root, hr_shape):
18 |         hr_height, hr_width = hr_shape
19 |         # Transforms for low resolution images and high resolution images
20 |         self.lr_transform = transforms.Compose(
21 |             [
22 |                 transforms.Resize((hr_height // 4, hr_height // 4), Image.BICUBIC),
23 |                 transforms.ToTensor(),
24 |                 transforms.Normalize(mean, std),
25 |             ]
26 |         )
27 |         self.hr_transform = transforms.Compose(
28 |             [
29 |                 transforms.Resize((hr_height, hr_height), Image.BICUBIC),
30 |                 transforms.ToTensor(),
31 |                 transforms.Normalize(mean, std),
32 |             ]
33 |         )
34 | 
35 |         self.files = sorted(glob.glob(root + "/*.*"))
36 | 
37 |     def __getitem__(self, index):
38 |         img = Image.open(self.files[index % len(self.files)])
39 |         img_lr = self.lr_transform(img)
40 |         img_hr = self.hr_transform(img)
41 | 
42 |         return {"lr": img_lr, "hr": img_hr}
43 | 
44 |     def __len__(self):
45 |         return len(self.files)
46 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day7/esrgan/test.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | 
3 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day7/esrgan/test_on_image.py:
--------------------------------------------------------------------------------
 1 | from models import GeneratorRRDB
 2 | from datasets import denormalize, mean, std
 3 | import torch
 4 | from torch.autograd import Variable
 5 | import argparse
 6 | import os
 7 | from torchvision.utils import save_image
 8 | from PIL import Image
 9 | import torchvision.transforms as transforms
10 | parser = argparse.ArgumentParser()
11 | parser.add_argument("--image_path", type=str, required=True, help="Path to image")
12 | parser.add_argument("--checkpoint_model", type=str, required=True, help="Path to checkpoint model")
13 | parser.add_argument("--channels", type=int, default=3, help="Number of image channels")
14 | parser.add_argument("--residual_blocks", type=int, default=23, help="Number of residual blocks in G")
15 | opt = parser.parse_args()
16 | print(opt)
17 | 
18 | os.makedirs("images/outputs", exist_ok=True)
19 | 
20 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
21 | 
22 | # Define model and load model checkpoint
23 | generator = GeneratorRRDB(opt.channels, filters=64, num_res_blocks=opt.residual_blocks).to(device)
24 | generator.load_state_dict(torch.load(opt.checkpoint_model))
25 | generator.eval()
26 | 
27 | transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean, std)])
28 | 
29 | # Prepare input
30 | image_tensor = Variable(transform(Image.open(opt.image_path))).to(device).unsqueeze(0)
31 | 
32 | # Upsample image
33 | with torch.no_grad():
34 |     sr_image = denormalize(generator(image_tensor)).cpu()
35 | 
36 | # Save image
37 | fn = opt.image_path.split("/")[-1]
38 | save_image(sr_image, f"images/outputs/sr-{fn}")
39 | 


--------------------------------------------------------------------------------
/One_Day_One_GAN/day7/esrgan_slides.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day7/esrgan_slides.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day8/1711.10098.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day8/1711.10098.pdf


--------------------------------------------------------------------------------
/One_Day_One_GAN/day8/Attentive GAN.docx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day8/Attentive GAN.docx


--------------------------------------------------------------------------------
/One_Day_One_GAN/day9/WechatIMG399.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day9/WechatIMG399.jpeg


--------------------------------------------------------------------------------
/One_Day_One_GAN/day9/WechatIMG400.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/One_Day_One_GAN/day9/WechatIMG400.jpeg


--------------------------------------------------------------------------------
/One_Day_One_GAN/day9/readme.md:
--------------------------------------------------------------------------------
 1 | # One Day One GAN
 2 | 
 3 | Hi, my name is Chen Yang. I am a sophomore at Ocean University of China, and I do some scientific research in my spare time. Given the current hot directions in artificial intelligence, I hope to share my research progress on **generative adversarial networks**.
 6 | 
 7 | ## Preface
 8 | 
 9 | **ODOG**, as the name suggests, means I hope to spend an hour every day covering the frontier developments and research in GANs to date. Looking at many deep learning applications, especially in images, GANs play an increasingly important role; we constantly see NVIDIA shipping all sorts of applications that involve a great deal of GAN theory and implementation. I also feel that tutorials implementing GANs in mainstream frameworks such as PyTorch, Keras, and TensorFlow are still scarce in China.
10 | 
11 | My teacher once told me: "**Deep learning is an unknown new continent, a big black-box system, and GANs are the black box within the black box. Whoever manages to open this box will usher in a new era.**"
12 | 
13 | Today I bring my handwritten notes on chapters 1-3 of the "flower book" (Deep Learning). The moment I picked the book up this afternoon, I suddenly understood: once the groundwork is done, it reads in one smooth, satisfying breath.
14 | 
15 | ![WechatIMG399](./WechatIMG399.jpeg)
16 | 
17 | ![WechatIMG399](./WechatIMG400.jpeg)


--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | # OUCML
 2 | 
 3 | [ODOG](https://github.com/OUCMachineLearning/OUCML/tree/master/One_Day_One_GAN)一天一GAN
 4 | 
 5 | [GAN](https://github.com/OUCMachineLearning/OUCML/tree/master/GAN)
 6 | 
 7 | [AUTOML](https://github.com/OUCMachineLearning/OUCML/tree/master/AutoML)
 8 | 
 9 | When writing your README.md, you can refer to [how to write a README.md file on GitHub](https://blog.csdn.net/liu537192/article/details/45693529).
10 | 


--------------------------------------------------------------------------------
/Regularization/Cutout-master/images/cutout_on_cifar10.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/Regularization/Cutout-master/images/cutout_on_cifar10.jpg


--------------------------------------------------------------------------------
/Regularization/Cutout-master/shake-shake/README.md:
--------------------------------------------------------------------------------
 1 | # Cutout in Shake-Shake Regularization Networks
 2 | 
 3 | In order to add cutout to Xavier Gastaldi's shake-shake regularization code we simply add a cutout function to transforms.lua (lines 16 to 29) and then append the cutout function to the CIFAR-10 and CIFAR-100 pre-processing pipelines (lines 49 and 60 in cifar10.lua and cifar100.lua respectively). 
 4 | 
 5 | ## Usage  
 6 | 1. Follow Usage instruction 1 from https://github.com/xgastaldi/shake-shake to install fb.resnet.torch and related libraries.
 7 | 2. Once installed, navigate to your local fb.resnet.torch/datasets folder.
 8 | 3. Copy the files from this folder (shake-shake) and paste them into the datasets folder. This should overwrite cifar10.lua, cifar100.lua, and transforms.lua.
 9 | 4. Continue following remaining instructions from https://github.com/xgastaldi/shake-shake. CIFAR-10 should now train using cutout with a length of 16 and CIFAR-100 will train using cutout with a length of 8.
10 | 


--------------------------------------------------------------------------------
/Regularization/Cutout-master/shake-shake/cifar10.lua:
--------------------------------------------------------------------------------
 1 | --
 2 | --  Copyright (c) 2016, Facebook, Inc.
 3 | --  All rights reserved.
 4 | --
 5 | --  This source code is licensed under the BSD-style license found in the
 6 | --  LICENSE file in the root directory of this source tree. An additional grant
 7 | --  of patent rights can be found in the PATENTS file in the same directory.
 8 | --
 9 | --  CIFAR-10 dataset loader
10 | --
11 | 
12 | local t = require 'datasets/transforms'
13 | 
14 | local M = {}
15 | local CifarDataset = torch.class('resnet.CifarDataset', M)
16 | 
17 | function CifarDataset:__init(imageInfo, opt, split)
18 |    assert(imageInfo[split], split)
19 |    self.imageInfo = imageInfo[split]
20 |    self.split = split
21 | end
22 | 
23 | function CifarDataset:get(i)
24 |    local image = self.imageInfo.data[i]:float()
25 |    local label = self.imageInfo.labels[i]
26 | 
27 |    return {
28 |       input = image,
29 |       target = label,
30 |    }
31 | end
32 | 
33 | function CifarDataset:size()
34 |    return self.imageInfo.data:size(1)
35 | end
36 | 
37 | -- Computed from entire CIFAR-10 training set
38 | local meanstd = {
39 |    mean = {125.3, 123.0, 113.9},
40 |    std  = {63.0,  62.1,  66.7},
41 | }
42 | 
43 | function CifarDataset:preprocess()
44 |    if self.split == 'train' then
45 |       return t.Compose{
46 |          t.ColorNormalize(meanstd),
47 |          t.HorizontalFlip(0.5),
48 |          t.RandomCrop(32, 4),
49 |          t.CutOut(8),
50 |       }
51 |    elseif self.split == 'val' then
52 |       return t.ColorNormalize(meanstd)
53 |    else
54 |       error('invalid split: ' .. self.split)
55 |    end
56 | end
57 | 
58 | return M.CifarDataset
59 | 


--------------------------------------------------------------------------------
/Regularization/Cutout-master/shake-shake/cifar100.lua:
--------------------------------------------------------------------------------
 1 | --
 2 | --  Copyright (c) 2016, Facebook, Inc.
 3 | --  All rights reserved.
 4 | --
 5 | --  This source code is licensed under the BSD-style license found in the
 6 | --  LICENSE file in the root directory of this source tree. An additional grant
 7 | --  of patent rights can be found in the PATENTS file in the same directory.
 8 | --
 9 | 
10 | ------------
11 | -- This file is downloading and transforming CIFAR-100.
12 | -- It is based on cifar10.lua
13 | -- Ludovic Trottier
14 | ------------
15 | 
16 | local t = require 'datasets/transforms'
17 | 
18 | local M = {}
19 | local CifarDataset = torch.class('resnet.CifarDataset', M)
20 | 
21 | function CifarDataset:__init(imageInfo, opt, split)
22 |    assert(imageInfo[split], split)
23 |    self.imageInfo = imageInfo[split]
24 |    self.split = split
25 | end
26 | 
27 | function CifarDataset:get(i)
28 |    local image = self.imageInfo.data[i]:float()
29 |    local label = self.imageInfo.labels[i]
30 | 
31 |    return {
32 |       input = image,
33 |       target = label,
34 |    }
35 | end
36 | 
37 | function CifarDataset:size()
38 |    return self.imageInfo.data:size(1)
39 | end
40 | 
41 | 
42 | -- Computed from entire CIFAR-100 training set with this code:
43 | --      dataset = torch.load('cifar100.t7')
44 | --      tt = dataset.train.data:double();
45 | --      tt = tt:transpose(2,4);
46 | --      tt = tt:reshape(50000*32*32, 3);
47 | --      tt:mean(1)
48 | --      tt:std(1)
49 | local meanstd = {
50 |    mean = {129.3, 124.1, 112.4},
51 |    std  = {68.2,  65.4,  70.4},
52 | }
53 | 
54 | function CifarDataset:preprocess()
55 |    if self.split == 'train' then
56 |       return t.Compose{
57 |          t.ColorNormalize(meanstd),
58 |          t.HorizontalFlip(0.5),
59 |          t.RandomCrop(32, 4),
60 |          t.CutOut(4),
61 |       }
62 |    elseif self.split == 'val' then
63 |       return t.ColorNormalize(meanstd)
64 |    else
65 |       error('invalid split: ' .. self.split)
66 |    end
67 | end
68 | 
69 | return M.CifarDataset
70 | 


--------------------------------------------------------------------------------
/Regularization/Cutout-master/util/cutout.py:
--------------------------------------------------------------------------------
 1 | import torch
 2 | import numpy as np
 3 | 
 4 | 
 5 | class Cutout(object):
 6 |     """Randomly mask out one or more patches from an image.
 7 | 
 8 |     Args:
 9 |         n_holes (int): Number of patches to cut out of each image.
10 |         length (int): The length (in pixels) of each square patch.
11 |     """
12 |     def __init__(self, n_holes, length):
13 |         self.n_holes = n_holes
14 |         self.length = length
15 | 
16 |     def __call__(self, img):
17 |         """
18 |         Apply the cutout mask to a single image.
19 |         Args:
20 |             img (Tensor): Tensor image of size (C, H, W).
21 |         Returns:
22 |             Tensor: Image with n_holes of dimension length x length cut out of it.
23 |         """
24 |         h = img.size(1)
25 |         w = img.size(2)
26 | 
27 |         mask = np.ones((h, w), np.float32)
28 | 
29 |         for n in range(self.n_holes):
30 |             y = np.random.randint(h)
31 |             x = np.random.randint(w)
32 | 
33 |             y1 = np.clip(y - self.length // 2, 0, h)
34 |             y2 = np.clip(y + self.length // 2, 0, h)
35 |             x1 = np.clip(x - self.length // 2, 0, w)
36 |             x2 = np.clip(x + self.length // 2, 0, w)
37 | 
38 |             mask[y1: y2, x1: x2] = 0.
39 | 
40 |         mask = torch.from_numpy(mask)
41 |         mask = mask.expand_as(img)
42 |         img = img * mask
43 | 
44 |         return img
45 | 
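A usage sketch for this transform (illustrative, not part of the repo; it assumes this file is importable as `util.cutout`, and `n_holes=1, length=16` is the CIFAR-10 setting reported in the Cutout paper):

```python
import torchvision.transforms as transforms
# assumes this file is on the path: from util.cutout import Cutout

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),          # Cutout expects a (C, H, W) tensor, so it comes after ToTensor
    Cutout(n_holes=1, length=16),   # CIFAR-10 setting from the paper
])
```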


--------------------------------------------------------------------------------
/Regularization/Cutout-master/util/misc.py:
--------------------------------------------------------------------------------
 1 | import csv
 2 | 
 3 | 
 4 | class CSVLogger():
 5 |     def __init__(self, args, fieldnames, filename='log.csv'):
 6 | 
 7 |         self.filename = filename
 8 |         self.csv_file = open(filename, 'w')
 9 | 
10 |         # Write model configuration at top of csv
11 |         writer = csv.writer(self.csv_file)
12 |         for arg in vars(args):
13 |             writer.writerow([arg, getattr(args, arg)])
14 |         writer.writerow([''])
15 | 
16 |         self.writer = csv.DictWriter(self.csv_file, fieldnames=fieldnames)
17 |         self.writer.writeheader()
18 | 
19 |         self.csv_file.flush()
20 | 
21 |     def writerow(self, row):
22 |         self.writer.writerow(row)
23 |         self.csv_file.flush()
24 | 
25 |     def close(self):
26 |         self.csv_file.close()
27 | 
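A usage sketch (illustrative, not part of the repo; the `args` fields and field names below are made up for the example):

```python
import argparse
# assumes this file is on the path: from util.misc import CSVLogger

args = argparse.Namespace(dataset='cifar10', cutout=True)  # illustrative arguments
logger = CSVLogger(args, fieldnames=['epoch', 'train_acc', 'test_acc'], filename='log.csv')
logger.writerow({'epoch': 0, 'train_acc': 0.51, 'test_acc': 0.48})
logger.close()
```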


--------------------------------------------------------------------------------
/Regularization/README.md:
--------------------------------------------------------------------------------
1 | ## Tools and methods used for regularization
2 | 


--------------------------------------------------------------------------------
/md2zh.py:
--------------------------------------------------------------------------------
 1 | 
 2 | import re
 3 | def repl(m):
 4 | 	inner_word = m.group(1)
 5 | 	return '<br><br>$$' + inner_word + '$$<br><br>'
 6 | with open('readme.md', 'r') as f_read:
 7 | 	text = f_read.readlines()
 8 | 	for k, item in enumerate(text):
 9 | 		text[k] = re.sub(r'\$\$(.*?)\$\$', repl, item)
10 | 	
11 | 	with open('zhihu.md', 'w') as f_write:
12 | 		f_write.writelines(text)


--------------------------------------------------------------------------------
/paper_of_NLP/attention_is_all_your_need.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/paper_of_NLP/attention_is_all_your_need.pdf


--------------------------------------------------------------------------------
/paper_of_NLP/bert.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/paper_of_NLP/bert.pdf


--------------------------------------------------------------------------------
/results/3921554452543_.pic.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/results/3921554452543_.pic.jpg


--------------------------------------------------------------------------------
/results/WechatIMG391.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/results/WechatIMG391.jpeg


--------------------------------------------------------------------------------
/代码技巧汇总/Isolating Sources of Disentanglement in VAEs.md:
--------------------------------------------------------------------------------
 1 | # Isolating Sources of Disentanglement in VAEs
 2 | 
 3 | 论文地址:<https://arxiv.org/abs/1802.04942>
 4 | 
 5 | ![image-20190413161118241](https://ws4.sinaimg.cn/large/006tNc79ly1g212oae4nvj31940tuq8j.jpg)
 6 | 
 7 | #### Authors: Ricky T. Q. Chen, Xuechen Li, Roger Grosse, David Duvenaud (University of Toronto, Vector Institute)
 8 | 
 9 | We decompose the variational lower bound and show the existence of a total correlation (TC) term over the latent variables, which leads to the β-TCVAE algorithm. β-TCVAE is a refined, drop-in replacement for β-VAE for learning disentangled representations, requiring no extra hyperparameters during training. We further propose a classifier-free disentanglement metric called the Mutual Information Gap (MIG). We present extensive, high-quality experiments with our model under both constrained and non-linear settings, demonstrating an important relationship between total correlation and disentanglement.
10 | 
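For context, the decomposition the abstract refers to, in its standard form (notation assumed: $q(z) = \mathbb{E}_{p(x)}[q(z\mid x)]$ is the aggregate posterior and $z_j$ the individual latent dimensions):

$$
\mathbb{E}_{p(x)}\big[\operatorname{KL}\big(q(z\mid x)\,\|\,p(z)\big)\big]
 = \underbrace{I_q(x;z)}_{\text{index-code MI}}
 + \underbrace{\operatorname{KL}\Big(q(z)\,\Big\|\,\prod\nolimits_j q(z_j)\Big)}_{\text{total correlation}}
 + \underbrace{\sum\nolimits_j \operatorname{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
$$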
11 | 


--------------------------------------------------------------------------------
/代码技巧汇总/Latex simbols table.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码技巧汇总/Latex simbols table.pdf


--------------------------------------------------------------------------------
/代码技巧汇总/Protecting networks against adversial attacks.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码技巧汇总/Protecting networks against adversial attacks.pdf


--------------------------------------------------------------------------------
/代码技巧汇总/acm-book.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码技巧汇总/acm-book.pdf


--------------------------------------------------------------------------------
/代码技巧汇总/how to write research proposal.md:
--------------------------------------------------------------------------------
 1 | # How to write a research proposal
 2 | 
 3 | III. The four elements of a research proposal
 4 | 
 5 | A research proposal must answer four questions: why the problem is important, why it is hard to solve, why now is the time to tackle it, and why you are the one who can solve it. A good proposal answers all four clearly.
 6 | 
 7 | 1. Why is this problem so important?
 8 | 
 9 | An MIT professor, advising doctoral students on thesis writing, once said: "There are thousands upon thousands of hard problems in the world; you might as well pick the one most worth solving as your research direction." In your proposal, you likewise have to convince the reader that there truly is a problem crying out to be solved, and that solving it would have significant positive impact on society. This is the single most important point in a proposal.
10 | 
11 | 2. Why is this problem hard to solve?
12 | 
13 | The problem you study needs to be genuinely difficult: it should demand deep thought and cross-disciplinary methods, and it should go beyond current research activity, so that others cannot solve it in the short term. If the problem is easy to solve, or others are already solving it, your research loses its point, and readers of the proposal will naturally be unwilling to give you funding or other support.
14 | 
15 | 3. Why solve this problem now?
16 | 
17 | Most research problems are not brand-new topics. You might, of course, raise an entirely new one, but in most cases the problem has long existed, merely posed in different forms or from different angles. You need to persuade the reader that, although the problem is old, now is the best time to solve it. For example, recent research results, technologies, or policy shifts may greatly improve the feasibility of solving it now; or new circumstances may make solving it far more urgent. In short, a new window of opportunity has opened.
18 | 
19 | 4. Why are you the most suitable person for this research?
20 | 
21 | This is actually the most important point of all, and the one people most easily overlook when writing proposals. Why are you the right candidate for this work? You may have convinced the reader that the problem is well worth studying, but you still have to convince them that you, rather than someone else, should be funded to study it. You must make the reader believe you are qualified.
22 | 
23 | So you need to dig out your own unique strengths: a signature skill that others simply do not have. The most common argument is that you have exactly the right expertise: what you have previously studied, and the experience and skills you have mastered, are precisely what the research requires. Beyond that, you can list your past work experience and the achievements within it. When doing so, be sure to tie that experience cleverly to your proposed research direction, so the reader feels it will greatly help the new work. Work experience matters a great deal for demonstrating ability. If you do not yet have the relevant experience, do some preliminary work first, record it, and find a way to evidence it; then use that record to support your proposal.
24 | 
25 | 


--------------------------------------------------------------------------------
/代码技巧汇总/img2Latex simplified document.md:
--------------------------------------------------------------------------------
 1 | # img2Latex simplified document
 2 | 
 3 | ![demo](https://cy-1256894686.cos.ap-beijing.myqcloud.com/2019-12-01-045200.gif)
 4 | 
 5 | 1. Log in https://mathpix.com/ocr/
 6 | 2. ![image-20191201123842901](https://cy-1256894686.cos.ap-beijing.myqcloud.com/2019-12-01-043842.png)
 7 | 
 8 | The first 1K requests per month are free (in fact, even if you use more than 1K, the per-request price is also cheap).
 9 | 
10 | ![image-20191201124137357](https://cy-1256894686.cos.ap-beijing.myqcloud.com/2019-12-01-044137.png)
11 | 
12 | Fuck, I don't have a VISA card….
13 | 
14 | I'll fill this in once I get a VISA…


--------------------------------------------------------------------------------
/代码技巧汇总/linux 技巧.md:
--------------------------------------------------------------------------------
 1 | Author: 程序员客栈
 2 | 
 3 | Link: https://www.zhihu.com/question/41115077/answer/602854935
 4 | 
 5 | Source: Zhihu
 6 | 
 7 | Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.
 8 | 
 9 | Here are a few command-line tools that qualify as godsends. Read to the end; it gets better as it goes!
10 | 
11 | 1. [WordGrinder](http://link.zhihu.com/?target=https%3A//cowlark.com/wordgrinder/): a word processor that is simple to use yet has enough writing and publishing features. It supports basic formatting and styles, and you can export your text to Markdown, ODT, LaTeX, or HTML;
12 | 
13 | 2. [Proselint](http://link.zhihu.com/?target=http%3A//proselint.com/): an all-round real-time checker. It flags jargon, hyperbole, incorrect date and time formats, misused terms, [and more](http://link.zhihu.com/?target=http%3A//proselint.com/checks/). It is easy to run and ignores markup in the text;
14 | 
15 | 3. [GNU Aspell](http://link.zhihu.com/?target=http%3A//aspell.net/): interactively checks text documents, highlights misspelled words, and offers correct-spelling suggestions above them. Aspell can likewise ignore many syntax markers while spell-checking;
16 | 
17 | 4. [tldr](http://link.zhihu.com/?target=https%3A//github.com/tldr-pages/tldr): quickly look up common command-line examples for all kinds of commands:
18 | 
19 | ![img](https://ws2.sinaimg.cn/large/006tNc79ly1g2okcmaobtj30go0cewfj.jpg)
20 | 
21 | 5. [Alex](http://link.zhihu.com/?target=https%3A//github.com/get-alex/alex): a simple but useful little tool for plain text or Markdown/HTML documents. Alex warns about "gender-favoring, polarizing, race-related, religion-inconsiderate, or otherwise unequal phrasing" in a document. If you want to try it, there is an online [demo](http://link.zhihu.com/?target=https%3A//alexjs.com/%23demo);
22 | 
23 | 6. nmon: helps you monitor the machine's performance, including CPU, memory, disk IO, and network IO, with a flashy interface. Looks very hacker-ish; go try it: [nmon for Linux | Main](http://link.zhihu.com/?target=http%3A//nmon.sourceforge.net/pmwiki.php)
24 | 
25 | ![img](https://ws4.sinaimg.cn/large/006tNc79ly1g2okcna92dj30go0h3dj5.jpg)
26 | 
27 | 7. axel: a multi-threaded download tool with resume support, very handy. In the screenshot below, for example, 8 threads download simultaneously.
28 | 
29 | ![img](https://ws1.sinaimg.cn/large/006tNc79ly1g2okcnpywtj30go0b60uv.jpg)
30 | 
31 | 8. [SpaceVim](http://link.zhihu.com/?target=https%3A//github.com/SpaceVim/SpaceVim): a Vim plugin that turns your Vim into a more powerful code editor with code autocompletion and other features!
32 | 
33 | ![img](https://ws3.sinaimg.cn/large/006tNc79ly1g2okcmzojzj30go09e3zj.jpg)
34 | 
35 | 9. [thefuck](http://link.zhihu.com/?target=https%3A//github.com/nvbn/thefuck): you typed `git branch` as `branch` and the command line threw an error; doesn't a quiet "fuck" cross your mind? Then type `fuck` into the terminal and hit enter! Huh, it worked!
36 | 
37 | ![img](https://ws4.sinaimg.cn/large/006tNc79ly1g2okcmjshwj30go09ct8h.jpg)
38 | 
39 | Typed `apt-get update` as `aptget update` and got an error? Type `fuck` and hit enter. Solved! Satisfying, right? Hahaha


--------------------------------------------------------------------------------
/代码技巧汇总/linux显卡驱动修复.md:
--------------------------------------------------------------------------------
 1 | # nvidia-smi errors out (reinstalling the NVIDIA driver)
 2 | 
 3 | I ran into a baffling problem:
 4 | 
 5 | > NVIDIA-SMI has failed because it couldn’t communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
 6 | 
 7 | Solution: reinstall the NVIDIA driver (not CUDA).
 8 | 
 9 | First download the driver for your GPU, `NVIDIA-Linux-x86_64-xxx.xx.run`, from the [official site](https://www.nvidia.com/Download/index.aspx?lang=cn), copy it to a directory on the Linux machine, and fix its permissions first:
10 | 
11 | ```
12 | chmod 777 NVIDIA-Linux-x86_64-xxx.xx.run
13 | ```
14 | 
15 | Uninstall the old driver:
16 | 
17 | ```
18 | sudo apt-get remove --purge nvidia*  # if leftovers are reported, follow with:
19 | sudo apt autoremove
20 | ```
21 | 
22 | Temporarily stop the display service:
23 | 
24 | ```
25 | sudo service lightdm stop
26 | ```
27 | 
28 | Run the installer:
29 | 
30 | ```
31 | sudo ./NVIDIA-Linux-x86_64-375.66.run
32 | ```
33 | 
34 | After installation, restart the display service:
35 | 
36 | ```
37 | sudo service lightdm start
38 | ```


--------------------------------------------------------------------------------
/代码技巧汇总/pair_dataset.py:
--------------------------------------------------------------------------------
 1 | import glob
 2 | import random
 3 | import os
 4 | 
 5 | from torch.utils.data import Dataset
 6 | from PIL import Image
 7 | import torchvision.transforms as transforms
 8 | 
 9 | 
10 | def to_rgb(image):
11 | 	rgb_image = Image.new("RGB", image.size)
12 | 	rgb_image.paste(image)
13 | 	return rgb_image
14 | 
15 | 
16 | class ImageDataset(Dataset):
17 | 	def __init__(self, root, transforms_=None, unaligned=False, mode="pair"):
18 | 		self.transform = transforms.Compose(transforms_)
19 | 		self.unaligned = unaligned
20 | 
21 | 		self.files_A = sorted(glob.glob(os.path.join(root, "%s/c_img" % mode) + "/*.*"))
22 | 		self.files_B = sorted(glob.glob(os.path.join(root, "%s/n_img" % mode) + "/*.*"))
23 | 
24 | 	def __getitem__(self, index):
25 | 		image_A = Image.open(self.files_A[index % len(self.files_A)])
26 | 
27 | 		if self.unaligned:
28 | 			image_B = Image.open(self.files_B[random.randint(0, len(self.files_B) - 1)])
29 | 		else:
30 | 			image_B = Image.open(self.files_B[index % len(self.files_B)])
31 | 
32 | 		# Convert grayscale images to rgb
33 | 		if image_A.mode != "RGB":
34 | 			image_A = to_rgb(image_A)
35 | 		if image_B.mode != "RGB":
36 | 			image_B = to_rgb(image_B)
37 | 
38 | 		item_A = self.transform(image_A)
39 | 		item_B = self.transform(image_B)
40 | 		return {"A": item_A, "B": item_B}
41 | 
42 | 	def __len__(self):
43 | 		return max(len(self.files_A), len(self.files_B))
44 | 
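A usage sketch (illustrative, not part of the repo; it assumes this file is importable as `pair_dataset` and a directory layout of `root/pair/c_img` and `root/pair/n_img`, which is what the globs above expect):

```python
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
# assumes this file is on the path: from pair_dataset import ImageDataset

transforms_ = [transforms.Resize((256, 256)), transforms.ToTensor()]
dataset = ImageDataset("./data", transforms_=transforms_, unaligned=False, mode="pair")
loader = DataLoader(dataset, batch_size=4, shuffle=True)
batch = next(iter(loader))  # {"A": (4, 3, 256, 256) tensor, "B": (4, 3, 256, 256) tensor}
```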


--------------------------------------------------------------------------------
/代码技巧汇总/python编程技巧.md:
--------------------------------------------------------------------------------
 1 | # A Python 3 feature: type annotations
 2 | 
 3 | A few days ago someone asked what this notation means:
 4 | 
 5 | ```python3
 6 | def add(x:int, y:int) -> int:
 7 |     return x + y
 8 | ```
 9 | 
10 | We know Python is a dynamic language: variables and function parameters are **untyped**. So we can define a function simply like this:
11 | 
12 | ```python3
13 | def add(x, y):
14 |     return x + y
15 | ```
16 | 
17 | The upside is great flexibility; the downside is that, reading someone else's code, you cannot tell the parameter types at a glance, and the IDE cannot give correct hints.
18 | 
19 | So Python 3 introduced a new feature:
20 | **function annotations**
21 | 
22 | That is exactly the example at the beginning of this article:
23 | 
24 | ```python3
25 | def add(x:int, y:int) -> int:
26 |     return x + y
27 | ```
28 | 
29 | Use the `: type` form to annotate a function's **parameter types**, and the `-> type` form to annotate its **return type**.
30 | 
31 | It must be stressed that the Python interpreter does **not** perform any extra validation because of these annotations; there is no type checking at all. In other words, whether you add these annotations or not has **no effect whatsoever** on how your code runs:
32 | 
33 | ![img](https://pic4.zhimg.com/80/v2-9b87d43afdc941929c428b03865037c3_hd.jpg)
34 | 
35 | Output:
36 | 
37 | ![img](https://pic4.zhimg.com/80/v2-f00b176bddec7e1d242e3c0e13f30f33_hd.jpg)
38 | 
39 | But doing this has two benefits:
40 | 
41 | 1. Other programmers can see at once what the function expects
42 | 2. The IDE learns the types and can give more accurate code hints, completion, and checks (including type checks; you can see str and float arguments being highlighted)
43 | 
44 | ![img](https://pic2.zhimg.com/80/v2-057629e3d59552c856dba7341882ea55_hd.jpg)
45 | 
46 | The annotations you set are available in the function's `__annotations__` attribute:
47 | 
48 | ![img](https://pic1.zhimg.com/80/v2-9a9f55559930501472752996a3b772a4_hd.jpg)
49 | 
50 | Output:
51 | 
52 | ![img](https://pic4.zhimg.com/80/v2-35506981e2c4f0e021b4b0902e36441b_hd.jpg)
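Since the screenshots may not load, here is the same thing in runnable form:

```python3
def add(x: int, y: int) -> int:
    return x + y

print(add.__annotations__)
# {'x': <class 'int'>, 'y': <class 'int'>, 'return': <class 'int'>}
```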
53 | 
54 | Python 3.6 went further and introduced a way to annotate **variable types** as well:
55 | 
56 | ```python3
57 | a: int = 123
58 | b: str = 'hello'
59 | ```
60 | 
61 | Going a step further, if you need to declare a list consisting entirely of integers:
62 | 
63 | ```python3
64 | from typing import List
65 | l: List[int] = [1, 2, 3]
66 | ```
67 | 
68 | But again, these are merely "annotations" and have no effect on the code.
69 | 
70 | You can, however, use the **mypy** library to check whether the code actually conforms to its annotations.
71 | 
72 | Install mypy:
73 | 
74 | ```bash
75 | pip install mypy
76 | ```
77 | 
78 | Run it on your code:
79 | 
80 | ```bash
81 | mypy test.py
82 | ```
83 | 
84 | If all the types match, there is no output; otherwise it prints something like:
85 | 
86 | ![img](https://pic4.zhimg.com/80/v2-57b0ba00b7e95b092ae6fd6acc50eb37_hd.jpg)
87 | 
88 | You may never use these features in your own code, but when you see them in someone else's, please follow their stated types when assigning or calling.
89 | 
90 | And of course, it is not impossible that some future version of Python builds type checking into the interpreter. Who knows.


--------------------------------------------------------------------------------
/代码技巧汇总/pytorch_psnr_ssim.py:
--------------------------------------------------------------------------------
 1 | # PSNR (assumes: import torch; import torch.nn.functional as F)
 2 | per_image_mse_loss = F.mse_loss(gen_hr, imgs_hr, reduction='none')
 3 | per_image_psnr = 10 * torch.log10(10 / per_image_mse_loss)  # 10*log10(MAX^2/MSE); MAX^2 is taken as 10 here, use 1.0 for [0,1] images
 4 | tensor_average_psnr = torch.mean(per_image_psnr).item()
 5 | 
 6 | #SSIM
 7 | import pytorch_ssim
 8 | import torch
 9 | from torch.autograd import Variable
10 | 
11 | img1 = Variable(torch.rand(1, 1, 256, 256))
12 | img2 = Variable(torch.rand(1, 1, 256, 256))
13 | 
14 | if torch.cuda.is_available():
15 | 	img1 = img1.cuda()
16 | 	img2 = img2.cuda()
17 | 
18 | print(pytorch_ssim.ssim(img1, img2))
19 | 
20 | ssim_loss = pytorch_ssim.SSIM(window_size = 11)
21 | 
22 | print(ssim_loss(img1, img2))
23 | 
24 | #MSSSIM
25 | import pytorch_msssim
26 | import torch
27 | from torch.autograd import Variable
28 | 
29 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
30 | m = pytorch_msssim.MSSSIM()
31 | 
32 | img1 = torch.rand(1, 1, 256, 256)
33 | img2 = torch.rand(1, 1, 256, 256)
34 | 
35 | print(pytorch_msssim.msssim(img1, img2))
36 | print(m(img1, img2))
37 | 


--------------------------------------------------------------------------------
/代码技巧汇总/pytorch学习率设置.md:
--------------------------------------------------------------------------------
 1 | # PyTorch: different learning rates for different layers
 2 | 
 3 | Sometimes we want certain layers to learn at a rate different from the rest of the network. Here is a brief way to set this up in PyTorch. The method is slightly clumsy; if you know a better one, please do teach me:
 4 | 
 5 | First we define a network:
 6 | 
 7 | ```python
 8 | class net(nn.Module):
 9 |     def __init__(self):
10 |         super(net, self).__init__()
11 |         self.conv1 = nn.Conv2d(3, 64, 1)
12 |         self.conv2 = nn.Conv2d(64, 64, 1)
13 |         self.conv3 = nn.Conv2d(64, 64, 1)
14 |         self.conv4 = nn.Conv2d(64, 64, 1)
15 |         self.conv5 = nn.Conv2d(64, 64, 1)
16 |     def forward(self, x):
17 |         out = self.conv5(self.conv4(self.conv3(self.conv2(self.conv1(x)))))
18 |         return out
20 | ```
21 | 
22 | Suppose we want conv5's learning rate to be 100x that of the other layers; we can write:
23 | 
24 | ```python
25 | net = net()
26 | lr = 0.001
27 | 
28 | conv5_params = list(map(id, net.conv5.parameters()))
29 | base_params = filter(lambda p: id(p) not in conv5_params,
30 |                      net.parameters())
31 | optimizer = torch.optim.SGD([
32 |             {'params': base_params},
33 |             {'params': net.conv5.parameters(), 'lr': lr * 100},
34 | ], lr=lr, momentum=0.9)
35 | ```
36 | 
37 | For multiple layers:
38 | 
39 | ```python
40 | conv5_params = list(map(id, net.conv5.parameters()))
41 | conv4_params = list(map(id, net.conv4.parameters()))
42 | base_params = filter(lambda p: id(p) not in conv5_params + conv4_params,
43 |                      net.parameters())
44 | optimizer = torch.optim.SGD([
45 |             {'params': base_params},
46 |             {'params': net.conv5.parameters(), 'lr': lr * 100},
47 |             {'params': net.conv4.parameters(), 'lr': lr * 100},
48 | ], lr=lr, momentum=0.9)
49 | ```
50 | 
51 | 


--------------------------------------------------------------------------------
/代码技巧汇总/x 1import PIL.md:
--------------------------------------------------------------------------------
 1 | ```python
 2 | import PIL.Image as Image
 3 | import os
 4 | 
 5 | IMAGES_PATH = './test/'  # directory containing the image set
 6 | IMAGES_FORMAT = ['.jpg', '.JPG']  # accepted image extensions
 7 | IMAGE_SIZE = 256  # side length of each small tile
 8 | IMAGE_ROW = 10  # number of rows in the merged image
 9 | IMAGE_COLUMN = 10  # number of columns in the merged image
10 | IMAGE_SAVE_PATH = './big_test'  # output directory for the merged images
11 | try:
12 |     os.mkdir(IMAGE_SAVE_PATH)
13 | except:
14 |     pass
15 | # collect the names of all images under IMAGES_PATH
16 | image_names = [name for name in os.listdir (IMAGES_PATH) for item in IMAGES_FORMAT if
17 |                os.path.splitext (name)[1] == item]
18 | image_names.sort()
19 | 
20 | print(len(image_names))
21 | # # sanity-check that the number of images matches the grid parameters
22 | # if len (image_names) != IMAGE_ROW * IMAGE_COLUMN:
23 | #     raise ValueError ("the number of images does not match the required grid size!")
24 | 
25 | 
26 | # define the image-stitching function
27 | 
28 | def image_compose(i):
29 |     to_image = Image.new('RGB', (IMAGE_COLUMN * IMAGE_SIZE, IMAGE_ROW * IMAGE_SIZE))  # create a blank canvas for the merged image
30 |     # iterate over the grid, pasting each tile into its slot in order
31 |     for y in range(1, IMAGE_ROW + 1):
32 |         for x in range(1, IMAGE_COLUMN + 1):
33 |             from_image = Image.open(IMAGES_PATH + image_names[i*100 + IMAGE_COLUMN * (y - 1) + x - 1]).resize(
34 |                 (IMAGE_SIZE, IMAGE_SIZE), Image.ANTIALIAS)
35 |             to_image.paste(from_image, ((x - 1) * IMAGE_SIZE, (y - 1) * IMAGE_SIZE))
36 |     return to_image.save(IMAGE_SAVE_PATH + '/' + str(i) + '_100x.jpg')  # save the merged image
37 | 
38 | 
39 | for i in range(len(image_names)//100):
40 |     image_compose(i)  # stitch each consecutive group of 100 images
41 | ```
42 | 
43 | 


--------------------------------------------------------------------------------
/代码技巧汇总/知乎导入公式.md:
--------------------------------------------------------------------------------
 1 | https://regex101.com/
 2 | 
 3 | Search pattern:
 4 | 
 5 | ```text
 6 | \$\$\n*(.*?)\n*\$\$
 7 | ```
 8 | 
 9 | Replace with:
10 | 
11 | ```text
12 | \n<img src="https://www.zhihu.com/equation?tex=\1" alt="\1" class="ee_img tr_noresize" eeimg="1">\n
13 | ```
14 | 
15 | Search pattern:
16 | 
17 | ```text
18 | \$\n*(.*?)\n*\$
19 | ```
20 | 
21 | 替换为:
22 | 
23 | ```text
24 | \n<img src="https://www.zhihu.com/equation?tex=\1" alt="\1" class="ee_img tr_noresize" eeimg="1">\n
25 | ```
26 | 
27 | Finally, import the resulting md file with Zhihu's import feature and you're done.
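For convenience, the same two substitutions as a small Python script (a sketch: the file names `readme.md` and `zhihu.md` are assumptions, and `re.S` lets `.*?` span the line breaks the patterns allow):

```python
import re

IMG = r'\n<img src="https://www.zhihu.com/equation?tex=\1" alt="\1" class="ee_img tr_noresize" eeimg="1">\n'

with open('readme.md', encoding='utf-8') as f:
    text = f.read()
text = re.sub(r'\$\$\n*(.*?)\n*\$\$', IMG, text, flags=re.S)  # display math first
text = re.sub(r'\$\n*(.*?)\n*\$', IMG, text, flags=re.S)      # then inline math
with open('zhihu.md', 'w', encoding='utf-8') as f:
    f.write(text)
```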
28 | 
29 | 


--------------------------------------------------------------------------------
/代码技巧汇总/矩阵总结.md:
--------------------------------------------------------------------------------
 1 | Symmetric, Hermitian, orthogonal, unitary, singular, normal, and idempotent matrices
 2 | 
 3 |         All sorts of matrices come up when reading the literature. This note collects the definitions of seven common kinds of matrices (symmetric, Hermitian, orthogonal, unitary, singular, normal, idempotent) as a concept memo, so they can be looked up whenever forgotten.
 4 | 
 5 | 1. Symmetric matrix (reference [1], p. 40): $A^T = A$,
 6 | 
 7 | where the superscript T denotes the transpose of the matrix (reference [1], pp. 38-39).
 8 | 
 9 | 2. Hermitian matrix (reference [2], p. 97): $A^H = A$,
10 | 
11 | where H denotes the conjugate transpose of the matrix (reference [2], p. 96).
12 | 
13 |         The Hermitian matrix generalizes the symmetric matrix: symmetric matrices are real (all entries real), while Hermitian matrices are complex.
14 | 
15 | 3. Orthogonal matrix (reference [1], p. 115): $A^T A = A A^T = I$.
16 | 
17 | 4. Unitary matrix (reference [2], p. 102): $A^H A = A A^H = I$.
18 | 
19 |         Just as the Hermitian matrix generalizes the symmetric matrix, the unitary matrix generalizes the orthogonal one.
20 | 
21 | 5. Singular matrix (reference [1], p. 43): a square matrix with $\det A = 0$.
22 | 
23 | 6. Normal matrix (reference [2], p. 119): $A^H A = A A^H$.
24 | 
25 | 7. Idempotent matrix (reference [2], pp. 106-107): $A^2 = A$.
26 | 
27 | ![image-20190507115203740](https://ws2.sinaimg.cn/large/006tNc79ly1g2sm1w1og8j30u00zhk5r.jpg)
28 | 
29 | References:
30 | 
31 | [1] 同济大学数学系 编. 工程数学线性代数[M]. 5版. 高等教育出版社, 2007.
32 | 
33 | [2] 史荣昌, 魏丰. 矩阵分析[M]. 3版. 北京: 北京理工大学出版社, 2010.
34 | 
35 | Author: jbb0523
36 | Source: CSDN
37 | Original: https://blog.csdn.net/jbb0523/article/details/50596604
38 | Copyright notice: this is the blogger's original article; please include a link to the post when reprinting!


--------------------------------------------------------------------------------
/代码技巧汇总/超分辨率的损失函数总结.md:
--------------------------------------------------------------------------------
 1 | # A summary of loss functions for super-resolution
 2 | 
 3 | MSE targets "content" similarity in image space. Images commonly contain regions belonging to some texture class (tiger fur, grass, fishing nets, and so on); where such texture or mesh patterns appear, optimizing MSE easily flattens, i.e. smooths, the region. [This corresponds directly to the evaluation metrics MSE and MAE, as well as PSNR, but a high PSNR is not necessarily good, since the image may look unnatural. See the figure from reference [2] on the consistency of PSNR/SSIM with perceptual quality (visual effect).]
 4 | 
 5 | ![img](https://ws2.sinaimg.cn/large/006tNc79ly1g2vdtyj7v2j30go04u753.jpg)Consistency of PSNR/SSIM with perceptual quality (visual effect)
 6 | 
 7 | L1 tolerates outliers, and is somewhat less smoothing than MSE and L2.
 8 | 
 9 | Perceptual loss targets "class/texture" similarity in feature space. After all, some researchers hold that deep convolutional networks used for image classification exploit **texture differences between objects**.
10 | 
11 | Multi-scale (MS)-SSIM targets "structural" similarity in image space. Reference [1] found a mixed MS-SSIM+L1 loss to be well suited to image restoration; a sketch of the mix follows below.
12 | 
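As a sketch of that MS-SSIM+L1 mix (assumptions: the third-party `pytorch_msssim` package, images scaled to [0, 1], and `alpha = 0.84` as reported in [1]; the paper additionally applies Gaussian weighting to the L1 term, which is omitted here):

```python
import torch.nn.functional as F
# assumes the third-party package: pip install pytorch-msssim
from pytorch_msssim import ms_ssim

def mix_loss(pred, target, alpha=0.84):
    # alpha = 0.84 is the blend weight reported in [1]; images assumed in [0, 1]
    loss_ms_ssim = 1 - ms_ssim(pred, target, data_range=1.0)
    loss_l1 = F.l1_loss(pred, target)
    return alpha * loss_ms_ssim + (1 - alpha) * loss_l1
```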
13 | Image SR research has two main directions, see the figure from [2]. One pursues better image reconstruction, i.e. grayscale/RGB fidelity; the other pursues better visual quality, i.e. results that look "natural". Is there a method that takes care of both? Seemingly...
14 | 
15 | ![img](https://ws4.sinaimg.cn/large/006tNc79ly1g2vdty9cboj30go0bi3zb.jpg)The two directions of image SR
16 | 
17 | [1] Loss Functions for Image Restoration with Neural Networks IEEE TCI 2017 [[[paper](http://link.zhihu.com/?target=http%3A//ieeexplore.ieee.org/document/7797130/)]] [[[code](http://link.zhihu.com/?target=https%3A//github.com/NVlabs/PL4NN)]]
18 | 
19 | [2] 2018 PIRM Challenge on Perceptual Image Super-resolution [[[paper](http://link.zhihu.com/?target=https%3A//arxiv.org/abs/1809.07517)]]
20 | 
21 | 


--------------------------------------------------------------------------------
/代码技巧汇总/送给研一入学的你们—炼丹师入门手册.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码技巧汇总/送给研一入学的你们—炼丹师入门手册.pdf


--------------------------------------------------------------------------------
/代码速查表/O().png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/O().png


--------------------------------------------------------------------------------
/代码速查表/README.md:
--------------------------------------------------------------------------------
 1 | # A collection of machine & deep learning code cheat sheets
 2 | Tags (space-separated): 陈扬
 3 | 
 4 | ---
 5 | [TOC]
 6 | With the boom in deep learning, more and more people are using Python as their main language. Python has an enormous range of third-party libraries; here we have collected some code cheat sheets from around the web, hoping they help you code faster.
 7 | ## Basics
 8 | ### Neural networks
 9 | ![神经网络.png-604.8kB][1]
10 | ![网络.png-603.1kB][2]
11 | ### Linear algebra
12 | ![liner.png-1030.9kB][3]
13 | ### Python basics
14 | ![sci.png-300.6kB][4]
15 | ### SciPy scientific computing
16 | ![sci.png-300.6kB][5]
17 | ### Spark
18 | ![spark.jpeg-763.8kB][6]
19 | ## Data storage and visualization
20 | ### numpy
21 | ![np.png-607.7kB][7]
22 | ### pandas
23 | ![pd.png-509.4kB][8]
24 | ![table.jpeg-625.8kB][9]
25 | ![pdvis.jpeg-639.1kB][10]
26 | ### bokeh
27 | ![bokeh.png-1086.9kB][11]
28 | ## Plotting
29 | ### matplotlib
30 | ![matplot.png-572.5kB][12]
31 | ### ggplot
32 | ![data vis.jpeg-661.9kB][13]
33 | ![gg.jpeg-755.7kB][14]
34 | ## Machine learning
35 | ### sklearn
36 | ![sk.png-673.4kB][15]
37 | ![scikit.png-610.5kB][16]
38 | ### keras
39 | ![keras.jpeg-795.7kB][17]
40 | ### tensorflow
41 | ![TF.png-642.9kB][18]
42 | ## Algorithms
43 | ### Data structures
44 | ![datastruct.png-290.8kB][19]
45 | ### Complexity
46 | ![O().png-286.6kB][20]
47 | ### Sorting algorithms
48 | ![sort.png-237kB][21]
49 | 
50 | 
51 | 
52 |   [1]: http://static.zybuluo.com/Team/y6pjoywhkn1pdapep1zad7il/%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C.png
53 |   [2]: http://static.zybuluo.com/Team/0tw11jzagxc0npus6ickr1oh/%E7%BD%91%E7%BB%9C.png
54 |   [3]: http://static.zybuluo.com/Team/cn56twrdhlahrl7ix46xwqw3/liner.png
55 |   [4]: http://static.zybuluo.com/Team/vtg8inupaj5vy3ln3vk7t85w/sci.png
56 |   [5]: http://static.zybuluo.com/Team/84xqmt690m6hnctb6qdfvxdn/sci.png
57 |   [6]: http://static.zybuluo.com/Team/24gmxj519n2edcnvwytsopc0/spark.jpeg
58 |   [7]: http://static.zybuluo.com/Team/im3ch5odz67vop0pmpv79h68/np.png
59 |   [8]: http://static.zybuluo.com/Team/1i33ru0iy64tor6vb9dap5nx/pd.png
60 |   [9]: http://static.zybuluo.com/Team/dsncme8bp2vc5gcmejes8feg/table.jpeg
61 |   [10]: http://static.zybuluo.com/Team/phaqdebkavcat5tde2kmsofj/pdvis.jpeg
62 |   [11]: http://static.zybuluo.com/Team/xvujgg3n97hiq4wkf796h4el/bokeh.png
63 |   [12]: http://static.zybuluo.com/Team/ohs4rsm5o0lm3vggeyleytgf/matplot.png
64 |   [13]: http://static.zybuluo.com/Team/n8w79016ep6i384sfbwj05vm/data%20vis.jpeg
65 |   [14]: http://static.zybuluo.com/Team/cwsjvkzgnu1b2bh5sljs90ey/gg.jpeg
66 |   [15]: http://static.zybuluo.com/Team/ee3ufgacykyx3ilavub3iwr8/sk.png
67 |   [16]: http://static.zybuluo.com/Team/4a9nifp5px9yc0hj06jcce56/scikit.png
68 |   [17]: http://static.zybuluo.com/Team/ndgm8wakx6ka1zw42i0y6fqa/keras.jpeg
69 |   [18]: http://static.zybuluo.com/Team/i7ebze2onf185a8zte1v7xqh/TF.png
70 |   [19]: http://static.zybuluo.com/Team/ujay8qwle1dm9wtgu6ottdt9/datastruct.png
71 |   [20]: http://static.zybuluo.com/Team/vumiy0qoezaskxmlex9xz4tu/O%28%29.png
72 |   [21]: http://static.zybuluo.com/Team/s1smbd6gcm5m10pgluj6jbqr/sort.png


--------------------------------------------------------------------------------
/代码速查表/TF.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/TF.png


--------------------------------------------------------------------------------
/代码速查表/bokeh.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/bokeh.png


--------------------------------------------------------------------------------
/代码速查表/data vis.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/data vis.jpeg


--------------------------------------------------------------------------------
/代码速查表/datastruct.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/datastruct.png


--------------------------------------------------------------------------------
/代码速查表/df.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/df.jpeg


--------------------------------------------------------------------------------
/代码速查表/df2.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/df2.jpeg


--------------------------------------------------------------------------------
/代码速查表/gg.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/gg.jpeg


--------------------------------------------------------------------------------
/代码速查表/keras.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/keras.jpeg


--------------------------------------------------------------------------------
/代码速查表/liner.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/liner.png


--------------------------------------------------------------------------------
/代码速查表/matplot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/matplot.png


--------------------------------------------------------------------------------
/代码速查表/np.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/np.png


--------------------------------------------------------------------------------
/代码速查表/pd.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/pd.png


--------------------------------------------------------------------------------
/代码速查表/sci.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/sci.png


--------------------------------------------------------------------------------
/代码速查表/scikit.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/scikit.png


--------------------------------------------------------------------------------
/代码速查表/scipy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/scipy.png


--------------------------------------------------------------------------------
/代码速查表/sk.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/sk.png


--------------------------------------------------------------------------------
/代码速查表/sort.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/sort.png


--------------------------------------------------------------------------------
/代码速查表/spark.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/spark.jpeg


--------------------------------------------------------------------------------
/代码速查表/网络.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/网络.png


--------------------------------------------------------------------------------
/代码速查表/这是一张机器&深度学习代码速查表.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/代码速查表/这是一张机器&深度学习代码速查表.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/SR/AIM 2019 Challenge on Video Extreme Super-Resolution- Methods and Results.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/SR/AIM 2019 Challenge on Video Extreme Super-Resolution- Methods and Results.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/SR/Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/SR/Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/SR/SinGAN- Learning a Generative Model from a Single Natural Image.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/SR/SinGAN- Learning a Generative Model from a Single Natural Image.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/08358814.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/08358814.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1603.08155v1.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1603.08155v1.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1609.04802v5.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1609.04802v5.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1610.04490.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1610.04490.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1611.04076.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1611.04076.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1611.07004.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1611.07004.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1703.10593.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1703.10593.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1704.00028.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1704.00028.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1706.04983.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1706.04983.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1706.08224v2 (1).pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1706.08224v2 (1).pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1710.04026v2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1710.04026v2.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1710.10196.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1710.10196.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1711.07064.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1711.07064.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1711.11585.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1711.11585.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1802.05957.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1802.05957.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1803.04189.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1803.04189.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1804.02815.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1804.02815.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1804.02900v2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1804.02900v2.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1807.00734.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1807.00734.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1807.04720.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1807.04720.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1809.02983.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1809.02983.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1903.02271v1.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1903.02271v1.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1903.09814v2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1903.09814v2.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1904.04514.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1904.04514.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1904.08118v3.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1904.08118v3.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1905.01723.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1905.01723.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1906.01529.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1906.01529.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1907.10107.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1907.10107.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1908.03826.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1908.03826.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1909.11573.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1909.11573.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/1909.11856.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/1909.11856.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/2019-05-07.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/2019-05-07.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/A Deep Journey into Super-resolution-A Survey.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/A Deep Journey into Super-resolution-A Survey.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Bau_et_al_Semantic_Photo_Manipulation_preprint.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Bau_et_al_Semantic_Photo_Manipulation_preprint.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Bayesian Generative Active Deep learning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Bayesian Generative Active Deep learning.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/CVPR2019-Filter Pruning via Geometric Median.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/CVPR2019-Filter Pruning via Geometric Median.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Expectation-Maximization Attention Networks for Semantic Segmentation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Expectation-Maximization Attention Networks for Semantic Segmentation.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Generating Classification Weights with GNN Denoising Autoencoders for Few-Shot Learning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Generating Classification Weights with GNN Denoising Autoencoders for Few-Shot Learning.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Generative Adversarial Networks_A Survey and Taxonomy.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Generative Adversarial Networks_A Survey and Taxonomy.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Han_umd_0117E_19307.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Han_umd_0117E_19307.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Li_Perceptual_Generative_Adversarial_CVPR_2017_paper.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Li_Perceptual_Generative_Adversarial_CVPR_2017_paper.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Lifelong GAN Continual Learning for Conditional Image Generation .pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Lifelong GAN Continual Learning for Conditional Image Generation .pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/MODE REGULARIZED GENERATIVE ADVERSARIAL.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/MODE REGULARIZED GENERATIVE ADVERSARIAL.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/NIPS2018-Discrimination-aware Channel Pruning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/NIPS2018-Discrimination-aware Channel Pruning.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Non-local Neural Networks.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Non-local Neural Networks.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Zhaoyi_Yan_Shift-Net_Image_Inpainting_ECCV_2018_paper.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Zhaoyi_Yan_Shift-Net_Image_Inpainting_ECCV_2018_paper.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/Zheng_Looking_for_the_Devil_in_the_Details_Learning_Trilinear_Attention_CVPR_2019_paper.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/Zheng_Looking_for_the_Devil_in_the_Details_Learning_Trilinear_Attention_CVPR_2019_paper.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/deblur_cvpr19.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/deblur_cvpr19.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/funit-190708162302.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/funit-190708162302.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/paper_of_GAN/paper.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/paper_of_GAN/paper.pdf


--------------------------------------------------------------------------------
/论文推荐/Marcus/写给一位陌生人的一封信.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/写给一位陌生人的一封信.jpg


--------------------------------------------------------------------------------
/论文推荐/Marcus/写给一位陌生人的一封信.md:
--------------------------------------------------------------------------------
 1 | # A Letter to a Stranger
 2 | 
 3 | To a stranger:
 4 | 
 5 | I don't know whether you have been well lately.
 6 | 
 7 | I always feel there is so much I want to say to you, yet I can never recall what it was, and it seems I will never have the chance to say it to you again.
 8 | 
 9 | Sometimes I wonder: if I had gone to sleep earlier on the train that night and never left that comment under your post, would none of this story have happened? Perhaps the two of us would simply have become ordinary friends and then drifted out of each other's lives...
10 | 
11 | But life is like a one-way train. We are so very lucky to have come into this world at all; how could we hope to look back at the scenery already passed?
12 | 
13 | I hope that after you get to the place you want to go, you will live each day more happily. You truly are outstanding; from the day I met you I have admired you from the bottom of my heart. As for what happened, I kept running from my responsibility. I am sorry that even after you gave me so many chances, I never saw my own mistakes and kept forcing my will on you... We had agreed to let it go, and yet I never could.
14 | 
15 | I hope we do not meet again for the rest of our lives.
16 | 
17 | Take care.


--------------------------------------------------------------------------------
/论文推荐/Marcus/写给一位陌生人的一封信.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/Marcus/写给一位陌生人的一封信.pdf


--------------------------------------------------------------------------------
/论文推荐/工作报告/Denoise_underwater_实验.docx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/工作报告/Denoise_underwater_实验.docx


--------------------------------------------------------------------------------
/论文推荐/工作报告/Denoise_underwater_实验.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/工作报告/Denoise_underwater_实验.pdf


--------------------------------------------------------------------------------
/论文推荐/工作报告/Denoise_实验_1126.md:
--------------------------------------------------------------------------------
 1 | **Denoise experiment**
 2 | 
 3 | - Experiment: run our previous DN-network experiments on the new dataset.
 4 | 
 5 | - Dataset: DN dataset (one folder of original images and 15 folders of noisy images), described as follows:
 6 | 
 7 | CBSD500: the original (clean) images
 8 | 
 9 | CBSD500N1–CBSD500N15: 15 folders of noisy versions of CBSD500, one noise type per folder; the numbering of the noisy filenames matches the originals. For the test set, hold out some images from each folder as appropriate (see the loading sketch at the end of this file).
10 | 
11 | - Experimental results
12 | 
13 | 1. Metric comparison
14 | 
15 | |       | PSNR | SSIM | SN   | DA   | TODO |
16 | | ----- | ---- | ---- | ---- | ---- | ---- |
17 | | DNGAN |      |      | no   | no   |      |
18 | | DNGAN |      |      | yes  | no   |      |
19 | | DNGAN |      |      | no   | yes  |      |
20 | | DNGAN |      |      | yes  | yes  |      |
21 | 
22 | 
23 | 
24 | 2. Training-stability curves
25 | 
26 | (e.g., sampled once every 10 epochs)
27 | 
28 | 
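29 | Below is a minimal loading sketch, assuming PyTorch/torchvision are available and that noisy and clean files share identical filenames (the class name `DNPairs` and the default `noise_dir` are illustrative, not part of the dataset spec; adjust the matching if the noisy files carry a suffix):
30 | 
31 | ```
32 | import os
33 | from glob import glob
34 | 
35 | from PIL import Image
36 | from torch.utils.data import Dataset
37 | from torchvision import transforms
38 | 
39 | 
40 | class DNPairs(Dataset):
41 |     """Pairs each noisy image in one CBSD500N<k> folder with its clean CBSD500 original."""
42 | 
43 |     def __init__(self, root, noise_dir="CBSD500N1"):
44 |         self.noisy_paths = sorted(glob(os.path.join(root, noise_dir, "*")))
45 |         self.clean_dir = os.path.join(root, "CBSD500")
46 |         self.to_tensor = transforms.ToTensor()
47 | 
48 |     def __len__(self):
49 |         return len(self.noisy_paths)
50 | 
51 |     def __getitem__(self, i):
52 |         noisy_path = self.noisy_paths[i]
53 |         # the numbering of the noisy filenames matches the originals
54 |         clean_path = os.path.join(self.clean_dir, os.path.basename(noisy_path))
55 |         noisy = self.to_tensor(Image.open(noisy_path).convert("RGB"))
56 |         clean = self.to_tensor(Image.open(clean_path).convert("RGB"))
57 |         return noisy, clean
58 | ```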


--------------------------------------------------------------------------------
/论文推荐/工作报告/Dive-into-DL-Pytorch.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/工作报告/Dive-into-DL-Pytorch.pdf


--------------------------------------------------------------------------------
/论文推荐/工作报告/IJCAI.md:
--------------------------------------------------------------------------------
 1 | http://ai.stanford.edu/~acoates/stl10/
 2 | 
 3 | The STL-10 dataset
 4 | 
 5 | ------
 6 | 
 7 | STL-10 is an image recognition dataset for developing unsupervised feature learning, deep learning, and self-taught learning algorithms. It is inspired by the [CIFAR-10 dataset](http://www.cs.toronto.edu/~kriz/cifar.html), with some modifications. In particular, each class has fewer labeled training examples than in CIFAR-10, but a very large set of unlabeled examples is provided for learning image models prior to supervised training. The primary challenge is to make use of the unlabeled data (which comes from a similar but different distribution than the labeled data) to build a useful prior. We also expect that the higher resolution of this dataset (96x96) will make it a challenging benchmark for developing more scalable unsupervised learning methods.
 8 | 
 9 | Overview
10 | 
11 | - 10 classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck. Images are 96x96 pixels, color. 500 training images per class (with 10 pre-defined folds) and 800 test images per class. 100,000 unlabeled images for unsupervised learning. These examples come from a similar but broader distribution of images; for example, besides the animals in the labeled set, it contains other types of animals (bears, rabbits, etc.) and vehicles (trains, buses, etc.).
12 | 
13 | - Images were acquired from labeled examples on [ImageNet](http://www.image-net.org/).
14 | 
15 | ![img](https://cy-1256894686.cos.ap-beijing.myqcloud.com/2019-11-12-022704.png)
16 | 
17 | ## Testing protocol
18 | 
19 | We recommend the following standardized testing protocol for reporting results (a torchvision sketch follows the list):
20 | 
21 | - Perform unsupervised training on the unlabeled data.
22 | - Perform supervised training on the labeled data using 10 (pre-defined) folds of 100 examples from the training data. The indices of the examples to use for each fold are provided.
23 | - Report average accuracy over the full test set.
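24 | 
25 | A minimal sketch of this protocol, assuming torchvision is installed (its `STL10` dataset exposes the unlabeled split and the pre-defined folds directly):
26 | 
27 | ```
28 | from torchvision import datasets, transforms
29 | 
30 | tfm = transforms.ToTensor()
31 | 
32 | # 100,000 unlabeled images for unsupervised pre-training
33 | unlabeled = datasets.STL10("./data", split="unlabeled", transform=tfm, download=True)
34 | 
35 | # one of the 10 pre-defined labeled training folds (folds=0..9)
36 | train_fold = datasets.STL10("./data", split="train", folds=0, transform=tfm)
37 | 
38 | # the full test set; report accuracy averaged over the 10 folds
39 | test = datasets.STL10("./data", split="test", transform=tfm)
40 | ```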


--------------------------------------------------------------------------------
/论文推荐/工作报告/RSR_补充实验.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/工作报告/RSR_补充实验.pdf


--------------------------------------------------------------------------------
/论文推荐/工作报告/TIP_补充实验.md:
--------------------------------------------------------------------------------
 1 | # TIP Supplementary Experiments
 2 | 
 3 | | | $\lambda_{vgg}$ | $\lambda_{adv}$ | $\lambda_{mse}$ | $\lambda_D $ | PSNR | SSIM |
 4 | | :-----: | :--: | :--: | :--: | :-----: | :-----: | :-----: |
 5 | | RSRGAN+gp+sn+sa | 1e-2 | 2e-2 | 1 | 1 | 24.16 |0.7886|
 6 | | R-SA    | 1e-2 | 2e-2 | 1 | 1 | 23.509221 |0.782333|
 7 | | R-SN    | 1e-2 | 2e-2 | 1 | 1 | 23.125118 |0.755338|
 8 | | R-SN+GP | 1e-2 | 2e-2 | 1 | 1 | 23.811751 |0.786565|
 9 | | R+SN+SA | 1e-2 | 2e-2 | 1 | 1 |  ||
10 | | R-SA-SN | 1e-2 | 2e-2 | 1 | 1 | 23.089206 |0.750486|
11 | 
12 | 
13 | 
14 | - Add the real underwater images shown in the paper
15 | - Plot curves like the one below
16 | - ![931569460708_.pic_hd](https://cy-1256894686.cos.ap-beijing.myqcloud.com/cy/2019-09-28-141558.png)
17 | 
18 | ---
19 | 
20 | ~~DDL 10.1~~
21 | 
22 | DDL 10.2
23 | 
24 | ---
25 | 
26 | Supplement 2
27 | 
28 | RSRGAN+SN+SA
29 | 
30 | Supplementary figures for the paper: real underwater images
31 | 
32 | Recover:
33 | 
34 | |                |      |      |
35 | | :------------: | ---- | ---- |
36 | |  Pix2pix+EDSR  |      |      |
37 | |  Pix2pix+VDSR  |      |      |
38 | | Pix2pix+SRCNN  |      |      |
39 | | Pix2pix+ESRGAN |      |      |
40 | | Our best method |      |      |
41 | 
42 | ```
43 | import argparse
44 | parser = argparse.ArgumentParser()
45 | parser.add_argument("--image_path", type=str, default='./tip_data', help="Path to image")
46 | parser.add_argument("--save_path", type=str, default='show_result/tip_ex', help="Path to save test results")
47 | ```
48 | 
49 | 
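50 | The loss weights in the table above presumably combine the generator terms as the usual weighted sum (a sketch under that assumption; the exact combination is not spelled out in this file), with $\lambda_D$ scaling the discriminator loss:
51 | 
52 | $$\mathcal{L}_G = \lambda_{mse}\,\mathcal{L}_{mse} + \lambda_{vgg}\,\mathcal{L}_{vgg} + \lambda_{adv}\,\mathcal{L}_{adv}$$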


--------------------------------------------------------------------------------
/论文推荐/工作报告/TIP_补充实验.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/工作报告/TIP_补充实验.pdf


--------------------------------------------------------------------------------
/论文推荐/工作报告/人工智能发展的现状与反思3.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/工作报告/人工智能发展的现状与反思3.pdf


--------------------------------------------------------------------------------
/论文推荐/工作报告/周报模板.md:
--------------------------------------------------------------------------------
  1 | # Work Report 9.2-MARCUS
  2 | 
  3 | ## Period: 2019.9.23~2019.9.29
  4 | 
  5 | ## Monday
  6 | 
  7 | ### Code
  8 | 
  9 | 
 10 | 
 11 | ### paper
 12 | 
 13 | 
 14 | 
 15 | ### Good reads on Zhihu
 16 | 
 17 | 
 18 | 
 19 | ### Working hours
 20 | 
 21 | 
 22 | 
 23 | ### Daily summary
 24 | 
 25 | ## Tuesday
 26 | 
 27 | ### Code
 28 | 
 29 | 
 30 | 
 31 | ### paper
 32 | 
 33 | 
 34 | 
 35 | ### Good reads on Zhihu
 36 | 
 37 | 
 38 | 
 39 | ### Working hours
 40 | 
 41 | 
 42 | 
 43 | ### Daily summary
 44 | 
 45 | ## Wednesday
 46 | 
 47 | ### Code
 48 | 
 49 | 
 50 | 
 51 | ### paper
 52 | 
 53 | 
 54 | 
 55 | ### Good reads on Zhihu
 56 | 
 57 | 
 58 | 
 59 | ### Working hours
 60 | 
 61 | 
 62 | 
 63 | ### Daily summary
 64 | 
 65 | ## Thursday
 66 | 
 67 | ### Code
 68 | 
 69 | 
 70 | 
 71 | ### paper
 72 | 
 73 | 
 74 | 
 75 | ### Good reads on Zhihu
 76 | 
 77 | 
 78 | 
 79 | ### Working hours
 80 | 
 81 | 
 82 | 
 83 | ### Daily summary
 84 | 
 85 | ## Friday
 86 | 
 87 | ### Code
 88 | 
 89 | 
 90 | 
 91 | ### paper
 92 | 
 93 | 
 94 | 
 95 | ### Good reads on Zhihu
 96 | 
 97 | 
 98 | 
 99 | ### Working hours
100 | 
101 | 
102 | 
103 | ### Daily summary
104 | 
105 | ## Saturday
106 | 
107 | ### Code
108 | 
109 | 
110 | 
111 | ### paper
112 | 
113 | 
114 | 
115 | ### Good reads on Zhihu
116 | 
117 | 
118 | 
119 | ### Working hours
120 | 
121 | 
122 | 
123 | ### Daily summary
124 | 
125 | ## Sunday
126 | 
127 | ### Code
128 | 
129 | 
130 | 
131 | ### paper
132 | 
133 | 
134 | 
135 | ### Good reads on Zhihu
136 | 
137 | 
138 | 
139 | ### Working hours
140 | 
141 | 
142 | 
143 | ### Daily summary
144 | 
145 | 
146 | 
147 | ---
148 | 
149 | ## Weekly summary


--------------------------------------------------------------------------------
/论文推荐/工作报告/大三上期末总复习.md:
--------------------------------------------------------------------------------
  1 | # Junior Year Fall Semester Final Review
  2 | 
  3 | [toc]
  4 | 
  5 | ## Subjects to Review
  6 | 
  7 | 1. Computer Networks
  8 | 2. Operating Systems
  9 | 3. Software Engineering
 10 | 4. College Physics
 11 | 5. College English
 12 | 6. Mao Zedong Thought (毛概)
 13 | 
 14 | ---
 15 | 
 16 | ## Computer Networks
 17 | 
 18 | - [ ] Go through Xie Xiren's *Computer Networks* once
 19 | - [ ] After-chapter exercises
 20 | - [ ] Summarize the key points of the five network layers
 21 | 
 22 |   - [ ] Physical layer
 23 |   - [ ] Data link layer
 24 |   - [ ] Network layer
 25 |   - [ ] Transport layer
 26 |   - [ ] Application layer
 27 | - [ ] Calculation problems
 28 |   - [ ] Set one
 29 |   - [ ] Set two
 30 | - [ ] Preview material
 31 | - [ ] Self-study material on wireless networks
 32 | 
 33 | 
 34 | 
 35 | ---
 36 | 
 37 | ## Operating Systems
 38 | 
 39 | - [x] Summarize the key points of the lecture slides
 40 | - [ ] Practice past exams
 41 |   - [ ] 2012
 42 |   - [ ] 2013
 43 |   - [ ] 2016
 44 |   - [ ] 2018
 45 | - [ ] Go through the key points and exercises in the textbook
 46 | - [ ] Review the fork lab
 47 | 
 48 | 
 49 | 
 50 | ---
 51 | 
 52 | ## Software Engineering
 53 | 
 54 | - [ ] Finish the WeChat mini-program lab
 55 | - [ ] Wait for the teacher to point out the key topics
 56 | - [ ] Skim the textbook once
 57 | 
 58 | 
 59 | 
 60 | ---
 61 | 
 62 | ## College Physics
 63 | 
 64 | - [ ] Electricity part
 65 | - [ ] Summarize the electricity formulas
 66 | - [ ] Review the electricity exercises
 67 |   - [ ] Chapter 5
 68 |   - [ ] Chapter 6
 69 |   - [ ] Chapter 7
 70 |   - [ ] Chapter 8
 71 | - [ ] Self-study the optics part of College Physics II
 72 |   - [ ] Optics after-chapter exercises
 73 |   - [ ] Optics formulas and key points
 74 |   - [ ] Summarize problem types for basic quantum mechanics and relativity
 75 | - [ ] Find a few practice exams online for review
 76 | 
 77 | ----
 78 | 
 79 | ## General Education Course
 80 | 
 81 | Finish the presentation slides for the general education course
 82 | 
 83 | ---
 84 | 
 85 | ## Mao Zedong Thought
 86 | 
 87 | - [ ] Review the mind maps x5
 88 | - [ ] Daily drills, target 100 practice sets
 89 | 
 90 | ---
 91 | 
 92 | ## College English
 93 | 
 94 | - [ ] Finish U校园 (as soon as possible)
 95 | - [ ] Translate the long readings in units 4~8 of the comprehensive coursebook
 96 | - [ ] Go through the vocabulary notebook once, focusing on memorizing example sentences
 97 | - [ ] Memorize summaries 1~5 + 7_1
 98 | - [ ] To be continued
 99 | 
100 | ---
101 | 
102 | 
103 | 
104 | 


--------------------------------------------------------------------------------
/论文推荐/工作报告/工作报告11.27.md:
--------------------------------------------------------------------------------
  1 | # Work Report 11.27-MARCUS
  2 | 
  3 | > TO DO list
  4 | >
  5 | > Denoise phase-4 supplementary experiments (can be postponed)
  6 | >
  7 | > College English listening U5 deadline--yes
  8 | >
  9 | > College Physics unit 8 homework
 10 | >
 11 | > Urgent: Mao Zedong Thought survey report--yes
 12 | >
 13 | > Reading notes, DDL 12.1
 14 | 
 15 | ## Period: 2019.11.27 to 12.4
 16 | 
 17 | ## Monday
 18 | 
 19 | ### Code
 20 | 
 21 | 
 22 | 
 23 | ### paper
 24 | 
 25 | 
 26 | 
 27 | ### Good reads on Zhihu
 28 | 
 29 | 
 30 | 
 31 | ### Working hours
 32 | 
 33 | 
 34 | 
 35 | ### Daily summary
 36 | 
 37 | ---
 38 | 
 39 | ## Tuesday
 40 | 
 41 | ### Code
 42 | 
 43 | 
 44 | 
 45 | ### paper
 46 | 
 47 | > UNIT
 48 | >
 49 | > MUNIT
 50 | >
 51 | > FUNIT
 52 | 
 53 | ### Good reads on Zhihu
 54 | 
 55 | 
 56 | 
 57 | ### Working hours
 58 | 
 59 | 
 60 | 
 61 | ### Daily summary
 62 | 
 63 | ---
 64 | 
 65 | ## Wednesday
 66 | 
 67 | ### Code
 68 | 
 69 | 
 70 | 
 71 | ### paper
 72 | 
 73 | 
 74 | 
 75 | ### Good reads on Zhihu
 76 | 
 77 | 
 78 | 
 79 | ### Working hours
 80 | 
 81 | 
 82 | 
 83 | ### Daily summary
 84 | 
 85 | ---
 86 | 
 87 | ## Thursday
 88 | 
 89 | ### Code
 90 | 
 91 | 
 92 | 
 93 | ### paper
 94 | 
 95 | 
 96 | 
 97 | ### Good reads on Zhihu
 98 | 
 99 | [Advanced Topics in GANs](https://towardsdatascience.com/comprehensive-introduction-to-turing-learning-and-gans-part-2-fd8e4a70775)
100 | 
101 | [GANs vs. Autoencoders: Comparison of Deep Generative Models](https://towardsdatascience.com/gans-vs-autoencoders-comparison-of-deep-generative-models-985cf15936ea)
102 | 
103 | 
104 | 
105 | ### Working hours
106 | 
107 | 
108 | 
109 | ### Daily summary
110 | 
111 | ## Friday
112 | 
113 | ### Code
114 | 
115 | 
116 | 
117 | ### paper
118 | 
119 | 
120 | 
121 | ### Good reads on Zhihu
122 | 
123 | 
124 | 
125 | ### Working hours
126 | 
127 | 
128 | 
129 | ### Daily summary
130 | 
131 | ## Saturday
132 | 
133 | ### Code
134 | 
135 | 
136 | 
137 | ### paper
138 | 
139 | 
140 | 
141 | ### Good reads on Zhihu
142 | 
143 | 
144 | 
145 | ### Working hours
146 | 
147 | 
148 | 
149 | ### Daily summary
150 | 
151 | ## Sunday
152 | 
153 | ### Code
154 | 
155 | 
156 | 
157 | ### paper
158 | 
159 | 
160 | 
161 | ### Good reads on Zhihu
162 | 
163 | 
164 | 
165 | ### Working hours
166 | 
167 | 
168 | 
169 | ### Daily summary
170 | 
171 | 
172 | 
173 | ---
174 | 
175 | ## Weekly summary


--------------------------------------------------------------------------------
/论文推荐/工作报告/工作报告9.16-MARCU.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/工作报告/工作报告9.16-MARCU.pdf


--------------------------------------------------------------------------------
/论文推荐/工作报告/工作报告9.2-MARCUS.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/工作报告/工作报告9.2-MARCUS.pdf


--------------------------------------------------------------------------------
/论文推荐/工作报告/工作报告9.23-MARCUS.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/工作报告/工作报告9.23-MARCUS.pdf


--------------------------------------------------------------------------------
/论文推荐/工作报告/工作报告9.9-MARCUS.md:
--------------------------------------------------------------------------------
1 | # Work Report 9.9-MARCUS
2 | 
3 | ## Period: 2019.9.9~2019.9.15
4 | 
5 | Spent the whole week fully focused on the mathematical modeling contest; over 80 hours of work.
6 | 


--------------------------------------------------------------------------------
/论文推荐/报告/1905.01723.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/报告/1905.01723.pdf


--------------------------------------------------------------------------------
/论文推荐/报告/Shaham_SinGAN_Learning_a_Generative_Model_From_a_Single_Natural_Image_ICCV_2019_paper.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/报告/Shaham_SinGAN_Learning_a_Generative_Model_From_a_Single_Natural_Image_ICCV_2019_paper.pdf


--------------------------------------------------------------------------------
/论文推荐/报告/introduction_GAN.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/报告/introduction_GAN.pdf


--------------------------------------------------------------------------------
/论文推荐/报告/weekly_slides.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OUCMachineLearning/OUCML/5b54337d7c0316084cb1a74befda2bba96137d4a/论文推荐/报告/weekly_slides.pdf


--------------------------------------------------------------------------------