├── CONTRIBUTING.md ├── LICENSE ├── Procfile ├── README.md ├── __pycache__ └── app.cpython-38.pyc ├── app.py ├── models ├── braintumor │ ├── README.md │ ├── __pycache__ │ │ ├── brainapp.cpython-36.pyc │ │ ├── brainapp.cpython-37.pyc │ │ └── brainapp.cpython-38.pyc │ ├── brainapp.py │ ├── readme_images │ │ ├── brainres.png │ │ ├── data.png │ │ ├── fig1.jpg │ │ └── fig4.jpg │ ├── src │ │ ├── __pycache__ │ │ │ ├── config.cpython-37.pyc │ │ │ ├── config.cpython-38.pyc │ │ │ ├── data_load.cpython-37.pyc │ │ │ ├── data_load.cpython-38.pyc │ │ │ ├── dataset_class.cpython-38.pyc │ │ │ ├── dice_loss.cpython-38.pyc │ │ │ ├── plot_everything.cpython-38.pyc │ │ │ ├── predict.cpython-36.pyc │ │ │ ├── predict.cpython-37.pyc │ │ │ ├── predict.cpython-38.pyc │ │ │ ├── unet_arch.cpython-37.pyc │ │ │ └── unet_arch.cpython-38.pyc │ │ ├── config.py │ │ ├── data_load.py │ │ ├── dataset_class.py │ │ ├── dice_loss.py │ │ ├── plot_everything.py │ │ ├── predict.py │ │ ├── test.py │ │ ├── train.py │ │ └── unet_arch.py │ ├── static │ │ ├── TCGA_HT_8563_19981209_9.tif │ │ ├── secondPage.css │ │ └── styles.css │ ├── templates │ │ └── btindex.html │ └── weights │ │ └── model.h5 ├── cataract │ ├── README.md │ ├── __pycache__ │ │ ├── catapp.cpython-36.pyc │ │ ├── catapp.cpython-37.pyc │ │ └── catapp.cpython-38.pyc │ ├── catapp.py │ ├── readme_images │ │ ├── fig1.jpg │ │ ├── fig2.png │ │ ├── fig4.png │ │ └── res.png │ ├── src │ │ ├── __pycache__ │ │ │ ├── config.cpython-37.pyc │ │ │ ├── dataset_class.cpython-37.pyc │ │ │ ├── predict.cpython-36.pyc │ │ │ ├── predict.cpython-37.pyc │ │ │ ├── predict.cpython-38.pyc │ │ │ └── preprocess.cpython-37.pyc │ │ ├── config.py │ │ ├── dataset_class.py │ │ ├── predict.py │ │ ├── preprocess.py │ │ └── train.py │ ├── static │ │ ├── eye.png │ │ ├── secondPage.css │ │ └── styles.css │ ├── templates │ │ └── catindex.html │ └── weight │ │ ├── cat.h5 │ │ └── cat1.h5 ├── pneumonia │ ├── README.md │ ├── __pycache__ │ │ ├── pneapp.cpython-36.pyc │ │ ├── 
pneapp.cpython-37.pyc │ │ └── pneapp.cpython-38.pyc │ ├── data │ │ ├── test │ │ │ ├── NORMAL │ │ │ │ ├── IM-0001-0001.jpeg │ │ │ │ ├── IM-0007-0001.jpeg │ │ │ │ ├── IM-0129-0001.jpeg │ │ │ │ └── NORMAL2-IM-1436-0001.jpeg │ │ │ └── PNEUMONIA │ │ │ │ ├── person1000_virus_1681.jpeg │ │ │ │ ├── person100_bacteria_475.jpeg │ │ │ │ ├── person100_bacteria_481.jpeg │ │ │ │ └── person109_bacteria_528.jpeg │ │ ├── train │ │ │ ├── NORMAL │ │ │ │ ├── IM-0001-0001.jpeg │ │ │ │ ├── IM-0007-0001.jpeg │ │ │ │ ├── IM-0129-0001.jpeg │ │ │ │ └── NORMAL2-IM-1436-0001.jpeg │ │ │ └── PNEUMONIA │ │ │ │ ├── person1000_virus_1681.jpeg │ │ │ │ ├── person100_bacteria_475.jpeg │ │ │ │ ├── person100_bacteria_481.jpeg │ │ │ │ └── person109_bacteria_528.jpeg │ │ └── val │ │ │ ├── NORMAL │ │ │ ├── IM-0001-0001.jpeg │ │ │ ├── IM-0007-0001.jpeg │ │ │ ├── IM-0129-0001.jpeg │ │ │ └── NORMAL2-IM-1436-0001.jpeg │ │ │ └── PNEUMONIA │ │ │ ├── person1000_virus_1681.jpeg │ │ │ ├── person100_bacteria_475.jpeg │ │ │ ├── person100_bacteria_481.jpeg │ │ │ └── person109_bacteria_528.jpeg │ ├── notebook │ │ ├── .ipynb_checkpoints │ │ │ └── x-ray-image-classification-using-pytorch-checkpoint.ipynb │ │ └── x-ray-image-classification-using-pytorch.ipynb │ ├── pneapp.py │ ├── readme_images │ │ ├── fig1.jpg │ │ ├── fig2.jpg │ │ ├── fig3.jpeg │ │ └── fig4.jpg │ ├── src │ │ ├── __pycache__ │ │ │ ├── predict.cpython-36.pyc │ │ │ ├── predict.cpython-37.pyc │ │ │ └── predict.cpython-38.pyc │ │ ├── config.py │ │ ├── plot_me.py │ │ ├── predict.py │ │ └── train.py │ ├── static │ │ ├── inputImage.jpg │ │ ├── secondPage.css │ │ └── styles.css │ ├── templates │ │ └── pneindex.html │ └── weights │ │ └── pne.pt └── riskmodel │ ├── README.md │ ├── __pycache__ │ ├── rapp.cpython-37.pyc │ └── rapp.cpython-38.pyc │ ├── data │ ├── NHANESI_X.csv │ └── NHANESI_y.csv │ ├── rapp.py │ ├── readme_images │ └── fig.4.jpg │ ├── src │ ├── __pycache__ │ │ ├── config.cpython-38.pyc │ │ ├── predict.cpython-37.pyc │ │ └── predict.cpython-38.pyc │ 
├── config.py │ ├── predict.py │ ├── train.py │ └── utils.py │ ├── static │ ├── secondPage.css │ └── styles.css │ ├── templates │ └── rkindex.html │ └── weight │ └── model.pkl ├── readme_images └── medicalai-2020-11-25_00.10.09.gif ├── requirements.txt ├── runtime.txt ├── static ├── images │ ├── brain.png │ ├── doc.png │ ├── eye.png │ ├── lungs.png │ └── risk.jpg └── styles.css └── templates ├── getting_started.html └── index.html /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines: 2 | 3 | ### Best practices for reporting issues or requesting enhancements: 4 | - Follow the issue template while creating the issue. 5 | - Include screenshots, if any (especially for UI-related issues). 6 | - For UI enhancements or workflows, include mockups to give a clear idea. 7 | 8 | ### Best practices for getting assigned to an issue: 9 | - If you would like to work on an issue, say so by commenting on it. 10 | - Make sure you can reproduce the issue before working on it. If not, ask the issue creator for clarification in the comments. 11 | 12 | Note: Please do not work on issues that are already being worked on by another contributor. We don't encourage multiple pull requests for the same issue. Please allow the assigned person a few days to work on the issue (the time may vary with its difficulty). If there is no progress after the deadline, comment on the issue asking the contributor whether they are still working on it. If there is no reply, feel free to take the issue over. 13 | 14 | ### Commits in your pull requests should 15 | 16 | - Have a useful description. 17 | 18 | ## Advice on pull requests 19 | 20 | Pull requests are the easiest way to contribute changes to Git repos on GitHub. 21 | They are the preferred contribution method, as they offer a nice way of 22 | commenting on and amending the proposed changes. 
23 | 24 | - You need a local "fork" of the Github repo. 25 | - Keep your fork up to date 26 | - Before you create a branch for new changes you should update your fork with the latest changes from our master. 27 | 28 | - Use a "feature branch" for your changes ( "ui branch" for UI/UX related changes). That separates the changes in the 29 | pull request from your other changes and makes it easy to edit/amend commits in the pull request. Workflow using "feature_x" as the example: 30 | - Update your local git fork to the tip (of the master, usually) 31 | - Create the feature branch with `git checkout -b feature_x` 32 | - Edit changes and commit them locally 33 | - Push them to your Github fork by `git push -u origin feature_x`. That 34 | creates the "feature_x" branch at your Github fork and sets it as the 35 | remote of this branch 36 | - When you now visit Github, you should see a proposal to create a pull 37 | request 38 | 39 | - If you later need to add new commits to the pull request, you can simply 40 | commit the changes to the local branch and then use `git push` to 41 | automatically update the pull request. 42 | 43 | - If you need to change something in the existing pull request (e.g. to add a 44 | missing signed-off-by line to the commit message), you can use `git push -f` 45 | to overwrite the original commits. That is easy and safe when using a feature 46 | branch. Example workflow: 47 | - Checkout the feature branch by `git checkout feature_x` 48 | - Edit changes and commit them locally. If you are just updating the commit 49 | message in the last commit, you can use `git commit --amend` to do that 50 | - If you added several new commits or made other changes that require 51 | cleaning up, you can use `git rebase -i HEAD~X` (X = number of commits to 52 | edit) to possibly squash some commits 53 | - Push the changed commits to Github with `git push -f` to overwrite the 54 | original commits in the "feature_x" branch with the new ones. 
The pull 55 | request gets automatically updated 56 | 57 | ## If you have commit access 58 | 59 | - Do NOT use git push --force. 60 | - Do NOT commit to other maintainer's packages without their consent. 61 | - Use Pull Requests if you are unsure and to suggest changes to other 62 | maintainers. 63 | 64 | 65 | 66 | 67 | 68 | > Thank you for contributing to this repository! 🙇‍♂️ 69 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 Manpreet Singh 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /Procfile: -------------------------------------------------------------------------------- 1 | web: gunicorn app:app -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Medical AI 2 | [![Documentation Status](https://readthedocs.org/projects/fairscale/badge/?version=latest)](https://fairscale.readthedocs.io/en/latest/?badge=latest) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/facebookresearch/fairscale/blob/master/CONTRIBUTING.md) 3 | 4 | Research in the 1960s and 1970s produced the first problem-solving program, or expert system, known as Dendral. While it was designed for applications in organic chemistry, it provided the basis for a subsequent system, MYCIN, considered one of the most significant early uses of artificial intelligence in medicine. 5 | 6 | The 1980s and 1990s brought the proliferation of the microcomputer and new levels of network connectivity. During this time, researchers and developers recognized that AI systems in healthcare must be designed to accommodate the absence of perfect data and to build on the expertise of physicians. Approaches involving fuzzy set theory, Bayesian networks, and artificial neural networks have been applied to intelligent computing systems in healthcare. 7 | 8 | Various specialties in medicine have shown an increase in research regarding AI. As the novel coronavirus ravages the globe, the United States is estimated to invest more than $2 billion in AI-related healthcare research over the next 5 years, more than 4 times the amount spent in 2019 ($463 million). 9 | 10 | ## PROJECT: 11 | You can find our project/website by [Clicking Here!](https://arcane-garden-82331.herokuapp.com/). 12 | 13 | Figure 1. 
Illustration of the Medical AI website 14 | 15 | ## OUR INITIATIVE: 16 | * We propose a portal that gathers solutions to common medical problems. 17 | * It aims to make the factors that affect each problem easier to understand. 18 | * We provide a simple, handy website powered by machine-learning algorithms. 19 | 20 | 21 | ## HOW TO GET STARTED? 22 | Let's first understand the directory structure. 23 | 24 | #### Directory Layout 25 | . 26 | ├── models # contains all models 27 | │ ├── braintumor 28 | │ ├── cataract 29 | │ ├── pneumonia 30 | │ ├── riskmodel 31 | ├── static # contains images, CSS and JS files 32 | │ ├── images 33 | │ ├── styles.css 34 | ├── templates # contains all HTML files 35 | │ ├── getting_started.html 36 | │ ├── index.html 37 | ├── Procfile # declares the web process for deployment 38 | ├── app.py # web app entry point 39 | ├── requirements.txt # lists all required libraries and dependencies 40 | ├── runtime.txt # pins the Python runtime version 41 | 42 | 43 | ## How to Install and Run 44 | * Fork this repository: [Repo Link](https://github.com/manpreet2000/Medical-AI). 45 | * Clone your fork. 46 | * Run the following commands in a command prompt 47 | ```bash 48 | cd Medical-AI 49 | ``` 50 | 51 | ```bash 52 | pip install -r requirements.txt 53 | ``` 54 | * Run this to start the server 55 | ```bash 56 | python app.py 57 | ``` 58 | * Select the `problem` you want to check. 59 | * Provide the `input` for that specific problem and check the `prediction`/`output`. 60 | 61 | ## About Contribution: 62 | * Raise an `issue`. 63 | * Work on raised issues. 64 | * Come up with interesting medicine-related problems and solutions. 65 | * Improve the UI/UX. 66 | * Contribute to the README files as well. 67 | 68 | ## Package Guidelines 69 | 70 | See the [CONTRIBUTING.md](CONTRIBUTING.md) file for detailed information. 71 | 72 | ## License 73 | 74 | See the [LICENSE](LICENSE) file. 75 | 76 | 77 | "Take stands, take risks, take responsibility." 
78 | — Muriel Siebert 79 | -------------------------------------------------------------------------------- /__pycache__/app.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/__pycache__/app.cpython-38.pyc -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | # if you get this error : 2 | # Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized. 3 | # uncomment this code 4 | ########################################### 5 | import os 6 | os.environ['KMP_DUPLICATE_LIB_OK']='True' 7 | ########################################### 8 | 9 | from flask import Flask,render_template 10 | from models.cataract.catapp import catapp 11 | from models.pneumonia.pneapp import pneapp 12 | from models.braintumor.brainapp import brainapp 13 | from models.riskmodel.rapp import rapp 14 | app=Flask(__name__,template_folder="./templates",static_folder="./static") 15 | app.register_blueprint(brainapp,url_prefix="/models/braintumor") 16 | app.register_blueprint(catapp,url_prefix="/models/cataract") 17 | app.register_blueprint(pneapp,url_prefix="/models/pneumonia") 18 | app.register_blueprint(rapp,url_prefix="/models/riskmodel") 19 | 20 | @app.route("/") 21 | def index(): 22 | return render_template("index.html") 23 | 24 | @app.route("/getting_started") 25 | def gs(): 26 | return render_template("getting_started.html") 27 | 28 | 29 | if __name__=="__main__": 30 | app.run(port=8000,debug=True) 31 | -------------------------------------------------------------------------------- /models/braintumor/README.md: -------------------------------------------------------------------------------- 1 | # Brain Tumor Segmentation 2 | [![Documentation 
Status](https://readthedocs.org/projects/fairscale/badge/?version=latest)](https://fairscale.readthedocs.io/en/latest/?badge=latest) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/facebookresearch/fairscale/blob/master/CONTRIBUTING.md) 3 | 4 | 5 | A brain tumor is a cancerous or non-cancerous mass or growth of abnormal cells in the brain. Tumours can start in the brain, or cancer elsewhere in the body can spread to the brain. This repository provides source code for a deep convolutional neural network architecture 6 | designed for brain tumor segmentation. The architecture is a fully convolutional network (FCN) built upon the well-known U-Net model; it makes use of residual units instead of plain units to speed up training and convergence. 7 | The implementation is based on PyTorch. 8 | 9 | Figure 1.  Brain Tumor/Normal 10 | 11 | ## Dataset 12 | Dataset used in: Mateusz Buda, Ashirbani Saha, Maciej A. Mazurowski, "Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by 13 | a deep learning algorithm." Computers in Biology and Medicine, 2019. 14 | 15 | This dataset contains brain MR images together with manual FLAIR abnormality segmentation masks. The images were obtained from The Cancer Imaging Archive (TCIA). They correspond 16 | to 110 patients included in The Cancer Genome Atlas (TCGA) lower-grade glioma collection with at least a fluid-attenuated inversion recovery (FLAIR) sequence and genomic cluster data available. 17 | 18 | Figure 2. Illustrative Examples of Brain MR Images in Patients with Brain Tumors 19 | 20 | ## Introduction 21 | 22 | #### Directory Layout 23 | . 
24 | ├── data # the data folder is hidden; its path is listed in the .gitignore file 25 | │ ├── data.csv 26 | | |── Image_folder 27 | ├── src 28 | │ ├── config.py # contains all the configuration 29 | | ├── data_load.py # loads the dataset 30 | | ├── dataset_class.py # creates the dataset 31 | | ├── dice_loss.py # defines the loss function 32 | | ├── plot_everything.py # plots the dataset 33 | | ├── predict.py # end-to-end prediction file 34 | | ├── test.py # tests the model 35 | | ├── train.py # trains the model 36 | | ├── unet_arch.py # defines the UNet architecture 37 | ├── static 38 | | ├── inputImage.jpg # input image 39 | ├── templates 40 | | ├── btindex.html # HTML file for the UI 41 | ├── weights 42 | | ├── model.h5 # trained weights 43 | ├── brainapp.py # web app file 44 | 45 | #### Content 46 | | Directory | Info | 47 | |-----------|--------------| 48 | | `notebook` | contains Jupyter notebooks for `EDA` and `experiments` | 49 | | `src` | contains all Python files | 50 | | `templates` | contains the HTML file | 51 | | `static` | contains CSS, JS files and images | 52 | | `data` | contains the [data](https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation), which is hidden | 53 | | `weights` | contains trained weights | 54 | 55 | ## Evaluation 56 | The proposed approach was evaluated using Precision, Recall, Accuracy and F1 score. Our source code is freely available here. 57 | 58 | Figure 3. Evaluation of the Model 59 | 60 | ## Prerequisites 61 | * Python 3.4+ 62 | * PyTorch and its dependencies 63 | 64 | ## How to Install and Run 65 | * Clone this repository and run in a command prompt 66 | ```bash 67 | pip install -r requirements.txt 68 | ``` 69 | * Run this to start the server 70 | ```bash 71 | python brainapp.py 72 | ``` 73 | * Upload an `MR` image to predict whether the patient has a `brain tumor` and, if so, `where` it is. 74 | Figure 4.  Brain Tumor Segment Prediction 75 | 76 | 77 | ## Train your own model* 78 | * To train, run `train.py` in the `src` directory. 
79 | * To change epochs, the data directory, the random seed, the learning rate, etc., edit `config.py`. 80 | 81 | > Note: 82 | > *: This project is built purely on PyTorch; it would be appreciated if contributions stick to PyTorch only. -------------------------------------------------------------------------------- /models/braintumor/__pycache__/brainapp.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/__pycache__/brainapp.cpython-36.pyc -------------------------------------------------------------------------------- /models/braintumor/__pycache__/brainapp.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/__pycache__/brainapp.cpython-37.pyc -------------------------------------------------------------------------------- /models/braintumor/__pycache__/brainapp.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/__pycache__/brainapp.cpython-38.pyc -------------------------------------------------------------------------------- /models/braintumor/brainapp.py: -------------------------------------------------------------------------------- 1 | import os 2 | from flask import Flask, request, render_template, url_for, Blueprint 3 | from flask_cors import CORS, cross_origin 4 | import shutil 5 | import models.braintumor.src.predict as predict 6 | import base64 7 | import numpy as np 8 | from io import BytesIO 9 | #brainapp = Flask(__name__) 10 | brainapp=Blueprint("brainapp",__name__,template_folder="templates",static_folder="static") 11 | #CORS(brainapp) 12 | 13 | 
upload_folder="./models/braintumor/static" 14 | 15 | 16 | @brainapp.route("/", methods=["GET","POST"]) 17 | def index(): 18 | if request.method=="POST": 19 | image_file=request.files["file"] 20 | 21 | if image_file: 22 | 23 | npimg = np.frombuffer(image_file.read(), np.uint8)  # np.fromstring is deprecated for binary data 24 | classifier=predict.predict_img(npimg) 25 | uri=classifier.predict_image() 26 | 27 | return render_template('/btindex.html',image_loc=uri) 28 | return render_template('/btindex.html',image_loc=None) 29 | 30 | 31 | # if __name__ == '__main__': 32 | # brainapp.run(debug=True,port=8000) -------------------------------------------------------------------------------- /models/braintumor/readme_images/brainres.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/readme_images/brainres.png -------------------------------------------------------------------------------- /models/braintumor/readme_images/data.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/readme_images/data.png -------------------------------------------------------------------------------- /models/braintumor/readme_images/fig1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/readme_images/fig1.jpg -------------------------------------------------------------------------------- /models/braintumor/readme_images/fig4.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/readme_images/fig4.jpg 
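The upload route in `brainapp.py` reads the raw request bytes into a flat NumPy array before handing them to the predictor. A minimal sketch of that decode step (using `np.frombuffer`, the non-deprecated equivalent of `np.fromstring` used in the source):

```python
import numpy as np

# Stand-in for the bytes Flask's image_file.read() would return
raw = bytes(range(12))

# Reinterpret the byte string as a flat uint8 array, as the route does
npimg = np.frombuffer(raw, np.uint8)

print(npimg.dtype, npimg.shape)  # uint8 (12,)
```

This flat byte array is presumably decoded into an actual image (e.g. via `cv2.imdecode`) inside `predict.predict_img`; the exact decoding lives in `src/predict.py`.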
-------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/config.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/config.cpython-37.pyc -------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/config.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/config.cpython-38.pyc -------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/data_load.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/data_load.cpython-37.pyc -------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/data_load.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/data_load.cpython-38.pyc -------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/dataset_class.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/dataset_class.cpython-38.pyc -------------------------------------------------------------------------------- 
/models/braintumor/src/__pycache__/dice_loss.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/dice_loss.cpython-38.pyc -------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/plot_everything.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/plot_everything.cpython-38.pyc -------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/predict.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/predict.cpython-36.pyc -------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/predict.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/predict.cpython-37.pyc -------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/predict.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/predict.cpython-38.pyc -------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/unet_arch.cpython-37.pyc: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/unet_arch.cpython-37.pyc -------------------------------------------------------------------------------- /models/braintumor/src/__pycache__/unet_arch.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/src/__pycache__/unet_arch.cpython-38.pyc -------------------------------------------------------------------------------- /models/braintumor/src/config.py: -------------------------------------------------------------------------------- 1 | #Data has been taken from Kaggle "https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation" 2 | IMAGE_SIZE=256 3 | IN_CHANNELS=3 4 | OUT_CHANNELS=1 5 | INIT_FEATURES=32 6 | DATA_PATH="./models/braintumor/data/" 7 | BASE_LEN = 70 #89 # len(/kaggle/input/lgg-mri-segmentation/kaggle_3m/TCGA_DU_6404_19850629/TCGA_DU_6404_19850629_ <-!!!43.tif) 8 | END_IMG_LEN = 4 # len(/kaggle/input/lgg-mri-segmentation/kaggle_3m/TCGA_DU_6404_19850629/TCGA_DU_6404_19850629_43 !!!->.tif) 9 | END_MASK_LEN = 9 # (/kaggle/input/lgg-mri-segmentation/kaggle_3m/TCGA_DU_6404_19850629/TCGA_DU_6404_19850629_43 !!!->_mask.tif) 10 | WEIGHT="models/braintumor/weights/model.h5" 11 | STEP_SIZE=4 12 | EPOCHS=20 13 | LR=0.13 14 | -------------------------------------------------------------------------------- /models/braintumor/src/data_load.py: -------------------------------------------------------------------------------- 1 | import config 2 | import glob 3 | import pandas as pd 4 | import os 5 | import numpy as np 6 | import cv2 7 | 8 | def load_images_in_df(): 9 | data_map = [] 10 | 11 | for sub_dir_path in glob.glob(config.DATA_PATH+"*"): 12 | if os.path.isdir(sub_dir_path): 13 | dirname = 
sub_dir_path.split("/")[-1] 14 | for filename in os.listdir(sub_dir_path): 15 | image_path = sub_dir_path + "/" + filename 16 | data_map.extend([dirname, image_path]) 17 | else: 18 | print("This is not a dir:", sub_dir_path) 19 | 20 | 21 | df = pd.DataFrame({"dirname" : data_map[::2], 22 | "path" : data_map[1::2]}) 23 | #print(df.head()) 24 | df_imgs = df[~df['path'].str.contains("mask")] 25 | df_masks = df[df['path'].str.contains("mask")] 26 | 27 | # Data sorting 28 | imgs = sorted(df_imgs["path"].values, key=lambda x : int(x[config.BASE_LEN:-config.END_IMG_LEN])) 29 | masks = sorted(df_masks["path"].values, key=lambda x : int(x[config.BASE_LEN:-config.END_MASK_LEN])) 30 | 31 | 32 | # Final dataframe 33 | df = pd.DataFrame({"patient": df_imgs.dirname.values, 34 | "image_path": imgs, 35 | "mask_path": masks}) 36 | 37 | 38 | # Adding A/B column for diagnosis 39 | def positiv_negativ_diagnosis(mask_path): 40 | value = np.max(cv2.imread(mask_path)) 41 | if value > 0 : return 1 42 | else: return 0 43 | 44 | df["diagnosis"] = df["mask_path"].apply(lambda m: positiv_negativ_diagnosis(m)) 45 | 46 | 47 | return df 48 | 49 | # if __name__ == "__main__": 50 | # load_images_in_df() -------------------------------------------------------------------------------- /models/braintumor/src/dataset_class.py: -------------------------------------------------------------------------------- 1 | from torch.utils.data import Dataset 2 | import cv2 3 | 4 | class BrainMriDataset(Dataset): 5 | def __init__(self, df, transforms): 6 | 7 | self.df = df 8 | self.transforms = transforms 9 | 10 | def __len__(self): 11 | return len(self.df) 12 | 13 | def __getitem__(self, idx): 14 | image = cv2.imread(self.df.iloc[idx, 1]) 15 | mask = cv2.imread(self.df.iloc[idx, 2], 0) 16 | 17 | augmented = self.transforms(image=image, 18 | mask=mask) 19 | 20 | image = augmented['image'] 21 | mask = augmented['mask'] 22 | 23 | return image, mask 24 | 25 | 
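The Dice-style loss defined in `src/dice_loss.py` is easy to sanity-check numerically. A quick sketch of the same formula, 1 − (2·Σ(pred·target) + ε) / (Σpred + Σtarget + ε), using plain NumPy and small made-up arrays:

```python
import numpy as np

def dice_loss(pred, target, e=1e-6):
    # Soft Dice loss: 1 - 2*intersection/union, epsilon avoids division by zero
    inter = 2 * (pred * target).sum() + e
    union = pred.sum() + target.sum() + e
    return 1 - inter / union

pred = np.array([1.0, 1.0, 0.0, 0.0])
target = np.array([1.0, 0.0, 1.0, 0.0])

# One true positive out of two predicted / two actual positives:
# 1 - (2*1)/(2+2) gives a loss of about 0.5
print(dice_loss(pred, target))

# Perfect overlap gives a loss of (almost exactly) 0
print(dice_loss(target, target))
```

The epsilon keeps the loss defined when both the prediction and the mask are empty, which happens for tumor-free slices.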
-------------------------------------------------------------------------------- /models/braintumor/src/dice_loss.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn  # needed for nn.BCELoss in bce_dice_loss below 3 | import numpy as np 4 | def dice_loss(pred,target,e=1e-6): 5 | inter=2*(pred*target).sum()+e 6 | union=(pred).sum()+(target).sum()+e 7 | return 1-(inter/union).sum() 8 | 9 | def bce_dice_loss(inputs, target): 10 | dicescore = dice_loss(inputs, target) 11 | bcescore = nn.BCELoss() 12 | bceloss = bcescore(inputs, target) 13 | 14 | return dicescore+bceloss -------------------------------------------------------------------------------- /models/braintumor/src/plot_everything.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pandas as pd 3 | import matplotlib.pyplot as plt 4 | 5 | def show_aug(inputs, nrows=5, ncols=5, image=True): 6 | plt.figure(figsize=(10, 10)) 7 | plt.subplots_adjust(wspace=0., hspace=0.) 8 | i_ = 0 9 | 10 | if len(inputs) > 25: 11 | inputs = inputs[:25] 12 | 13 | for idx in range(len(inputs)): 14 | 15 | # normalization 16 | if image is True: 17 | img = inputs[idx].numpy().transpose(1,2,0) 18 | mean = [0.485, 0.456, 0.406] 19 | std = [0.229, 0.224, 0.225] 20 | img = (img*std+mean).astype(np.float32) 21 | else: 22 | img = inputs[idx].numpy().astype(np.float32) 23 | img = img[0,:,:] 24 | 25 | #plot 26 | #print(img.max(), len(np.unique(img))) 27 | plt.subplot(nrows, ncols, i_+1) 28 | plt.imshow(img); 29 | plt.axis('off') 30 | 31 | i_ += 1 32 | 33 | return plt.show() 34 | 35 | -------------------------------------------------------------------------------- /models/braintumor/src/predict.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import os 4 | import albumentations as A 5 | import numpy as np 6 | from collections import OrderedDict 7 | import cv2 8 | import matplotlib.pyplot as plt 9 | 
from albumentations.pytorch import ToTensor 10 | import io 11 | import base64 12 | from PIL import Image 13 | 14 | class UNet(nn.Module): 15 | 16 | def __init__(self, in_channels=3, out_channels=1, init_features=32): 17 | super(UNet, self).__init__() 18 | 19 | features = init_features 20 | self.encoder1 = UNet._block(in_channels, features, name="enc1") 21 | self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2) 22 | self.encoder2 = UNet._block(features, features * 2, name="enc2") 23 | self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2) 24 | self.encoder3 = UNet._block(features * 2, features * 4, name="enc3") 25 | self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2) 26 | self.encoder4 = UNet._block(features * 4, features * 8, name="enc4") 27 | self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2) 28 | 29 | self.bottleneck = UNet._block(features * 8, features * 16, name="bottleneck") 30 | 31 | self.upconv4 = nn.ConvTranspose2d( 32 | features * 16, features * 8, kernel_size=2, stride=2 33 | ) 34 | self.decoder4 = UNet._block((features * 8) * 2, features * 8, name="dec4") 35 | self.upconv3 = nn.ConvTranspose2d( 36 | features * 8, features * 4, kernel_size=2, stride=2 37 | ) 38 | self.decoder3 = UNet._block((features * 4) * 2, features * 4, name="dec3") 39 | self.upconv2 = nn.ConvTranspose2d( 40 | features * 4, features * 2, kernel_size=2, stride=2 41 | ) 42 | self.decoder2 = UNet._block((features * 2) * 2, features * 2, name="dec2") 43 | self.upconv1 = nn.ConvTranspose2d( 44 | features * 2, features, kernel_size=2, stride=2 45 | ) 46 | self.decoder1 = UNet._block(features * 2, features, name="dec1") 47 | 48 | self.conv = nn.Conv2d( 49 | in_channels=features, out_channels=out_channels, kernel_size=1 50 | ) 51 | 52 | def forward(self, x): 53 | #print(x.shape) 54 | enc1 = self.encoder1(x) 55 | enc2 = self.encoder2(self.pool1(enc1)) 56 | enc3 = self.encoder3(self.pool2(enc2)) 57 | enc4 = self.encoder4(self.pool3(enc3)) 58 | 59 | bottleneck = self.bottleneck(self.pool4(enc4)) 60 | 61 
| dec4 = self.upconv4(bottleneck) 62 | dec4 = torch.cat((dec4, enc4), dim=1) 63 | dec4 = self.decoder4(dec4) 64 | dec3 = self.upconv3(dec4) 65 | dec3 = torch.cat((dec3, enc3), dim=1) 66 | dec3 = self.decoder3(dec3) 67 | dec2 = self.upconv2(dec3) 68 | dec2 = torch.cat((dec2, enc2), dim=1) 69 | dec2 = self.decoder2(dec2) 70 | dec1 = self.upconv1(dec2) 71 | dec1 = torch.cat((dec1, enc1), dim=1) 72 | dec1 = self.decoder1(dec1) 73 | #print(dec1.shape) 74 | return torch.sigmoid(self.conv(dec1)) 75 | 76 | @staticmethod 77 | def _block(in_channels, features, name): 78 | return nn.Sequential( 79 | OrderedDict( 80 | [ 81 | ( 82 | name + "conv1", 83 | nn.Conv2d( 84 | in_channels=in_channels, 85 | out_channels=features, 86 | kernel_size=3, 87 | padding=1, 88 | bias=False, 89 | ), 90 | ), 91 | (name + "norm1", nn.BatchNorm2d(num_features=features)), 92 | (name + "relu1", nn.ReLU(inplace=True)), 93 | ( 94 | name + "conv2", 95 | nn.Conv2d( 96 | in_channels=features, 97 | out_channels=features, 98 | kernel_size=3, 99 | padding=1, 100 | bias=False, 101 | ), 102 | ), 103 | (name + "norm2", nn.BatchNorm2d(num_features=features)), 104 | (name + "relu2", nn.ReLU(inplace=True)), 105 | ] 106 | ) 107 | ) 108 | 109 | 110 | 111 | class predict_img: 112 | def __init__(self,image_code): 113 | self.image_code=image_code 114 | self.test_transforms = A.Compose([ 115 | A.Resize(width = 256, height = 256, p=1.0), 116 | A.Normalize(p=1.0), 117 | ToTensor(), 118 | ]) 119 | 120 | def predict_image(self): 121 | device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') 122 | model=UNet() 123 | print("\n Model found! 
Loading \n") 124 | model.load_state_dict(torch.load("./models/braintumor/weights/model.h5", map_location=lambda storage, loc: storage)) 125 | model=model.to(device) 126 | model.eval() # BatchNorm layers must use running stats at inference time 127 | img=cv2.imdecode(self.image_code,cv2.IMREAD_COLOR) 128 | img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB) # cv2 decodes BGR; convert so matplotlib shows true colors 129 | img_p=self.test_transforms(image=img)['image'] 130 | img_p=img_p.reshape((1,3,256,256)) 131 | with torch.no_grad(): 132 | pred_o=model(img_p.to(device).float()) 133 | pred_o=pred_o.detach().cpu().reshape((256,256)) 134 | 135 | plt.figure() # fresh figure per request so old subplots do not pile up 136 | plt.subplot(1,2,1) 137 | plt.imshow(img) 138 | plt.title('Original Image') 139 | plt.subplot(1,2,2) 140 | plt.imshow(pred_o.numpy(),cmap='gray') 141 | plt.title('Tumor Segmentation') 142 | buf = io.BytesIO() 143 | plt.savefig(buf, format="jpg", dpi=180) 144 | plt.close() 145 | buf.seek(0) 146 | img_base64 = base64.b64encode(buf.getvalue()).decode('ascii') 147 | mime = "image/jpeg" 148 | uri = "data:%s;base64,%s"%(mime, img_base64) 149 | return uri 150 | 151 | 152 | 153 | # if __name__=="__main__": 154 | # c=predict_img("./brain tumor/static/TCGA_HT_8111_19980330_9.tif","","") 155 | # c.predict_image() 156 | 157 | 158 | # #print(pred.min(),pred.max(),pred.mean()) 159 | # #print(pred.shape) 160 | -------------------------------------------------------------------------------- /models/braintumor/src/test.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pandas as pd 3 | import os 4 | import torch 5 | import torch.nn as nn 6 | import glob 7 | import matplotlib.pyplot as plt 8 | import random 9 | #from torch.utils.data import Dataset, DataLoader 10 | import torch.nn.functional as F 11 | import albumentations as A 12 | from albumentations.pytorch import ToTensor, ToTensorV2 13 | from collections import OrderedDict 14 | #from sklearn.model_selection import train_test_split 15 | import cv2 16 | #from torch.optim.lr_scheduler import StepLR 17 |
18 | device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') 19 | print(device) 20 | 21 | PATCH_SIZE=256 22 | transforms = A.Compose([ 23 | A.Resize(width = PATCH_SIZE, height = PATCH_SIZE, p=1.0), 24 | A.HorizontalFlip(p=0.2), 25 | A.VerticalFlip(p=0.2), 26 | A.RandomRotate90(p=0.2), 27 | A.Transpose(p=0.2), 28 | A.ShiftScaleRotate(shift_limit=0.01, scale_limit=0.04, rotate_limit=0, p=0.25), 29 | 30 | 31 | 32 | A.Normalize(p=1.0), 33 | ToTensor(), 34 | ]) 35 | 36 | class UNet(nn.Module): 37 | 38 | def __init__(self, in_channels=3, out_channels=1, init_features=32): 39 | super(UNet, self).__init__() 40 | 41 | features = init_features 42 | self.encoder1 = UNet._block(in_channels, features, name="enc1") 43 | self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2) 44 | self.encoder2 = UNet._block(features, features * 2, name="enc2") 45 | self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2) 46 | self.encoder3 = UNet._block(features * 2, features * 4, name="enc3") 47 | self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2) 48 | self.encoder4 = UNet._block(features * 4, features * 8, name="enc4") 49 | self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2) 50 | 51 | self.bottleneck = UNet._block(features * 8, features * 16, name="bottleneck") 52 | 53 | self.upconv4 = nn.ConvTranspose2d( 54 | features * 16, features * 8, kernel_size=2, stride=2 55 | ) 56 | self.decoder4 = UNet._block((features * 8) * 2, features * 8, name="dec4") 57 | self.upconv3 = nn.ConvTranspose2d( 58 | features * 8, features * 4, kernel_size=2, stride=2 59 | ) 60 | self.decoder3 = UNet._block((features * 4) * 2, features * 4, name="dec3") 61 | self.upconv2 = nn.ConvTranspose2d( 62 | features * 4, features * 2, kernel_size=2, stride=2 63 | ) 64 | self.decoder2 = UNet._block((features * 2) * 2, features * 2, name="dec2") 65 | self.upconv1 = nn.ConvTranspose2d( 66 | features * 2, features, kernel_size=2, stride=2 67 | ) 68 | self.decoder1 = UNet._block(features * 2, features, name="dec1") 69 | 70 | 
self.conv = nn.Conv2d( 71 | in_channels=features, out_channels=out_channels, kernel_size=1 72 | ) 73 | 74 | def forward(self, x): 75 | #print(x.shape) 76 | enc1 = self.encoder1(x) 77 | enc2 = self.encoder2(self.pool1(enc1)) 78 | enc3 = self.encoder3(self.pool2(enc2)) 79 | enc4 = self.encoder4(self.pool3(enc3)) 80 | 81 | bottleneck = self.bottleneck(self.pool4(enc4)) 82 | 83 | dec4 = self.upconv4(bottleneck) 84 | dec4 = torch.cat((dec4, enc4), dim=1) 85 | dec4 = self.decoder4(dec4) 86 | dec3 = self.upconv3(dec4) 87 | dec3 = torch.cat((dec3, enc3), dim=1) 88 | dec3 = self.decoder3(dec3) 89 | dec2 = self.upconv2(dec3) 90 | dec2 = torch.cat((dec2, enc2), dim=1) 91 | dec2 = self.decoder2(dec2) 92 | dec1 = self.upconv1(dec2) 93 | dec1 = torch.cat((dec1, enc1), dim=1) 94 | dec1 = self.decoder1(dec1) 95 | #print(dec1.shape) 96 | return torch.sigmoid(self.conv(dec1)) 97 | 98 | @staticmethod 99 | def _block(in_channels, features, name): 100 | return nn.Sequential( 101 | OrderedDict( 102 | [ 103 | ( 104 | name + "conv1", 105 | nn.Conv2d( 106 | in_channels=in_channels, 107 | out_channels=features, 108 | kernel_size=3, 109 | padding=1, 110 | bias=False, 111 | ), 112 | ), 113 | (name + "norm1", nn.BatchNorm2d(num_features=features)), 114 | (name + "relu1", nn.ReLU(inplace=True)), 115 | ( 116 | name + "conv2", 117 | nn.Conv2d( 118 | in_channels=features, 119 | out_channels=features, 120 | kernel_size=3, 121 | padding=1, 122 | bias=False, 123 | ), 124 | ), 125 | (name + "norm2", nn.BatchNorm2d(num_features=features)), 126 | (name + "relu2", nn.ReLU(inplace=True)), 127 | ] 128 | ) 129 | ) 130 | 131 | m=UNet() 132 | m.load_state_dict(torch.load('./braintumor/weights/model.h5', map_location=lambda storage, loc: storage)) 133 | m=m.to(device) 134 | 135 | test_transforms = A.Compose([ 136 | A.Resize(width = PATCH_SIZE, height = PATCH_SIZE, p=1.0), 137 | A.Normalize(p=1.0), 138 | ToTensor(), 139 | ]) 140 | 141 | 
img=cv2.imread("./braintumor/data/TCGA_CS_4941_19960909/TCGA_CS_4941_19960909_4.tif") 142 | img=test_transforms(image=img)['image'] 143 | 144 | img=img.reshape((1,3,256,256)) 145 | pred_o=m(img.float()) 146 | 147 | # pred_o=pred_o.to('cpu').detach().squeeze(0).numpy().squeeze(0) 148 | pred_o=pred_o.to('cpu').detach() 149 | pred_o=pred_o.reshape((256,256)) 150 | plt.imshow(pred_o,cmap='gray') 151 | plt.show() 152 | -------------------------------------------------------------------------------- /models/braintumor/src/train.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pandas as pd 3 | import os 4 | import torch 5 | import torch.nn as nn 6 | import matplotlib.pyplot as plt 7 | import torch.nn.functional as F 8 | import albumentations as A 9 | from albumentations.pytorch import ToTensor, ToTensorV2 10 | from sklearn.model_selection import train_test_split 11 | import cv2 12 | from torch.optim.lr_scheduler import StepLR 13 | # custom packages 14 | import config 15 | import unet_arch 16 | import data_load 17 | import dataset_class 18 | import plot_everything 19 | import dice_loss 20 | from torch.utils.data import DataLoader # needed for the loaders built below 21 | print("Training ") 22 | df=data_load.load_images_in_df() 23 | device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') 24 | 25 | 26 | transforms = A.Compose([ 27 | A.Resize(width = config.IMAGE_SIZE, height = config.IMAGE_SIZE, p=1.0), 28 | A.HorizontalFlip(p=0.2), 29 | A.VerticalFlip(p=0.2), 30 | A.RandomRotate90(p=0.2), 31 | A.Transpose(p=0.2), 32 | A.ShiftScaleRotate(shift_limit=0.01, scale_limit=0.04, rotate_limit=0, p=0.25), 33 | A.Normalize(p=1.0), 34 | ToTensor(), 35 | ]) 36 | 37 | print("Splitting data") 38 | # Split df into train_df and val_df 39 | train_df, val_df = train_test_split(df, stratify=df.diagnosis, test_size=0.1) 40 | train_df = train_df.reset_index(drop=True) 41 | val_df = val_df.reset_index(drop=True) 42 | 43 | # Split train_df into train_df and test_df 44 | train_df, test_df
= train_test_split(train_df, stratify=train_df.diagnosis, test_size=0.15) 45 | train_df = train_df.reset_index(drop=True) 46 | 47 | #train_df = train_df[:1000] 48 | print(f"Train: {train_df.shape} \nVal: {val_df.shape} \nTest: {test_df.shape}") 49 | 50 | train_dataset = dataset_class.BrainMriDataset(df=train_df, transforms=transforms) 51 | train_dataloader = DataLoader(train_dataset, batch_size=26, num_workers=4, shuffle=True) 52 | 53 | # val 54 | val_dataset = dataset_class.BrainMriDataset(df=val_df, transforms=transforms) 55 | val_dataloader = DataLoader(val_dataset, batch_size=26, num_workers=4, shuffle=True) 56 | 57 | #test 58 | test_dataset = dataset_class.BrainMriDataset(df=test_df, transforms=transforms) 59 | test_dataloader = DataLoader(test_dataset, batch_size=26, num_workers=4, shuffle=True) 60 | 61 | 62 | 63 | images, masks = next(iter(train_dataloader)) 64 | plot_everything.show_aug(images) 65 | plot_everything.show_aug(masks, image=False) 66 | 67 | model=unet_arch.UNet() 68 | 69 | def train(model,optimizer,epoch,lr_schedular,train_loader): 70 | model.train() 71 | loss_collection=0 72 | total_size=0 73 | for i,(img,lab) in enumerate(train_loader): 74 | optimizer.zero_grad() 75 | img,lab=img.to(device),lab.to(device) 76 | pred=model(img) 77 | loss=dice_loss.bce_dice_loss(pred,lab) 78 | loss_collection+=loss.item() 79 | total_size += img.size(0) 80 | loss.backward() 81 | optimizer.step() 82 | if i%50==0: 83 | lr_schedular.step() 84 | print("\n Learning rate is {}".format(lr_schedular.get_last_lr())) 85 | print('\nTrain Epoch: {} [{}/{} ({:.0f}%)]\tAverage loss: {:.6f}'.format( 86 | epoch, i * len(img), len(train_loader.dataset), 87 | 100. 
* i / len(train_loader), loss_collection / total_size)) 88 | return loss_collection 89 | 90 | best_loss=10 91 | 92 | def test(model,test_loader,tst=None): 93 | global best_loss 94 | model.eval() 95 | test_loss = 0 96 | with torch.no_grad(): 97 | for data, target in test_loader: 98 | data, target = data.to(device), target.to(device) 99 | output = model(data) 100 | test_loss += dice_loss.bce_dice_loss(output, target).item() 101 | test_loss /= len(test_loader.dataset) 102 | if test_loss < best_loss: 103 | best_loss = test_loss 104 | torch.save(model.state_dict(), "./braintumor/weights/model.h5") # assumed checkpoint path; the tail of this file was not recovered 105 | return test_loss -------------------------------------------------------------------------------- /models/braintumor/templates/btindex.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | Brain Tumor 14 | 15 | 16 | 17 | 18 | 19 |
20 |

Brain Tumor Segmentation

21 |
22 |
23 |
24 | 25 | 26 |
27 |


28 | 29 |
30 |
31 | 32 | 33 |
34 | 35 | {% if image_loc %} 36 | 37 |
38 | 39 | 40 | {% endif %} 41 |
42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | -------------------------------------------------------------------------------- /models/braintumor/weights/model.h5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/braintumor/weights/model.h5 -------------------------------------------------------------------------------- /models/cataract/README.md: -------------------------------------------------------------------------------- 1 | # Cataract Detection 2 | [![Documentation Status](https://readthedocs.org/projects/fairscale/badge/?version=latest)](https://fairscale.readthedocs.io/en/latest/?badge=latest) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/facebookresearch/fairscale/blob/master/CONTRIBUTING.md) 3 | 4 | A cataract is an opacification of the lens of the eye which leads to a decrease in vision. Cataracts often develop slowly and can affect one or both eyes. When a cataract 5 | interferes with someone's usual activities, the cloudy lens can be replaced with a clear, artificial lens. 6 | 7 | Figure 1.  Cataract/Normal 8 | 9 | ## Dataset 10 | Ocular Disease Intelligent Recognition (ODIR) is a structured ophthalmic database of 5,000 patients with age, color fundus photographs from left and right eyes, and diagnostic keywords from doctors. 11 | 12 | This dataset is meant to represent a "real-life" set of patient information collected by Shanggong Medical Technology Co., Ltd. from different hospitals/medical centers in China. 13 | 14 | Figure 2. Illustrative Examples of Retina Images in Patients with Cataract 15 | 16 | ## Introduction 17 | 18 | #### Directory Layout 19 | .
20 | ├── data # data folder is hidden; the path is listed in the .gitignore file 21 | │ ├── Images_folder 22 | │ ├── data.csv 23 | ├── src 24 | │ ├── config.py # contains all the configuration 25 | │ ├── dataset_class.py # creates the dataset 26 | │ ├── predict.py # end-to-end prediction file 27 | │ ├── preprocess.py # preprocesses the dataset 28 | │ ├── train.py # trains the model 29 | ├── static 30 | │ ├── eye.png # input image 31 | ├── templates 32 | │ ├── catindex.html # html file for the UI 33 | ├── weight 34 | │ ├── cat.h5 # trained weights 35 | │ ├── cat1.h5 # trained weights 36 | ├── catapp.py # web app file 37 | 38 | #### Content 39 | | Directory | Info | 40 | |-----------|--------------| 41 | | `src` | Contains all Python files | 42 | | `templates` | Contains HTML file | 43 | | `static` | Contains css, js files and images | 44 | | `data` | Contains [data](https://www.kaggle.com/andrewmvd/ocular-disease-recognition-odir5k) which is hidden | 45 | | `weight` | Contains trained weights | 46 | 47 | ## Evaluation 48 | The proposed approach was evaluated using Precision, Recall, Accuracy, and F1 Score. Our source code is freely available here. 49 | 50 | Figure 3. Evaluation of Model 51 | 52 | ## Prerequisites 53 | * Python 3.4+ 54 | * PyTorch and its dependencies 55 | 56 | ## How to Install and Run 57 | * Clone this repository and run in a command prompt 58 | ```bash 59 | pip install -r requirement.txt 60 | ``` 61 | * Run this to start the server 62 | ```bash 63 | python catapp.py 64 | ``` 65 | * Upload a `Retina` image and predict whether the user has a `cataract` or not 66 | Figure 4.  Cataract/Normal Prediction 67 | 68 | 69 | ## Train your own model* 70 | * For training, run `train.py` in the src directory. 71 | * To change epochs, data directory, random seed, learning rate, etc., edit `config.py`. 72 | 73 | > Note : 74 | > * : This project purely utilizes PyTorch; it would be appreciated to use PyTorch only.
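The Evaluation section above reports Precision, Recall, Accuracy, and F1 Score; all four fall out of a binary confusion matrix. A minimal pure-Python sketch of that arithmetic (the repository itself computes these with scikit-learn in `src/train.py`; the helper name here is illustrative):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall and F1 for 0/1 labels (1 = cataract)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

# toy labels/predictions: 2 TP, 2 TN, 1 FP, 1 FN
acc, prec, rec, f1 = binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
```

With these toy labels the helper gives accuracy 4/6 and precision = recall = F1 = 2/3, matching the confusion-matrix definitions used by scikit-learn.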
75 | 76 | -------------------------------------------------------------------------------- /models/cataract/__pycache__/catapp.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/__pycache__/catapp.cpython-36.pyc -------------------------------------------------------------------------------- /models/cataract/__pycache__/catapp.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/__pycache__/catapp.cpython-37.pyc -------------------------------------------------------------------------------- /models/cataract/__pycache__/catapp.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/__pycache__/catapp.cpython-38.pyc -------------------------------------------------------------------------------- /models/cataract/catapp.py: -------------------------------------------------------------------------------- 1 | import os 2 | import shutil 3 | from flask import Flask,Blueprint,request,render_template,url_for 4 | import torch 5 | import torch.nn as nn 6 | from flask_cors import CORS 7 | import base64 8 | import numpy as np 9 | from io import BytesIO 10 | # custom package 11 | #import src.predict as predict 12 | import models.cataract.src.predict as predict 13 | 14 | # catapp=Flask(__name__) 15 | catapp=Blueprint("catapp",__name__,template_folder="templates",static_folder="static") 16 | #CORS(catapp) 17 | 18 | 19 | @catapp.route("/",methods=["GET","POST"]) 20 | def index(): 21 | if request.method=="POST": 22 | image_file=request.files["file"] 23 | if image_file: 24 | npimg = 
np.frombuffer(image_file.read(),np.uint8) 25 | classifier=predict.predict_img(npimg) 26 | result,img=classifier.predict_cataract() 27 | byteIO = BytesIO() 28 | img.save(byteIO, format="JPEG") 29 | img_base64 = base64.b64encode(byteIO.getvalue()).decode('ascii') 30 | mime = "image/jpeg" 31 | uri = "data:%s;base64,%s"%(mime, img_base64) 32 | 33 | return render_template('/catindex.html',resultt=result,image_loc=uri) 34 | return render_template('/catindex.html',resultt=None,image_loc=None) 35 | 36 | # if __name__ == '__main__': 37 | # catapp.run(debug=True,port=5000) 38 | -------------------------------------------------------------------------------- /models/cataract/readme_images/fig1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/readme_images/fig1.jpg -------------------------------------------------------------------------------- /models/cataract/readme_images/fig2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/readme_images/fig2.png -------------------------------------------------------------------------------- /models/cataract/readme_images/fig4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/readme_images/fig4.png -------------------------------------------------------------------------------- /models/cataract/readme_images/res.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/readme_images/res.png
-------------------------------------------------------------------------------- /models/cataract/src/__pycache__/config.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/src/__pycache__/config.cpython-37.pyc -------------------------------------------------------------------------------- /models/cataract/src/__pycache__/dataset_class.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/src/__pycache__/dataset_class.cpython-37.pyc -------------------------------------------------------------------------------- /models/cataract/src/__pycache__/predict.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/src/__pycache__/predict.cpython-36.pyc -------------------------------------------------------------------------------- /models/cataract/src/__pycache__/predict.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/src/__pycache__/predict.cpython-37.pyc -------------------------------------------------------------------------------- /models/cataract/src/__pycache__/predict.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/src/__pycache__/predict.cpython-38.pyc -------------------------------------------------------------------------------- 
/models/cataract/src/__pycache__/preprocess.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/src/__pycache__/preprocess.cpython-37.pyc -------------------------------------------------------------------------------- /models/cataract/src/config.py: -------------------------------------------------------------------------------- 1 | 2 | oc_path="models/cataract/data/full_df.csv" 3 | oc_img_path="models/cataract/data/ODIR-5K/ODIR-5K/Training Images" 4 | IMG_SIZE=256 5 | BATCH=64 6 | EPOCHS=20 7 | WEIGHT="models/cataract/weight/cat.h5" -------------------------------------------------------------------------------- /models/cataract/src/dataset_class.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import pandas as pd 3 | import numpy as np 4 | import cv2 5 | from torchvision import transforms 6 | 7 | class cat_dataset(torch.utils.data.Dataset): 8 | def __init__(self,df,transforms=None): 9 | self.df=df 10 | self.transforms=transforms 11 | def __len__(self): 12 | return len(self.df) 13 | def __getitem__(self,idx): 14 | 15 | img=cv2.imread(self.df.Path.iloc[idx]) 16 | img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB) 17 | if self.transforms: 18 | img=self.transforms(img) 19 | label=self.df.cataract.iloc[idx] 20 | return (img,label) -------------------------------------------------------------------------------- /models/cataract/src/predict.py: -------------------------------------------------------------------------------- 1 | # importing libraries 2 | import torch 3 | import numpy as np 4 | from torchvision import models,transforms 5 | import torch.nn as nn 6 | import cv2 7 | import base64 8 | from PIL import Image 9 | from io import BytesIO 10 | 11 | device='cuda:0' if torch.cuda.is_available() else 'cpu' 12 | IMG_SIZE=256 13 | 14 | 
model=models.densenet121(pretrained=True) 15 | model.classifier=nn.Sequential(nn.Linear(1024,2)) 16 | model=model.to(device) 17 | 18 | 19 | 20 | if torch.cuda.is_available()==False: 21 | model.load_state_dict(torch.load("models/cataract/weight/cat1.h5", map_location=lambda storage, loc: storage)) 22 | else: 23 | model.load_state_dict(torch.load("models/cataract/weight/cat.h5")) 24 | 25 | transform=transforms.Compose([ 26 | transforms.ToPILImage(), 27 | transforms.Resize((IMG_SIZE,IMG_SIZE)), 28 | transforms.ToTensor(), 29 | transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) 30 | ]) 31 | 32 | 33 | 34 | class predict_img(object): 35 | def __init__(self,image_code): 36 | self.image_code=image_code 37 | 38 | def predict_cataract(self): 39 | img=cv2.imdecode(self.image_code,cv2.IMREAD_COLOR) 40 | imgo = Image.fromarray(img.astype("uint8")) 41 | img=np.array(imgo) 42 | img=transform(img) 43 | try: 44 | img=img.reshape((1,img.shape[0],img.shape[1],img.shape[2])) 45 | except: 46 | img=img.reshape((1,img.shape[0],img.shape[1])) 47 | model.eval() 48 | with torch.no_grad(): 49 | pred=model(img.to(device)) 50 | _,predicted = torch.max(pred.data, 1) 51 | 52 | predicted="Cataract" if predicted==1 else "Normal" 53 | return predicted,imgo 54 | 55 | 56 | # if __name__=="__main__": 57 | 58 | # m=predict_img("./models/cataract/data/ODIR-5K/ODIR-5K/Training Images/2111_left.jpg") 59 | # print(m.predict_cataract()) 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | -------------------------------------------------------------------------------- /models/cataract/src/preprocess.py: -------------------------------------------------------------------------------- 1 | # importing libraries 2 | import pandas as pd 3 | 4 | # custom libraries 5 | import config 6 | 7 | def cataract_or_not(txt): 8 | if "cataract" in txt: 9 | return 1 10 | else: 11 | return 0 12 | 13 | def downsample(df): 14 | df = pd.concat([ 15 | df.query('cataract==1'), 16 | df.query('cataract==0').sample(sum(df['cataract']), 17 | 
random_state=42) 18 | ]) 19 | return df 20 | 21 | 22 | def preprocess_me(path): 23 | """path: path to the labels CSV file""" 24 | df=pd.read_csv(path) 25 | df['left_eye_cataract']=df["Left-Diagnostic Keywords"].apply(lambda x:cataract_or_not(x)) 26 | df['right_eye_cataract']=df["Right-Diagnostic Keywords"].apply(lambda x:cataract_or_not(x)) 27 | left_df=df.loc[:,['Left-Fundus','left_eye_cataract']].rename(columns={'left_eye_cataract':'cataract'}) 28 | left_df['Path']=config.oc_img_path+"/"+left_df['Left-Fundus'] 29 | left_df=left_df.drop(columns=['Left-Fundus']) 30 | 31 | right_df=df.loc[:,['Right-Fundus','right_eye_cataract']].rename(columns={'right_eye_cataract':'cataract'}) 32 | right_df['Path']=config.oc_img_path+"/"+right_df['Right-Fundus'] 33 | right_df=right_df.drop(columns=['Right-Fundus']) 34 | 35 | ## if your data is only slightly imbalanced, skip this step 36 | left_df = downsample(left_df) 37 | right_df = downsample(right_df) 38 | 39 | train_df = pd.concat([left_df, right_df], ignore_index=True) 40 | 41 | # shuffle 42 | train_df=train_df.sample(frac=1.0) 43 | 44 | return train_df 45 | 46 | 47 | 48 | -------------------------------------------------------------------------------- /models/cataract/src/train.py: -------------------------------------------------------------------------------- 1 | # import packages 2 | import numpy as np 3 | import pandas as pd 4 | import matplotlib.pyplot as plt 5 | import glob 6 | import gc 7 | import os 8 | import cv2 9 | from sklearn.metrics import precision_score, recall_score,f1_score, confusion_matrix 10 | from torch.utils.data import DataLoader,SubsetRandomSampler 11 | from sklearn.model_selection import train_test_split 12 | import torch 13 | import torch.nn as nn 14 | from torchvision import transforms,models 15 | from tqdm import tqdm 16 | from torch.optim.lr_scheduler import ReduceLROnPlateau 17 | 18 | # import custom packages 19 | import config 20 | import preprocess 21 | import dataset_class 22 | 23 | device='cuda:0' if
torch.cuda.is_available() else 'cpu' 24 | 25 | 26 | train_df=preprocess.preprocess_me(config.oc_path) 27 | 28 | train_df,test_df=train_test_split(train_df,test_size=0.12,shuffle=True,stratify=train_df.cataract) 29 | 30 | 31 | transform=transforms.Compose([ 32 | transforms.ToPILImage(), 33 | transforms.Resize((config.IMG_SIZE,config.IMG_SIZE)), 34 | transforms.ToTensor(), 35 | transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) 36 | ]) 37 | 38 | train_set=dataset_class.cat_dataset(train_df,transforms=transform) 39 | test_set=dataset_class.cat_dataset(test_df,transforms=transform) 40 | 41 | # per-class sample counts (index 0 = normal, 1 = cataract) for weighted sampling 42 | class_count_samples = train_df.cataract.value_counts().sort_index().tolist() 43 | class_weights = 1./torch.Tensor(class_count_samples) 44 | train_target=list(train_df.cataract) 45 | train_samples_weight = [class_weights[class_id] for class_id in train_target] 46 | test_target=list(test_df.cataract) 47 | test_samples_weight = [class_weights[class_id] for class_id in test_target] 48 | 49 | train_sampler = torch.utils.data.sampler.WeightedRandomSampler(train_samples_weight, len(train_df)) 50 | test_sampler = torch.utils.data.sampler.WeightedRandomSampler(test_samples_weight, len(test_df)) 51 | 52 | train_loader = torch.utils.data.DataLoader(train_set, batch_size=config.BATCH,sampler=train_sampler) 53 | val_loader = torch.utils.data.DataLoader(test_set, batch_size=config.BATCH, sampler=test_sampler) 54 | 55 | 56 | model=models.densenet121(pretrained=True) 57 | model.classifier=nn.Sequential(nn.Linear(1024,2)) 58 | model=model.to(device) 59 | 60 | crit=nn.CrossEntropyLoss() 61 | optimizer=torch.optim.Adam(model.parameters()) 62 | sch=ReduceLROnPlateau(optimizer, mode='max', factor=0.1, patience=0, verbose=True) 63 | 64 | def train(model, epochs, optimizer, train_loader, criterion,test_loader,sch=None): 65 | for epoch in range(1,epochs+1): 66 | # train 67 | total_loss = 0 68 | total=0 69 | model.train() 70 | correct=0 71 | for batch_idx, (data, target) in enumerate(train_loader): 72 | 73 | data, target = data.to(device), target.to(device) 74 | optimizer.zero_grad() 75
| output = model(data) 76 | loss = criterion(output, target) 77 | total_loss += loss.item() 78 | # acc+=binary_acc(output.view(-1),target) 79 | _, predicted = torch.max(output.data, 1) 80 | correct += (predicted == target).sum().item() 81 | total += target.size(0) 82 | loss.backward() 83 | optimizer.step() 84 | acc = 100 * correct / total 85 | print('Train Epoch: {} [{}/{} ({:.0f}%)]\tAverage loss: {:.6f}'.format( 86 | epoch, batch_idx * len(data), len(train_loader.dataset), 87 | 100. * batch_idx / len(train_loader), total_loss /len(train_loader))) 88 | #print('Train Accuracy for epoch {} is {} \n'.format(epoch,100. *correct/len(train_loader.dataset))) 89 | print("Train acc \n",acc) 90 | # test 91 | model.eval() 92 | test_loss = 0 93 | correct=0 94 | best_acc=0 95 | total=0 96 | with torch.no_grad(): 97 | for data, target in test_loader: 98 | data, target = data.to(device), target.to(device) 99 | output = model(data) 100 | test_loss += criterion(output, target).item() 101 | _, predicted = torch.max(output.data, 1) 102 | correct += (predicted == target).sum().item() 103 | total += target.size(0) 104 | acc = 100 * correct / total 105 | if best_acc < acc: 106 | best_acc = acc 107 | torch.save(model.state_dict(), config.WEIGHT) # assumed checkpoint save; the tail of this file was not recovered 108 | if sch is not None: 109 | sch.step(acc) # ReduceLROnPlateau in 'max' mode tracks validation accuracy -------------------------------------------------------------------------------- /models/cataract/templates/catindex.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | Cataract 14 | 15 | 16 | 17 | 18 |
19 |

Cataract Prediction using Eye Fundus Images

20 |
21 |
22 |
23 | 24 | 25 |
26 |


27 | 28 |
29 |
30 | 31 | 32 |
33 | {% if resultt %} 34 | 35 |

{{resultt}}

36 |
37 | 38 | {% endif %} 39 |
40 | 41 | 42 | 43 | 44 | 45 | -------------------------------------------------------------------------------- /models/cataract/weight/cat.h5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/weight/cat.h5 -------------------------------------------------------------------------------- /models/cataract/weight/cat1.h5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/cataract/weight/cat1.h5 -------------------------------------------------------------------------------- /models/pneumonia/README.md: -------------------------------------------------------------------------------- 1 | # Pneumonia Detection 2 | [![Documentation Status](https://readthedocs.org/projects/fairscale/badge/?version=latest)](https://fairscale.readthedocs.io/en/latest/?badge=latest) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/facebookresearch/fairscale/blob/master/CONTRIBUTING.md) 3 | 4 | 10 | Pneumonia is the leading cause of death among young children and one of the top causes of mortality worldwide. Pneumonia detection is usually performed through examination of chest X-ray radiographs by highly trained specialists. This process is tedious and often leads to disagreement between radiologists. Computer-aided diagnosis systems have shown potential for improving diagnostic accuracy. In this work, we develop a computational approach to pneumonia region detection based on single-shot detectors, squeeze-and-excitation deep convolutional neural networks, and augmentations. 11 | 12 | Figure 1.  Pneumonia/Normal 13 | 14 | ## Dataset 15 | The dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal).
There are 5,863 X-ray images (JPEG) and 2 categories (Pneumonia/Normal). 16 | 17 | The normal chest X-ray (left panel) depicts clear lungs without any areas of abnormal opacification in the image. Bacterial pneumonia (middle) typically exhibits a focal lobar consolidation, in this case in the right upper lobe (white arrows), whereas viral pneumonia (right) manifests with a more diffuse ‘‘interstitial’’ pattern in both lungs. 18 | 19 | Figure 2. Illustrative Examples of Chest X-Rays in Patients with Pneumonia 20 | 21 | ## Introduction 22 | 23 | #### Directory Layout 24 | . 25 | ├── data # data folder is hidden; the path is provided in the .gitignore file 26 | │ ├── test 27 | │ │ ├── NORMAL 28 | │ │ ├── PNEUMONIA 29 | │ ├── train 30 | │ │ ├── NORMAL 31 | │ │ ├── PNEUMONIA 32 | │ ├── val 33 | │ │ ├── NORMAL 34 | │ │ ├── PNEUMONIA 35 | ├── notebook 36 | │ ├── x-ray-image-classification-using-pytorch.ipynb # notebooks related to `EDA` and `experiment` 37 | ├── src 38 | │ ├── config.py # contains all the configuration 39 | │ ├── plot_me.py # program file for visualisation of the dataset 40 | │ ├── predict.py # end-to-end prediction file 41 | │ ├── train.py # model training 42 | ├── static 43 | │ ├── inputImage.jpg # input image 44 | ├── templates 45 | │ ├── pneindex.html # HTML file for the UI 46 | ├── weights 47 | │ ├── pne.pt # trained weights 48 | ├── pneapp.py # web app file 49 | 50 | 51 | 52 | #### Content 53 | | Directory | Info | 54 | |-----------|--------------| 55 | | `notebooks` | Contains all Jupyter notebooks related to `EDA` and `experiment` | 56 | | `src` | Contains all Python files | 57 | | `templates` | Contains HTML file | 58 | | `static` | Contains CSS, JS files and images | 59 | | `data` | Contains [data](https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia) which is hidden | 60 | | `weights` | Contains trained weights | 61 | 62 | ## Evaluation 63 | The proposed approach was evaluated using Precision, Recall, Accuracy, and F1 score.
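The four metrics named above can all be computed directly from confusion-matrix counts (`tn`, `fp`, `fn`, `tp`, the same quantities `train.py` extracts with `cm.ravel()`). A minimal sketch, using illustrative placeholder counts rather than the model's actual results:

```python
# Compute Accuracy, Precision, Recall, and F1 from confusion-matrix counts.
def classification_metrics(tn, fp, fn, tp):
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total          # fraction of all predictions that are correct
    precision = tp / (tp + fp)            # of predicted positives, how many are real
    recall = tp / (tp + fn)               # of real positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Placeholder counts for illustration only (not the trained model's numbers).
acc, prec, rec, f1 = classification_metrics(tn=180, fp=20, fn=10, tp=390)
print(f"Accuracy={acc:.3f} Precision={prec:.3f} Recall={rec:.3f} F1={f1:.3f}")
```

For a medical screening task like this one, recall is usually the metric to watch: a false negative (a missed pneumonia case) is costlier than a false positive.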
Our source code is freely available here. 64 | Figure 3. Evaluation of Model 65 | 66 | ## Prerequisites 67 | * Python 3.4+ 68 | * PyTorch and its dependencies 69 | 70 | ## How to Install and Run 71 | * Clone this repository and run in the command prompt 72 | ```bash 73 | pip install -r requirement.txt 74 | ``` 75 | * Run this to start the server 76 | ```bash 77 | python pneapp.py 78 | ``` 79 | * Upload a chest `X-Ray` image and predict whether the user has `pneumonia` or not 80 | 81 |  Prediction of Model 82 | 83 | 84 | ## Train your own model* 85 | * For training, run `train.py` in the `src` directory. 86 | * If you want to change the epochs, data directory, random seed, learning rate, etc., change them in `config.py`. 87 | 88 | > Note : 89 | > *: This project purely utilizes PyTorch; it would be appreciated if contributions use PyTorch only. 90 | -------------------------------------------------------------------------------- /models/pneumonia/__pycache__/pneapp.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/__pycache__/pneapp.cpython-36.pyc -------------------------------------------------------------------------------- /models/pneumonia/__pycache__/pneapp.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/__pycache__/pneapp.cpython-37.pyc -------------------------------------------------------------------------------- /models/pneumonia/__pycache__/pneapp.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/__pycache__/pneapp.cpython-38.pyc --------------------------------------------------------------------------------
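The upload-and-predict flow described above hands the image back to the browser as a base64 data URI, the same pattern `pneapp.py` uses before embedding the result in the HTML template. The encoding step is self-contained and can be sketched as:

```python
import base64

# Encode raw image bytes as a data URI so the browser can render the image
# inline, without the server having to write a temporary file.
def to_data_uri(raw_bytes, mime="image/jpeg"):
    b64 = base64.b64encode(raw_bytes).decode("ascii")
    return "data:%s;base64,%s" % (mime, b64)

# Placeholder bytes standing in for an uploaded JPEG.
uri = to_data_uri(b"\xff\xd8\xff\xe0fakejpegbytes")
print(uri[:40])
```

In the app itself, the bytes come from `BytesIO` after `img.save(byteIO, format="JPEG")`, and the resulting URI is passed to the template as `image_loc`.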
/models/pneumonia/data/test/NORMAL/IM-0001-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/test/NORMAL/IM-0001-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/test/NORMAL/IM-0007-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/test/NORMAL/IM-0007-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/test/NORMAL/IM-0129-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/test/NORMAL/IM-0129-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/test/NORMAL/NORMAL2-IM-1436-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/test/NORMAL/NORMAL2-IM-1436-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/test/PNEUMONIA/person1000_virus_1681.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/test/PNEUMONIA/person1000_virus_1681.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/test/PNEUMONIA/person100_bacteria_475.jpeg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/test/PNEUMONIA/person100_bacteria_475.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/test/PNEUMONIA/person100_bacteria_481.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/test/PNEUMONIA/person100_bacteria_481.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/test/PNEUMONIA/person109_bacteria_528.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/test/PNEUMONIA/person109_bacteria_528.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/train/NORMAL/IM-0001-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/train/NORMAL/IM-0001-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/train/NORMAL/IM-0007-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/train/NORMAL/IM-0007-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/train/NORMAL/IM-0129-0001.jpeg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/train/NORMAL/IM-0129-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/train/NORMAL/NORMAL2-IM-1436-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/train/NORMAL/NORMAL2-IM-1436-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/train/PNEUMONIA/person1000_virus_1681.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/train/PNEUMONIA/person1000_virus_1681.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/train/PNEUMONIA/person100_bacteria_475.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/train/PNEUMONIA/person100_bacteria_475.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/train/PNEUMONIA/person100_bacteria_481.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/train/PNEUMONIA/person100_bacteria_481.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/train/PNEUMONIA/person109_bacteria_528.jpeg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/train/PNEUMONIA/person109_bacteria_528.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/val/NORMAL/IM-0001-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/val/NORMAL/IM-0001-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/val/NORMAL/IM-0007-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/val/NORMAL/IM-0007-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/val/NORMAL/IM-0129-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/val/NORMAL/IM-0129-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/val/NORMAL/NORMAL2-IM-1436-0001.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/val/NORMAL/NORMAL2-IM-1436-0001.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/val/PNEUMONIA/person1000_virus_1681.jpeg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/val/PNEUMONIA/person1000_virus_1681.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/val/PNEUMONIA/person100_bacteria_475.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/val/PNEUMONIA/person100_bacteria_475.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/val/PNEUMONIA/person100_bacteria_481.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/val/PNEUMONIA/person100_bacteria_481.jpeg -------------------------------------------------------------------------------- /models/pneumonia/data/val/PNEUMONIA/person109_bacteria_528.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/data/val/PNEUMONIA/person109_bacteria_528.jpeg -------------------------------------------------------------------------------- /models/pneumonia/pneapp.py: -------------------------------------------------------------------------------- 1 | import os 2 | import shutil 3 | from flask import Flask,request,render_template,Blueprint 4 | import torch 5 | import torch.nn as nn 6 | from flask_cors import CORS 7 | import base64 8 | import numpy as np 9 | from io import BytesIO 10 | # custom package 11 | #import src.predict as predict 12 | import models.pneumonia.src.predict as predict 13 | #pneapp = Flask(__name__) 14 | pneapp=Blueprint("pneapp",__name__,template_folder="templates",static_folder="static") 15 
| CORS(pneapp) 16 | 17 | @pneapp.route("/", methods=["GET","POST"]) 18 | def index(): 19 | if request.method=="POST": 20 | image_file=request.files["file"] 21 | if image_file: 22 | npimg = np.frombuffer(image_file.read(), np.uint8) # decode uploaded bytes into a uint8 array 23 | classifier=predict.predict_img(npimg) 24 | result,img=classifier.predict_pneumonia() 25 | byteIO = BytesIO() 26 | img.save(byteIO, format="JPEG") 27 | img_base64 = base64.b64encode(byteIO.getvalue()).decode('ascii') 28 | mime = "image/jpeg" 29 | uri = "data:%s;base64,%s"%(mime, img_base64) 30 | 31 | return render_template('/pneindex.html',resultt=result,image_loc=uri) 32 | return render_template('/pneindex.html',resultt=None,image_loc=None) 33 | 34 | # if __name__ == '__main__': 35 | # pneapp.run(debug=True,port=3000) -------------------------------------------------------------------------------- /models/pneumonia/readme_images/fig1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/readme_images/fig1.jpg -------------------------------------------------------------------------------- /models/pneumonia/readme_images/fig2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/readme_images/fig2.jpg -------------------------------------------------------------------------------- /models/pneumonia/readme_images/fig3.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/readme_images/fig3.jpeg -------------------------------------------------------------------------------- /models/pneumonia/readme_images/fig4.jpg: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/readme_images/fig4.jpg -------------------------------------------------------------------------------- /models/pneumonia/src/__pycache__/predict.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/src/__pycache__/predict.cpython-36.pyc -------------------------------------------------------------------------------- /models/pneumonia/src/__pycache__/predict.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/src/__pycache__/predict.cpython-37.pyc -------------------------------------------------------------------------------- /models/pneumonia/src/__pycache__/predict.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/src/__pycache__/predict.cpython-38.pyc -------------------------------------------------------------------------------- /models/pneumonia/src/config.py: -------------------------------------------------------------------------------- 1 | EPOCHS = 30 2 | data_dir = "./models/pneumonia/data/" 3 | TEST = 'test' 4 | TRAIN = 'train' 5 | VAL = 'val' 6 | WEIGHT = "models/pneumonia/weights/pne.pt" # trained weights live in the weights/ directory -------------------------------------------------------------------------------- /models/pneumonia/src/plot_me.py: -------------------------------------------------------------------------------- 1 | # import packages 2 | import matplotlib.pyplot as plt 3 | import torchvision 4 | import numpy as np 5 | import seaborn as sns 6 | 7 | def imshow(inp, title=None): 8 | inp =
inp.numpy().transpose((1, 2, 0)) 9 | mean = np.array([0.485, 0.456, 0.406]) 10 | std = np.array([0.229, 0.224, 0.225]) 11 | inp = std * inp + mean 12 | inp = np.clip(inp, 0, 1) 13 | plt.imshow(inp) 14 | if title is not None: 15 | plt.title(title) 16 | plt.pause(0.001) 17 | 18 | def matrix(cm): 19 | ax = sns.heatmap(cm, annot=True, fmt="d") -------------------------------------------------------------------------------- /models/pneumonia/src/predict.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import numpy as np 3 | from torchvision import models,transforms 4 | import torch.nn as nn 5 | import cv2 6 | import base64 7 | from PIL import Image 8 | from io import BytesIO 9 | 10 | device='cuda:0' if torch.cuda.is_available() else 'cpu' 11 | 12 | model_pre = models.densenet121() 13 | for param in model_pre.features.parameters(): 14 | param.requires_grad = False # freeze the feature extractor 15 | 16 | num_features = model_pre.classifier.in_features 17 | features = list(model_pre.classifier.children())[:-1] 18 | features.extend([nn.Linear(num_features, 2)]) 19 | model_pre.classifier = nn.Sequential(*features) 20 | model_pre = model_pre.to(device) 21 | 22 | if not torch.cuda.is_available(): 23 | model_pre.load_state_dict(torch.load("models/pneumonia/weights/pne.pt", map_location=lambda storage, loc: storage)) 24 | else: 25 | model_pre.load_state_dict(torch.load("models/pneumonia/weights/pne.pt")) 26 | 27 | transform = transforms.Compose([ 28 | transforms.ToPILImage(), 29 | transforms.Resize(256), 30 | transforms.CenterCrop(224), 31 | transforms.ToTensor(), 32 | transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), 33 | ]) 34 | 35 | class predict_img(object): 36 | def __init__(self,image_code): 37 | self.image_code=image_code 38 | 39 | def predict_pneumonia(self): 40 | img=cv2.imdecode(self.image_code,cv2.IMREAD_COLOR) 41 | imgo = Image.fromarray(img.astype("uint8")) 42 | img=np.array(imgo) 43 | img=transform(img) 44 | try: 45 |
img=img.reshape((1,img.shape[0],img.shape[1],img.shape[2])) 46 | except: 47 | img=img.reshape((1,img.shape[0],img.shape[1])) 48 | model_pre.eval() 49 | with torch.no_grad(): 50 | pred=model_pre(img.to(device)) 51 | 52 | _,predicted = torch.max(pred.data, 1) 53 | 54 | predicted="Pneumonia" if predicted==1 else "Normal" 55 | return predicted,imgo 56 | 57 | # if __name__=="__main__": 58 | 59 | # m=predict_img("./models/pneumonia/data/test/NORMAL/IM-0001-0001.jpeg") 60 | # print(m.predict_pneumonia()) 61 | # m=predict_img("./models/pneumonia/data/test/PNEUMONIA/person100_bacteria_481.jpeg") 62 | # print(m.predict_pneumonia()) 63 | -------------------------------------------------------------------------------- /models/pneumonia/src/train.py: -------------------------------------------------------------------------------- 1 | # import packages 2 | import numpy as np 3 | import pandas as pd 4 | import os 5 | import skimage 6 | from skimage import io, transform 7 | from sklearn.metrics import confusion_matrix 8 | import torch 9 | import torch.nn as nn 10 | import torch.optim as optim 11 | import torchvision 12 | from torchvision import datasets, models, transforms 13 | # custom packages 14 | import config 15 | import plot_me 16 | 17 | device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 18 | 19 | def data_transforms(): 20 | transform = transforms.Compose([ 21 | transforms.Resize(256), 22 | transforms.CenterCrop(224), 23 | transforms.ToTensor(), 24 | transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), 25 | ]) 26 | 27 | return transform 28 | 29 | image_datasets = {x: datasets.ImageFolder(os.path.join(config.data_dir, x), data_transforms()) 30 | for x in [config.TRAIN, config.VAL, config.TEST]} 31 | 32 | dataloaders = {config.TRAIN: torch.utils.data.DataLoader(image_datasets[config.TRAIN], batch_size = 4, shuffle=True), 33 | config.VAL: torch.utils.data.DataLoader(image_datasets[config.VAL], batch_size = 1, shuffle=True), 34 | config.TEST: 
torch.utils.data.DataLoader(image_datasets[config.TEST], batch_size = 1, shuffle=True)} 35 | 36 | dataset_sizes = {x: len(image_datasets[x]) for x in [config.TRAIN, config.VAL]} 37 | classes = image_datasets[config.TRAIN].classes 38 | class_names = image_datasets[config.TRAIN].classes 39 | 40 | inputs, classes = next(iter(dataloaders[config.TRAIN])) 41 | out = torchvision.utils.make_grid(inputs) 42 | plot_me.imshow(out, title=[class_names[x] for x in classes]) 43 | 44 | model_pre = models.densenet121(pretrained=True) # start from ImageNet weights, then freeze the feature extractor 45 | for param in model_pre.features.parameters(): 46 | param.requires_grad = False 47 | 48 | num_features = model_pre.classifier.in_features 49 | features = list(model_pre.classifier.children())[:-1] 50 | features.extend([nn.Linear(num_features, len(class_names))]) 51 | model_pre.classifier = nn.Sequential(*features) 52 | 53 | model_pre = model_pre.to(device) 54 | criterion = nn.CrossEntropyLoss() 55 | optimizer = optim.SGD(model_pre.parameters(), lr=0.001, momentum=0.9, weight_decay=0.01) 56 | # Decay LR by a factor of 0.1 every 10 epochs 57 | exp_lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1) 58 | 59 | def train_model(model, criterion, optimizer, scheduler, num_epochs): 60 | best_acc = 0.0 61 | 62 | for epoch in range(num_epochs): 63 | print("Epoch: {}/{}".format(epoch+1, num_epochs)) 64 | print("="*10) 65 | 66 | for phase in [config.TRAIN, config.VAL]: 67 | if phase == config.TRAIN: 68 | scheduler.step() 69 | model.train() 70 | else: 71 | model.eval() 72 | running_loss = 0.0 73 | running_corrects = 0 74 | for data in dataloaders[phase]: 75 | inputs, labels = data 76 | inputs = inputs.to(device) 77 | labels = labels.to(device) 78 | optimizer.zero_grad() 79 | with torch.set_grad_enabled(phase==config.TRAIN): 80 | outputs = model(inputs) 81 | _, preds = torch.max(outputs, 1) 82 | loss = criterion(outputs, labels) 83 | if phase == config.TRAIN: 84 | loss.backward() 85 | optimizer.step() 86 | running_loss += loss.item() * inputs.size(0) 87 |
running_corrects += torch.sum(preds == labels.data) 88 | 89 | epoch_loss = running_loss / dataset_sizes[phase] 90 | epoch_acc = running_corrects.double() / dataset_sizes[phase] 91 | 92 | print('{} Loss: {:.4f} Acc: {:.4f}'.format( 93 | phase, epoch_loss, epoch_acc)) 94 | 95 | if phase == config.VAL and epoch_acc > best_acc: 96 | print('Validation accuracy increased ({:.6f} --> {:.6f}). Saving model ...'.format(best_acc, epoch_acc)) 97 | torch.save(model.state_dict(), config.WEIGHT) 98 | best_acc = epoch_acc 99 | 100 | print('Best val Acc: {:.4f}'.format(best_acc)) 101 | return model 102 | 103 | def test_model(): 104 | running_correct = 0.0 105 | running_total = 0.0 106 | true_labels = [] 107 | pred_labels = [] 108 | with torch.no_grad(): 109 | for data in dataloaders[config.TEST]: 110 | inputs, labels = data 111 | inputs = inputs.to(device) 112 | labels = labels.to(device) 113 | true_labels.append(labels.item()) 114 | outputs = model_pre(inputs) 115 | _, preds = torch.max(outputs.data, 1) 116 | pred_labels.append(preds.item()) 117 | running_total += labels.size(0) 118 | running_correct += (preds == labels).sum().item() 119 | acc = running_correct/running_total 120 | return (true_labels, pred_labels, running_correct, running_total, acc) 121 | 122 | if os.path.exists(config.WEIGHT): 123 | print("\n Model found! Loading \n") 124 | 125 | if not torch.cuda.is_available(): 126 | model_pre.load_state_dict(torch.load(config.WEIGHT, map_location=lambda storage, loc: storage)) 127 | else: 128 | model_pre.load_state_dict(torch.load(config.WEIGHT)) 129 | else: 130 | train_model(model_pre, criterion, optimizer, exp_lr_scheduler, num_epochs=config.EPOCHS) 131 | 132 | print("Evaluation !!
\n") 133 | true_labels, pred_labels, running_correct, running_total, acc = test_model() 134 | print("Total Correct: {}, Total Test Images: {}".format(running_correct, running_total)) 135 | print("Test Accuracy: ", acc) 136 | 137 | cm = confusion_matrix(true_labels, pred_labels) 138 | tn, fp, fn, tp = cm.ravel() 139 | plot_me.matrix(cm) -------------------------------------------------------------------------------- /models/pneumonia/static/inputImage.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/static/inputImage.jpg -------------------------------------------------------------------------------- /models/pneumonia/static/secondPage.css: -------------------------------------------------------------------------------- 1 | body { 2 | height: 100vh; 3 | width: 100%; 4 | } 5 | 6 | .upload { 7 | margin-top: 4rem; 8 | background-color: #fff7de; 9 | padding: 5rem; 10 | border-radius: 20px; 11 | } 12 | 13 | .upload .btn { 14 | color: black; 15 | width: 8rem; 16 | background-color: #ffeba9; 17 | border: 1px solid black; 18 | border-radius: 15px; 19 | font-weight: lighter; 20 | } 21 | 22 | .upload .btn:hover { 23 | background-color: black; 24 | color: white; 25 | } 26 | -------------------------------------------------------------------------------- /models/pneumonia/static/styles.css: -------------------------------------------------------------------------------- 1 | * { 2 | margin: 0; 3 | } 4 | 5 | body { 6 | background-color: #ffeba9; 7 | font-family: "Poppins", sans-serif; 8 | } 9 | 10 | .home { 11 | text-decoration: none; 12 | font-size: 32px; 13 | color: black; 14 | display: inline; 15 | } 16 | 17 | nav a:hover { 18 | border-bottom: black 1px solid; 19 | color: black; 20 | text-decoration: none; 21 | } 22 | 23 | nav { 24 | margin-top: 3rem; 25 | } 26 | 27 | nav ul { 28 | float: right; 29 | list-style-type: none; 30 |
display: inline; 31 | } 32 | 33 | nav ul li { 34 | display: inline-block; 35 | padding: 1rem; 36 | padding-right: 0; 37 | } 38 | 39 | nav ul li a { 40 | text-decoration: none; 41 | color: black; 42 | } 43 | 44 | #top-section { 45 | margin-top: 8rem; 46 | height: 41rem; 47 | } 48 | 49 | .heading { 50 | font-size: 50px; 51 | font-weight: normal; 52 | } 53 | 54 | #top-section p { 55 | font-weight: lighter; 56 | } 57 | 58 | #top-section a { 59 | border: 1px solid black; 60 | width: 10rem; 61 | border-radius: 20px; 62 | } 63 | 64 | #top-section a:hover { 65 | background-color: black; 66 | color: white; 67 | } 68 | 69 | #top-section .image img { 70 | float: right; 71 | } 72 | 73 | #mid-section { 74 | background-color: #fff7de; 75 | height: 35rem; 76 | padding: 6rem; 77 | } 78 | 79 | #mid-section .card { 80 | border-radius: 20px; 81 | align-items: center; 82 | height: 22rem; 83 | box-shadow: 4px 4px 4px #e5e5e5; 84 | } 85 | 86 | #mid-section .brain { 87 | padding: 2rem; 88 | padding-bottom: 0; 89 | width: 17rem; 90 | height: 12rem; 91 | } 92 | 93 | #mid-section .eye { 94 | margin-top: 1rem; 95 | padding-bottom: 0; 96 | padding: 2rem; 97 | height: 11rem; 98 | width: 18rem; 99 | } 100 | 101 | #mid-section .lungs { 102 | padding: 2rem; 103 | width: 12rem; 104 | padding-bottom: 0; 105 | height: 12rem; 106 | } 107 | 108 | #mid-section .btn { 109 | color: black; 110 | background-color: #ffeba9; 111 | border: 1px solid black; 112 | border-radius: 20px; 113 | } 114 | 115 | #mid-section .btn:hover { 116 | background-color: black; 117 | color: white; 118 | } 119 | 120 | #mid-section .card .card-body { 121 | width: 100%; 122 | height: 100%; 123 | text-align: center; 124 | } 125 | 126 | footer { 127 | padding: 5rem; 128 | background-color: black; 129 | color: white; 130 | } 131 | 132 | footer ul { 133 | list-style-type: none; 134 | } 135 | 136 | @media (max-width: 500px) { 137 | body { 138 | text-align: center; 139 | } 140 | #top-section { 141 | height: 100%; 142 | } 143 | 144 | 
#top-section a { 145 | margin-bottom: 3rem; 146 | } 147 | 148 | #mid-section .card { 149 | width: auto; 150 | } 151 | } 152 | -------------------------------------------------------------------------------- /models/pneumonia/templates/pneindex.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | Pneumonia 14 | 15 | 16 | 17 | 18 |
19 |

Pneumonia Prediction using Chest X-Ray

20 |
21 |
22 |
23 | 24 | 25 |
26 |


27 | 28 |
29 |
30 | 31 | 32 |
33 | {% if resultt %} 34 | 35 |

{{resultt}}

36 |
37 | 38 | {% endif %} 39 |
40 | 41 | 42 | 43 | 44 | -------------------------------------------------------------------------------- /models/pneumonia/weights/pne.pt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/pneumonia/weights/pne.pt -------------------------------------------------------------------------------- /models/riskmodel/README.md: -------------------------------------------------------------------------------- 1 | # Risk Model 2 | [![Documentation Status](https://readthedocs.org/projects/fairscale/badge/?version=latest)](https://fairscale.readthedocs.io/en/latest/?badge=latest) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/facebookresearch/fairscale/blob/master/CONTRIBUTING.md) 3 | 4 | A risk model is a statistical procedure for assigning to an individual a probability of developing a future adverse outcome in a given time period. The assignment is made 5 | by combining his or her values for a set of risk-determining covariates with incidence and mortality data and published estimates of the covariates’ effects on the outcome. 6 | Such risk models are playing increasingly important roles in the practice of medicine, as clinical care becomes more tailored to individual characteristics and needs. 7 | Figure 1.  Risk Model 8 | 9 | ## Dataset 10 | The dataset is organized in two files, NHANESI_X.csv and NHANESI_y.csv, and contains 9,933 examples. 11 | The National Health and Nutrition Examination Survey (NHANES) is a program of studies designed to assess the health and nutritional status of adults and children in the United States. 12 | NHANES contains comprehensive health records of patients linked across systems and time. 13 | Figure 2. 
Illustrative Examples of Risk Model dataset 14 | 15 | ## Introduction 16 | 17 | #### Directory Layout 18 | . 19 | ├── data 20 | │ ├── NHANESI_X.csv # feature data 21 | │ ├── NHANESI_y.csv # label data 22 | ├── src 23 | │ ├── config.py # contains all the configuration 24 | │ ├── utils.py # data loading and preparation helpers 25 | │ ├── predict.py # end-to-end prediction file 26 | │ ├── train.py # model training 27 | ├── static 28 | │ ├── secondPage.css # CSS files 29 | │ ├── styles.css # CSS files 30 | ├── templates 31 | │ ├── rkindex.html # HTML file for the UI 32 | ├── weight 33 | │ ├── model.pkl # trained weights 34 | ├── rapp.py # web app file 35 | 36 | #### Content 37 | | Directory | Info | 38 | |-----------|--------------| 39 | | `src` | Contains all Python files | 40 | | `templates` | Contains the HTML file | 41 | | `static` | Contains CSS, JS files and images | 42 | | `data` | Contains [data](https://wwwn.cdc.gov/nchs/nhanes/default.aspx) | 43 | | `weight` | Contains the trained model | 44 | 45 | ## Evaluation 46 | The proposed approach was evaluated using Precision, Recall, Accuracy and F1 score. Our source code is freely available here. 47 | Figure 3. Evaluation of Model 48 | 49 | ## Prerequisites 50 | * Python 3.4+ 51 | * PyTorch and its dependencies 52 | 53 | ## How to Install and Run 54 | * Clone this repository and run in a command prompt 55 | ```bash 56 | pip install -r requirements.txt 57 | ``` 58 | * Run this to start the server 59 | ```bash 60 | python rapp.py 61 | ``` 62 | * Update the `Features` and predict whether the user is at `risk` or not. 63 | 64 |  Prediction of Model 65 | 66 | 67 | ## Train your own model* 68 | * For training, run `train.py` in the `src` directory. 69 | * To change the epochs, data directory, random seed, learning rate, etc., edit `config.py`. 70 | 71 | > Note : 72 | > * This project purely utilizes PyTorch; it would be appreciated if contributions use PyTorch only. 
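The C-index printed by `train.py` comes from `utils.cindex`, which wraps `lifelines.utils.concordance_index`. For intuition, the same quantity can be sketched in plain NumPy — this is an illustrative re-implementation for the README, not the project's code:

```python
import numpy as np

def cindex(y_true, scores):
    """Concordance index: the fraction of comparable pairs (pairs with
    different labels) whose predicted scores are ordered the same way
    as their labels; tied scores count as half-concordant."""
    y_true = np.asarray(y_true, dtype=float)
    scores = np.asarray(scores, dtype=float)
    concordant = 0.0
    comparable = 0
    n = len(y_true)
    for i in range(n):
        for j in range(i + 1, n):
            if y_true[i] == y_true[j]:
                continue  # same outcome: the pair carries no ranking information
            comparable += 1
            # index of the higher-risk (positive) example in this pair
            hi, lo = (i, j) if y_true[i] > y_true[j] else (j, i)
            if scores[hi] > scores[lo]:
                concordant += 1.0
            elif scores[hi] == scores[lo]:
                concordant += 0.5
    return concordant / comparable

# Two of four labels are positive; one of the four comparable pairs is discordant.
print(cindex([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

A value of 0.5 corresponds to random ranking and 1.0 to a perfect ranking of higher-risk patients above lower-risk ones.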
73 | -------------------------------------------------------------------------------- /models/riskmodel/__pycache__/rapp.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/riskmodel/__pycache__/rapp.cpython-37.pyc -------------------------------------------------------------------------------- /models/riskmodel/__pycache__/rapp.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/riskmodel/__pycache__/rapp.cpython-38.pyc -------------------------------------------------------------------------------- /models/riskmodel/rapp.py: -------------------------------------------------------------------------------- 1 | from flask import Flask,render_template,Blueprint,request 2 | import models.riskmodel.src.predict as predict 3 | #import src.predict as predict 4 | #rapp=Flask(__name__) 5 | rapp=Blueprint("rapp",__name__,template_folder="templates",static_folder="static") 6 | @rapp.route('/',methods=['POST','GET']) 7 | def index(): 8 | if request.method=='POST': 9 | age=float(request.form['age']) 10 | bp=float(request.form['Diastolic BP']) 11 | pi=float(request.form['Poverty index']) 12 | race=float(request.form['Race']) 13 | rbc=float(request.form['rbc']) 14 | sr=float(request.form['sr']) 15 | sa=float(request.form['sa']) 16 | sc=float(request.form['sc']) 17 | si=float(request.form['si']) 18 | sm=float(request.form['sm']) 19 | sp=float(request.form['sp']) 20 | sex=float(request.form['sex']) 21 | sbp=float(request.form['sbp']) 22 | tibc=float(request.form['TIBC']) 23 | ts=float(request.form['ts']) 24 | wbc=float(request.form['wbc']) 25 | bmi=float(request.form['bmi']) 26 | pp=float(request.form['pp']) 27 | inp=[] 28 | 
inp.append([age,bp,pi,race,rbc,sr,sa,sc,si,sm,sp,sex,sbp,tibc,ts,wbc,bmi,pp]) 29 | model=predict.predict(inp) 30 | result=model.predict_risk() 31 | print(result) 32 | return render_template("/rkindex.html",result=result) 33 | return render_template("/rkindex.html",result=None) 34 | 35 | # if __name__=="__main__": 36 | # rapp.run(debug=True) -------------------------------------------------------------------------------- /models/riskmodel/readme_images/fig.4.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/riskmodel/readme_images/fig.4.jpg -------------------------------------------------------------------------------- /models/riskmodel/src/__pycache__/config.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/riskmodel/src/__pycache__/config.cpython-38.pyc -------------------------------------------------------------------------------- /models/riskmodel/src/__pycache__/predict.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/riskmodel/src/__pycache__/predict.cpython-37.pyc -------------------------------------------------------------------------------- /models/riskmodel/src/__pycache__/predict.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/riskmodel/src/__pycache__/predict.cpython-38.pyc -------------------------------------------------------------------------------- /models/riskmodel/src/config.py: 
-------------------------------------------------------------------------------- 1 | data_dir="./models/riskmodel/data/" 2 | model_dir="./models/riskmodel/weight/" -------------------------------------------------------------------------------- /models/riskmodel/src/predict.py: -------------------------------------------------------------------------------- 1 | import pickle 2 | import numpy as np 3 | import pandas as pd 4 | import warnings 5 | warnings.filterwarnings("ignore") 6 | 7 | 8 | loaded_model = pickle.load(open('models/riskmodel/weight/model.pkl', 'rb')) 9 | 10 | class predict: 11 | def __init__(self,arr): 12 | 13 | """arr: [['Age', 'Diastolic BP', 'Poverty index', 'Race', 'Red blood cells', 14 | 'Sedimentation rate', 'Serum Albumin', 'Serum Cholesterol', 15 | 'Serum Iron', 'Serum Magnesium', 'Serum Protein', 'Sex', 'Systolic BP', 16 | 'TIBC', 'TS', 'White blood cells', 'BMI', 'Pulse pressure']]""" 17 | 18 | self.arr=arr 19 | 20 | def predict_risk(self): 21 | ret= loaded_model.predict(self.arr)[0] 22 | return ret 23 | 24 | if __name__=="__main__": 25 | p=predict([[35.0,92.0,126.0,2.0,77.7,12.0,5.0,165.0,135.0,1.37,7.6,2.0,142.0, 323.0, 41.8, 5.8, 31.109434, 50.0]]) 26 | print(p.predict_risk()) 27 | 28 | -------------------------------------------------------------------------------- /models/riskmodel/src/train.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | import numpy as np 3 | import config 4 | import utils 5 | from sklearn.experimental import enable_iterative_imputer 6 | from sklearn.impute import IterativeImputer 7 | from sklearn.model_selection import train_test_split 8 | from sklearn.ensemble import RandomForestClassifier 9 | import lifelines 10 | from sklearn.model_selection import GridSearchCV 11 | import pickle 12 | 13 | 14 | x,y=utils.load(10) 15 | X_train, X_val, y_train, y_val = train_test_split(x, y, test_size=0.25, random_state=10) 16 | 17 | imputer = 
IterativeImputer(random_state=0, sample_posterior=False, max_iter=1, min_value=0) 18 | imputer.fit(X_train) 19 | X_train_imputed = pd.DataFrame(imputer.transform(X_train), columns=X_train.columns) 20 | X_val_imputed = pd.DataFrame(imputer.transform(X_val), columns=X_val.columns) 21 | 22 | parameters = {'criterion':['gini','entropy'], 'max_depth':[1,2,3,10], 23 | 24 | 'n_estimators':[1,10,100,120,150,170,200,500], 25 | } 26 | 27 | rf = RandomForestClassifier(random_state=10) 28 | clf = GridSearchCV(rf, parameters) 29 | clf.fit(X_train_imputed, y_train) 30 | 31 | y_train_rf_preds = clf.predict_proba(X_train_imputed)[:, 1] 32 | print(f"Train C-Index: {utils.cindex(y_train.values, y_train_rf_preds)}") 33 | 34 | y_val_rf_preds = clf.predict_proba(X_val_imputed)[:, 1] 35 | print(f"Val C-Index: {utils.cindex(y_val.values, y_val_rf_preds)}") 36 | 37 | pickle.dump(clf, open(config.model_dir+'model.pkl', 'wb')) 38 | 39 | -------------------------------------------------------------------------------- /models/riskmodel/src/utils.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | import numpy as np 3 | import lifelines.utils  # needed by cindex below 4 | import config 5 | def load(th): 6 | x=pd.read_csv(config.data_dir+"NHANESI_X.csv") 7 | y=pd.read_csv(config.data_dir+"NHANESI_y.csv")["y"] 8 | y=np.array(y) 9 | df = x.drop(['Unnamed: 0'], axis=1) 10 | df.loc[:, 'time'] = y 11 | df.loc[:, 'death'] = np.ones(len(x)) 12 | df.loc[df.time < 0, 'death'] = 0 13 | df.loc[:, 'time'] = np.abs(df.time) 14 | df = df.dropna(axis='rows') 15 | mask = (df.time > th) | (df.death == 1) 16 | df = df[mask] 17 | x = df.drop(['time', 'death'], axis='columns') 18 | y = df.time < th 19 | return x,y 20 | 21 | def cindex(y_true, scores): 22 | return lifelines.utils.concordance_index(y_true, scores) -------------------------------------------------------------------------------- /models/riskmodel/static/secondPage.css: 
-------------------------------------------------------------------------------- 1 | body { 2 | height: 100vh; 3 | width: 100%; 4 | } 5 | 6 | .upload { 7 | margin-top: 4rem; 8 | background-color: #fff7de; 9 | padding: 5rem; 10 | border-radius: 20px; 11 | } 12 | 13 | .upload .btn { 14 | color: black; 15 | width: 8rem; 16 | background-color: #ffeba9; 17 | border: 1px solid black; 18 | border-radius: 15px; 19 | font-weight: lighter; 20 | } 21 | 22 | .upload .btn:hover { 23 | background-color: black; 24 | color: white; 25 | } 26 | -------------------------------------------------------------------------------- /models/riskmodel/static/styles.css: -------------------------------------------------------------------------------- 1 | * { 2 | margin: 0; 3 | } 4 | 5 | body { 6 | background-color: #ffeba9; 7 | font-family: "Poppins", sans-serif; 8 | } 9 | 10 | .home { 11 | text-decoration: none; 12 | font-size: 32; 13 | color: black; 14 | display: inline; 15 | } 16 | 17 | nav a:hover { 18 | border-bottom: black 1px solid; 19 | color: black; 20 | text-decoration: none; 21 | } 22 | 23 | nav { 24 | margin-top: 3rem; 25 | } 26 | 27 | nav ul { 28 | float: right; 29 | list-style-type: none; 30 | display: inline; 31 | } 32 | 33 | nav ul li { 34 | display: inline-block; 35 | padding: 1rem; 36 | padding-right: 0; 37 | } 38 | 39 | nav ul li a { 40 | text-decoration: none; 41 | color: black; 42 | } 43 | 44 | #top-section { 45 | margin-top: 8rem; 46 | height: 41rem; 47 | } 48 | 49 | .heading { 50 | font-size: 50px; 51 | font-weight: normal; 52 | } 53 | 54 | #top-section p { 55 | font-weight: lighter; 56 | } 57 | 58 | #top-section a { 59 | border: 1px solid black; 60 | width: 10rem; 61 | border-radius: 20px; 62 | } 63 | 64 | #top-section a:hover { 65 | background-color: black; 66 | color: white; 67 | } 68 | 69 | #top-section .image img { 70 | float: right; 71 | } 72 | 73 | #mid-section { 74 | background-color: #fff7de; 75 | height: 35rem; 76 | padding: 6rem; 77 | } 78 | 79 | #mid-section 
.card { 80 | border-radius: 20px; 81 | align-items: center; 82 | height: 22rem; 83 | box-shadow: 4px 4px 4px #e5e5e5; 84 | } 85 | 86 | #mid-section .brain { 87 | padding: 2rem; 88 | padding-bottom: 0; 89 | width: 17rem; 90 | height: 12rem; 91 | } 92 | 93 | #mid-section .eye { 94 | margin-top: 1rem; 95 | padding-bottom: 0; 96 | padding: 2rem; 97 | height: 11rem; 98 | width: 18rem; 99 | } 100 | 101 | #mid-section .lungs { 102 | padding: 2rem; 103 | width: 12rem; 104 | padding-bottom: 0; 105 | height: 12rem; 106 | } 107 | 108 | #mid-section .btn { 109 | color: black; 110 | background-color: #ffeba9; 111 | border: 1px solid black; 112 | border-radius: 20px; 113 | } 114 | 115 | #mid-section .btn:hover { 116 | background-color: black; 117 | color: white; 118 | } 119 | 120 | #mid-section .card .card-body { 121 | width: 100%; 122 | height: 100%; 123 | text-align: center; 124 | } 125 | 126 | footer { 127 | padding: 5rem; 128 | background-color: black; 129 | color: white; 130 | } 131 | 132 | footer ul { 133 | list-style-type: none; 134 | } 135 | 136 | @media (max-width: 500px) { 137 | body { 138 | text-align: center; 139 | } 140 | #top-section { 141 | height: 100%; 142 | } 143 | 144 | #top-section a { 145 | margin-bottom: 3rem; 146 | } 147 | 148 | #mid-section .card { 149 | width: auto; 150 | } 151 | } 152 | -------------------------------------------------------------------------------- /models/riskmodel/templates/rkindex.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | Risk Model 14 | 15 | 16 | 17 | 18 |
19 |

Risk Prediction using General Information

20 |
21 |
22 |
23 | 24 | 25 |
26 | 27 |
28 | 29 |
30 |
31 | 32 |
33 | 34 |
35 |
36 | 37 |
38 | 39 |
40 |
41 | 42 |
43 | 44 |
45 |
46 | 47 |
48 | 49 |
50 |
51 | 52 |
53 | 54 |
55 |
56 | 57 |
58 | 59 |
60 |
61 | 62 |
63 | 64 |
65 |
66 | 67 |
68 | 69 |
70 |
71 | 72 |
73 | 74 |
75 |
76 | 77 |
78 | 79 |
80 |
81 | 82 |
83 | 84 |
85 |
86 | 87 |
88 | 89 |
90 |
91 | 92 |
93 | 94 |
95 |
96 | 97 |
98 | 99 |
100 |
101 | 102 |
103 | 104 |
105 |
106 | 107 |
108 | 109 |
110 |
111 | 112 |
113 | 114 |
115 |
116 |


117 | 118 | 119 |
120 | 121 | 122 |
123 | {% if result %} 124 |

Your predicted 10-year risk is

125 |

{{ result }}

126 | {% endif %} 127 |
128 | 129 | 130 | 131 | 132 | 133 | 134 | 135 | -------------------------------------------------------------------------------- /models/riskmodel/weight/model.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/models/riskmodel/weight/model.pkl -------------------------------------------------------------------------------- /readme_images/medicalai-2020-11-25_00.10.09.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/readme_images/medicalai-2020-11-25_00.10.09.gif -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | -f https://download.pytorch.org/whl/torch_stable.html 2 | absl-py==0.9.0 3 | albumentations==0.4.5 4 | anykeystore==0.2 5 | astroid==2.4.2 6 | astunparse==1.6.3 7 | attrs==19.3.0 8 | backcall==0.2.0 9 | bleach==3.1.5 10 | cachetools==4.1.0 11 | certifi==2020.6.20 12 | click==7.1.2 13 | cryptacular==1.5.5 14 | cycler==0.10.0 15 | decorator==4.4.2 16 | defusedxml==0.6.0 17 | Flask==1.1.2 18 | Flask-Cors==3.0.9 19 | gast==0.3.3 20 | glob3==0.0.1 21 | google-auth==1.18.0 22 | google-auth-oauthlib==0.4.1 23 | google-pasta==0.2.0 24 | grpcio==1.29.0 25 | gunicorn==20.0.4 26 | h5py==2.10.0 27 | hupper==1.10.2 28 | imageio==2.8.0 29 | imgaug==0.2.6 30 | ipykernel==5.3.0 31 | ipython==7.15.0 32 | ipython-genutils==0.2.0 33 | ipywidgets==7.5.1 34 | isort==4.3.21 35 | itsdangerous==1.1.0 36 | jedi==0.17.0 37 | Jinja2==2.11.2 38 | joblib==0.15.1 39 | jsonschema==3.2.0 40 | jupyter==1.0.0 41 | jupyter-client==6.1.3 42 | jupyter-console==6.1.0 43 | jupyter-core==4.6.3 44 | Keras==2.4.2 45 | Keras-Applications==1.0.8 46 | Keras-Preprocessing==1.1.2 47 | kiwisolver==1.2.0 48 
| lazy-object-proxy==1.4.3 49 | Markdown==3.2.2 50 | MarkupSafe==1.1.1 51 | matplotlib==3.2.1 52 | mccabe==0.6.1 53 | mistune==0.8.4 54 | nltk==3.5 55 | numpy==1.18.5 56 | opencv-python-headless==4.2.0.34 57 | opt-einsum==3.2.1 58 | pandas==1.0.4 59 | regex==2020.6.8 60 | scikit-image==0.17.2 61 | scikit-learn==0.23.2 62 | scipy==1.4.1 63 | seaborn==0.10.1 64 | sklearn==0.0 65 | torch==1.5.0+cpu 66 | torchvision==0.6.0+cpu 67 | tqdm==4.46.1 68 | -------------------------------------------------------------------------------- /runtime.txt: -------------------------------------------------------------------------------- 1 | python-3.8.3 2 | -------------------------------------------------------------------------------- /static/images/brain.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/static/images/brain.png -------------------------------------------------------------------------------- /static/images/doc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/static/images/doc.png -------------------------------------------------------------------------------- /static/images/eye.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/static/images/eye.png -------------------------------------------------------------------------------- /static/images/lungs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/static/images/lungs.png -------------------------------------------------------------------------------- /static/images/risk.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/manpreet2000/Medical-AI/de100f6ce3b783086d57dfb0ceab3fa28544df53/static/images/risk.jpg -------------------------------------------------------------------------------- /static/styles.css: -------------------------------------------------------------------------------- 1 | * { 2 | margin: 0; 3 | } 4 | 5 | body { 6 | background-color: #ffeba9; 7 | font-family: "Poppins", sans-serif; 8 | } 9 | 10 | .home { 11 | text-decoration: none; 12 | font-size: 32; 13 | color: black; 14 | display: inline; 15 | } 16 | 17 | nav a:hover { 18 | border-bottom: black 1px solid; 19 | color: black; 20 | text-decoration: none; 21 | } 22 | 23 | nav { 24 | margin-top: 3rem; 25 | } 26 | 27 | nav ul { 28 | float: right; 29 | list-style-type: none; 30 | display: inline; 31 | } 32 | 33 | nav ul li { 34 | display: inline-block; 35 | padding: 1rem; 36 | padding-right: 0; 37 | } 38 | 39 | nav ul li a { 40 | text-decoration: none; 41 | color: black; 42 | } 43 | 44 | #top-section { 45 | margin-top: 8rem; 46 | height: 41rem; 47 | } 48 | 49 | .heading { 50 | font-size: 6vh; 51 | font-weight: normal; 52 | } 53 | 54 | #top-section p { 55 | font-weight: lighter; 56 | } 57 | 58 | #top-section a { 59 | border: 1px solid black; 60 | width: 10rem; 61 | border-radius: 20px; 62 | } 63 | 64 | #top-section a:hover { 65 | background-color: black; 66 | color: white; 67 | } 68 | 69 | #top-section .image img { 70 | float: right; 71 | } 72 | 73 | #mid-section .row .col .card { 74 | margin-left: 0rem; 75 | } 76 | 77 | #mid-section .brain { 78 | padding: 0rem; 79 | padding-bottom: 0; 80 | width: 12rem; 81 | height: 12rem; 82 | } 83 | 84 | #mid-section .eye { 85 | margin-top: 0rem; 86 | padding-bottom: 0; 87 | padding: 0rem; 88 | height: 11rem; 89 | width: 11rem; 90 | } 91 | 92 | #mid-section .lungs { 93 | padding: 0rem; 94 | width: 12rem; 95 | padding-bottom: 0; 96 | height: 12rem; 97 | } 98 | 99 | 
#mid-section .btn { 100 | color: black; 101 | background-color: #ffeba9; 102 | border: 1px solid rgba(0, 0, 0, 0.534); 103 | border-radius: 20px; 104 | } 105 | 106 | #mid-section .btn:hover { 107 | background-color: black; 108 | color: white; 109 | } 110 | 111 | #mid-section .card .card-body { 112 | width: 100%; 113 | height: 100%; 114 | text-align: center; 115 | } 116 | 117 | #mid-section .card { 118 | border-radius: 20px; 119 | align-items: center; 120 | height: 22rem; 121 | width: 18rem; 122 | box-shadow: 4px 4px 4px #e5e5e5; 123 | } 124 | 125 | #mid-section { 126 | background-color: #fff7de; 127 | height: 35rem; 128 | padding: 5rem; 129 | } 130 | 131 | footer { 132 | padding: 4rem; 133 | background-color: black; 134 | color: white; 135 | } 136 | 137 | footer ul { 138 | list-style-type: none; 139 | } 140 | 141 | @media (min-width: 300px) and (max-width: 500px) { 142 | #mid-section .row .col .card { 143 | margin-left: 2; 144 | } 145 | 146 | #mid-section .card { 147 | width: 14rem; 148 | height: 20rem; 149 | } 150 | #mid-section .brain { 151 | width: 15rem; 152 | height: 10rem; 153 | } 154 | 155 | #mid-section .eye { 156 | width: 15rem; 157 | height: 9rem; 158 | } 159 | 160 | #mid-section .lungs { 161 | width: 12rem; 162 | height: 11rem; 163 | } 164 | } 165 | 166 | @media (max-width: 500px) { 167 | body { 168 | text-align: center; 169 | } 170 | 171 | .hone { 172 | margin-left: 0; 173 | } 174 | 175 | #top-section { 176 | margin-top: 3rem; 177 | height: auto; 178 | padding-bottom: 4rem; 179 | } 180 | 181 | #top-section a { 182 | margin-bottom: 3rem; 183 | } 184 | 185 | #mid-section { 186 | height: auto; 187 | padding: 1rem; 188 | padding-bottom: 4rem; 189 | } 190 | 191 | #mid-section .row .col .card { 192 | margin-top: 2rem; 193 | } 194 | 195 | #mid-section .card { 196 | width: 30vh; 197 | height: 43vh; 198 | } 199 | 200 | #mid-section .card3 { 201 | height: 45vh; 202 | } 203 | 204 | #mid-section .brain { 205 | width: 30vh; 206 | height: 20vh; 207 | } 208 | 
#mid-section .eye { 209 | margin-top: 0.5rem; 210 | padding: 4vh; 211 | width: 30vh; 212 | height: 20vh; 213 | } 214 | #mid-section .lungs { 215 | width: 30vh; 216 | height: 23vh; 217 | } 218 | 219 | footer ul { 220 | margin: 0; 221 | padding: 0; 222 | } 223 | } 224 | -------------------------------------------------------------------------------- /templates/getting_started.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Getting Started 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 30 | 31 | 32 | 38 |

We provide support every step of the way

39 |
40 | 41 |
42 |
43 |
44 |
45 |

46 | What can I do with Medical AI ? 47 |

48 |
49 |
50 |
51 |
    52 |
  • Create and compare models based on your data.
  • 53 |
  • Save and deploy a model.
  • 54 |
  • Learn which factors drive risk.
  • 55 |
  • Learn image segmentation.
  • 56 |
57 |
58 |
59 |
60 |
61 |
62 | 67 |
68 |
69 |
    70 |
  • Algorithm and hyperparameter choices make models accurate and easy to train.
  • 71 |
  • Practical algorithms suitable for many medical applications.
  • 72 |
  • Model training and deployment designed with common medical IT in mind.
  • 73 |
74 |
75 |
76 |
77 |
78 |
79 |
80 |

81 | How Do I Get Started? 82 |

83 |
84 |
85 |
86 |

Medical AI is built on PyTorch, one of the fastest-growing frameworks in the data science field.

87 |
    88 |
  1. Installation
  2. 89 |
      90 |
    • Install Python 3.7.0
    • 91 |
    • pip install -r requirements.txt
    • 92 |
    93 |
  3. Fork the Medical AI repository
  4. 94 |
      95 |
    • https://github.com/manpreet2000/Medical-AI
    • 96 |
    97 |
  5. Clone the Medical AI repository
  6. 98 |
  7. Run the web app from the Medical AI directory.
  8. 99 |
      100 |
    • python app.py
    • 101 |
    102 |
103 |
104 |
105 |
106 |
107 |
108 | 109 | 110 | 111 | -------------------------------------------------------------------------------- /templates/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Medical 10 | 11 | 12 | 13 | 14 | 22 | 23 | 24 |
25 |
26 |
27 |
28 |

Medical problem?

29 |
Here's the solution
30 |

This portal can help you with medical solutions, for FREE!

31 | Check It Out 32 |
33 |
34 | 35 |
36 |
37 |
38 |
39 | 40 | 41 |
42 |
43 |
44 | 45 |
46 |
47 | ... 48 |
49 |
Brain Tumor
Segmentation
50 | Click Here 51 |
52 |
53 |
54 | 55 |
56 |
57 | ... 58 |
59 |
Cataract
Classification
60 | Click Here 61 |
62 |
63 |
64 | 65 |
66 |
67 | ... 68 |
69 |
Pneumonia
Classification
70 | Click Here 71 |
72 |
73 |
74 | 75 |
76 |
77 | ... 78 |
79 |
10 year
Risk Classification
80 | Click Here 81 |
82 |
83 |
84 | 85 |
86 |
87 | 88 | 89 |
90 | 91 | 92 | 93 | 94 | 95 | 96 | 97 | 134 | --------------------------------------------------------------------------------