├── loss ├── __init__.py └── hnl.py ├── saliency_maps ├── scripts │ └── __init__.py └── colab_requirements.txt ├── weak_segmentation ├── nnunetv2 │ ├── __init__.py │ ├── run │ │ └── __init__.py │ ├── imageio │ │ ├── __init__.py │ │ └── readme.md │ ├── inference │ │ └── __init__.py │ ├── nnunet │ │ └── __init__.py │ ├── tests │ │ ├── __init__.py │ │ └── integration_tests │ │ │ ├── __init__.py │ │ │ ├── run_integration_test_trainingOnly_DDP.sh │ │ │ ├── cleanup_integration_test.py │ │ │ ├── prepare_integration_tests.sh │ │ │ ├── run_integration_test.sh │ │ │ ├── add_lowres_and_cascade.py │ │ │ ├── lsf_commands.sh │ │ │ └── readme.md │ ├── training │ │ ├── __init__.py │ │ ├── loss │ │ │ ├── __init__.py │ │ │ ├── robust_ce_loss.py │ │ │ └── deep_supervision.py │ │ ├── logging │ │ │ └── __init__.py │ │ ├── dataloading │ │ │ ├── __init__.py │ │ │ └── utils.py │ │ ├── lr_scheduler │ │ │ ├── __init__.py │ │ │ └── polylr.py │ │ ├── nnUNetTrainer │ │ │ ├── __init__.py │ │ │ └── variants │ │ │ │ ├── __init__.py │ │ │ │ ├── loss │ │ │ │ ├── __init__.py │ │ │ │ └── nnUNetTrainerCELoss.py │ │ │ │ ├── benchmarking │ │ │ │ ├── __init__.py │ │ │ │ └── nnUNetTrainerBenchmark_5epochs_noDataLoading.py │ │ │ │ ├── lr_schedule │ │ │ │ ├── __init__.py │ │ │ │ └── nnUNetTrainerCosAnneal.py │ │ │ │ ├── optimizer │ │ │ │ └── __init__.py │ │ │ │ ├── sampling │ │ │ │ └── __init__.py │ │ │ │ ├── data_augmentation │ │ │ │ ├── __init__.py │ │ │ │ ├── nnUNetTrainerNoMirroring.py │ │ │ │ └── nnUNetTrainerNoDA.py │ │ │ │ ├── training_length │ │ │ │ └── __init__.py │ │ │ │ └── network_architecture │ │ │ │ └── __init__.py │ │ └── data_augmentation │ │ │ ├── __init__.py │ │ │ ├── custom_transforms │ │ │ ├── __init__.py │ │ │ ├── manipulating_data_dict.py │ │ │ ├── limited_length_multithreaded_augmenter.py │ │ │ ├── masking.py │ │ │ ├── region_based_training.py │ │ │ └── transforms_for_dummy_2d.py │ │ │ └── compute_initial_patch_size.py │ ├── utilities │ │ ├── __init__.py │ │ ├── label_handling │ │ │ └── __init__.py │ │ ├── plans_handling │ │ │ └── __init__.py │ │ ├── tensor_utilities.py │ │ ├── network_initialization.py │ │ ├── helpers.py │ │ ├── find_class_by_name.py │ │ ├── collate_outputs.py │ │ ├── ddp_allgather.py │ │ ├── utils.py │ │ ├── default_n_proc_DA.py │ │ └── json_export.py │ ├── batch_running │ │ ├── __init__.py │ │ ├── benchmarking │ │ │ ├── __init__.py │ │ │ └── generate_benchmarking_commands.py │ │ ├── release_trainings │ │ │ ├── __init__.py │ │ │ └── nnunetv2_v1 │ │ │ │ └── __init__.py │ │ └── collect_results_custom_Decathlon_2d.py │ ├── ensembling │ │ └── __init__.py │ ├── evaluation │ │ └── __init__.py │ ├── model_sharing │ │ ├── __init__.py │ │ ├── model_import.py │ │ └── model_download.py │ ├── postprocessing │ │ └── __init__.py │ ├── preprocessing │ │ ├── __init__.py │ │ ├── cropping │ │ │ ├── __init__.py │ │ │ └── cropping.py │ │ ├── resampling │ │ │ ├── __init__.py │ │ │ └── utils.py │ │ ├── normalization │ │ │ ├── __init__.py │ │ │ ├── readme.md │ │ │ └── map_channel_name_to_normalization.py │ │ └── preprocessors │ │ │ └── __init__.py │ ├── dataset_conversion │ │ ├── __init__.py │ │ ├── datasets_for_integration_tests │ │ │ ├── __init__.py │ │ │ ├── Dataset999_IntegrationTest_Hippocampus.py │ │ │ ├── Dataset998_IntegrationTest_Hippocampus_ignore.py │ │ │ └── Dataset997_IntegrationTest_Hippocampus_regions.py │ │ └── Dataset220_KiTS2023.py │ ├── experiment_planning │ │ ├── __init__.py │ │ ├── dataset_fingerprint │ │ │ └── __init__.py │ │ ├── experiment_planners │ │ │ ├── __init__.py │ │ │ └── readme.md 
│ │ └── plans_for_pretraining │ │ │ └── __init__.py │ ├── configuration.py │ └── paths.py └── .gitignore ├── biomedclip_finetuning └── open_clip │ ├── src │ ├── open_clip_train │ │ ├── __init__.py │ │ ├── precision.py │ │ ├── logger.py │ │ └── scheduler.py │ └── open_clip │ │ ├── version.py │ │ ├── bpe_simple_vocab_16e6.txt.gz │ │ ├── model_configs │ │ ├── ViT-B-16.json │ │ ├── ViT-B-32.json │ │ ├── ViT-M-16.json │ │ ├── ViT-M-32.json │ │ ├── ViT-S-16.json │ │ ├── ViT-S-32.json │ │ ├── ViT-L-14.json │ │ ├── ViT-L-16.json │ │ ├── ViT-M-32-alt.json │ │ ├── ViT-S-16-alt.json │ │ ├── ViT-S-32-alt.json │ │ ├── ViT-B-16-plus-240.json │ │ ├── ViT-B-16-plus.json │ │ ├── ViT-B-32-256.json │ │ ├── ViT-B-32-plus-256.json │ │ ├── ViT-L-14-280.json │ │ ├── ViT-L-14-336.json │ │ ├── ViT-L-16-320.json │ │ ├── mt5-base-ViT-B-32.json │ │ ├── xlm-roberta-base-ViT-B-32.json │ │ ├── ViT-H-14.json │ │ ├── ViT-H-16.json │ │ ├── ViT-B-16-quickgelu.json │ │ ├── ViT-B-32-quickgelu.json │ │ ├── ViT-H-14-378.json │ │ ├── ViT-M-16-alt.json │ │ ├── roberta-ViT-B-32.json │ │ ├── ViT-L-14-336-quickgelu.json │ │ ├── ViT-L-14-quickgelu.json │ │ ├── mt5-xl-ViT-H-14.json │ │ ├── xlm-roberta-large-ViT-H-14.json │ │ ├── ViT-e-14.json │ │ ├── ViT-g-14.json │ │ ├── ViT-H-14-quickgelu.json │ │ ├── ViT-bigG-14.json │ │ ├── ViT-H-14-378-quickgelu.json │ │ ├── nllb-clip-base.json │ │ ├── RN101.json │ │ ├── RN50.json │ │ ├── RN50x16.json │ │ ├── RN50x4.json │ │ ├── RN50x64.json │ │ ├── vit_medium_patch16_gap_256.json │ │ ├── ViT-bigG-14-quickgelu.json │ │ ├── nllb-clip-large.json │ │ ├── swin_base_patch4_window7_224.json │ │ ├── EVA01-g-14.json │ │ ├── vit_relpos_medium_patch16_cls_224.json │ │ ├── EVA01-g-14-plus.json │ │ ├── EVA02-B-16.json │ │ ├── EVA02-L-14.json │ │ ├── EVA02-E-14.json │ │ ├── EVA02-L-14-336.json │ │ ├── EVA02-E-14-plus.json │ │ ├── RN101-quickgelu.json │ │ ├── RN50x16-quickgelu.json │ │ ├── RN50x4-quickgelu.json │ │ ├── RN50-quickgelu.json │ │ ├── RN50x64-quickgelu.json │ │ ├── convnext_base.json │ │ ├── convnext_base_w.json │ │ ├── convnext_large.json │ │ ├── convnext_large_d.json │ │ ├── convnext_small.json │ │ ├── convnext_tiny.json │ │ ├── ViTamin-B.json │ │ ├── ViTamin-L.json │ │ ├── ViTamin-S.json │ │ ├── convnext_base_w_320.json │ │ ├── convnext_large_d_320.json │ │ ├── convnext_xlarge.json │ │ ├── ViTamin-B-LTT.json │ │ ├── ViTamin-L-256.json │ │ ├── ViTamin-L-336.json │ │ ├── ViTamin-L2.json │ │ ├── ViTamin-S-LTT.json │ │ ├── convnext_xxlarge.json │ │ ├── convnext_xxlarge_320.json │ │ ├── ViTamin-L2-256.json │ │ ├── ViTamin-L2-336.json │ │ ├── ViTamin-XL-256.json │ │ ├── ViTamin-XL-336.json │ │ ├── ViTamin-XL-384.json │ │ ├── MobileCLIP-S1.json │ │ ├── MobileCLIP-S2.json │ │ ├── MobileCLIP-B.json │ │ ├── nllb-clip-base-siglip.json │ │ ├── nllb-clip-large-siglip.json │ │ ├── coca_roberta-ViT-B-32.json │ │ ├── ViT-L-14-CLIPA.json │ │ ├── ViT-L-14-CLIPA-336.json │ │ ├── ViT-H-14-CLIPA-336.json │ │ ├── ViT-H-14-CLIPA.json │ │ ├── ViT-bigG-14-CLIPA.json │ │ ├── ViT-bigG-14-CLIPA-336.json │ │ ├── coca_ViT-B-32.json │ │ ├── coca_ViT-L-14.json │ │ ├── coca_base.json │ │ ├── ViT-B-16-SigLIP.json │ │ ├── ViT-B-16-SigLIP-256.json │ │ ├── ViT-B-16-SigLIP-384.json │ │ ├── ViT-B-16-SigLIP-512.json │ │ ├── ViT-L-16-SigLIP-256.json │ │ ├── ViT-L-16-SigLIP-384.json │ │ ├── ViT-B-16-SigLIP-i18n-256.json │ │ ├── ViT-SO400M-14-SigLIP-378.json │ │ ├── ViT-SO400M-14-SigLIP-384.json │ │ ├── ViT-SO400M-14-SigLIP.json │ │ └── ViT-SO400M-16-SigLIP-i18n-256.json │ │ ├── constants.py │ │ ├── __init__.py │ │ └── hf_configs.py │ 
├── pytest.ini │ ├── .gitattributes │ ├── requirements-test.txt │ ├── MANIFEST.in │ ├── docs │ ├── CLIP.png │ ├── scaling.png │ ├── clip_loss.png │ ├── clip_recall.png │ ├── clip_val_loss.png │ ├── clip_zeroshot.png │ ├── clipa_acc_compute.png │ ├── effective_robustness.png │ ├── inverse_scaling_law.png │ ├── laion_clip_zeroshot.png │ ├── clipa_reduce_image_token.png │ ├── clipa_reduce_text_token.png │ ├── laion2b_clip_zeroshot_b32.png │ ├── laion_clip_zeroshot_b16.png │ ├── laion_clip_zeroshot_l14.png │ ├── laion_openai_compare_b32.jpg │ ├── laion_clip_zeroshot_b16_plus_240.png │ ├── clip_conceptual_captions.md │ └── script_examples │ │ ├── clipa │ │ ├── vit_b16 │ │ │ ├── i50_t16_pretrain.sh │ │ │ └── i50_t16_finetune.sh │ │ └── vit_l16 │ │ │ ├── i17_t16_pretrain.sh │ │ │ ├── i37_t8_pretrain.sh │ │ │ ├── i17_t16_finetune.sh │ │ │ └── i37_t8_finetune.sh │ │ ├── clipav2 │ │ └── vit_h14 │ │ │ ├── i50_t8_pretrain.sh │ │ │ ├── i257_t32_finetunex4.sh │ │ │ └── i577_t32_finetunex1.sh │ │ └── stability_example.sh │ ├── requirements.txt │ ├── requirements-training.txt │ ├── scripts │ ├── clipav1_vit_l16_i37_t8.sh │ ├── clipav2_vit_h14_i84_224_336_cl32_gap_datacomp1b.sh │ ├── biomedclip.sh │ ├── h14_224_32_finetune.sh │ └── h14_84_8_pretrain.sh │ ├── Makefile │ ├── tests │ ├── test_num_shards.py │ ├── test_hf_model.py │ └── test_inference_simple.py │ ├── .github │ └── workflows │ │ ├── clear-cache.yml │ │ └── python-publish.yml │ ├── CITATION.cff │ ├── LICENSE │ └── pyproject.toml ├── segment-anything ├── segment_anything.egg-info │ ├── dependency_links.txt │ ├── top_level.txt │ ├── requires.txt │ ├── PKG-INFO │ └── SOURCES.txt ├── segment_anything │ ├── utils │ │ └── __init__.py │ ├── modeling │ │ ├── __init__.py │ │ └── common.py │ └── __init__.py ├── setup.cfg ├── setup.py ├── linter.sh └── CONTRIBUTING.md ├── assets ├── example.png ├── SegExamples.png └── MedCLIP-SAMv2.png ├── zeroshot.sh ├── LICENSE └── zeroshot_scripts ├── zeroshot_brain_tumors.sh └── zeroshot_breast_tumors.sh /loss/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /saliency_maps/scripts/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/run/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/imageio/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/inference/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/nnunet/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/tests/__init__.py: 
-------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/batch_running/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/ensembling/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/evaluation/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/model_sharing/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/postprocessing/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/preprocessing/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/loss/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/dataset_conversion/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/experiment_planning/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/logging/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip_train/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/preprocessing/cropping/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/preprocessing/resampling/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- 
/weak_segmentation/nnunetv2/tests/integration_tests/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/dataloading/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/lr_scheduler/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/label_handling/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/plans_handling/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/batch_running/benchmarking/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/preprocessing/normalization/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/preprocessing/preprocessors/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/data_augmentation/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /segment-anything/segment_anything.egg-info/dependency_links.txt: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/batch_running/release_trainings/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /segment-anything/segment_anything.egg-info/top_level.txt: -------------------------------------------------------------------------------- 1 | segment_anything 2 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/experiment_planning/dataset_fingerprint/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- 
/weak_segmentation/nnunetv2/experiment_planning/experiment_planners/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/loss/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/version.py: -------------------------------------------------------------------------------- 1 | __version__ = '2.29.0' 2 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/batch_running/release_trainings/nnunetv2_v1/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/experiment_planning/plans_for_pretraining/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/data_augmentation/custom_transforms/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/benchmarking/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/lr_schedule/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/optimizer/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/sampling/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/dataset_conversion/datasets_for_integration_tests/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/data_augmentation/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/training_length/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/pytest.ini: -------------------------------------------------------------------------------- 1 | [pytest] 2 | markers = 3 | regression_test 4 | -------------------------------------------------------------------------------- 
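The pytest.ini above registers a custom `regression_test` marker for the open_clip test suite. A minimal illustration of how such a marker is applied and selected; the test file and test name here are hypothetical, not part of this repository:

import pytest

@pytest.mark.regression_test
def test_outputs_match_reference():
    # placeholder assertion standing in for a real regression check
    assert 2 + 2 == 4

# Select only the marked tests:   pytest -m regression_test
# Exclude them:                   pytest -m "not regression_test"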
/weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/network_architecture/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /assets/example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/assets/example.png -------------------------------------------------------------------------------- /assets/SegExamples.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/assets/SegExamples.png -------------------------------------------------------------------------------- /assets/MedCLIP-SAMv2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/assets/MedCLIP-SAMv2.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/.gitattributes: -------------------------------------------------------------------------------- 1 | *.py linguist-language=python 2 | *.ipynb linguist-documentation 3 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/requirements-test.txt: -------------------------------------------------------------------------------- 1 | pytest-split==0.8.0 2 | pytest==7.2.0 3 | transformers[sentencepiece] 4 | timm>=1.0.10 5 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/MANIFEST.in: -------------------------------------------------------------------------------- 1 | include src/open_clip/bpe_simple_vocab_16e6.txt.gz 2 | include src/open_clip/model_configs/*.json 3 | 4 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/CLIP.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/CLIP.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/scaling.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/scaling.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/requirements.txt: -------------------------------------------------------------------------------- 1 | torch>=1.9.0 2 | torchvision 3 | regex 4 | ftfy 5 | tqdm 6 | huggingface_hub 7 | safetensors 8 | timm 9 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/clip_loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/clip_loss.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/clip_recall.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/clip_recall.png 
-------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/tests/integration_tests/run_integration_test_trainingOnly_DDP.sh: -------------------------------------------------------------------------------- 1 | nnUNetv2_train $1 3d_fullres 0 -tr nnUNetTrainer_10epochs -num_gpus 2 2 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/clip_val_loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/clip_val_loss.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/clip_zeroshot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/clip_zeroshot.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/clipa_acc_compute.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/clipa_acc_compute.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/effective_robustness.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/effective_robustness.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/inverse_scaling_law.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/inverse_scaling_law.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/laion_clip_zeroshot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/laion_clip_zeroshot.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/clipa_reduce_image_token.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/clipa_reduce_image_token.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/clipa_reduce_text_token.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/clipa_reduce_text_token.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/laion2b_clip_zeroshot_b32.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/laion2b_clip_zeroshot_b32.png 
-------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/laion_clip_zeroshot_b16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/laion_clip_zeroshot_b16.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/laion_clip_zeroshot_l14.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/laion_clip_zeroshot_l14.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/laion_openai_compare_b32.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/laion_openai_compare_b32.jpg -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/laion_clip_zeroshot_b16_plus_240.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/docs/laion_clip_zeroshot_b16_plus_240.png -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/bpe_simple_vocab_16e6.txt.gz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HealthX-Lab/MedCLIP-SAMv2/HEAD/biomedclip_finetuning/open_clip/src/open_clip/bpe_simple_vocab_16e6.txt.gz -------------------------------------------------------------------------------- /segment-anything/segment_anything.egg-info/requires.txt: -------------------------------------------------------------------------------- 1 | 2 | [all] 3 | matplotlib 4 | pycocotools 5 | opencv-python 6 | onnx 7 | onnxruntime 8 | 9 | [dev] 10 | flake8 11 | isort 12 | black 13 | mypy 14 | -------------------------------------------------------------------------------- /segment-anything/segment_anything/utils/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Meta Platforms, Inc. and affiliates. 2 | # All rights reserved. 3 | 4 | # This source code is licensed under the license found in the 5 | # LICENSE file in the root directory of this source tree. 

--------------------------------------------------------------------------------
/weak_segmentation/nnunetv2/model_sharing/model_import.py:
--------------------------------------------------------------------------------
import zipfile

from nnunetv2.paths import nnUNet_results


def install_model_from_zip_file(zip_file: str):
    with zipfile.ZipFile(zip_file, 'r') as zip_ref:
        zip_ref.extractall(nnUNet_results)
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/requirements-training.txt:
--------------------------------------------------------------------------------
torch>=1.9.0
torchvision
webdataset>=0.2.5,<=0.2.86
regex
ftfy
tqdm
pandas
braceexpand
huggingface_hub
safetensors
transformers[sentencepiece]
timm>=1.0.10
fsspec
tensorboard
--------------------------------------------------------------------------------
/segment-anything/segment_anything.egg-info/PKG-INFO:
--------------------------------------------------------------------------------
Metadata-Version: 2.1
Name: segment-anything
Version: 1.0
Summary: UNKNOWN
Home-page: UNKNOWN
License: UNKNOWN
Platform: UNKNOWN
Provides-Extra: all
Provides-Extra: dev

UNKNOWN
--------------------------------------------------------------------------------
/saliency_maps/colab_requirements.txt:
--------------------------------------------------------------------------------
numpy==1.23.5
Pillow
typing-extensions
ttach
kornia
tqdm
opencv-python
ftfy
regex
scikit-image
ipython
git+https://github.com/openai/CLIP.git
open_clip_torch==2.23.0
transformers==4.35.2
matplotlib
grad_cam==1.4.6
--------------------------------------------------------------------------------
/weak_segmentation/nnunetv2/imageio/readme.md:
--------------------------------------------------------------------------------
- Derive your adapter from `BaseReaderWriter`.
- Reimplement all abstract methods.
- Make sure to support 2d and 3d input images (or raise an error).
- Place it in this folder or nnU-Net won't find it!
- Add it to LIST_OF_IO_CLASSES in `reader_writer_registry.py`.

Bam, you're done!
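To make the adapter steps above concrete, here is a minimal sketch. It is illustrative only: the abstract-method names (read_images, read_seg, write_seg), the supported_file_endings attribute, and the base-class import path follow upstream nnU-Net v2 and should be checked against this repository's BaseReaderWriter; the SimpleITK-based I/O and the hard-coded 2d spacing are placeholder choices, not the project's actual adapter.

from typing import List, Tuple, Union

import numpy as np
import SimpleITK as sitk

# Assumed import path, as in upstream nnU-Net v2.
from nnunetv2.imageio.base_reader_writer import BaseReaderWriter


class ExampleTiffIO(BaseReaderWriter):
    supported_file_endings = ['.tif', '.tiff']

    def read_images(self, image_fnames: Union[List[str], Tuple[str, ...]]) -> Tuple[np.ndarray, dict]:
        images = []
        for f in image_fnames:
            img = sitk.GetArrayFromImage(sitk.ReadImage(f)).astype(np.float32)
            if img.ndim == 2:
                # promote 2d images to a single-slice 3d volume
                img = img[None]
            elif img.ndim != 3:
                raise RuntimeError(f'Only 2d/3d images are supported, got shape {img.shape} for {f}')
            images.append(img[None])  # add the channel axis, giving (c, x, y, z)
        # the properties dict must at least carry a spacing; 999 marks the fake through-plane axis of 2d data
        return np.vstack(images), {'spacing': (999.0, 1.0, 1.0)}

    def read_seg(self, seg_fname: str) -> Tuple[np.ndarray, dict]:
        return self.read_images((seg_fname,))

    def write_seg(self, seg: np.ndarray, output_fname: str, properties: dict) -> None:
        sitk.WriteImage(sitk.GetImageFromArray(seg.astype(np.uint8)), output_fname)

The class would then be added to LIST_OF_IO_CLASSES in reader_writer_registry.py so that nnU-Net can discover it.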
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/scripts/clipav1_vit_l16_i37_t8.sh:
--------------------------------------------------------------------------------
# eval on a single gpu
CUDA_VISIBLE_DEVICES=2 TORCH_CUDNN_V8_API_ENABLED=1 TFDS_PREFETCH_SIZE=8192 python3 -m open_clip_train.main \
    --model ViT-L-16-CL32-GAP \
    --pretrained "/path/to/clipa_vit_l16_i37_t8.pt" \
    --seed 0 \
    --imagenet-val '/path/to/ImageNet/val'
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/docs/clip_conceptual_captions.md:
--------------------------------------------------------------------------------
## Additional training curves for CLIP on Conceptual Captions

# Zero shot accuracy
![](/docs/clip_zeroshot.png)

# Training loss curve
![](/docs/clip_loss.png)

# Validation loss curve
![](/docs/clip_val_loss.png)

# Validation recall
![](/docs/clip_recall.png)
--------------------------------------------------------------------------------
/weak_segmentation/nnunetv2/preprocessing/normalization/readme.md:
--------------------------------------------------------------------------------
The channel_names entry in dataset.json only determines the normalization scheme. So if you want to use something different, you can just:
- create a new subclass of ImageNormalization
- map your custom channel identifier to that subclass in channel_name_to_normalization_mapping
- run plan and preprocess again with your custom normalization scheme
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-16.json:
--------------------------------------------------------------------------------
{
    "embed_dim": 512,
    "vision_cfg": {
        "image_size": 224,
        "layers": 12,
        "width": 768,
        "patch_size": 16
    },
    "text_cfg": {
        "context_length": 77,
        "vocab_size": 49408,
        "width": 512,
        "heads": 8,
        "layers": 12
    }
}
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-32.json:
--------------------------------------------------------------------------------
{
    "embed_dim": 512,
    "vision_cfg": {
        "image_size": 224,
        "layers": 12,
        "width": 768,
        "patch_size": 32
    },
    "text_cfg": {
        "context_length": 77,
        "vocab_size": 49408,
        "width": 512,
        "heads": 8,
        "layers": 12
    }
}
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-M-16.json:
--------------------------------------------------------------------------------
{
    "embed_dim": 512,
    "vision_cfg": {
        "image_size": 224,
        "layers": 12,
        "width": 512,
        "patch_size": 16
    },
    "text_cfg": {
        "context_length": 77,
        "vocab_size": 49408,
        "width": 512,
        "heads": 8,
        "layers": 12
    }
}
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-M-32.json:
--------------------------------------------------------------------------------
{
    "embed_dim": 512,
    "vision_cfg": {
        "image_size": 224,
        "layers": 12,
        "width": 512,
"patch_size": 32 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 512, 13 | "heads": 8, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-S-16.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 384, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 384, 7 | "patch_size": 16 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 384, 13 | "heads": 6, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-S-32.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 384, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 384, 7 | "patch_size": 32 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 384, 13 | "heads": 6, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-14.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 24, 6 | "width": 1024, 7 | "patch_size": 14 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 768, 13 | "heads": 12, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-16.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 24, 6 | "width": 1024, 7 | "patch_size": 16 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 768, 13 | "heads": 12, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-M-32-alt.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 384, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 512, 7 | "patch_size": 32 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 384, 13 | "heads": 6, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-S-16-alt.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 256, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 384, 7 | "patch_size": 16 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 256, 13 | "heads": 4, 14 | "layers": 10 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-S-32-alt.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 256, 3 | "vision_cfg": { 4 | "image_size": 
224, 5 | "layers": 12, 6 | "width": 384, 7 | "patch_size": 32 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 256, 13 | "heads": 4, 14 | "layers": 10 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-16-plus-240.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 640, 3 | "vision_cfg": { 4 | "image_size": 240, 5 | "layers": 12, 6 | "width": 896, 7 | "patch_size": 16 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 640, 13 | "heads": 10, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-16-plus.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 640, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 896, 7 | "patch_size": 16 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 640, 13 | "heads": 10, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-32-256.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "image_size": 256, 5 | "layers": 12, 6 | "width": 768, 7 | "patch_size": 32 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 512, 13 | "heads": 8, 14 | "layers": 12 15 | } 16 | } 17 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-32-plus-256.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 640, 3 | "vision_cfg": { 4 | "image_size": 256, 5 | "layers": 12, 6 | "width": 896, 7 | "patch_size": 32 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 640, 13 | "heads": 10, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-14-280.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 280, 5 | "layers": 24, 6 | "width": 1024, 7 | "patch_size": 14 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 768, 13 | "heads": 12, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-14-336.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 336, 5 | "layers": 24, 6 | "width": 1024, 7 | "patch_size": 14 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 768, 13 | "heads": 12, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-16-320.json: 
-------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 320, 5 | "layers": 24, 6 | "width": 1024, 7 | "patch_size": 16 8 | }, 9 | "text_cfg": { 10 | "context_length": 77, 11 | "vocab_size": 49408, 12 | "width": 768, 13 | "heads": 12, 14 | "layers": 12 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/mt5-base-ViT-B-32.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 768, 7 | "patch_size": 32 8 | }, 9 | "text_cfg": { 10 | "hf_model_name": "google/mt5-base", 11 | "hf_tokenizer_name": "google/mt5-base", 12 | "hf_pooler_type": "mean_pooler" 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/xlm-roberta-base-ViT-B-32.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 768, 7 | "patch_size": 32 8 | }, 9 | "text_cfg": { 10 | "hf_model_name": "xlm-roberta-base", 11 | "hf_tokenizer_name": "xlm-roberta-base", 12 | "hf_pooler_type": "mean_pooler" 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /segment-anything/setup.cfg: -------------------------------------------------------------------------------- 1 | [isort] 2 | line_length=100 3 | multi_line_output=3 4 | include_trailing_comma=True 5 | known_standard_library=numpy,setuptools 6 | skip_glob=*/__init__.py 7 | known_myself=segment_anything 8 | known_third_party=matplotlib,cv2,torch,torchvision,pycocotools,onnx,black,isort 9 | no_lines_before=STDLIB,THIRDPARTY 10 | sections=FUTURE,STDLIB,THIRDPARTY,MYSELF,FIRSTPARTY,LOCALFOLDER 11 | default_section=FIRSTPARTY 12 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-H-14.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 32, 6 | "width": 1280, 7 | "head_width": 80, 8 | "patch_size": 14 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 1024, 14 | "heads": 16, 15 | "layers": 24 16 | } 17 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-H-16.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 32, 6 | "width": 1280, 7 | "head_width": 80, 8 | "patch_size": 16 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 1024, 14 | "heads": 16, 15 | "layers": 24 16 | } 17 | } -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/data_augmentation/custom_transforms/manipulating_data_dict.py: -------------------------------------------------------------------------------- 1 | from batchgenerators.transforms.abstract_transforms import AbstractTransform 2 | 3 | 4 | class RemoveKeyTransform(AbstractTransform): 5 | def 
__init__(self, key_to_remove: str): 6 | self.key_to_remove = key_to_remove 7 | 8 | def __call__(self, **data_dict): 9 | _ = data_dict.pop(self.key_to_remove, None) 10 | return data_dict 11 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-16-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 224, 6 | "layers": 12, 7 | "width": 768, 8 | "patch_size": 16 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 512, 14 | "heads": 8, 15 | "layers": 12 16 | } 17 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-32-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 224, 6 | "layers": 12, 7 | "width": 768, 8 | "patch_size": 32 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 512, 14 | "heads": 8, 15 | "layers": 12 16 | } 17 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-H-14-378.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 378, 5 | "layers": 32, 6 | "width": 1280, 7 | "head_width": 80, 8 | "patch_size": 14 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 1024, 14 | "heads": 16, 15 | "layers": 24 16 | } 17 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-M-16-alt.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 384, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 512, 7 | "patch_size": 16, 8 | "ls_init_value": 1e-4 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 384, 14 | "heads": 6, 15 | "layers": 12 16 | } 17 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/roberta-ViT-B-32.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 224, 6 | "layers": 12, 7 | "width": 768, 8 | "patch_size": 32 9 | }, 10 | "text_cfg": { 11 | "hf_model_name": "roberta-base", 12 | "hf_tokenizer_name": "roberta-base", 13 | "hf_pooler_type": "mean_pooler" 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-14-336-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 336, 6 | "layers": 24, 7 | "width": 1024, 8 | "patch_size": 14 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 768, 14 | "heads": 12, 15 | "layers": 12 16 | } 17 | } 
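The JSON files in this model_configs directory are the named architectures that open_clip resolves at model-creation time; the model name is the filename without the .json extension. A brief sketch of how one of these configs is typically consumed, following the upstream open_clip API (this fork may differ, and no pretrained checkpoint is passed here):

import open_clip

# 'ViT-B-16' resolves to model_configs/ViT-B-16.json shown earlier in this listing
model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('ViT-B-16')
tokenizer = open_clip.get_tokenizer('ViT-B-16')

# every config file in model_configs/ shows up here by name
print(open_clip.list_models())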
-------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-14-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 224, 6 | "layers": 24, 7 | "width": 1024, 8 | "patch_size": 14 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 768, 14 | "heads": 12, 15 | "layers": 12 16 | } 17 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/mt5-xl-ViT-H-14.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 32, 6 | "width": 1280, 7 | "head_width": 80, 8 | "patch_size": 14 9 | }, 10 | "text_cfg": { 11 | "hf_model_name": "google/mt5-xl", 12 | "hf_tokenizer_name": "google/mt5-xl", 13 | "hf_pooler_type": "mean_pooler" 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/Makefile: -------------------------------------------------------------------------------- 1 | install: ## [Local development] Upgrade pip, install requirements, install package. 2 | python -m pip install -U pip 3 | python -m pip install -e . 4 | 5 | install-training: 6 | python -m pip install -r requirements-training.txt 7 | 8 | install-test: ## [Local development] Install test requirements 9 | python -m pip install -r requirements-test.txt 10 | 11 | test: ## [Local development] Run unit tests 12 | python -m pytest -x -s -v tests 13 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/xlm-roberta-large-ViT-H-14.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 32, 6 | "width": 1280, 7 | "head_width": 80, 8 | "patch_size": 14 9 | }, 10 | "text_cfg": { 11 | "hf_model_name": "xlm-roberta-large", 12 | "hf_tokenizer_name": "xlm-roberta-large", 13 | "hf_pooler_type": "mean_pooler" 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /segment-anything/segment_anything/modeling/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Meta Platforms, Inc. and affiliates. 2 | # All rights reserved. 3 | 4 | # This source code is licensed under the license found in the 5 | # LICENSE file in the root directory of this source tree. 

from .sam import Sam
from .image_encoder import ImageEncoderViT
from .mask_decoder import MaskDecoder
from .prompt_encoder import PromptEncoder
from .transformer import TwoWayTransformer
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-e-14.json:
--------------------------------------------------------------------------------
{
    "embed_dim": 1280,
    "vision_cfg": {
        "image_size": 224,
        "layers": 56,
        "width": 1792,
        "head_width": 112,
        "mlp_ratio": 8.5715,
        "patch_size": 14
    },
    "text_cfg": {
        "context_length": 77,
        "vocab_size": 49408,
        "width": 1280,
        "heads": 20,
        "layers": 36
    }
}
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-g-14.json:
--------------------------------------------------------------------------------
{
    "embed_dim": 1024,
    "vision_cfg": {
        "image_size": 224,
        "layers": 40,
        "width": 1408,
        "head_width": 88,
        "mlp_ratio": 4.3637,
        "patch_size": 14
    },
    "text_cfg": {
        "context_length": 77,
        "vocab_size": 49408,
        "width": 1024,
        "heads": 16,
        "layers": 24
    }
}
--------------------------------------------------------------------------------
/weak_segmentation/nnunetv2/configuration.py:
--------------------------------------------------------------------------------
import os

from nnunetv2.utilities.default_n_proc_DA import get_allowed_n_proc_DA

default_num_processes = 8 if 'nnUNet_def_n_proc' not in os.environ else int(os.environ['nnUNet_def_n_proc'])

ANISO_THRESHOLD = 3  # determines when a sample is considered anisotropic (3 means that the spacing in the low
# resolution axis must be 3x as large as the next largest spacing)

default_n_proc_DA = get_allowed_n_proc_DA()
--------------------------------------------------------------------------------
/weak_segmentation/nnunetv2/training/data_augmentation/custom_transforms/limited_length_multithreaded_augmenter.py:
--------------------------------------------------------------------------------
from batchgenerators.dataloading.nondet_multi_threaded_augmenter import NonDetMultiThreadedAugmenter


class LimitedLenWrapper(NonDetMultiThreadedAugmenter):
    def __init__(self, my_imaginary_length, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.len = my_imaginary_length

    def __len__(self):
        return self.len
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-H-14-quickgelu.json:
--------------------------------------------------------------------------------
{
    "embed_dim": 1024,
    "quick_gelu": true,
    "vision_cfg": {
        "image_size": 224,
        "layers": 32,
        "width": 1280,
        "head_width": 80,
        "patch_size": 14
    },
    "text_cfg": {
        "context_length": 77,
        "vocab_size": 49408,
        "width": 1024,
        "heads": 16,
        "layers": 24
    }
}
--------------------------------------------------------------------------------
/biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-bigG-14.json:
--------------------------------------------------------------------------------
{
    "embed_dim": 1280,
    "vision_cfg": {
        "image_size": 224,
        "layers":
48, 6 | "width": 1664, 7 | "head_width": 104, 8 | "mlp_ratio": 4.9231, 9 | "patch_size": 14 10 | }, 11 | "text_cfg": { 12 | "context_length": 77, 13 | "vocab_size": 49408, 14 | "width": 1280, 15 | "heads": 20, 16 | "layers": 32 17 | } 18 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-H-14-378-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 378, 6 | "layers": 32, 7 | "width": 1280, 8 | "head_width": 80, 9 | "patch_size": 14 10 | }, 11 | "text_cfg": { 12 | "context_length": 77, 13 | "vocab_size": 49408, 14 | "width": 1024, 15 | "heads": 16, 16 | "layers": 24 17 | } 18 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/nllb-clip-base.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 768, 7 | "patch_size": 32 8 | }, 9 | "text_cfg": { 10 | "hf_model_name": "facebook/nllb-200-distilled-600M", 11 | "hf_tokenizer_name": "facebook/nllb-200-distilled-600M", 12 | "hf_proj_type": "linear", 13 | "hf_pooler_type": "cls_pooler" 14 | } 15 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/scripts/clipav2_vit_h14_i84_224_336_cl32_gap_datacomp1b.sh: -------------------------------------------------------------------------------- 1 | CUDA_VISIBLE_DEVICES=1 python3 -m open_clip_train.main \ 2 | --model ViT-H-14-CL32-GAP-BigVision \ 3 | --pretrained "/path/to/vit_h14_i84_224_336_cl32_gap_datacomp1b.pt" \ 4 | --force-image-size 336 \ 5 | --square-resize-only \ 6 | --interpolation 'bilinear' \ 7 | --image-mean 0.485 0.456 0.406 \ 8 | --image-std 0.229 0.224 0.225 \ 9 | --seed 0 \ 10 | --imagenet-val '/path/to/ImageNet/val' 11 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip_train/precision.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from contextlib import suppress 3 | from functools import partial 4 | 5 | 6 | def get_autocast(precision, device_type='cuda'): 7 | if precision =='amp': 8 | amp_dtype = torch.float16 9 | elif precision == 'amp_bfloat16' or precision == 'amp_bf16': 10 | amp_dtype = torch.bfloat16 11 | else: 12 | return suppress 13 | 14 | return partial(torch.amp.autocast, device_type=device_type, dtype=amp_dtype) -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/RN101.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": [ 6 | 3, 7 | 4, 8 | 23, 9 | 3 10 | ], 11 | "width": 64, 12 | "patch_size": null 13 | }, 14 | "text_cfg": { 15 | "context_length": 77, 16 | "vocab_size": 49408, 17 | "width": 512, 18 | "heads": 8, 19 | "layers": 12 20 | } 21 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/RN50.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 
4 | "image_size": 224, 5 | "layers": [ 6 | 3, 7 | 4, 8 | 6, 9 | 3 10 | ], 11 | "width": 64, 12 | "patch_size": null 13 | }, 14 | "text_cfg": { 15 | "context_length": 77, 16 | "vocab_size": 49408, 17 | "width": 512, 18 | "heads": 8, 19 | "layers": 12 20 | } 21 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/RN50x16.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 384, 5 | "layers": [ 6 | 6, 7 | 8, 8 | 18, 9 | 8 10 | ], 11 | "width": 96, 12 | "patch_size": null 13 | }, 14 | "text_cfg": { 15 | "context_length": 77, 16 | "vocab_size": 49408, 17 | "width": 768, 18 | "heads": 12, 19 | "layers": 12 20 | } 21 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/RN50x4.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 640, 3 | "vision_cfg": { 4 | "image_size": 288, 5 | "layers": [ 6 | 4, 7 | 6, 8 | 10, 9 | 6 10 | ], 11 | "width": 80, 12 | "patch_size": null 13 | }, 14 | "text_cfg": { 15 | "context_length": 77, 16 | "vocab_size": 49408, 17 | "width": 640, 18 | "heads": 10, 19 | "layers": 12 20 | } 21 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/RN50x64.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 448, 5 | "layers": [ 6 | 3, 7 | 15, 8 | 36, 9 | 10 10 | ], 11 | "width": 128, 12 | "patch_size": null 13 | }, 14 | "text_cfg": { 15 | "context_length": 77, 16 | "vocab_size": 49408, 17 | "width": 1024, 18 | "heads": 16, 19 | "layers": 12 20 | } 21 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/vit_medium_patch16_gap_256.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "timm_model_name": "vit_medium_patch16_gap_256", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "image_size": 256 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 512, 14 | "heads": 8, 15 | "layers": 12 16 | } 17 | } -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/tensor_utilities.py: -------------------------------------------------------------------------------- 1 | from typing import Union, List, Tuple 2 | 3 | import numpy as np 4 | import torch 5 | 6 | 7 | def sum_tensor(inp: torch.Tensor, axes: Union[np.ndarray, Tuple, List], keepdim: bool = False) -> torch.Tensor: 8 | axes = np.unique(axes).astype(int) 9 | if keepdim: 10 | for ax in axes: 11 | inp = inp.sum(int(ax), keepdim=True) 12 | else: 13 | for ax in sorted(axes, reverse=True): 14 | inp = inp.sum(int(ax)) 15 | return inp 16 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-bigG-14-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1280, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 224, 6 | "layers": 48, 7 | 
"width": 1664, 8 | "head_width": 104, 9 | "mlp_ratio": 4.9231, 10 | "patch_size": 14 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 1280, 16 | "heads": 20, 17 | "layers": 32 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/nllb-clip-large.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 32, 6 | "width": 1280, 7 | "head_width": 80, 8 | "patch_size": 14 9 | }, 10 | "text_cfg": { 11 | "hf_model_name": "facebook/nllb-200-distilled-1.3B", 12 | "hf_tokenizer_name": "facebook/nllb-200-distilled-1.3B", 13 | "hf_proj_type": "linear", 14 | "hf_pooler_type": "cls_pooler" 15 | } 16 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/swin_base_patch4_window7_224.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 640, 3 | "vision_cfg": { 4 | "timm_model_name": "swin_base_patch4_window7_224", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "image_size": 224 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 640, 14 | "heads": 10, 15 | "layers": 12 16 | } 17 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/EVA01-g-14.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "timm_model_name": "eva_giant_patch14_224", 6 | "timm_model_pretrained": false, 7 | "timm_pool": "token", 8 | "timm_proj": null 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 768, 14 | "heads": 12, 15 | "layers": 12 16 | }, 17 | "custom_text": true 18 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/vit_relpos_medium_patch16_cls_224.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "timm_model_name": "vit_relpos_medium_patch16_cls_224", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "image_size": 224 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 512, 14 | "heads": 8, 15 | "layers": 12 16 | } 17 | } -------------------------------------------------------------------------------- /segment-anything/segment_anything/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Meta Platforms, Inc. and affiliates. 2 | # All rights reserved. 3 | 4 | # This source code is licensed under the license found in the 5 | # LICENSE file in the root directory of this source tree. 
6 | 7 | from .build_sam import ( 8 | build_sam, 9 | build_sam_vit_h, 10 | build_sam_vit_l, 11 | build_sam_vit_b, 12 | sam_model_registry, 13 | ) 14 | from .predictor import SamPredictor 15 | from .automatic_mask_generator import SamAutomaticMaskGenerator 16 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/EVA01-g-14-plus.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "timm_model_name": "eva_giant_patch14_224", 6 | "timm_model_pretrained": false, 7 | "timm_pool": "token", 8 | "timm_proj": null 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 1024, 14 | "heads": 16, 15 | "layers": 24 16 | }, 17 | "custom_text": true 18 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/EVA02-B-16.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "timm_model_name": "eva02_base_patch16_clip_224", 6 | "timm_model_pretrained": false, 7 | "timm_pool": "token", 8 | "timm_proj": null 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 512, 14 | "heads": 8, 15 | "layers": 12 16 | }, 17 | "custom_text": true 18 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/EVA02-L-14.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "timm_model_name": "eva02_large_patch14_clip_224", 6 | "timm_model_pretrained": false, 7 | "timm_pool": "token", 8 | "timm_proj": null 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 768, 14 | "heads": 12, 15 | "layers": 12 16 | }, 17 | "custom_text": true 18 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/EVA02-E-14.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "timm_model_name": "eva02_enormous_patch14_clip_224", 6 | "timm_model_pretrained": false, 7 | "timm_pool": "token", 8 | "timm_proj": null 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 1024, 14 | "heads": 16, 15 | "layers": 24 16 | }, 17 | "custom_text": true 18 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/EVA02-L-14-336.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 336, 5 | "timm_model_name": "eva02_large_patch14_clip_336", 6 | "timm_model_pretrained": false, 7 | "timm_pool": "token", 8 | "timm_proj": null 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 768, 14 | "heads": 12, 15 | "layers": 12 16 | }, 17 | "custom_text": true 18 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/EVA02-E-14-plus.json: 
-------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "timm_model_name": "eva02_enormous_patch14_clip_224", 6 | "timm_model_pretrained": false, 7 | "timm_pool": "token", 8 | "timm_proj": null 9 | }, 10 | "text_cfg": { 11 | "context_length": 77, 12 | "vocab_size": 49408, 13 | "width": 1280, 14 | "heads": 20, 15 | "layers": 32 16 | }, 17 | "custom_text": true 18 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/RN101-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 224, 6 | "layers": [ 7 | 3, 8 | 4, 9 | 23, 10 | 3 11 | ], 12 | "width": 64, 13 | "patch_size": null 14 | }, 15 | "text_cfg": { 16 | "context_length": 77, 17 | "vocab_size": 49408, 18 | "width": 512, 19 | "heads": 8, 20 | "layers": 12 21 | } 22 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/RN50x16-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 384, 6 | "layers": [ 7 | 6, 8 | 8, 9 | 18, 10 | 8 11 | ], 12 | "width": 96, 13 | "patch_size": null 14 | }, 15 | "text_cfg": { 16 | "context_length": 77, 17 | "vocab_size": 49408, 18 | "width": 768, 19 | "heads": 12, 20 | "layers": 12 21 | } 22 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/RN50x4-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 640, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 288, 6 | "layers": [ 7 | 4, 8 | 6, 9 | 10, 10 | 6 11 | ], 12 | "width": 80, 13 | "patch_size": null 14 | }, 15 | "text_cfg": { 16 | "context_length": 77, 17 | "vocab_size": 49408, 18 | "width": 640, 19 | "heads": 10, 20 | "layers": 12 21 | } 22 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/RN50-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 224, 6 | "layers": [ 7 | 3, 8 | 4, 9 | 6, 10 | 3 11 | ], 12 | "width": 64, 13 | "patch_size": null 14 | }, 15 | "text_cfg": { 16 | "context_length": 77, 17 | "vocab_size": 49408, 18 | "width": 512, 19 | "heads": 8, 20 | "layers": 12 21 | } 22 | } 23 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/RN50x64-quickgelu.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "quick_gelu": true, 4 | "vision_cfg": { 5 | "image_size": 448, 6 | "layers": [ 7 | 3, 8 | 15, 9 | 36, 10 | 10 11 | ], 12 | "width": 128, 13 | "patch_size": null 14 | }, 15 | "text_cfg": { 16 | "context_length": 77, 17 | "vocab_size": 49408, 18 | "width": 1024, 19 | "heads": 16, 20 | "layers": 12 21 | } 22 | } -------------------------------------------------------------------------------- 
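The *.json files under open_clip/src/open_clip/model_configs/ are resolved by model name when a model is created, rather than being loaded directly. A minimal sketch of that lookup in Python, using the RN50 config shown above; the 'openai' pretrained tag is an illustrative assumption and is not part of these config files:

    import open_clip

    # Model names registered from the model_configs/ directory
    # (the RN*, ViT-*, convnext_* and ViTamin-* configs in this dump).
    print(open_clip.list_models())

    # Raw config dict for one of the files shown above.
    cfg = open_clip.get_model_config("RN50")
    print(cfg["embed_dim"], cfg["vision_cfg"]["image_size"])

    # Build the model and matching preprocessing from that config;
    # the pretrained tag is only an example.
    model, _, preprocess = open_clip.create_model_and_transforms("RN50", pretrained="openai")
    tokenizer = open_clip.get_tokenizer("RN50")
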
/biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_base.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_base", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 512, 16 | "heads": 8, 17 | "layers": 12 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_base_w.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 640, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_base", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 256 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 640, 16 | "heads": 10, 17 | "layers": 12 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_large.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_large", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 768, 16 | "heads": 12, 17 | "layers": 12 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_large_d.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_large", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "mlp", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 256 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 768, 16 | "heads": 12, 17 | "layers": 16 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_small.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_small", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 512, 16 | "heads": 8, 17 | "layers": 12 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_tiny.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_tiny", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | 
"timm_drop_path": 0.1, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 512, 16 | "heads": 8, 17 | "layers": 12 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-B.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_base_224", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 512, 16 | "heads": 8, 17 | "layers": 12 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-L.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_large_224", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 768, 16 | "heads": 12, 17 | "layers": 12 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-S.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 384, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_small_224", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 384, 16 | "heads": 6, 17 | "layers": 12 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_base_w_320.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 640, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_base", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 320 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 640, 16 | "heads": 10, 17 | "layers": 12 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_large_d_320.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_large", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "mlp", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 320 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 768, 16 | "heads": 12, 17 | "layers": 16 18 | } 19 | } -------------------------------------------------------------------------------- 
/biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_xlarge.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_xlarge", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 256 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 1024, 16 | "heads": 16, 17 | "layers": 20 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-B-LTT.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_base_224", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 768, 16 | "heads": 12, 17 | "layers": 12 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-L-256.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_large_256", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 256 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 768, 16 | "heads": 12, 17 | "layers": 12 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-L-336.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_large_336", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 336 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 768, 16 | "heads": 12, 17 | "layers": 12 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-L2.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_large2_224", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 1024, 16 | "heads": 16, 17 | "layers": 24 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-S-LTT.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "timm_model_name": 
"vitamin_small_224", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 768, 16 | "heads": 12, 17 | "layers": 12 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_xxlarge.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_xxlarge", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 256 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 1024, 16 | "heads": 16, 17 | "layers": 24 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/convnext_xxlarge_320.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "timm_model_name": "convnext_xxlarge", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 320 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 1024, 16 | "heads": 16, 17 | "layers": 24 18 | } 19 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-L2-256.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_large2_256", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 256 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 1024, 16 | "heads": 16, 17 | "layers": 24 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-L2-336.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_large2_336", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 336 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 1024, 16 | "heads": 16, 17 | "layers": 24 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-XL-256.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1152, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_xlarge_256", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 256 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 
49408, 15 | "width": 1152, 16 | "heads": 16, 17 | "layers": 27 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-XL-336.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1152, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_xlarge_336", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 336 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 1152, 16 | "heads": 16, 17 | "layers": 27 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViTamin-XL-384.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1152, 3 | "vision_cfg": { 4 | "timm_model_name": "vitamin_xlarge_384", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "", 7 | "timm_proj": "linear", 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.1, 10 | "image_size": 256 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 1152, 16 | "heads": 16, 17 | "layers": 27 18 | }, 19 | "custom_text": true 20 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/constants.py: -------------------------------------------------------------------------------- 1 | OPENAI_DATASET_MEAN = (0.48145466, 0.4578275, 0.40821073) 2 | OPENAI_DATASET_STD = (0.26862954, 0.26130258, 0.27577711) 3 | IMAGENET_MEAN = (0.485, 0.456, 0.406) 4 | IMAGENET_STD = (0.229, 0.224, 0.225) 5 | INCEPTION_MEAN = (0.5, 0.5, 0.5) 6 | INCEPTION_STD = (0.5, 0.5, 0.5) 7 | 8 | # Default name for a weights file hosted on the Huggingface Hub. 
9 | HF_WEIGHTS_NAME = "open_clip_pytorch_model.bin" # default pytorch pkl 10 | HF_SAFE_WEIGHTS_NAME = "open_clip_model.safetensors" # safetensors version 11 | HF_CONFIG_NAME = 'open_clip_config.json' 12 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/network_initialization.py: -------------------------------------------------------------------------------- 1 | from torch import nn 2 | 3 | 4 | class InitWeights_He(object): 5 | def __init__(self, neg_slope=1e-2): 6 | self.neg_slope = neg_slope 7 | 8 | def __call__(self, module): 9 | if isinstance(module, nn.Conv3d) or isinstance(module, nn.Conv2d) or isinstance(module, nn.ConvTranspose2d) or isinstance(module, nn.ConvTranspose3d): 10 | module.weight = nn.init.kaiming_normal_(module.weight, a=self.neg_slope) 11 | if module.bias is not None: 12 | module.bias = nn.init.constant_(module.bias, 0) 13 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/MobileCLIP-S1.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "timm_model_name": "fastvit_mci1", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "avg", 7 | "timm_proj": null, 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.0, 10 | "image_size": 256 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 512, 16 | "heads": 8, 17 | "layers": 12, 18 | "no_causal_mask": true 19 | }, 20 | "custom_text": true 21 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/MobileCLIP-S2.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "timm_model_name": "fastvit_mci2", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "avg", 7 | "timm_proj": null, 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.0, 10 | "image_size": 256 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 512, 16 | "heads": 8, 17 | "layers": 12, 18 | "no_causal_mask": true 19 | }, 20 | "custom_text": true 21 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/MobileCLIP-B.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "timm_model_name": "vit_base_mci_224", 5 | "timm_model_pretrained": false, 6 | "timm_pool": "token", 7 | "timm_proj": null, 8 | "timm_drop": 0.0, 9 | "timm_drop_path": 0.0, 10 | "image_size": 224 11 | }, 12 | "text_cfg": { 13 | "context_length": 77, 14 | "vocab_size": 49408, 15 | "width": 512, 16 | "heads": 8, 17 | "layers": 12, 18 | "no_causal_mask": false 19 | }, 20 | "custom_text": true 21 | } -------------------------------------------------------------------------------- /segment-anything/setup.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Meta Platforms, Inc. and affiliates. 2 | # All rights reserved. 3 | 4 | # This source code is licensed under the license found in the 5 | # LICENSE file in the root directory of this source tree. 
6 | 7 | from setuptools import find_packages, setup 8 | 9 | setup( 10 | name="segment_anything", 11 | version="1.0", 12 | install_requires=[], 13 | packages=find_packages(exclude="notebooks"), 14 | extras_require={ 15 | "all": ["matplotlib", "pycocotools", "opencv-python", "onnx", "onnxruntime"], 16 | "dev": ["flake8", "isort", "black", "mypy"], 17 | }, 18 | ) 19 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/nllb-clip-base-siglip.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "custom_text": true, 4 | "init_logit_bias": -10, 5 | "vision_cfg": { 6 | "image_size": 384, 7 | "timm_model_name": "vit_base_patch16_siglip_384", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "hf_model_name": "facebook/nllb-200-distilled-600M", 14 | "hf_tokenizer_name": "facebook/nllb-200-distilled-600M", 15 | "hf_proj_type": "linear", 16 | "hf_pooler_type": "cls_pooler" 17 | } 18 | } -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/lr_schedule/nnUNetTrainerCosAnneal.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch.optim.lr_scheduler import CosineAnnealingLR 3 | 4 | from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer 5 | 6 | 7 | class nnUNetTrainerCosAnneal(nnUNetTrainer): 8 | def configure_optimizers(self): 9 | optimizer = torch.optim.SGD(self.network.parameters(), self.initial_lr, weight_decay=self.weight_decay, 10 | momentum=0.99, nesterov=True) 11 | lr_scheduler = CosineAnnealingLR(optimizer, T_max=self.num_epochs) 12 | return optimizer, lr_scheduler 13 | 14 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/nllb-clip-large-siglip.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1152, 3 | "custom_text": true, 4 | "init_logit_bias": -10, 5 | "vision_cfg": { 6 | "image_size": 384, 7 | "timm_model_name": "vit_so400m_patch14_siglip_384", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "hf_model_name": "facebook/nllb-200-distilled-1.3B", 14 | "hf_tokenizer_name": "facebook/nllb-200-distilled-1.3B", 15 | "hf_proj_type": "linear", 16 | "hf_pooler_type": "cls_pooler" 17 | } 18 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/coca_roberta-ViT-B-32.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 768, 7 | "patch_size": 32, 8 | "output_tokens": true 9 | }, 10 | "text_cfg": { 11 | "hf_model_name": "roberta-base", 12 | "hf_tokenizer_name": "roberta-base", 13 | "hf_proj_type": "linear", 14 | "width": 768, 15 | "output_tokens": true 16 | }, 17 | "multimodal_cfg": { 18 | "context_length": 76, 19 | "width": 768, 20 | "heads": 8, 21 | "layers": 12 22 | }, 23 | "custom_text": true 24 | } 25 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-14-CLIPA.json: 
-------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 24, 6 | "width": 1024, 7 | "patch_size": 14, 8 | "no_ln_pre": true, 9 | "pool_type": "avg", 10 | "final_ln_after_pool": true 11 | }, 12 | "text_cfg": { 13 | "context_length": 32, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "bert-base-uncased", 16 | "tokenizer_kwargs": { 17 | "strip_sep_token": true 18 | }, 19 | "width": 768, 20 | "heads": 12, 21 | "layers": 12, 22 | "pool_type": "last", 23 | "no_causal_mask": true 24 | } 25 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-14-CLIPA-336.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 336, 5 | "layers": 24, 6 | "width": 1024, 7 | "patch_size": 14, 8 | "no_ln_pre": true, 9 | "pool_type": "avg", 10 | "final_ln_after_pool": true 11 | }, 12 | "text_cfg": { 13 | "context_length": 32, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "bert-base-uncased", 16 | "tokenizer_kwargs": { 17 | "strip_sep_token": true 18 | }, 19 | "width": 768, 20 | "heads": 12, 21 | "layers": 12, 22 | "pool_type": "last", 23 | "no_causal_mask": true 24 | } 25 | } -------------------------------------------------------------------------------- /segment-anything/linter.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | # Copyright (c) Facebook, Inc. and its affiliates. 3 | 4 | { 5 | black --version | grep -E "23\." > /dev/null 6 | } || { 7 | echo "Linter requires 'black==23.*' !" 8 | exit 1 9 | } 10 | 11 | ISORT_VERSION=$(isort --version-number) 12 | if [[ "$ISORT_VERSION" != 5.12* ]]; then 13 | echo "Linter requires isort==5.12.0 !" 14 | exit 1 15 | fi 16 | 17 | echo "Running isort ..." 18 | isort . --atomic 19 | 20 | echo "Running black ..." 21 | black -l 100 . 22 | 23 | echo "Running flake8 ..." 24 | if [ -x "$(command -v flake8)" ]; then 25 | flake8 . 26 | else 27 | python3 -m flake8 . 28 | fi 29 | 30 | echo "Running mypy..." 31 | 32 | mypy --exclude 'setup.py|notebooks' . 
33 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-H-14-CLIPA-336.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 336, 5 | "layers": 32, 6 | "width": 1280, 7 | "head_width": 80, 8 | "patch_size": 14, 9 | "no_ln_pre": true, 10 | "pool_type": "avg", 11 | "final_ln_after_pool": true 12 | }, 13 | "text_cfg": { 14 | "context_length": 32, 15 | "vocab_size": 32000, 16 | "hf_tokenizer_name": "bert-base-uncased", 17 | "tokenizer_kwargs": { 18 | "strip_sep_token": true 19 | }, 20 | "width": 1024, 21 | "heads": 16, 22 | "layers": 24, 23 | "pool_type": "last", 24 | "no_causal_mask": true 25 | } 26 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-H-14-CLIPA.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 32, 6 | "width": 1280, 7 | "head_width": 80, 8 | "patch_size": 14, 9 | "no_ln_pre": true, 10 | "pool_type": "avg", 11 | "final_ln_after_pool": true 12 | }, 13 | "text_cfg": { 14 | "context_length": 32, 15 | "vocab_size": 32000, 16 | "hf_tokenizer_name": "bert-base-uncased", 17 | "tokenizer_kwargs": { 18 | "strip_sep_token": true 19 | }, 20 | "width": 1024, 21 | "heads": 16, 22 | "layers": 24, 23 | "pool_type": "last", 24 | "no_causal_mask": true 25 | } 26 | } -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/tests/integration_tests/cleanup_integration_test.py: -------------------------------------------------------------------------------- 1 | import shutil 2 | 3 | from batchgenerators.utilities.file_and_folder_operations import isdir, join 4 | 5 | from nnunetv2.paths import nnUNet_raw, nnUNet_results, nnUNet_preprocessed 6 | 7 | if __name__ == '__main__': 8 | # deletes everything! 
9 | dataset_names = [ 10 | 'Dataset996_IntegrationTest_Hippocampus_regions_ignore', 11 | 'Dataset997_IntegrationTest_Hippocampus_regions', 12 | 'Dataset998_IntegrationTest_Hippocampus_ignore', 13 | 'Dataset999_IntegrationTest_Hippocampus', 14 | ] 15 | for fld in [nnUNet_raw, nnUNet_preprocessed, nnUNet_results]: 16 | for d in dataset_names: 17 | if isdir(join(fld, d)): 18 | shutil.rmtree(join(fld, d)) 19 | 20 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-bigG-14-CLIPA.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1280, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 48, 6 | "width": 1664, 7 | "head_width": 104, 8 | "mlp_ratio": 4.9231, 9 | "patch_size": 14, 10 | "no_ln_pre": true, 11 | "pool_type": "avg", 12 | "final_ln_after_pool": true 13 | }, 14 | "text_cfg": { 15 | "context_length": 32, 16 | "vocab_size": 32000, 17 | "hf_tokenizer_name": "bert-base-uncased", 18 | "tokenizer_kwargs": { 19 | "strip_sep_token": true 20 | }, 21 | "width": 1280, 22 | "heads": 20, 23 | "layers": 32, 24 | "pool_type": "last", 25 | "no_causal_mask": true 26 | } 27 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-bigG-14-CLIPA-336.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1280, 3 | "vision_cfg": { 4 | "image_size": 336, 5 | "layers": 48, 6 | "width": 1664, 7 | "head_width": 104, 8 | "mlp_ratio": 4.9231, 9 | "patch_size": 14, 10 | "no_ln_pre": true, 11 | "pool_type": "avg", 12 | "final_ln_after_pool": true 13 | }, 14 | "text_cfg": { 15 | "context_length": 32, 16 | "vocab_size": 32000, 17 | "hf_tokenizer_name": "bert-base-uncased", 18 | "tokenizer_kwargs": { 19 | "strip_sep_token": true 20 | }, 21 | "width": 1280, 22 | "heads": 20, 23 | "layers": 32, 24 | "pool_type": "last", 25 | "no_causal_mask": true 26 | } 27 | } -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/preprocessing/resampling/utils.py: -------------------------------------------------------------------------------- 1 | from typing import Callable 2 | 3 | import nnunetv2 4 | from batchgenerators.utilities.file_and_folder_operations import join 5 | from nnunetv2.utilities.find_class_by_name import recursive_find_python_class 6 | 7 | 8 | def recursive_find_resampling_fn_by_name(resampling_fn: str) -> Callable: 9 | ret = recursive_find_python_class(join(nnunetv2.__path__[0], "preprocessing", "resampling"), resampling_fn, 10 | 'nnunetv2.preprocessing.resampling') 11 | if ret is None: 12 | raise RuntimeError("Unable to find resampling function named '%s'. Please make sure this fn is located in the " 13 | "nnunetv2.preprocessing.resampling module." 
% resampling_fn) 14 | else: 15 | return ret 16 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/helpers.py: -------------------------------------------------------------------------------- 1 | import torch 2 | 3 | #$ added temperature to softmax 4 | def softmax_helper_dim0(x: torch.Tensor, t: float = 1.0) -> torch.Tensor: 5 | x = x / t 6 | return torch.softmax(x, 0) 7 | 8 | #$ added temperature to softmax 9 | def softmax_helper_dim1(x: torch.Tensor, t: float = 1.0) -> torch.Tensor: 10 | x = x / t 11 | return torch.softmax(x, 1) 12 | 13 | def empty_cache(device: torch.device): 14 | if device.type == 'cuda': 15 | torch.cuda.empty_cache() 16 | elif device.type == 'mps': 17 | from torch import mps 18 | mps.empty_cache() 19 | else: 20 | pass 21 | 22 | 23 | class dummy_context(object): 24 | def __enter__(self): 25 | pass 26 | 27 | def __exit__(self, exc_type, exc_val, exc_tb): 28 | pass 29 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/batch_running/collect_results_custom_Decathlon_2d.py: -------------------------------------------------------------------------------- 1 | from batchgenerators.utilities.file_and_folder_operations import * 2 | 3 | from nnunetv2.batch_running.collect_results_custom_Decathlon import collect_results, summarize 4 | from nnunetv2.paths import nnUNet_results 5 | 6 | if __name__ == '__main__': 7 | use_these_trainers = { 8 | 'nnUNetTrainer': ('nnUNetPlans', ), 9 | } 10 | all_results_file = join(nnUNet_results, 'hrnet_results.csv') 11 | datasets = [2, 3, 4, 17, 20, 24, 27, 38, 55, 64, 82] 12 | collect_results(use_these_trainers, datasets, all_results_file) 13 | 14 | folds = (0, ) 15 | configs = ('2d', ) 16 | output_file = join(nnUNet_results, 'hrnet_results_summary_fold0.csv') 17 | summarize(all_results_file, output_file, folds, configs, datasets, use_these_trainers) 18 | 19 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/coca_ViT-B-32.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 12, 6 | "width": 768, 7 | "patch_size": 32, 8 | "attentional_pool": true, 9 | "attn_pooler_heads": 8, 10 | "output_tokens": true 11 | }, 12 | "text_cfg": { 13 | "context_length": 76, 14 | "vocab_size": 49408, 15 | "width": 512, 16 | "heads": 8, 17 | "layers": 12, 18 | "embed_cls": true, 19 | "output_tokens": true 20 | }, 21 | "multimodal_cfg": { 22 | "context_length": 76, 23 | "vocab_size": 49408, 24 | "width": 512, 25 | "heads": 8, 26 | "layers": 12, 27 | "attn_pooler_heads": 8 28 | }, 29 | "custom_text": true 30 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/coca_ViT-L-14.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "vision_cfg": { 4 | "image_size": 224, 5 | "layers": 24, 6 | "width": 1024, 7 | "patch_size": 14, 8 | "attentional_pool": true, 9 | "attn_pooler_heads": 8, 10 | "output_tokens": true 11 | }, 12 | "text_cfg": { 13 | "context_length": 76, 14 | "vocab_size": 49408, 15 | "width": 768, 16 | "heads": 12, 17 | "layers": 12, 18 | "embed_cls": true, 19 | "output_tokens": true 20 | }, 21 | "multimodal_cfg": { 22 | "context_length": 76, 23 | "vocab_size": 49408, 24 |
"width": 768, 25 | "heads": 12, 26 | "layers": 12, 27 | "attn_pooler_heads": 12 28 | }, 29 | "custom_text": true 30 | } 31 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/script_examples/clipa/vit_b16/i50_t16_pretrain.sh: -------------------------------------------------------------------------------- 1 | torchrun --nproc_per_node 8 -m open_clip_train.main \ 2 | --save-frequency 1 \ 3 | --save-most-recent \ 4 | --zeroshot-frequency 1 \ 5 | --train-data '/path/to/laion-400m' \ 6 | --dataset-type webdataset \ 7 | --lr "2.048e-3" \ 8 | --beta1 0.9 \ 9 | --beta2 0.95 \ 10 | --warmup 782 \ 11 | --wd 0.2 \ 12 | --batch-size 8192 \ 13 | --aug-cfg scale='(0.4, 1.0)' \ 14 | --epochs 6 \ 15 | --workers 6 \ 16 | --model ViT-B-16-CL16 \ 17 | --precision 'amp_bf16' \ 18 | --ddp-static-graph \ 19 | --local-loss \ 20 | --gather-with-grad \ 21 | --force-image-size 112 \ 22 | --grad-checkpointing \ 23 | --log-every-n-steps 32 \ 24 | --seed 0 \ 25 | --logs ./logs/ \ 26 | --imagenet-val '/path/to/imagenet/val' -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/coca_base.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 512, 3 | "multimodal_cfg": { 4 | "width": 768, 5 | "context_length": 76, 6 | "vocab_size": 64000, 7 | "mlp_ratio": 4, 8 | "layers": 12, 9 | "dim_head": 64, 10 | "heads": 12, 11 | "n_queries": 256, 12 | "attn_pooler_heads": 8 13 | }, 14 | "vision_cfg": { 15 | "image_size": 288, 16 | "layers": 12, 17 | "width": 768, 18 | "patch_size": 18, 19 | "output_tokens": true 20 | }, 21 | "text_cfg": { 22 | "context_length": 76, 23 | "vocab_size": 64000, 24 | "layers": 12, 25 | "heads": 12, 26 | "width": 768, 27 | "embed_cls": true, 28 | "output_tokens": true 29 | }, 30 | "custom_text": true 31 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/tests/test_num_shards.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | from open_clip_train.data import get_dataset_size 4 | 5 | @pytest.mark.parametrize( 6 | "shards,expected_size", 7 | [ 8 | ('/path/to/shard.tar', 1), 9 | ('/path/to/shard_{000..000}.tar', 1), 10 | ('/path/to/shard_{000..009}.tar', 10), 11 | ('/path/to/shard_{000..009}_{000..009}.tar', 100), 12 | ('/path/to/shard.tar::/path/to/other_shard_{000..009}.tar', 11), 13 | ('/path/to/shard_{000..009}.tar::/path/to/other_shard_{000..009}.tar', 20), 14 | (['/path/to/shard.tar'], 1), 15 | (['/path/to/shard.tar', '/path/to/other_shard.tar'], 2), 16 | ] 17 | ) 18 | def test_num_shards(shards, expected_size): 19 | _, size = get_dataset_size(shards) 20 | assert size == expected_size, f'Expected {expected_size} for {shards} but found {size} instead.' 
21 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/.github/workflows/clear-cache.yml: -------------------------------------------------------------------------------- 1 | name: Clear cache 2 | 3 | on: 4 | workflow_dispatch: 5 | 6 | permissions: 7 | actions: write 8 | 9 | jobs: 10 | clear-cache: 11 | runs-on: ubuntu-latest 12 | steps: 13 | - name: Clear cache 14 | uses: actions/github-script@v6 15 | with: 16 | script: | 17 | const caches = await github.rest.actions.getActionsCacheList({ 18 | owner: context.repo.owner, 19 | repo: context.repo.repo, 20 | }) 21 | for (const cache of caches.data.actions_caches) { 22 | console.log(cache) 23 | await github.rest.actions.deleteActionsCacheById({ 24 | owner: context.repo.owner, 25 | repo: context.repo.repo, 26 | cache_id: cache.id, 27 | }) 28 | } 29 | 30 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-16-SigLIP.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "init_logit_bias": -10, 4 | "custom_text": true, 5 | "vision_cfg": { 6 | "image_size": 224, 7 | "timm_model_name": "vit_base_patch16_siglip_224", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 64, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 768, 20 | "heads": 12, 21 | "layers": 12, 22 | "no_causal_mask": true, 23 | "proj_bias": true, 24 | "pool_type": "last", 25 | "norm_kwargs":{ 26 | "eps": 1e-6 27 | } 28 | } 29 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-16-SigLIP-256.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "init_logit_bias": -10, 4 | "custom_text": true, 5 | "vision_cfg": { 6 | "image_size": 256, 7 | "timm_model_name": "vit_base_patch16_siglip_256", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 64, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 768, 20 | "heads": 12, 21 | "layers": 12, 22 | "no_causal_mask": true, 23 | "proj_bias": true, 24 | "pool_type": "last", 25 | "norm_kwargs":{ 26 | "eps": 1e-6 27 | } 28 | } 29 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-16-SigLIP-384.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "init_logit_bias": -10, 4 | "custom_text": true, 5 | "vision_cfg": { 6 | "image_size": 384, 7 | "timm_model_name": "vit_base_patch16_siglip_384", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 64, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 768, 20 | "heads": 12, 21 | "layers": 12, 22 | "no_causal_mask": true, 23 | "proj_bias": true, 24 | "pool_type": "last", 25 | "norm_kwargs":{ 26 | "eps": 1e-6 27 
| } 28 | } 29 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-16-SigLIP-512.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "init_logit_bias": -10, 4 | "custom_text": true, 5 | "vision_cfg": { 6 | "image_size": 512, 7 | "timm_model_name": "vit_base_patch16_siglip_512", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 64, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 768, 20 | "heads": 12, 21 | "layers": 12, 22 | "no_causal_mask": true, 23 | "proj_bias": true, 24 | "pool_type": "last", 25 | "norm_kwargs":{ 26 | "eps": 1e-6 27 | } 28 | } 29 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-16-SigLIP-256.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "init_logit_bias": -10, 4 | "custom_text": true, 5 | "vision_cfg": { 6 | "image_size": 256, 7 | "timm_model_name": "vit_large_patch16_siglip_256", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 64, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 1024, 20 | "heads": 16, 21 | "layers": 24, 22 | "no_causal_mask": true, 23 | "proj_bias": true, 24 | "pool_type": "last", 25 | "norm_kwargs":{ 26 | "eps": 1e-6 27 | } 28 | } 29 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-L-16-SigLIP-384.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1024, 3 | "init_logit_bias": -10, 4 | "custom_text": true, 5 | "vision_cfg": { 6 | "image_size": 384, 7 | "timm_model_name": "vit_large_patch16_siglip_384", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 64, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 1024, 20 | "heads": 16, 21 | "layers": 24, 22 | "no_causal_mask": true, 23 | "proj_bias": true, 24 | "pool_type": "last", 25 | "norm_kwargs":{ 26 | "eps": 1e-6 27 | } 28 | } 29 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/script_examples/clipa/vit_b16/i50_t16_finetune.sh: -------------------------------------------------------------------------------- 1 | torchrun --nproc_per_node 8 -m open_clip_train.main \ 2 | --save-frequency 1 \ 3 | --save-most-recent \ 4 | --zeroshot-frequency 1 \ 5 | --train-data '/path/to/laion-400m' \ 6 | --dataset-type webdataset \ 7 | --lr "2.56e-5" \ 8 | --beta1 0.9 \ 9 | --beta2 0.95 \ 10 | --warmup 3072 \ 11 | --wd 0.2 \ 12 | --batch-size 1024 \ 13 | --aug-cfg scale='(0.4, 1.0)' \ 14 | --epochs 1 \ 15 | --train-num-samples 131072000 \ 16 | --workers 6 \ 17 | --model ViT-B-16-CL16 \ 18 | --pretrained '/path/to/ckpt' \ 19 | --precision 'amp_bf16' \ 20 | --ddp-static-graph \ 21 | --local-loss 
\ 22 | --gather-with-grad \ 23 | --grad-checkpointing \ 24 | --log-every-n-steps 256 \ 25 | --seed 0 \ 26 | --logs ./logs/ \ 27 | --imagenet-val '/path/to/imagenet/val' 28 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-B-16-SigLIP-i18n-256.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 768, 3 | "init_logit_bias": -10, 4 | "custom_text": true, 5 | "vision_cfg": { 6 | "image_size": 256, 7 | "timm_model_name": "vit_base_patch16_siglip_256", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 64, 14 | "vocab_size": 250000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP-i18n-256", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 768, 20 | "heads": 12, 21 | "layers": 12, 22 | "no_causal_mask": true, 23 | "proj_bias": true, 24 | "pool_type": "last", 25 | "norm_kwargs":{ 26 | "eps": 1e-6 27 | } 28 | } 29 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/scripts/biomedclip.sh: -------------------------------------------------------------------------------- 1 | cd biomedclip_finetuning/open_clip/src 2 | 3 | ## You can change the following parameters like the GPU devices, batch size, training data, epochs, and DHN-NCE loss parameters. 4 | 5 | CUDA_VISIBLE_DEVICES=0 python3 -m open_clip_train.main \ 6 | --batch-size 16 \ 7 | --workers 4 \ 8 | --report-to tensorboard \ 9 | --save-frequency 1 \ 10 | --logs="logs" \ 11 | --dataset-type csv \ 12 | --csv-separator="," \ 13 | --train-data data/medpix_dataset/medpix_dataset.csv \ 14 | --csv-img-key filename \ 15 | --csv-caption-key Caption \ 16 | --lr=1e-3 \ 17 | --wd=0.1 \ 18 | --warmup 1000 \ 19 | --epochs=32 \ 20 | --model hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224 \ 21 | --dhnnce-loss \ 22 | --temperature-dhnnce 0.6 \ 23 | --alpha-dhnnce 0.0 \ 24 | --beta1-dhnnce 0.15 \ 25 | --beta2-dhnnce 0.15 -------------------------------------------------------------------------------- /segment-anything/segment_anything.egg-info/SOURCES.txt: -------------------------------------------------------------------------------- 1 | README.md 2 | setup.cfg 3 | setup.py 4 | segment_anything/__init__.py 5 | segment_anything/automatic_mask_generator.py 6 | segment_anything/build_sam.py 7 | segment_anything/predictor.py 8 | segment_anything.egg-info/PKG-INFO 9 | segment_anything.egg-info/SOURCES.txt 10 | segment_anything.egg-info/dependency_links.txt 11 | segment_anything.egg-info/requires.txt 12 | segment_anything.egg-info/top_level.txt 13 | segment_anything/modeling/__init__.py 14 | segment_anything/modeling/common.py 15 | segment_anything/modeling/image_encoder.py 16 | segment_anything/modeling/mask_decoder.py 17 | segment_anything/modeling/prompt_encoder.py 18 | segment_anything/modeling/sam.py 19 | segment_anything/modeling/transformer.py 20 | segment_anything/utils/__init__.py 21 | segment_anything/utils/amg.py 22 | segment_anything/utils/onnx.py 23 | segment_anything/utils/transforms.py -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-SO400M-14-SigLIP-378.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1152, 3 | "init_logit_bias": -10, 4 | "custom_text": 
true, 5 | "vision_cfg": { 6 | "image_size": 378, 7 | "timm_model_name": "vit_so400m_patch14_siglip_378", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 64, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 1152, 20 | "heads": 16, 21 | "layers": 27, 22 | "mlp_ratio": 3.7362, 23 | "no_causal_mask": true, 24 | "proj_bias": true, 25 | "pool_type": "last", 26 | "norm_kwargs":{ 27 | "eps": 1e-6 28 | } 29 | } 30 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-SO400M-14-SigLIP-384.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1152, 3 | "init_logit_bias": -10, 4 | "custom_text": true, 5 | "vision_cfg": { 6 | "image_size": 384, 7 | "timm_model_name": "vit_so400m_patch14_siglip_384", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 64, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 1152, 20 | "heads": 16, 21 | "layers": 27, 22 | "mlp_ratio": 3.7362, 23 | "no_causal_mask": true, 24 | "proj_bias": true, 25 | "pool_type": "last", 26 | "norm_kwargs":{ 27 | "eps": 1e-6 28 | } 29 | } 30 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-SO400M-14-SigLIP.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1152, 3 | "init_logit_bias": -10, 4 | "custom_text": true, 5 | "vision_cfg": { 6 | "image_size": 224, 7 | "timm_model_name": "vit_so400m_patch14_siglip_224", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 16, 14 | "vocab_size": 32000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 1152, 20 | "heads": 16, 21 | "layers": 27, 22 | "mlp_ratio": 3.7362, 23 | "no_causal_mask": true, 24 | "proj_bias": true, 25 | "pool_type": "last", 26 | "norm_kwargs":{ 27 | "eps": 1e-6 28 | } 29 | } 30 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/script_examples/clipa/vit_l16/i17_t16_pretrain.sh: -------------------------------------------------------------------------------- 1 | torchrun --nproc_per_node 8 -m open_clip_train.main \ 2 | --save-frequency 1 \ 3 | --save-most-recent \ 4 | --zeroshot-frequency 1 \ 5 | --train-data '/path/to/laion-400m' \ 6 | --dataset-type webdataset \ 7 | --lr "1.024e-3" \ 8 | --beta1 0.9 \ 9 | --beta2 0.95 \ 10 | --warmup 1563 \ 11 | --wd 0.2 \ 12 | --batch-size 4096 \ 13 | --aug-cfg scale='(0.4, 1.0)' color_jitter='(0.32, 0.32, 0.32, 0.08)' color_jitter_prob=0.8 gray_scale_prob=0.2 \ 14 | --epochs 6 \ 15 | --workers 6 \ 16 | --model ViT-L-16-CL16-GAP \ 17 | --precision 'amp_bf16' \ 18 | --ddp-static-graph \ 19 | --local-loss \ 20 | --gather-with-grad \ 21 | --force-image-size 64 \ 22 | --grad-checkpointing \ 23 | --log-every-n-steps 64 \ 24 | --seed 0 \ 25 | --logs ./logs/ \ 26 | --imagenet-val '/path/to/imagenet/val' 
-------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/script_examples/clipa/vit_l16/i37_t8_pretrain.sh: -------------------------------------------------------------------------------- 1 | torchrun --nproc_per_node 8 -m open_clip_train.main \ 2 | --save-frequency 1 \ 3 | --save-most-recent \ 4 | --zeroshot-frequency 1 \ 5 | --train-data '/path/to/laion-400m' \ 6 | --dataset-type webdataset \ 7 | --lr "1.024e-3" \ 8 | --beta1 0.9 \ 9 | --beta2 0.95 \ 10 | --warmup 1563 \ 11 | --wd 0.2 \ 12 | --batch-size 4096 \ 13 | --aug-cfg scale='(0.4, 1.0)' color_jitter='(0.32, 0.32, 0.32, 0.08)' color_jitter_prob=0.8 gray_scale_prob=0.2 \ 14 | --epochs 6 \ 15 | --workers 6 \ 16 | --model ViT-L-16-CL8-Syntax-GAP \ 17 | --precision 'amp_bf16' \ 18 | --ddp-static-graph \ 19 | --local-loss \ 20 | --gather-with-grad \ 21 | --force-image-size 96 \ 22 | --grad-checkpointing \ 23 | --log-every-n-steps 64 \ 24 | --seed 0 \ 25 | --logs ./logs/ \ 26 | --imagenet-val '/path/to/imagenet/val' -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/model_configs/ViT-SO400M-16-SigLIP-i18n-256.json: -------------------------------------------------------------------------------- 1 | { 2 | "embed_dim": 1152, 3 | "init_logit_bias": -10, 4 | "custom_text": true, 5 | "vision_cfg": { 6 | "image_size": 256, 7 | "timm_model_name": "vit_so400m_patch16_siglip_256", 8 | "timm_model_pretrained": false, 9 | "timm_pool": "map", 10 | "timm_proj": "none" 11 | }, 12 | "text_cfg": { 13 | "context_length": 64, 14 | "vocab_size": 250000, 15 | "hf_tokenizer_name": "timm/ViT-B-16-SigLIP-i18n-256", 16 | "tokenizer_kwargs": { 17 | "clean": "canonicalize" 18 | }, 19 | "width": 1152, 20 | "heads": 16, 21 | "layers": 27, 22 | "mlp_ratio": 3.7362, 23 | "no_causal_mask": true, 24 | "pool_type": "last", 25 | "proj_type": "none", 26 | "norm_kwargs":{ 27 | "eps": 1e-6 28 | } 29 | } 30 | } -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/script_examples/clipa/vit_l16/i17_t16_finetune.sh: -------------------------------------------------------------------------------- 1 | torchrun --nproc_per_node 8 -m open_clip_train.main \ 2 | --save-frequency 1 \ 3 | --save-most-recent \ 4 | --zeroshot-frequency 1 \ 5 | --train-data '/path/to/laion-400m' \ 6 | --dataset-type webdataset \ 7 | --lr "2.24e-5" \ 8 | --beta1 0.9 \ 9 | --beta2 0.95 \ 10 | --warmup 3571 \ 11 | --wd 0.2 \ 12 | --batch-size 896 \ 13 | --aug-cfg scale='(0.4, 1.0)' color_jitter='(0.32, 0.32, 0.32, 0.08)' color_jitter_prob=0.8 gray_scale_prob=0.2 \ 14 | --epochs 1 \ 15 | --train-num-samples 131072000 \ 16 | --workers 6 \ 17 | --model ViT-L-16-CL16-GAP \ 18 | --pretrained '/path/to/ckpt' \ 19 | --precision 'amp_bf16' \ 20 | --ddp-static-graph \ 21 | --local-loss \ 22 | --gather-with-grad \ 23 | --grad-checkpointing \ 24 | --log-every-n-steps 293 \ 25 | --seed 0 \ 26 | --logs ./logs/ \ 27 | --imagenet-val '/path/to/imagenet/val' -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/script_examples/clipa/vit_l16/i37_t8_finetune.sh: -------------------------------------------------------------------------------- 1 | torchrun --nproc_per_node 8 -m open_clip_train.main \ 2 | --save-frequency 1 \ 3 | --save-most-recent \ 4 | --zeroshot-frequency 1 \ 5 | --train-data '/path/to/laion-400m' \ 6 | --dataset-type webdataset \ 7 | --lr 
"2.24e-5" \ 8 | --beta1 0.9 \ 9 | --beta2 0.95 \ 10 | --warmup 3571 \ 11 | --wd 0.2 \ 12 | --batch-size 896 \ 13 | --aug-cfg scale='(0.4, 1.0)' color_jitter='(0.32, 0.32, 0.32, 0.08)' color_jitter_prob=0.8 gray_scale_prob=0.2 \ 14 | --epochs 1 \ 15 | --train-num-samples 131072000 \ 16 | --workers 6 \ 17 | --model ViT-L-16-CL32-GAP \ 18 | --pretrained '/path/to/ckpt' \ 19 | --precision 'amp_bf16' \ 20 | --ddp-static-graph \ 21 | --local-loss \ 22 | --gather-with-grad \ 23 | --grad-checkpointing \ 24 | --log-every-n-steps 293 \ 25 | --seed 0 \ 26 | --logs ./logs/ \ 27 | --imagenet-val '/path/to/imagenet/val' -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/CITATION.cff: -------------------------------------------------------------------------------- 1 | cff-version: 1.1.0 2 | message: If you use this software, please cite it as below. 3 | authors: 4 | - family-names: Ilharco 5 | given-names: Gabriel 6 | - family-names: Wortsman 7 | given-names: Mitchell 8 | - family-names: Wightman 9 | given-names: Ross 10 | - family-names: Gordon 11 | given-names: Cade 12 | - family-names: Carlini 13 | given-names: Nicholas 14 | - family-names: Taori 15 | given-names: Rohan 16 | - family-names: Dave 17 | given-names: Achal 18 | - family-names: Shankar 19 | given-names: Vaishaal 20 | - family-names: Namkoong 21 | given-names: Hongseok 22 | - family-names: Miller 23 | given-names: John 24 | - family-names: Hajishirzi 25 | given-names: Hannaneh 26 | - family-names: Farhadi 27 | given-names: Ali 28 | - family-names: Schmidt 29 | given-names: Ludwig 30 | title: OpenCLIP 31 | version: v0.1 32 | doi: 10.5281/zenodo.5143773 33 | date-released: 2021-07-28 34 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/find_class_by_name.py: -------------------------------------------------------------------------------- 1 | import importlib 2 | import pkgutil 3 | 4 | from batchgenerators.utilities.file_and_folder_operations import * 5 | 6 | 7 | def recursive_find_python_class(folder: str, class_name: str, current_module: str): 8 | tr = None 9 | for importer, modname, ispkg in pkgutil.iter_modules([folder]): 10 | # print(modname, ispkg) 11 | if not ispkg: 12 | m = importlib.import_module(current_module + "." + modname) 13 | if hasattr(m, class_name): 14 | tr = getattr(m, class_name) 15 | break 16 | 17 | if tr is None: 18 | for importer, modname, ispkg in pkgutil.iter_modules([folder]): 19 | if ispkg: 20 | next_current_module = current_module + "." + modname 21 | tr = recursive_find_python_class(join(folder, modname), class_name, current_module=next_current_module) 22 | if tr is not None: 23 | break 24 | return tr -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/collate_outputs.py: -------------------------------------------------------------------------------- 1 | from typing import List 2 | 3 | import numpy as np 4 | 5 | 6 | def collate_outputs(outputs: List[dict]): 7 | """ 8 | used to collate default train_step and validation_step outputs. 
If you want something different then you gotta 9 | extend this 10 | 11 | we expect outputs to be a list of dictionaries where each of the dict has the same set of keys 12 | """ 13 | collated = {} 14 | for k in outputs[0].keys(): 15 | if np.isscalar(outputs[0][k]): 16 | collated[k] = [o[k] for o in outputs] 17 | elif isinstance(outputs[0][k], np.ndarray): 18 | collated[k] = np.vstack([o[k][None] for o in outputs]) 19 | elif isinstance(outputs[0][k], list): 20 | collated[k] = [item for o in outputs for item in o[k]] 21 | else: 22 | raise ValueError(f'Cannot collate input of type {type(outputs[0][k])}. ' 23 | f'Modify collate_outputs to add this functionality') 24 | return collated -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/data_augmentation/custom_transforms/masking.py: -------------------------------------------------------------------------------- 1 | from typing import List 2 | 3 | from batchgenerators.transforms.abstract_transforms import AbstractTransform 4 | 5 | 6 | class MaskTransform(AbstractTransform): 7 | def __init__(self, apply_to_channels: List[int], mask_idx_in_seg: int = 0, set_outside_to: int = 0, 8 | data_key: str = "data", seg_key: str = "seg"): 9 | """ 10 | Sets everything outside the mask to 0. CAREFUL! outside is defined as < 0, not =0 (in the Mask)!!! 11 | """ 12 | self.apply_to_channels = apply_to_channels 13 | self.seg_key = seg_key 14 | self.data_key = data_key 15 | self.set_outside_to = set_outside_to 16 | self.mask_idx_in_seg = mask_idx_in_seg 17 | 18 | def __call__(self, **data_dict): 19 | mask = data_dict[self.seg_key][:, self.mask_idx_in_seg] < 0 20 | for c in self.apply_to_channels: 21 | data_dict[self.data_key][:, c][mask] = self.set_outside_to 22 | return data_dict 23 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip_train/logger.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | 4 | def setup_logging(log_file, level, include_host=False): 5 | if include_host: 6 | import socket 7 | hostname = socket.gethostname() 8 | formatter = logging.Formatter( 9 | f'%(asctime)s | {hostname} | %(levelname)s | %(message)s', datefmt='%Y-%m-%d,%H:%M:%S') 10 | else: 11 | formatter = logging.Formatter('%(asctime)s | %(levelname)s | %(message)s', datefmt='%Y-%m-%d,%H:%M:%S') 12 | 13 | logging.root.setLevel(level) 14 | loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict] 15 | for logger in loggers: 16 | logger.setLevel(level) 17 | 18 | stream_handler = logging.StreamHandler() 19 | stream_handler.setFormatter(formatter) 20 | logging.root.addHandler(stream_handler) 21 | 22 | if log_file: 23 | file_handler = logging.FileHandler(filename=log_file) 24 | file_handler.setFormatter(formatter) 25 | logging.root.addHandler(file_handler) 26 | 27 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/tests/integration_tests/prepare_integration_tests.sh: -------------------------------------------------------------------------------- 1 | # assumes you are in the nnunet repo! 
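# note: the nnUNetv2_* commands below also assume the standard nnU-Net environment
# variables have been exported beforehand; the paths here are purely illustrative:
# export nnUNet_raw=/path/to/nnUNet_raw
# export nnUNet_preprocessed=/path/to/nnUNet_preprocessed
# export nnUNet_results=/path/to/nnUNet_results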
2 | 3 | # prepare raw datasets 4 | python nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset999_IntegrationTest_Hippocampus.py 5 | python nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset998_IntegrationTest_Hippocampus_ignore.py 6 | python nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset997_IntegrationTest_Hippocampus_regions.py 7 | python nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset996_IntegrationTest_Hippocampus_regions_ignore.py 8 | 9 | # now run experiment planning without preprocessing 10 | nnUNetv2_plan_and_preprocess -d 996 997 998 999 --no_pp 11 | 12 | # now add 3d lowres and cascade 13 | python nnunetv2/tests/integration_tests/add_lowres_and_cascade.py -d 996 997 998 999 14 | 15 | # now preprocess everything 16 | nnUNetv2_preprocess -d 996 997 998 999 -c 2d 3d_lowres 3d_fullres -np 8 8 8 # no need to preprocess cascade as its the same data as 3d_fullres 17 | 18 | # done -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/scripts/h14_224_32_finetune.sh: -------------------------------------------------------------------------------- 1 | # 64k batchsize for 2.048e-3 lr 2 | TORCH_CUDNN_V8_API_ENABLED=1 torchrun --nproc_per_node 8 -m open_clip_train.main \ 3 | --save-frequency 1 \ 4 | --save-most-recent \ 5 | --zeroshot-frequency 1 \ 6 | --train-data '/path/to/laion' \ 7 | --dataset-type webdataset \ 8 | --lr "2.048e-3" \ 9 | --beta1 0.9 \ 10 | --beta2 0.95 \ 11 | --warmup 782 \ 12 | --wd 0.2 \ 13 | --batch-size 4096 \ 14 | --aug-cfg scale='(0.4, 1.0)' color_jitter='(0.32, 0.32, 0.32, 0.08)' color_jitter_prob=0.8 gray_scale_prob=0.2 \ 15 | --epochs=7 \ 16 | --workers=6 \ 17 | --model ViT-H-14-CL32-GAP \ 18 | --precision 'amp_bf16' \ 19 | --local-loss \ 20 | --gather-with-grad \ 21 | --force-image-size 224 \ 22 | --grad-checkpointing \ 23 | --log-every-n-steps 32 \ 24 | --seed 0 \ 25 | --logs ./logs/ \ 26 | --imagenet-val '/path/to/ImageNet/val' \ 27 | --name 'name' \ 28 | --report-to "wandb" \ 29 | --wandb-project-name "project_name" 30 | 31 | 32 | -------------------------------------------------------------------------------- /zeroshot.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # custom config 4 | 5 | # Enter the path to your dataset 6 | DATASET=$1 7 | 8 | python saliency_maps/generate_saliency_maps.py \ 9 | --input-path ${DATASET}/images \ 10 | --output-path saliency_map_outputs/${DATASET}/masks \ 11 | --val-path ${DATASET}/val_images \ 12 | --model-name BiomedCLIP \ 13 | --finetuned \ 14 | --hyper-opt \ 15 | --val-path ${DATASET}/val_images 16 | 17 | python postprocessing/postprocess_saliency_maps.py \ 18 | --input-path ${DATASET}/images \ 19 | --output-path coarse_outputs/${DATASET}/masks \ 20 | --sal-path saliency_map_outputs/${DATASET}/masks \ 21 | --postprocess kmeans \ 22 | --filter 23 | # --num-contours 2 # number of contours to extract, for lungs, use 2 contours 24 | 25 | python segment-anything/prompt_sam.py \ 26 | --input ${DATASET}/images \ 27 | --mask-input coarse_outputs/${DATASET}/masks \ 28 | --output sam_outputs/${DATASET}/masks \ 29 | --model-type vit_h \ 30 | --checkpoint segment-anything/sam_checkpoints/sam_vit_h_4b8939.pth \ 31 | --prompts boxes \ 32 | # --multicontour # for lungs, use this flag 33 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/scripts/h14_84_8_pretrain.sh: 
-------------------------------------------------------------------------------- 1 | # 64k batchsize for 2.048e-3 lr 2 | TORCH_CUDNN_V8_API_ENABLED=1 torchrun --nproc_per_node 8 -m open_clip_train.main \ 3 | --save-frequency 1 \ 4 | --save-most-recent \ 5 | --zeroshot-frequency 1 \ 6 | --train-data '/path/to/laion' \ 7 | --dataset-type webdataset \ 8 | --lr "2.048e-3" \ 9 | --beta1 0.9 \ 10 | --beta2 0.95 \ 11 | --warmup 782 \ 12 | --wd 0.2 \ 13 | --batch-size 4096 \ 14 | --aug-cfg scale='(0.4, 1.0)' color_jitter='(0.32, 0.32, 0.32, 0.08)' color_jitter_prob=0.8 gray_scale_prob=0.2 \ 15 | --epochs=7 \ 16 | --workers=6 \ 17 | --model ViT-H-14-CL8-SyntaxMask-GAP \ 18 | --precision 'amp_bf16' \ 19 | --local-loss \ 20 | --gather-with-grad \ 21 | --force-image-size 84 \ 22 | --grad-checkpointing \ 23 | --log-every-n-steps 32 \ 24 | --seed 0 \ 25 | --logs ./logs/ \ 26 | --imagenet-val '/path/to/ImageNet/val' \ 27 | --name 'name' \ 28 | --report-to "wandb" \ 29 | --wandb-project-name "project_name" 30 | 31 | 32 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/preprocessing/normalization/map_channel_name_to_normalization.py: -------------------------------------------------------------------------------- 1 | from typing import Type 2 | 3 | from nnunetv2.preprocessing.normalization.default_normalization_schemes import CTNormalization, NoNormalization, \ 4 | ZScoreNormalization, RescaleTo01Normalization, RGBTo01Normalization, ImageNormalization 5 | 6 | channel_name_to_normalization_mapping = { 7 | 'CT': CTNormalization, 8 | 'noNorm': NoNormalization, 9 | 'zscore': ZScoreNormalization, 10 | 'rescale_0_1': RescaleTo01Normalization, 11 | 'rgb_to_0_1': RGBTo01Normalization 12 | } 13 | 14 | 15 | def get_normalization_scheme(channel_name: str) -> Type[ImageNormalization]: 16 | """ 17 | If we find the channel_name in channel_name_to_normalization_mapping return the corresponding normalization. If it is 18 | not found, use the default (ZScoreNormalization) 19 | """ 20 | norm_scheme = channel_name_to_normalization_mapping.get(channel_name) 21 | if norm_scheme is None: 22 | norm_scheme = ZScoreNormalization 23 | # print('Using %s for image normalization' % norm_scheme.__name__) 24 | return norm_scheme 25 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2025 Health-X Lab 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /zeroshot_scripts/zeroshot_brain_tumors.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # custom config 4 | 5 | # Enter the path to your dataset 6 | DATASET=$1 7 | 8 | python saliency_maps/generate_saliency_maps.py \ 9 | --input-path ${DATASET}/test_images \ 10 | --output-path saliency_map_outputs/${DATASET}/test_masks \ 11 | --model-name BiomedCLIP \ 12 | --finetuned \ 13 | --json-path saliency_maps/text_prompts/brain_tumors_testing.json \ 14 | --reproduce \ 15 | --vvar 0.3 \ 16 | --vbeta 2.0 \ 17 | --vlayer 9 \ 18 | --seed 12 19 | 20 | python postprocessing/postprocess_saliency_maps.py \ 21 | --input-path ${DATASET}/test_images \ 22 | --output-path coarse_outputs/${DATASET}/test_masks \ 23 | --sal-path saliency_map_outputs/${DATASET}/test_masks \ 24 | --postprocess kmeans \ 25 | --filter 26 | 27 | python segment-anything/prompt_sam.py \ 28 | --input ${DATASET}/test_images \ 29 | --mask-input coarse_outputs/${DATASET}/test_masks \ 30 | --output sam_outputs/${DATASET}/test_masks \ 31 | --model-type vit_h \ 32 | --checkpoint segment-anything/sam_checkpoints/sam_vit_h_4b8939.pth \ 33 | --prompts boxes 34 | 35 | python evaluation/eval.py \ 36 | --gt_path ${DATASET}/test_masks \ 37 | --seg_path sam_outputs/${DATASET}/test_masks 38 | -------------------------------------------------------------------------------- /zeroshot_scripts/zeroshot_breast_tumors.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # custom config 4 | 5 | # Enter the path to your dataset 6 | DATASET=$1 7 | 8 | python saliency_maps/generate_saliency_maps.py \ 9 | --input-path ${DATASET}/test_images \ 10 | --output-path saliency_map_outputs/${DATASET}/test_masks \ 11 | --model-name BiomedCLIP \ 12 | --finetuned \ 13 | --json-path saliency_maps/text_prompts/breast_tumors_testing.json \ 14 | --reproduce \ 15 | --vvar 1.0 \ 16 | --vbeta 1.0 \ 17 | --vlayer 9 \ 18 | --seed 12 19 | 20 | python postprocessing/postprocess_saliency_maps.py \ 21 | --input-path ${DATASET}/test_images \ 22 | --output-path coarse_outputs/${DATASET}/test_masks \ 23 | --sal-path saliency_map_outputs/${DATASET}/test_masks \ 24 | --postprocess kmeans \ 25 | --filter 26 | 27 | python segment-anything/prompt_sam.py \ 28 | --input ${DATASET}/test_images \ 29 | --mask-input coarse_outputs/${DATASET}/test_masks \ 30 | --output sam_outputs/${DATASET}/test_masks \ 31 | --model-type vit_h \ 32 | --checkpoint segment-anything/sam_checkpoints/sam_vit_h_4b8939.pth \ 33 | --prompts boxes 34 | 35 | python evaluation/eval.py \ 36 | --gt_path ${DATASET}/test_masks \ 37 | --seg_path sam_outputs/${DATASET}/test_masks 38 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/.github/workflows/python-publish.yml: -------------------------------------------------------------------------------- 1 | name: Release 2 | 3 | on: 4 | push: 5 | branches: 6 | - main 7 | jobs: 8 | deploy: 9 | runs-on: ubuntu-latest 10 | steps: 11 | - uses: actions/checkout@v2 12 | - uses: actions-ecosystem/action-regex-match@v2 13 | id: regex-match 14 | with: 15 | 
text: ${{ github.event.head_commit.message }} 16 | regex: '^Release ([^ ]+)' 17 | - name: Set up Python 18 | uses: actions/setup-python@v2 19 | with: 20 | python-version: '3.8' 21 | - name: Install dependencies 22 | run: | 23 | python -m pip install --upgrade pip 24 | pip install setuptools wheel twine build 25 | - name: Release 26 | if: ${{ steps.regex-match.outputs.match != '' }} 27 | uses: softprops/action-gh-release@v1 28 | with: 29 | tag_name: v${{ steps.regex-match.outputs.group1 }} 30 | - name: Build and publish 31 | if: ${{ steps.regex-match.outputs.match != '' }} 32 | env: 33 | TWINE_USERNAME: __token__ 34 | TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }} 35 | run: | 36 | python -m build 37 | twine upload dist/* 38 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/tests/test_hf_model.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | import torch 4 | from open_clip.hf_model import _POOLERS, HFTextEncoder 5 | from transformers import AutoConfig 6 | from transformers.modeling_outputs import BaseModelOutput 7 | 8 | # test poolers 9 | def test_poolers(): 10 | bs, sl, d = 2, 10, 5 11 | h = torch.arange(sl).repeat(bs).reshape(bs, sl)[..., None] * torch.linspace(0.2, 1., d) 12 | mask = torch.ones(bs, sl, dtype=torch.bool) 13 | mask[:2, 6:] = False 14 | x = BaseModelOutput(h) 15 | for name, cls in _POOLERS.items(): 16 | pooler = cls() 17 | res = pooler(x, mask) 18 | assert res.shape == (bs, d), f"{name} returned wrong shape" 19 | 20 | # test HFTextEncoder 21 | @pytest.mark.parametrize("model_id", ["arampacha/roberta-tiny", "roberta-base", "xlm-roberta-base", "google/mt5-base"]) 22 | def test_pretrained_text_encoder(model_id): 23 | bs, sl, d = 2, 10, 64 24 | cfg = AutoConfig.from_pretrained(model_id) 25 | model = HFTextEncoder(model_id, d, proj_type='linear') 26 | x = torch.randint(0, cfg.vocab_size, (bs, sl)) 27 | with torch.no_grad(): 28 | emb = model(x) 29 | 30 | assert emb.shape == (bs, d) 31 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/script_examples/clipav2/vit_h14/i50_t8_pretrain.sh: -------------------------------------------------------------------------------- 1 | # have not been tested. use it at your own discretion 2 | # the original experiment was run on tpu v3-256. 3 | # this example script assumes 8 gpus, each with huge memory. Tune batchsize, warmup, and lr accordingly if you have different machine setups. 
4 | torchrun --nproc_per_node 8 -m open_clip_train.main \ 5 | --save-frequency 1 \ 6 | --save-most-recent \ 7 | --zeroshot-frequency 1 \ 8 | --train-data '/path/to/laion2b_or_datacomp1b' \ 9 | --train-num-samples 4e8 \ 10 | --dataset-type webdataset \ 11 | --lr "2.048e-3" \ 12 | --beta1 0.9 \ 13 | --beta2 0.95 \ 14 | --warmup 3200 \ 15 | --wd 0.2 \ 16 | --batch-size 8192 \ 17 | --aug-cfg scale='(0.4, 1.0)' color_jitter='(0.32, 0.32, 0.32, 0.08)' color_jitter_prob=0.8 gray_scale_prob=0.2 \ 18 | --epochs 32 \ 19 | --workers 6 \ 20 | --model ViT-H-14-CL8-Syntax-GAP \ 21 | --precision 'amp_bf16' \ 22 | --ddp-static-graph \ 23 | --local-loss \ 24 | --gather-with-grad \ 25 | --force-image-size 84 \ 26 | --grad-checkpointing \ 27 | --log-every-n-steps 32 \ 28 | --seed 0 \ 29 | --logs ./logs/ \ 30 | --imagenet-val '/path/to/imagenet/val' -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset999_IntegrationTest_Hippocampus.py: -------------------------------------------------------------------------------- 1 | import shutil 2 | 3 | from batchgenerators.utilities.file_and_folder_operations import isdir, join 4 | 5 | from nnunetv2.utilities.dataset_name_id_conversion import maybe_convert_to_dataset_name 6 | from nnunetv2.paths import nnUNet_raw 7 | 8 | 9 | if __name__ == '__main__': 10 | dataset_name = 'IntegrationTest_Hippocampus' 11 | dataset_id = 999 12 | dataset_name = f"Dataset{dataset_id:03d}_{dataset_name}" 13 | 14 | try: 15 | existing_dataset_name = maybe_convert_to_dataset_name(dataset_id) 16 | if existing_dataset_name != dataset_name: 17 | raise FileExistsError(f"A different dataset with id {dataset_id} already exists :-(: {existing_dataset_name}. If " 18 | f"you intent to delete it, remember to also remove it in nnUNet_preprocessed and " 19 | f"nnUNet_results!") 20 | except RuntimeError: 21 | pass 22 | 23 | if isdir(join(nnUNet_raw, dataset_name)): 24 | shutil.rmtree(join(nnUNet_raw, dataset_name)) 25 | 26 | source_dataset = maybe_convert_to_dataset_name(4) 27 | shutil.copytree(join(nnUNet_raw, source_dataset), join(nnUNet_raw, dataset_name)) 28 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/data_augmentation/compute_initial_patch_size.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | 4 | def get_patch_size(final_patch_size, rot_x, rot_y, rot_z, scale_range): 5 | if isinstance(rot_x, (tuple, list)): 6 | rot_x = max(np.abs(rot_x)) 7 | if isinstance(rot_y, (tuple, list)): 8 | rot_y = max(np.abs(rot_y)) 9 | if isinstance(rot_z, (tuple, list)): 10 | rot_z = max(np.abs(rot_z)) 11 | rot_x = min(90 / 360 * 2. * np.pi, rot_x) 12 | rot_y = min(90 / 360 * 2. * np.pi, rot_y) 13 | rot_z = min(90 / 360 * 2. 
* np.pi, rot_z) 14 | from batchgenerators.augmentations.utils import rotate_coords_3d, rotate_coords_2d 15 | coords = np.array(final_patch_size) 16 | final_shape = np.copy(coords) 17 | if len(coords) == 3: 18 | final_shape = np.max(np.vstack((np.abs(rotate_coords_3d(coords, rot_x, 0, 0)), final_shape)), 0) 19 | final_shape = np.max(np.vstack((np.abs(rotate_coords_3d(coords, 0, rot_y, 0)), final_shape)), 0) 20 | final_shape = np.max(np.vstack((np.abs(rotate_coords_3d(coords, 0, 0, rot_z)), final_shape)), 0) 21 | elif len(coords) == 2: 22 | final_shape = np.max(np.vstack((np.abs(rotate_coords_2d(coords, rot_x)), final_shape)), 0) 23 | final_shape /= min(scale_range) 24 | return final_shape.astype(int) 25 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/loss/nnUNetTrainerCELoss.py: -------------------------------------------------------------------------------- 1 | from nnunetv2.training.loss.deep_supervision import DeepSupervisionWrapper 2 | from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer 3 | from nnunetv2.training.loss.robust_ce_loss import RobustCrossEntropyLoss 4 | import numpy as np 5 | 6 | 7 | class nnUNetTrainerCELoss(nnUNetTrainer): 8 | def _build_loss(self): 9 | assert not self.label_manager.has_regions, 'regions not supported by this trainer' 10 | loss = RobustCrossEntropyLoss(weight=None, 11 | ignore_index=self.label_manager.ignore_label if self.label_manager.has_ignore_label else -100) 12 | 13 | deep_supervision_scales = self._get_deep_supervision_scales() 14 | 15 | # we give each output a weight which decreases exponentially (division by 2) as the resolution decreases 16 | # this gives higher resolution outputs more weight in the loss 17 | weights = np.array([1 / (2 ** i) for i in range(len(deep_supervision_scales))]) 18 | 19 | # we don't use the lowest 2 outputs. Normalize weights so that they sum to 1 20 | weights = weights / weights.sum() 21 | # now wrap the loss 22 | loss = DeepSupervisionWrapper(loss, weights) 23 | return loss 24 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2012-2021 Gabriel Ilharco, Mitchell Wortsman, 2 | Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, 3 | John Miller, Hongseok Namkoong, Hannaneh Hajishirzi, Ali Farhadi, 4 | Ludwig Schmidt 5 | 6 | Permission is hereby granted, free of charge, to any person obtaining 7 | a copy of this software and associated documentation files (the 8 | "Software"), to deal in the Software without restriction, including 9 | without limitation the rights to use, copy, modify, merge, publish, 10 | distribute, sublicense, and/or sell copies of the Software, and to 11 | permit persons to whom the Software is furnished to do so, subject to 12 | the following conditions: 13 | 14 | The above copyright notice and this permission notice shall be 15 | included in all copies or substantial portions of the Software. 16 | 17 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 18 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 19 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 20 | NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE 21 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 22 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION 23 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 24 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/script_examples/clipav2/vit_h14/i257_t32_finetunex4.sh: -------------------------------------------------------------------------------- 1 | # have not been tested. use it at your own discretion 2 | # the original experiment was run on tpu v3-256. 3 | # this example script assumes 8 gpus, each with huge memory. Tune batchsize, warmup, and lr accordingly if you have different machine setups. 4 | torchrun --nproc_per_node 8 -m open_clip_train.main \ 5 | --save-frequency 1 \ 6 | --save-most-recent \ 7 | --zeroshot-frequency 1 \ 8 | --train-data '/path/to/laion2b_or_datacomp1b' \ 9 | --train-num-samples 131072000 \ 10 | --dataset-type webdataset \ 11 | --lr "5.12e-5" \ 12 | --beta1 0.9 \ 13 | --beta2 0.95 \ 14 | --warmup 800 \ 15 | --wd 0.2 \ 16 | --batch-size 4096 \ 17 | --aug-cfg scale='(0.4, 1.0)' color_jitter='(0.32, 0.32, 0.32, 0.08)' color_jitter_prob=0.8 gray_scale_prob=0.2 \ 18 | --epochs 4 \ 19 | --workers 6 \ 20 | --model ViT-H-14-CL32-GAP \ 21 | --pretrained '/path/to/pretrain84_ckpt' \ 22 | --precision 'amp_bf16' \ 23 | --ddp-static-graph \ 24 | --local-loss \ 25 | --gather-with-grad \ 26 | --force-image-size 224 \ 27 | --force-patch-dropout 0.3 \ 28 | --grad-checkpointing \ 29 | --log-every-n-steps 64 \ 30 | --seed 0 \ 31 | --logs ./logs/ \ 32 | --imagenet-val '/path/to/imagenet/val' -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/script_examples/clipav2/vit_h14/i577_t32_finetunex1.sh: -------------------------------------------------------------------------------- 1 | # have not been tested. use it at your own discretion 2 | # the original experiment was run on tpu v3-256. 3 | # this example script assumes 8 gpus, each with huge memory. Tune batchsize, warmup, and lr accordingly if you have different machine setups. 
4 | torchrun --nproc_per_node 8 -m open_clip_train.main \ 5 | --save-frequency 1 \ 6 | --save-most-recent \ 7 | --zeroshot-frequency 1 \ 8 | --train-data '/path/to/laion2b_or_datacomp1b' \ 9 | --train-num-samples 131072000 \ 10 | --dataset-type webdataset \ 11 | --lr "6.4e-6" \ 12 | --beta1 0.9 \ 13 | --beta2 0.95 \ 14 | --warmup 1600 \ 15 | --wd 0.2 \ 16 | --batch-size 2048 \ 17 | --aug-cfg scale='(0.4, 1.0)' color_jitter='(0.32, 0.32, 0.32, 0.08)' color_jitter_prob=0.8 gray_scale_prob=0.2 \ 18 | --epochs 1 \ 19 | --workers 6 \ 20 | --model ViT-H-14-CL32-GAP \ 21 | --pretrained '/path/to/finetune224_ckpt' \ 22 | --precision 'amp_bf16' \ 23 | --ddp-static-graph \ 24 | --local-loss \ 25 | --gather-with-grad \ 26 | --force-image-size 336 \ 27 | --force-patch-dropout 0.4 \ 28 | --grad-checkpointing \ 29 | --log-every-n-steps 64 \ 30 | --seed 0 \ 31 | --logs ./logs/ \ 32 | --imagenet-val '/path/to/imagenet/val' -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/loss/robust_ce_loss.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from torch import nn, Tensor 3 | import numpy as np 4 | 5 | 6 | class RobustCrossEntropyLoss(nn.CrossEntropyLoss): 7 | """ 8 | this is just a compatibility layer because my target tensor is float and has an extra dimension 9 | 10 | input must be logits, not probabilities! 11 | """ 12 | def forward(self, input: Tensor, target: Tensor) -> Tensor: 13 | if len(target.shape) == len(input.shape): 14 | assert target.shape[1] == 1 15 | target = target[:, 0] 16 | return super().forward(input, target.long()) 17 | 18 | 19 | class TopKLoss(RobustCrossEntropyLoss): 20 | """ 21 | input must be logits, not probabilities! 
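    k is interpreted as a percentage: only the k percent of voxels with the highest per-voxel loss are kept and averaged (default k=10).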
22 | """ 23 | def __init__(self, weight=None, ignore_index: int = -100, k: float = 10, label_smoothing: float = 0): 24 | self.k = k 25 | super(TopKLoss, self).__init__(weight, False, ignore_index, reduce=False, label_smoothing=label_smoothing) 26 | 27 | def forward(self, inp, target): 28 | target = target[:, 0].long() 29 | res = super(TopKLoss, self).forward(inp, target) 30 | num_voxels = np.prod(res.shape, dtype=np.int64) 31 | res, _ = torch.topk(res.view((-1, )), int(num_voxels * self.k / 100), sorted=False) 32 | return res.mean() 33 | 34 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/__init__.py: -------------------------------------------------------------------------------- 1 | from .version import __version__ 2 | 3 | from .coca_model import CoCa 4 | from .constants import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD 5 | from .factory import create_model, create_model_and_transforms, create_model_from_pretrained, get_tokenizer, create_loss 6 | from .factory import list_models, add_model_config, get_model_config, load_checkpoint 7 | from .loss import ClipLoss, DistillClipLoss, CoCaLoss 8 | from .model import CLIP, CustomTextCLIP, CLIPTextCfg, CLIPVisionCfg, \ 9 | convert_weights_to_lp, convert_weights_to_fp16, trace_model, get_cast_dtype, get_input_dtype, \ 10 | get_model_tokenize_cfg, get_model_preprocess_cfg, set_model_preprocess_cfg 11 | from .openai import load_openai_model, list_openai_models 12 | from .pretrained import list_pretrained, list_pretrained_models_by_tag, list_pretrained_tags_by_model, \ 13 | get_pretrained_url, download_pretrained_from_url, is_pretrained_cfg, get_pretrained_cfg, download_pretrained 14 | from .push_to_hf_hub import push_pretrained_to_hf_hub, push_to_hf_hub 15 | from .tokenizer import SimpleTokenizer, tokenize, decode 16 | from .transform import image_transform, AugmentationCfg 17 | from .zero_shot_classifier import build_zero_shot_classifier, build_zero_shot_classifier_legacy 18 | from .zero_shot_metadata import OPENAI_IMAGENET_TEMPLATES, SIMPLE_IMAGENET_TEMPLATES, IMAGENET_CLASSNAMES 19 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/data_augmentation/nnUNetTrainerNoMirroring.py: -------------------------------------------------------------------------------- 1 | from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer 2 | 3 | 4 | class nnUNetTrainerNoMirroring(nnUNetTrainer): 5 | def configure_rotation_dummyDA_mirroring_and_inital_patch_size(self): 6 | rotation_for_DA, do_dummy_2d_data_aug, initial_patch_size, mirror_axes = \ 7 | super().configure_rotation_dummyDA_mirroring_and_inital_patch_size() 8 | mirror_axes = None 9 | self.inference_allowed_mirroring_axes = None 10 | return rotation_for_DA, do_dummy_2d_data_aug, initial_patch_size, mirror_axes 11 | 12 | 13 | class nnUNetTrainer_onlyMirror01(nnUNetTrainer): 14 | """ 15 | Only mirrors along spatial axes 0 and 1 for 3D and 0 for 2D 16 | """ 17 | def configure_rotation_dummyDA_mirroring_and_inital_patch_size(self): 18 | rotation_for_DA, do_dummy_2d_data_aug, initial_patch_size, mirror_axes = \ 19 | super().configure_rotation_dummyDA_mirroring_and_inital_patch_size() 20 | patch_size = self.configuration_manager.patch_size 21 | dim = len(patch_size) 22 | if dim == 2: 23 | mirror_axes = (0, ) 24 | else: 25 | mirror_axes = (0, 1) 26 | self.inference_allowed_mirroring_axes = mirror_axes 27 | return rotation_for_DA, 
do_dummy_2d_data_aug, initial_patch_size, mirror_axes 28 | 29 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/tests/integration_tests/run_integration_test.sh: -------------------------------------------------------------------------------- 1 | 2 | 3 | nnUNetv2_train $1 3d_fullres 0 -tr nnUNetTrainer_5epochs --npz 4 | nnUNetv2_train $1 3d_fullres 1 -tr nnUNetTrainer_5epochs --npz 5 | nnUNetv2_train $1 3d_fullres 2 -tr nnUNetTrainer_5epochs --npz 6 | nnUNetv2_train $1 3d_fullres 3 -tr nnUNetTrainer_5epochs --npz 7 | nnUNetv2_train $1 3d_fullres 4 -tr nnUNetTrainer_5epochs --npz 8 | 9 | nnUNetv2_train $1 2d 0 -tr nnUNetTrainer_5epochs --npz 10 | nnUNetv2_train $1 2d 1 -tr nnUNetTrainer_5epochs --npz 11 | nnUNetv2_train $1 2d 2 -tr nnUNetTrainer_5epochs --npz 12 | nnUNetv2_train $1 2d 3 -tr nnUNetTrainer_5epochs --npz 13 | nnUNetv2_train $1 2d 4 -tr nnUNetTrainer_5epochs --npz 14 | 15 | nnUNetv2_train $1 3d_lowres 0 -tr nnUNetTrainer_5epochs --npz 16 | nnUNetv2_train $1 3d_lowres 1 -tr nnUNetTrainer_5epochs --npz 17 | nnUNetv2_train $1 3d_lowres 2 -tr nnUNetTrainer_5epochs --npz 18 | nnUNetv2_train $1 3d_lowres 3 -tr nnUNetTrainer_5epochs --npz 19 | nnUNetv2_train $1 3d_lowres 4 -tr nnUNetTrainer_5epochs --npz 20 | 21 | nnUNetv2_train $1 3d_cascade_fullres 0 -tr nnUNetTrainer_5epochs --npz 22 | nnUNetv2_train $1 3d_cascade_fullres 1 -tr nnUNetTrainer_5epochs --npz 23 | nnUNetv2_train $1 3d_cascade_fullres 2 -tr nnUNetTrainer_5epochs --npz 24 | nnUNetv2_train $1 3d_cascade_fullres 3 -tr nnUNetTrainer_5epochs --npz 25 | nnUNetv2_train $1 3d_cascade_fullres 4 -tr nnUNetTrainer_5epochs --npz 26 | 27 | python nnunetv2/tests/integration_tests/run_integration_test_bestconfig_inference.py -d $1 -------------------------------------------------------------------------------- /segment-anything/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to segment-anything 2 | We want to make contributing to this project as easy and transparent as 3 | possible. 4 | 5 | ## Pull Requests 6 | We actively welcome your pull requests. 7 | 8 | 1. Fork the repo and create your branch from `main`. 9 | 2. If you've added code that should be tested, add tests. 10 | 3. If you've changed APIs, update the documentation. 11 | 4. Ensure the test suite passes. 12 | 5. Make sure your code lints, using the `linter.sh` script in the project's root directory. Linting requires `black==23.*`, `isort==5.12.0`, `flake8`, and `mypy`. 13 | 6. If you haven't already, complete the Contributor License Agreement ("CLA"). 14 | 15 | ## Contributor License Agreement ("CLA") 16 | In order to accept your pull request, we need you to submit a CLA. You only need 17 | to do this once to work on any of Facebook's open source projects. 18 | 19 | Complete your CLA here: 20 | 21 | ## Issues 22 | We use GitHub issues to track public bugs. Please ensure your description is 23 | clear and has sufficient instructions to be able to reproduce the issue. 24 | 25 | Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe 26 | disclosure of security bugs. In those cases, please go through the process 27 | outlined on that page and do not file a public issue. 28 | 29 | ## License 30 | By contributing to segment-anything, you agree that your contributions will be licensed 31 | under the LICENSE file in the root directory of this source tree. 
32 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset998_IntegrationTest_Hippocampus_ignore.py: -------------------------------------------------------------------------------- 1 | import shutil 2 | 3 | from batchgenerators.utilities.file_and_folder_operations import isdir, join, load_json, save_json 4 | 5 | from nnunetv2.utilities.dataset_name_id_conversion import maybe_convert_to_dataset_name 6 | from nnunetv2.paths import nnUNet_raw 7 | 8 | 9 | if __name__ == '__main__': 10 | dataset_name = 'IntegrationTest_Hippocampus_ignore' 11 | dataset_id = 998 12 | dataset_name = f"Dataset{dataset_id:03d}_{dataset_name}" 13 | 14 | try: 15 | existing_dataset_name = maybe_convert_to_dataset_name(dataset_id) 16 | if existing_dataset_name != dataset_name: 17 | raise FileExistsError(f"A different dataset with id {dataset_id} already exists :-(: {existing_dataset_name}. If " 18 | f"you intent to delete it, remember to also remove it in nnUNet_preprocessed and " 19 | f"nnUNet_results!") 20 | except RuntimeError: 21 | pass 22 | 23 | if isdir(join(nnUNet_raw, dataset_name)): 24 | shutil.rmtree(join(nnUNet_raw, dataset_name)) 25 | 26 | source_dataset = maybe_convert_to_dataset_name(4) 27 | shutil.copytree(join(nnUNet_raw, source_dataset), join(nnUNet_raw, dataset_name)) 28 | 29 | # set class 2 to ignore label 30 | dj = load_json(join(nnUNet_raw, dataset_name, 'dataset.json')) 31 | dj['labels']['ignore'] = 2 32 | del dj['labels']['Posterior'] 33 | save_json(dj, join(nnUNet_raw, dataset_name, 'dataset.json'), sort_keys=False) 34 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/loss/deep_supervision.py: -------------------------------------------------------------------------------- 1 | from torch import nn 2 | 3 | 4 | class DeepSupervisionWrapper(nn.Module): 5 | def __init__(self, loss, weight_factors=None): 6 | """ 7 | Wraps a loss function so that it can be applied to multiple outputs. Forward accepts an arbitrary number of 8 | inputs. Each input is expected to be a tuple/list. Each tuple/list must have the same length. The loss is then 9 | applied to each entry like this: 10 | l = w0 * loss(input0[0], input1[0], ...) + w1 * loss(input0[1], input1[1], ...) + ... 11 | If weights are None, all w will be 1. 12 | """ 13 | super(DeepSupervisionWrapper, self).__init__() 14 | self.weight_factors = weight_factors 15 | self.loss = loss 16 | 17 | def forward(self, *args): 18 | for i in args: 19 | assert isinstance(i, (tuple, list)), "all args must be either tuple or list, got %s" % type(i) 20 | # we could check for equal lengths here as well but we really shouldn't overdo it with checks because 21 | # this code is executed a lot of times! 
22 | 23 | if self.weight_factors is None: 24 | weights = [1] * len(args[0]) 25 | else: 26 | weights = self.weight_factors 27 | 28 | # we initialize the loss like this instead of 0 to ensure it sits on the correct device, not sure if that's 29 | # really necessary 30 | l = weights[0] * self.loss(*[j[0] for j in args]) 31 | for i, inputs in enumerate(zip(*args)): 32 | if i == 0: 33 | continue 34 | l += weights[i] * self.loss(*inputs) 35 | return l -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset997_IntegrationTest_Hippocampus_regions.py: -------------------------------------------------------------------------------- 1 | import shutil 2 | 3 | from batchgenerators.utilities.file_and_folder_operations import isdir, join, load_json, save_json 4 | 5 | from nnunetv2.utilities.dataset_name_id_conversion import maybe_convert_to_dataset_name 6 | from nnunetv2.paths import nnUNet_raw 7 | 8 | if __name__ == '__main__': 9 | dataset_name = 'IntegrationTest_Hippocampus_regions' 10 | dataset_id = 997 11 | dataset_name = f"Dataset{dataset_id:03d}_{dataset_name}" 12 | 13 | try: 14 | existing_dataset_name = maybe_convert_to_dataset_name(dataset_id) 15 | if existing_dataset_name != dataset_name: 16 | raise FileExistsError( 17 | f"A different dataset with id {dataset_id} already exists :-(: {existing_dataset_name}. If " 18 | f"you intent to delete it, remember to also remove it in nnUNet_preprocessed and " 19 | f"nnUNet_results!") 20 | except RuntimeError: 21 | pass 22 | 23 | if isdir(join(nnUNet_raw, dataset_name)): 24 | shutil.rmtree(join(nnUNet_raw, dataset_name)) 25 | 26 | source_dataset = maybe_convert_to_dataset_name(4) 27 | shutil.copytree(join(nnUNet_raw, source_dataset), join(nnUNet_raw, dataset_name)) 28 | 29 | # additionally optimize entire hippocampus region, remove Posterior 30 | dj = load_json(join(nnUNet_raw, dataset_name, 'dataset.json')) 31 | dj['labels'] = { 32 | 'background': 0, 33 | 'hippocampus': (1, 2), 34 | 'anterior': 1 35 | } 36 | dj['regions_class_order'] = (2, 1) 37 | save_json(dj, join(nnUNet_raw, dataset_name, 'dataset.json'), sort_keys=False) 38 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/tests/integration_tests/add_lowres_and_cascade.py: -------------------------------------------------------------------------------- 1 | from batchgenerators.utilities.file_and_folder_operations import * 2 | 3 | from nnunetv2.paths import nnUNet_preprocessed 4 | from nnunetv2.utilities.dataset_name_id_conversion import maybe_convert_to_dataset_name 5 | 6 | if __name__ == '__main__': 7 | import argparse 8 | 9 | parser = argparse.ArgumentParser() 10 | parser.add_argument('-d', nargs='+', type=int, help='List of dataset ids') 11 | args = parser.parse_args() 12 | 13 | for d in args.d: 14 | dataset_name = maybe_convert_to_dataset_name(d) 15 | plans = load_json(join(nnUNet_preprocessed, dataset_name, 'nnUNetPlans.json')) 16 | plans['configurations']['3d_lowres'] = { 17 | "data_identifier": "nnUNetPlans_3d_lowres", # do not be a dumbo and forget this. I was a dumbo. 
And I paid dearly with ~10 min debugging time 18 | 'inherits_from': '3d_fullres', 19 | "patch_size": [20, 28, 20], 20 | "median_image_size_in_voxels": [18.0, 25.0, 18.0], 21 | "spacing": [2.0, 2.0, 2.0], 22 | "n_conv_per_stage_encoder": [2, 2, 2], 23 | "n_conv_per_stage_decoder": [2, 2], 24 | "num_pool_per_axis": [2, 2, 2], 25 | "pool_op_kernel_sizes": [[1, 1, 1], [2, 2, 2], [2, 2, 2]], 26 | "conv_kernel_sizes": [[3, 3, 3], [3, 3, 3], [3, 3, 3]], 27 | "next_stage": "3d_cascade_fullres" 28 | } 29 | plans['configurations']['3d_cascade_fullres'] = { 30 | 'inherits_from': '3d_fullres', 31 | "previous_stage": "3d_lowres" 32 | } 33 | save_json(plans, join(nnUNet_preprocessed, dataset_name, 'nnUNetPlans.json'), sort_keys=False) -------------------------------------------------------------------------------- /segment-anything/segment_anything/modeling/common.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Meta Platforms, Inc. and affiliates. 2 | # All rights reserved. 3 | 4 | # This source code is licensed under the license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | import torch 8 | import torch.nn as nn 9 | 10 | from typing import Type 11 | 12 | 13 | class MLPBlock(nn.Module): 14 | def __init__( 15 | self, 16 | embedding_dim: int, 17 | mlp_dim: int, 18 | act: Type[nn.Module] = nn.GELU, 19 | ) -> None: 20 | super().__init__() 21 | self.lin1 = nn.Linear(embedding_dim, mlp_dim) 22 | self.lin2 = nn.Linear(mlp_dim, embedding_dim) 23 | self.act = act() 24 | 25 | def forward(self, x: torch.Tensor) -> torch.Tensor: 26 | return self.lin2(self.act(self.lin1(x))) 27 | 28 | 29 | # From https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/batch_norm.py # noqa 30 | # Itself from https://github.com/facebookresearch/ConvNeXt/blob/d1fa8f6fef0a165b27399986cc2bdacc92777e40/models/convnext.py#L119 # noqa 31 | class LayerNorm2d(nn.Module): 32 | def __init__(self, num_channels: int, eps: float = 1e-6) -> None: 33 | super().__init__() 34 | self.weight = nn.Parameter(torch.ones(num_channels)) 35 | self.bias = nn.Parameter(torch.zeros(num_channels)) 36 | self.eps = eps 37 | 38 | def forward(self, x: torch.Tensor) -> torch.Tensor: 39 | u = x.mean(1, keepdim=True) 40 | s = (x - u).pow(2).mean(1, keepdim=True) 41 | x = (x - u) / torch.sqrt(s + self.eps) 42 | x = self.weight[:, None, None] * x + self.bias[:, None, None] 43 | return x 44 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/experiment_planning/experiment_planners/readme.md: -------------------------------------------------------------------------------- 1 | What do experiment planners need to do (these are notes for myself while rewriting nnU-Net, they are provided as is 2 | without further explanations. 
These notes also include new features): 3 | - (done) preprocessor name should be configurable via cli 4 | - (done) gpu memory target should be configurable via cli 5 | - (done) plans name should be configurable via cli 6 | - (done) data name should be specified in plans (plans specify the data they want to use, this will allow us to manually 7 | edit plans files without having to copy the data folders) 8 | - plans must contain: 9 | - (done) transpose forward/backward 10 | - (done) preprocessor name (can differ for each config) 11 | - (done) spacing 12 | - (done) normalization scheme 13 | - (done) target spacing 14 | - (done) conv and pool op kernel sizes 15 | - (done) base num features for architecture 16 | - (done) data identifier 17 | - num conv per stage? 18 | - (done) use mask for norm 19 | - [NO. Handled by LabelManager & dataset.json] num segmentation outputs 20 | - [NO. Handled by LabelManager & dataset.json] ignore class 21 | - [NO. Handled by LabelManager & dataset.json] list of regions or classes 22 | - [NO. Handled by LabelManager & dataset.json] regions class order, if applicable 23 | - (done) resampling function to be used 24 | - (done) the image reader writer class that should be used 25 | 26 | 27 | dataset.json 28 | mandatory: 29 | - numTraining 30 | - labels (value 'ignore' has special meaning. Cannot have more than one ignore_label) 31 | - modalities 32 | - file_ending 33 | 34 | optional 35 | - overwrite_image_reader_writer (if absent, auto) 36 | - regions 37 | - region_class_order 38 | - -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/preprocessing/cropping/cropping.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | 4 | # Hello! crop_to_nonzero is the function you are looking for. Ignore the rest. 
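# Editor's note: a minimal usage sketch (array shapes are hypothetical, not taken from this repo):
#     data is (C, X, Y, Z) or (C, X, Y); seg, if provided, has a matching single-channel first axis
#     data, seg, bbox = crop_to_nonzero(data, seg, nonzero_label=-1)
# crop_to_nonzero crops data (and seg) to the bounding box of the union of nonzero voxels over all
# channels (with holes filled) and writes nonzero_label into seg wherever a voxel lies outside that
# mask but was labeled background (0). If seg is None, a mask with 0 inside and nonzero_label
# outside is returned in its place.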
5 | from acvl_utils.cropping_and_padding.bounding_boxes import get_bbox_from_mask, crop_to_bbox, bounding_box_to_slice 6 | 7 | 8 | def create_nonzero_mask(data): 9 | """ 10 | 11 | :param data: 12 | :return: the mask is True where the data is nonzero 13 | """ 14 | from scipy.ndimage import binary_fill_holes 15 | assert len(data.shape) == 4 or len(data.shape) == 3, "data must have shape (C, X, Y, Z) or shape (C, X, Y)" 16 | nonzero_mask = np.zeros(data.shape[1:], dtype=bool) 17 | for c in range(data.shape[0]): 18 | this_mask = data[c] != 0 19 | nonzero_mask = nonzero_mask | this_mask 20 | nonzero_mask = binary_fill_holes(nonzero_mask) 21 | return nonzero_mask 22 | 23 | 24 | def crop_to_nonzero(data, seg=None, nonzero_label=-1): 25 | """ 26 | 27 | :param data: 28 | :param seg: 29 | :param nonzero_label: this will be written into the segmentation map 30 | :return: 31 | """ 32 | nonzero_mask = create_nonzero_mask(data) 33 | bbox = get_bbox_from_mask(nonzero_mask) 34 | 35 | slicer = bounding_box_to_slice(bbox) 36 | data = data[tuple([slice(None), *slicer])] 37 | 38 | if seg is not None: 39 | seg = seg[tuple([slice(None), *slicer])] 40 | 41 | nonzero_mask = nonzero_mask[slicer][None] 42 | if seg is not None: 43 | seg[(seg == 0) & (~nonzero_mask)] = nonzero_label 44 | else: 45 | nonzero_mask = nonzero_mask.astype(np.int8) 46 | nonzero_mask[nonzero_mask == 0] = nonzero_label 47 | nonzero_mask[nonzero_mask > 0] = 0 48 | seg = nonzero_mask 49 | return data, seg, bbox 50 | 51 | 52 | -------------------------------------------------------------------------------- /weak_segmentation/.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | 27 | # PyInstaller 28 | # Usually these files are written by a python script from a template 29 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | .hypothesis/ 47 | 48 | # Translations 49 | *.mo 50 | *.pot 51 | 52 | # Django stuff: 53 | *.log 54 | local_settings.py 55 | 56 | # Flask stuff: 57 | instance/ 58 | .webassets-cache 59 | 60 | # Scrapy stuff: 61 | .scrapy 62 | 63 | # Sphinx documentation 64 | docs/_build/ 65 | 66 | # PyBuilder 67 | target/ 68 | 69 | # IPython Notebook 70 | .ipynb_checkpoints 71 | 72 | # pyenv 73 | .python-version 74 | 75 | # celery beat schedule file 76 | celerybeat-schedule 77 | 78 | # dotenv 79 | .env 80 | 81 | # virtualenv 82 | venv/ 83 | ENV/ 84 | 85 | # Spyder project settings 86 | .spyderproject 87 | 88 | # Rope project settings 89 | .ropeproject 90 | 91 | *.memmap 92 | *.png 93 | *.zip 94 | *.npz 95 | *.npy 96 | *.jpg 97 | *.jpeg 98 | .idea 99 | *.txt 100 | .idea/* 101 | *.png 102 | *.nii.gz 103 | *.nii 104 | *.tif 105 | *.bmp 106 | *.pkl 107 | *.xml 108 | *.pkl 109 | *.pdf 110 | *.png 111 | *.jpg 112 | *.jpeg 113 | 114 | *.model 115 | 116 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/tests/test_inference_simple.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from PIL import Image 3 | from open_clip.factory import get_tokenizer 4 | import pytest 5 | import open_clip 6 | import os 7 | os.environ["CUDA_VISIBLE_DEVICES"] = "" 8 | 9 | if hasattr(torch._C, '_jit_set_profiling_executor'): 10 | # legacy executor is too slow to compile large models for unit tests 11 | # no need for the fusion performance here 12 | torch._C._jit_set_profiling_executor(True) 13 | torch._C._jit_set_profiling_mode(False) 14 | 15 | 16 | test_simple_models = [ 17 | # model, pretrained, jit, force_custom_text 18 | ("ViT-B-32", "laion2b_s34b_b79k", False, False), 19 | ("ViT-B-32", "laion2b_s34b_b79k", True, False), 20 | ("ViT-B-32", "laion2b_s34b_b79k", True, True), 21 | ("roberta-ViT-B-32", "laion2b_s12b_b32k", False, False), 22 | ] 23 | 24 | 25 | @pytest.mark.parametrize("model_type,pretrained,jit,force_custom_text", test_simple_models) 26 | def test_inference_simple( 27 | model_type, 28 | pretrained, 29 | jit, 30 | force_custom_text, 31 | ): 32 | model, _, preprocess = open_clip.create_model_and_transforms( 33 | model_type, 34 | pretrained=pretrained, 35 | jit=jit, 36 | force_custom_text=force_custom_text, 37 | ) 38 | tokenizer = get_tokenizer(model_type) 39 | 40 | current_dir = os.path.dirname(os.path.realpath(__file__)) 41 | 42 | image = preprocess(Image.open(current_dir + "/../docs/CLIP.png")).unsqueeze(0) 43 | text = tokenizer(["a diagram", "a dog", "a cat"]) 44 | 45 | with torch.no_grad(): 46 | image_features = model.encode_image(image) 47 | text_features = model.encode_text(text) 48 | 49 | text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1) 50 | 51 | assert torch.allclose(text_probs.cpu()[0], torch.tensor([1.0, 0.0, 0.0])) 52 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/data_augmentation/custom_transforms/region_based_training.py: -------------------------------------------------------------------------------- 1 | from typing import List, Tuple, Union 2 | 3 | from batchgenerators.transforms.abstract_transforms import AbstractTransform 4 | 
import numpy as np 5 | 6 | 7 | class ConvertSegmentationToRegionsTransform(AbstractTransform): 8 | def __init__(self, regions: Union[List, Tuple], 9 | seg_key: str = "seg", output_key: str = "seg", seg_channel: int = 0): 10 | """ 11 | regions are tuple of tuples where each inner tuple holds the class indices that are merged into one region, 12 | example: 13 | regions= ((1, 2), (2, )) will result in 2 regions: one covering the region of labels 1&2 and the other just 2 14 | :param regions: 15 | :param seg_key: 16 | :param output_key: 17 | """ 18 | self.seg_channel = seg_channel 19 | self.output_key = output_key 20 | self.seg_key = seg_key 21 | self.regions = regions 22 | 23 | def __call__(self, **data_dict): 24 | seg = data_dict.get(self.seg_key) 25 | num_regions = len(self.regions) 26 | if seg is not None: 27 | seg_shp = seg.shape 28 | output_shape = list(seg_shp) 29 | output_shape[1] = num_regions 30 | region_output = np.zeros(output_shape, dtype=seg.dtype) 31 | for b in range(seg_shp[0]): 32 | for region_id, region_source_labels in enumerate(self.regions): 33 | if not isinstance(region_source_labels, (list, tuple)): 34 | region_source_labels = (region_source_labels, ) 35 | for label_value in region_source_labels: 36 | region_output[b, region_id][seg[b, self.seg_channel] == label_value] = 1 37 | data_dict[self.output_key] = region_output 38 | return data_dict 39 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/docs/script_examples/stability_example.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #SBATCH --partition=g40423 3 | #SBATCH --job-name=testopenclip 4 | #SBATCH --nodes 30 5 | #SBATCH --ntasks-per-node=8 6 | #SBATCH --cpus-per-task=12 7 | #SBATCH --output=%x_%j.out 8 | #SBATCH --comment=laion 9 | #SBATCH --open-mode=append 10 | #SBATCH --exclusive 11 | 12 | module load openmpi 13 | module load cuda/11.7 14 | 15 | export MASTER_ADDR=`hostname` 16 | export MASTER_PORT=12802 17 | export NCCL_PROTO=simple 18 | export FI_EFA_FORK_SAFE=1 19 | export FI_LOG_LEVEL=1 20 | export FI_EFA_USE_DEVICE_RDMA=1 21 | export NCCL_DEBUG=info 22 | 23 | export PYTHONFAULTHANDLER=1 24 | 25 | export CUDA_LAUNCH_BLOCKING=0 26 | export OMPI_MCA_mtl_base_verbose=1 27 | export FI_EFA_ENABLE_SHM_TRANSFER=0 28 | export FI_PROVIDER=efa 29 | export FI_EFA_TX_MIN_CREDITS=64 30 | export NCCL_TREE_THRESHOLD=0 31 | 32 | cd /admin/home-mitchellw/open_clip/src 33 | export PYTHONPATH="$PYTHONPATH:/admin/home-mitchellw/open_clip/src" 34 | 35 | EXP_NAME="test-B-32-laion5b-lr1e-3-bs90k" 36 | 37 | srun --comment laion --cpu_bind=v --accel-bind=gn python -m open_clip_train.main \ 38 | --save-frequency 1 \ 39 | --train-data="pipe:aws s3 cp s3://s-datasets/laion5b/{laion2B-data/{000000..231349}.tar,laion2B-multi-data/{000000..226687}.tar,laion1B-nolang-data/{000000..127231}.tar} -" \ 40 | --train-num-samples 135646078 \ 41 | --dataset-type webdataset \ 42 | --dataset-resampled \ 43 | --warmup 2000 \ 44 | --batch-size=375 \ 45 | --epochs=97 \ 46 | --lr 1e-3 \ 47 | --workers=8 \ 48 | --report-to wandb \ 49 | --name ${EXP_NAME} \ 50 | --logs /scratch/logs/ \ 51 | --model ViT-B-32 \ 52 | --seed 0 \ 53 | --ddp-static-graph \ 54 | --local-loss \ 55 | --gather-with-grad \ 56 | --grad-checkpointing \ 57 | --precision amp_bfloat16 \ 58 | --wandb-project-name open_clip6 \ 59 | --resume "latest" \ 60 | --remote-sync s3://s-laion/mitchellw/logs 61 | 
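# Editor's note: the job above targets a 30-node EFA cluster via SLURM. For a quick single-node
# sanity check of the same entry point, something along these lines should work (the shard path is
# a placeholder, not from this repo; assumes torchrun and an open_clip install with the training extras):
#
#   torchrun --nproc_per_node=1 -m open_clip_train.main \
#       --model ViT-B-32 \
#       --train-data "/path/to/shards/{000000..000009}.tar" \
#       --dataset-type webdataset \
#       --train-num-samples 10000 \
#       --batch-size 128 \
#       --epochs 1 \
#       --precision amp_bfloat16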
-------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/ddp_allgather.py: -------------------------------------------------------------------------------- 1 | # Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | from typing import Any, Optional, Tuple 15 | 16 | import torch 17 | from torch import distributed 18 | 19 | 20 | def print_if_rank0(*args): 21 | if distributed.get_rank() == 0: 22 | print(*args) 23 | 24 | 25 | class AllGatherGrad(torch.autograd.Function): 26 | # stolen from pytorch lightning 27 | @staticmethod 28 | def forward( 29 | ctx: Any, 30 | tensor: torch.Tensor, 31 | group: Optional["torch.distributed.ProcessGroup"] = None, 32 | ) -> torch.Tensor: 33 | ctx.group = group 34 | 35 | gathered_tensor = [torch.zeros_like(tensor) for _ in range(torch.distributed.get_world_size())] 36 | 37 | torch.distributed.all_gather(gathered_tensor, tensor, group=group) 38 | gathered_tensor = torch.stack(gathered_tensor, dim=0) 39 | 40 | return gathered_tensor 41 | 42 | @staticmethod 43 | def backward(ctx: Any, *grad_output: torch.Tensor) -> Tuple[torch.Tensor, None]: 44 | grad_output = torch.cat(grad_output) 45 | 46 | torch.distributed.all_reduce(grad_output, op=torch.distributed.ReduceOp.SUM, async_op=False, group=ctx.group) 47 | 48 | return grad_output[torch.distributed.get_rank()], None 49 | 50 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/utils.py: -------------------------------------------------------------------------------- 1 | # Copyright 2021 HIP Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center 2 | # (DKFZ), Heidelberg, Germany 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | from typing import Union 16 | 17 | from batchgenerators.utilities.file_and_folder_operations import * 18 | import numpy as np 19 | import re 20 | 21 | 22 | def get_identifiers_from_splitted_dataset_folder(folder: str, file_ending: str): 23 | files = subfiles(folder, suffix=file_ending, join=False) 24 | # all files must be .nii.gz and have 4 digit channel index 25 | crop = len(file_ending) + 5 26 | files = [i[:-crop] for i in files] 27 | # only unique image ids 28 | files = np.unique(files) 29 | return files 30 | 31 | 32 | def create_lists_from_splitted_dataset_folder(folder: str, file_ending: str, identifiers: List[str] = None) -> List[List[str]]: 33 | """ 34 | does not rely on dataset.json 35 | """ 36 | if identifiers is None: 37 | identifiers = get_identifiers_from_splitted_dataset_folder(folder, file_ending) 38 | files = subfiles(folder, suffix=file_ending, join=False, sort=True) 39 | list_of_lists = [] 40 | for f in identifiers: 41 | p = re.compile(re.escape(f) + r"_\d\d\d\d" + re.escape(file_ending)) 42 | list_of_lists.append([join(folder, i) for i in files if p.fullmatch(i)]) 43 | return list_of_lists 44 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip_train/scheduler.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | 4 | def assign_learning_rate(optimizer, new_lr): 5 | for param_group in optimizer.param_groups: 6 | param_group["lr"] = new_lr 7 | 8 | 9 | def _warmup_lr(base_lr, warmup_length, step): 10 | return base_lr * (step + 1) / warmup_length 11 | 12 | 13 | def const_lr(optimizer, base_lr, warmup_length, steps): 14 | def _lr_adjuster(step): 15 | if step < warmup_length: 16 | lr = _warmup_lr(base_lr, warmup_length, step) 17 | else: 18 | lr = base_lr 19 | assign_learning_rate(optimizer, lr) 20 | return lr 21 | return _lr_adjuster 22 | 23 | 24 | def const_lr_cooldown(optimizer, base_lr, warmup_length, steps, cooldown_steps, cooldown_power=1.0, cooldown_end_lr=0.): 25 | def _lr_adjuster(step): 26 | start_cooldown_step = steps - cooldown_steps 27 | if step < warmup_length: 28 | lr = _warmup_lr(base_lr, warmup_length, step) 29 | else: 30 | if step < start_cooldown_step: 31 | lr = base_lr 32 | else: 33 | e = step - start_cooldown_step 34 | es = steps - start_cooldown_step 35 | # linear decay if power == 1; polynomial decay otherwise; 36 | decay = (1 - (e/es)) ** cooldown_power 37 | lr = decay * (base_lr - cooldown_end_lr) + cooldown_end_lr 38 | assign_learning_rate(optimizer, lr) 39 | return lr 40 | return _lr_adjuster 41 | 42 | 43 | def cosine_lr(optimizer, base_lr, warmup_length, steps): 44 | def _lr_adjuster(step): 45 | if step < warmup_length: 46 | lr = _warmup_lr(base_lr, warmup_length, step) 47 | else: 48 | e = step - warmup_length 49 | es = steps - warmup_length 50 | lr = 0.5 * (1 + np.cos(np.pi * e / es)) * base_lr 51 | assign_learning_rate(optimizer, lr) 52 | return lr 53 | return _lr_adjuster 54 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/paths.py: -------------------------------------------------------------------------------- 1 | # Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 
5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import os 16 | 17 | """ 18 | PLEASE READ paths.md FOR INFORMATION TO HOW TO SET THIS UP 19 | """ 20 | 21 | nnUNet_raw = os.environ.get('nnUNet_raw') 22 | nnUNet_preprocessed = os.environ.get('nnUNet_preprocessed') 23 | nnUNet_results = os.environ.get('nnUNet_results') 24 | 25 | if nnUNet_raw is None: 26 | print("nnUNet_raw is not defined and nnU-Net can only be used on data for which preprocessed files " 27 | "are already present on your system. nnU-Net cannot be used for experiment planning and preprocessing like " 28 | "this. If this is not intended, please read documentation/setting_up_paths.md for information on how to set " 29 | "this up properly.") 30 | 31 | if nnUNet_preprocessed is None: 32 | print("nnUNet_preprocessed is not defined and nnU-Net can not be used for preprocessing " 33 | "or training. If this is not intended, please read documentation/setting_up_paths.md for information on how " 34 | "to set this up.") 35 | 36 | if nnUNet_results is None: 37 | print("nnUNet_results is not defined and nnU-Net cannot be used for training or " 38 | "inference. If this is not intended behavior, please read documentation/setting_up_paths.md for information " 39 | "on how to set this up.") 40 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/default_n_proc_DA.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import os 3 | 4 | 5 | def get_allowed_n_proc_DA(): 6 | """ 7 | This function is used to set the number of processes used on different Systems. It is specific to our cluster 8 | infrastructure at DKFZ. You can modify it to suit your needs. Everything is allowed. 9 | 10 | IMPORTANT: if the environment variable nnUNet_n_proc_DA is set it will overwrite anything in this script 11 | (see first line). 12 | 13 | Interpret the output as the number of processes used for data augmentation PER GPU. 14 | 15 | The way it is implemented here is simply a look up table. We know the hostnames, CPU and GPU configurations of our 16 | systems and set the numbers accordingly. For example, a system with 4 GPUs and 48 threads can use 12 threads per 17 | GPU without overloading the CPU (technically 11 because we have a main process as well), so that's what we use. 
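    Editor's note, a worked example with hypothetical numbers: a node with 4 GPUs and 48 CPU threads
    gives 48 / 4 = 12 data augmentation workers per GPU; leaving one thread for the main training
    process makes 11-12 a sensible choice, and 12 is also the fallback default below. To bypass the
    lookup table entirely, set the environment variable, e.g. `nnUNet_n_proc_DA=16 nnUNetv2_train ...`.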
18 | """ 19 | 20 | if 'nnUNet_n_proc_DA' in os.environ.keys(): 21 | use_this = int(os.environ['nnUNet_n_proc_DA']) 22 | else: 23 | hostname = subprocess.getoutput(['hostname']) 24 | if hostname in ['Fabian', ]: 25 | use_this = 12 26 | elif hostname in ['hdf19-gpu16', 'hdf19-gpu17', 'hdf19-gpu18', 'hdf19-gpu19', 'e230-AMDworkstation']: 27 | use_this = 16 28 | elif hostname.startswith('e230-dgx1'): 29 | use_this = 10 30 | elif hostname.startswith('hdf18-gpu') or hostname.startswith('e132-comp'): 31 | use_this = 16 32 | elif hostname.startswith('e230-dgx2'): 33 | use_this = 6 34 | elif hostname.startswith('e230-dgxa100-'): 35 | use_this = 28 36 | elif hostname.startswith('lsf22-gpu'): 37 | use_this = 28 38 | elif hostname.startswith('hdf19-gpu') or hostname.startswith('e071-gpu'): 39 | use_this = 12 40 | else: 41 | use_this = 12 # default value 42 | 43 | use_this = min(use_this, os.cpu_count()) 44 | return use_this 45 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/model_sharing/model_download.py: -------------------------------------------------------------------------------- 1 | from typing import Optional 2 | 3 | import requests 4 | from batchgenerators.utilities.file_and_folder_operations import * 5 | from time import time 6 | from nnunetv2.model_sharing.model_import import install_model_from_zip_file 7 | from nnunetv2.paths import nnUNet_results 8 | from tqdm import tqdm 9 | 10 | 11 | def download_and_install_from_url(url): 12 | assert nnUNet_results is not None, "Cannot install model because network_training_output_dir is not " \ 13 | "set (RESULTS_FOLDER missing as environment variable, see " \ 14 | "Installation instructions)" 15 | print('Downloading pretrained model from url:', url) 16 | import http.client 17 | http.client.HTTPConnection._http_vsn = 10 18 | http.client.HTTPConnection._http_vsn_str = 'HTTP/1.0' 19 | 20 | import os 21 | home = os.path.expanduser('~') 22 | random_number = int(time() * 1e7) 23 | tempfile = join(home, '.nnunetdownload_%s' % str(random_number)) 24 | 25 | try: 26 | download_file(url=url, local_filename=tempfile, chunk_size=8192 * 16) 27 | print("Download finished. Extracting...") 28 | install_model_from_zip_file(tempfile) 29 | print("Done") 30 | except Exception as e: 31 | raise e 32 | finally: 33 | if isfile(tempfile): 34 | os.remove(tempfile) 35 | 36 | 37 | def download_file(url: str, local_filename: str, chunk_size: Optional[int] = 8192 * 16) -> str: 38 | # borrowed from https://stackoverflow.com/questions/16694907/download-large-file-in-python-with-requests 39 | # NOTE the stream=True parameter below 40 | with requests.get(url, stream=True, timeout=100) as r: 41 | r.raise_for_status() 42 | with tqdm.wrapattr(open(local_filename, 'wb'), "write", total=int(r.headers.get("Content-Length"))) as f: 43 | for chunk in r.iter_content(chunk_size=chunk_size): 44 | f.write(chunk) 45 | return local_filename 46 | 47 | 48 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/tests/integration_tests/lsf_commands.sh: -------------------------------------------------------------------------------- 1 | bsub -q gpu.legacy -gpu num=1:j_exclusive=yes:gmem=1G -L /bin/bash ". /home/isensee/load_env_cluster4.sh && cd /home/isensee/git_repos/nnunet_remake && export nnUNet_keep_files_open=True && . nnunetv2/tests/integration_tests/run_integration_test.sh 996" 2 | bsub -q gpu.legacy -gpu num=1:j_exclusive=yes:gmem=1G -L /bin/bash ". 
/home/isensee/load_env_cluster4.sh && cd /home/isensee/git_repos/nnunet_remake && export nnUNet_keep_files_open=True && . nnunetv2/tests/integration_tests/run_integration_test.sh 997" 3 | bsub -q gpu.legacy -gpu num=1:j_exclusive=yes:gmem=1G -L /bin/bash ". /home/isensee/load_env_cluster4.sh && cd /home/isensee/git_repos/nnunet_remake && export nnUNet_keep_files_open=True && . nnunetv2/tests/integration_tests/run_integration_test.sh 998" 4 | bsub -q gpu.legacy -gpu num=1:j_exclusive=yes:gmem=1G -L /bin/bash ". /home/isensee/load_env_cluster4.sh && cd /home/isensee/git_repos/nnunet_remake && export nnUNet_keep_files_open=True && . nnunetv2/tests/integration_tests/run_integration_test.sh 999" 5 | 6 | 7 | bsub -q gpu.legacy -gpu num=2:j_exclusive=yes:gmem=1G -L /bin/bash ". /home/isensee/load_env_cluster4.sh && cd /home/isensee/git_repos/nnunet_remake && export nnUNet_keep_files_open=True && . nnunetv2/tests/integration_tests/run_integration_test_trainingOnly_DDP.sh 996" 8 | bsub -q gpu.legacy -gpu num=2:j_exclusive=yes:gmem=1G -L /bin/bash ". /home/isensee/load_env_cluster4.sh && cd /home/isensee/git_repos/nnunet_remake && export nnUNet_keep_files_open=True && . nnunetv2/tests/integration_tests/run_integration_test_trainingOnly_DDP.sh 997" 9 | bsub -q gpu.legacy -gpu num=2:j_exclusive=yes:gmem=1G -L /bin/bash ". /home/isensee/load_env_cluster4.sh && cd /home/isensee/git_repos/nnunet_remake && export nnUNet_keep_files_open=True && . nnunetv2/tests/integration_tests/run_integration_test_trainingOnly_DDP.sh 998" 10 | bsub -q gpu.legacy -gpu num=2:j_exclusive=yes:gmem=1G -L /bin/bash ". /home/isensee/load_env_cluster4.sh && cd /home/isensee/git_repos/nnunet_remake && export nnUNet_keep_files_open=True && . nnunetv2/tests/integration_tests/run_integration_test_trainingOnly_DDP.sh 999" 11 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/lr_scheduler/polylr.py: -------------------------------------------------------------------------------- 1 | from torch.optim.lr_scheduler import _LRScheduler 2 | 3 | 4 | class PolyLRScheduler(_LRScheduler): 5 | 6 | def __init__(self, optimizer, initial_lr: float, max_steps: int, num_of_cycles:int = 1, gamma: float = 0.8, exponent: float = 0.9, current_step: int = None): 7 | self.optimizer = optimizer 8 | self.initial_lr = initial_lr 9 | self.max_steps = max_steps #$ total amount of epochs 10 | self.exponent = exponent #$ epsilon 11 | self.num_of_cycles = num_of_cycles #$ amount of cycles C 12 | self.ctr = 0 13 | self.gamma = gamma #$ gamma sets the start of epoch part two 14 | super().__init__(optimizer, current_step if current_step is not None else -1, False) 15 | 16 | def step(self, current_step=None): 17 | #$ if number of cycles is 1, then we use the regular nnUNet poly lr scheduler 18 | if self.num_of_cycles == 1: 19 | if current_step is None or current_step == -1: 20 | current_step = self.ctr 21 | self.ctr += 1 22 | 23 | new_lr = self.initial_lr * (1 - current_step / self.max_steps) ** self.exponent 24 | for param_group in self.optimizer.param_groups: 25 | param_group['lr'] = new_lr 26 | 27 | #$ If number of cycles is more than 1, then we use the modified poly lr scheduler 28 | #$ alpha_r is rigid. It is set to 0.01 - this is the initial lr for the first step of each cycle. 
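        # Editor's note, a worked example with hypothetical settings: max_steps=1000, num_of_cycles=4,
        # gamma=0.8 and exponent=0.9 give Tc = 1000 // 4 = 250 and cap step_part at int(0.8 * 250) = 200.
        # Each cycle therefore restarts at initial_lr when tc == 0, decays polynomially while tc < 200,
        # and plateaus at 0.01 * (1 - 200 / 1000) ** 0.9 ~= 0.0082 for the remaining 50 steps of the cycle.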
29 | else: 30 | if current_step is None or current_step == -1: 31 | current_step = self.ctr 32 | self.ctr += 1 33 | 34 | Tc = self.max_steps//self.num_of_cycles 35 | tc = current_step%Tc 36 | step_part = min(tc,int(self.gamma*Tc)) 37 | print('->>>>>>>>>> current tc is:' ,tc) 38 | if tc == 0 : 39 | new_lr = self.initial_lr 40 | else: 41 | alpha_r = 0.01 42 | new_lr = alpha_r * (1 - step_part / self.max_steps) ** self.exponent 43 | 44 | for param_group in self.optimizer.param_groups: 45 | param_group['lr'] = new_lr -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/dataloading/utils.py: -------------------------------------------------------------------------------- 1 | import multiprocessing 2 | import os 3 | from multiprocessing import Pool 4 | from typing import List 5 | 6 | import numpy as np 7 | from batchgenerators.utilities.file_and_folder_operations import isfile, subfiles 8 | from nnunetv2.configuration import default_num_processes 9 | 10 | 11 | def _convert_to_npy(npz_file: str, unpack_segmentation: bool = True, overwrite_existing: bool = False) -> None: 12 | try: 13 | a = np.load(npz_file) # inexpensive, no compression is done here. This just reads metadata 14 | if overwrite_existing or not isfile(npz_file[:-3] + "npy"): 15 | np.save(npz_file[:-3] + "npy", a['data']) 16 | if unpack_segmentation and (overwrite_existing or not isfile(npz_file[:-4] + "_seg.npy")): 17 | np.save(npz_file[:-4] + "_seg.npy", a['seg']) 18 | except KeyboardInterrupt: 19 | if isfile(npz_file[:-3] + "npy"): 20 | os.remove(npz_file[:-3] + "npy") 21 | if isfile(npz_file[:-4] + "_seg.npy"): 22 | os.remove(npz_file[:-4] + "_seg.npy") 23 | raise KeyboardInterrupt 24 | 25 | 26 | def unpack_dataset(folder: str, unpack_segmentation: bool = True, overwrite_existing: bool = False, 27 | num_processes: int = default_num_processes): 28 | """ 29 | all npz files in this folder belong to the dataset, unpack them all 30 | """ 31 | with multiprocessing.get_context("spawn").Pool(num_processes) as p: 32 | npz_files = subfiles(folder, True, None, ".npz", True) 33 | p.starmap(_convert_to_npy, zip(npz_files, 34 | [unpack_segmentation] * len(npz_files), 35 | [overwrite_existing] * len(npz_files)) 36 | ) 37 | 38 | 39 | def get_case_identifiers(folder: str) -> List[str]: 40 | """ 41 | finds all npz files in the given folder and reconstructs the training case names from them 42 | """ 43 | case_identifiers = [i[:-4] for i in os.listdir(folder) if i.endswith("npz") and (i.find("segFromPrevStage") == -1)] 44 | return case_identifiers 45 | 46 | 47 | if __name__ == '__main__': 48 | unpack_dataset('/media/fabian/data/nnUNet_preprocessed/Dataset002_Heart/2d') -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/batch_running/benchmarking/generate_benchmarking_commands.py: -------------------------------------------------------------------------------- 1 | if __name__ == '__main__': 2 | """ 3 | This code probably only works within the DKFZ infrastructure (using LSF). You will need to adapt it to your scheduler! 
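    Editor's note: each line written to output_file is one complete bsub call of the form
    bsub <host/resource/queue/gpu args> -L /bin/bash "source ~/load_env_torch210.sh &&
    nnUNet_compile=False nnUNet_results=<benchmark results dir> nnUNetv2_train <dataset> <config>
    <fold> -tr <trainer> -p nnUNetPlans -num_gpus 1", so adapting the script to another scheduler
    mostly means replacing the bsub prefix with the equivalent sbatch/qsub arguments.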
4 | """ 5 | gpu_models = [#'NVIDIAA100_PCIE_40GB', 'NVIDIAGeForceRTX2080Ti', 'NVIDIATITANRTX', 'TeslaV100_SXM2_32GB', 6 | 'NVIDIAA100_SXM4_40GB']#, 'TeslaV100_PCIE_32GB'] 7 | datasets = [2, 3, 4, 5] 8 | trainers = ['nnUNetTrainerBenchmark_5epochs', 'nnUNetTrainerBenchmark_5epochs_noDataLoading'] 9 | plans = ['nnUNetPlans'] 10 | configs = ['2d', '2d_bs3x', '2d_bs6x', '3d_fullres', '3d_fullres_bs3x', '3d_fullres_bs6x'] 11 | num_gpus = 1 12 | 13 | benchmark_configurations = {d: configs for d in datasets} 14 | 15 | exclude_hosts = "-R \"select[hname!='e230-dgxa100-1']'\"" 16 | resources = "-R \"tensorcore\"" 17 | queue = "-q gpu" 18 | preamble = "-L /bin/bash \"source ~/load_env_torch210.sh && " 19 | train_command = 'nnUNet_compile=False nnUNet_results=/dkfz/cluster/gpu/checkpoints/OE0441/isensee/nnUNet_results_remake_benchmark nnUNetv2_train' 20 | 21 | folds = (0, ) 22 | 23 | use_these_modules = { 24 | tr: plans for tr in trainers 25 | } 26 | 27 | additional_arguments = f' -num_gpus {num_gpus}' # '' 28 | 29 | output_file = "/home/isensee/deleteme.txt" 30 | with open(output_file, 'w') as f: 31 | for g in gpu_models: 32 | gpu_requirements = f"-gpu num={num_gpus}:j_exclusive=yes:gmodel={g}" 33 | for tr in use_these_modules.keys(): 34 | for p in use_these_modules[tr]: 35 | for dataset in benchmark_configurations.keys(): 36 | for config in benchmark_configurations[dataset]: 37 | for fl in folds: 38 | command = f'bsub {exclude_hosts} {resources} {queue} {gpu_requirements} {preamble} {train_command} {dataset} {config} {fl} -tr {tr} -p {p}' 39 | if additional_arguments is not None and len(additional_arguments) > 0: 40 | command += f' {additional_arguments}' 41 | f.write(f'{command}\"\n') -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/data_augmentation/nnUNetTrainerNoDA.py: -------------------------------------------------------------------------------- 1 | from typing import Union, Tuple, List 2 | 3 | from batchgenerators.transforms.abstract_transforms import AbstractTransform 4 | 5 | from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer 6 | import numpy as np 7 | 8 | 9 | class nnUNetTrainerNoDA(nnUNetTrainer): 10 | @staticmethod 11 | def get_training_transforms(patch_size: Union[np.ndarray, Tuple[int]], 12 | rotation_for_DA: dict, 13 | deep_supervision_scales: Union[List, Tuple], 14 | mirror_axes: Tuple[int, ...], 15 | do_dummy_2d_data_aug: bool, 16 | order_resampling_data: int = 1, 17 | order_resampling_seg: int = 0, 18 | border_val_seg: int = -1, 19 | use_mask_for_norm: List[bool] = None, 20 | is_cascaded: bool = False, 21 | foreground_labels: Union[Tuple[int, ...], List[int]] = None, 22 | regions: List[Union[List[int], Tuple[int, ...], int]] = None, 23 | ignore_label: int = None) -> AbstractTransform: 24 | return nnUNetTrainer.get_validation_transforms(deep_supervision_scales, is_cascaded, foreground_labels, 25 | regions, ignore_label) 26 | 27 | def get_plain_dataloaders(self, initial_patch_size: Tuple[int, ...], dim: int): 28 | return super().get_plain_dataloaders( 29 | initial_patch_size=self.configuration_manager.patch_size, 30 | dim=dim 31 | ) 32 | 33 | def configure_rotation_dummyDA_mirroring_and_inital_patch_size(self): 34 | # we need to disable mirroring here so that no mirroring will be applied in inferene! 
35 | rotation_for_DA, do_dummy_2d_data_aug, initial_patch_size, mirror_axes = \ 36 | super().configure_rotation_dummyDA_mirroring_and_inital_patch_size() 37 | mirror_axes = None 38 | self.inference_allowed_mirroring_axes = None 39 | return rotation_for_DA, do_dummy_2d_data_aug, initial_patch_size, mirror_axes 40 | 41 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/dataset_conversion/Dataset220_KiTS2023.py: -------------------------------------------------------------------------------- 1 | from batchgenerators.utilities.file_and_folder_operations import * 2 | import shutil 3 | from nnunetv2.dataset_conversion.generate_dataset_json import generate_dataset_json 4 | from nnunetv2.paths import nnUNet_raw 5 | 6 | 7 | def convert_kits2023(kits_base_dir: str, nnunet_dataset_id: int = 220): 8 | task_name = "KiTS2023" 9 | 10 | foldername = "Dataset%03.0d_%s" % (nnunet_dataset_id, task_name) 11 | 12 | # setting up nnU-Net folders 13 | out_base = join(nnUNet_raw, foldername) 14 | imagestr = join(out_base, "imagesTr") 15 | labelstr = join(out_base, "labelsTr") 16 | maybe_mkdir_p(imagestr) 17 | maybe_mkdir_p(labelstr) 18 | 19 | cases = subdirs(kits_base_dir, prefix='case_', join=False) 20 | for tr in cases: 21 | shutil.copy(join(kits_base_dir, tr, 'imaging.nii.gz'), join(imagestr, f'{tr}_0000.nii.gz')) 22 | shutil.copy(join(kits_base_dir, tr, 'segmentation.nii.gz'), join(labelstr, f'{tr}.nii.gz')) 23 | 24 | generate_dataset_json(out_base, {0: "CT"}, 25 | labels={ 26 | "background": 0, 27 | "kidney": (1, 2, 3), 28 | "masses": (2, 3), 29 | "tumor": 2 30 | }, 31 | regions_class_order=(1, 3, 2), 32 | num_training_cases=len(cases), file_ending='.nii.gz', 33 | dataset_name=task_name, reference='none', 34 | release='prerelease', 35 | overwrite_image_reader_writer='NibabelIOWithReorient', 36 | description="KiTS2023") 37 | 38 | 39 | if __name__ == '__main__': 40 | import argparse 41 | parser = argparse.ArgumentParser() 42 | parser.add_argument('input_folder', type=str, 43 | help="The downloaded and extracted KiTS2023 dataset (must have case_XXXXX subfolders)") 44 | parser.add_argument('-d', required=False, type=int, default=220, help='nnU-Net Dataset ID, default: 220') 45 | args = parser.parse_args() 46 | amos_base = args.input_folder 47 | convert_kits2023(amos_base, args.d) 48 | 49 | # /media/isensee/raw_data/raw_datasets/kits23/dataset 50 | 51 | -------------------------------------------------------------------------------- /loss/hnl.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import torch.nn.functional as F 3 | import torch 4 | 5 | class HardNegativeLoss(nn.Module): 6 | """ 7 | Hard Negative Noise Contrastive Estimation proposed in https://arxiv.org/abs/2301.02280 8 | beta1: hardness parameter for image features 9 | beta2: hardness parameter for text features 10 | alpha: the weighting function of the positive sample loss 11 | Setting alpha to 0, the loss is equivalent to the decoupled HN-NCE loss (DHN-NCE) 12 | temperature: temperature to control the sharpness of the distribution 13 | """ 14 | def __init__(self, temperature=1.0,beta1=1.0, beta2 = 1.0, alpha=0): 15 | super(HardNegativeLoss, self).__init__() 16 | self.temperature = temperature 17 | self.beta1 = beta1 18 | self.beta2 = beta2 19 | self.alpha = alpha 20 | 21 | def forward(self, image_features, text_features,batch_size): 22 | # Normalize features 23 | image_features = F.normalize(image_features, p=2, dim=1) 24 | 
text_features = F.normalize(text_features, p=2, dim=1) 25 | 26 | # Compute cosine similarity between image and text features 27 | logits_per_image = torch.matmul(image_features, text_features.t()) / self.temperature 28 | logits_per_text = logits_per_image.t() 29 | 30 | mask = torch.eye(logits_per_image.size(0), dtype=torch.bool) 31 | mask = mask.to(image_features.device) 32 | 33 | # Positive pairs: diagonal elements 34 | pos = torch.exp(logits_per_image*mask) 35 | 36 | # Negative pairs: off-diagonal elements 37 | N = batch_size - 1 38 | 39 | neg_mask = ~mask 40 | 41 | # Calculate reweighting factors 42 | norm_term_img = torch.sum(torch.exp(logits_per_image*neg_mask),dim=-1) 43 | reweight_img = N * (torch.exp(self.beta1*logits_per_image*neg_mask))/norm_term_img 44 | norm_term_text = torch.sum(torch.exp(logits_per_text*neg_mask),dim=-1) 45 | reweight_text = N * (torch.exp(self.beta2*logits_per_text*neg_mask))/norm_term_text 46 | 47 | neg_img = reweight_img * torch.exp(logits_per_image*neg_mask) 48 | neg_text = reweight_text * torch.exp(logits_per_text*neg_mask) 49 | 50 | # Calculate loss 51 | loss = -torch.log(pos / (pos*self.alpha + neg_img)) -torch.log(pos / (pos*self.alpha + neg_text)) 52 | 53 | return loss.mean() 54 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["pdm-backend"] 3 | build-backend = "pdm.backend" 4 | 5 | [project] 6 | name = "open_clip_torch" 7 | # NOTE for full list of authors see https://github.com/mlfoundations/open_clip?tab=readme-ov-file#citing 8 | # below covers most active / recent maintainers 9 | authors = [ 10 | {name = "Ross Wightman", email = "ross@huggingface.co"}, 11 | {name = "Gabriel Ilharco"}, 12 | {name = "Mitchell Wortsman"}, 13 | {name = "Romain Beaumont"}, 14 | ] 15 | description = "Open reproduction of consastive language-image pretraining (CLIP) and related." 
16 | readme = "README.md" 17 | requires-python = ">=3.8" 18 | keywords = ["pytorch", "clip", "image-text", "language-image", "multimodal"] 19 | license = {text = "MIT"} 20 | classifiers = [ 21 | 'Development Status :: 4 - Beta', 22 | 'Intended Audience :: Education', 23 | 'Intended Audience :: Science/Research', 24 | 'License :: OSI Approved :: MIT License', 25 | 'Programming Language :: Python :: 3.8', 26 | 'Programming Language :: Python :: 3.9', 27 | 'Programming Language :: Python :: 3.10', 28 | 'Programming Language :: Python :: 3.11', 29 | 'Programming Language :: Python :: 3.12', 30 | 'Topic :: Scientific/Engineering', 31 | 'Topic :: Scientific/Engineering :: Artificial Intelligence', 32 | 'Topic :: Software Development', 33 | 'Topic :: Software Development :: Libraries', 34 | 'Topic :: Software Development :: Libraries :: Python Modules', 35 | ] 36 | dependencies = [ 37 | 'torch>=1.9.0', 38 | 'torchvision', 39 | 'regex', 40 | 'ftfy', 41 | 'tqdm', 42 | 'huggingface-hub', 43 | 'safetensors', 44 | 'timm', 45 | ] 46 | dynamic = ["version"] 47 | 48 | [project.optional-dependencies] 49 | training = [ 50 | 'torch>=2.0', 51 | 'webdataset>=0.2.5,<=0.2.86', 52 | 'pandas', 53 | 'transformers[sentencepiece]', 54 | 'timm>=1.0.10', 55 | 'fsspec', 56 | ] 57 | test = [ 58 | 'pytest-split', 59 | 'pytest', 60 | 'open_clip_torch[training]' 61 | ] 62 | 63 | [project.urls] 64 | homepage = "https://github.com/mlfoundations/open_clip" 65 | repository = "https://github.com/mlfoundations/open_clip" 66 | 67 | [tool.pdm.version] 68 | source = "file" 69 | path = "src/open_clip/version.py" 70 | 71 | [tool.pdm.build] 72 | excludes = ["./**/.git", "./**/logs/*"] 73 | package-dir = "src" 74 | includes = ["src/open_clip", "src/open_clip_train"] 75 | 76 | [tool.pytest.ini_options] 77 | testpaths = ['tests'] 78 | markers = [ 79 | 'regression_test' 80 | ] -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/utilities/json_export.py: -------------------------------------------------------------------------------- 1 | from collections.abc import Iterable 2 | 3 | import numpy as np 4 | import torch 5 | 6 | 7 | def recursive_fix_for_json_export(my_dict: dict): 8 | # json is stupid. 'cannot serialize object of type bool_/int64/float64'. Come on bro. 9 | keys = list(my_dict.keys()) # cannot iterate over keys() if we change keys.... 
10 | for k in keys: 11 | if isinstance(k, (np.int64, np.int32, np.int8, np.uint8)): 12 | tmp = my_dict[k] 13 | del my_dict[k] 14 | my_dict[int(k)] = tmp 15 | del tmp 16 | k = int(k) 17 | 18 | if isinstance(my_dict[k], dict): 19 | recursive_fix_for_json_export(my_dict[k]) 20 | elif isinstance(my_dict[k], np.ndarray): 21 | assert len(my_dict[k].shape) == 1, 'only 1d arrays are supported' 22 | my_dict[k] = fix_types_iterable(my_dict[k], output_type=list) 23 | elif isinstance(my_dict[k], (np.bool_,)): 24 | my_dict[k] = bool(my_dict[k]) 25 | elif isinstance(my_dict[k], (np.int64, np.int32, np.int8, np.uint8)): 26 | my_dict[k] = int(my_dict[k]) 27 | elif isinstance(my_dict[k], (np.float32, np.float64, np.float16)): 28 | my_dict[k] = float(my_dict[k]) 29 | elif isinstance(my_dict[k], list): 30 | my_dict[k] = fix_types_iterable(my_dict[k], output_type=type(my_dict[k])) 31 | elif isinstance(my_dict[k], tuple): 32 | my_dict[k] = fix_types_iterable(my_dict[k], output_type=tuple) 33 | elif isinstance(my_dict[k], torch.device): 34 | my_dict[k] = str(my_dict[k]) 35 | else: 36 | pass # pray it can be serialized 37 | 38 | 39 | def fix_types_iterable(iterable, output_type): 40 | # this sh!t is hacky as hell and will break if you use it for anything outside nnunet. Keep you hands off of this. 41 | out = [] 42 | for i in iterable: 43 | if type(i) in (np.int64, np.int32, np.int8, np.uint8): 44 | out.append(int(i)) 45 | elif isinstance(i, dict): 46 | recursive_fix_for_json_export(i) 47 | out.append(i) 48 | elif type(i) in (np.float32, np.float64, np.float16): 49 | out.append(float(i)) 50 | elif type(i) in (np.bool_,): 51 | out.append(bool(i)) 52 | elif isinstance(i, str): 53 | out.append(i) 54 | elif isinstance(i, Iterable): 55 | # print('recursive call on', i, type(i)) 56 | out.append(fix_types_iterable(i, type(i))) 57 | else: 58 | out.append(i) 59 | return output_type(out) 60 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/nnUNetTrainer/variants/benchmarking/nnUNetTrainerBenchmark_5epochs_noDataLoading.py: -------------------------------------------------------------------------------- 1 | import torch 2 | 3 | from nnunetv2.training.nnUNetTrainer.variants.benchmarking.nnUNetTrainerBenchmark_5epochs import \ 4 | nnUNetTrainerBenchmark_5epochs 5 | from nnunetv2.utilities.label_handling.label_handling import determine_num_input_channels 6 | 7 | 8 | class nnUNetTrainerBenchmark_5epochs_noDataLoading(nnUNetTrainerBenchmark_5epochs): 9 | def __init__(self, plans: dict, configuration: str, fold: int, dataset_json: dict, unpack_dataset: bool = True, 10 | device: torch.device = torch.device('cuda')): 11 | super().__init__(plans, configuration, fold, dataset_json, unpack_dataset, device) 12 | self._set_batch_size_and_oversample() 13 | num_input_channels = determine_num_input_channels(self.plans_manager, self.configuration_manager, 14 | self.dataset_json) 15 | patch_size = self.configuration_manager.patch_size 16 | dummy_data = torch.rand((self.batch_size, num_input_channels, *patch_size), device=self.device) 17 | dummy_target = [ 18 | torch.round( 19 | torch.rand((self.batch_size, 1, *[int(i * j) for i, j in zip(patch_size, k)]), device=self.device) * 20 | max(self.label_manager.all_labels) 21 | ) for k in self._get_deep_supervision_scales()] 22 | self.dummy_batch = {'data': dummy_data, 'target': dummy_target} 23 | 24 | def get_dataloaders(self): 25 | return None, None 26 | 27 | def run_training(self): 28 | try: 29 | self.on_train_start() 30 | 31 
| for epoch in range(self.current_epoch, self.num_epochs): 32 | self.on_epoch_start() 33 | 34 | self.on_train_epoch_start() 35 | train_outputs = [] 36 | for batch_id in range(self.num_iterations_per_epoch): 37 | train_outputs.append(self.train_step(self.dummy_batch)) 38 | self.on_train_epoch_end(train_outputs) 39 | 40 | with torch.no_grad(): 41 | self.on_validation_epoch_start() 42 | val_outputs = [] 43 | for batch_id in range(self.num_val_iterations_per_epoch): 44 | val_outputs.append(self.validation_step(self.dummy_batch)) 45 | self.on_validation_epoch_end(val_outputs) 46 | 47 | self.on_epoch_end() 48 | 49 | self.on_train_end() 50 | except RuntimeError: 51 | self.crashed_with_runtime_error = True 52 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/training/data_augmentation/custom_transforms/transforms_for_dummy_2d.py: -------------------------------------------------------------------------------- 1 | from typing import Tuple, Union, List 2 | 3 | from batchgenerators.transforms.abstract_transforms import AbstractTransform 4 | 5 | 6 | class Convert3DTo2DTransform(AbstractTransform): 7 | def __init__(self, apply_to_keys: Union[List[str], Tuple[str]] = ('data', 'seg')): 8 | """ 9 | Transforms a 5D array (b, c, x, y, z) to a 4D array (b, c * x, y, z) by overloading the color channel 10 | """ 11 | self.apply_to_keys = apply_to_keys 12 | 13 | def __call__(self, **data_dict): 14 | for k in self.apply_to_keys: 15 | shp = data_dict[k].shape 16 | assert len(shp) == 5, 'This transform only works on 3D data, so expects 5D tensor (b, c, x, y, z) as input.' 17 | data_dict[k] = data_dict[k].reshape((shp[0], shp[1] * shp[2], shp[3], shp[4])) 18 | shape_key = f'orig_shape_{k}' 19 | assert shape_key not in data_dict.keys(), f'Convert3DTo2DTransform needs to store the original shape. ' \ 20 | f'It does that using the {shape_key} key. That key is ' \ 21 | f'already taken. Bummer.' 22 | data_dict[shape_key] = shp 23 | return data_dict 24 | 25 | 26 | class Convert2DTo3DTransform(AbstractTransform): 27 | def __init__(self, apply_to_keys: Union[List[str], Tuple[str]] = ('data', 'seg')): 28 | """ 29 | Reverts Convert3DTo2DTransform by transforming a 4D array (b, c * x, y, z) back to 5D (b, c, x, y, z) 30 | """ 31 | self.apply_to_keys = apply_to_keys 32 | 33 | def __call__(self, **data_dict): 34 | for k in self.apply_to_keys: 35 | shape_key = f'orig_shape_{k}' 36 | assert shape_key in data_dict.keys(), f'Did not find key {shape_key} in data_dict. Shitty. ' \ 37 | f'Convert2DTo3DTransform only works in tandem with ' \ 38 | f'Convert3DTo2DTransform and you probably forgot to add ' \ 39 | f'Convert3DTo2DTransform to your pipeline. 
(Convert3DTo2DTransform ' \ 40 | f'is where the missing key is generated)' 41 | original_shape = data_dict[shape_key] 42 | current_shape = data_dict[k].shape 43 | data_dict[k] = data_dict[k].reshape((original_shape[0], original_shape[1], original_shape[2], 44 | current_shape[-2], current_shape[-1])) 45 | return data_dict 46 | -------------------------------------------------------------------------------- /biomedclip_finetuning/open_clip/src/open_clip/hf_configs.py: -------------------------------------------------------------------------------- 1 | # HF architecture dict: 2 | arch_dict = { 3 | # https://huggingface.co/docs/transformers/model_doc/roberta#roberta 4 | "roberta": { 5 | "config_names": { 6 | "context_length": "max_position_embeddings", 7 | "vocab_size": "vocab_size", 8 | "width": "hidden_size", 9 | "heads": "num_attention_heads", 10 | "layers": "num_hidden_layers", 11 | "layer_attr": "layer", 12 | "token_embeddings_attr": "embeddings" 13 | }, 14 | "pooler": "mean_pooler", 15 | }, 16 | # https://huggingface.co/docs/transformers/model_doc/xlm-roberta#transformers.XLMRobertaConfig 17 | "xlm-roberta": { 18 | "config_names": { 19 | "context_length": "max_position_embeddings", 20 | "vocab_size": "vocab_size", 21 | "width": "hidden_size", 22 | "heads": "num_attention_heads", 23 | "layers": "num_hidden_layers", 24 | "layer_attr": "layer", 25 | "token_embeddings_attr": "embeddings" 26 | }, 27 | "pooler": "mean_pooler", 28 | }, 29 | # https://huggingface.co/docs/transformers/model_doc/mt5#mt5 30 | "mt5": { 31 | "config_names": { 32 | # unlimited seqlen 33 | # https://github.com/google-research/text-to-text-transfer-transformer/issues/273 34 | # https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/models/t5/modeling_t5.py#L374 35 | "context_length": "", 36 | "vocab_size": "vocab_size", 37 | "width": "d_model", 38 | "heads": "num_heads", 39 | "layers": "num_layers", 40 | "layer_attr": "block", 41 | "token_embeddings_attr": "embed_tokens" 42 | }, 43 | "pooler": "mean_pooler", 44 | }, 45 | # https://huggingface.co/docs/transformers/model_doc/bert 46 | "bert": { 47 | "config_names": { 48 | "context_length": "max_position_embeddings", 49 | "vocab_size": "vocab_size", 50 | "width": "hidden_size", 51 | "heads": "num_attention_heads", 52 | "layers": "num_hidden_layers", 53 | }, 54 | "pooler": "cls_pooler", 55 | }, 56 | # https://huggingface.co/docs/transformers/model_doc/m2m_100 57 | "m2m_100": { 58 | "config_names": { 59 | "context_length": "max_position_embeddings", 60 | "vocab_size": "vocab_size", 61 | "width": "d_model", 62 | "heads": "encoder_attention_heads", 63 | "layers": "encoder_layers", 64 | }, 65 | "pooler": "cls_pooler", 66 | }, 67 | } 68 | -------------------------------------------------------------------------------- /weak_segmentation/nnunetv2/tests/integration_tests/readme.md: -------------------------------------------------------------------------------- 1 | # Preface 2 | 3 | I am just a mortal with many tasks and limited time. Aint nobody got time for unittests. 4 | 5 | HOWEVER, at least some integration tests should be performed testing nnU-Net from start to finish. 6 | 7 | # Introduction - What the heck is happening? 8 | This test covers all possible labeling scenarios (standard labels, regions, ignore labels and regions with 9 | ignore labels). 
It runs the entire nnU-Net pipeline from start to finish: 10 | 11 | - fingerprint extraction 12 | - experiment planning 13 | - preprocessing 14 | - train all 4 configurations (2d, 3d_lowres, 3d_fullres, 3d_cascade_fullres) as 5-fold CV 15 | - automatically find the best model or ensemble 16 | - determine the postprocessing used for this 17 | - predict some test set 18 | - apply postprocessing to the test set 19 | 20 | To speed things up, we do the following: 21 | - pick Dataset004_Hippocampus because it is quadratisch praktisch gut. The MNIST of medical image segmentation 22 | - by default this dataset does not have 3d_lowres or cascade. We just manually add them (cool new feature, eh?). See `add_lowres_and_cascade.py` to learn more! 23 | - we use nnUNetTrainer_5epochs for a short training 24 | 25 | # How to run it? 26 | 27 | Set your pwd to be the nnunet repo folder (the one where the `nnunetv2` folder and the `setup.py` are located!) 28 | 29 | Now generate the 4 dummy datasets (ids 996, 997, 998, 999) from dataset 4. This will crash if you don't have Dataset004! 30 | ```commandline 31 | bash nnunetv2/tests/integration_tests/prepare_integration_tests.sh 32 | ``` 33 | 34 | Now you can run the integration test for each of the datasets: 35 | ```commandline 36 | bash nnunetv2/tests/integration_tests/run_integration_test.sh DATASET_ID 37 | ``` 38 | use DATASET_ID 996, 997, 998 and 999. You can run these independently on different GPUs/systems to speed things up. 39 | This will take, I dunno, like 10-30 minutes!? 40 | 41 | Also run 42 | ```commandline 43 | bash nnunetv2/tests/integration_tests/run_integration_test_trainingOnly_DDP.sh DATASET_ID 44 | ``` 45 | to verify DDP is working (needs 2 GPUs!) 46 | 47 | # How to check if the test was successful? 48 | If I were not as lazy as I am I would have programmed some automated check that verifies Dice scores etc. are in an acceptable range. 49 | So you need to do the following: 50 | 1) check that none of your runs crashed (duh) 51 | 2) for each run, navigate to `nnUNet_results/DATASET_NAME` and take a look at the `inference_information.json` file. 52 | Does it make sense? If so: NICE! 53 | 54 | Once the integration test is completed you can delete all the temporary files associated with it by running: 55 | 56 | ```commandline 57 | python nnunetv2/tests/integration_tests/cleanup_integration_test.py 58 | ``` --------------------------------------------------------------------------------