├── .gitignore
├── LICENSE
├── README.md
├── chatgpt_submission_public.py
├── fig
│   └── main.png
└── process_xml_public.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2023 Jupiter

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4

\[In submission\] Code for: [DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4](https://arxiv.org/abs/2303.11032)

The digitization of healthcare has facilitated the sharing and re-use of medical data, but has also raised concerns about confidentiality and privacy. HIPAA (the Health Insurance Portability and Accountability Act) mandates removing re-identifying information before medical records are disseminated, so effective and efficient solutions for de-identifying medical data, especially free-text data, are in high demand. While various computer-assisted de-identification methods, both rule-based and learning-based, have been developed and used in practice, such solutions still lack generalizability or must be fine-tuned for each new scenario, which significantly restricts their wider use. The advancement of large language models (LLMs) such as ChatGPT and GPT-4 has shown great potential for processing medical text with zero-shot in-context learning, particularly for privacy protection, because these models can identify confidential information through their powerful named entity recognition (NER) capability. In this work, we developed a novel GPT-4-enabled de-identification framework ("DeID-GPT") that automatically identifies and removes identifying information. Compared with commonly used medical text de-identification methods, DeID-GPT achieved the highest accuracy and remarkable reliability in masking private information in unstructured medical text while preserving the original structure and meaning of the text. This study is among the earliest to use ChatGPT and GPT-4 for medical text processing and de-identification, and it provides insights for further research and solution development on the use of LLMs such as ChatGPT/GPT-4 in healthcare.

Our framework is as follows.

![DeID-GPT framework](fig/main.png)

## Dataset

The i2b2/UTHealth Challenge: We benchmark our proposed method on the 2014 i2b2/UTHealth de-identification challenge dataset, to which the Blavatnik Institute of Biomedical Informatics at Harvard University granted us access upon request. The dataset contains 1,304 free-form clinical notes for 296 diabetic patients. All PHI entities were manually annotated and replaced with surrogates: names, professions, locations, ages, dates, contacts, and IDs were swapped for surrogate values to protect privacy and facilitate de-identification research. For example, if a real patient named "Mr. James McCarthy" visited the hospital on 12/01/2013, these strings would be replaced by "Mr. Joshua Howard" and "04/01/2060", respectively.

The original 2014 i2b2/UTHealth dataset is stored as XML files. Each XML file corresponds to one complete clinical note that documents the symptoms, clinical records, and medical impressions of one particular visit, marked up with XML tags for the different kinds of information in the note, as sketched below.
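
For orientation, the snippet below parses a heavily simplified, hypothetical record in the shape the two scripts in this repository expect: a `TEXT` element holding the note body and a `TAGS` element whose children (`NAME`, `PROFESSION`, `LOCATION`, `AGE`, `DATE`, `CONTACT`, `ID`) carry each annotated PHI string in a `text` attribute. The sample note, offsets, and attribute values are made up for illustration; the real i2b2 schema carries additional attributes and PHI subtypes.

```python
from bs4 import BeautifulSoup  # requires lxml for the "xml" parser

# A heavily simplified, hypothetical i2b2-style record (assumption: real
# files carry more attributes and subtypes; offsets here are illustrative).
SAMPLE_XML = """\
<deIdi2b2>
  <TEXT><![CDATA[
Record date: 2060-04-01
Mr. Joshua Howard, 52, was seen in follow-up for diabetes management.
]]></TEXT>
  <TAGS>
    <DATE id="P0" start="14" end="24" text="2060-04-01" TYPE="DATE"/>
    <NAME id="P1" start="29" end="42" text="Joshua Howard" TYPE="PATIENT"/>
    <AGE  id="P2" start="44" end="46" text="52" TYPE="AGE"/>
  </TAGS>
</deIdi2b2>
"""

soup = BeautifulSoup(SAMPLE_XML, features="xml")
print(soup.find("TEXT").get_text())  # the free-text note body
print([t.get("text") for t in soup.find("TAGS").find_all("NAME")])  # ['Joshua Howard']
```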

## Citation

    @article{liu2023deid,
      title={DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4},
      author={Liu, Zhengliang and Yu, Xiaowei and Zhang, Lu and Wu, Zihao and Cao, Chao and Dai, Haixing and Zhao, Lin and Liu, Wei and Shen, Dinggang and Li, Quanzheng and others},
      journal={arXiv preprint arXiv:2303.11032},
      year={2023}
    }

--------------------------------------------------------------------------------
/chatgpt_submission_public.py:
--------------------------------------------------------------------------------
import os
import time

import openai
import tiktoken
from bs4 import BeautifulSoup

openai.api_key = ""  # fill in your OpenAI API key
rewrite_path = ""    # optional: a stale output file to delete before a new run

# Remove a leftover output file from an earlier run, if present.
if os.path.isfile(rewrite_path):
    os.remove(rewrite_path)


def num_tokens_from_string(string: str, encoding_name: str) -> int:
    """Return the number of tokens in a string under the given encoding."""
    encoding = tiktoken.get_encoding(encoding_name)
    num_tokens = len(encoding.encode(string))
    return num_tokens


def chatgpt_completion(model_new="gpt-3.5-turbo", prompt_new="hi",
                       temperature_new=0.05, top_p_new=1, n_new=1,
                       max_tokens_new=100):
    """Send a single-turn chat request and return the raw completion object."""
    return openai.ChatCompletion.create(
        model=model_new,
        messages=[
            {"role": "user", "content": prompt_new}
        ],
        temperature=temperature_new,
        top_p=top_p_new,
        n=n_new,
        max_tokens=max_tokens_new,
        presence_penalty=0,
        frequency_penalty=0
    )


directory = ''  # directory containing the original i2b2 XML notes

list_of_text_contents = []
list_of_files = []

# Read the free-text body (the TEXT element) of every XML note.
for filename in os.listdir(directory):
    f = os.path.join(directory, filename)
    if os.path.isfile(f):
        print(os.path.basename(os.path.normpath(f))[:-4])
        list_of_files.append(os.path.basename(os.path.normpath(f))[:-4])  # strip ".xml"
        with open(f) as fp:
            soup = BeautifulSoup(fp, features="xml")
            text = soup.find('TEXT')
            text_content = text.contents[0]
            list_of_text_contents.append(text_content)

for i in range(len(list_of_text_contents)):
    # The de-identification instruction prefix is intentionally left blank in
    # this public release; prepend your own instruction to each note here.
    prompt = "" + list_of_text_contents[i]

    # gpt-3.5-turbo tokenizes with cl100k_base; the GPT-2 encoding would give
    # inaccurate counts for this model.
    num_tokens = num_tokens_from_string(prompt, "cl100k_base")
    print(num_tokens)

    # Note: prompt tokens + max_tokens must fit within the model's context
    # window (4,096 tokens for the original gpt-3.5-turbo).
    completion = chatgpt_completion(prompt_new=prompt, max_tokens_new=4000)
    rewrite_finding = completion.choices[0].message.content

    rewrite_file = list_of_files[i] + "_anonymized.txt"

    with open(rewrite_file, "w") as f:
        f.write(rewrite_finding)

    print("----------- Note " + str(i + 1) + " -----------")
    print("----------- My prompt -----------")
    print(prompt)
    print("----------- Anonymized -----------")
    print(rewrite_finding)

    # Crude rate limiting between successive API calls.
    time.sleep(10)
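
The API key, directory paths, and the instruction prefix are deliberately left blank in the public script above. For readers who want to try it, here is a hypothetical prompt builder in the same spirit; the exact wording used in the paper is not part of this release, so the instruction text below is purely an assumption.

```python
def build_deid_prompt(note_text: str) -> str:
    """Prepend a de-identification instruction to a clinical note.

    Hypothetical example wording -- the instruction actually used by the
    authors is not included in this public release.
    """
    instruction = (
        "Please de-identify the following clinical note. Replace all names, "
        "professions, locations, ages, dates, contact details, and ID numbers "
        "with the placeholder [redacted], and keep the rest of the text "
        "unchanged.\n\n"
    )
    return instruction + note_text

# e.g. prompt = build_deid_prompt(list_of_text_contents[i])
```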
--------------------------------------------------------------------------------
/fig/main.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yhydhx/ChatGPT-API/d3ca165840ead46de313325df7e574016520524e/fig/main.png
--------------------------------------------------------------------------------
/process_xml_public.py:
--------------------------------------------------------------------------------
from bs4 import BeautifulSoup
import os
from statistics import mean


def words_in_string(word_list, a_string):
    """Return the annotated strings that survive as whole whitespace-separated
    tokens in a_string.

    This is an approximate check: multi-word surrogates such as
    "Joshua Howard" are split into separate tokens and may be missed
    (see the note at the end of this document).
    """
    return set(word_list).intersection(a_string.split())


rewrite_directory = ''   # directory holding the *_anonymized.txt outputs
original_directory = ''  # directory holding the original i2b2 XML notes

list_of_files_to_check = []
list_of_anonymized_reports = []

# Pair every anonymized report with the stem of its source XML file.
for filename in os.listdir(rewrite_directory):
    f = os.path.join(rewrite_directory, filename)
    if os.path.isfile(f):
        target_file = os.path.basename(os.path.normpath(f))[:-15]  # strip "_anonymized.txt"
        print(target_file)
        list_of_files_to_check.append(target_file)
        with open(f, "r") as text_file:
            list_of_anonymized_reports.append(text_file.read())

print(list_of_files_to_check)
print(list_of_anonymized_reports)

# PHI categories annotated in the 2014 i2b2/UTHealth corpus.
PHI_TAGS = ["NAME", "PROFESSION", "LOCATION", "AGE", "DATE", "CONTACT", "ID"]

list_of_accuracies = []
for i in range(len(list_of_files_to_check)):
    with open(original_directory + "/" + list_of_files_to_check[i] + ".xml") as fp:
        soup = BeautifulSoup(fp, features="xml")

    tags = soup.find("TAGS")

    # Collect the annotated surface strings for every PHI category.
    phi = {tag_name: [t.get('text') for t in tags.find_all(tag_name)]
           for tag_name in PHI_TAGS}

    print("==========================")
    print(list_of_files_to_check[i])
    a_string = list_of_anonymized_reports[i]

    # Count the annotated strings that survived de-identification.
    remaining = {}
    for tag_name in PHI_TAGS:
        print(tag_name.capitalize() + "s: ", phi[tag_name])
        remaining[tag_name] = len(words_in_string(phi[tag_name], a_string))

    for tag_name in PHI_TAGS:
        print(tag_name.capitalize() + "s remaining: ", remaining[tag_name])

    num_remaining = sum(remaining.values())
    total_length = sum(len(v) for v in phi.values())
    if total_length == 0:
        # No annotated PHI in this note; skip it to avoid dividing by zero.
        continue
    accuracy = 1 - (num_remaining / total_length)
    print("Remaining number of strings and Accuracy: ", num_remaining, accuracy)

    list_of_accuracies.append(accuracy)
    print("==========================\n")


print(len(list_of_files_to_check))
print("Average accuracy =", round(mean(list_of_accuracies), 3))
--------------------------------------------------------------------------------