├── LICENSE
├── README.md
├── data_proc
│   ├── Create New Species Files.ipynb
│   ├── __pycache__
│   │   ├── data_utils.cpython-38.pyc
│   │   └── gene_embeddings.cpython-38.pyc
│   ├── data_utils.py
│   ├── download_proc_czi_cxg.py
│   ├── gene_embeddings.py
│   ├── generate_reduced_chrom_files.py
│   └── preproc_many_dataset.py
├── eval_data.py
├── eval_single_anndata.py
├── evaluate.py
├── examples
│   ├── Benchmark Embeddings with scIB.ipynb
│   └── Label Transfer Using Logistic Classifier.ipynb
├── model.py
├── model_files
│   └── new_species_protein_embeddings.csv
├── requirements.txt
└── utils.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Yanay Rosen, Yusuf Roohani, Jure Leskovec
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Universal Cell Embeddings
2 |
3 | This repo includes a PyTorch implementation of the UCE model, built on the [HuggingFace Accelerator](https://huggingface.co/docs/accelerate/package_reference/accelerator), for embedding individual AnnData datasets.
4 |
5 | ## Installation
6 |
7 | ```
8 | pip install -r requirements.txt
9 | ```
10 |
11 | ## Embedding a new dataset
12 |
13 | To generate an embedding for a new single-cell RNA sequencing dataset in the AnnData format, use the `eval_single_anndata.py` script.
14 |
15 | ```
16 | python eval_single_anndata.py --adata_path {path_to_anndata} --dir {output_dir} --species {species} --model_loc {model_loc} --batch_size {batch_size}
17 | ```
18 |
19 | where
20 | - `adata_path`: an h5ad file. The `.X` slot of the file should contain scRNA-seq counts. The `.var_names` slot should correspond to gene names, *not Ensembl IDs* (see the sanity check after this list).
21 | - `dir`: the working directory where intermediate and final output files are saved; reusing it avoids repeated processing of the same dataset.
22 | - `species`: the species of the dataset you are embedding.
23 | - `model_loc`: the location of the model weights `.torch` file.
24 | - `batch_size`: the per-GPU batch size. For the 33-layer model on an 80GB GPU, use 25; for the 4-layer model on the same GPU, you can use 100.
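
Before running the script, you can sanity-check that the input matches this format (a minimal sketch; `my_data.h5ad` is a placeholder path):

```
import anndata
import numpy as np
import scipy.sparse as sp

adata = anndata.read_h5ad("my_data.h5ad")  # placeholder path

# .X should hold scRNA-seq counts (non-negative integers), not normalized values
vals = adata.X.data if sp.issparse(adata.X) else np.asarray(adata.X)
assert (vals >= 0).all() and np.allclose(vals, np.round(vals)), ".X does not look like counts"

# .var_names should be gene names/symbols (e.g. GAPDH), not Ensembl IDs (e.g. ENSG...)
print(adata.var_names[:5])
```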
25 |
26 | For a sample output on the 10k PBMC dataset, run
27 | ```
28 | python eval_single_anndata.py
29 | ```
30 | All necessary model files will be downloaded automatically.
31 |
32 |
33 | **Note**: This script makes use of additional files, which are described in the code documentation. These are downloaded automatically unless already present in the working directory. The script defaults to the pretrained 4-layer model. To run the pretrained 33-layer model from the paper, please download it using this [link](https://figshare.com/articles/dataset/Universal_Cell_Embedding_Model_Files/24320806?file=43423236) and set `--nlayers 33` (see the example command below).
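
For example, a 33-layer run could look like the following (paths in curly braces are placeholders; the batch size follows the guidance above):

```
python eval_single_anndata.py --adata_path {path_to_anndata} --dir {output_dir} --species {species} --model_loc {path_to_33_layer_weights} --nlayers 33 --batch_size 25
```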
34 |
35 | ## Output
36 |
37 | Final evaluated AnnData: `dir/{dataset_name}.h5ad`. This AnnData will be
38 | identical to the processed input AnnData, but with UCE embeddings added in the `.obsm["X_uce"]` slot.
39 |
40 | Please see documentation for information on additional output files. All
41 | outputs from `eval_single_anndata.py` are stored in the `dir` directory.
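
The embedding can be used like any other low-dimensional representation. A minimal sketch (the file path follows the pattern above; the `cell_type` column is an assumption about your `.obs`):

```
import scanpy as sc

adata = sc.read_h5ad("{output_dir}/{dataset_name}.h5ad")
print(adata.obsm["X_uce"].shape)  # cells x embedding dimension

sc.pp.neighbors(adata, use_rep="X_uce")
sc.tl.umap(adata)
sc.pl.umap(adata, color="cell_type")  # assumes a cell_type column exists in .obs
```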
42 |
43 | ## Data
44 |
45 | You can download the processed datasets used in the paper [here](https://drive.google.com/drive/folders/1f63fh0ykgEhCrkd_EVvIootBw7LYDVI7?usp=drive_link).
46 |
47 | **Note:** These datasets were embedded using the 33-layer model. Embeddings from the 33-layer model are not compatible with embeddings from the 4-layer model.
48 |
49 | ## Citing
50 |
51 | If you find our paper and code useful, please consider citing the [preprint](https://www.biorxiv.org/content/10.1101/2023.11.28.568918v1):
52 |
53 | ```
54 | @article{rosen2023universal,
55 | title={Universal Cell Embeddings: A Foundation Model for Cell Biology},
56 | author={Rosen, Yanay and Roohani, Yusuf and Agrawal, Ayush and Samotorcan, Leon and Consortium, Tabula Sapiens and Quake, Stephen R and Leskovec, Jure},
57 | journal={bioRxiv},
58 | pages={2023--11},
59 | year={2023},
60 | publisher={Cold Spring Harbor Laboratory}
61 | }
62 | ```
63 |
--------------------------------------------------------------------------------
/data_proc/Create New Species Files.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "id": "0e4018ee",
6 | "metadata": {},
7 | "source": [
8 | "# Embedding Novel Species\n",
9 | "\n",
10 | "This notebook will create the files you need to embed a novel species that wasn't included in the training data.\n",
11 | "\n",
12 | "To start, you will need to download the ESM2 protein embeddings and the reference proteome for the species.\n",
13 | "\n",
14 | "You can find precalculated ESM2 protein embeddings for many species [here](https://drive.google.com/drive/folders/1_Dz7HS5N3GoOAG6MdhsXWY1nwLoN13DJ?usp=drive_link)\n",
15 | "\n",
16 | "For reference proteomes, you can download them from [here](https://useast.ensembl.org/info/about/species.html).\n",
17 | "\n",
18 | "If there is no protein embedding for the species you are interested in, you can request to have it made via Github or email, or you can create it yourself following instructions [here](https://github.com/snap-stanford/SATURN/tree/main/protein_embeddings)."
19 | ]
20 | },
21 | {
22 | "cell_type": "code",
23 | "execution_count": 1,
24 | "id": "ab368d92",
25 | "metadata": {},
26 | "outputs": [],
27 | "source": [
28 | "import numpy as np\n",
29 | "import pickle as pkl\n",
30 | "import pandas as pd"
31 | ]
32 | },
33 | {
34 | "cell_type": "code",
35 | "execution_count": 2,
36 | "id": "c9a306f3",
37 | "metadata": {},
38 | "outputs": [],
39 | "source": [
40 | "SPECIES_NAME = \"chicken\" # short hand name for this species, will be used in arguments and files\n",
41 | "\n",
42 | "# Path to the species proteome\n",
43 | "SPECIES_PROTEIN_FASTA_PATH = \"../../../SATURN/protein_embeddings/data/Gallus_gallus.bGalGal1.mat.broiler.GRCg7b.pep.all.fa\"\n",
44 | "\n",
45 | "# Path to the ESM2 Embeddings\n",
46 | "SPECIES_PROTEIN_EMBEDDINGS_PATH = \"../model_files/protein_embeddings/Gallus_gallus.bGalGal1.mat.broiler.GRCg7b.pep.all.gene_symbol_to_embedding_ESM2.pt\"\n",
47 | "\n",
48 | "# primary_assembly name, this needs to be matched to the FASTA file\n",
49 | "ASSEMBLY_NAME = \"bGalGal1.mat.broiler.GRCg7b\"\n",
50 | "# NCBI Taxonomy ID, please set this so that if someone else also embeds the same species,\n",
51 | "# randomly generated chromosome tokens will be the same\n",
52 | "TAXONOMY_ID = 9031"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "id": "e5d37e52",
58 | "metadata": {},
59 | "source": [
60 | "You can view the FASTA format here, please confirm the primary_assembly name is correct."
61 | ]
62 | },
63 | {
64 | "cell_type": "code",
65 | "execution_count": 3,
66 | "id": "2ecf1464",
67 | "metadata": {},
68 | "outputs": [
69 | {
70 | "name": "stdout",
71 | "output_type": "stream",
72 | "text": [
73 | ">ENSGALP00010000002.1 pep primary_assembly:bGalGal1.mat.broiler.GRCg7b:MT:2824:3798:1 gene:ENSGALG00010000007.1 transcript:ENSGALT00010000007.1 gene_biotype:protein_coding transcript_biotype:protein_coding gene_symbol:ND1 description:NADH dehydrogenase subunit 1 [Source:NCBI gene (formerly Entrezgene);Acc:63549479]\r\n",
74 | "MTLPTLTNLLIMTLSYILPILIAVAFLTLVERKILSYMQARKGPNIVGPFGLLQPVADGV\r\n",
75 | "KLFIKEPIRPSTSSPFLFIITPILALLLALTIWVPLPLPFPLADLNLGLLFLLAMSSLTV\r\n",
76 | "YSLLWSGWASNSKYALIGALRAVAQTISYEVTLAIILLSTIMLSGNYTLSTLAITQEPIY\r\n",
77 | "LIFSAWPLAMMWYISTLAETNRAPFDLTEGESELVSGFNVEYAAGPFAMFFLAEYANIML\r\n",
78 | "MNTLTTVLFLNPSFLNLPPELFPIALATKTLLLSSSFLWIRASYPRFRYDQLMHLLWKNF\r\n",
79 | "LPLTLALCLWHTSMPISYAGLPPI\r\n",
80 | ">ENSGALP00010000003.1 pep primary_assembly:bGalGal1.mat.broiler.GRCg7b:MT:4015:5053:1 gene:ENSGALG00010000011.1 transcript:ENSGALT00010000011.1 gene_biotype:protein_coding transcript_biotype:protein_coding gene_symbol:ND2 description:NADH dehydrogenase subunit 2 [Source:NCBI gene (formerly Entrezgene);Acc:63549482]\r\n",
81 | "MNPHAKLICTVSLIMGTSITISSNHWILAWTGLEINTLAIIPLISKSHHPRAIEATIKYF\r\n",
82 | "LTQSTASALILFSSMTNAWSTGQWDITQLNHPTSCLMLTMAIAIKLGLVPFHFWFPEVLQ\r\n"
83 | ]
84 | }
85 | ],
86 | "source": [
87 | "!head {SPECIES_PROTEIN_FASTA_PATH}"
88 | ]
89 | },
90 | {
91 | "cell_type": "code",
92 | "execution_count": 4,
93 | "id": "90540d0b",
94 | "metadata": {},
95 | "outputs": [],
96 | "source": [
97 | "species_to_paths = {\n",
98 | " SPECIES_NAME: SPECIES_PROTEIN_FASTA_PATH,\n",
99 | "}\n",
100 | "\n",
101 | "species_to_ids = {\n",
102 | " SPECIES_NAME: ASSEMBLY_NAME,\n",
103 | "}"
104 | ]
105 | },
106 | {
107 | "cell_type": "code",
108 | "execution_count": 5,
109 | "id": "623b99cf",
110 | "metadata": {},
111 | "outputs": [],
112 | "source": [
113 | "all_pos_def = []\n",
114 | "\n",
115 | "missing_genes = {}\n",
116 | "for species in species_to_ids.keys():\n",
117 | " missing_genes[species] = []\n",
118 | " proteome_path = species_to_paths[species]\n",
119 | " species_id = species_to_ids[species]\n",
120 | "\n",
121 | " with open(proteome_path) as f:\n",
122 | " proteome_lines = f.readlines()\n",
123 | "\n",
124 | " gene_symbol_to_location = {}\n",
125 | " gene_symbol_to_chrom = {}\n",
126 | "\n",
127 | " for line in proteome_lines:\n",
128 | " if line.startswith(\">\"):\n",
129 | " split_line = line.split()\n",
130 | " gene_symbol = [token for token in split_line if token.startswith(\"gene_symbol\")]\n",
131 | " if len(gene_symbol) > 0:\n",
132 | " gene_symbol = gene_symbol[0].split(\":\")\n",
133 | " \n",
134 | " if len(gene_symbol) == 2:\n",
135 | " gene_symbol = gene_symbol[1]\n",
136 | " elif len(gene_symbol) > 2:\n",
137 | " gene_symbol = \":\".join(gene_symbol[1:]) # fix for annoying zebrafish gene names with colons in them\n",
138 | " else:\n",
139 | " 1/0 # something weird happening, throw an error\n",
140 | " \n",
141 | " \n",
142 | " chrom = None\n",
143 | " \n",
144 | " chrom_arr = [token for token in split_line if token.startswith(\"chromosome:\")]\n",
145 | " if len(chrom_arr) > 0:\n",
146 | " chrom = chrom_arr[0].replace(\"chromosome:\", \"\")\n",
147 | " else:\n",
148 | " chrom_arr = [token for token in split_line if token.startswith(\"primary_assembly:\")]\n",
149 | " if len(chrom_arr) > 0:\n",
150 | " chrom = chrom_arr[0].replace(\"primary_assembly:\", \"\")\n",
151 | " else:\n",
152 | " chrom_arr = [token for token in split_line if token.startswith(\"scaffold:\")] \n",
153 | " if len(chrom_arr) > 0:\n",
154 | " chrom = chrom_arr[0].replace(\"scaffold:\", \"\")\n",
155 | " if chrom is not None:\n",
156 | " gene_symbol_to_location[gene_symbol] = chrom.split(\":\")[2]\n",
157 | " gene_symbol_to_chrom[gene_symbol] = chrom.split(\":\")[1]\n",
158 | " else:\n",
159 | " missing_genes[species].append(gene_symbol)\n",
160 | " \n",
161 | "\n",
162 | " positional_df = pd.DataFrame()\n",
163 | " positional_df[\"gene_symbol\"] = [gn.upper() for gn in list(gene_symbol_to_chrom.keys())]\n",
164 | " positional_df[\"chromosome\"] = list(gene_symbol_to_chrom.values())\n",
165 | " positional_df[\"start\"] = list(gene_symbol_to_location.values())\n",
166 | " positional_df = positional_df.sort_values([\"chromosome\", \"start\"])\n",
167 | " #positional_df = positional_df.set_index(\"gene_symbol\")\n",
168 | " positional_df[\"species\"] = species\n",
169 | " all_pos_def.append(positional_df)"
170 | ]
171 | },
172 | {
173 | "cell_type": "code",
174 | "execution_count": 6,
175 | "id": "b72887b3",
176 | "metadata": {},
177 | "outputs": [
178 | {
179 | "data": {
180 | "text/html": [
181 | "
\n",
182 | "\n",
195 | "
\n",
196 | " \n",
197 | "
\n",
198 | "
\n",
199 | "
gene_symbol
\n",
200 | "
chromosome
\n",
201 | "
start
\n",
202 | "
species
\n",
203 | "
\n",
204 | " \n",
205 | " \n",
206 | "
\n",
207 | "
2327
\n",
208 | "
GCC1
\n",
209 | "
1
\n",
210 | "
1006145
\n",
211 | "
chicken
\n",
212 | "
\n",
213 | "
\n",
214 | "
2502
\n",
215 | "
NCAM2
\n",
216 | "
1
\n",
217 | "
100828671
\n",
218 | "
chicken
\n",
219 | "
\n",
220 | "
\n",
221 | "
3084
\n",
222 | "
ENS-2
\n",
223 | "
1
\n",
224 | "
101147482
\n",
225 | "
chicken
\n",
226 | "
\n",
227 | "
\n",
228 | "
2331
\n",
229 | "
DENND6B
\n",
230 | "
1
\n",
231 | "
1012031
\n",
232 | "
chicken
\n",
233 | "
\n",
234 | "
\n",
235 | "
3973
\n",
236 | "
MRPL39
\n",
237 | "
1
\n",
238 | "
102578362
\n",
239 | "
chicken
\n",
240 | "
\n",
241 | "
\n",
242 | "
...
\n",
243 | "
...
\n",
244 | "
...
\n",
245 | "
...
\n",
246 | "
...
\n",
247 | "
\n",
248 | "
\n",
249 | "
4722
\n",
250 | "
CA9
\n",
251 | "
Z
\n",
252 | "
9779343
\n",
253 | "
chicken
\n",
254 | "
\n",
255 | "
\n",
256 | "
4738
\n",
257 | "
ARHGEF39
\n",
258 | "
Z
\n",
259 | "
9835547
\n",
260 | "
chicken
\n",
261 | "
\n",
262 | "
\n",
263 | "
3885
\n",
264 | "
MRPL17
\n",
265 | "
Z
\n",
266 | "
9850679
\n",
267 | "
chicken
\n",
268 | "
\n",
269 | "
\n",
270 | "
4172
\n",
271 | "
CCBE1
\n",
272 | "
Z
\n",
273 | "
9852827
\n",
274 | "
chicken
\n",
275 | "
\n",
276 | "
\n",
277 | "
3293
\n",
278 | "
PMAIP1
\n",
279 | "
Z
\n",
280 | "
9998272
\n",
281 | "
chicken
\n",
282 | "
\n",
283 | " \n",
284 | "
\n",
285 | "
13271 rows × 4 columns
\n",
286 | "
"
287 | ],
288 | "text/plain": [
289 | " gene_symbol chromosome start species\n",
290 | "2327 GCC1 1 1006145 chicken\n",
291 | "2502 NCAM2 1 100828671 chicken\n",
292 | "3084 ENS-2 1 101147482 chicken\n",
293 | "2331 DENND6B 1 1012031 chicken\n",
294 | "3973 MRPL39 1 102578362 chicken\n",
295 | "... ... ... ... ...\n",
296 | "4722 CA9 Z 9779343 chicken\n",
297 | "4738 ARHGEF39 Z 9835547 chicken\n",
298 | "3885 MRPL17 Z 9850679 chicken\n",
299 | "4172 CCBE1 Z 9852827 chicken\n",
300 | "3293 PMAIP1 Z 9998272 chicken\n",
301 | "\n",
302 | "[13271 rows x 4 columns]"
303 | ]
304 | },
305 | "execution_count": 6,
306 | "metadata": {},
307 | "output_type": "execute_result"
308 | }
309 | ],
310 | "source": [
311 | "master_pos_def = pd.concat(all_pos_def)\n",
312 | "master_pos_def"
313 | ]
314 | },
315 | {
316 | "cell_type": "code",
317 | "execution_count": 7,
318 | "id": "6d9dac28",
319 | "metadata": {},
320 | "outputs": [
321 | {
322 | "data": {
323 | "text/plain": [
324 | "chicken 13271\n",
325 | "Name: species, dtype: int64"
326 | ]
327 | },
328 | "execution_count": 7,
329 | "metadata": {},
330 | "output_type": "execute_result"
331 | }
332 | ],
333 | "source": [
334 | "master_pos_def[\"species\"].value_counts() # double check how many genes are mapped"
335 | ]
336 | },
337 | {
338 | "cell_type": "code",
339 | "execution_count": 8,
340 | "id": "4a3d45c2",
341 | "metadata": {},
342 | "outputs": [
343 | {
344 | "name": "stdout",
345 | "output_type": "stream",
346 | "text": [
347 | "chicken: 0\n"
348 | ]
349 | }
350 | ],
351 | "source": [
352 | "for k, v in missing_genes.items():\n",
353 | " print(f\"{k}: {len(v)}\") # are any genes missing?"
354 | ]
355 | },
356 | {
357 | "cell_type": "code",
358 | "execution_count": 9,
359 | "id": "c59774b1",
360 | "metadata": {
361 | "scrolled": true
362 | },
363 | "outputs": [
364 | {
365 | "name": "stdout",
366 | "output_type": "stream",
367 | "text": [
368 | "*********\n",
369 | "chicken\n"
370 | ]
371 | },
372 | {
373 | "data": {
374 | "text/plain": [
375 | "1 1785\n",
376 | "2 1169\n",
377 | "3 1067\n",
378 | "4 953\n",
379 | "5 817\n",
380 | "Z 629\n",
381 | "6 458\n",
382 | "8 450\n",
383 | "7 442\n",
384 | "9 382\n",
385 | "10 366\n",
386 | "14 359\n",
387 | "11 327\n",
388 | "15 326\n",
389 | "13 306\n",
390 | "20 298\n",
391 | "12 293\n",
392 | "19 278\n",
393 | "18 274\n",
394 | "17 260\n",
395 | "26 237\n",
396 | "28 237\n",
397 | "27 235\n",
398 | "21 226\n",
399 | "23 214\n",
400 | "25 176\n",
401 | "34 155\n",
402 | "24 149\n",
403 | "22 142\n",
404 | "16 54\n",
405 | "30 52\n",
406 | "38 49\n",
407 | "31 14\n",
408 | "MT 13\n",
409 | "39 10\n",
410 | "JAENSK010000484.1 7\n",
411 | "35 6\n",
412 | "JAENSK010000592.1 6\n",
413 | "W 5\n",
414 | "MU179278.1 5\n",
415 | "MU179279.1 4\n",
416 | "36 3\n",
417 | "JAENSK010000483.1 3\n",
418 | "JAENSK010000585.1 3\n",
419 | "JAENSK010000593.1 2\n",
420 | "MU179258.1 2\n",
421 | "MU179272.1 2\n",
422 | "MU179273.1 2\n",
423 | "JAENSK010000584.1 2\n",
424 | "JAENSK010000656.1 1\n",
425 | "Name: chromosome, dtype: int64"
426 | ]
427 | },
428 | "metadata": {},
429 | "output_type": "display_data"
430 | },
431 | {
432 | "name": "stdout",
433 | "output_type": "stream",
434 | "text": [
435 | "*********\n"
436 | ]
437 | }
438 | ],
439 | "source": [
440 | "# Count genes per chromosome\n",
441 | "for species in species_to_ids.keys():\n",
442 | " print(\"*********\")\n",
443 | " print(species)\n",
444 | " display(master_pos_def[master_pos_def[\"species\"] == species][\"chromosome\"].value_counts().head(50))\n",
445 | " print(\"*********\")"
446 | ]
447 | },
448 | {
449 | "cell_type": "code",
450 | "execution_count": 10,
451 | "id": "541baded",
452 | "metadata": {},
453 | "outputs": [],
454 | "source": [
455 | "master_pos_def.to_csv(f\"{SPECIES_NAME}_to_chrom_pos.csv\", index=False) # Save the DF"
456 | ]
457 | },
458 | {
459 | "cell_type": "code",
460 | "execution_count": 11,
461 | "id": "eabd0e31",
462 | "metadata": {},
463 | "outputs": [
464 | {
465 | "name": "stdout",
466 | "output_type": "stream",
467 | "text": [
468 | "chicken_to_chrom_pos.csv\n"
469 | ]
470 | }
471 | ],
472 | "source": [
473 | "# The chromosome file path will be:\n",
474 | "print(f\"{SPECIES_NAME}_to_chrom_pos.csv\")"
475 | ]
476 | },
477 | {
478 | "cell_type": "code",
479 | "execution_count": 12,
480 | "id": "fe1345b1",
481 | "metadata": {},
482 | "outputs": [
483 | {
484 | "data": {
485 | "text/plain": [
486 | "66"
487 | ]
488 | },
489 | "execution_count": 12,
490 | "metadata": {},
491 | "output_type": "execute_result"
492 | }
493 | ],
494 | "source": [
495 | "N_UNIQ_CHROM = len(master_pos_def[master_pos_def[\"species\"] == species][\"chromosome\"].unique())\n",
496 | "N_UNIQ_CHROM"
497 | ]
498 | },
499 | {
500 | "cell_type": "markdown",
501 | "id": "e37e277f",
502 | "metadata": {},
503 | "source": [
504 | "# Generate token file"
505 | ]
506 | },
507 | {
508 | "cell_type": "code",
509 | "execution_count": 13,
510 | "id": "d6904975",
511 | "metadata": {},
512 | "outputs": [],
513 | "source": [
514 | "import torch\n",
515 | "import pickle\n",
516 | "token_dim = 5120"
517 | ]
518 | },
519 | {
520 | "cell_type": "markdown",
521 | "id": "a2798848",
522 | "metadata": {},
523 | "source": [
524 | "This will create the token file. Please note the offset value."
525 | ]
526 | },
527 | {
528 | "cell_type": "code",
529 | "execution_count": 14,
530 | "id": "4355dabd",
531 | "metadata": {},
532 | "outputs": [
533 | {
534 | "name": "stdout",
535 | "output_type": "stream",
536 | "text": [
537 | "CHROM_TOKEN_OFFSET: 13275\n",
538 | "Saved PE, offsets file\n"
539 | ]
540 | }
541 | ],
542 | "source": [
543 | "species_to_offsets = {}\n",
544 | "\n",
545 | "all_pe = torch.load(\"../model_files/all_tokens.torch\")[0:4] # read in existing token file to make sure \n",
546 | "# that special vocab tokens are the same for different seeds\n",
547 | "\n",
548 | "offset = len(all_pe) # special tokens at the top!\n",
549 | "\n",
550 | "PE = torch.load(SPECIES_PROTEIN_EMBEDDINGS_PATH)\n",
551 | "\n",
552 | "pe_stacked = torch.stack(list(PE.values()))\n",
553 | "all_pe = torch.vstack((all_pe, pe_stacked))\n",
554 | "species_to_offsets[species] = offset\n",
555 | "\n",
556 | "print(\"CHROM_TOKEN_OFFSET:\", all_pe.shape[0])\n",
557 | "torch.manual_seed(TAXONOMY_ID)\n",
558 | "CHROM_TENSORS = torch.normal(mean=0, std=1, size=(N_UNIQ_CHROM, 5120)) \n",
559 | "# N_UNIQ_CHROM is the total number of chromosome choices, it is hardcoded for now (for species in the training data)\n",
560 | "all_pe = torch.vstack(\n",
561 | " (all_pe, CHROM_TENSORS)) # Add the chrom tensors to the end\n",
562 | "all_pe.requires_grad = False\n",
563 | "\n",
564 | "\n",
565 | "torch.save(all_pe, f\"{SPECIES_NAME}_pe_tokens.torch\")\n",
566 | "\n",
567 | "with open(f\"{SPECIES_NAME}_offsets.pkl\", \"wb+\") as f:\n",
568 | " pickle.dump(species_to_offsets, f)\n",
569 | "print(\"Saved PE, offsets file\")"
570 | ]
571 | },
572 | {
573 | "cell_type": "code",
574 | "execution_count": 15,
575 | "id": "c26fe491",
576 | "metadata": {
577 | "scrolled": true
578 | },
579 | "outputs": [
580 | {
581 | "data": {
582 | "text/plain": [
583 | "torch.Size([13341, 5120])"
584 | ]
585 | },
586 | "execution_count": 15,
587 | "metadata": {},
588 | "output_type": "execute_result"
589 | }
590 | ],
591 | "source": [
592 | "all_pe.shape"
593 | ]
594 | },
595 | {
596 | "cell_type": "code",
597 | "execution_count": 16,
598 | "id": "21f937ea",
599 | "metadata": {
600 | "scrolled": true
601 | },
602 | "outputs": [
603 | {
604 | "data": {
605 | "text/plain": [
606 | "torch.Size([13341, 5120])"
607 | ]
608 | },
609 | "execution_count": 16,
610 | "metadata": {},
611 | "output_type": "execute_result"
612 | }
613 | ],
614 | "source": [
615 | "all_pe.shape"
616 | ]
617 | },
618 | {
619 | "cell_type": "code",
620 | "execution_count": 17,
621 | "id": "5faadace",
622 | "metadata": {},
623 | "outputs": [
624 | {
625 | "name": "stdout",
626 | "output_type": "stream",
627 | "text": [
628 | "chicken_offsets.pkl\n"
629 | ]
630 | }
631 | ],
632 | "source": [
633 | "print(f\"{SPECIES_NAME}_offsets.pkl\")"
634 | ]
635 | },
636 | {
637 | "cell_type": "code",
638 | "execution_count": 18,
639 | "id": "6ceac20b",
640 | "metadata": {},
641 | "outputs": [
642 | {
643 | "data": {
644 | "text/plain": [
645 | "'../model_files/protein_embeddings/Gallus_gallus.bGalGal1.mat.broiler.GRCg7b.pep.all.gene_symbol_to_embedding_ESM2.pt'"
646 | ]
647 | },
648 | "execution_count": 18,
649 | "metadata": {},
650 | "output_type": "execute_result"
651 | }
652 | ],
653 | "source": [
654 | "SPECIES_PROTEIN_EMBEDDINGS_PATH"
655 | ]
656 | },
657 | {
658 | "cell_type": "markdown",
659 | "id": "e4697330",
660 | "metadata": {},
661 | "source": [
662 | "# Example evaluation of new species"
663 | ]
664 | },
665 | {
666 | "cell_type": "markdown",
667 | "id": "2b72667d",
668 | "metadata": {},
669 | "source": [
670 | "**Note: when you evaluate a new species, you need to change some arguments and modify some files:**\n",
671 | "\n",
672 | "You will need to modify the csv in `model_files/new_species_protein_embeddings.csv` to include the new protein embeddings file you downloaded.\n",
673 | "\n",
674 | "In the file add a row for the new species with the format:\n",
675 | "`species name,full path to protein embedding file`\n",
676 | "\n",
677 | "Please also add this line to the dictionary created on line 247 in the file `data_proc/data_utils.py`.\n",
678 | "\n",
679 | "When you want to embed this new species, you will need to specify these newly created files as arguments.\n",
680 | "- `CHROM_TOKEN_OFFSET`: This tells UCE when the rows corresponding to chromosome tokens starts.\n",
681 | "- `spec_chrom_csv_path`: This is a new csv, created by this script, which maps genes to chromosomes and genomic positions\n",
682 | "- `token_file`: This is a new token file that will work just for this species. The embeddings generated will still be universal though!\n",
683 | "- `offset_pkl_path`: This is another file that maps genes to tokens\n",
684 | "\n",
685 | "\n",
686 | "```\n",
687 | "\n",
688 | "accelerate launch eval_single_anndata.py chicken_heart.h5ad --species=chicken --CHROM_TOKEN_OFFSET=13275 --spec_chrom_csv_path=data_proc/chicken_to_chrom_pos.csv --token_file=data_proc/chicken_pe_tokens.torch --offset_pkl_path=data_proc/chicken_offsets.pkl --dir=... --multi_gpu=True\n",
689 | "\n",
690 | "```"
691 | ]
692 | }
693 | ],
694 | "metadata": {
695 | "kernelspec": {
696 | "display_name": "Python 3 (ipykernel)",
697 | "language": "python",
698 | "name": "python3"
699 | },
700 | "language_info": {
701 | "codemirror_mode": {
702 | "name": "ipython",
703 | "version": 3
704 | },
705 | "file_extension": ".py",
706 | "mimetype": "text/x-python",
707 | "name": "python",
708 | "nbconvert_exporter": "python",
709 | "pygments_lexer": "ipython3",
710 | "version": "3.8.6"
711 | }
712 | },
713 | "nbformat": 4,
714 | "nbformat_minor": 5
715 | }
716 |
--------------------------------------------------------------------------------
/data_proc/__pycache__/data_utils.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/snap-stanford/UCE/8227a65cdd021b9186ef86671d2aef5c895c8e4b/data_proc/__pycache__/data_utils.cpython-38.pyc
--------------------------------------------------------------------------------
/data_proc/__pycache__/gene_embeddings.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/snap-stanford/UCE/8227a65cdd021b9186ef86671d2aef5c895c8e4b/data_proc/__pycache__/gene_embeddings.cpython-38.pyc
--------------------------------------------------------------------------------
/data_proc/data_utils.py:
--------------------------------------------------------------------------------
1 | import warnings
2 | warnings.filterwarnings("ignore")
3 |
4 | import scanpy as sc
5 | import torch
6 |
7 | from torch import nn, Tensor
8 | import torch.nn.functional as F
9 | import torch.utils.data as data
10 | import torch.optim as optim
11 | import numpy as np
12 | import pickle
13 | import os
14 | import argparse
15 | import logging
16 | import time
import subprocess  # used for the optional scp copy in process_raw_anndata
17 |
18 | from tqdm.auto import tqdm
19 | import pandas as pd
20 |
21 | import math
22 | import anndata
23 | from pathlib import Path
24 |
25 |
26 | from torch.utils.data import dataset
27 | from torch.utils.data import DataLoader, TensorDataset, dataset
28 | from scipy.stats import binom
29 | from typing import Dict, List, Optional, Tuple
30 | from scanpy import AnnData
31 |
32 |
33 | from data_proc.gene_embeddings import load_gene_embeddings_adata
34 |
35 | def data_to_torch_X(X):
36 | if isinstance(X, sc.AnnData):
37 | X = X.X
38 | if not isinstance(X, np.ndarray):
39 | X = X.toarray()
40 | return torch.from_numpy(X).float()
41 |
42 | class SincleCellDataset(data.Dataset):
43 | def __init__(self,
44 | expression: torch.tensor, # Subset to hv genes, count data! cells x genes
45 | protein_embeddings: torch.tensor, # same order as expression, also subset genes x pe
46 | labels: None, # optional, tensor of labels
47 | covar_vals: None, # tensor of covar values or none
48 | ) -> None:
49 | super(SincleCellDataset, self).__init__()
50 |
51 | # Set expression
52 | self.expression = expression
53 |
54 | row_sums = self.expression.sum(1) # UMI Counts
55 | log_norm_count_adj = torch.log1p(self.expression / (self.expression.sum(1)).unsqueeze(1) * torch.tensor(1000))
56 |
57 | # Set log norm and count adjusted expression
58 | max_vals, max_idx = torch.max(log_norm_count_adj, dim=0)
59 | self.expression_mod = log_norm_count_adj / max_vals
60 |
61 | # Calculate dropout likelihoods of each gene
62 | self.dropout_vec = (self.expression == 0).float().mean(0) # per gene dropout percentages
63 |
64 | # Set data info
65 | self.num_cells = self.expression.shape[0]
66 | self.num_genes = self.expression.shape[1]
67 |
68 | # Set optional label info, including categorical covariate index
69 | self.covar_vals = covar_vals
70 | self.labels = labels
71 |
72 | # Set protein embeddings
73 | self.protein_embeddings = protein_embeddings
74 |
75 | self.item_mode = "expression"
76 | if self.covar_vals is not None:
77 | self.item_mode = "expression+covar"
78 |
79 |
80 | def __getitem__(self, idx):
81 | if self.item_mode == "expression":
82 | if isinstance(idx, int):
83 | if idx < self.num_cells:
84 | return self.expression[idx, :]
85 | else:
86 | raise IndexError
87 | else:
88 | raise NotImplementedError
89 | elif self.item_mode == "expression+covar":
90 | if isinstance(idx, int):
91 | if idx < self.num_cells:
92 | return self.expression[idx, :], self.covar_vals[idx]
93 | else:
94 | raise IndexError
95 | else:
96 | raise NotImplementedError
97 |
98 |
99 | def __len__(self) -> int:
100 | return self.num_cells
101 |
102 | def get_dim(self) -> Dict[str, int]:
103 | return self.num_genes
104 |
105 |
106 | def data_to_torch_X(X):
107 | if isinstance(X, sc.AnnData):
108 | X = X.X
109 | if not isinstance(X, np.ndarray):
110 | X = X.toarray()
111 | return torch.from_numpy(X).float()
112 |
113 |
114 | def anndata_to_sc_dataset(adata:sc.AnnData,
115 | species:str="human",
116 | labels:list=[],
117 | covar_col:str=None,
118 | hv_genes=None,
119 | embedding_model="ESM2",
120 | ) -> (SincleCellDataset, AnnData):
121 |
122 | # Subset to just genes we have embeddings for
123 | adata, protein_embeddings = load_gene_embeddings_adata(
124 | adata=adata,
125 | species=[species],
126 | embedding_model=embedding_model
127 | )
128 |
129 | if hv_genes is not None:
130 | sc.pp.highly_variable_genes(adata, flavor='seurat_v3', n_top_genes=hv_genes) # Expects Count Data
131 |
132 | hv_index = adata.var["highly_variable"]
133 | adata = adata[:, hv_index] # Subset to hv genes only
134 |
135 | protein_embeddings = protein_embeddings[species][hv_index]
136 | else:
137 | protein_embeddings = protein_embeddings[species]
138 | expression = data_to_torch_X(adata.X)
139 |
140 | covar_vals = None
141 | if len(labels) > 0:
142 | assert covar_col is None or covar_col in labels, "Covar needs to be in labels" # make sure you keep track of covar column!
143 | labels = adata.obs.loc[:, labels].values
144 |
145 | if covar_col is not None:
146 | # we have a categorical label to use as covariate
147 | covar_vals = torch.tensor(pd.Categorical(adata.obs[covar_col]).codes)
148 | return SincleCellDataset(
149 | expression=expression,
150 | protein_embeddings=protein_embeddings,
151 | labels=labels,
152 | covar_vals=covar_vals
153 | ), adata
154 |
155 | def adata_path_to_prot_chrom_starts(adata, dataset_species, spec_pe_genes, gene_to_chrom_pos, offset):
156 | """
157 | Given an AnnData and species metadata, return the protein-embedding row indices, chromosome codes, and genomic start positions for its genes.
158 | """
159 | pe_row_idxs = torch.tensor([spec_pe_genes.index(k.upper()) + offset for k in adata.var_names]).long()
160 | print(len(np.unique(pe_row_idxs)))
161 |
162 | spec_chrom = gene_to_chrom_pos[gene_to_chrom_pos["species"] == dataset_species].set_index("gene_symbol")
163 |
164 | gene_chrom = spec_chrom.loc[[k.upper() for k in adata.var_names]]
165 |
166 | dataset_chroms = gene_chrom["spec_chrom"].cat.codes # now this is correctly indexed by species and chromosome
167 | print("Max Code:", max(dataset_chroms))
168 | dataset_pos = gene_chrom["start"].values
169 | return pe_row_idxs, dataset_chroms, dataset_pos
170 |
171 |
172 |
173 | def process_raw_anndata(row, h5_folder_path, npz_folder_path, scp, skip,
174 | additional_filter, root):
175 | path = row.path
176 | if not os.path.isfile(root + "/" + path):
177 | print( "**********************************")
178 | print(f"***********{root + '/' + path} File Missing****")
179 | print( "**********************************")
180 | print(path, root)
181 | return None, None, None  # keep the return arity consistent with the other branches
182 |
183 | name = path.replace(".h5ad", "")
184 | proc_path = path.replace(".h5ad", "_proc.h5ad")
185 | if skip:
186 | if os.path.isfile(h5_folder_path + proc_path):
187 | print(f"{name} already processed. Skipping")
188 | return None, None, None
189 |
190 | print(f"Proccessing {name}")
191 |
192 | species = row.species
193 | covar_col = row.covar_col
194 |
195 | ad = sc.read(root + "/" + path)
196 | labels = []
197 | if "cell_type" in ad.obs.columns:
198 | labels.append("cell_type")
199 |
200 |
201 | if covar_col is np.nan or np.isnan(covar_col):
202 | covar_col = None
203 | else:
204 | labels.append(covar_col)
205 |
206 | if additional_filter:
207 | sc.pp.filter_genes(ad, min_cells=10)
208 | sc.pp.filter_cells(ad, min_genes=25)
209 |
210 |
211 | dataset, adata = anndata_to_sc_dataset(ad, species=species, labels=labels, covar_col=covar_col, hv_genes=None)
212 | adata = adata.copy()
213 |
214 | if additional_filter:
215 | sc.pp.filter_genes(ad, min_cells=10)
216 | sc.pp.filter_cells(ad, min_genes=25)
217 |
218 | num_cells = adata.X.shape[0]
219 | num_genes = adata.X.shape[1]
220 |
221 | adata_path = h5_folder_path + proc_path
222 | adata.write(adata_path)
223 |
224 | arr = data_to_torch_X(adata.X).numpy()
225 |
226 | print(arr.max()) # this is a nice check to make sure it's counts
227 | filename = npz_folder_path + f"{name}_counts.npz"
228 | shape = arr.shape
229 | print(name, shape)
230 | fp = np.memmap(filename, dtype='int64', mode='w+', shape=shape)
231 | fp[:] = arr[:]
232 | fp.flush()
233 |
234 | if scp != "":
235 | subprocess.call(["scp", filename, f"{scp}:{filename}"])
236 | subprocess.call(["scp", adata_path, f"{scp}:{adata_path}"])
237 |
238 | return adata, num_cells, num_genes
239 |
240 |
241 | def get_species_to_pe(EMBEDDING_DIR):
242 | """
243 | Given an embedding directory, return all embeddings as a dictionary coded by species.
244 | Note: In the current form, this function is written such that the directory needs all of the following species embeddings.
245 | """
246 | EMBEDDING_DIR = Path(EMBEDDING_DIR)
247 |
248 | embeddings_paths = {
249 | 'human': EMBEDDING_DIR / 'Homo_sapiens.GRCh38.gene_symbol_to_embedding_ESM2.pt',
250 | 'mouse': EMBEDDING_DIR / 'Mus_musculus.GRCm39.gene_symbol_to_embedding_ESM2.pt',
251 | 'frog': EMBEDDING_DIR / 'Xenopus_tropicalis.Xenopus_tropicalis_v9.1.gene_symbol_to_embedding_ESM2.pt',
252 | 'zebrafish': EMBEDDING_DIR / 'Danio_rerio.GRCz11.gene_symbol_to_embedding_ESM2.pt',
253 | "mouse_lemur": EMBEDDING_DIR / "Microcebus_murinus.Mmur_3.0.gene_symbol_to_embedding_ESM2.pt",
254 | "pig": EMBEDDING_DIR / 'Sus_scrofa.Sscrofa11.1.gene_symbol_to_embedding_ESM2.pt',
255 | "macaca_fascicularis": EMBEDDING_DIR / 'Macaca_fascicularis.Macaca_fascicularis_6.0.gene_symbol_to_embedding_ESM2.pt',
256 | "macaca_mulatta": EMBEDDING_DIR / 'Macaca_mulatta.Mmul_10.gene_symbol_to_embedding_ESM2.pt',
257 | }
258 | extra_species = pd.read_csv("./model_files/new_species_protein_embeddings.csv").set_index("species").to_dict()["path"]
259 | embeddings_paths.update(extra_species) # adds new species
260 |
261 |
262 |
263 | species_to_pe = {
264 | species:torch.load(pe_dir) for species, pe_dir in embeddings_paths.items()
265 | }
266 |
267 | species_to_pe = {species:{k.upper(): v for k,v in pe.items()} for species, pe in species_to_pe.items()}
268 | return species_to_pe
269 |
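# Example usage (a sketch; assumes the embedding files listed above exist under EMBEDDING_DIR,
# plus the ./model_files/new_species_protein_embeddings.csv file):
#     species_to_pe = get_species_to_pe("./model_files/protein_embeddings")
#     human_pe = species_to_pe["human"]      # upper-cased gene symbol -> ESM2 embedding tensor
#     print(human_pe["GAPDH"].shape)         # e.g. torch.Size([5120])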
270 |
271 | def get_spec_chrom_csv(path="/dfs/project/cross-species/yanay/code/all_to_chrom_pos.csv"):
272 | """
273 | Get the species to chrom csv file
274 | """
275 | gene_to_chrom_pos = pd.read_csv(path)
276 | gene_to_chrom_pos["spec_chrom"] = pd.Categorical(gene_to_chrom_pos["species"] + "_" + gene_to_chrom_pos["chromosome"]) # add the spec_chrom list
277 | return gene_to_chrom_pos
--------------------------------------------------------------------------------
/data_proc/download_proc_czi_cxg.py:
--------------------------------------------------------------------------------
1 | import os
2 | os.environ["OMP_NUM_THREADS"] = "20" # export OMP_NUM_THREADS=4
3 | os.environ["OPENBLAS_NUM_THREADS"] = "20" # export OPENBLAS_NUM_THREADS=4
4 | os.environ["MKL_NUM_THREADS"] = "20" # export MKL_NUM_THREADS=6
5 | os.environ["VECLIB_MAXIMUM_THREADS"] = "20" # export VECLIB_MAXIMUM_THREADS=4
6 | os.environ["NUMEXPR_NUM_THREADS"] = "20"
7 |
8 |
9 | import warnings
10 | warnings.filterwarnings('ignore')
11 |
12 | import cellxgene_census
13 | from tqdm import tqdm
14 | import scanpy as sc
15 |
16 | from collections import defaultdict
17 | from typing import Dict, List, Optional, Tuple
18 |
19 | import torch
20 | import torch.utils.data as data
21 | import torch
22 | import numpy as np
23 | import scanpy as sc
24 | from numpy import array
25 | import os
26 | import pickle as pkl
27 | import glob
28 |
29 | def data_to_torch_X(X):
30 | if isinstance(X, sc.AnnData):
31 | X = X.X
32 | if not isinstance(X, np.ndarray):
33 | X = X.toarray()
34 | return torch.from_numpy(X).float()
35 |
36 | import sys
37 | sys.path.append('../')
38 |
39 | from gene_embeddings import load_gene_embeddings_adata
40 | import pandas as pd
41 | import numpy as np
42 | from scanpy import AnnData
43 | from multiprocessing import Pool, Process, Manager
44 |
45 | import multiprocessing.pool as mpp
46 | # https://stackoverflow.com/questions/57354700/starmap-combined-with-tqdm
47 | def istarmap(self, func, iterable, chunksize=1):
48 | """starmap-version of imap
49 | """
50 | if self._state != mpp.RUN:
51 | raise ValueError("Pool not running")
52 |
53 | if chunksize < 1:
54 | raise ValueError(
55 | "Chunksize must be 1+, not {0:n}".format(
56 | chunksize))
57 |
58 | task_batches = mpp.Pool._get_tasks(func, iterable, chunksize)
59 | result = mpp.IMapIterator(self._cache)
60 | self._taskqueue.put(
61 | (
62 | self._guarded_task_generation(result._job,
63 | mpp.starmapstar,
64 | task_batches),
65 | result._set_length
66 | ))
67 | return (item for chunk in result for item in chunk)
68 |
69 |
70 | mpp.Pool.istarmap = istarmap
71 |
72 |
73 | VERSION = "2023-04-25"
74 | N_TOP_GENES = 12000
75 |
76 |
77 | print(cellxgene_census.get_census_version_description(VERSION))
78 |
79 | census = cellxgene_census.open_soma(census_version=VERSION)
80 | census_datasets = census["census_info"]["datasets"].read().concat().to_pandas()
81 |
82 | # for convenience, indexing on the soma_joinid which links this to other census data.
83 | census_datasets = census_datasets.set_index("soma_joinid")
84 |
85 | species_to_readable = {
86 | "Homo sapiens":"human",
87 | "Mus musculus":"mouse"
88 | }
89 |
90 | def process_row(row, num_genes, num_cells, paths, all_species, covar_cols, dataset_title, h5_root="/dfs/project/uce/cxg_data/anndatas/", npz_root="/dfs/project/uce/cxg_data/npzs/"):
91 | dataset_id = row[1].dataset_id
92 | #dataset_title = row[1].dataset_title.lower().replace(' ', '_').replace(",", "").replace("/", "")
93 |
94 | save_path = h5_root + f"{dataset_title}.h5ad"
95 | no_primary_path = save_path.replace(".h5ad", "_no_primary.h5ad")
96 | proc_path = save_path.replace(".h5ad", "_proc.h5ad")
97 | npz_path = npz_root + f"{dataset_title}_counts.npz"
98 | # Download the anndata
99 |
100 | if os.path.exists(no_primary_path):
101 | print("No Primary, skipping")
102 | return
103 |
104 | if not os.path.exists(save_path) and not os.path.exists(no_primary_path):
105 | cellxgene_census.download_source_h5ad(
106 | dataset_id, to_path=save_path
107 | )
108 | if os.path.exists(proc_path) and os.path.exists(npz_path):
109 | print("Already Proc")
110 | try:
111 | ad = sc.read(proc_path)
112 | except:
113 | print()
114 | print()
115 | print("Error reading on:", dataset_title)
116 | print()
117 | print()
118 | return
119 | # Get organism
120 | if "organism" in ad.obs.columns:
121 | unique_organisms = list(ad.obs.organism.unique().categories)
122 | unique_organism_str = ", ".join(unique_organisms)
123 | else:
124 | unique_organism_str = "human"
125 | species = species_to_readable.get(unique_organism_str, "human")
126 | # don't need to do hv if already proc
127 | if "sample" in ad.obs.columns:
128 | covar_cols[dataset_title] = "sample"
129 | elif "batch" in ad.obs.columns:
130 | covar_cols[dataset_title] = "batch"
131 | else:
132 | covar_cols[dataset_title] = ""
133 |
134 |
135 | num_genes[dataset_title] = ad.X.shape[1]
136 | num_cells[dataset_title] = ad.X.shape[0]
137 | paths[dataset_title] = f"{dataset_title}.h5ad"
138 | all_species[dataset_title] = species
139 |
140 | return # Skip everything else
141 | # Read the raw AD
142 | ad = sc.read(save_path)
143 |
144 | # Change to counts
145 | if not sc._utils.check_nonnegative_integers(ad.X):
146 | # don't have counts yet, need raw
147 | if ad.raw is None:
148 | print("Skipped, no counts")
149 | return
150 | ad.X = ad.raw.X.toarray()
151 | if not sc._utils.check_nonnegative_integers(ad.X):
152 | print("Skipped, no counts")
153 | return
154 |
155 | # SUBSET TO primary data
156 | if len(np.unique(ad.obs["is_primary_data"])) >= 1:
157 | primary_data = ad.obs.is_primary_data.value_counts()
158 | ad = ad[ad.obs.is_primary_data]
159 | if ad.X.shape[0] == 0:
160 | print("no primary data")
161 | print(primary_data)
162 | os.rename(save_path, no_primary_path)
163 | return # No primary data
164 | print("has primary data")
165 | # Switch to gene symbols
166 | ad.var["feature_id_orig"] = list(ad.var.index)
167 | ad.var_names = list(ad.var.feature_name)
168 |
169 | # Get organism
170 | if "organism" in ad.obs.columns:
171 | unique_organisms = list(ad.obs.organism.unique().categories)
172 | unique_organism_str = ", ".join(unique_organisms)
173 | else:
174 | unique_organism_str = "human"
175 | species = species_to_readable.get(unique_organism_str, "human")
176 | # Filter to gene symbols with protein embeddings
177 | ad, _ = load_gene_embeddings_adata(
178 | adata=ad,
179 | species=[species],
180 | embedding_model="ESM2"
181 | )
182 |
183 | ad = ad.copy()
184 | # Simple filtering by counts
185 | sc.pp.filter_cells(ad, min_genes=200)
186 | sc.pp.filter_genes(ad, min_cells=10)
187 |
188 | #print(ad)
189 |
190 | if "sample" in ad.obs.columns:
191 | try:
192 | sc.pp.highly_variable_genes(ad, flavor="seurat_v3", n_top_genes=N_TOP_GENES, subset=True, batch_key="sample")
193 | except:
194 | try:
195 | sc.pp.highly_variable_genes(ad, flavor="seurat_v3", n_top_genes=N_TOP_GENES, subset=True, batch_key="sample", span=1)
196 | except:
197 | print(f"can't hv gene subset {dataset_title}")
198 | covar_cols[dataset_title] = "sample"
199 | elif "batch" in ad.obs.columns:
200 | try:
201 | sc.pp.highly_variable_genes(ad, flavor="seurat_v3", n_top_genes=N_TOP_GENES, subset=True, batch_key="batch")
202 | except:
203 | try:
204 | sc.pp.highly_variable_genes(ad, flavor="seurat_v3", n_top_genes=N_TOP_GENES, subset=True, batch_key="batch", span=1)
205 | except:
206 | print(f"can't hv gene subset {dataset_title}")
207 | covar_cols[dataset_title] = "batch"
208 | else:
209 | try:
210 | sc.pp.highly_variable_genes(ad, flavor="seurat_v3", n_top_genes=N_TOP_GENES, subset=True)
211 | except:
212 | try:
213 | sc.pp.highly_variable_genes(ad, flavor="seurat_v3", n_top_genes=N_TOP_GENES, subset=True, span=1)
214 | except:
215 | print(f"can't hv gene subset {dataset_title}")
216 | covar_cols[dataset_title] = ""
217 |
218 | num_genes[dataset_title] = ad.X.shape[1]
219 | num_cells[dataset_title] = ad.X.shape[0]
220 | paths[dataset_title] = f"{dataset_title}.h5ad"
221 | all_species[dataset_title] = species
222 |
223 | print("writing proc")
224 | ad.write(proc_path)
225 |
226 | arr = data_to_torch_X(ad.X).numpy()
227 |
228 | shape = arr.shape
229 |
230 | fp = np.memmap(npz_path, dtype='int64', mode='w+', shape=shape)
231 | fp[:] = arr[:]
232 | fp.flush()
233 |
234 | return
235 |
236 | if __name__ == '__main__':
237 | '''
238 | manager = Manager()
239 | num_genes = manager.dict()
240 | num_cells = manager.dict()
241 | paths = manager.dict()
242 | all_species = manager.dict()
243 | covar_cols = manager.dict()
244 | '''
245 | num_genes = {}
246 | num_cells = {}
247 | paths = {}
248 | all_species = {}
249 | covar_cols = {}
250 |
251 | df = pd.DataFrame()
252 | # Shuffle the dataset
253 | census_datasets = census_datasets#.iloc[270:]
254 | iterrows = list(census_datasets.iterrows())
255 | #p = Pool(8)
256 | #for row in tqdm(iterrows, total=len(census_datasets)):
257 | # p.apply_async(process_row, args=(row, num_genes, num_cells, paths, all_species, covar_cols))
258 | #p.close()
259 | #p.join()
260 | '''
261 | with Pool(1) as p:
262 | nrows = len(iterrows)
263 | inputs = zip(iterrows, [num_genes]*nrows, [num_cells]*nrows, [paths]*nrows, [all_species]*nrows, [covar_cols]*nrows)
264 | for _ in tqdm(p.istarmap(process_row, inputs),
265 | total=nrows):
266 | pass
267 |
268 | '''
269 |
270 | if os.path.exists("dataset_rows_mouse_fixed.pkl"):
271 | dataset_rows = {}
272 | for path in glob.glob("dataset_rows_mouse_fixed*.pkl"):
273 | with open(path, "rb") as f:
274 | dataset_rows_path = pkl.load(f)
275 | dataset_rows.update(dataset_rows_path)
276 |
277 | print(f"{len(dataset_rows)} already counted")
278 | else:
279 | dataset_rows = {}
280 |
281 |
282 | pbar = tqdm(iterrows)
283 | all_errors = []
284 | total_number_of_cells = 0
285 |
286 | duplicate_titles = ['Dissection: Body of hippocampus (HiB) - Rostral DG-CA4', 'Retina',
287 | 'Colon', 'Myeloid cells', 'Ileum', 'Airway']
288 | duplicate_titles_2 = ['retina', 'airway', 'myeloid_cells', 'colon', 'ileum', 'immune_cells']
289 |
290 | for row in pbar:
291 | dataset_title = row[1].dataset_title
292 | if dataset_title in duplicate_titles:
293 | dataset_title = row[1].collection_name + row[1].dataset_title
294 |
295 | dataset_title = dataset_title.lower().replace(' ', '_').replace(",", "").replace("/", "")
296 |
297 | if dataset_title in duplicate_titles_2:
298 | dataset_title = (row[1].collection_name + "_" + dataset_title).lower().replace(' ', '_').replace(",", "").replace("/", "")
299 |
300 | print(f"{total_number_of_cells} cells done")
301 | if dataset_title in dataset_rows:
302 | paths[dataset_title] = dataset_rows[dataset_title][0]
303 | all_species[dataset_title] = dataset_rows[dataset_title][1]
304 | covar_cols[dataset_title] = dataset_rows[dataset_title][2]
305 | num_cells[dataset_title] = dataset_rows[dataset_title][3]
306 | num_genes[dataset_title] = dataset_rows[dataset_title][4]
307 | #print("skipped read of proc")
308 |
309 | total_number_of_cells += dataset_rows[dataset_title][3]
310 | continue # Skip!
311 | else:
312 | pbar.set_description(f"{dataset_title} proc")
313 | try:
314 | process_row(row, num_genes, num_cells, paths, all_species, covar_cols, dataset_title=dataset_title)
315 | except:
316 | print(f"****{dataset_title} ERROR****")
317 | all_errors.append(dataset_title)
318 |
319 |
320 | pbar.set_description(f"{dataset_title} done")
321 |
322 | if dataset_title in paths:
323 | dataset_rows[dataset_title] = [paths[dataset_title], all_species[dataset_title], covar_cols[dataset_title], num_cells[dataset_title], num_genes[dataset_title], dataset_title]
324 |
325 | total_number_of_cells += dataset_rows[dataset_title][3]
326 |
327 | with open("dataset_rows_mouse_fixed.pkl", "wb") as f:
328 | pkl.dump(dataset_rows, f)
329 | print("wrote pkl")
330 |
331 | # path,species,covar_col,num_cells,names
332 |
333 | df["path"] = list(paths.values())
334 | df["species"] = list(all_species.values())
335 | df["covar_col"] = list(covar_cols.values())
336 | df["num_cells"] = list(num_cells.values())
337 | df["num_genes"] = list(num_genes.values())
338 | df["names"] = list(paths.keys())
339 |
340 | print(df.head(20))
341 | print()
342 | print("Errors:")
343 | print(all_errors)
344 | df.to_csv("cxg_datasets.csv", index=False)
345 |
--------------------------------------------------------------------------------
/data_proc/gene_embeddings.py:
--------------------------------------------------------------------------------
1 | """Helper functions for loading pretrained gene embeddings."""
2 | from pathlib import Path
3 | from typing import Dict, Tuple
4 |
5 | import torch
6 |
7 | from scanpy import AnnData
8 | import numpy as np
9 | import pandas as pd
10 |
11 |
12 | EMBEDDING_DIR = Path('model_files/protein_embeddings')
13 | MODEL_TO_SPECIES_TO_GENE_EMBEDDING_PATH = {
14 | 'ESM2': {
15 | 'human': EMBEDDING_DIR / 'Homo_sapiens.GRCh38.gene_symbol_to_embedding_ESM2.pt',
16 | 'mouse': EMBEDDING_DIR / 'Mus_musculus.GRCm39.gene_symbol_to_embedding_ESM2.pt',
17 | 'frog': EMBEDDING_DIR / 'Xenopus_tropicalis.Xenopus_tropicalis_v9.1.gene_symbol_to_embedding_ESM2.pt',
18 | 'zebrafish': EMBEDDING_DIR / 'Danio_rerio.GRCz11.gene_symbol_to_embedding_ESM2.pt',
19 | "mouse_lemur": EMBEDDING_DIR / "Microcebus_murinus.Mmur_3.0.gene_symbol_to_embedding_ESM2.pt",
20 | "pig": EMBEDDING_DIR / 'Sus_scrofa.Sscrofa11.1.gene_symbol_to_embedding_ESM2.pt',
21 | "macaca_fascicularis": EMBEDDING_DIR / 'Macaca_fascicularis.Macaca_fascicularis_6.0.gene_symbol_to_embedding_ESM2.pt',
22 | "macaca_mulatta": EMBEDDING_DIR / 'Macaca_mulatta.Mmul_10.gene_symbol_to_embedding_ESM2.pt',
23 | }
24 | }
25 |
26 | extra_species = pd.read_csv("./model_files/new_species_protein_embeddings.csv").set_index("species").to_dict()["path"]
27 | MODEL_TO_SPECIES_TO_GENE_EMBEDDING_PATH["ESM2"].update(extra_species) # adds new species
28 |
29 |
30 | def load_gene_embeddings_adata(adata: AnnData, species: list, embedding_model: str) -> Tuple[AnnData, Dict[str, torch.FloatTensor]]:
31 | """Loads gene embeddings for all the species/genes in the provided data.
32 |
33 | :param adata: An AnnData object containing gene expression data for cells.
34 | :param species: A list of species names corresponding to this adata.
35 |
36 | :param embedding_model: The gene embedding model whose embeddings will be loaded.
37 | :return: A tuple containing:
38 | - A subset of the data only containing the gene expression for genes with embeddings in all species.
39 | - A dictionary mapping species name to the corresponding gene embedding matrix (num_genes, embedding_dim).
40 | """
41 | # Get species names
42 | species_names = species
43 | species_names_set = set(species_names)
44 |
45 | # Get embedding paths for the model
46 | species_to_gene_embedding_path = MODEL_TO_SPECIES_TO_GENE_EMBEDDING_PATH[embedding_model]
47 | available_species = set(species_to_gene_embedding_path)
48 |
49 | # Ensure embeddings are available for all species
50 | if not (species_names_set <= available_species):
51 | raise ValueError(f'The following species do not have gene embeddings: {species_names_set - available_species}')
52 |
53 | # Load gene embeddings for desired species (and convert gene symbols to lower case)
54 | species_to_gene_symbol_to_embedding = {
55 | species: {
56 | gene_symbol.lower(): gene_embedding
57 | for gene_symbol, gene_embedding in torch.load(species_to_gene_embedding_path[species]).items()
58 | }
59 | for species in species_names
60 | }
61 |
62 | # Determine which genes to include based on gene expression and embedding availability
63 | genes_with_embeddings = set.intersection(*[
64 | set(gene_symbol_to_embedding)
65 | for gene_symbol_to_embedding in species_to_gene_symbol_to_embedding.values()
66 | ])
67 | genes_to_use = {gene for gene in adata.var_names if gene.lower() in genes_with_embeddings}
68 |
69 | # Subset data to only use genes with embeddings
70 | adata = adata[:, adata.var_names.isin(genes_to_use)]
71 |
72 | # Set up dictionary mapping species to gene embedding matrix (num_genes, embedding_dim)
73 | species_to_gene_embeddings = {
74 | species_name: torch.stack([
75 | species_to_gene_symbol_to_embedding[species_name][gene_symbol.lower()]
76 | for gene_symbol in adata.var_names
77 | ])
78 | for species_name in species_names
79 | }
80 |
81 | return adata, species_to_gene_embeddings
82 |
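# Example usage (a sketch; assumes `adata` is an AnnData whose var_names are gene symbols):
#     adata, species_to_gene_embeddings = load_gene_embeddings_adata(
#         adata=adata, species=["human"], embedding_model="ESM2")
#     pe = species_to_gene_embeddings["human"]  # (num_genes, embedding_dim), rows aligned with adata.var_names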
--------------------------------------------------------------------------------
/data_proc/generate_reduced_chrom_files.py:
--------------------------------------------------------------------------------
1 | import os
2 | os.environ["OMP_NUM_THREADS"] = "4" # export OMP_NUM_THREADS=4
3 | os.environ["OPENBLAS_NUM_THREADS"] = "4" # export OPENBLAS_NUM_THREADS=4
4 | os.environ["MKL_NUM_THREADS"] = "4" # export MKL_NUM_THREADS=6
5 | os.environ["VECLIB_MAXIMUM_THREADS"] = "4" # export VECLIB_MAXIMUM_THREADS=4
6 | os.environ["NUMEXPR_NUM_THREADS"] = "4"
7 |
8 |
9 | import warnings
10 | warnings.filterwarnings("ignore")
11 |
12 | import scanpy as sc
13 | import torch
14 | import torch.nn as nn
15 | import torch.nn.functional as F
16 | import torch.optim as optim
17 | import numpy as np
18 | import pickle
19 | import os
20 | import argparse
21 | import logging
22 | import time
23 |
24 | from tqdm.auto import tqdm
25 | import matplotlib.pyplot as plt
26 | import pandas as pd
27 |
28 | #sc._settings.ScanpyConfig.n_jobs = 6
29 |
30 | import math
31 | from typing import Tuple
32 |
33 | import torch
34 | from torch import nn, Tensor
35 | import torch.nn.functional as F
36 | from torch.nn import TransformerEncoder, TransformerEncoderLayer
37 | from torch.utils.data import dataset
38 |
39 |
40 | from accelerate import Accelerator
41 | import anndata
42 | from data_utils import adata_path_to_prot_chrom_starts, get_spec_chrom_csv
43 |
44 |
45 |
46 | from torch.utils.data import dataset
47 | from torch.utils.data import DataLoader, TensorDataset
48 | from scipy.stats import binom
49 |
50 |
51 |
52 |
53 | def padding_tensor(sequences):
54 | """
55 | :param sequences: list of tensors
56 | :return: a (max_len, num, 1280) padded tensor and a (num, max_len) mask
57 | """
58 | num = len(sequences)
59 | max_len = max([s.size(0) for s in sequences])
60 | out_dims = (num, max_len, 1280)
61 |
62 |
63 | out_tensor = sequences[0].data.new(*out_dims).fill_(0)
64 | out_dims2 = (num, max_len)
65 |
66 | mask = sequences[0].data.new(*out_dims2).fill_(float('-inf'))
67 | for i, tensor in enumerate(sequences):
68 | length = tensor.size(0)
69 | out_tensor[i, :length] = tensor
70 | mask[i, :length] = 1
71 | return out_tensor.permute(1, 0, 2), mask
72 |
73 |
74 | from pathlib import Path
75 | # ESM1b
76 | '''
77 | EMBEDDING_DIR = Path('/dfs/project/cross-species/data/proteome/embeddings')
78 | human_pe_dir = EMBEDDING_DIR / 'Homo_sapiens.GRCh38.gene_symbol_to_embedding_ESM1b.pt'
79 | mouse_pe_dir = EMBEDDING_DIR / 'Mus_musculus.GRCm39.gene_symbol_to_embedding_ESM1b.pt'
80 | lemur_pe_dir = Path("/dfs/project/cross-species/yanay/data/proteome/embeddings/") / 'Microcebus_murinus.Mmur_3.0.gene_symbol_to_embedding_ESM1b.pt'
81 |
82 | '''
83 |
84 | # Upgrade to ESM2
85 | EMBEDDING_DIR = Path('/dfs/project/cross-species/data/proteome/embeddings')
86 | EMBEDDING_DIR = Path('/dfs/project/cross-species/yanay/data/proteome/embeddings')
87 |
88 | embeddings_paths = {
89 | 'human': EMBEDDING_DIR / 'Homo_sapiens.GRCh38.gene_symbol_to_embedding_ESM2.pt',
90 | 'mouse': EMBEDDING_DIR / 'Mus_musculus.GRCm39.gene_symbol_to_embedding_ESM2.pt',
91 | 'frog': EMBEDDING_DIR / 'Xenopus_tropicalis.Xenopus_tropicalis_v9.1.gene_symbol_to_embedding_ESM2.pt',
92 | 'zebrafish': EMBEDDING_DIR / 'Danio_rerio.GRCz11.gene_symbol_to_embedding_ESM2.pt',
93 | "mouse_lemur": EMBEDDING_DIR / "Microcebus_murinus.Mmur_3.0.gene_symbol_to_embedding_ESM2.pt",
94 | "pig": EMBEDDING_DIR / 'Sus_scrofa.Sscrofa11.1.gene_symbol_to_embedding_ESM2.pt',
95 | "macaca_fascicularis": EMBEDDING_DIR / 'Macaca_fascicularis.Macaca_fascicularis_6.0.gene_symbol_to_embedding_ESM2.pt',
96 | "macaca_mulatta": EMBEDDING_DIR / 'Macaca_mulatta.Mmul_10.gene_symbol_to_embedding_ESM2.pt',
97 | }
98 |
99 | species_to_pe = {
100 | species:torch.load(pe_dir) for species, pe_dir in embeddings_paths.items()
101 | }
102 |
103 | species_to_pe = {species:{k.upper(): v for k,v in pe.items()} for species, pe in species_to_pe.items()}
104 |
105 | #species_to_keys = {species:list(pe.keys()) for species, pe in species_to_pe.items()}
106 | #species_to_keys = {species:dict(zip(keys, np.arange(len(keys)))) for species, keys in species_to_keys.items()}
107 |
108 |
109 | #datasets_df = pd.read_csv("/dfs/project/cross-species/yanay/code/UCE/data_proc/full_train_datasets.csv")
110 | datasets_df = pd.read_csv("tissue_datasets.csv")
111 | datasets_df = pd.read_csv("perturb_datasets.csv")
112 | datasets_df = pd.read_csv("../new_perturb_datasets.csv")
113 |
114 |
115 | #pd.concat((#pd.read_csv("new_datasets.csv"),
116 | #pd.read_csv("pbmcs_nohvg.csv"),
117 | #pd.read_csv("lung_nohvg.csv"),
118 | #pd.read_csv("new_tabula_datasets.csv"),
119 | #pd.read_csv("updated_datasets.csv"),
120 | # #pd.read_csv("sanger_heart_atlas_datasets.csv"),
121 | # pd.read_csv("tissue_datasets.csv")
122 | # ))
123 |
124 |
125 |
126 |
127 | #datasets_df = pd.read_csv("cell_cycle_datasets.csv")
128 | #datasets_df = pd.read_csv("spatial_datasets.csv")
129 | #datasets_df = pd.read_csv("perturb_datasets.csv")
130 | #datasets_df = pd.read_csv("ccle_datasets.csv")
131 | #datasets_df = pd.read_csv("pancreas_datasets.csv")
132 |
133 |
134 |
135 | sorted_dataset_names = sorted(datasets_df["names"])
136 | with open("dataset_shapes.pkl", "rb") as f:
137 | shapes_dict = pickle.load(f)
138 |
139 |
140 | shapes_dict.update({
141 | "madissoon_novel_lung":(190728, 8000),
142 | 'flores_cerebellum_human': (20232, 8000),
143 | 'osuch_gut_human': (272310, 8000),
144 | 'msk_ovarian_human': (929690, 8000),
145 | 'htan_vmuc_dis_epi_human': (65084, 8000),
146 | 'htan_vmuc_val_epi_human': (57564, 8000),
147 | 'htan_vmuc_non_epi_human': (9099, 8000),
148 | 'hao_pbmc_3p_human': (161764, 8000),
149 | 'hao_pbmc_5p_human': (49147, 8000),
150 | 'gao_tumors_human': (36111, 8000),
151 | 'swabrick_breast_human': (92427, 8000),
152 | 'wu_cryo_tumors_human': (105662, 8000),
153 | 'cell_line_het_human': (53513, 8000),
154 | 'bi_allen_metastasis_human': (27787, 8000),
155 | 'zheng68k_human': (68579, 8000),
156 | 'zheng68k_12k_human': (68579, 12000),
157 | 'mouse_embryo_ct': (153597, 12000),
158 | "regev_gtex_heart": (36574, 8000),
159 | "tabula_sapiens_heart": (11505, 8000),
160 | "10k_pbmcs":(11990, 12000),
161 | "epo_ido":(35834,12000),
162 | 'tabula_sapiens_kidney': (9641, 8000),
163 | 'tabula_microcebus_kidney': (14592, 8000),
164 | 'tabula_muris_kidney': (2781, 8000),
165 | 'tabula_muris_senis_kidney': (19610, 8000),
166 | 'immune_human': (33506, 8000)
167 | })
168 |
169 | for row in datasets_df.iterrows():
170 | ngenes = row[1].num_genes
171 | ncells = row[1].num_cells
172 | name = row[1].names
173 | if not np.isnan(ngenes):
174 | shapes_dict[name] = (int(ncells), int(ngenes))
175 |
176 | #with open("dataset_shapes.pkl", "wb") as f:
177 | # pickle.dump(shapes_dict, f)
178 | token_dim = 5120
179 | mmap_dict = {}
180 |
181 | root_dir = "/lfs/local/0/yanay/uce_h5s/"
182 | root_dir_census = "/lfs/local/0/yanay/cxg_h5s/"
183 |
184 | dataset_to_paths = {r[1]["names"]:root_dir + r[1]["path"].replace(".h5ad", "_proc.h5ad") for r in datasets_df.iterrows()}
185 | for row in datasets_df.iterrows():
186 | name = row[1].names
187 | census = row[1].census
188 |
189 | if census == "yes":
190 | dataset_to_paths[name] = dataset_to_paths[name].replace(root_dir, root_dir_census)
191 |
192 |
193 | datasets_to_species = {r[1]["names"]:r[1]["species"] for r in datasets_df.iterrows()}
194 |
195 | #species_to_pe = {"mouse":mouse_pe, "human":human_pe, "mouse_lemur":lemur_pe}
196 |
197 | #dataset_to_protein_embeddings_all = {k:species_to_pe[v] for k, v in datasets_to_species.items()}
198 |
199 | dataset_to_protein_embeddings = {}
200 |
201 |
202 | #dataset_to_protein_embeddings_all["madissoon_novel_lung"] = species_to_pe["human"]
203 | datasets_to_species["madissoon_novel_lung"] = "human"
204 | #dataset_to_paths["madissoon_novel_lung"] = "/lfs/local/0/yanay/uce_h5s/madissoon_novel_lung_proc.h5ad"
205 |
206 |
207 |
208 | # New Chrom Based Code
209 | gene_to_chrom_pos = get_spec_chrom_csv()
210 | species_to_chrom_categories = {}
211 |
212 | for species in np.unique(gene_to_chrom_pos["species"]):
213 | species_to_chrom_categories[species] = pd.Categorical(gene_to_chrom_pos["chromosome"]).categories
214 |
215 |
216 | dataset_to_chroms = {}
217 | dataset_to_starts = {}
218 |
219 | sorted_species_names = sorted(species_to_pe.keys())
220 | print(sorted_species_names)
221 |
222 | if os.path.exists(f"/dfs/project/uce/all_species_pe_tokens.torch"):
223 | all_pe = torch.load(f"/dfs/project/uce/all_species_pe_tokens.torch")
224 | with open("/dfs/project/uce/all_species_offsets.pkl", "rb") as f:
225 | species_to_offsets = pickle.load(f)
226 | print("Loaded PE", all_pe.shape)
227 | else:
228 | torch.manual_seed(8)
229 | MASK_TENSOR = torch.zeros((1, token_dim)) # this is the padding token
230 | CHROM_TENSOR_LEFT = torch.normal(mean=0, std=1, size=(1, token_dim))
231 | CHROM_TENSOR_RIGHT = torch.normal(mean=0, std=1, size=(1, token_dim))
232 | CLS_TENSOR = torch.normal(mean=0, std=1, size=(1, token_dim))
233 | species_to_offsets = {}
234 |
235 | all_pe = [MASK_TENSOR, CHROM_TENSOR_LEFT, CHROM_TENSOR_RIGHT, CLS_TENSOR]
236 | offset = len(all_pe) # special tokens at the top!
237 | for species in sorted_species_names:
238 | pe_stacked = torch.stack(list(species_to_pe[species].values()))
239 | all_pe.append(pe_stacked)
240 | species_to_offsets[species] = offset
241 | offset += pe_stacked.shape[0]
242 |
243 | all_pe = torch.vstack(all_pe)
244 | print(all_pe.shape)
245 | torch.save(all_pe, f"/dfs/project/uce/all_species_pe_tokens.torch")
246 | with open("/dfs/project/uce/all_species_offsets.pkl", "wb+") as f:
247 | pickle.dump(species_to_offsets, f)
248 | print("Saved PE")
249 |
250 | # Load in already saved!
251 | if os.path.exists(f"/lfs/local/0/yanay/reduced_datasets_to_pe_chrom_{token_dim}_new.torch"):
252 | dataset_to_protein_embeddings = torch.load(f"/lfs/local/0/yanay/reduced_datasets_to_pe_chrom_{token_dim}_new.torch")
253 |
254 | with open("/lfs/local/0/yanay/dataset_to_chroms_new.pkl", "rb") as f:
255 | dataset_to_chroms = pickle.load(f)
256 | with open("/lfs/local/0/yanay/dataset_to_starts_new.pkl", "rb") as f:
257 | dataset_to_starts = pickle.load(f)
258 | else:
259 | dataset_to_protein_embeddings = {}
260 | dataset_to_chroms = {}
261 | dataset_to_starts = {}
262 |
263 |
264 | # Add the new ones
265 | print("creating reduced size protein embeddings file")
266 |
267 | redo = True
268 |
269 | for dataset, path in tqdm(list(dataset_to_paths.items())):
270 | if dataset in dataset_to_protein_embeddings.keys() and not redo:
271 | continue # skip since already processed
272 | print(dataset)
273 | adata = sc.read(path)
274 | dataset_species = datasets_to_species[dataset]
275 | spec_pe_genes = list(species_to_pe[dataset_species].keys())
276 | offset = species_to_offsets[dataset_species]
277 |
278 | # Get proper idxs
279 | pe_row_idxs, dataset_chroms, dataset_pos = adata_path_to_prot_chrom_starts(adata, dataset_species, spec_pe_genes, gene_to_chrom_pos, offset)
280 | # Add to dicts
281 | dataset_to_chroms[dataset] = dataset_chroms
282 | dataset_to_starts[dataset] = dataset_pos
283 | dataset_to_protein_embeddings[dataset] = pe_row_idxs
284 |
285 | del adata
286 | # save Dicts and idxs
287 | torch.save(dataset_to_protein_embeddings, f"/lfs/local/0/yanay/reduced_datasets_to_pe_chrom_{token_dim}_new.torch")
288 |
289 | with open("/lfs/local/0/yanay/dataset_to_chroms_new.pkl", "wb+") as f:
290 | pickle.dump(dataset_to_chroms, f)
291 | with open("/lfs/local/0/yanay/dataset_to_starts_new.pkl", "wb+") as f:
292 | pickle.dump(dataset_to_starts, f)
--------------------------------------------------------------------------------
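A minimal sketch of the offset bookkeeping the script above relies on (toy data; the gene names and values are hypothetical): the four special tokens occupy the first rows of the stacked table, each species' protein embeddings follow as one contiguous block, and a gene's global token index is its species offset plus its position within that species' embedding dict.

```
import torch

token_dim = 5120
# toy protein-embedding dicts; gene names and vectors are hypothetical stand-ins
species_to_pe = {
    "human": {"TP53": torch.randn(token_dim), "GAPDH": torch.randn(token_dim)},
    "mouse": {"Trp53": torch.randn(token_dim)},
}

special_tokens = [torch.zeros(1, token_dim) for _ in range(4)]  # pad, chrom-left, chrom-right, cls
all_pe = list(special_tokens)
species_to_offsets, offset = {}, len(all_pe)  # special tokens sit at the top
for species in sorted(species_to_pe):
    pe_stacked = torch.stack(list(species_to_pe[species].values()))
    all_pe.append(pe_stacked)
    species_to_offsets[species] = offset
    offset += pe_stacked.shape[0]
all_pe = torch.vstack(all_pe)  # shape: (4 + total number of genes, token_dim)

# a gene's row in all_pe = its species offset + its position in that species' dict
genes = list(species_to_pe["human"].keys())
row_idx = species_to_offsets["human"] + genes.index("GAPDH")
assert torch.equal(all_pe[row_idx], species_to_pe["human"]["GAPDH"])
```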
/data_proc/preproc_many_dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | os.environ["OMP_NUM_THREADS"] = "10" # export OMP_NUM_THREADS=4
3 | os.environ["OPENBLAS_NUM_THREADS"] = "10" # export OPENBLAS_NUM_THREADS=4
4 | os.environ["MKL_NUM_THREADS"] = "10" # export MKL_NUM_THREADS=6
5 | os.environ["VECLIB_MAXIMUM_THREADS"] = "10" # export VECLIB_MAXIMUM_THREADS=4
6 | os.environ["NUMEXPR_NUM_THREADS"] = "10"
7 |
8 |
9 |
10 | from collections import defaultdict
11 | from typing import Dict, List, Optional, Tuple
12 |
13 | import torch
14 | import torch.utils.data as data
15 | import numpy as np
16 | import scanpy as sc
17 | from numpy import array
18 | import subprocess
19 | import argparse
20 | from tqdm import tqdm
21 | import warnings
22 | warnings.filterwarnings("ignore")
23 |
24 |
25 | from gene_embeddings import load_gene_embeddings_adata
26 | import pandas as pd
27 | import numpy as np
28 | from scanpy import AnnData
29 | from data_utils import process_raw_anndata
30 |
31 | def data_to_torch_X(X):
32 | if isinstance(X, sc.AnnData):
33 | X = X.X
34 | if not isinstance(X, np.ndarray):
35 | X = X.toarray()
36 | return torch.from_numpy(X).float()
37 |
38 | class SincleCellDataset(data.Dataset):
39 | def __init__(self,
40 | expression: torch.tensor, # Subset to hv genes, count data! cells x genes
41 | protein_embeddings: torch.tensor, # same order as expression, also subset genes x pe
42 | labels: None, # optional, tensor of labels
43 | covar_vals: None, # tensor of covar values or none
44 | ) -> None:
45 | super(SincleCellDataset, self).__init__()
46 |
47 | # Set expression
48 | self.expression = expression
49 |
50 | row_sums = self.expression.sum(1) # UMI Counts
51 | log_norm_count_adj = torch.log1p(self.expression / (self.expression.sum(1)).unsqueeze(1) * torch.tensor(1000))
52 |
53 | # Set log norm and count adjusted expression
54 | max_vals, max_idx = torch.max(log_norm_count_adj, dim=0)
55 | self.expression_mod = log_norm_count_adj / max_vals
56 |
57 | # Calculate dropout likelihoods of each gene
58 | self.dropout_vec = (self.expression == 0).float().mean(0) # per gene dropout percentages
59 |
60 | # Set data info
61 | self.num_cells = self.expression.shape[0]
62 | self.num_genes = self.expression.shape[1]
63 |
64 | # Set optional label info, including categorical covariate index
65 | self.covar_vals = covar_vals
66 | self.labels = labels
67 |
68 | # Set protein embeddings
69 | self.protein_embeddings = protein_embeddings
70 |
71 | self.item_mode = "expression"
72 | if self.covar_vals is not None:
73 | self.item_mode = "expression+covar"
74 |
75 |
76 | def __getitem__(self, idx):
77 | if self.item_mode == "expression":
78 | if isinstance(idx, int):
79 | if idx < self.num_cells:
80 | return self.expression[idx, :]
81 | else:
82 | raise IndexError
83 | else:
84 | raise NotImplementedError
85 | elif self.item_mode == "expression+covar":
86 | if isinstance(idx, int):
87 | if idx < self.num_cells:
88 | return self.expression[idx, :], self.covar_vals[idx]
89 | else:
90 | raise IndexError
91 | else:
92 | raise NotImplementedError
93 |
94 |
95 | def __len__(self) -> int:
96 | return self.num_cells
97 |
98 | def get_dim(self) -> Dict[str, int]:
99 | return self.num_genes
100 |
101 |
102 | def data_to_torch_X(X):
103 | if isinstance(X, sc.AnnData):
104 | X = X.X
105 | if not isinstance(X, np.ndarray):
106 | X = X.toarray()
107 | return torch.from_numpy(X).float()
108 |
109 |
110 | def anndata_to_sc_dataset(adata:sc.AnnData,
111 | species:str="human",
112 | labels:list=[],
113 | covar_col:str=None,
114 | hv_genes:int=12000,
115 | embedding_model="ESM1b",
116 | ) -> Tuple[SincleCellDataset, AnnData]:
117 |
118 | # Subset to just genes we have embeddings for
119 | adata, protein_embeddings = load_gene_embeddings_adata(
120 | adata=adata,
121 | species=[species],
122 | embedding_model=embedding_model
123 | )
124 |
125 | if DO_HVG:
126 | sc.pp.highly_variable_genes(adata, flavor='seurat_v3', n_top_genes=hv_genes) # Expects Count Data
127 |
128 | hv_index = adata.var["highly_variable"]
129 | adata = adata[:, hv_index] # Subset to hv genes only
130 |
131 | protein_embeddings = protein_embeddings[species][hv_index]
132 | else:
133 | protein_embeddings = protein_embeddings[species]
134 | expression = data_to_torch_X(adata.X)
135 |
136 | covar_vals = None
137 | if len(labels) > 0:
138 | assert covar_col is None or covar_col in labels, "Covar needs to be in labels" # make sure you keep track of covar column!
139 | labels = adata.obs.loc[:, labels].values
140 |
141 | if covar_col is not None:
142 | # we have a categorical label to use as covariate
143 | covar_vals = torch.tensor(pd.Categorical(adata.obs[covar_col]).codes)
144 | return SincleCellDataset(
145 | expression=expression,
146 | protein_embeddings=protein_embeddings,
147 | labels=labels,
148 | covar_vals=covar_vals
149 | ), adata
150 |
151 | def proc(args):
152 | datasets_df = pd.read_csv(args.datasets_df)
153 | datasets_df["covar_col"] = np.nan
154 | skip = args.skip
155 | additional_filter = args.filter
156 | DO_HVG = args.DO_HVG
157 |
158 | num_genes = {}
159 | num_cells = {}
160 |
161 | ir = list(datasets_df.iterrows())
162 | for i, row in tqdm(ir, total=len(datasets_df)):
163 | _, ncells, ngenes = process_raw_anndata(row, args.h5_folder_path, args.npz_folder_path, args.scp, skip, additional_filter, root=args.file_root_path)
164 | if (ncells is not None) and (ngenes is not None):
165 | num_genes[row.path] = ngenes
166 | num_cells[row.path] = ncells
167 |
168 | if "num_cells" not in datasets_df.columns:
169 | datasets_df["num_cells"] = 0
170 | if "num_genes" not in datasets_df.columns:
171 | datasets_df["num_genes"] = 0
172 | for k in num_genes.keys():
173 | ng = num_genes[k]
174 | nc = num_cells[k]
175 | datasets_df.loc[datasets_df["path"] == k, "num_cells"] = nc
176 | datasets_df.loc[datasets_df["path"] == k, "num_genes"] = ng
177 | # Write with the cells and genes info back to the original path
178 | datasets_df.to_csv(args.datasets_df, index=False)
179 | if __name__=="__main__":
180 | # Parse command-line arguments
181 |
182 | parser = argparse.ArgumentParser(description='Preprocess h5ad datasets.')
183 |
184 | # Define command-line arguments
185 | parser.add_argument('--scp', type=str, default="", help='Name of a SNAP server to SCP the results to. It should have the same folders as the script is already saving to.')
186 | parser.add_argument('--h5_folder_path', type=str, default="/lfs/local/0/yanay/uce_h5s/", help='Folder to save H5s to.')
187 | parser.add_argument('--npz_folder_path', type=str, default="/lfs/local/0/yanay/uce_proc/", help='Folder to save NPZs to.')
188 |
189 | parser.add_argument('--file_root_path', type=str, default=None, help='Root folder for the (possibly relative) paths listed in the datasets csv.')
190 | parser.add_argument('--datasets_df', type=str, default="/dfs/project/uce/new_perturb_datasets.csv", help='Path to datasets csv. Will be overwritten to have the correct num cells and num genes for each dataset.')
191 |
192 | parser.add_argument('--filter', type=bool, default=True, help='Do an additional round of gene/cell filtering? This can be useful because subsetting to genes with protein embeddings can leave some cells too sparse, even if the data was already filtered.')
193 | parser.add_argument('--skip', type=bool, default=True, help='Should you skip datasets that appear to have already been created in the h5 folder?')
194 |
195 | parser.add_argument('--DO_HVG', type=bool, default=False, help='Should a HVG subset be done.')
196 |
197 |
198 |
199 | args = parser.parse_args()
200 | proc(args)
201 |
--------------------------------------------------------------------------------
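A hedged sketch of the datasets CSV this script expects, inferred from the columns the scripts above read (`names`, `path`, `species`, and optionally `census`); the file name and row values below are hypothetical, and `num_cells`/`num_genes` are written back into the same CSV by `proc()` after processing.

```
import pandas as pd

# Hypothetical datasets CSV; column names follow the ones the scripts above read.
datasets_df = pd.DataFrame({
    "names":   ["10k_pbmcs", "tabula_muris_kidney"],
    "path":    ["10k_pbmcs.h5ad", "tabula_muris_kidney.h5ad"],
    "species": ["human", "mouse"],
    "census":  ["no", "no"],
})
datasets_df.to_csv("my_datasets.csv", index=False)
# then: python preproc_many_dataset.py --datasets_df my_datasets.csv
```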
/eval_data.py:
--------------------------------------------------------------------------------
1 | """
2 | Dataloaders
3 |
4 | """
5 |
6 | import warnings
7 | warnings.filterwarnings("ignore")
8 | import sys
9 | sys.path.append('../')
10 | from typing import Dict, List, Optional, Tuple, Any
11 | import torch
12 | import numpy as np
13 | import pickle
14 | import torch.utils.data as data
15 |
16 |
17 | class MultiDatasetSentences(data.Dataset):
18 | def __init__(self, sorted_dataset_names, shapes_dict, args,
19 | dataset_to_protein_embeddings_path= "/lfs/local/0/yanay/reduced_datasets_to_pe_chrom_5120_new.torch",
20 | datasets_to_chroms_path="/lfs/local/0/yanay/dataset_to_chroms_new.pkl",
21 | datasets_to_starts_path="/lfs/local/0/yanay/dataset_to_starts_new.pkl",
22 | npzs_dir="/lfs/local/0/yanay/uce_proc/") -> None:
23 | super(MultiDatasetSentences, self).__init__()
24 | # self.xs = {}
25 | self.num_cells = {}
26 | self.num_genes = {}
27 | self.shapes_dict = shapes_dict
28 | self.args = args
29 |
30 | self.total_num_cells = 0
31 | for name in sorted_dataset_names:
32 | num_cells, num_genes = self.shapes_dict[name]
33 | # self.xs[name] = X
34 | self.num_cells[name] = num_cells
35 | self.num_genes[name] = num_genes
36 |
37 | self.total_num_cells += num_cells
38 |
39 | self.datasets = sorted_dataset_names
40 |
41 | # TODO: preferably not hard-coded here
42 | self.dataset_to_protein_embeddings = torch.load(dataset_to_protein_embeddings_path)
43 | with open(datasets_to_chroms_path, "rb") as f:
44 | self.dataset_to_chroms = pickle.load(f)
45 | with open(datasets_to_starts_path, "rb") as f:
46 | self.dataset_to_starts = pickle.load(f)
47 |
48 | self.npzs_dir = npzs_dir
49 |
50 | def __getitem__(self, idx):
51 | if isinstance(idx, int):
52 | for dataset in sorted(self.datasets):
53 | if idx < self.num_cells[dataset]:
54 | #cts = np.memmap(f"/lfs/local/0/yanay/cxg_npzs/" + f"{dataset}_counts.npz",
55 | # dtype='int64', mode='r', shape=self.shapes_dict[dataset])
56 | cts = np.memmap(self.npzs_dir + f"{dataset}_counts.npz", dtype='int64', mode='r', shape=self.shapes_dict[dataset])
57 | counts = cts[idx]
58 | counts = torch.tensor(counts).unsqueeze(0)
59 | weights = torch.log1p(counts)
60 | weights = (weights / torch.sum(weights))
61 | batch_sentences, mask, seq_len, cell_sentences = \
62 | sample_cell_sentences(counts, weights, dataset, self.args,
63 | dataset_to_protein_embeddings= self.dataset_to_protein_embeddings,
64 | dataset_to_chroms=self.dataset_to_chroms,
65 | dataset_to_starts=self.dataset_to_starts)
66 | return batch_sentences, mask, idx, seq_len, cell_sentences
67 | else:
68 | idx -= self.num_cells[dataset]
69 | raise IndexError
70 | else:
71 | raise NotImplementedError
72 |
73 | def __len__(self) -> int:
74 | return self.total_num_cells
75 |
76 | def get_dim(self) -> Dict[str, int]:
77 | return self.num_genes
78 |
79 |
80 | class MultiDatasetSentenceCollator(object):
81 | def __init__(self, args):
82 | self.pad_length = args.pad_length
83 |
84 |
85 | def __call__(self, batch):
86 | batch_size = len(batch)
87 | batch_sentences = torch.zeros((batch_size, self.pad_length))
88 | mask = torch.zeros((batch_size, self.pad_length))
89 | cell_sentences = torch.zeros((batch_size, self.pad_length))
90 |
91 | idxs = torch.zeros(batch_size)
92 |
93 | i = 0
94 | max_len = 0
95 | for bs, msk, idx, seq_len, cs in batch:
96 | batch_sentences[i, :] = bs
97 | cell_sentences[i, :] = cs
98 | max_len = max(max_len, seq_len)
99 | mask[i, :] = msk
100 | idxs[i] = idx
101 |
102 | i += 1
103 |
104 | return batch_sentences[:, :max_len] , mask[:, :max_len], idxs, cell_sentences
105 |
106 |
107 |
108 | def sample_cell_sentences(counts, batch_weights, dataset, args,
109 | dataset_to_protein_embeddings,
110 | dataset_to_chroms,
111 | dataset_to_starts):
112 |
113 | dataset_idxs = dataset_to_protein_embeddings[dataset] # get the dataset specific protein embedding idxs
114 | cell_sentences = torch.zeros((counts.shape[0], args.pad_length)) # init the cell representation as 0s
115 | mask = torch.zeros((counts.shape[0], args.pad_length)) # start of masking the whole sequence
116 | chroms = dataset_to_chroms[dataset] # get the dataset specific chroms for each gene
117 | starts = dataset_to_starts[dataset] # get the dataset specific genomic start locations for each gene
118 |
119 | longest_seq_len = 0 # we need to keep track of this so we can subset the batch at the end
120 |
121 | for c, cell in enumerate(counts):
122 | weights = batch_weights[c].numpy()
123 | weights = weights / sum(weights) # RE NORM after mask
124 |
125 | # randomly choose the genes that will make up the sample, weighted by expression, with replacement
126 | choice_idx = np.random.choice(np.arange(len(weights)),
127 | size=args.sample_size, p=weights,
128 | replace=True)
129 | choosen_chrom = chroms[choice_idx] # get the sampled genes chromosomes
130 | # order the genes by chromosome
131 | chrom_sort = np.argsort(choosen_chrom)
132 | choice_idx = choice_idx[chrom_sort]
133 |
134 | # sort the genes by start
135 | new_chrom = chroms[choice_idx]
136 | choosen_starts = starts[choice_idx]
137 |
138 | ordered_choice_idx = np.full((args.pad_length),
139 | args.cls_token_idx) # start with cls
140 | # i= 0 first token is CLS
141 | i = 1 # continue on to the rest of the sequence with left bracket being assumed.
142 | # Shuffle the chroms now, there's no natural order to chromosomes
143 | uq_chroms = np.unique(new_chrom)
144 | np.random.shuffle(uq_chroms) # shuffle
145 |
146 | # This loop is actually just over one cell
147 | for chrom in uq_chroms:
148 | # Open Chrom token
149 | ordered_choice_idx[i] = int(chrom) + args.CHROM_TOKEN_OFFSET # token of this chromosome # i = 1 next token is a chrom open
150 | i += 1
151 | # now sort the genes by start order within the chroms
152 | loc = np.where(new_chrom == chrom)[0]
153 | sort_by_start = np.argsort(
154 | choosen_starts[loc]) # start locations for this chromosome
155 |
156 | to_add = choice_idx[loc[sort_by_start]]
157 | ordered_choice_idx[i:(i + len(to_add))] = dataset_idxs[to_add]
158 | i += len(to_add)
159 | ordered_choice_idx[i] = args.chrom_token_right_idx # add the chrom sep again
160 | i += 1 # add the closing token again
161 |
162 | longest_seq_len = max(longest_seq_len, i)
163 | remainder_len = (args.pad_length - i)
164 |
165 | cell_mask = torch.concat((torch.ones(i),
166 | # pay attention to all of these tokens, ignore the rest!
167 | torch.zeros(remainder_len)))
168 |
169 | mask[c, :] = cell_mask
170 |
171 | ordered_choice_idx[i:] = args.pad_token_idx # the remainder of the sequence
172 | cell_sentences[c, :] = torch.from_numpy(ordered_choice_idx)
173 |
174 | cell_sentences_pe = cell_sentences.long() # token indices
175 |
176 | return cell_sentences_pe, mask, longest_seq_len, cell_sentences
--------------------------------------------------------------------------------
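A toy illustration of the cell-sentence layout that `sample_cell_sentences` builds for one cell: a CLS token, then for each (shuffled) chromosome an open-chromosome token, the sampled gene tokens sorted by genomic start, and a closing token, with the remainder padded. All token indices below are hypothetical stand-ins for rows of the real token file.

```
import numpy as np

# Hypothetical token indices for a tiny example.
pad_length, cls_idx, pad_idx, chrom_close_idx = 12, 3, 0, 2
chrom_open = {"chr1": 100, "chr2": 101}          # CHROM_TOKEN_OFFSET + chromosome id
genes_by_chrom = {"chr1": [7, 5], "chr2": [9]}   # sampled gene tokens, sorted by start position

sentence = np.full(pad_length, pad_idx)
sentence[0], i = cls_idx, 1                      # the sentence always starts with CLS
for chrom in np.random.permutation(list(genes_by_chrom)):  # chromosome order is shuffled
    sentence[i] = chrom_open[chrom]; i += 1      # open-chromosome token
    for g in genes_by_chrom[chrom]:
        sentence[i] = g; i += 1                  # gene tokens for this chromosome
    sentence[i] = chrom_close_idx; i += 1        # closing chromosome token
mask = np.arange(pad_length) < i                 # attend to real tokens, ignore the padding
print(sentence, mask)
```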
/eval_single_anndata.py:
--------------------------------------------------------------------------------
1 | """
2 | Script for Evaluating a Single AnnData
3 |
4 | Parameters:
5 | ----------
6 | - `adata_path` (str):
7 | Full path to the AnnData you want to embed.
8 | - `dir` (str):
9 | Working folder where all files will be saved.
10 | - `species` (str):
11 | Species of the AnnData.
12 | - `filter` (bool):
13 | Additional gene/cell filtering on the AnnData.
14 | - `skip` (bool):
15 | Skip datasets that appear to have already been created.
16 | - `model_loc` (str):
17 | Location of pretrained UCE model's weights in a `.torch` file.
18 | - `batch_size` (int):
19 | Batch size for processing.
20 | - `CXG` (bool):
21 | Use CXG model.
22 | - `nlayers` (int):
23 | Number of transformer layers.
24 | - `output_dim` (int):
25 | Desired output dimension.
26 | - `d_hid` (int):
27 | Hidden dimension for processing.
28 | - `token_dim` (int):
29 | Token dimension.
30 | - `spec_chrom_csv_path` (str):
31 | CSV file mapping genes from each species to their respective chromosomes
32 | and genomic start positions.
33 | - `token_file` (str):
34 | `.torch` file containing token/protein embeddings for all tokens.
35 | - `protein_embeddings_dir` (str):
36 | Directory containing protein embedding `.pt` files for all species.
37 | - `offset_pkl_path` (str):
38 | `.pkl` file mapping between species and their gene's locations in the `token_file`.
39 | - `pad_length` (int):
40 | Length to pad the cell sentence to.
41 | - `pad_token_idx` (int):
42 | Index of the padding token in the `token_file`.
43 | - `chrom_token_left_idx` (int):
44 | Left chromosome token index in the `token_file`.
45 | - `chrom_token_right_idx` (int):
46 | Right chromosome token index in the `token_file`.
47 | - `cls_token_idx` (int):
48 | CLS token index in the `token_file`.
49 | - `CHROM_TOKEN_OFFSET` (int):
50 | Offset index, tokens after this mark are chromosome identifiers.
51 | - `sample_size` (int):
52 | Number of genes sampled for cell sentence.
53 | - `multi_gpu` (bool):
54 | Run evaluation on multiple GPUs (using the accelerator).
55 |
56 | Returns:
57 | -------
58 | - `dir/{dataset_name}_proc.h5ad`:
59 | The processed AnnData. Processing involves subsetting it to genes which
60 | have protein embeddings and then refiltering the dataset by minimum counts.
61 | - `dir/{dataset_name}_chroms.pkl`:
62 | File mapping the genes in the dataset to their corresponding chromosome
63 | indices.
64 | - `dir/{dataset_name}_counts.npz`:
65 | File containing the counts of the AnnData in an easily accessible format.
66 | - `dir/{dataset_name}_shapes_dict.pkl`:
67 | File containing the shape (ncell x ngene) of the AnnData, used to read the
68 | `.npz` file.
69 | - `dir/{dataset_name}_pe_idx.torch`:
70 | File mapping between the genes in the dataset and their index in the tokens file.
71 | - `dir/{dataset_name}_starts.pkl`:
72 | File mapping between the genes in the dataset and their genomic start locations.
73 |
74 | """
75 |
76 |
77 | import argparse
78 | from evaluate import AnndataProcessor
79 | from accelerate import Accelerator
80 |
81 | def main(args, accelerator):
82 | processor = AnndataProcessor(args, accelerator)
83 | processor.preprocess_anndata()
84 | processor.generate_idxs()
85 | processor.run_evaluation()
86 |
87 |
88 | if __name__ == "__main__":
89 | parser = argparse.ArgumentParser(
90 | description='Embed a single anndata using UCE.')
91 |
92 | # Anndata Processing Arguments
93 | parser.add_argument('--adata_path', type=str,
94 | default=None,
95 | help='Full path to the anndata you want to embed.')
96 | parser.add_argument('--dir', type=str,
97 | default="./",
98 | help='Working folder where all files will be saved.')
99 | parser.add_argument('--species', type=str, default="human",
100 | help='Species of the anndata.')
101 | parser.add_argument('--filter', type=bool, default=True,
102 | help='Additional gene/cell filtering on the anndata.')
103 | parser.add_argument('--skip', type=bool, default=True,
104 | help='Skip datasets that appear to have already been created.')
105 |
106 | # Model Arguments
107 | parser.add_argument('--model_loc', type=str,
108 | default=None,
109 | help='Location of the model.')
110 | parser.add_argument('--batch_size', type=int, default=25,
111 | help='Batch size.')
112 | parser.add_argument('--pad_length', type=int, default=1536,
113 | help='Length to pad the cell sentence to.')
114 | parser.add_argument("--pad_token_idx", type=int, default=0,
115 | help="PAD token index")
116 | parser.add_argument("--chrom_token_left_idx", type=int, default=1,
117 | help="Chrom token left index")
118 | parser.add_argument("--chrom_token_right_idx", type=int, default=2,
119 | help="Chrom token right index")
120 | parser.add_argument("--cls_token_idx", type=int, default=3,
121 | help="CLS token index")
122 | parser.add_argument("--CHROM_TOKEN_OFFSET", type=int, default=143574,
123 | help="Offset index, tokens after this mark are chromosome identifiers")
124 | parser.add_argument('--sample_size', type=int, default=1024,
125 | help='Number of genes sampled for cell sentence')
126 | parser.add_argument('--CXG', type=bool, default=True,
127 | help='Use CXG model.')
128 | parser.add_argument('--nlayers', type=int, default=4,
129 | help='Number of transformer layers.')
130 | parser.add_argument('--output_dim', type=int, default=1280,
131 | help='Output dimension.')
132 | parser.add_argument('--d_hid', type=int, default=5120,
133 | help='Hidden dimension.')
134 | parser.add_argument('--token_dim', type=int, default=5120,
135 | help='Token dimension.')
136 | parser.add_argument('--multi_gpu', type=bool, default=False,
137 | help='Use multiple GPUs')
138 |
139 | # Misc Arguments
140 | parser.add_argument("--spec_chrom_csv_path",
141 | default="./model_files/species_chrom.csv", type=str,
142 | help="CSV Path for species genes to chromosomes and start locations.")
143 | parser.add_argument("--token_file",
144 | default="./model_files/all_tokens.torch", type=str,
145 | help="Path for token embeddings.")
146 | parser.add_argument("--protein_embeddings_dir",
147 | default="./model_files/protein_embeddings/", type=str,
148 | help="Directory where protein embedding .pt files are stored.")
149 | parser.add_argument("--offset_pkl_path",
150 | default="./model_files/species_offsets.pkl", type=str,
151 | help="PKL file which contains offsets for each species.")
152 |
153 | args = parser.parse_args()
154 | accelerator = Accelerator(project_dir=args.dir)
155 | main(args, accelerator)
156 |
--------------------------------------------------------------------------------
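Once the script finishes, the embedded AnnData is written to `{dir}/{dataset_name}_uce_adata.h5ad` with the per-cell embeddings stored in `.obsm["X_uce"]`. A minimal sketch of reading it back (the path below assumes the default 10k PBMCs example and is otherwise hypothetical):

```
import scanpy as sc

# Read back the embedded AnnData written by run_eval().
adata = sc.read("./10k_pbmcs_proc_uce_adata.h5ad")  # hypothetical output path
embeddings = adata.obsm["X_uce"]   # (n_cells, output_dim); 1280 with the default settings
print(embeddings.shape)
```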
/evaluate.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | # os.environ["NCCL_DEBUG"] = "INFO"
4 | os.environ["OMP_NUM_THREADS"] = "12" # export OMP_NUM_THREADS=4
5 | os.environ["OPENBLAS_NUM_THREADS"] = "12" # export OPENBLAS_NUM_THREADS=4
6 | os.environ["MKL_NUM_THREADS"] = "12" # export MKL_NUM_THREADS=6
7 | os.environ["VECLIB_MAXIMUM_THREADS"] = "12" # export VECLIB_MAXIMUM_THREADS=4
8 | os.environ["NUMEXPR_NUM_THREADS"] = "12"
9 |
10 | import warnings
11 |
12 | warnings.filterwarnings("ignore")
13 |
14 | import scanpy as sc
15 | from tqdm.auto import tqdm
16 | from torch import nn, Tensor
17 |
18 | from model import TransformerModel
19 | from eval_data import MultiDatasetSentences, MultiDatasetSentenceCollator
20 | from utils import figshare_download
21 |
22 | from torch.utils.data import DataLoader
23 | from data_proc.data_utils import adata_path_to_prot_chrom_starts, \
24 | get_spec_chrom_csv, process_raw_anndata, get_species_to_pe
25 |
26 | import os
27 | import pickle
28 | import pandas as pd
29 | import numpy as np
30 | import torch
31 |
32 |
33 | class AnndataProcessor:
34 | def __init__(self, args, accelerator):
35 | self.args = args
36 | self.accelerator = accelerator
37 | self.h5_folder_path = self.args.dir
38 | self.npz_folder_path = self.args.dir
39 | self.scp = ""
40 |
41 | # Check if paths exist, if not, create them
42 | self.check_paths()
43 |
44 | # Set up the anndata
45 | self.adata_name = self.args.adata_path.split("/")[-1]
46 | self.adata_root_path = self.args.adata_path.replace(self.adata_name, "")
47 | self.name = self.adata_name.replace(".h5ad", "")
48 | self.proc_h5_path = self.h5_folder_path + f"{self.name}_proc.h5ad"
49 | self.adata = None
50 |
51 | # Set up the row
52 | row = pd.Series()
53 | row.path = self.adata_name
54 | row.covar_col = np.nan
55 | row.species = self.args.species
56 | self.row = row
57 |
58 | # Set paths once to be used throughout the class
59 | self.pe_idx_path = self.args.dir + f"{self.name}_pe_idx.torch"
60 | self.chroms_path = self.args.dir + f"{self.name}_chroms.pkl"
61 | self.starts_path = self.args.dir + f"{self.name}_starts.pkl"
62 | self.shapes_dict_path = self.args.dir + f"{self.name}_shapes_dict.pkl"
63 |
64 | def check_paths(self):
65 | """
66 | Check if the paths exist, if not, create them
67 | """
68 | figshare_download("https://figshare.com/ndownloader/files/42706558",
69 | self.args.spec_chrom_csv_path)
70 | figshare_download("https://figshare.com/ndownloader/files/42706555",
71 | self.args.offset_pkl_path)
72 | if not os.path.exists(self.args.protein_embeddings_dir):
73 | figshare_download("https://figshare.com/ndownloader/files/42715213",
74 | 'model_files/protein_embeddings.tar.gz')
75 | figshare_download("https://figshare.com/ndownloader/files/42706585",
76 | self.args.token_file)
77 | if self.args.adata_path is None:
78 | print("Using sample AnnData: 10k pbmcs dataset")
79 | self.args.adata_path = "./data/10k_pbmcs_proc.h5ad"
80 | figshare_download(
81 | "https://figshare.com/ndownloader/files/42706966",
82 | self.args.adata_path)
83 | if self.args.model_loc is None:
84 | print("Using sample 4 layer model")
85 | self.args.model_loc = "./model_files/4layer_model.torch"
86 | figshare_download(
87 | "https://figshare.com/ndownloader/files/42706576",
88 | self.args.model_loc)
89 |
90 |
91 | def preprocess_anndata(self):
92 | if self.accelerator.is_main_process:
93 | self.adata, num_cells, num_genes = \
94 | process_raw_anndata(self.row,
95 | self.h5_folder_path,
96 | self.npz_folder_path,
97 | self.scp,
98 | self.args.skip,
99 | self.args.filter,
100 | root=self.adata_root_path)
101 | if (num_cells is not None) and (num_genes is not None):
102 | self.save_shapes_dict(self.name, num_cells, num_genes,
103 | self.shapes_dict_path)
104 |
105 | if self.adata is None:
106 | self.adata = sc.read(self.proc_h5_path)
107 |
108 | def save_shapes_dict(self, name, num_cells, num_genes, shapes_dict_path):
109 | shapes_dict = {name: (num_cells, num_genes)}
110 | with open(shapes_dict_path, "wb+") as f:
111 | pickle.dump(shapes_dict, f)
112 | print("Wrote Shapes Dict")
113 |
114 | def generate_idxs(self):
115 | if self.accelerator.is_main_process:
116 | if os.path.exists(self.pe_idx_path) and \
117 | os.path.exists(self.chroms_path) and \
118 | os.path.exists(self.starts_path):
119 | print("PE Idx, Chrom and Starts files already created")
120 |
121 | else:
122 | species_to_pe = get_species_to_pe(self.args.protein_embeddings_dir)
123 | with open(self.args.offset_pkl_path, "rb") as f:
124 | species_to_offsets = pickle.load(f)
125 |
126 | gene_to_chrom_pos = get_spec_chrom_csv(
127 | self.args.spec_chrom_csv_path)
128 | dataset_species = self.args.species
129 | spec_pe_genes = list(species_to_pe[dataset_species].keys())
130 | offset = species_to_offsets[dataset_species]
131 | pe_row_idxs, dataset_chroms, dataset_pos = adata_path_to_prot_chrom_starts(
132 | self.adata, dataset_species, spec_pe_genes, gene_to_chrom_pos, offset)
133 |
134 | # Save to the temp dict
135 | torch.save({self.name: pe_row_idxs}, self.pe_idx_path)
136 | with open(self.chroms_path, "wb+") as f:
137 | pickle.dump({self.name: dataset_chroms}, f)
138 | with open(self.starts_path, "wb+") as f:
139 | pickle.dump({self.name: dataset_pos}, f)
140 |
141 | def run_evaluation(self):
142 | self.accelerator.wait_for_everyone()
143 | with open(self.shapes_dict_path, "rb") as f:
144 | shapes_dict = pickle.load(f)
145 | run_eval(self.adata, self.name, self.pe_idx_path, self.chroms_path,
146 | self.starts_path, shapes_dict, self.accelerator, self.args)
147 |
148 |
149 | def get_ESM2_embeddings(args):
150 | # Load in ESM2 embeddings and special tokens
151 | all_pe = torch.load(args.token_file)
152 | if all_pe.shape[0] == 143574:
153 | torch.manual_seed(23)
154 | CHROM_TENSORS = torch.normal(mean=0, std=1, size=(1895, args.token_dim))
155 | # 1895 is the total number of chromosome choices, it is hardcoded for now
156 | all_pe = torch.vstack(
157 | (all_pe, CHROM_TENSORS)) # Add the chrom tensors to the end
158 | all_pe.requires_grad = False
159 |
160 | return all_pe
161 |
162 |
163 | def padding_tensor(sequences):
164 | """
165 | :param sequences: list of tensors
166 | :return:
167 | """
168 | num = len(sequences)
169 | max_len = max([s.size(0) for s in sequences])
170 | out_dims = (num, max_len, 1280)
171 |
172 | out_tensor = sequences[0].data.new(*out_dims).fill_(0)
173 | out_dims2 = (num, max_len)
174 |
175 | mask = sequences[0].data.new(*out_dims2).fill_(float('-inf'))
176 | for i, tensor in enumerate(sequences):
177 | length = tensor.size(0)
178 | out_tensor[i, :length] = tensor
179 | mask[i, :length] = 1
180 | return out_tensor.permute(1, 0, 2), mask
181 |
182 |
183 | def run_eval(adata, name, pe_idx_path, chroms_path, starts_path, shapes_dict,
184 | accelerator, args):
185 |
186 | #### Set up the model ####
187 | token_dim = args.token_dim
188 | emsize = 1280 # embedding dimension
189 | d_hid = args.d_hid # dimension of the feedforward network model in nn.TransformerEncoder
190 | nlayers = args.nlayers # number of nn.TransformerEncoderLayer in nn.TransformerEncoder
191 | nhead = 20 # number of heads in nn.MultiheadAttention
192 | dropout = 0.05 # dropout probability
193 | model = TransformerModel(token_dim=token_dim, d_model=emsize, nhead=nhead,
194 | d_hid=d_hid,
195 | nlayers=nlayers, dropout=dropout,
196 | output_dim=args.output_dim)
197 | if args.model_loc is None:
198 | raise ValueError("Must provide a model location")
199 | # initialize as empty
200 | empty_pe = torch.zeros(145469, 5120)
201 | empty_pe.requires_grad = False
202 | model.pe_embedding = nn.Embedding.from_pretrained(empty_pe)
203 | model.load_state_dict(torch.load(args.model_loc, map_location="cpu"),
204 | strict=True)
205 | # Load in the real token embeddings
206 | all_pe = get_ESM2_embeddings(args)
207 | # This will make sure that you don't overwrite the tokens in case you're embedding species from the training data
208 | # We avoid doing that just in case the random seeds are different across different versions.
209 | if all_pe.shape[0] != 145469:
210 | all_pe.requires_grad = False
211 | model.pe_embedding = nn.Embedding.from_pretrained(all_pe)
212 | print(f"Loaded model:\n{args.model_loc}")
213 | model = model.eval()
214 | model = accelerator.prepare(model)
215 | batch_size = args.batch_size
216 |
217 | #### Run the model ####
218 | # Dataloaders
219 | dataset = MultiDatasetSentences(sorted_dataset_names=[name],
220 | shapes_dict=shapes_dict,
221 | args=args, npzs_dir=args.dir,
222 | dataset_to_protein_embeddings_path=pe_idx_path,
223 | datasets_to_chroms_path=chroms_path,
224 | datasets_to_starts_path=starts_path
225 | )
226 | multi_dataset_sentence_collator = MultiDatasetSentenceCollator(args)
227 |
228 | dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=False,
229 | collate_fn=multi_dataset_sentence_collator,
230 | num_workers=0)
231 | dataloader = accelerator.prepare(dataloader)
232 | pbar = tqdm(dataloader, disable=not accelerator.is_local_main_process)
233 | dataset_embeds = []
234 | with torch.no_grad():
235 | for batch in pbar:
236 | batch_sentences, mask, idxs = batch[0], batch[1], batch[2]
237 | batch_sentences = batch_sentences.permute(1, 0)
238 | if args.multi_gpu:
239 | batch_sentences = model.module.pe_embedding(batch_sentences.long())
240 | else:
241 | batch_sentences = model.pe_embedding(batch_sentences.long())
242 | batch_sentences = nn.functional.normalize(batch_sentences,
243 | dim=2) # Normalize token outputs now
244 | _, embedding = model.forward(batch_sentences, mask=mask)
245 | # Fix for duplicates in last batch
246 | accelerator.wait_for_everyone()
247 | embeddings = accelerator.gather_for_metrics((embedding))
248 | if accelerator.is_main_process:
249 | dataset_embeds.append(embeddings.detach().cpu().numpy())
250 |
251 | accelerator.wait_for_everyone()
252 | if accelerator.is_main_process:
253 | dataset_embeds = np.vstack(dataset_embeds)
254 | adata.obsm["X_uce"] = dataset_embeds
255 | write_path = args.dir + f"{name}_uce_adata.h5ad"
256 | adata.write(write_path)
257 |
258 | print("*****Wrote Anndata to:*****")
259 | print(write_path)
260 |
--------------------------------------------------------------------------------
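The stored `X_uce` matrix can be fed straight into standard scanpy workflows. A short sketch (not part of the UCE scripts; the path is hypothetical) of building a neighbor graph and UMAP on the UCE embedding instead of PCA:

```
import scanpy as sc

# Sketch only: downstream analysis directly on the UCE embedding.
adata = sc.read("./10k_pbmcs_proc_uce_adata.h5ad")   # hypothetical output path
sc.pp.neighbors(adata, use_rep="X_uce")              # neighbor graph on X_uce instead of PCA
sc.tl.umap(adata)
sc.pl.umap(adata)
```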
/examples/Label Transfer Using Logistic Classifier.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "id": "3f4f1b19-5369-4e4d-9366-b6f07f88b402",
6 | "metadata": {},
7 | "source": [
8 | "# Transferring Labels Using UCE\n",
9 | "\n",
10 | "This notebook walks through the example from Figure 4d,4e of transferring labels from mouse kidney norn cells to a human lung disease dataset.\n",
11 | "\n",
12 | "To transfer labels, we use a basic default implementation of sklearn's logistic classifier."
13 | ]
14 | },
15 | {
16 | "cell_type": "code",
17 | "execution_count": 1,
18 | "id": "5ca49083-fd91-473f-b60a-621b07d52de2",
19 | "metadata": {},
20 | "outputs": [],
21 | "source": [
22 | "## Imports\n",
23 | "import scanpy as sc\n",
24 | "import numpy as np\n",
25 | "import random\n",
26 | "from sklearn.linear_model import LogisticRegression\n",
27 | "sc._settings.settings._vector_friendly=True\n",
28 | "import matplotlib\n",
29 | "import matplotlib.pyplot as plt\n",
30 | "\n",
31 | "## Seed\n",
32 | "np.random.seed(0)\n",
33 | "random.seed(0)"
34 | ]
35 | },
36 | {
37 | "cell_type": "markdown",
38 | "id": "72536f9e-b010-44a7-b323-c32f05cf7d98",
39 | "metadata": {},
40 | "source": [
41 | "## Load in anndatas\n",
42 | "You can download the anndatas here: https://drive.google.com/drive/folders/1f63fh0ykgEhCrkd_EVvIootBw7LYDVI7"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": 2,
48 | "id": "8e5d7a7e-2a86-4fce-82fe-17afcf83dec5",
49 | "metadata": {},
50 | "outputs": [],
51 | "source": [
52 | "epo_uce = sc.read(\"mouse_kidney_norn.h5ad\")\n",
53 | "kam_20_uce = sc.read(\"human_lung_disease.h5ad\")"
54 | ]
55 | },
56 | {
57 | "cell_type": "markdown",
58 | "id": "dcbf764a-ad99-4622-a780-ecd62e471132",
59 | "metadata": {},
60 | "source": [
61 | "### Train Classifier on Mouse Kidney Cells\n",
62 | "\n",
63 | "We train a classifier to predict coarsened cell types, from the UCE embeddings"
64 | ]
65 | },
66 | {
67 | "cell_type": "code",
68 | "execution_count": 3,
69 | "id": "9e56e9aa-bf4a-42d9-b8a6-e76833161083",
70 | "metadata": {},
71 | "outputs": [],
72 | "source": [
73 | "epo_map = {\n",
74 | " \"Norn\":\"Norn\",\n",
75 | " \"Proximal tubule\":\"Proximal tubule\",\n",
76 | " \"Collecting duct principal\":\"Collecting duct\",\n",
77 | " \"Distal convoluted tubule\":\"Distal convoluted tubule\",\n",
78 | " \"Fibroblasts\":\"Fibroblast\",\n",
79 | " \"Endothelial\":\"Endothelial\",\n",
80 | " \"Collecting duct transient\":\"Collecting duct\",\n",
81 | " \"Other\":\"misc\",\n",
82 | " \"Pericyte Ren1+\":\"Pericyte\",\n",
83 | " \"Podocytes\":\"Podocyte\",\n",
84 | " \"Pericyte3\":\"Pericyte\",\n",
85 | " \"Pericyte1\":\"Pericyte\",\n",
86 | " \"Pericyte2\":\"Pericyte\",\n",
87 | " \"Collecting duct intercalated\":\"Collecting duct\",\n",
88 | " \"Loop of henle\":\"Loop of henle\",\n",
89 | " \"Proximal tubule2\":\"Proximal tubule\",\n",
90 | " \"Macrophages\":\"Macrophage\",\n",
91 | " \"Neutrophil\":\"Granulocyte\",\n",
92 | " \"T lymphocyte\":\"T cell\",\n",
93 | " \"Collecting duct\":\"Collecting duct\",\n",
94 | " \"Monocytes\":\"Monocyte\",\n",
95 | " \n",
96 | "} # coarse cell type map"
97 | ]
98 | },
99 | {
100 | "cell_type": "code",
101 | "execution_count": 4,
102 | "id": "39b160cd-4462-437f-b89e-7363e20e8ebe",
103 | "metadata": {},
104 | "outputs": [],
105 | "source": [
106 | "epo_uce_no_misc = epo_uce[epo_uce.obs.group != \"Other\"] # remove misc cells\n",
107 | "X = epo_uce_no_misc.obsm[\"X_uce\"] # input is UCE embeddings\n",
108 | "y = [epo_map[ct] for ct in epo_uce_no_misc.obs[\"group\"].values] # output is mapped cell types\n",
109 | "clf = LogisticRegression(random_state=0).fit(X, y) # fit classifier"
110 | ]
111 | },
112 | {
113 | "cell_type": "markdown",
114 | "id": "ae2fd8b6-e681-4f02-9f9c-71d71d95f925",
115 | "metadata": {},
116 | "source": [
117 | "### Predict norn-like cells using classifier"
118 | ]
119 | },
120 | {
121 | "cell_type": "code",
122 | "execution_count": 5,
123 | "id": "722bcc74-aaca-43f9-b5dd-2427584d7683",
124 | "metadata": {},
125 | "outputs": [],
126 | "source": [
127 | "kam_20_uce.obs[\"pred\"] = clf.predict(kam_20_uce.obsm[\"X_uce\"]) # predict cell types for lung disease dataset"
128 | ]
129 | },
130 | {
131 | "cell_type": "code",
132 | "execution_count": 6,
133 | "id": "ae31d2c0-5987-46c1-9be0-d874ae6577b5",
134 | "metadata": {},
135 | "outputs": [
136 | {
137 | "data": {
138 | "text/plain": [
139 | "pred\n",
140 | "Proximal tubule 119834\n",
141 | "T cell 93556\n",
142 | "Granulocyte 52485\n",
143 | "Collecting duct 15727\n",
144 | "Macrophage 11800\n",
145 | "Endothelial 7233\n",
146 | "Norn 6005\n",
147 | "Podocyte 4270\n",
148 | "Pericyte 1316\n",
149 | "Fibroblast 623\n",
150 | "Monocyte 56\n",
151 | "Loop of henle 23\n",
152 | "Name: count, dtype: int64"
153 | ]
154 | },
155 | "execution_count": 6,
156 | "metadata": {},
157 | "output_type": "execute_result"
158 | }
159 | ],
160 | "source": [
161 | "kam_20_uce.obs[\"pred\"].value_counts()"
162 | ]
163 | },
164 | {
165 | "cell_type": "markdown",
166 | "id": "23a839f8-88ce-4446-b4a5-961a38a264b5",
167 | "metadata": {},
168 | "source": [
169 | "# Check Differential Expression"
170 | ]
171 | },
172 | {
173 | "cell_type": "code",
174 | "execution_count": 7,
175 | "id": "159cc172-1be5-4ad2-a4b5-2ebd2e52302e",
176 | "metadata": {},
177 | "outputs": [],
178 | "source": [
179 | "# Preproccess Count Values\n",
180 | "sc.pp.highly_variable_genes(kam_20_uce, n_top_genes=8000, flavor=\"seurat_v3\", subset=True)\n",
181 | "sc.pp.normalize_per_cell(kam_20_uce)\n",
182 | "sc.pp.log1p(kam_20_uce)"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 8,
188 | "id": "57b9fb8a-2699-43f4-b74f-2a646e8610c8",
189 | "metadata": {},
190 | "outputs": [],
191 | "source": [
192 | "# Subset to predicted Norn-like cells\n",
193 | "kam20_norn_ad = kam_20_uce[kam_20_uce.obs.pred == \"Norn\"].copy()"
194 | ]
195 | },
196 | {
197 | "cell_type": "code",
198 | "execution_count": 9,
199 | "id": "591d1aa7-c4ad-4ad9-9bef-2ed91ed19b57",
200 | "metadata": {},
201 | "outputs": [],
202 | "source": [
203 | "all_de_dfs = {}\n",
204 | "ngenes = 4"
205 | ]
206 | },
207 | {
208 | "cell_type": "code",
209 | "execution_count": 10,
210 | "id": "bb5bc067-c588-4a34-a8f3-138824531828",
211 | "metadata": {},
212 | "outputs": [],
213 | "source": [
214 | "sc.tl.rank_genes_groups(kam20_norn_ad, groupby=\"Disease_Identity\", use_raw=False, reference=\"Control\") # DE, diseases vs control"
215 | ]
216 | },
217 | {
218 | "cell_type": "code",
219 | "execution_count": 11,
220 | "id": "db176232-0dee-4c1a-bda2-38a19a7afcdd",
221 | "metadata": {},
222 | "outputs": [],
223 | "source": [
224 | "de_df = sc.get.rank_genes_groups_df(kam20_norn_ad, group=\"COPD\") # get COPD vs control results\n",
225 | "all_de_dfs[\"copd_vs_control\"] = de_df[~de_df.index.isin(de_df.iloc[10:-10].index)] # top 10 and bottom 10 genes\n",
226 | "copd_control_genes = list(de_df.head(ngenes)[\"names\"].values)"
227 | ]
228 | },
229 | {
230 | "cell_type": "code",
231 | "execution_count": 12,
232 | "id": "c4d25f7a-4056-4e03-bcd1-9ab9cb2705e6",
233 | "metadata": {},
234 | "outputs": [],
235 | "source": [
236 | "de_df = sc.get.rank_genes_groups_df(kam20_norn_ad, group=\"IPF\") # get IPF vs control results\n",
237 | "all_de_dfs[\"ipf_vs_control\"] = de_df[~de_df.index.isin(de_df.iloc[10:-10].index)] # top 10 and bottom 10 genes\n",
238 | "ipf_control_genes = list(de_df.head(ngenes)[\"names\"].values)"
239 | ]
240 | },
241 | {
242 | "cell_type": "code",
243 | "execution_count": 13,
244 | "id": "4e72311d-44ee-4ca5-8abc-24e9079b574b",
245 | "metadata": {},
246 | "outputs": [],
247 | "source": [
248 | "sc.tl.rank_genes_groups(kam20_norn_ad, groupby=\"Disease_Identity\", use_raw=False, reference=\"IPF\") # DE, all vs IPF"
249 | ]
250 | },
251 | {
252 | "cell_type": "code",
253 | "execution_count": 14,
254 | "id": "69be495e-3b0f-41f3-95d8-47a526f00bbd",
255 | "metadata": {},
256 | "outputs": [],
257 | "source": [
258 | "de_df = sc.get.rank_genes_groups_df(kam20_norn_ad, group=\"COPD\") # COPD vs IPF\n",
259 | "all_de_dfs[\"copd_vs_ipf\"] = de_df[~de_df.index.isin(de_df.iloc[10:-10].index)] # top 10 and bottom 10 genes\n",
260 | "copd_ipf_genes = list(de_df.head(ngenes)[\"names\"].values)"
261 | ]
262 | },
263 | {
264 | "cell_type": "code",
265 | "execution_count": 15,
266 | "id": "db9e2358-7431-49df-a980-ff4869d824d4",
267 | "metadata": {},
268 | "outputs": [],
269 | "source": [
270 | "sc.tl.rank_genes_groups(kam20_norn_ad, groupby=\"Disease_Identity\", use_raw=False, reference=\"COPD\") # DE, all vs COPD"
271 | ]
272 | },
273 | {
274 | "cell_type": "code",
275 | "execution_count": 16,
276 | "id": "753b9891-f940-46b7-9fa2-8972b6f67136",
277 | "metadata": {},
278 | "outputs": [],
279 | "source": [
280 | "de_df = sc.get.rank_genes_groups_df(kam20_norn_ad, group=\"IPF\") # IPF vs COPD\n",
281 | "all_de_dfs[\"ipf_vs_copd\"] = de_df[~de_df.index.isin(de_df.iloc[10:-10].index)] # top 10 and bottom 10 genes\n",
282 | "ipf_copd_genes = list(de_df.head(ngenes)[\"names\"].values)"
283 | ]
284 | },
285 | {
286 | "cell_type": "code",
287 | "execution_count": 17,
288 | "id": "647407f4-c9ef-405e-9742-342e31664497",
289 | "metadata": {},
290 | "outputs": [
291 | {
292 | "data": {
293 | "text/plain": [
294 | "['POSTN',\n",
295 | " 'COL1A1',\n",
296 | " 'COL3A1',\n",
297 | " 'SPARC',\n",
298 | " 'LUM',\n",
299 | " 'MFAP4',\n",
300 | " 'PTGDS',\n",
301 | " 'PTPRG',\n",
302 | " 'GPX3',\n",
303 | " 'NAMPT',\n",
304 | " 'RPL41',\n",
305 | " 'CRISPLD2',\n",
306 | " 'SERPINH1',\n",
307 | " 'COL1A2']"
308 | ]
309 | },
310 | "execution_count": 17,
311 | "metadata": {},
312 | "output_type": "execute_result"
313 | }
314 | ],
315 | "source": [
316 | "gene_list = ipf_control_genes + copd_control_genes + copd_ipf_genes + ipf_copd_genes\n",
317 | "\n",
318 | "reduced_gene_list = []\n",
319 | "for g in gene_list:\n",
320 | " if g in reduced_gene_list:\n",
321 | " next\n",
322 | " else:\n",
323 | " reduced_gene_list.append(g)\n",
324 | "reduced_gene_list"
325 | ]
326 | },
327 | {
328 | "cell_type": "markdown",
329 | "id": "c2abdd88-2f7b-4ac4-961a-ea417f85bb04",
330 | "metadata": {},
331 | "source": [
332 | "## Plot Results"
333 | ]
334 | },
335 | {
336 | "cell_type": "code",
337 | "execution_count": 18,
338 | "id": "f11581f2-2895-419e-85da-0f53c07756ec",
339 | "metadata": {},
340 | "outputs": [
341 | {
342 | "data": {
343 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAdsAAAHICAYAAAAGOEABAAAAP3RFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuMS5wb3N0MSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8kixA/AAAACXBIWXMAAA9hAAAPYQGoP6dpAACfUUlEQVR4nOzdd1hT1xsH8G8GG0LYAQRxIe69cG9xrzqqKO7VWu22tY46sEtrW6tWrQtx1rpxVREUt+JEUZEhyCZAGJn39wc/o5GZkJugvJ/nuc8jufecvBfDfXPPPYPDMAwDQgghhLCGa+wACCGEkPcdJVtCCCGEZZRsCSGEEJZRsiWEEEJYRsmWEEIIYRklW0IIIYRllGwJIYQQllGyJYQQQlhGyZYQQghhGSVbQgghhGWUbAkhhBCWUbIlhBBCWEbJlhBCCGEZJVtCCCGEZZRsCSGEEJZRsiWEEEJYRsmWEEIIYRklW0IIIYRllGwJIYQQllGyJYQQQlhGyZYQQghhGSVbQgghhGWUbAkhhBCWUbIlhBBCWEbJlhBCCGEZJVtCCCGEZZRsCSGEEJZRsiWEEEJYRsmWEEIIYRklW0IIIYRllGwJIYQQllGyJYQQQlhGyZYQQghhGSVbQgghhGWUbAkhhBCWUbIlhBBCWEbJlhBCCGEZJVtCCCGEZZRsCSGEEJZRsiWEEEJYRsmWEEIIYRklW0IIIYRllGwJIYQQllGyJYQQQlhGyZYQQghhGSVbQgghhGWUbAkhhBCWUbIlhBBCWEbJlhBCCGEZJVtCCCGEZZRsCSGEEJZRsiWEEEJYRsmWEEIIYRklW0IIIYRllGwJIYQQllGyJYQQQlhGyZYQQghhGSVbQgghhGWUbAkhhBCWUbIlhBBCWEbJlhBCCGEZJVtCCCGEZZRsCSGEEJZRsiWEEEJYRsmWEEIIYRklW0IIIYRllGwJIYQQllGyJYQQQlhGyZYQQghhGSVbQgghhGWUbAkhhBCWUbIlhBBCWMY3dgDvIpVKhaSkJNjY2IDD4Rg7HPIOYhgGubm5cHNzA5dL33kJed9RstVBUlISPDw8jB0GeQ8kJCSgRo0axg6DEMIySrY6sLGxAVB0oRQIBEaOhryLcnJy4OHhof4sEULeb5RsdfCq6VggEFCyJZVCjyEIqR7oYREhhBDCMkq2hBBCCMso2RJCCCEso2RLCCGEsIySLSGEEMIySraEEEIIyyjZEkIIISyjZEsIIYSwjJItIYQQwjJKtoQQQgjLKNkSQgghLKNkSwghhLCMki0hhBDCMlr1h1Qrcc+f486F0+DLCsFwubBwrYmu/QaAx+MZOzRCyHuMki2pFlQqFf7Z9AdqyzMxwMMZgAkAIDf3KQ79tAhNBo2Fd6PGxg2SEPLeomZkUi0c2fYX+tgq0NzDWeN1GwtzDG3gjucn9yH5ZZKRoiOEvO+Mlmy9vLxgaWkJa2truLq64pNPPoFCoQAA/Pnnn/Dx8YGFhQW8vLzw/fffQ6lUqsveu3cPPXv2hJ2dHezs7ODr64vr169j5cqVsLa2hrW1NczMzGBiYqL+eebMmQgNDQWHw8GCBQs0YjE3N0dsbKwhT79SFAqF+ndFypeVlQVb8QtYW5iXekzPOi64evKoAaMihFQnRm1GPn36NDp16oQnT56gS5cu8PHxgVgsxrp16xAcHAxfX188ePAA48aNQ1JSEjZs2AAAGDx4MObPn49Tp05BoVDg4sWLMDMzwzfffINvvvkGALBq1So8evQI27ZtU79faGgobGxssGHDBnz++edwcHAwxmlXSuTNm3jx9BE4HA5EXnXQqm07Y4dU5V0+dQy9a4nKPIbD4YCTkWigiAgh1U2VaEauV68eOnfujGvXrmHZsmX4888/0aVLF/D5fDRr1gxBQUHYtGkToqOjkZaWhtjYWEybNg18Ph/m5ubo1asXmjZtWqH3cnZ2xuDBg/HLL79UOD6pVIqcnByNzVheJsRiYO/uGNCrG1KTXhgtjrIoFAosW7rU2GG8JpeCyy3/o86TS8EwjAECIoRUN1Ui2T5+/Bjh4eFo1aoV5HI5BgwYoLG/efPm8PT0xPnz5+Ho6Ig6depg3LhxOHr0KDIyMrR+v4ULF2L9+vXIzMys0PGBgYGwtbVVbx4eHlq/p75Y2QrxKPopop/GwNzaxmhxlIXP5+Pb774zdhivcSvWgKPi8cHhcFgOhhBSHRk12fr5+UEoFMLPzw8BAQEQCARwdHQscRiGi4sL0tPTweFwcO7cOTg7O+Ojjz6Cs7MzBgwYgOTk5Aq/b7169TBw4MAK390uWLAA2dnZ6i0hIaHC76VvXbr3BGwdobASonuvPkaLozwVuZM0lMYdu+F2Qkq5x6mELgaIhhBSHRn1ihgSEgKxWIyYmBgEBgbCyckJ6enpGp2hXklJSYGjoyMAwNPTExs2bEBcXByioqKQkpKCefPmafXeCxcuxJ9//lmhu1szMzMIBAKNzZh8GjREQxqmUmGeXl54qrSASqUq9ZjbL9Lh07GHAaMihFQnVef2A0CHDh1gYmKC48ePa7weGRmJuLg4dOvWrVgZb29vBAQE4P79+1q9V/369dG/f3+sXr26MiGTd8TgaXOx53EacvILNF5nGAYRz5Mh92mH+vQFhhDCkio1qYVQKMQ333yD2bNnQygUwtfXFw8fPsT48eMxefJk1K9fH1lZWfjtt98wceJE1KxZE0lJSdizZw/atm2r9fstXLgQvr6+NIymGrC0tMSHXyxC2OkQ5MY8BE9WCHC5UAic0HrkNLjXMN5zeELI+69KJVugKAEKhUJMnToV8fHxcHFxwaRJk7Bw4UIAgKmpKZ49e4YuXbogMzMTAoEAfn5++Pnnn7V+rwYNGqBfv37Ys2ePvk+DVEE8Hg/d/QYCGGjsUAgh1QyHobEOWsvJyYGtrS2ys7ON/vyWvJvoM0RI9VKlntkSQggh7yNKtoQQQgjLKNkSQgghLKNkSwghhLCMki0hhBDCsio39IcQtty5fgMR27ZD8ugxFPl54JqYwsTBHh69emDQlCkwNy99CT5CCKkMGvqjAxq28W65+t85hK39DfxrN+BUKCu2X84weOHlAZd+fTFp+fcwMTFhPSb6DBFSvdCdLdGrlJRk3AoLBVQK8C2s0Kl3P1hYWBgtnlPBu3H/uyVwzcgq9RgTDge14l5AvmEzfoyOxvxdO2FpaWnAKAkh7zu6s9WBMe9KGIZBamoqGIaBs7NzlVldJyb6Ee6eOQonWRbaeTqDw+FAKlfgQlw6CoSu6D12EmxsDLsk4KUTIbgyey5cxNkVLqNiGCT264Uvg4NKXH1KX+jOlpDqhZKtDoxxoVQqlQjZGwRF8nPUMC36L0uUccATeaHfqPHg843XSPHwzi2knPsXXWo6lrhfpVLhQHQq/GZ/BaFQaJCYGIbByq49UPPuA63LFjIMHH9ZhaFTp7AQWRFKtoRUL1XjtoiUSaVSIfjXVehmloNB3q5o4eWGFl5uGOjtiu7mEgSvCSxxWUJDKCwsRPSJfaU
mWqBobdsP6rvg9NZ1Bovr7MF/4aBDogUAcw4Hz44e03NEhJDqjPVku2nTJjRp0gRWVlbw9PTExIkTERsbCwA4cOAAmjdvDktLS7i5uWHevHnIz89Xlw0ICMDy5cuL1alQKDBy5Eh4eHiAw+Go63tbnz594OLiUmxVnwMHDqB9+/YwNzdHQECAvk6VNacP7sPQmgJYmJkW22duaoIRdexw6oBxFlMIPfYv+tZ2Kvc4DocDH1MpYp5EGyAq4MHBf2FdifLcq9dx98ZNvcVDCKneWE22y5cvx6JFi/DDDz8gIyMDUVFR6NixI86dO4fg4GBMmzYNS5cuhVgsRnh4OG7fvo0RI0agIi3bnTt3xr59+2BmZlbi/pcvX+L8+fOQyWQ4ffq0xj57e3t8/vnnmD17tl7Ok23SF09haV480b5ibmoCedIzA0b0muzFU5iaVKwJu5GrA+6Hn2U5oqImZPGtyErV4VQoQ+Rb6yoTQoiuWHvQJxaLsXLlSgQHB6N///7q16dPnw6VSgVPT08sXboUQ4YMAQDUqVMHe/fuRa1atXD27Fn07t279KD5fHzyySdlvv/u3bvRrl07NGvWDEFBQRox9OjRAwDw9OlTZGZmlnsuUqkUUqlU/XNOTk65ZfRFpVLBVJYHoOy7R0tFIaRSaalfPtjCkxUAqPgzR768kL1g/i8/Px+8vLxK16OQVL4OQggBWLyzvXz5MmQyGQYOLL52aHR0NBITE9WJ9hWRSIT27dvj3LlzlX7/oKAgjB49GmPGjMHhw4chkUh0riswMBC2trbqzcPDcAuNczgcMOCUe5ySYVjtPVsqTvmxvVWAlTDexOPxwGgdV3EcHnVpIIToB2tXk4yMDDg6OpbYSzY9PR1AUXJ9m4uLi3q/rqKionDnzh2MHDkSnTp1gr29PQ4ePKhzfQsWLEB2drZ6S0hIqFR82uBwOJBZ2JZ7nNTC1ig9khVmFX8yyjAMlOaVeZJaMebm5mAq2cOXYRjwbaiXMCFEP1hLtg4ODkhPTy/WOenVPgBITk4uti8lJQWOjqX3bK2InTt3okuXLnB1dQWHw8GoUaMQFBSkc31mZmYQCAQamyHZezdBsji31P1pOXmwrdvIgBG9JqjbGDn5FWsavhqfinZ+g1mOqIhDh7aVKp8ksEbXD8foKRpCSHXHWrLt0KEDTExMcLyETib169eHm5sbDh8+rPF6cnIyrly5gu7du+v8vgzDIDg4GNeuXYNIJIJIJMKWLVtw7tw5vHz5Uud6jalL3/64LrdBYmbxZ8XJ4lxcKjBHN79BRoisKLZT8eU/w5bJFXhp7gxnZxcDRAW08x+PzAp23CqJWZdO8KxVS48REUKqM9baHYVCIb799lvMnj0bZmZm6N69O5RKJfbsKRqismrVKsydOxc1a9aEn58fEhISMHnyZHTq1Emjc5RCoUBh4es7Jz6fDz6fD6lUqu61LJVKUVhYCHNzc4SHhyM1NRV3796FtfXrJsu+ffti9+7d+PTTT6FUKiGXy6FQKKBUKlFYWKiut6oaOmkGLl/4D7ce3IZJQdGMSHJzAZwatMCIHr2MFhePx0OXCTNxOGgDBtdzAqeEZ6X5UhkOvSjAmHkLDBZXm86dcbZNK9hHXNW6bDaPh6YfjGQhKkJIdcX6DFJ//fUXfv/9dzx79gwODg7o0aMHvv/+e9SsWRN79+7FypUrER0dDVtbW4waNQqBgYGwsrICUDTOdvv27Rr1TZkyBZs3b4aXlxfi4uI09jEMg+nTp0Mmk2Hbtm0a+zZs2IC//voLt27dwrZt2zBp0iSN/YsXL8aSJUsqdE40+09xYrEY4Yf3QZUciya2JrCxMENyTh6eK8xg5umN3sNGGXxqyeh79/Hvh/5wT0iscJlCANIpEzDnl5/ZCwz0GSKkuqHpGnVAF8rSKZVKPIp6CEm2GE6ubqhdu45R44mMiMDJj+ahRkxsucdKeFzIPhyNj9auYf2LAX2GCKleKNnqgC6U75a4p09xYs1aZJwLhUdSMnhvNXVnmvBR2KYVfEYMw+Apkw0SE32GCKleKNnqgC6U7yaJRILDf25AdlQUFHn54JmawMTBHi0/GInWnToZNBb6DBFSvVCy1QFdKEll0WeIkOqFpsghhBBCWEbJlhBCCGFZ1R1YSoieyWQyHN++HRl370Gelw+eiQlM7e3hO/5DeDcyzgxchJDqgZItee+lJCXh2K+/I/XsWdSIiYPNW72RT2wPwomO7dF41Ej0+uADI0VJCHmfUbIlesMwDK6GnkNmzCNAqQT4Jqjbvgu8GxrvrvH+jRs4PnsuvJ48K1pMvoQZrlwLCoGzoYi5cBEbb9zE9FWBJc6ERQghuqLeyDqgnqTFhR09iKzbl9DWioGzjYX69UfpOXjGE6J2z0Fo3LpyiwNo61lUFP750B81Yyu+SlM+GMinT8H0HwJZjIw+Q4RUN3Rn+465d/s2XkTfBwC41W2AZq1aGzki4NjWjWia8QgdRZbF9vk4CuADFW6d2oXrkly06dbTIDExDIP9n36OWlokWgCwBAfZW3fgv3Zt0HP4cJaiI4RUN9Qb+R2RmpKCPb+uhPmDC+jjwKCPAwPrxxex99eVSH6ZZLS4Lp06jkbpj+AuKJ5o39TSyRp5oYeQmBBvkLiunDsHu+u3dSprK1fgwb4Deo6IEFKdsZ5sN23ahCZNmsDKygqenp6YOHEiYmNjAQAHDhxA8+bNYWlpCTc3N8ybNw/5+fnqsgEBAVi+fHmxOmNiYtCmTRvY2dnB3t4eQ4cOLXH5vPr166Nly5bFXl+/fj1atmwJExOTCi8+YExKpRL/7dyADxrXQG2Rvfp1L2d7jGxcA2HBmyGXy40SW9qtS/C0LTvRvtLJ1QY3ThxiN6D/u7VrN4RKpc7llRcvI+bxYz1GRAipzlhNtsuXL8eiRYvwww8/ICMjA1FRUejYsSPOnTuH4OBgTJs2DUuXLi1aMSY8HLdv38aIESNQ3mNkJycn7Nu3D5mZmUhOToaPjw/mzp2rccy1a9eQlJSE+/fvIyoqSmOfq6srlixZghEjRuj9nNlw4dQJ+HmLSt3vV98VF04WXzeYbVF378KHI6nw8RwOB7ykp1AoFCxGVbTkovjS5UrV4ZaXj/CgYD1FRAip7lh7ZisWi7Fy5UoEBwejf//+6tenT58OlUoFT09PLF26FEOGDAEA1KlTB3v37kWtWrVw9uxZjTVt32ZjYwMbGxv1z1wuF8+ePdM4JigoCEOGDEFWVhZ27tyJlStXqvcNHToUAHDixIkKnYtUKoVUKlX/nJNT/mLp+lSQnAAbT+tS91uam0Ea/8KAERWJe3gHve1tyj/wDZ5cGV6+fAkPDw+WogIyMzNhlpVV6Xrk4srXQQghAIt3tpcvX4ZMJsPAgQOL7YuOjkZiYqI60b4iEonQvn17nDt3rkLvIRQKYWFhgZ9//hmffvqp+nWFQoG9e/di9OjRGDNmDIKDg8u9Wy5LYGAgbG1t1RubiaIkHKb85lCOSmWASDQxOjTTmvG4KCgoYCGa16RSKbjyyt89q2TGaZonhL
[remainder of base64-encoded PNG data omitted; the rendered output is the dot plot produced by the source cell below]",
344 | "text/plain": [
345 | ""
346 | ]
347 | },
348 | "metadata": {},
349 | "output_type": "display_data"
350 | }
351 | ],
352 | "source": [
353 | "fig, ax = plt.subplots(1,1, figsize=(5, 5))\n",
354 | "sc.pl.dotplot(kam20_norn_ad, groupby=\"Disease_Identity\", var_names=reduced_gene_list, show=True, swap_axes=True, ax=ax)"
355 | ]
356 | }
357 | ],
358 | "metadata": {
359 | "kernelspec": {
360 | "display_name": "Python 3 (ipykernel)",
361 | "language": "python",
362 | "name": "python3"
363 | },
364 | "language_info": {
365 | "codemirror_mode": {
366 | "name": "ipython",
367 | "version": 3
368 | },
369 | "file_extension": ".py",
370 | "mimetype": "text/x-python",
371 | "name": "python",
372 | "nbconvert_exporter": "python",
373 | "pygments_lexer": "ipython3",
374 | "version": "3.11.7"
375 | }
376 | },
377 | "nbformat": 4,
378 | "nbformat_minor": 5
379 | }
380 |
--------------------------------------------------------------------------------
/model.py:
--------------------------------------------------------------------------------
1 | """
2 | Model class
3 |
4 | """
5 |
6 | import warnings
7 | warnings.filterwarnings("ignore")
8 | import math
9 | from torch import nn, Tensor
10 | from torch.nn import TransformerEncoder, TransformerEncoderLayer
11 |
12 | import sys
13 | sys.path.append('../')
14 | from typing import Any
15 | import torch
16 |
17 |
18 | def full_block(in_features, out_features, p_drop=0.1):
19 | return nn.Sequential(
20 | nn.Linear(in_features, out_features, bias=True),
21 | nn.LayerNorm(out_features),
22 | nn.GELU(),
23 | nn.Dropout(p=p_drop),
24 | )
25 |
26 |
27 | class PositionalEncoding(nn.Module):
28 |
29 | def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = 1536):
30 | super().__init__()
31 | self.dropout = nn.Dropout(p=dropout)
32 |
33 | position = torch.arange(max_len).unsqueeze(1)
34 |         div_term = torch.exp(
35 |             torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
36 | pe = torch.zeros(max_len, 1, d_model)
37 | pe[:, 0, 0::2] = torch.sin(position * div_term)
38 | pe[:, 0, 1::2] = torch.cos(position * div_term)
39 | self.register_buffer('pe', pe)
40 |
41 | def forward(self, x: Tensor) -> Tensor:
42 | """
43 | Args:
44 | x: Tensor, shape [seq_len, batch_size, embedding_dim]
45 | """
46 | x = x + self.pe[:x.size(0)]
47 | return self.dropout(x)
48 |
49 |
50 | class TransformerModel(nn.Module):
51 |
52 | def __init__(self, token_dim: int, d_model: int, nhead: int, d_hid: int,
53 | nlayers: int, output_dim:int, dropout: float = 0.05):
54 | super().__init__()
55 | self.model_type = 'Transformer'
56 | self.pos_encoder = PositionalEncoding(d_model, dropout)
57 | self.d_model = d_model
58 |
59 | self.encoder = nn.Sequential(nn.Linear(token_dim, d_model),
60 | nn.GELU(),
61 | nn.LayerNorm(d_model))
62 |
63 |
64 |
65 | encoder_layers = TransformerEncoderLayer(d_model, nhead, d_hid, dropout)
66 | self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
67 |
68 |
69 | self.d_model = d_model
70 | self.dropout = dropout
71 |
72 |
73 | self.decoder = nn.Sequential(full_block(d_model, 1024, self.dropout),
74 | full_block(1024, output_dim, self.dropout),
75 | full_block(output_dim, output_dim, self.dropout),
76 | nn.Linear(output_dim, output_dim)
77 | )
78 |
79 | self.binary_decoder = nn.Sequential(
80 | full_block(output_dim + 1280, 2048, self.dropout),
81 | full_block(2048, 512, self.dropout),
82 | full_block(512, 128, self.dropout),
83 | nn.Linear(128, 1)
84 | )
85 |
86 | self.gene_embedding_layer = nn.Sequential(nn.Linear(token_dim, d_model),
87 | nn.GELU(),
88 | nn.LayerNorm(d_model))
89 |
90 | self.pe_embedding = None
91 |
92 | def forward(self, src: Tensor, mask: Tensor):
93 |         """
94 |         Args:
95 |             src: Tensor [seq_len, batch_size, token_dim]; mask: Tensor [batch_size, seq_len] (1 = real token, 0 = padding)
96 |         Returns:
97 |             gene_output [seq_len, batch_size, output_dim] and the L2-normalized CLS cell embedding [batch_size, output_dim]
98 |         """
99 | src = self.encoder(src) * math.sqrt(self.d_model)
100 | src = self.pos_encoder(src)
101 |         output = self.transformer_encoder(src, src_key_padding_mask=(1 - mask))
102 |         gene_output = self.decoder(output)  # seq_len x batch_size x output_dim
103 | # embedding = torch.mul(gene_output, mask.t().unsqueeze(2)).sum(0) # average over non zero genes
104 |         # The CLS token, prepended at index 0, serves as the cell representation.
105 |         embedding = gene_output[0, :, :]  # select only the CLS token
106 |         embedding = nn.functional.normalize(embedding, dim=1)  # L2-normalize the cell embedding
107 | return gene_output, embedding
108 |
109 |
110 | def predict(self, cell_embedding, gene_embeddings):
111 | gene_embeddings = self.gene_embedding_layer(gene_embeddings)
112 |         dec = self.binary_decoder(
113 |             torch.hstack((cell_embedding, gene_embeddings)))
114 | return dec
115 |
116 |
--------------------------------------------------------------------------------
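A quick way to see how the pieces of `TransformerModel` fit together is to run it on random tensors. The sizes in the sketch below are illustrative assumptions, not values defined in `model.py` itself; the one hard constraint visible in the code is that `predict` only works when `d_model == 1280`, because `binary_decoder` expects `output_dim + 1280` input features. `forward` takes a `[seq_len, batch_size, token_dim]` token sequence (CLS token at position 0) plus a `[batch_size, seq_len]` mask and returns per-token outputs together with the L2-normalized CLS cell embedding; `predict` then scores (cell embedding, gene embedding) pairs.

```
# Minimal smoke test of TransformerModel. All sizes are hypothetical
# placeholders; the real values are set by the evaluation scripts.
import torch
from model import TransformerModel

token_dim, d_model, nhead, d_hid, nlayers, output_dim = 5120, 1280, 20, 5120, 4, 1280
model = TransformerModel(token_dim, d_model, nhead, d_hid, nlayers, output_dim).eval()

seq_len, batch_size = 16, 2
src = torch.randn(seq_len, batch_size, token_dim)  # token embeddings, CLS token at index 0
mask = torch.ones(batch_size, seq_len)             # 1 = real token, 0 = padding

with torch.no_grad():
    gene_output, cell_embedding = model(src, mask)
    # gene_output: [16, 2, 1280], cell_embedding: [2, 1280]

    # Score (cell, gene) pairs with the binary decoder head.
    gene_embeds = torch.randn(batch_size, token_dim)
    logits = model.predict(cell_embedding, gene_embeds)  # [2, 1]
```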
/model_files/new_species_protein_embeddings.csv:
--------------------------------------------------------------------------------
1 | species,path
2 |
--------------------------------------------------------------------------------
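The registry above ships with only a header row. Going by the column names, each entry is meant to map a species label to the on-disk path of that species' protein-embedding file. The row below is a purely hypothetical illustration of the expected shape; both the species name and the path are made up.

```
species,path
chicken,/path/to/chicken_protein_embeddings.pt
```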
/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy==1.26.4
2 | scipy==1.14.1
3 | pandas==2.2.2
4 | tqdm==4.66.5
5 | torch==2.1.1
6 | scanpy==1.10.2
7 | accelerate==0.24.0
8 | requests==2.25.1
9 | urllib3==1.26.6
10 |
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
1 | """
2 | Utils
3 |
4 | """
5 |
6 | import warnings
7 | warnings.filterwarnings("ignore")
8 | import pandas as pd
9 | import numpy as np
10 | import os
11 | import requests
12 | from tqdm import tqdm
13 | import tarfile
14 |
15 |
16 | def get_shapes_dict(dataset_path):
17 |     shapes_dict = {}  # dataset name -> (num_cells, num_genes)
18 | datasets_df = pd.read_csv(dataset_path)
19 | sorted_dataset_names = sorted(datasets_df["names"])
20 |
21 | for name in sorted_dataset_names:
22 | shapes_dict[name] = (int(datasets_df.set_index("names").loc[name]["num_cells"]), 8000)
23 |
24 | shapes_dict["dev_immune_mouse"] = (443697, 4786)
25 | shapes_dict["dev_immune_human"] = (34009, 5566)
26 | shapes_dict["intestinal_tract_human"] = (69668, 5192)
27 | shapes_dict["gtex_human"] = (18511, 7109)
28 | shapes_dict["gut_endoderm_mouse"] = (113043, 6806)
29 | shapes_dict["luca"] = (249591, 7196)
30 | shapes_dict.update({
31 | "madissoon_novel_lung":(190728, 8000),
32 | 'flores_cerebellum_human': (20232, 8000),
33 | 'osuch_gut_human': (272310, 8000),
34 | 'msk_ovarian_human': (929690, 8000),
35 | 'htan_vmuc_dis_epi_human': (65084, 8000),
36 | 'htan_vmuc_val_epi_human': (57564, 8000),
37 | 'htan_vmuc_non_epi_human': (9099, 8000),
38 | 'hao_pbmc_3p_human': (161764, 8000),
39 | 'hao_pbmc_5p_human': (49147, 8000),
40 | 'gao_tumors_human': (36111, 8000),
41 | 'swabrick_breast_human': (92427, 8000),
42 | 'wu_cryo_tumors_human': (105662, 8000),
43 | 'cell_line_het_human': (53513, 8000),
44 | 'bi_allen_metastasis_human': (27787, 8000),
45 | 'zheng68k_human': (68579, 8000),
46 | 'zheng68k_12k_human': (68579, 12000),
47 | 'mouse_embryo_ct': (153597, 12000),
48 | "regev_gtex_heart": (36574, 8000),
49 | "tabula_sapiens_heart": (11505, 8000),
50 | "10k_pbmcs":(11990, 12000),
51 | "epo_ido":(35834,12000),
52 | 'tabula_sapiens_kidney': (9641, 8000),
53 | 'tabula_microcebus_kidney': (14592, 8000),
54 | 'tabula_muris_kidney': (2781, 8000),
55 | 'tabula_muris_senis_kidney': (19610, 8000),
56 | 'immune_human': (33506, 8000)
57 | })
58 |
59 | shapes_dict["zyl_sanes_glaucoma_pig"] = (5901, 6819)
60 | shapes_dict["parkinsons_macaF"] = (1062, 5103)
61 |
62 |     for _, row in datasets_df.iterrows():
63 |         ngenes = row.num_genes
64 |         ncells = row.num_cells
65 |         name = row.names
66 |         if not np.isnan(ngenes):
67 |             shapes_dict[name] = (int(ncells), int(ngenes))
68 |
69 | return shapes_dict
70 |
71 |
72 | def figshare_download(url, save_path):
73 |     """
74 |     Figshare download helper with a progress bar.
75 |
76 |     Args:
77 |         url (str): the URL of the file to download
78 |         save_path (str): local path to save the file; a downloaded .tar.gz is extracted alongside it
79 |     """
80 |
81 | if os.path.exists(save_path):
82 | return
83 | else:
84 | # Check if directory exists
85 | if not os.path.exists(os.path.dirname(save_path)):
86 | os.makedirs(os.path.dirname(save_path))
87 | print("Downloading " + save_path + " from " + url + " ..." + "\n")
88 | response = requests.get(url, stream=True)
89 | total_size_in_bytes = int(response.headers.get('content-length', 0))
90 | block_size = 1024
91 | progress_bar = tqdm(total=total_size_in_bytes, unit='iB',
92 | unit_scale=True)
93 | with open(save_path, 'wb') as file:
94 | for data in response.iter_content(block_size):
95 | progress_bar.update(len(data))
96 | file.write(data)
97 | progress_bar.close()
98 |
99 |     # If the downloaded file ends in .tar.gz, extract it
100 | if save_path.endswith(".tar.gz"):
101 | with tarfile.open(save_path) as tar:
102 | tar.extractall(path=os.path.dirname(save_path))
103 | print("Done!")
104 |
--------------------------------------------------------------------------------
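For completeness, a hedged sketch of how the two helpers in utils.py are typically called. The CSV path and figshare URL below are placeholders invented for the example; the only real requirements visible in the code are that the CSV provides "names", "num_cells" and "num_genes" columns, and that figshare_download is given a writable save path (it returns early when the file already exists and auto-extracts .tar.gz archives).

```
# Hypothetical usage of the utils.py helpers; paths and URL are placeholders.
from utils import get_shapes_dict, figshare_download

# The CSV must contain "names", "num_cells" and "num_genes" columns.
shapes = get_shapes_dict("datasets.csv")   # placeholder path
print(shapes["10k_pbmcs"])                 # (11990, 12000) -- hard-coded inside the helper

# Skips the download when save_path already exists; extracts *.tar.gz next to it.
figshare_download(
    "https://figshare.com/ndownloader/files/<file_id>",  # placeholder URL
    "model_files/example.tar.gz",
)
```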