├── LICENSE
├── README.md
├── count_particles.awk
├── cryolo_config.json
├── dm4_to_mrc.sh
├── exclude_low_res_micrographs.awk
├── imgstats.sh
├── monitor_relion_classification.R
├── multi_count_particles.sh
├── randomly_pick_micrographs.py
├── run_cryolo_evaluation.sh
├── run_cryolo_picking.sh
├── run_cryolo_training.sh
├── run_janni_denoise.sh
├── setup_cryolo_picking.sh
├── setup_cryolo_training.sh
└── star2box.awk

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2018 Guillaume Gaullier

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# CryoEM scripts

**The scripts in this repository are currently not maintained and may be
outdated. Some of the functionality I use regularly can be found in the
following tools, which I still maintain:**

- [topaztrainmetrics](https://github.com/Guillawme/topaztrainmetrics)
- [countparticles](https://github.com/Guillawme/countparticles)
- [classconvergence](https://github.com/Guillawme/classconvergence)
- [angdist](https://github.com/Guillawme/angdist)
- [localres](https://github.com/Guillawme/localres)

Various utilities to help with cryoEM data analysis.

Available utilities so far:

- `count_particles.awk`: count particles contributing to each class of a class2D
  or class3D job from [RELION][relion], by reading a `run_it*_data.star` file.
  Dependencies: awk.
- `multi_count_particles.sh`: loop wrapper for `count_particles.awk`, to
  automatically apply it to all `*_data.star` files in the current directory and
  save results in tsv files. Dependencies: bash, `count_particles.awk`.
- `monitor_relion_classification.R`: plot the number of particles as a function of
  iteration number for each class of a RELION class2D or class3D job, reading
  the summary files from `multi_count_particles.sh`. Dependencies: [R][r],
  [Tidyverse][tidyverse], `multi_count_particles.sh`.
- `dm4_to_mrc.sh`: convert a dm4 file to mrc format (useful to convert gain
  reference images to a format [MotionCor2][motioncor2] can read). Dependencies:
  bash, [EMAN2][EMAN2].
- `star2box.awk`: convert particle coordinates from RELION `*pick.star` format
  to EMAN2 `.box` format. Dependencies: awk.
- `setup_cryolo_picking.sh`, `setup_cryolo_training.sh`, `run_cryolo_training.sh`,
  `run_cryolo_picking.sh`, `run_cryolo_evaluation.sh`, `cryolo_config.json`,
  `run_janni_denoise.sh`: set up a directory structure to train and/or perform
  particle picking with crYOLO. Dependencies: bash, [crYOLO][cryolo].
- `randomly_pick_micrographs.py`: select a random subset of micrographs to
  constitute a training set for crYOLO. Dependencies: Python 3, `numpy`.
- `imgstats.sh`: gather pixel intensity statistics from motion-corrected
  micrographs. Dependencies: bash, [IMOD][imod].
- `exclude_low_res_micrographs.awk`: filter a `micrographs_ctf.star` file from
  RELION to keep only micrographs with a rlnCtfMaxResolution equal to or better
  than a user-provided threshold. Dependencies: awk.


[relion]: https://github.com/3dem/relion
[motioncor2]: http://msg.ucsf.edu/em/software/motioncor2.html
[EMAN2]: http://blake.bcm.edu/emanwiki/EMAN2
[tidyverse]: https://www.tidyverse.org/packages
[r]: https://www.r-project.org
[cryolo]: http://sphire.mpg.de/wiki/doku.php?id=downloads:cryolo_1
[imod]: https://bio3d.colorado.edu/imod/
--------------------------------------------------------------------------------
/count_particles.awk:
--------------------------------------------------------------------------------
#! /usr/bin/awk -f

# Count particles in each class from a class2D or class3D run_it***_data.star
# file from RELION.

# Usage:
# count_particles.awk run_it025_data.star

# Report which file is being analyzed.
$1 ~ /data_particles/ {
    print "Analyzing star file:", FILENAME;
}

# Go through the file line by line and count how many times each class appears
# (each line is a particle). Only analyze lines with more than 10 fields, which
# skips metadata and column label lines.
NF > 10 {
    # rlnClassNumber is the 3rd field in run_it***_data.star files from
    # RELION-3.1.0. Change this if your version of RELION orders columns of star
    # files differently.
    rlnClassNumber = $3;

    # Count particles in each class.
    particles[rlnClassNumber]++;

    # Count all particles.
    total++;
}

# Report results.
END {
    print "Total number of particles:", total;
    printf("%5s\t%9s\t%10s\n", "Class", "Particles", "% of total");
    for (i in particles) {
        printf("%5d\t%9d\t%10.2f\n", i, particles[i], 100*particles[i]/total) | "sort -k 2 -n -r";
    }
}
--------------------------------------------------------------------------------
/cryolo_config.json:
--------------------------------------------------------------------------------
// See documentation at
// Not all JSON parsers support comments, so delete the comment lines before use.
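// A few notes on the values below (check the crYOLO documentation for details):
// - "anchors" should roughly match your particle box size in pixels; [200,200] is only
//   an example and must be adjusted to your own data.
// - "filter" low-pass filters the micrographs before training/picking and caches the
//   filtered images in the "filtered" directory.
// - "pretrained_weights" and "saved_weights_name" are placeholder file names: point them
//   to the general model you downloaded and to the file your trained model should be
//   saved to, respectively.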
{
    "model": {
        "architecture": "PhosaurusNet",
        "input_size": 768,
        "anchors": [200,200],
        "max_box_per_image": 600,
        "filter": [0.1,"filtered"]
    },

    "train": {
        "train_image_folder": "training_micrographs",
        "train_annot_folder": "training_coordinates",
        "train_times": 10,
        "pretrained_weights": "gmodel_phosnet_20190314.h5",
        "batch_size": 5,
        "learning_rate": 1e-4,
        "nb_epoch": 50,
        "warmup_epochs": 0,
        "object_scale": 5.0,
        "no_object_scale": 1.0,
        "coord_scale": 1.0,
        "class_scale": 1.0,
        "log_path": "logs",
        "saved_weights_name": "your-model.h5",
        "debug": true
    },

    "valid": {
        "valid_image_folder": "validation_micrographs",
        "valid_annot_folder": "validation_coordinates",
        "valid_times": 1
    }
}
--------------------------------------------------------------------------------
/dm4_to_mrc.sh:
--------------------------------------------------------------------------------
#! /usr/bin/env bash

# Convert a DM4 file from Gatan Digital Micrograph to MRC format.
# EMAN2 needs to be installed and accessible in PATH.

# This line may vary according to your local installation
module load eman2/2.22

DM4=$1
MRC=$2

# If no DM4 or MRC files are provided as command-line arguments, display a short
# usage guide.
if [ $# != 2 ]; then
    echo "Usage: dm4_to_mrc.sh DM4 MRC"
    echo "DM4 is the DM4 file to convert to MRC."
    echo "MRC is the MRC file to save as output (provide a new file name; any existing file would be overwritten!)."
    exit 1
fi

e2proc2d.py --outtype mrc "$DM4" "$MRC"
--------------------------------------------------------------------------------
/exclude_low_res_micrographs.awk:
--------------------------------------------------------------------------------
#! /usr/bin/awk -f

# Takes a micrographs_ctf.star file from RELION and excludes all rows that have
# a rlnCtfMaxResolution worse than the value of the resolution= parameter
# provided on the command line.

# Usage:
# exclude_low_res_micrographs.awk resolution=NUMBER micrographs_ctf.star > new_file.star
# in which NUMBER is the resolution cut-off (only micrographs with better or equal
# resolution will be kept).

# Keep all metadata lines.
NF <= 4 {
    print;
}

# Keep rows for which rlnCtfMaxResolution is better than or equal to the
# resolution= parameter passed on the command line.
NF > 4 {
    # rlnCtfMaxResolution is the 13th field in micrographs_ctf.star files from
    # RELION-3.0. Change this if your version of RELION orders columns of star
    # files differently.
    rlnCtfMaxResolution = $13;

    if (rlnCtfMaxResolution <= resolution) {
        print;
    }
}
--------------------------------------------------------------------------------
/imgstats.sh:
--------------------------------------------------------------------------------
#! /usr/bin/env bash

# Run this script in a directory containing MRC files (e.g. a MotionCorr job
# directory in a RELION project) to get a table of minimum, maximum and mean
# pixel values for each micrograph.
# This depends on IMOD.
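#
# Usage (two positional arguments):
#   imgstats.sh DATASET "*.mrc"
# where DATASET is a label used to name the output table (DATASET_imgstats.txt)
# and the second argument is a quoted glob matching the micrographs to analyze.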

# Adapt this line to your environment
module load imod/4.9.10

DATASET=$1
MICS=$2

for micrograph in $MICS; do
    header -minimum $micrograph
done > minimum.txt

for micrograph in $MICS; do
    header -maximum $micrograph
done > maximum.txt

for micrograph in $MICS; do
    header -mean $micrograph
done > mean.txt

nl -v 0 minimum.txt > minimum_num.txt

echo -e "micrograph\tminimum\tmaximum\tmean" > header.txt

paste minimum_num.txt maximum.txt mean.txt > data.txt

cat header.txt data.txt > "$DATASET"_imgstats.txt

rm minimum.txt minimum_num.txt maximum.txt mean.txt header.txt data.txt
--------------------------------------------------------------------------------
/monitor_relion_classification.R:
--------------------------------------------------------------------------------
#! /usr/bin/env Rscript --vanilla

# For each class of a RELION Class2D or Class3D job, plot the number of particles
# as a function of the iteration number. This is useful to monitor convergence:
# do classes still significantly vary in size after a given number of iterations?
# Uses the summary tsv files generated by multi_count_particles.sh.

# Load helpful packages
library(tidyverse)

# List summary files and load them in dataframes
my_files <- list.files(path = ".",
                       pattern = "_data.star.tsv")
my_datasets <- map(.x = my_files,
                   .f = read_tsv,
                   skip = 2)
names(my_datasets) <- my_files

# For each dataframe, add a column containing the iteration number
iteration_numbers <- seq_along(my_datasets) - 1
my_datasets <- map2(.x = my_datasets,
                    .y = iteration_numbers,
                    .f = ~ mutate(.data = .x, iteration = as.integer(.y)))

# Plot results based on particle number
my_datasets %>%
    bind_rows() %>%
    mutate(Class = as.factor(Class)) %>%
    ggplot(aes(x = iteration, y = Particles, color = Class)) +
    geom_line() +
    theme_bw() +
    xlab("Iteration") +
    ylab("Particles")

# Save plot
ggsave(filename = "relion_classification_progress_particles.png",
       device = "png",
       units = "cm",
       width = 24.27,
       height = 15,
       dpi = 300)

# Plot results based on class distribution
my_datasets %>%
    bind_rows() %>%
    mutate(Class = as.factor(Class)) %>%
    ggplot(aes(x = iteration, y = `% of total`, color = Class)) +
    geom_line() +
    theme_bw() +
    xlab("Iteration") +
    ylab("% of total")

# Save plot
ggsave(filename = "relion_classification_progress_distribution.png",
       device = "png",
       units = "cm",
       width = 24.27,
       height = 15,
       dpi = 300)

file.remove("Rplots.pdf")

# Clear environment
remove(list = ls())
--------------------------------------------------------------------------------
/multi_count_particles.sh:
--------------------------------------------------------------------------------
#! /usr/bin/env bash

# Wrapper for count_particles.awk, to apply it to all *_data.star files in a
# directory. count_particles.awk needs to be accessible in PATH.

for iter in *_data.star; do
    count_particles.awk "$iter" > "$iter".tsv
done
--------------------------------------------------------------------------------
/randomly_pick_micrographs.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python

# Initially written by Dr. Sam Bowerman
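
# Example invocation (paths and values here are only illustrative):
#   randomly_pick_micrographs.py -bf /path/to/MotionCorr/job002/Micrographs \
#       -ext "*.mrc" -of training_micrographs -n 20
# Add --copy to copy the selected micrographs instead of symlinking them.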

import glob, os, argparse, shutil
import numpy as np

parser = argparse.ArgumentParser()

parser.add_argument("-bf",
                    dest = 'base_folder',
                    help = "Absolute path to base folder containing full collection of micrographs",
                    type = str,
                    required = True)

parser.add_argument("-ext",
                    dest = 'file_ext',
                    help = 'Extension for micrograph files (e.g., "*.mrc" or "*.tif"). Default = "*.mrc"',
                    type = str,
                    default = "*.mrc")

parser.add_argument("-of",
                    dest = 'out_folder',
                    help = "Output folder where subset should be stored",
                    type = str,
                    required = True)

parser.add_argument("-n",
                    dest = 'num',
                    help = 'Number of micrographs to include in the trial set. Overrides the "-frac" option.',
                    type = int,
                    default = 0)

parser.add_argument("-frac",
                    dest = 'frac',
                    help = 'Fraction of micrographs (0.0 - 1.0) to use in trial set. Overridden by the "-n" flag.',
                    type = float,
                    default = 0.0)

parser.add_argument("--copy",
                    dest = 'copy',
                    action = 'store_true',
                    help = 'Copy files instead of creating symlinks')

args = parser.parse_args()

# Gather up a list of all the possible micrographs
search_pattern = os.path.join(args.base_folder, args.file_ext)
FILELIST = glob.glob(search_pattern)

# Check to see if output directory exists
if os.path.isdir(args.out_folder):
    # Warn user that old training images will be deleted
    print("WARNING: output directory already exists!")
    print("         Continuing will remove old files and create new ones!")
    overwrite = ""
    # Continue asking the user until they say "Y" or "N"
    while not (overwrite.lower() in ['y', 'n']):
        overwrite = input("         Would you like to continue (Y/N)? > ")
    if overwrite.lower() == 'y':
        # If authorized, remove old symlinks (os.unlink() also removes regular
        # files left behind by a previous run with --copy)
        print("\nRemoving old symlinks.")
        old_link_path = os.path.join(args.out_folder, args.file_ext)
        old_links = glob.glob(old_link_path)
        for symlink in old_links:
            os.unlink(symlink)
    else:
        # Otherwise, quit
        print("\nDirectory over-write aborted. No new symlinks created.")
        quit()
# If path doesn't exist, create the directory
else:
    print("Creating output folder \"" + args.out_folder + "\"")
    os.mkdir(args.out_folder)

if (args.num == 0) and (args.frac == 0.0):
    print("\n\n\tERROR: You must provide either the total number of trial images (-n) or the fraction of micrographs to include in the trial (-frac)!")
    print("\n\n")
    quit()
elif (args.num == 0) and (args.frac != 0.0):
    num_frames = int(args.frac * len(FILELIST))
else:
    num_frames = args.num

# Randomly select the subset of micrographs
image_subset = np.random.choice(FILELIST, size=num_frames, replace=False)

# Create symbolic links (or copies) for each of the subset micrographs
for MICROGRAPH in image_subset:
    micrograph_basename = os.path.basename(MICROGRAPH)
    symlink_name = os.path.join(args.out_folder, micrograph_basename)
    if not args.copy:
        os.symlink(MICROGRAPH, symlink_name)

        # Inform user that symlink is complete
        print("Symbolic link created for " + MICROGRAPH + " at location " + symlink_name)
    else:
        shutil.copy2(MICROGRAPH, symlink_name)
        print("File copied from " + MICROGRAPH + " to " + symlink_name)

# Inform user that program is complete
print("Done: " + str(num_frames) + " micrographs selected.")
--------------------------------------------------------------------------------
/run_cryolo_evaluation.sh:
--------------------------------------------------------------------------------
#! /usr/bin/env bash

# See documentation at

# Provide either a runfile or a training set (images and box files), not both at
# the same time.

# This line depends on your environment setup.
# Comment it out or edit it as needed.
module purge
module load cuda/9.0 cryolo/v1.5.1

RUN="001"

cryolo_evaluation.py \
    --config config.json \
    --weights your-model.h5 \
    --runfile runfiles/your-latest-runfile.json \
    --gpu 0 \
    2>cryolo_evaluation_"$RUN".err | tee cryolo_evaluation_"$RUN".log
--------------------------------------------------------------------------------
/run_cryolo_picking.sh:
--------------------------------------------------------------------------------
#! /usr/bin/env bash

# See documentation at

# Pick all micrographs with crYOLO, using a generic model

# --weights indicates which model to use: optionally provide a model trained
# specifically on your own data here

# --threshold is a picking threshold: default 0.3, increase to make it more
# picky, decrease to make it more greedy

# --distance will consider as duplicates two particles that are closer to each
# other than the set distance, and will only keep one of them

# --otf means micrographs will be low-pass filtered on the fly and not saved.
# This saves disk space at the expense of slightly longer picking time. If you
# want to run picking several times with different options, removing this option
# might speed up subsequent runs.

# --gpu designates which GPU to use

# This line depends on your environment setup.
# Comment it out or edit it as needed.
module purge
module load cuda/9.0 cryolo/v1.5.1

RUN="001"
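
# Increment RUN for each new picking run, so that coordinates and log files from
# previous runs are not overwritten.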

if [ ! -d coordinates_"$RUN" ]; then
    mkdir coordinates_"$RUN"
fi

cryolo_predict.py \
    --conf config.json \
    --weights gmodel_phosnet_20190314.h5 \
    --input micrographs/ \
    --output coordinates_"$RUN"/ \
    --threshold 0.3 \
    --distance 0 \
    --otf \
    --gpu 0 \
    2>cryolo_picking_"$RUN".err | tee cryolo_picking_"$RUN".log
--------------------------------------------------------------------------------
/run_cryolo_training.sh:
--------------------------------------------------------------------------------
#! /usr/bin/env bash

# See documentation at

# Train a crYOLO model on your own data
# Indicate a filename in which to save the model in config.json

# --early indicates how many epochs without improvement the training procedure
# will wait before early training termination

# --gpu designates which GPU to use

# --fine_tune is only useful when training from a pre-existing model, to speed
# up training by only optimizing the weights of two layers (instead of all of
# them); add this option as necessary.

# This line depends on your environment setup.
# Comment it out or edit it as needed.
module purge
module load cuda/9.0 cryolo/v1.5.1

RUN="001"

# Warm-up
# This is not strictly required if starting from pre-trained weights
#cryolo_train.py \
#    --conf config.json \
#    --warmup 3 \
#    --gpu 0 \
#    2>cryolo_warmup_"$RUN".err | tee cryolo_warmup_"$RUN".log

# Training
cryolo_train.py \
    --conf config.json \
    --warmup 0 \
    --gpu 0 \
    --early 10 \
    2>cryolo_training_"$RUN".err | tee cryolo_training_"$RUN".log
--------------------------------------------------------------------------------
/run_janni_denoise.sh:
--------------------------------------------------------------------------------
#!/usr/bin/env bash

# Run janni_denoise on all micrographs.
# This assumes motion-corrected micrographs are in a Micrographs/ directory
# under the working directory. Denoised micrographs will be written to
# denoised/Micrographs/
# See

module purge
module load cuda/9.0 cryolo/v1.5.1

janni_denoise.py \
    denoise \
    --gpu 0 \
    Micrographs/ \
    denoised/ \
    gmodel_janni_20190703.h5 \
    2>janni_denoise.err | tee janni_denoise.log
--------------------------------------------------------------------------------
/setup_cryolo_picking.sh:
--------------------------------------------------------------------------------
#! /usr/bin/env bash

# Set up a working directory for particle picking with crYOLO.
# Run this at the root of a RELION project directory.

# Update this URL with the latest model before running this script.
# This can also be an absolute path if the file is already present on the local
# machine or if you are using a model you trained yourself.
LATEST_CRYOLO_MODEL=ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO-GENERAL-MODELS/gmodel_phosnet_20190314.h5

# Don't change anything after this line
CWD=$(pwd)
CRYOLO_CONFIG=https://raw.githubusercontent.com/Guillawme/cryoEM-scripts/master/cryolo_config.json
CRYOLO_PICKING=https://raw.githubusercontent.com/Guillawme/cryoEM-scripts/master/run_cryolo_picking.sh

## Stop if there is already a crYOLO directory here
if [ -d "crYOLO" ]; then
    echo "There is already a crYOLO directory here."
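    # Remove or rename it if you want to set up a new picking directory from scratch.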
    exit 0
fi

## Create directory structure under a new directory
mkdir -p \
    $CWD/crYOLO \
    $CWD/crYOLO/micrographs

## Keep all files under crYOLO/
cd $CWD/crYOLO

## Add some instructions
cat >README.txt <<EOF
Working directory for particle picking with crYOLO, set up by setup_cryolo_picking.sh.
EOF
--------------------------------------------------------------------------------
/star2box.awk:
--------------------------------------------------------------------------------
#! /usr/bin/awk -f

# Convert particle coordinates from RELION *pick.star format to EMAN2 .box format.

# Usage:
# star2box.awk relion_pick.star box_size > coordinates.box
# The box_size argument must be an integer number and cannot be omitted.

# Check that we received box_size from command line argument
BEGIN {
    if (length(ARGV) != 3) {
        print "Wrong number of arguments. Usage:" \
            > "/dev/stderr";
        print "star2box.awk relion_pick.star box_size > coordinates.box" \
            > "/dev/stderr";
        print "box_size must be an integer number and cannot be omitted." \
            > "/dev/stderr";
        exit 1;
    }
    box_size = ARGV[2];
    delete ARGV[2];
}

# To convert RELION coordinates to EMAN coordinates:
# 1. extract X and Y coordinates (columns 1 and 2, respectively),
# 2. subtract half the box size (EMAN uses the top left corner of the box as
#    the coordinate, not the center like RELION),
# 3. append the box size twice (this is for display of the boxes by e2boxer
#    and similar graphical programs).
NF == 5 {
    printf("%f\t%f\t%d\t%d\n",
           $1 - box_size / 2,
           $2 - box_size / 2,
           box_size,
           box_size);
}
--------------------------------------------------------------------------------