├── .gitignore
├── README.md
├── cremi
│   ├── Annotations.py
│   ├── Volume.py
│   ├── __init__.py
│   ├── evaluation
│   │   ├── Clefts.py
│   │   ├── NeuronIds.py
│   │   ├── SynapticPartners.py
│   │   ├── __init__.py
│   │   ├── border_mask.py
│   │   ├── rand.py
│   │   ├── synaptic_partners.py
│   │   └── voi.py
│   └── io
│       ├── CremiFile.py
│       └── __init__.py
├── example_evaluation.py
├── example_read.py
├── example_write.py
├── requirements.txt
├── setup.py
└── tests
    ├── example.h5
    ├── example.png
    └── test_border.py

/.gitignore:
--------------------------------------------------------------------------------
1 | *.sw[pmno]
2 | *.hdf
3 | *.h5
4 | *.pyc
5 | /build
6 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | CREMI Python Scripts
2 | ====================
3 |
4 | Python scripts associated with the [CREMI Challenge](http://cremi.org).
5 |
6 | Installation
7 | ------------
8 |
9 | If you are using `pip`, installing the scripts is as easy as
10 |
11 | ```
12 | pip install git+https://github.com/cremi/cremi_python.git
13 | ```
14 |
15 | Alternatively, you can clone this repository yourself and use `distutils`:
16 | ```
17 | python setup.py install
18 | ```
19 | or just add the `cremi_python` directory to your `PYTHONPATH`.
20 |
21 | Reading and writing of CREMI files
22 | ----------------------------------
23 |
24 | We recommend you use the `cremi.io` package for reading and writing of
25 | CREMI files. This way, you can be sure that the submissions you produce are of
26 | the form expected by the challenge server, and that compression is
27 | used.
28 |
29 | You open a file by instantiating a `CremiFile` object:
30 | ```python
31 | from cremi.io import CremiFile
32 | file = CremiFile("example.hdf", "r")
33 | ```
34 | The second argument specifies the mode, which is `"r"` for reading, `"w"` for
35 | writing a new file (careful, this replaces an existing file with the same
36 | name), and `"a"` to append to or change an existing file.
37 |
38 | The `CremiFile` class provides read and write methods for each of the challenge
39 | datasets. To read the neuron IDs in the training volumes, for example, use
40 | `read_neuron_ids()`:
41 | ```python
42 | neuron_ids = file.read_neuron_ids()
43 | ```
44 | This returns the `neuron_ids` as a `cremi.Volume`, which contains an HDF5 dataset (`neuron_ids.data`) and some meta-information. If you are using the padded version of the volumes, `neuron_ids.offset` will contain the starting point of `neuron_ids` inside the `raw` volume. Note that these numbers are given in nm.
45 |
46 | To save a dataset, use the appropriate write method, e.g.:
47 | ```
48 | file.write_neuron_ids(neuron_ids)
49 | ```
50 | See the included `example_read.py` and `example_write.py` for more details.
51 |
52 | Evaluation
53 | ----------
54 |
55 | For each of the challenge categories, you will find an evaluation class in
56 | `cremi.evaluation`: `NeuronIds`, `Clefts`, and `SynapticPartners`.
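In the snippet below, `test` and `truth` are simply `CremiFile` objects opened for reading, exactly as in the included `example_evaluation.py`; the file names are placeholders for your own submission and the corresponding ground truth:
```python
from cremi.io import CremiFile

test  = CremiFile("test.hdf", "r")         # your submission
truth = CremiFile("groundtruth.hdf", "r")  # the matching ground truth
```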
57 | 58 | After you read a test file `test` and a ground truth file `truth`, you can 59 | evaluate your results by instantiating these classes as follows: 60 | ```python 61 | from cremi.evaluation import NeuronIds, Clefts, SynapticPartners 62 | 63 | neuron_ids_evaluation = NeuronIds(truth.read_neuron_ids()) 64 | (voi_split, voi_merge) = neuron_ids_evaluation.voi(test.read_neuron_ids()) 65 | adapted_rand = neuron_ids_evaluation.adapted_rand(test.read_neuron_ids()) 66 | 67 | clefts_evaluation = Clefts(test.read_clefts(), truth.read_clefts()) 68 | fp_count = clefts_evaluation.count_false_positives() 69 | fn_count = clefts_evaluation.count_false_negatives() 70 | fp_stats = clefts_evaluation.acc_false_positives() 71 | fn_stats = clefts_evaluation.acc_false_negatives() 72 | 73 | synaptic_partners_evaluation = SynapticPartners() 74 | fscore = synaptic_partners_evaluation.fscore( 75 | test.read_annotations(), 76 | truth.read_annotations(), 77 | truth.read_neuron_ids()) 78 | ``` 79 | See the included `example_evaluate.py` for more details. The metrics are 80 | described in more detail on the [CREMI Challenge website](http://cremi.org/metrics/). 81 | 82 | Acknowledgements 83 | ---------------- 84 | 85 | Evaluation code contributed by: 86 | 87 | * [Jan Funke](https://github.com/funkey) 88 | * [Juan Nunez-Iglesias](http://github.com/jni) 89 | * [Philipp Hanslovsky](http://github.com/hanslovsky) 90 | * [Stephan Saalfeld](http://github.com/axtimwalde) 91 | -------------------------------------------------------------------------------- /cremi/Annotations.py: -------------------------------------------------------------------------------- 1 | class Annotations: 2 | 3 | def __init__(self, offset = (0.0, 0.0, 0.0)): 4 | 5 | self.__types = {} 6 | self.__locations = {} 7 | self.comments = {} 8 | self.pre_post_partners = [] 9 | self.offset = offset 10 | 11 | def __check(self, id): 12 | if not id in self.__types.keys(): 13 | raise "there is no annotation with id " + str(id) 14 | 15 | def add_annotation(self, id, type, location): 16 | """Add a new annotation. 17 | 18 | Parameters 19 | ---------- 20 | 21 | id: int 22 | The ID of the new annotation. 23 | 24 | type: string 25 | A string denoting the type of the annotation. Use 26 | "presynaptic_site" or "postsynaptic_site" for pre- and 27 | post-synaptic annotations, respectively. 28 | 29 | location: tuple, float 30 | The location of the annotation, relative to the offset. 31 | """ 32 | 33 | self.__types[id] = type.encode('utf8') 34 | self.__locations[id] = location 35 | 36 | def add_comment(self, id, comment): 37 | """Add a comment to an annotation. 38 | """ 39 | 40 | self.__check(id) 41 | self.comments[id] = comment.encode('utf8') 42 | 43 | def set_pre_post_partners(self, pre_id, post_id): 44 | """Mark two annotations as pre- and post-synaptic partners. 45 | """ 46 | 47 | self.__check(pre_id) 48 | self.__check(post_id) 49 | self.pre_post_partners.append((pre_id, post_id)) 50 | 51 | def ids(self): 52 | """Get the ids of all annotations. 53 | """ 54 | 55 | return self.__types.keys() 56 | 57 | def types(self): 58 | """Get the types of all annotations. 59 | """ 60 | 61 | return self.__types.values() 62 | 63 | def locations(self): 64 | """Get the locations of all annotations. Locations are in world units, 65 | relative to the offset. 66 | """ 67 | 68 | return self.__locations.values() 69 | 70 | def get_annotation(self, id): 71 | """Get the type and location of an annotation by its id. 
72 | """ 73 | 74 | self.__check(id) 75 | return (self.__types[id], self.__locations[id]) 76 | -------------------------------------------------------------------------------- /cremi/Volume.py: -------------------------------------------------------------------------------- 1 | 2 | class Volume: 3 | 4 | def __init__(self, data, resolution = (1.0, 1.0, 1.0), offset = (0.0, 0.0, 0.0), comment = ""): 5 | self.data = data 6 | self.resolution = resolution 7 | self.offset = offset 8 | self.comment = comment 9 | 10 | def __getitem__(self, location): 11 | """Get the closest value of this volume to the given location. The 12 | location is in world units, relative to the volumes offset. 13 | 14 | This method takes into account the resolution of the volume. An 15 | IndexError exception is raised if the location is not contained in this 16 | volume. 17 | 18 | To access the raw pixel values, use the `data` attribute. 19 | """ 20 | 21 | i = tuple([ round(location[d]/self.resolution[d]) for d in range(len(location)) ]) 22 | 23 | if min(i) >= 0: 24 | try: 25 | return self.data[i] 26 | except IndexError as e: 27 | raise IndexError("location " + str(location) + " does not lie inside volume: " + str(e)) 28 | 29 | raise IndexError("location " + str(location) + " does not lie inside volume") 30 | 31 | def __setitem__(self, location, value): 32 | """Set the closest value of this volume to the given location. The 33 | location is in world units, relative to the volumes offset. 34 | 35 | This method takes into account the resolution of the volume. An 36 | IndexError exception is raised if the location is not contained in this 37 | volume. 38 | 39 | To access the raw pixel values, use the `data` attribute. 40 | """ 41 | 42 | i = tuple([ round(location[d]/self.resolution[d]) for d in range(len(location)) ]) 43 | 44 | if min(i) >= 0: 45 | try: 46 | self.data[i] = value 47 | return 48 | except IndexError as e: 49 | raise IndexError("location " + str(location) + " does not lie inside volume: " + str(e)) 50 | 51 | raise IndexError("location " + str(location) + " does not lie inside volume") 52 | -------------------------------------------------------------------------------- /cremi/__init__.py: -------------------------------------------------------------------------------- 1 | from Annotations import * 2 | from Volume import * 3 | from io import * 4 | from evaluation import * 5 | -------------------------------------------------------------------------------- /cremi/evaluation/Clefts.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from scipy import ndimage 3 | 4 | class Clefts: 5 | 6 | def __init__(self, test, truth): 7 | 8 | test_clefts = test 9 | truth_clefts = truth 10 | 11 | self.truth_clefts_invalid = truth_clefts.data.value == 0xfffffffffffffffe 12 | 13 | self.test_clefts_mask = np.logical_or(test_clefts.data.value == 0xffffffffffffffff, self.truth_clefts_invalid) 14 | self.truth_clefts_mask = np.logical_or(truth_clefts.data.value == 0xffffffffffffffff, self.truth_clefts_invalid) 15 | 16 | self.test_clefts_edt = ndimage.distance_transform_edt(self.test_clefts_mask, sampling=test_clefts.resolution) 17 | self.truth_clefts_edt = ndimage.distance_transform_edt(self.truth_clefts_mask, sampling=truth_clefts.resolution) 18 | 19 | def count_false_positives(self, threshold = 200): 20 | 21 | mask1 = np.invert(self.test_clefts_mask) 22 | mask2 = self.truth_clefts_edt > threshold 23 | false_positives = self.truth_clefts_edt[np.logical_and(mask1, mask2)] 24 | 
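        # mask1 selects voxels that the submission labels as cleft (and that are
        # valid in the ground truth); mask2 keeps only those farther than
        # `threshold` nm from any ground-truth cleft voxel (distances come from
        # the EDT computed on the ground-truth cleft mask), so the number of
        # selected voxels is the false-positive count returned below.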
return false_positives.size 25 | 26 | def count_false_negatives(self, threshold = 200): 27 | 28 | mask1 = np.invert(self.truth_clefts_mask) 29 | mask2 = self.test_clefts_edt > threshold 30 | false_negatives = self.test_clefts_edt[np.logical_and(mask1, mask2)] 31 | return false_negatives.size 32 | 33 | def acc_false_positives(self): 34 | 35 | mask = np.invert(self.test_clefts_mask) 36 | false_positives = self.truth_clefts_edt[mask] 37 | stats = { 38 | 'mean': np.mean(false_positives), 39 | 'std': np.std(false_positives), 40 | 'max': np.amax(false_positives), 41 | 'count': false_positives.size, 42 | 'median': np.median(false_positives)} 43 | return stats 44 | 45 | def acc_false_negatives(self): 46 | 47 | mask = np.invert(self.truth_clefts_mask) 48 | false_negatives = self.test_clefts_edt[mask] 49 | stats = { 50 | 'mean': np.mean(false_negatives), 51 | 'std': np.std(false_negatives), 52 | 'max': np.amax(false_negatives), 53 | 'count': false_negatives.size, 54 | 'median': np.median(false_negatives)} 55 | return stats 56 | 57 | -------------------------------------------------------------------------------- /cremi/evaluation/NeuronIds.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from border_mask import create_border_mask 3 | from voi import voi 4 | from rand import adapted_rand 5 | 6 | class NeuronIds: 7 | 8 | def __init__(self, groundtruth, border_threshold = None): 9 | """Create a new evaluation object for neuron ids against the provided ground truth. 10 | 11 | Parameters 12 | ---------- 13 | 14 | groundtruth: Volume 15 | The ground truth volume containing neuron ids. 16 | 17 | border_threshold: None or float, in world units 18 | Pixels within `border_threshold` to a label border in the 19 | same section will be assigned to background and ignored during 20 | the evaluation. 21 | """ 22 | 23 | assert groundtruth.resolution[1] == groundtruth.resolution[2], \ 24 | "x and y resolutions of ground truth are not the same (%f != %f)" % \ 25 | (groundtruth.resolution[1], groundtruth.resolution[2]) 26 | 27 | self.groundtruth = groundtruth 28 | self.border_threshold = border_threshold 29 | 30 | if self.border_threshold: 31 | 32 | print "Computing border mask..." 33 | 34 | self.gt = np.zeros(groundtruth.data.shape, dtype=np.uint64) 35 | create_border_mask( 36 | groundtruth.data, 37 | self.gt, 38 | float(border_threshold)/groundtruth.resolution[1], 39 | np.uint64(-1)) 40 | else: 41 | self.gt = np.array(self.groundtruth.data).copy() 42 | 43 | # current voi and rand implementations don't work with np.uint64(-1) as 44 | # background label, so we make it 0 here and bump all other labels 45 | self.gt += 1 46 | 47 | def voi(self, segmentation): 48 | 49 | assert list(segmentation.data.shape) == list(self.groundtruth.data.shape) 50 | assert list(segmentation.resolution) == list(self.groundtruth.resolution) 51 | 52 | print "Computing VOI..." 53 | 54 | return voi(np.array(segmentation.data), self.gt, ignore_groundtruth = [0]) 55 | 56 | def adapted_rand(self, segmentation): 57 | 58 | assert list(segmentation.data.shape) == list(self.groundtruth.data.shape) 59 | assert list(segmentation.resolution) == list(self.groundtruth.resolution) 60 | 61 | print "Computing RAND..." 
62 | 63 | return adapted_rand(np.array(segmentation.data), self.gt) 64 | -------------------------------------------------------------------------------- /cremi/evaluation/SynapticPartners.py: -------------------------------------------------------------------------------- 1 | from synaptic_partners import synaptic_partners_fscore 2 | 3 | class SynapticPartners: 4 | 5 | def __init__(self, matching_threshold = 400): 6 | 7 | self.matching_threshold = matching_threshold 8 | 9 | def fscore(self, rec_annotations, gt_annotations, gt_segmentation, all_stats = False): 10 | 11 | return synaptic_partners_fscore(rec_annotations, gt_annotations, gt_segmentation, self.matching_threshold, all_stats) 12 | -------------------------------------------------------------------------------- /cremi/evaluation/__init__.py: -------------------------------------------------------------------------------- 1 | from Clefts import * 2 | from NeuronIds import * 3 | from SynapticPartners import * 4 | from border_mask import * 5 | -------------------------------------------------------------------------------- /cremi/evaluation/border_mask.py: -------------------------------------------------------------------------------- 1 | import h5py 2 | import numpy as np 3 | import scipy 4 | 5 | def create_border_mask(input_data, target, max_dist, background_label,axis=0): 6 | """ 7 | Overlay a border mask with background_label onto input data. 8 | A pixel is part of a border if one of its 4-neighbors has different label. 9 | 10 | Parameters 11 | ---------- 12 | input_data : h5py.Dataset or numpy.ndarray - Input data containing neuron ids 13 | target : h5py.Datset or numpy.ndarray - Target which input data overlayed with border mask is written into. 14 | max_dist : int or float - Maximum distance from border for pixels to be included into the mask. 15 | background_label : int - Border mask will be overlayed using this label. 16 | axis : int - Axis of iteration (perpendicular to 2d images for which mask will be generated) 17 | """ 18 | sl = [slice(None) for d in xrange(len(target.shape))] 19 | 20 | for z in xrange(target.shape[axis]): 21 | sl[ axis ] = z 22 | border = create_border_mask_2d(input_data[tuple(sl)], max_dist) 23 | target_slice = input_data[tuple(sl)] if isinstance(input_data,h5py.Dataset) else np.copy(input_data[tuple(sl)]) 24 | target_slice[border] = background_label 25 | target[tuple(sl)] = target_slice 26 | 27 | def create_and_write_masked_neuron_ids(in_file, out_file, max_dist, background_label, overwrite=False): 28 | """ 29 | Overlay a border mask with background_label onto input data loaded from in_file and write into out_file. 30 | A pixel is part of a border if one of its 4-neighbors has different label. 31 | 32 | Parameters 33 | ---------- 34 | in_file : CremiFile - Input file containing neuron ids 35 | out_file : CremiFile - Output file which input data overlayed with border mask is written into. 36 | max_dist : int or float - Maximum distance from border for pixels to be included into the mask. 37 | background_label : int - Border mask will be overlayed using this label. 38 | overwrite : bool - Overwrite existing data in out_file (True) or do nothing if data is present in out_file (False). 
39 | """ 40 | if ( not in_file.has_neuron_ids() ) or ( (not overwrite) and out_file.has_neuron_ids() ): 41 | return 42 | 43 | neuron_ids, resolution, offset, comment = in_file.read_neuron_ids() 44 | comment = ('' if comment is None else comment + ' ') + 'Border masked with max_dist=%f' % max_dist 45 | 46 | path = "/volumes/labels/neuron_ids" 47 | group_path = "/".join( path.split("/")[:-1] ) 48 | ds_name = path.split("/")[-1] 49 | if ( out_file.has_neuron_ids() ): 50 | del out_file.h5file[path] 51 | if (group_path not in out_file.h5file): 52 | out_file.h5file.create_group(group_path) 53 | 54 | group = out_file.h5file[group_path] 55 | target = group.create_dataset(ds_name, shape=neuron_ids.shape, dtype=neuron_ids.dtype) 56 | target.attrs["resolution"] = resolution 57 | target.attrs["comment"] = comment 58 | if offset != (0.0, 0.0, 0.0): 59 | target.attrs["offset"] = offset 60 | 61 | create_border_mask(neuron_ids, target, max_dist, background_label) 62 | 63 | def create_border_mask_2d(image, max_dist): 64 | """ 65 | Create binary border mask for image. 66 | A pixel is part of a border if one of its 4-neighbors has different label. 67 | 68 | Parameters 69 | ---------- 70 | image : numpy.ndarray - Image containing integer labels. 71 | max_dist : int or float - Maximum distance from border for pixels to be included into the mask. 72 | 73 | Returns 74 | ------- 75 | mask : numpy.ndarray - Binary mask of border pixels. Same shape as image. 76 | """ 77 | max_dist = max(max_dist, 0) 78 | 79 | padded = np.pad(image, 1, mode='edge') 80 | 81 | border_pixels = np.logical_and( 82 | np.logical_and( image == padded[:-2, 1:-1], image == padded[2:, 1:-1] ), 83 | np.logical_and( image == padded[1:-1, :-2], image == padded[1:-1, 2:] ) 84 | ) 85 | 86 | distances = scipy.ndimage.distance_transform_edt( 87 | border_pixels, 88 | return_distances=True, 89 | return_indices=False 90 | ) 91 | 92 | return distances <= max_dist 93 | 94 | -------------------------------------------------------------------------------- /cremi/evaluation/rand.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | 3 | import numpy as np 4 | import scipy.sparse as sparse 5 | 6 | # Evaluation code courtesy of Juan Nunez-Iglesias, taken from 7 | # https://github.com/janelia-flyem/gala/blob/master/gala/evaluate.py 8 | 9 | def adapted_rand(seg, gt, all_stats=False): 10 | """Compute Adapted Rand error as defined by the SNEMI3D contest [1] 11 | 12 | Formula is given as 1 - the maximal F-score of the Rand index 13 | (excluding the zero component of the original labels). Adapted 14 | from the SNEMI3D MATLAB script, hence the strange style. 15 | 16 | Parameters 17 | ---------- 18 | seg : np.ndarray 19 | the segmentation to score, where each value is the label at that point 20 | gt : np.ndarray, same shape as seg 21 | the groundtruth to score against, where each value is a label 22 | all_stats : boolean, optional 23 | whether to also return precision and recall as a 3-tuple with rand_error 24 | 25 | Returns 26 | ------- 27 | are : float 28 | The adapted Rand error; equal to $1 - \frac{2pr}{p + r}$, 29 | where $p$ and $r$ are the precision and recall described below. 30 | prec : float, optional 31 | The adapted Rand precision. (Only returned when `all_stats` is ``True``.) 32 | rec : float, optional 33 | The adapted Rand recall. (Only returned when `all_stats` is ``True``.) 
34 | 35 | References 36 | ---------- 37 | [1]: http://brainiac2.mit.edu/SNEMI3D/evaluation 38 | """ 39 | # segA is truth, segB is query 40 | segA = np.ravel(gt) 41 | segB = np.ravel(seg) 42 | n = segA.size 43 | 44 | n_labels_A = np.amax(segA) + 1 45 | n_labels_B = np.amax(segB) + 1 46 | 47 | ones_data = np.ones(n) 48 | 49 | p_ij = sparse.csr_matrix((ones_data, (segA[:], segB[:])), shape=(n_labels_A, n_labels_B)) 50 | 51 | a = p_ij[1:n_labels_A,:] 52 | b = p_ij[1:n_labels_A,1:n_labels_B] 53 | c = p_ij[1:n_labels_A,0].todense() 54 | d = b.multiply(b) 55 | 56 | a_i = np.array(a.sum(1)) 57 | b_i = np.array(b.sum(0)) 58 | 59 | sumA = np.sum(a_i * a_i) 60 | sumB = np.sum(b_i * b_i) + (np.sum(c) / n) 61 | sumAB = np.sum(d) + (np.sum(c) / n) 62 | 63 | precision = sumAB / sumB 64 | recall = sumAB / sumA 65 | 66 | fScore = 2.0 * precision * recall / (precision + recall) 67 | are = 1.0 - fScore 68 | 69 | if all_stats: 70 | return (are, precision, recall) 71 | else: 72 | return are 73 | -------------------------------------------------------------------------------- /cremi/evaluation/synaptic_partners.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | from scipy.optimize import linear_sum_assignment 3 | import numpy as np 4 | 5 | def synaptic_partners_fscore(rec_annotations, gt_annotations, gt_segmentation, matching_threshold = 400, all_stats = False): 6 | """Compute the f-score of the found synaptic partners. 7 | 8 | Parameters 9 | ---------- 10 | 11 | rec_annotations: Annotations, containing found synaptic partners 12 | 13 | gt_annotations: Annotations, containing ground truth synaptic partners 14 | 15 | gt_segmentation: Volume, ground truth neuron segmentation 16 | 17 | matching_threshold: float, world units 18 | Euclidean distance threshold to consider two annotations a potential 19 | match. Annotations that are `matching_threshold` or more untis apart 20 | from each other are not considered as potential matches. 21 | 22 | all_stats: boolean, optional 23 | Whether to also return precision, recall, FP, FN, and matches as a 6-tuple with f-score 24 | 25 | Returns 26 | ------- 27 | 28 | fscore: float 29 | The f-score of the found synaptic partners. 30 | precision: float, optional 31 | recall: float, optional 32 | fp: int, optional 33 | fn: int, optional 34 | filtered_matches: list of tuples, optional 35 | The indices of the matches with matching costs. 36 | """ 37 | 38 | # get cost matrix 39 | costs = cost_matrix(rec_annotations, gt_annotations, gt_segmentation, matching_threshold) 40 | 41 | # match using Hungarian method 42 | print "Finding cost-minimal matches..." 
43 | matches = linear_sum_assignment(costs - np.amax(costs) - 1) 44 | matches = zip(matches[0], matches[1]) # scipy returns matches as numpy arrays 45 | 46 | filtered_matches = [ (i,j, costs[i][j]) for (i,j) in matches if costs[i][j] <= matching_threshold ] 47 | print str(len(filtered_matches)) + " matches found" 48 | 49 | # unmatched in rec = FP 50 | fp = len(rec_annotations.pre_post_partners) - len(filtered_matches) 51 | 52 | # unmatched in gt = FN 53 | fn = len(gt_annotations.pre_post_partners) - len(filtered_matches) 54 | 55 | # all ground truth elements - FN = TP 56 | tp = len(gt_annotations.pre_post_partners) - fn 57 | 58 | precision = float(tp)/(tp + fp) 59 | recall = float(tp)/(tp + fn) 60 | fscore = 2.0*precision*recall/(precision + recall) 61 | 62 | if all_stats: 63 | return (fscore, precision, recall, fp, fn, filtered_matches) 64 | else: 65 | return fscore 66 | 67 | def cost_matrix(rec, gt, gt_segmentation, matching_threshold): 68 | 69 | print "Computing matching costs..." 70 | 71 | rec_locations = pre_post_locations(rec, gt_segmentation) 72 | gt_locations = pre_post_locations(gt, gt_segmentation) 73 | 74 | rec_labels = pre_post_labels(rec_locations, gt_segmentation) 75 | gt_labels = pre_post_labels(gt_locations, gt_segmentation) 76 | 77 | size = max(len(rec_locations), len(gt_locations)) 78 | costs = np.zeros((size, size), dtype=np.float) 79 | costs[:] = 2*matching_threshold 80 | num_potential_matches = 0 81 | for i in range(len(rec_locations)): 82 | for j in range(len(gt_locations)): 83 | c = cost(rec_locations[i], gt_locations[j], rec_labels[i], gt_labels[j], matching_threshold) 84 | costs[i,j] = c 85 | if c <= matching_threshold: 86 | num_potential_matches += 1 87 | 88 | print str(num_potential_matches) + " potential matches found" 89 | 90 | return costs 91 | 92 | def pre_post_locations(annotations, gt_segmentation): 93 | """Get the locations of the annotations relative to the ground truth offset.""" 94 | 95 | locations = annotations.locations() 96 | shift = sub(annotations.offset, gt_segmentation.offset) 97 | 98 | return [ 99 | (add(annotations.get_annotation(pre_id)[1], shift), add(annotations.get_annotation(post_id)[1], shift)) for (pre_id, post_id) in annotations.pre_post_partners 100 | ] 101 | 102 | def pre_post_labels(locations, segmentation): 103 | 104 | return [ (segmentation[pre], segmentation[post]) for (pre, post) in locations ] 105 | 106 | 107 | def cost(pre_post_location1, pre_post_location2, labels1, labels2, matching_threshold): 108 | 109 | max_cost = 2*matching_threshold 110 | 111 | # pairs do not link the same segments 112 | if labels1 != labels2: 113 | return max_cost 114 | 115 | pre_dist = distance(pre_post_location1[0], pre_post_location2[0]) 116 | post_dist = distance(pre_post_location1[1], pre_post_location2[1]) 117 | 118 | if pre_dist > matching_threshold or post_dist > matching_threshold: 119 | return max_cost 120 | 121 | return 0.5*(pre_dist + post_dist) 122 | 123 | def distance(a, b): 124 | return np.linalg.norm(np.array(list(a))-np.array(list(b))) 125 | 126 | def add(a, b): 127 | return tuple([a[d] + b[d] for d in range(len(b))]) 128 | 129 | def sub(a, b): 130 | return tuple([a[d] - b[d] for d in range(len(b))]) 131 | -------------------------------------------------------------------------------- /cremi/evaluation/voi.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | 3 | # Evaluation code courtesy of Juan Nunez-Iglesias, taken from 4 | # 
https://github.com/janelia-flyem/gala/blob/master/gala/evaluate.py 5 | 6 | import numpy as np 7 | import scipy.sparse as sparse 8 | 9 | def voi(reconstruction, groundtruth, ignore_reconstruction=[], ignore_groundtruth=[0]): 10 | """Return the conditional entropies of the variation of information metric. [1] 11 | 12 | Let X be a reconstruction, and Y a ground truth labelling. The variation of 13 | information between the two is the sum of two conditional entropies: 14 | 15 | VI(X, Y) = H(X|Y) + H(Y|X). 16 | 17 | The first one, H(X|Y), is a measure of oversegmentation, the second one, 18 | H(Y|X), a measure of undersegmentation. These measures are referred to as 19 | the variation of information split or merge error, respectively. 20 | 21 | Parameters 22 | ---------- 23 | seg : np.ndarray, int type, arbitrary shape 24 | A candidate segmentation. 25 | gt : np.ndarray, int type, same shape as `seg` 26 | The ground truth segmentation. 27 | ignore_seg, ignore_gt : list of int, optional 28 | Any points having a label in this list are ignored in the evaluation. 29 | By default, only the label 0 in the ground truth will be ignored. 30 | 31 | Returns 32 | ------- 33 | (split, merge) : float 34 | The variation of information split and merge error, i.e., H(X|Y) and H(Y|X) 35 | 36 | References 37 | ---------- 38 | [1] Meila, M. (2007). Comparing clusterings - an information based 39 | distance. Journal of Multivariate Analysis 98, 873-895. 40 | """ 41 | (hyxg, hxgy) = split_vi(reconstruction, groundtruth, ignore_reconstruction, ignore_groundtruth) 42 | return (hxgy, hyxg) 43 | 44 | def split_vi(x, y=None, ignore_x=[0], ignore_y=[0]): 45 | """Return the symmetric conditional entropies associated with the VI. 46 | 47 | The variation of information is defined as VI(X,Y) = H(X|Y) + H(Y|X). 48 | If Y is the ground-truth segmentation, then H(Y|X) can be interpreted 49 | as the amount of under-segmentation of Y and H(X|Y) is then the amount 50 | of over-segmentation. In other words, a perfect over-segmentation 51 | will have H(Y|X)=0 and a perfect under-segmentation will have H(X|Y)=0. 52 | 53 | If y is None, x is assumed to be a contingency table. 54 | 55 | Parameters 56 | ---------- 57 | x : np.ndarray 58 | Label field (int type) or contingency table (float). `x` is 59 | interpreted as a contingency table (summing to 1.0) if and only if `y` 60 | is not provided. 61 | y : np.ndarray of int, same shape as x, optional 62 | A label field to compare to `x`. 63 | ignore_x, ignore_y : list of int, optional 64 | Any points having a label in this list are ignored in the evaluation. 65 | Ignore 0-labeled points by default. 66 | 67 | Returns 68 | ------- 69 | sv : np.ndarray of float, shape (2,) 70 | The conditional entropies of Y|X and X|Y. 71 | 72 | See Also 73 | -------- 74 | vi 75 | """ 76 | _, _, _ , hxgy, hygx, _, _ = vi_tables(x, y, ignore_x, ignore_y) 77 | # false merges, false splits 78 | return np.array([hygx.sum(), hxgy.sum()]) 79 | 80 | def vi_tables(x, y=None, ignore_x=[0], ignore_y=[0]): 81 | """Return probability tables used for calculating VI. 82 | 83 | If y is None, x is assumed to be a contingency table. 84 | 85 | Parameters 86 | ---------- 87 | x, y : np.ndarray 88 | Either x and y are provided as equal-shaped np.ndarray label fields 89 | (int type), or y is not provided and x is a contingency table 90 | (sparse.csc_matrix) that may or may not sum to 1. 91 | ignore_x, ignore_y : list of int, optional 92 | Rows and columns (respectively) to ignore in the contingency table. 
93 | These are labels that are not counted when evaluating VI. 94 | 95 | Returns 96 | ------- 97 | pxy : sparse.csc_matrix of float 98 | The normalized contingency table. 99 | px, py, hxgy, hygx, lpygx, lpxgy : np.ndarray of float 100 | The proportions of each label in `x` and `y` (`px`, `py`), the 101 | per-segment conditional entropies of `x` given `y` and vice-versa, the 102 | per-segment conditional probability p log p. 103 | """ 104 | if y is not None: 105 | pxy = contingency_table(x, y, ignore_x, ignore_y) 106 | else: 107 | cont = x 108 | total = float(cont.sum()) 109 | # normalize, since it is an identity op if already done 110 | pxy = cont / total 111 | 112 | # Calculate probabilities 113 | px = np.array(pxy.sum(axis=1)).ravel() 114 | py = np.array(pxy.sum(axis=0)).ravel() 115 | # Remove zero rows/cols 116 | nzx = px.nonzero()[0] 117 | nzy = py.nonzero()[0] 118 | nzpx = px[nzx] 119 | nzpy = py[nzy] 120 | nzpxy = pxy[nzx, :][:, nzy] 121 | 122 | # Calculate log conditional probabilities and entropies 123 | lpygx = np.zeros(np.shape(px)) 124 | lpygx[nzx] = xlogx(divide_rows(nzpxy, nzpx)).sum(axis=1).ravel() 125 | # \sum_x{p_{y|x} \log{p_{y|x}}} 126 | hygx = -(px*lpygx) # \sum_x{p_x H(Y|X=x)} = H(Y|X) 127 | 128 | lpxgy = np.zeros(np.shape(py)) 129 | lpxgy[nzy] = xlogx(divide_columns(nzpxy, nzpy)).sum(axis=0).ravel() 130 | hxgy = -(py*lpxgy) 131 | 132 | return [pxy] + list(map(np.asarray, [px, py, hxgy, hygx, lpygx, lpxgy])) 133 | 134 | def contingency_table(seg, gt, ignore_seg=[0], ignore_gt=[0], norm=True): 135 | """Return the contingency table for all regions in matched segmentations. 136 | 137 | Parameters 138 | ---------- 139 | seg : np.ndarray, int type, arbitrary shape 140 | A candidate segmentation. 141 | gt : np.ndarray, int type, same shape as `seg` 142 | The ground truth segmentation. 143 | ignore_seg : list of int, optional 144 | Values to ignore in `seg`. Voxels in `seg` having a value in this list 145 | will not contribute to the contingency table. (default: [0]) 146 | ignore_gt : list of int, optional 147 | Values to ignore in `gt`. Voxels in `gt` having a value in this list 148 | will not contribute to the contingency table. (default: [0]) 149 | norm : bool, optional 150 | Whether to normalize the table so that it sums to 1. 151 | 152 | Returns 153 | ------- 154 | cont : scipy.sparse.csc_matrix 155 | A contingency table. `cont[i, j]` will equal the number of voxels 156 | labeled `i` in `seg` and `j` in `gt`. (Or the proportion of such voxels 157 | if `norm=True`.) 158 | """ 159 | segr = seg.ravel() 160 | gtr = gt.ravel() 161 | ignored = np.zeros(segr.shape, np.bool) 162 | data = np.ones(len(gtr)) 163 | for i in ignore_seg: 164 | ignored[segr == i] = True 165 | for j in ignore_gt: 166 | ignored[gtr == j] = True 167 | data[ignored] = 0 168 | cont = sparse.coo_matrix((data, (segr, gtr))).tocsc() 169 | if norm: 170 | cont /= float(cont.sum()) 171 | return cont 172 | 173 | def divide_columns(matrix, row, in_place=False): 174 | """Divide each column of `matrix` by the corresponding element in `row`. 175 | 176 | The result is as follows: out[i, j] = matrix[i, j] / row[j] 177 | 178 | Parameters 179 | ---------- 180 | matrix : np.ndarray, scipy.sparse.csc_matrix or csr_matrix, shape (M, N) 181 | The input matrix. 182 | column : a 1D np.ndarray, shape (N,) 183 | The row dividing `matrix`. 184 | in_place : bool (optional, default False) 185 | Do the computation in-place. 186 | 187 | Returns 188 | ------- 189 | out : same type as `matrix` 190 | The result of the row-wise division. 
191 | """ 192 | if in_place: 193 | out = matrix 194 | else: 195 | out = matrix.copy() 196 | if type(out) in [sparse.csc_matrix, sparse.csr_matrix]: 197 | if type(out) == sparse.csc_matrix: 198 | convert_to_csc = True 199 | out = out.tocsr() 200 | else: 201 | convert_to_csc = False 202 | row_repeated = np.take(row, out.indices) 203 | nz = out.data.nonzero() 204 | out.data[nz] /= row_repeated[nz] 205 | if convert_to_csc: 206 | out = out.tocsc() 207 | else: 208 | out /= row[np.newaxis, :] 209 | return out 210 | 211 | def divide_rows(matrix, column, in_place=False): 212 | """Divide each row of `matrix` by the corresponding element in `column`. 213 | 214 | The result is as follows: out[i, j] = matrix[i, j] / column[i] 215 | 216 | Parameters 217 | ---------- 218 | matrix : np.ndarray, scipy.sparse.csc_matrix or csr_matrix, shape (M, N) 219 | The input matrix. 220 | column : a 1D np.ndarray, shape (M,) 221 | The column dividing `matrix`. 222 | in_place : bool (optional, default False) 223 | Do the computation in-place. 224 | 225 | Returns 226 | ------- 227 | out : same type as `matrix` 228 | The result of the row-wise division. 229 | """ 230 | if in_place: 231 | out = matrix 232 | else: 233 | out = matrix.copy() 234 | if type(out) in [sparse.csc_matrix, sparse.csr_matrix]: 235 | if type(out) == sparse.csr_matrix: 236 | convert_to_csr = True 237 | out = out.tocsc() 238 | else: 239 | convert_to_csr = False 240 | column_repeated = np.take(column, out.indices) 241 | nz = out.data.nonzero() 242 | out.data[nz] /= column_repeated[nz] 243 | if convert_to_csr: 244 | out = out.tocsr() 245 | else: 246 | out /= column[:, np.newaxis] 247 | return out 248 | 249 | def xlogx(x, out=None, in_place=False): 250 | """Compute x * log_2(x). 251 | 252 | We define 0 * log_2(0) = 0 253 | 254 | Parameters 255 | ---------- 256 | x : np.ndarray or scipy.sparse.csc_matrix or csr_matrix 257 | The input array. 258 | out : same type as x (optional) 259 | If provided, use this array/matrix for the result. 260 | in_place : bool (optional, default False) 261 | Operate directly on x. 262 | 263 | Returns 264 | ------- 265 | y : same type as x 266 | Result of x * log_2(x). 267 | """ 268 | if in_place: 269 | y = x 270 | elif out is None: 271 | y = x.copy() 272 | else: 273 | y = out 274 | if type(y) in [sparse.csc_matrix, sparse.csr_matrix]: 275 | z = y.data 276 | else: 277 | z = y 278 | nz = z.nonzero() 279 | z[nz] *= np.log2(z[nz]) 280 | return y 281 | -------------------------------------------------------------------------------- /cremi/io/CremiFile.py: -------------------------------------------------------------------------------- 1 | import h5py 2 | import numpy as np 3 | from .. import Annotations 4 | from .. import Volume 5 | 6 | class CremiFile(object): 7 | 8 | def __init__(self, filename, mode): 9 | 10 | self.h5file = h5py.File(filename, mode) 11 | 12 | if mode == "w" or mode == "a": 13 | self.h5file["/"].attrs["file_format"] = "0.2" 14 | 15 | def __create_group(self, group): 16 | 17 | path = "/" 18 | for d in group.split("/"): 19 | path += d + "/" 20 | try: 21 | self.h5file.create_group(path) 22 | except ValueError: 23 | pass 24 | 25 | def __create_dataset(self, path, data, dtype, compression = None): 26 | """Wrapper around h5py's create_dataset. Creates the group, if not 27 | existing. Deletes a previous dataset, if existing and not compatible. 28 | Otherwise, replaces the dataset. 
29 | """ 30 | 31 | group = "/".join(path.split("/")[:-1]) 32 | ds_name = path.split("/")[-1] 33 | 34 | self.__create_group(group) 35 | 36 | if ds_name in self.h5file[group]: 37 | 38 | ds = self.h5file[path] 39 | if ds.dtype == dtype and ds.shape == np.array(data).shape: 40 | print "overwriting existing dataset" 41 | self.h5file[path][:] = data[:] 42 | return 43 | 44 | del self.h5file[path] 45 | 46 | self.h5file.create_dataset(path, data=data, dtype=dtype, compression=compression) 47 | 48 | def write_volume(self, volume, ds_name, dtype): 49 | 50 | self.__create_dataset(ds_name, data=volume.data, dtype=dtype, compression="gzip") 51 | self.h5file[ds_name].attrs["resolution"] = volume.resolution 52 | if volume.comment is not None: 53 | self.h5file[ds_name].attrs["comment"] = str(volume.comment) 54 | if tuple(volume.offset) != (0.0, 0.0, 0.0): 55 | self.h5file[ds_name].attrs["offset"] = volume.offset 56 | 57 | def read_volume(self, ds_name): 58 | 59 | volume = Volume(self.h5file[ds_name]) 60 | 61 | volume.resolution = self.h5file[ds_name].attrs["resolution"] 62 | if "offset" in self.h5file[ds_name].attrs: 63 | volume.offset = self.h5file[ds_name].attrs["offset"] 64 | if "comment" in self.h5file[ds_name].attrs: 65 | volume.comment = self.h5file[ds_name].attrs["comment"] 66 | 67 | return volume 68 | 69 | def __has_volume(self, ds_name): 70 | 71 | return ds_name in self.h5file 72 | 73 | def write_raw(self, raw): 74 | """Write a raw volume. 75 | """ 76 | 77 | self.write_volume(raw, "/volumes/raw", np.uint8) 78 | 79 | def write_neuron_ids(self, neuron_ids): 80 | """Write a volume of segmented neurons. 81 | """ 82 | 83 | self.write_volume(neuron_ids, "/volumes/labels/neuron_ids", np.uint64) 84 | 85 | def write_clefts(self, clefts): 86 | """Write a volume of segmented synaptic clefts. 87 | """ 88 | 89 | self.write_volume(clefts, "/volumes/labels/clefts", np.uint64) 90 | 91 | def write_annotations(self, annotations): 92 | """Write pre- and post-synaptic site annotations. 93 | """ 94 | 95 | if len(annotations.ids()) == 0: 96 | return 97 | 98 | self.__create_group("/annotations") 99 | if tuple(annotations.offset) != (0.0, 0.0, 0.0): 100 | self.h5file["/annotations"].attrs["offset"] = annotations.offset 101 | 102 | self.__create_dataset("/annotations/ids", data=annotations.ids(), dtype=np.uint64) 103 | self.__create_dataset("/annotations/types", data=annotations.types(), dtype=h5py.special_dtype(vlen=unicode), compression="gzip") 104 | self.__create_dataset("/annotations/locations", data=annotations.locations(), dtype=np.double) 105 | 106 | if len(annotations.comments) > 0: 107 | self.__create_dataset("/annotations/comments/target_ids", data=annotations.comments.keys(), dtype=np.uint64) 108 | self.__create_dataset("/annotations/comments/comments", data=annotations.comments.values(), dtype=h5py.special_dtype(vlen=unicode)) 109 | 110 | if len(annotations.pre_post_partners) > 0: 111 | self.__create_dataset("/annotations/presynaptic_site/partners", data=annotations.pre_post_partners, dtype=np.uint64) 112 | 113 | def has_raw(self): 114 | """Check if this file contains a raw volume. 115 | """ 116 | return self.__has_volume("/volumes/raw") 117 | 118 | def has_neuron_ids(self): 119 | """Check if this file contains neuron ids. 120 | """ 121 | return self.__has_volume("/volumes/labels/neuron_ids") 122 | 123 | def has_neuron_ids_confidence(self): 124 | """Check if this file contains confidence information about neuron ids. 
125 | """ 126 | return self.__has_volume("/volumes/labels/neuron_ids_confidence") 127 | 128 | def has_clefts(self): 129 | """Check if this file contains synaptic clefts. 130 | """ 131 | return self.__has_volume("/volumes/labels/clefts") 132 | 133 | def has_annotations(self): 134 | """Check if this file contains synaptic partner annotations. 135 | """ 136 | return "/annotations" in self.h5file 137 | 138 | def has_segment_annotations(self): 139 | """Check if this file contains segment annotations. 140 | """ 141 | return "/annotations" in self.h5file 142 | 143 | def read_raw(self): 144 | """Read the raw volume. 145 | Returns a Volume. 146 | """ 147 | 148 | return self.read_volume("/volumes/raw") 149 | 150 | def read_neuron_ids(self): 151 | """Read the volume of segmented neurons. 152 | Returns a Volume. 153 | """ 154 | 155 | return self.read_volume("/volumes/labels/neuron_ids") 156 | 157 | def read_neuron_ids_confidence(self): 158 | """Read confidence information about neuron ids. 159 | Returns Confidences. 160 | """ 161 | 162 | confidences = Confidences(num_levels=2) 163 | if not self.has_neuron_ids_confidence(): 164 | return confidences 165 | 166 | data = self.h5file["/volumes/labels/neuron_ids_confidence"] 167 | i = 0 168 | while i < len(data): 169 | level = data[i] 170 | i += 1 171 | num_ids = data[i] 172 | i += 1 173 | confidences.add_all(level, data[i:i+num_ids]) 174 | i += num_ids 175 | 176 | return confidences 177 | 178 | def read_clefts(self): 179 | """Read the volume of segmented synaptic clefts. 180 | Returns a Volume. 181 | """ 182 | 183 | return self.read_volume("/volumes/labels/clefts") 184 | 185 | def read_annotations(self): 186 | """Read pre- and post-synaptic site annotations. 187 | """ 188 | 189 | annotations = Annotations() 190 | 191 | if not "/annotations" in self.h5file: 192 | return annotations 193 | 194 | offset = (0.0, 0.0, 0.0) 195 | if "offset" in self.h5file["/annotations"].attrs: 196 | offset = self.h5file["/annotations"].attrs["offset"] 197 | annotations.offset = offset 198 | 199 | ids = self.h5file["/annotations/ids"] 200 | types = self.h5file["/annotations/types"] 201 | locations = self.h5file["/annotations/locations"] 202 | for i in range(len(ids)): 203 | annotations.add_annotation(ids[i], types[i], locations[i]) 204 | 205 | if "comments" in self.h5file["/annotations"]: 206 | ids = self.h5file["/annotations/comments/target_ids"] 207 | comments = self.h5file["/annotations/comments/comments"] 208 | for (id, comment) in zip(ids, comments): 209 | annotations.add_comment(id, comment) 210 | 211 | if "presynaptic_site/partners" in self.h5file["/annotations"]: 212 | pre_post = self.h5file["/annotations/presynaptic_site/partners"] 213 | for (pre, post) in pre_post: 214 | annotations.set_pre_post_partners(pre, post) 215 | 216 | return annotations 217 | 218 | def close(self): 219 | 220 | self.h5file.close() 221 | -------------------------------------------------------------------------------- /cremi/io/__init__.py: -------------------------------------------------------------------------------- 1 | from CremiFile import * 2 | -------------------------------------------------------------------------------- /example_evaluation.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | from cremi.io import CremiFile 4 | from cremi.evaluation import NeuronIds, Clefts, SynapticPartners 5 | 6 | test = CremiFile('test.hdf', 'r') 7 | truth = CremiFile('groundtruth.hdf', 'r') 8 | 9 | neuron_ids_evaluation = 
NeuronIds(truth.read_neuron_ids()) 10 | 11 | (voi_split, voi_merge) = neuron_ids_evaluation.voi(test.read_neuron_ids()) 12 | adapted_rand = neuron_ids_evaluation.adapted_rand(test.read_neuron_ids()) 13 | 14 | print "Neuron IDs" 15 | print "==========" 16 | print "\tvoi split : " + str(voi_split) 17 | print "\tvoi merge : " + str(voi_merge) 18 | print "\tadapted RAND: " + str(adapted_rand) 19 | 20 | clefts_evaluation = Clefts(test.read_clefts(), truth.read_clefts()) 21 | 22 | false_positive_count = clefts_evaluation.count_false_positives() 23 | false_negative_count = clefts_evaluation.count_false_negatives() 24 | 25 | false_positive_stats = clefts_evaluation.acc_false_positives() 26 | false_negative_stats = clefts_evaluation.acc_false_negatives() 27 | 28 | print "Clefts" 29 | print "======" 30 | 31 | print "\tfalse positives: " + str(false_positive_count) 32 | print "\tfalse negatives: " + str(false_negative_count) 33 | 34 | print "\tdistance to ground truth: " + str(false_positive_stats) 35 | print "\tdistance to proposal : " + str(false_negative_stats) 36 | 37 | synaptic_partners_evaluation = SynapticPartners() 38 | fscore = synaptic_partners_evaluation.fscore(test.read_annotations(), truth.read_annotations(), truth.read_neuron_ids()) 39 | 40 | print "Synaptic partners" 41 | print "=================" 42 | print "\tfscore: " + str(fscore) 43 | -------------------------------------------------------------------------------- /example_read.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | from cremi import Annotations, Volume 4 | from cremi.io import CremiFile 5 | import numpy as np 6 | import random 7 | 8 | # Open a file for reading 9 | file = CremiFile("example.hdf", "r") 10 | 11 | # Check the content of the datafile 12 | print "Has raw: " + str(file.has_raw()) 13 | print "Has neuron ids: " + str(file.has_neuron_ids()) 14 | print "Has clefts: " + str(file.has_clefts()) 15 | print "Has annotations: " + str(file.has_annotations()) 16 | 17 | # Read everything there is. 18 | # 19 | # If you are using the padded versions of the datasets (where raw is larger to 20 | # provide more context), the offsets of neuron_ids, clefts, and annotations tell 21 | # you where they are placed in nm relative to (0,0,0) of the raw volume. 22 | # 23 | # In other words, neuron_ids, clefts, and annotations are exactly the same 24 | # between the padded and unpadded versions, except for the offset attribute. 
25 | raw = file.read_raw() 26 | neuron_ids = file.read_neuron_ids() 27 | clefts = file.read_clefts() 28 | annotations = file.read_annotations() 29 | 30 | print "Read raw: " + str(raw) + \ 31 | ", resolution " + str(raw.resolution) + \ 32 | ", offset " + str(raw.offset) + \ 33 | ("" if raw.comment == None else ", comment \"" + raw.comment + "\"") 34 | 35 | print "Read neuron_ids: " + str(neuron_ids) + \ 36 | ", resolution " + str(neuron_ids.resolution) + \ 37 | ", offset " + str(neuron_ids.offset) + \ 38 | ("" if neuron_ids.comment == None else ", comment \"" + neuron_ids.comment + "\"") 39 | 40 | print "Read clefts: " + str(clefts) + \ 41 | ", resolution " + str(clefts.resolution) + \ 42 | ", offset " + str(clefts.offset) + \ 43 | ("" if clefts.comment == None else ", comment \"" + clefts.comment + "\"") 44 | 45 | print "Read annotations:" 46 | for (id, type, location) in zip(annotations.ids(), annotations.types(), annotations.locations()): 47 | print str(id) + " of type " + type + " at " + str(np.array(location)+np.array(annotations.offset)) 48 | print "Pre- and post-synaptic partners:" 49 | for (pre, post) in annotations.pre_post_partners: 50 | print str(pre) + " -> " + str(post) 51 | -------------------------------------------------------------------------------- /example_write.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | from cremi import Annotations, Volume 4 | from cremi.io import CremiFile 5 | import numpy as np 6 | import random 7 | 8 | 9 | # Create some dummy annotation data 10 | annotations = Annotations() 11 | for id in [ 0, 1, 2, 3 ]: 12 | location = (random.randint(0, 100), random.randint(0, 100), random.randint(0, 100)) 13 | annotations.add_annotation(id, "presynaptic_site", location) 14 | for id in [ 4, 5, 6, 7 ]: 15 | location = (random.randint(0, 100), random.randint(0, 100), random.randint(0, 100)) 16 | annotations.add_annotation(id, "postsynaptic_site", location) 17 | for (pre, post) in [ (0, 4), (1, 5), (2, 6), (3, 7) ]: 18 | annotations.set_pre_post_partners(pre, post) 19 | annotations.add_comment(6, "unsure") 20 | 21 | # Open a file for writing (deletes previous file, if exists) 22 | file = CremiFile("example.hdf", "w") 23 | 24 | # Write the raw volume. This is given here just for illustration. For your 25 | # submission, you don't need to store the raw data. We have it already. 26 | raw = Volume(np.zeros((10,100,100), dtype=np.uint8), resolution=(40.0, 4.0, 4.0)) 27 | file.write_raw(raw) 28 | 29 | # Write volumes representing the neuron and synaptic cleft segmentation. 30 | neuron_ids = Volume(np.ones((10,100,100), dtype=np.uint64), resolution=(40.0, 4.0, 4.0), comment="just ones") 31 | clefts = Volume(np.zeros((10,100,100), dtype=np.uint64), resolution=(40.0, 4.0, 4.0), comment="just zeros") 32 | file.write_neuron_ids(neuron_ids) 33 | file.write_clefts(clefts) 34 | 35 | # Write synaptic partner annotations. 
36 | file.write_annotations(annotations) 37 | 38 | file.close() 39 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | h5py 2 | numpy 3 | scipy 4 | munkres 5 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from distutils.core import setup 4 | 5 | setup( 6 | name='cremi', 7 | version='0.7', 8 | description='Python Package for the CREMI Challenge', 9 | author='Jan Funke', 10 | author_email='jfunke@iri.upc.edu', 11 | url='http://github.com/funkey/cremi_python', 12 | packages=['cremi', 'cremi.io', 'cremi.evaluation'], 13 | ) 14 | -------------------------------------------------------------------------------- /tests/example.h5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cremi/cremi_python/aa6d211f177ad7f0364ed027ea0314803d114c04/tests/example.h5 -------------------------------------------------------------------------------- /tests/example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cremi/cremi_python/aa6d211f177ad7f0364ed027ea0314803d114c04/tests/example.png -------------------------------------------------------------------------------- /tests/test_border.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import numpy as np 4 | import skimage.io as io 5 | from skimage.viewer import CollectionViewer 6 | 7 | import cremi.evaluation as evaluation 8 | import cremi.io.CremiFile as CremiFile 9 | 10 | 11 | if __name__ == "__main__": 12 | img = [io.imread('example.png')] 13 | 14 | for w in (0, 2, 4, 10): 15 | target = np.copy(img[0])[...,np.newaxis] 16 | evaluation.create_border_mask(img[0][...,np.newaxis],target,w,105,axis=2) 17 | img.append(target[...,0]) 18 | 19 | v = CollectionViewer(img) 20 | v.show() 21 | 22 | cfIn = CremiFile('example.h5', 'r') 23 | cfOut = CremiFile('output.h5', 'a') 24 | 25 | evaluation.create_and_write_masked_neuron_ids(cfIn, cfOut, 3, 240, overwrite=True) 26 | --------------------------------------------------------------------------------
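The border masking exercised in `tests/test_border.py` above is the same `create_border_mask` mechanism that the `NeuronIds` evaluation applies when a `border_threshold` is given. A minimal sketch of using it through the evaluation class (the file names and the 25 nm threshold are placeholders, not values prescribed by the challenge):

```python
from cremi.io import CremiFile
from cremi.evaluation import NeuronIds

test = CremiFile("test.hdf", "r")          # hypothetical submission file
truth = CremiFile("groundtruth.hdf", "r")  # hypothetical ground truth file

# Ground-truth pixels within 25 nm of a label border in the same section are
# relabelled as background and ignored when the scores are computed.
evaluation = NeuronIds(truth.read_neuron_ids(), border_threshold=25)
voi_split, voi_merge = evaluation.voi(test.read_neuron_ids())
adapted_rand = evaluation.adapted_rand(test.read_neuron_ids())
```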