├── .gitattributes
├── .gitignore
├── AutoDisk_Demo.ipynb
├── LICENSE
├── README.md
├── autodisk.py
├── kernel_cir.npy
└── pdpt_x64_y64.raw
/.gitattributes:
--------------------------------------------------------------------------------
1 | *.raw filter=lfs diff=lfs merge=lfs -text
2 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | pip-wheel-metadata/
24 | share/python-wheels/
25 | *.egg-info/
26 | .installed.cfg
27 | *.egg
28 | MANIFEST
29 |
30 | # PyInstaller
31 | # Usually these files are written by a python script from a template
32 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
33 | *.manifest
34 | *.spec
35 |
36 | # Installer logs
37 | pip-log.txt
38 | pip-delete-this-directory.txt
39 |
40 | # Unit test / coverage reports
41 | htmlcov/
42 | .tox/
43 | .nox/
44 | .coverage
45 | .coverage.*
46 | .cache
47 | nosetests.xml
48 | coverage.xml
49 | *.cover
50 | *.py,cover
51 | .hypothesis/
52 | .pytest_cache/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # IPython
81 | profile_default/
82 | ipython_config.py
83 |
84 | # pyenv
85 | .python-version
86 |
87 | # pipenv
88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
91 | # install all needed dependencies.
92 | #Pipfile.lock
93 |
94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
95 | __pypackages__/
96 |
97 | # Celery stuff
98 | celerybeat-schedule
99 | celerybeat.pid
100 |
101 | # SageMath parsed files
102 | *.sage.py
103 |
104 | # Environments
105 | .env
106 | .venv
107 | env/
108 | venv/
109 | ENV/
110 | env.bak/
111 | venv.bak/
112 |
113 | # Spyder project settings
114 | .spyderproject
115 | .spyproject
116 |
117 | # Rope project settings
118 | .ropeproject
119 |
120 | # mkdocs documentation
121 | /site
122 |
123 | # mypy
124 | .mypy_cache/
125 | .dmypy.json
126 | dmypy.json
127 |
128 | # Pyre type checker
129 | .pyre/
130 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2021 swang59
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # AutoDisk Demonstration
2 | 
3 | AutoDisk is an automated convergent beam electron diffraction (CBED) pattern analysis method. It detects the diffraction disks in each CBED pattern of a four-dimensional scanning transmission electron microscopy (4D-STEM) dataset, refines the disk positions, and extracts the lattice parameters if a 2D crystal lattice is found. The method performs lattice parameter estimation and strain mapping over sampling areas from nanometers to micrometers, and can serve as the basis for high-throughput phase, symmetry, orientation, and general crystallographic analysis.
4 | This demonstration shows AutoDisk used to map the strain of a core-shell Pd@Pt nanoparticle. The Jupyter notebook "AutoDisk_Demo.ipynb" goes through each step, illustrating how the 4D-STEM data is read in and pre-processed, how the disks are detected, how the lattice parameters are estimated, and finally how the strain maps are generated. To run the code, download the utilities file "autodisk.py" and the demonstration notebook "AutoDisk_Demo.ipynb", put the data file "pdpt_x64_y64.raw" (a 4D-STEM dataset with 64 * 64 probe positions of 128 * 128 pixel CBED patterns from a square scanning area) in the same folder, and run the notebook (a minimal usage sketch is also given below). You are welcome to tune the parameters and try the method on your own data.
5 | In the current version, the input file needs to be in the .raw format generated by EMPAD. Later versions will support more data formats, add functions such as pattern clustering and strain mapping of amorphous areas, and accelerate the analysis through code optimization.
6 | For details of the method, please refer to the paper "AutoDisk: Automated Diffraction Processing and Strain Mapping in 4D-STEM" by Sihan Wang, Tim Eldred, Jacob Smith and Wenpei Gao (https://authors.elsevier.com/c/1ek4R15DbnYMfX).
7 |
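
A minimal usage sketch of the utilities (illustrative only; the complete, tested workflow is in "AutoDisk_Demo.ipynb", and this assumes "autodisk.py", "pdpt_x64_y64.raw" and "kernel_cir.npy" sit in the working directory):

```python
import autodisk as ad

# Load the demo scan: 64 * 64 probe positions of 128 * 128 pixel CBED patterns.
data = ad.readData("pdpt_x64_y64.raw")

# Sum all patterns and estimate the zero-order disk center and radius from the sum.
avg = ad.generateAvg(data)
ctr, r = ad.ctrRadiusIni(avg)

# Build the ring-shaped cross-correlation kernel (here from the pre-defined template).
kernel = ad.generateKernel(avg, ctr, r, pre_def=True)

# Display the summed pattern rescaled to 8-bit.
ad.visual(avg)
```

Disk detection, lattice fitting and strain mapping (`radGradMax`, `detAng`, `latFit`, `latBack`, `latDist`) then follow the steps in the notebook.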
--------------------------------------------------------------------------------
/autodisk.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # -*- coding: utf-8 -*-
3 | """
4 | AutoDisk version 1.0
5 |
6 | @author: Sihan Wang(swang59@ncsu.edu)
7 | """
8 |
9 | import numpy as np
10 | import cv2
11 | import os
12 | import matplotlib.pyplot as plt
13 | import copy
14 | from skimage import feature,draw
15 | from skimage.feature import blob_log
16 | from skimage.io import imsave
17 | from skimage.transform import resize
18 | from scipy import stats,signal
19 |
20 |
21 |
22 | ###################################################################
23 | # This file includes the utilities of AutoDisk, an automated diffraction
24 | # pattern analysis method for 4D-STEM. This version covers the functions
25 | # for diffraction disk recognition, lattice parameter estimation and
26 | # lattice strain mapping.
27 | #
28 | # For details about the method, please refer to the manuscript:
29 | # "AutoDisk: Automated Diffraction Processing and Strain Mapping in 4D-STEM"
30 | # by Sihan Wang, Tim Eldred, Jacob Smith and Wenpei Gao.
31 | ###################################################################
32 |
33 |
34 | def visual(image,plot = True):
35 | """
36 |     Convert a 2D array of int or float to an 8-bit (uint8) image and optionally display it.
37 |
38 | Parameters
39 | ----------
40 | image : 2D array of int or float
41 | plot : bool, optional
42 |         True if the image needs to be plotted. The default is True.
43 |
44 | Returns
45 | -------
46 |     image_out: 2D array of uint8
47 |
48 | """
49 | image_out = (((image - image.min()) / (image.max() - image.min())) * 255).astype(np.uint8)
50 | if plot==True:
51 | plt.imshow(image_out,cmap='gray')
52 | plt.show()
53 |
54 | return image_out
55 |
56 |
57 |
58 | def readData(dname):
59 | """
60 |     Read in a 4D-STEM data file (EMPAD .raw, stored as float32 frames of 130 x 128 pixels).
61 |
62 | Parameters
63 | ----------
64 | dname : str
65 | Name of the data file.
66 |
67 | Returns
68 | -------
69 | data: 4D array of int or float
70 | The read-in 4D-STEM data.
71 |
72 | """
73 | dimy = 130
74 | dimx = 128
75 |
76 | file = open(dname,'rb')
77 | data = np.fromfile(file, np.float32)
78 | pro_dim = int(np.sqrt(len(data)/dimx/dimy))
79 |
80 | data = np.reshape(data, (pro_dim, pro_dim, dimy, dimx))
81 | data = data[:,:,1:dimx+1, :]
82 | file.close()
83 |
84 | return data
85 |
86 |
87 |
88 | def savePat(out_dir, data, ext ='.tif'):
89 | """
90 |     Save each diffraction pattern as a separate image file (default '.tif').
91 |
92 | Parameters
93 | ----------
94 | out_dir : str
95 | The name of the save folder.
96 |     data : 4D array of int or float
97 |         The 4D dataset.
98 | ext : str, optional
99 | Extension of the output pattern. The default is '.tif'.
100 |
101 | Returns
102 | -------
103 | None.
104 |
105 | """
106 | pro_dim,pro_dim = data.shape[:2]
107 | out_dir = os.path.join(out_dir)
108 | for i in range(pro_dim):
109 | for j in range(pro_dim):
110 | pattern = data[i,j]
111 |             imsave(out_dir+str(i)+'_'+str(j)+ext, pattern, plugin="tifffile")
112 |
113 | pass
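# Usage sketch (illustrative; "patterns/" is a hypothetical, pre-existing output prefix):
# savePat("patterns/", data) writes one image per probe position, named "<row>_<col>.tif".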
114 |
115 |
116 |
117 | def generateAdf(data,in_rad,out_rad,save=False):
118 | """
119 | Generate an annular dark-field image from the diffraction patterns.
120 |
121 | Parameters
122 | ----------
123 | data : 4D array of int or float
124 | The 4D dataset.
125 | in_rad : int
126 | Inner collection angle.
127 | out_rad : int
128 | Outer collection angle.
129 |
130 | Returns
131 | -------
132 |         None. The ADF image is displayed with matplotlib; the save flag is currently unused.
133 |
134 | """
135 | imgh,imgw,pxh,pxw = data.shape
136 | i = imgh//2
137 | j = imgw//2
138 |
139 | data[np.where(np.isnan(data)==True)] = 0
140 | data[i,j,:,:] -= np.min(data[i,j,:,:])
141 | data[i,j,:,:] += 0.0000000001
142 |
143 | mask_img = np.zeros((pxh,pxw,3))
144 |
145 | rr,cc = draw.disk((pxh//2,pxw//2),out_rad)
146 | draw.set_color(mask_img,[rr,cc],[1,1,1])
147 |
148 | rr,cc = draw.disk((pxh//2,pxw//2),in_rad)
149 | draw.set_color(mask_img,[rr,cc],[0,0,0])
150 |
151 | adf=np.mean(data*mask_img[:,:,0],axis=(-2,-1))
152 |
153 | plt.imshow(adf,cmap='gray')
154 | plt.show()
155 |
156 | pass
157 |
158 |
159 |
160 | def generateAvg(data):
161 | """
162 | Generate an average (sum) pattern from the 4D dataset.
163 |
164 | Parameters
165 | ----------
166 | data : 4D array of int or float
167 | Array of the 4D dataset.
168 |
169 | Returns
170 | -------
171 | avg_pat: 2D array of int or float
172 |         The summed diffraction pattern (divide by the number of patterns for a true average).
173 |
174 | """
175 | pro_y,pro_x = data.shape[:2]
176 | avg_pat = data[0,0]*1
177 | avg_pat[:,:] = 0
178 | for row in range (pro_y):
179 | for col in range (pro_x):
180 | avg_pat += data[row,col]
181 |
182 | return avg_pat
183 |
184 |
185 |
186 | def ctrRadiusIni(pattern):
187 | """
188 | Find the center coordinate and the radius of the zero-order disk.
189 |
190 | Parameters
191 | ----------
192 | pattern : 2D array of int or float
193 | A diffraction pattern.
194 |
195 | Returns
196 | -------
197 | ctr : 1D array of int or float
198 | Array of the center coordinates [row,col].
199 | avg_r : float
200 | Radius of the center disk in unit of pixels.
201 |
202 | """
203 | h,w = pattern.shape
204 | ctr = h//2
205 | pix_w = pattern[ctr,:]
206 | pix_h = pattern[:,ctr]
207 |
208 | fir_der_w = np.abs(pix_w[:1]-pix_w[1:])
209 | sec_dir_w_r = np.array(fir_der_w[w//2:-1]-fir_der_w[w//2+1:])
210 | sec_dir_w_l = np.array(fir_der_w[1:w//2]-fir_der_w[:w//2-1])
211 | avg_pos1_w = np.where(sec_dir_w_r==sec_dir_w_r.max())[0][0]
212 | avg_pos2_w = np.where(sec_dir_w_l==sec_dir_w_l.max())[0][0]
213 | avg_r_w = np.mean([avg_pos1_w+1,len(sec_dir_w_l)-avg_pos2_w])
214 | ctr_w = np.mean([w//2 + avg_pos1_w + 1,avg_pos2_w + 2])
215 |
216 | fir_der_h = np.abs(pix_h[:1]-pix_h[1:])
217 | sec_dir_h_b = np.array(fir_der_h[h//2:-1]-fir_der_h[h//2+1:])
218 | sec_dir_h_u = np.array(fir_der_h[1:h//2]-fir_der_h[:h//2-1])
219 | avg_pos1_h = np.where(sec_dir_h_b==sec_dir_h_b.max())[0][0]
220 | avg_pos2_h = np.where(sec_dir_h_u==sec_dir_h_u.max())[0][0]
221 | avg_r_h = np.mean([avg_pos1_h+1,len(sec_dir_h_u)-avg_pos2_h])
222 | ctr_h = np.mean([h//2 + avg_pos1_h + 1, avg_pos2_h+2])
223 |
224 | avg_r = np.mean([avg_r_w,avg_r_h])
225 | ctr = np.array([ctr_h,ctr_w])
226 |
227 | return ctr,avg_r
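# Typical use (illustrative): estimate the center and disk radius once from the summed
# pattern and reuse them for kernel generation and disk detection, e.g.
#     avg = generateAvg(data)
#     ctr, r = ctrRadiusIni(avg)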
228 |
229 |
230 |
231 | def generateKernel(pattern,ctr,r,c=0.7,pad=2,pre_def = False):
232 | """
233 |     Generate the kernel for cross-correlation based on the center disk.
234 |
235 | Parameters
236 | ----------
237 | pattern : 2D array of int or float
238 | An array of a diffraction pattern.
239 | ctr : 1D array of float
240 | Array of the row and column coordinates of the center.
241 | r : float
242 | Radius of a disk.
243 | c : float, optional
244 |         A coefficient to scale the kernel size. The default is 0.7.
245 |     pad : int, optional
246 |         A hyperparameter controlling the padding size around the feature. The default is 2.
247 | pre_def: bool, optional
248 | If True, read the pre-defined ring kernel. The default is False.
249 |
250 | Returns
251 | -------
252 | fil_ring : 2D array of float
253 | Array of the kernel.
254 |
255 | """
256 | if pre_def == True:
257 | ring = np.load("kernel_cir.npy")
258 | f_size = int(2*r*c)
259 | ring = resize(ring, (f_size, f_size))
260 | fil_ring = np.zeros((len(ring)+2*pad,len(ring)+2*pad),dtype=float)
261 | fil_ring[pad:-pad,pad:-pad] = ring
262 |
263 | return fil_ring
264 |
265 |
266 | y_st = int(ctr[0]-r+0.5-pad*2)
267 | y_end = int(ctr[0]+r+0.5+pad*2)
268 | x_st = int(ctr[1]-r+0.5-pad*2)
269 | x_end = int(ctr[1]+r+0.5+pad*2)
270 |     # +0.5 so that int() rounds to the nearest integer instead of truncating (e.g. x.5 rounds up to x+1)
271 |
272 | if y_end-y_st==x_end-x_st:
273 | ctr_disk = pattern[y_st:y_end,x_st:x_end]
274 | elif y_end-y_st>x_end-x_st:
275 | ctr_disk = pattern[y_st+1:y_end,x_st:x_end]
276 | else:
277 | ctr_disk = pattern[y_st:y_end,x_st+1:x_end]
278 |
279 | edge_det = feature.canny(ctr_disk, sigma=1)
280 |
281 | dim = len(ctr_disk)
282 | dim_hf = dim/2
283 | fil_ring = np.zeros((dim,dim))
284 | for i in range (dim):
285 | for j in range (dim):
286 | if edge_det[i,j]==True:
287 |                 if (i-dim_hf)**2+(j-dim_hf)**2>int(r-2)**2 and (i-dim_hf)**2+(j-dim_hf)**2<int(r+2)**2:
288 |                     fil_ring[i,j] = 1
390 |             if np.any(blobs_log[i,0]>sh-f_size-5) or np.any(blobs_log[i,1]>sw-f_size-5):
391 | rem.append(i)
392 |
393 | blobs_log_out = np.delete(blobs_log, rem, axis =0)
394 | blobs_log_out -= f_size
395 |
396 | blobs = blobs_log_out[:,:2].astype(int)
397 |
398 | return blobs
399 |
400 |
401 |
402 | def radGradMax(sample, blobs, r, rn=20, ra=2, n_p=40, threshold=3):
403 | """
403 |     Radial gradient maximum process to refine the detected disk positions.
405 |
406 | Parameters
407 | ----------
408 | sample : 2D array of float or int
409 | The diffraction pattern.
410 | blobs : 2D array of int or float
411 | Blob coordinates.
412 | r : float
413 | Radius of the disk
414 | rn : int, optional
415 | The total number of rings. The default is 20.
416 | ra : int, optional
417 | Half of the window size. The default is 2.
418 | n_p : int, optional
419 | The number of sampling points on a ring. The default is 40.
420 | threshold : float, optional
421 | A threshold to filter out outliers. The smaller the threshold is, the more outliers are detected. The default is 3.
422 |
423 | Returns
424 | -------
425 | ref_ctr : 2D array of float
426 | Array with three columns, y component, x component and the weight of each detected disk.
427 |
428 | """
429 | ori_ctr = blobs
430 | h,w = sample.shape
431 | adjr = r * 1
432 | r_scale = np.linspace(adjr*0.8, adjr*1.2, rn)
433 | theta = np.linspace(0, 2*np.pi, n_p)
434 | ref_ctr = []
435 |
436 | for lp in range (len(ori_ctr)):
437 | test_ctr = ori_ctr[lp]
438 | ind_list = []
439 | for ca in range (-ra,ra):
440 | for cb in range (-ra,ra):
441 | cur_row, cur_col = test_ctr[0]+ca, test_ctr[1]+cb
442 | cacb_rn = np.empty(rn)
443 | for i in range (rn):
444 | row_coor = np.array([cur_row + r_scale[i] * np.sin(theta) + 0.5]).astype(int)
445 | col_coor = np.array([cur_col + r_scale[i] * np.cos(theta) + 0.5]).astype(int)
446 |
447 | row_coor[row_coor>=h]=h-1
448 | row_coor[row_coor<0]=0
449 | col_coor[col_coor>=w]=w-1
450 | col_coor[col_coor<0]=0
451 |
452 | int_sum = np.sum(sample[row_coor,col_coor])
453 | cacb_rn[i] = int_sum
454 |
455 | cacb_rn[:rn//2] *= np.linspace(1,rn//2,rn//2)
456 | cacb_diff = np.sum(cacb_rn[:rn//2]) - np.sum(cacb_rn[rn//2:])
457 | ind_list.append([cur_row, cur_col,cacb_diff])
458 |
459 |
460 | ind_list = np.array(ind_list)
461 | ind_max = np.where(ind_list[:,2]==ind_list[:,2].max())[0][0]
462 | ref_ctr.append(ind_list[ind_max])
463 |
464 | ref_ctr = np.array(ref_ctr)
465 |
466 | # Check Outliers
467 | z = np.abs(stats.zscore(ref_ctr[:,2]))
468 | outlier = np.where(z>threshold)
469 | if len(outlier[0])>0:
470 | for each in outlier[0]:
471 | if np.linalg.norm(ref_ctr[each,:2]-[h//2,w//2])> r:
472 | ref_ctr = np.delete(ref_ctr,outlier[0],axis = 0)
473 |
474 | return ref_ctr
475 |
476 |
477 |
478 | def detAng(ref_ctr,ctr,r): # threshold: accepted angle difference
479 | """
480 | Detect an angle to rotate the disk coordinates.
481 |
482 | Parameters
483 | ----------
484 | ref_ctr : 2D array of float
485 | Array of disk position coordinates and their corresponding weights
486 | ctr : 1D array of float
487 | Center of the zero-order disk.
488 | r : float
489 | Radius of the disks.
490 |
491 | Returns
492 | -------
493 | wt_ang : float
494 | The rotation angle.
495 | ref_ctr : 2D array of float
496 | Refined disk positions.
497 |
498 | """
499 | ctr_vec = ref_ctr[:,:2] - ctr
500 | ctr_diff = ctr_vec[:,0]**2 + ctr_vec[:,1]**2
501 | ctr_idx = np.where(ctr_diff==ctr_diff.min())[0][0]
502 |
503 | diff = ref_ctr[:,:2]-ctr
504 | distance = diff[:,0]**2 + diff[:,1]**2
505 |
506 | dis_copy = copy.deepcopy(distance)
507 | min_dis = []
508 | while len(min_dis) <5:
509 | cur_min = dis_copy.min()
510 | idx_rem = np.where(dis_copy==cur_min)[0]
511 | dis_copy = np.delete(dis_copy,idx_rem)
512 | idx_ctr = np.where(distance==cur_min)[0]
513 | if len(idx_ctr)==1:
514 | min_dis.append(ref_ctr[idx_ctr[0],:2])
515 | else:
516 | for each in idx_ctr:
517 | min_dis.append(ref_ctr[each,:2])
518 |
519 | min_dis_ctr = np.array(min_dis,dtype = int)
520 |     min_dis_ctr = np.delete(min_dis_ctr,0,axis = 0) # remove the zero-distance entry (the center disk itself)
521 |
522 | vec = min_dis_ctr-ctr
523 |
524 | ang = np.arctan2(vec[:,0],vec[:,1])* 180 / np.pi
525 |
526 | for i in range (len(ang)):
527 | ang[i] = (180 + ang[i]) if (ang[i]<0) else ang[i]
528 |
529 | cand_ang_idx = np.where(ang==ang.min())[0]
530 |     sup_pt = min_dis_ctr[cand_ang_idx] # the point returning the smallest rotation angle
531 |
532 |
533 | ref_diff = ctr-sup_pt
534 | ini_ang = np.arctan2(ref_diff[:,0],ref_diff[:,1])*180/np.pi
535 | all_ref = []
536 | for n in range (len(ini_ang)):
537 | all_ref.append(np.array([ini_ang[n]]))
538 | if len(ref_diff)>1:
539 | ref_diff = ref_diff[0]
540 |
541 | for each_ctr in ref_ctr:
542 | cur_vec = each_ctr[:2] - ref_diff
543 | cur_diff = ref_ctr[:,:2]-cur_vec
544 | cur_norm = np.linalg.norm(cur_diff,axis=1)
545 |         if cur_norm.min()<r:
553 |     for i in range (len(all_ref)):
554 |         if all_ref[i] >= 180:
555 | all_ref[i] = 180 - all_ref[i]
556 |
557 | wt_ang = np.mean(all_ref)
558 | ref_ctr[ctr_idx,2] = 10**38
559 |
560 | return wt_ang, ref_ctr
561 |
562 |
563 |
564 | def rotImg(image, angle, ctr):
565 | """
566 | Rotate a pattern.
567 |
568 | Parameters
569 | ----------
570 | image : 2D array of int or float
571 | The input pattern.
572 | angle : float
573 |         The rotation angle in degrees.
574 | ctr : 1D array of int or float
575 | The rotation center.
576 |
577 | Returns
578 | -------
579 | result : 2D array of int or float
580 | The rotated pattern.
581 |
582 | """
583 | image_center = tuple(np.array([ctr[0],ctr[1]]))
584 | rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
585 | result = cv2.warpAffine(image, rot_mat, image.shape[1::-1], flags=cv2.INTER_LINEAR)
586 |
587 | return result
588 |
589 |
590 |
591 | def rotCtr(pattern,ref_ctr,angle):
592 | """
593 | Rotate disk coordinates.
594 |
595 | Parameters
596 | ----------
597 | pattern : 2D array of int or float
598 | A diffraction pattern.
599 | ref_ctr : 2D array of float
600 | Array of the detected disk positions.
601 | angle : float
602 | Detected angle to rotate.
603 |
604 | Returns
605 | -------
606 | ctr_new : 2D array of float
607 | The transformed disk positions.
608 |
609 | """
610 | h,w = pattern.shape
611 | ctr_idx = np.where(ref_ctr[:,2]==ref_ctr[:,2].max())[0][0]
612 | ctr = ref_ctr[ctr_idx]
613 | ctr_new = []
614 | ang_rad = angle*np.pi/180
615 |
616 | for i in range (len(ref_ctr)):
617 | cur_cd = ref_ctr[i,:2]
618 | y_new = -(ctr[0] - (cur_cd[0]-ctr[0])*np.cos(ang_rad) + (cur_cd[1]-ctr[1])*np.sin(ang_rad) ) + 2*ctr[0]
619 | x_new = (ctr[1] + (cur_cd[0]-ctr[0])*np.sin(ang_rad) + (cur_cd[1]-ctr[1])*np.cos(ang_rad) )
620 |
621 |         if y_new>0 and x_new>0 and y_new<h and x_new<w:
622 |             ctr_new.append([y_new,x_new,ref_ctr[i,2]])
663 |         if np.abs(load_ctr[i][0]-g_y[gy_ind][0][0])>r:
664 | g_y.append([load_ctr[i]])
665 | else:
666 | g_y[gy_ind].append(load_ctr[i])
667 |
668 | return g_y
669 |
670 |
671 |
672 | def latFit(pattern,rot_ref_ctr,r):
673 | """
674 | Lattice fitting process.
675 |
676 | Parameters
677 | ----------
678 | pattern : 2D array of int or float
679 | A diffraction pattern.
680 | rot_ref_ctr : 2D array of float
681 |         Array of the disk positions.
682 | r : float
683 | Radius of the disks.
684 |
685 | Returns
686 | -------
687 | vec_a : 1D array of float
688 | The estimated horizontal lattice vector [y component, x component].
689 | vec_b_ref : 1D array of float
690 | The estimated non-horizontal lattice vector [y component, x component].
691 | result_ctr : 2D array of float
692 | Array of the refined disk positions.
693 | lat_ctr_arr : 2D array of float
694 | The array of the positions of disks in the middle row.
695 | avg_ref_ang : float
696 | Refined rotation angle.
697 |
698 | """
699 | load_ctr = rot_ref_ctr*1
700 | g_y = groupY(load_ctr,r)
701 |
702 | vec_a = np.array([0,0])
703 | vec_b_ref = np.array([0,0])
704 |
705 | result_ctr = copy.deepcopy(rot_ref_ctr)
706 | lat_ctr = []
707 | avg_ref_ang = 0
708 |
709 | ########## Sort y values in each group and refine the angle ##########
710 | ref_ang = []
711 | for ea_g in g_y:
712 | if len(ea_g)>1:
713 | ea_g_arr = np.array(ea_g)
714 |
715 | result = np.polyfit(ea_g_arr[:,1], ea_g_arr[:,0], 1)
716 | ref_ang.append(np.arctan2(result[0],1)* 180 / np.pi)
717 |
718 | if len(ref_ang)>0:
719 | avg_ref_ang = sum(ref_ang)/len(ref_ang)
720 | else:
721 | avg_ref_ang = 0
722 |
723 | rot_ref_ctr2 = rotCtr(pattern,load_ctr,avg_ref_ang)
724 |
725 | g_y = groupY(rot_ref_ctr2,r)
726 |
727 | g_y_len = [len(l) for l in g_y]
728 |
729 | if max(g_y_len)>1:
730 | ################ Refine y values #######################
731 | n = len(rot_ref_ctr2)
732 | ref_y = []
733 | for group in g_y:
734 | cur_mean = 0
735 | sum_cur = 0
736 | for each in group:
737 | sum_cur += each[2]
738 | for each in group:
739 | cur_mean += each[0]*(each[2]/sum_cur)
740 | ref_y.append(cur_mean) # Weighted mean
741 |
742 | # Change y values to the averaged y in each group
743 | result_ctr = copy.deepcopy(rot_ref_ctr2)
744 | for j in range (n):
745 | cur_y = rot_ref_ctr2[j,0]
746 | d_y = [np.abs(s-cur_y) for s in ref_y]
747 | min_y_ind = np.argmin(d_y)
748 | result_ctr[j][0] = ref_y[min_y_ind]
749 |
750 | ################ Vec a #######################
751 | x_g = []
752 | tit_diff_x = []
753 | for cur_y in ref_y:
754 | cur_x_g = result_ctr[np.where(result_ctr[:,0]== cur_y)]
755 | if len(cur_x_g)>1:
756 | cur_x_g.sort(axis = 0)
757 | x_g.append(cur_x_g)
758 | cur_diff_x = cur_x_g[1:]-cur_x_g[:-1]
759 | tit_diff_x.append(cur_diff_x)
760 | else:
761 | x_g.append(cur_x_g)
762 |
763 | ###################### Calculate average distance ################
764 | if len(tit_diff_x)>0:
765 | outl_rem_x = []
766 | mean_diff_x = []
767 |
768 | for i in range (len(tit_diff_x)):
769 | for x in tit_diff_x[i]:
770 | outl_rem_x.append(x[1])
771 |
772 | outl_rem_x = np.array(outl_rem_x)
773 | q1, q3= np.percentile(outl_rem_x,[25,75])
774 | lower_bound = 2.5*q1 - 1.5*q3
775 | upper_bound = 2.5*q3 - 1.5*q1
776 |
777 | for each_g in tit_diff_x:
778 | each_g_mod = each_g*1
779 | for idx in range (len(each_g)):
780 |                 if each_g[idx,1]<lower_bound or each_g[idx,1]>upper_bound:
781 | each_g_mod = np.delete(each_g,idx,axis = 0)
782 |
783 | if len(each_g_mod)>0:
784 | cur_mean = np.mean(each_g_mod[:,1],axis=0)
785 | mean_diff_x.append([cur_mean,len(each_g_mod)])
786 |
787 | mean_diff_x_arr = np.array(mean_diff_x)
788 |
789 | if len(mean_diff_x_arr)>0:
790 | count = 0
791 | sum_x = 0
792 | for i in range (len(mean_diff_x_arr)):
793 | sum_x += mean_diff_x_arr[i,0]* mean_diff_x_arr[i,1]
794 | count += mean_diff_x_arr[i,1]
795 |
796 | vec_a = np.array([0, sum_x/count])
797 |
798 | ######### Find vector b #########
799 | set_ct_ind = np.argmax(result_ctr[:,2])
800 | set_ct = result_ctr[set_ct_ind]
801 |
802 | # Find rough b
803 | min_nn = 10**38
804 | nn_vecb_rough = np.array([-1,-1,-1])
805 | for gn in range (len(x_g)):
806 | cur_ct = x_g[gn]
807 | if set_ct[0] not in cur_ct[:,0]:
808 | dis_xy = cur_ct - set_ct
809 | dis_norm = np.linalg.norm(dis_xy[:,:2],axis = 1)
810 | xy_min = np.min(dis_norm)
811 | if xy_min<=min_nn:
812 | min_nn = xy_min
813 | nn_vecb_rough = cur_ct[np.argmin(dis_norm)]
814 |
815 | # Generate hypothetical lattice
816 | h,w = pattern.shape
817 | lat_ctr = [set_ct[:2]]
818 |
819 | ###### Generate pts along vector a (middle row) ######
820 | # one side
821 | cur_h1 = set_ct[0]
822 | cur_w1 = set_ct[1]
823 | cur_ct1 = set_ct[:2]*1
824 | while cur_h1>=0 and cur_h1<=h and cur_w1>=0 and cur_w1<=w:
825 | cur_h1,cur_w1 = cur_ct1-vec_a
826 | if cur_h1>=0 and cur_h1<=h and cur_w1>=0 and cur_w1<=w:
827 | cur_ct1 = [cur_h1,cur_w1]
828 | lat_ctr.append([cur_h1,cur_w1])
829 |
830 | # the other side
831 | cur_h2 = set_ct[0]
832 | cur_w2 = set_ct[1]
833 | cur_ct2 = set_ct[:2]*1.0
834 | while cur_h2>=0 and cur_h2<=h and cur_w2>=0 and cur_w2<=w:
835 | cur_h2,cur_w2 = cur_ct2+vec_a
836 | if cur_h2>=0 and cur_h2<=h and cur_w2>=0 and cur_w2<=w:
837 | cur_ct2 = [cur_h2,cur_w2]
838 | lat_ctr.append([cur_h2,cur_w2])
839 |
840 | ######### Refine Vector b #########
841 | vec_b = nn_vecb_rough - set_ct
842 | if vec_b[0]<0:
843 | vec_b = -vec_b
844 |
845 | vec_b_rough = vec_b [:2]
846 |
847 | diff_y_ref = []
848 |
849 | look_y = set_ct[0]-vec_b_rough[0]
850 | est_ct = lat_ctr - vec_b_rough
851 | while look_y>0:
852 | for each in est_ct:
853 | each_diff_xy = each - result_ctr[:,:2]
854 |
855 | each_dis = each_diff_xy[:,0]**2+each_diff_xy[:,1]**2
856 | each_dis_min = np.min(each_dis)
857 |                 if each_dis_min<r**2:
925 |     while cur_h1>=0 and cur_h1<=h and cur_w1>=0 and cur_w1<=w:
926 | cur_h1,cur_w1 = cur_ct1-vecb
927 | if cur_h1>=0 and cur_h1<=h and cur_w1>=0 and cur_w1<=w:
928 | cur_ct1 = [cur_h1,cur_w1]
929 | final_ctr.append([cur_h1,cur_w1])
930 |
931 | # the other side
932 | cur_h2 = cur_veca_ct[0]
933 | cur_w2 = cur_veca_ct[1]
934 | cur_ct2 = cur_veca_ct*1
935 |
936 | while cur_h2>=0 and cur_h2<=h and cur_w2>=0 and cur_w2<=w:
937 | cur_h2,cur_w2 = cur_ct2+vecb
938 | if cur_h2>=0 and cur_h2<=h and cur_w2>=0 and cur_w2<=w:
939 | cur_ct2 = [cur_h2,cur_w2]
940 | final_ctr.append([cur_h2,cur_w2])
941 |
942 | ######## Check Again ########
943 | chk_lat_ctr= final_ctr
944 |
945 | for cur_vec2_ct in chk_lat_ctr:
946 | # one side
947 | cur_h1 = cur_vec2_ct[0]
948 | cur_w1 = cur_vec2_ct[1]
949 | cur_ct1 = cur_vec2_ct*1
950 | while cur_h1>=0 and cur_h1<=h and cur_w1>=0 and cur_w1<=w:
951 | cur_h1,cur_w1 = cur_ct1-veca
952 | # print(cur_ct1-veca,cur_h1,cur_w1)
953 | if cur_h1>=0 and cur_h1<=h and cur_w1>=0 and cur_w1<=w:
954 | cur_ct1 = [cur_h1,cur_w1]
955 | dif_chk = [(ct[0]-cur_ct1[0])**2+(ct[1]-cur_ct1[1])**2 for ct in chk_lat_ctr]
956 | if min(dif_chk)> r**2:
957 | final_ctr.append([cur_h1,cur_w1])
958 |
959 | # the other side
960 | cur_h2 = cur_vec2_ct[0]
961 | cur_w2 = cur_vec2_ct[1]
962 | cur_ct2 = cur_vec2_ct*1
963 | while cur_h2>=0 and cur_h2<=h and cur_w2>=0 and cur_w2<=w:
964 | cur_h2,cur_w2 = cur_ct2+veca
965 | if cur_h2>=0 and cur_h2<=h and cur_w2>=0 and cur_w2<=w:
966 | cur_ct2 = [cur_h2,cur_w2]
967 | dif_chk2 = [(ct[0]-cur_ct2[0])**2+(ct[1]-cur_ct2[1])**2 for ct in chk_lat_ctr]
968 | if min(dif_chk2)> r**2:
969 | final_ctr.append([cur_h2,cur_w2])
970 |
971 | for pt in mid_ctr:
972 | final_ctr.append(pt)
973 |
974 | final_ctr = np.array(final_ctr)
975 |
976 | return final_ctr
977 |
978 |
979 |
980 | def delArti(gen_lat_pt,ref_ctr,r):
981 | """
982 |     Delete generated lattice points that do not lie close to any detected disk.
983 |
984 | Parameters
985 | ----------
986 | gen_lat_pt : 2D array of float
987 | Array of artificial disk positions.
988 | ref_ctr : 2D array of float
989 | Array of detected disk positions.
990 | r : float
991 | Radius of the disks.
992 |
993 | Returns
994 | -------
995 | gen_lat_pt_up : 2D array of float
996 | A filtered array of disk positions.
997 |
998 | """
999 | gen_lat_pt_up = []
1000 | for i in range (len(gen_lat_pt)):
1001 | dif_gen_ref = np.array(gen_lat_pt[i] - ref_ctr[:,:2])
1002 | dif_gen_ref_norm = np.linalg.norm(dif_gen_ref,axis = 1)
1003 | if dif_gen_ref_norm.min()< r:
1004 | gen_lat_pt_up.append(gen_lat_pt[i])
1005 |
1006 | gen_lat_pt_up = np.array(gen_lat_pt_up)
1007 |
1008 | return gen_lat_pt_up
1009 |
1010 |
1011 |
1012 | def latBack(refe_a,refe_b,angle):
1013 | """
1014 | Transform the lattice vectors to the default coordinate system.
1015 |
1016 | Parameters
1017 | ----------
1018 | refe_a : 1D array of float
1019 | Array of the vector a.
1020 | refe_b : 1D array of float
1021 | Array of the vector b.
1022 | angle : float
1023 | The rotation angle.
1024 |
1025 | Returns
1026 | -------
1027 | a_init : 1D array of float
1028 | Transformed array of the vector a.
1029 | b_init : 1D array of float
1030 | Transformed array of the vector b.
1031 |
1032 | """
1033 | ang_init_back = angle*np.pi/180
1034 | a_init = np.array([refe_a[1]*np.sin(ang_init_back),refe_a[1]*np.cos(ang_init_back)])
1035 | b_init = np.array([refe_b[1]*np.sin(ang_init_back)+refe_b[0]*np.cos(ang_init_back),refe_b[1]*np.cos(ang_init_back)-refe_b[0]*np.sin(ang_init_back)])
1036 |
1037 | return a_init,b_init
1038 |
1039 |
1040 |
1041 | def drawCircles(ori_pattern,blobs_list,r):
1042 | """
1043 | Label the disk positions on the pattern.
1044 |
1045 | Parameters
1046 | ----------
1047 | ori_pattern : 2D array of int or float
1048 | The pattern to be labeled on.
1049 | blobs_list : 2D array of float
1050 | Array of disk positions.
1051 | r: float
1052 | The radius of the disks.
1053 |
1054 | Returns
1055 | -------
1056 | None.
1057 |
1058 | """
1059 | pattern = copy.deepcopy(ori_pattern)
1060 |
1061 | for q in range (len(blobs_list)):
1062 | center = (int(blobs_list[q,0]),int(blobs_list[q,1]))
1063 | pattern[center] = pattern.max()
1064 |
1065 | fig, ax = plt.subplots(figsize = (5,5))
1066 | ax.imshow(pattern,cmap='gray')
1067 | for blob in blobs_list:
1068 | y, x = blob
1069 | c = plt.Circle((x, y),r, color='red', linewidth=2, fill=False)
1070 | ax.add_patch(c)
1071 |
1072 | plt.show()
1073 |
1074 | pass
1075 |
1076 |
1077 |
1078 | def latDist(lat_par,refe_a,refe_b,err=0.2):
1079 | """
1080 |     Filter out outlier lattice parameters based on the reference vectors.
1081 |
1082 | Parameters
1083 | ----------
1084 | lat_par : 2D array of arrays of float
1085 | 2D array with each element as two arrays of lattice vectors.
1086 | refe_a : 1D array of float
1087 | The reference lattice vector a.
1088 | refe_b : 1D array of float
1089 | The reference lattice vector b.
1090 | err : float, optional
1091 | Acceptable error percentage. The default is 0.2 (20%).
1092 |
1093 | Returns
1094 | -------
1095 | store_whole : 3D array of float
1096 |         Array of shape (scan rows, scan cols, 4) holding the four lattice vector elements
1097 |         (y of vector a, x of vector a, y of vector b, x of vector b) at each probe position.
1098 |
1099 | """
1100 | arr_vec = lat_par
1101 |
1102 | sm_y,sm_x = lat_par.shape[:2]
1103 | std_ax = refe_a[1] # vec_a[0,std_2x]
1104 | std_ay = refe_a[0]
1105 | std_bx = refe_b[1] # vec_b[std_1y,std_1x]
1106 | std_by = refe_b[0]
1107 |
1108 | acc_ax_min = std_ax*(1-err) if std_ax>0 else std_ax*(1+err)
1109 | acc_ax_max = std_ax*(1+err) if std_ax>0 else std_ax*(1-err)
1110 | acc_ay_min = std_ay*(1-err) if std_ay>0 else std_ay*(1+err)
1111 | acc_ay_max = std_ay*(1+err) if std_ay>0 else std_ay*(1-err)
1112 | acc_bx_min = std_bx*(1-err) if std_bx>0 else std_bx*(1+err)
1113 | acc_bx_max = std_bx*(1+err) if std_bx>0 else std_bx*(1-err)
1114 | acc_by_min = std_by*(1-err) if std_by>0 else std_by*(1+err)
1115 | acc_by_max = std_by*(1+err) if std_by>0 else std_by*(1-err)
1116 |
1117 | store_whole = np.zeros((sm_y,sm_x,4),dtype = float)
1118 |
1119 |     # Delete parameter outliers
1120 | ct = 0
1121 | for row in range (sm_y):
1122 | for col in range (sm_x):
1123 |
1124 | each = arr_vec[row,col]
1125 |
1126 | gax = float(each[0,1])
1127 | gay = float(each[0,0])
1128 | gbx = float(each[1,1])
1129 | gby = float(each[1,0])
1130 |
1131 |             if gax>acc_ax_max or gax<acc_ax_min or gay>acc_ay_max or gay<acc_ay_min or gbx>acc_bx_max or gbx<acc_bx_min or gby>acc_by_max or gby<acc_by_min: