├── .gitignore ├── Cargo.toml ├── LICENSE ├── README.md └── src ├── avi.rs ├── bin └── example1.rs ├── blk_match.rs ├── bmp.rs ├── defs.rs ├── filters.rs ├── image.rs ├── img_align.rs ├── img_list.rs ├── img_seq.rs ├── img_seq_priv.rs ├── lib.rs ├── quality.rs ├── ref_pt_align.rs ├── ser.rs ├── stacking.rs ├── tiff.rs ├── triangulation.rs └── utils.rs /.gitignore: -------------------------------------------------------------------------------- 1 | /target/ 2 | **/*.rs.bk 3 | Cargo.lock 4 | -------------------------------------------------------------------------------- /Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "skry" 3 | version = "0.3.0" 4 | authors = ["Filip Szczerek <ga.software@yahoo.com>"] 5 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Filip Szczerek (ga.software@yahoo.com) 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # **libskry_r** 2 | 3 | ## Lucky imaging library 4 | 5 | Copyright (C) 2017 Filip Szczerek (ga.software@yahoo.com) 6 | 7 | *This project is licensed under the terms of the MIT license (see the LICENSE file for details).* 8 | 9 | ---------------------------------------- 10 | 11 | - 1\. Introduction 12 | - 2\. Performance comparison with C 13 | - 2\.1\. Implementation remarks 14 | - 3\. Input/output formats support 15 | - 4\. Principles of operation 16 | - 5\. Change log 17 | 18 | 19 | ---------------------------------------- 20 | ## 1. Introduction 21 | 22 | **libskry_r** implements the *lucky imaging* principle of astronomical imaging: creating a high-quality still image out of a series of many (possibly thousands) low quality ones (blurred, deformed, noisy). The resulting *image stack* typically requires post-processing, including sharpening (e.g. via deconvolution in [***ImPPG***](http://greatattractor.github.io/imppg/)). 23 | 24 | *libskry_r* is a Rust rewrite of [***libskry***](https://github.com/GreatAttractor/libskry), mostly complete. Not yet ported: multi-threading, demosaicing, and video file support via *libav*. 25 | 26 | For a visualization of the stacking process, see the [**Stackistry video tutorial**](https://www.youtube.com/watch?v=_68kEYBXkLw&list=PLCKkDZ7up_-VRMzGQ0bmmiXL39z78zwdE). 27 | 28 | For sample results, see the [**gallery**](https://www.astrobin.com/users/GreatAttractor/collections/131/). 29 | 30 | See `doc/example1.rs` for a usage example. 
31 | 32 | See also the [**Algorithms summary**](https://github.com/GreatAttractor/libskry/raw/master/doc/algorithms.pdf) in the *libskry* repository. 33 | 34 | 35 | ---------------------------------------- 36 | ## 2. Performance comparison with C 37 | 38 | The two goals of rewriting *libskry* were practising Rust and comparing the performance to C99 code. 39 | 40 | The following figures were obtained on a Core i5-3570K under Fedora 25. Each test was preceded by a pre-run to make sure the input video was cached in RAM. In the case of *libskry*, the program was `doc/example1.c`; in the case of *libskry_r*, it was `doc/example1.rs`. Both use the same processing parameters. The raw video `sun01.avi` (840x612 8 bpp mono, 634 frames, Sun in Hα) can be downloaded in the “Releases” section. The C program was forced to use 1 thread with `OMP_NUM_THREADS=1`. 41 | 42 | 43 | | Compiler | Options | Execution time | 44 | |----------------------|-----------------------------------------------------------------------------|----------------| 45 | | rustc 1.23.0-nightly | `RUSTFLAGS="-C opt-level=3 -C target-cpu=native"`, `cargo` with `--release` | 23.4 s | 46 | | GCC 6.4.1 | `-O3 -ffast-math -march=native` | 25.7 s | 47 | | Clang 3.9.1 | `-O3 -ffast-math -march=native` | 23.9 s | 48 | 49 | 50 | ### 2.1 Implementation remarks 51 | 52 | The processing is not very complex. The calculations include: iterated box blur, block matching (both on 8-bit mono images), bilinear interpolation (on 32-bit floating-point pixel values) and Delaunay triangulation (performed only once, typically for a few hundred to a few thousand points, with negligible time impact). The only collection type used is the vector. 53 | 54 | The most time-consuming operation is block matching.
Replacing the inner loop’s body in `blk_match::calc_sum_of_squared_diffs`: 55 | 56 | ```Rust 57 | result += sqr!(img_pix[img_offs + x as usize] as i32 - 58 | rblk_pix[blk_offs + x as usize] as i32) as u64; 59 | ``` 60 | 61 | with an unsafe block: 62 | 63 | 64 | ```Rust 65 | unsafe { 66 | result += sqr!(*img_pix.get_unchecked(img_offs + x as usize) as i32 - 67 | *rblk_pix.get_unchecked(blk_offs + x as usize) as i32) as u64; 68 | } 69 | ``` 70 | 71 | enabled the compiler to vectorize it (using AVX) and reduced the reference point alignment phase’s execution time of `example1.rs` from 26.7 s to 12.6 s. 72 | 73 | On the other hand, doing the same in `filters::box_blur_pass` and `filters::estimate_quality` had a negligible effect on the quality estimation speed. 74 | 75 | Other uses of `unsafe` are not performance-critical: pixel format conversions, reading structures from a file, creating an uninitialized vector. 76 | 77 | In *libskry*, the user is asked to create the processing phase objects in the appropriate order and not to modify the underlying image sequence while processing is in progress. In *libskry_r*, correct usage is enforced by Rust’s borrow checker; this also necessitated modifying the API. Each processing phase is now represented by a `__Proc` struct, which holds a mutable reference to the input image sequence, and a `__Data` struct with this phase’s results (which does not hold any references). Each `__Proc` is created in a sub-scope; each resulting `__Data` is then fed to the next phase’s `__Proc`. See `doc/example1.rs` for details. 78 | 79 | 80 | ---------------------------------------- 81 | ## 3.
Input/output formats support 82 | 83 | Supported input formats: 84 | 85 | - AVI: uncompressed DIB (mono or RGB), Y8/Y800 86 | - SER: mono, RGB 87 | - BMP: 8-, 24- and 32-bit uncompressed 88 | - TIFF: 8- and 16-bit per channel mono or RGB uncompressed 89 | 90 | Supported output formats: 91 | 92 | - BMP: 8- and 24-bit uncompressed 93 | - TIFF: 8- and 16-bit per channel mono or RGB uncompressed 94 | 95 | At the moment there is only limited AVI support (no extended or ODML AVI headers). 96 | 97 | 98 | ---------------------------------------- 99 | ## 4. Principles of operation 100 | 101 | Processing of a raw input image sequence consists of the following steps: 102 | 103 | 1. Image alignment (video stabilization) 104 | 2. Quality estimation 105 | 3. Reference point alignment 106 | 4. Image stacking 107 | 108 | **Image alignment** compensates for any global image drift; the result is a stabilized video of size usually smaller than any of the input images. The (rectangular) region visible in all input images is referred to as the *images’ intersection* throughout the source code. 109 | 110 | **Quality estimation** concerns changes in local image quality. This information is later used to reject (via a user-specified criterion) poor-quality image fragments during reference point alignment and image stacking. 111 | 112 | **Reference point alignment** traces the geometric distortion of the images (using local block matching), which is later compensated for during image stacking. 113 | 114 | **Image stacking** performs shift-and-add summation of image fragments using information from the previous steps. This improves the signal-to-noise ratio. Note that stacking too many images may decrease quality – adding lower-quality fragments causes more blurring in the output stack. 115 | 116 | 117 | ---------------------------------------- 118 | ## 5. Change log 119 | 120 | - 0.3.0 (2017-12-05) 121 | - Initial rewrite (from C) of *libskry* 0.3.0 (a805d0c4a).
122 | -------------------------------------------------------------------------------- /src/avi.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // AVI support. 11 | // 12 | 13 | use bmp; 14 | use image::{Image, ImageError, Palette, PixelFormat, bytes_per_pixel}; 15 | use img_seq_priv::{ImageProvider}; 16 | use std::fs::{File, OpenOptions}; 17 | use std::io; 18 | use std::io::{Read, Seek, SeekFrom}; 19 | use std::mem::{size_of, size_of_val}; 20 | use std::slice; 21 | use utils; 22 | 23 | 24 | /// Four Character Code (FCC) 25 | type FourCC = [u8; 4]; 26 | 27 | 28 | fn fcc_equals(fcc1: &FourCC, fcc2: &[u8]) -> bool { 29 | fcc1[0] == fcc2[0] && 30 | fcc1[1] == fcc2[1] && 31 | fcc1[2] == fcc2[2] && 32 | fcc1[3] == fcc2[3] 33 | } 34 | 35 | #[repr(C, packed)] 36 | struct AviFileHeader { 37 | riff: FourCC, 38 | file_size: u32, 39 | avi: FourCC 40 | } 41 | 42 | 43 | #[repr(C, packed)] 44 | struct AviFrame { 45 | left: i16, 46 | top: i16, 47 | right: i16, 48 | bottom: i16 49 | } 50 | 51 | 52 | #[repr(C, packed)] 53 | struct AviStreamHeader { 54 | fcc_type: FourCC, 55 | fcc_handler: FourCC, 56 | flags: u32, 57 | priority: u16, 58 | language: u16, 59 | initial_frames: u32, 60 | scale: u32, 61 | rate: u32, 62 | start: u32, 63 | length: u32, 64 | suggested_buffer_size: u32, 65 | quality: u32, 66 | sample_size: u32, 67 | frame: AviFrame 68 | } 69 | 70 | 71 | #[repr(C, packed)] 72 | struct AviList { 73 | list: FourCC, // Contains "LIST" 74 | list_size: u32, // Does not include `list` and `list_type` 75 | list_type: FourCC 76 | } 77 | 78 | 79 | #[repr(C, packed)] 80 | struct AviChunk { 81 | ck_id: FourCC, 82 | ck_size: u32 // Does not include `ck_id` and `ck_size` 83 | } 84 | 85 | 86 | /// List 
or chunk (used when skipping `JUNK` chunks) 87 | #[repr(C, packed)] 88 | struct AviFragment { 89 | fcc: FourCC, 90 | size: u32 91 | } 92 | 93 | 94 | #[repr(C, packed)] 95 | struct AviMainHeader { 96 | microsec_per_frame: u32, 97 | max_bytes_per_sec: u32, 98 | padding_granularity: u32, 99 | flags: u32, 100 | total_frames: u32, 101 | initial_frames: u32, 102 | streams: u32, 103 | suggested_buffer_size: u32, 104 | width: u32, 105 | height: u32, 106 | reserved: [u32; 4] 107 | } 108 | 109 | 110 | #[repr(C, packed)] 111 | struct AviOldIndex { 112 | chunk_id: FourCC, 113 | flags: u32, 114 | 115 | /// Offset of frame contents counted from the beginning of the `movi` list's `list_type` field OR absolute file offset. 116 | offset: u32, 117 | 118 | frame_size: u32 119 | } 120 | 121 | 122 | const AVIF_HAS_INDEX: u32 = 0x00000010; 123 | 124 | 125 | #[derive(Copy, Clone, Debug, Eq, PartialEq)] 126 | enum AviPixelFormat { 127 | /// DIB, RGB 8-bit. 128 | DibRGB8, 129 | /// DIB, 256-color 8-bit RGB palette. 130 | DibPal8, 131 | /// DIB, 256-color grayscale palette. 132 | DibMono8, 133 | /// 8 bits per pixel, luminance only. 
134 | Y800 135 | } 136 | 137 | 138 | #[derive(Debug)] 139 | pub enum AviError { 140 | Io(io::Error), 141 | MalformedFile, 142 | IndexNotPresent, 143 | UnsupportedFormat, 144 | InvalidFrame(usize) 145 | } 146 | 147 | 148 | impl From<io::Error> for AviError { 149 | fn from(err: io::Error) -> AviError { AviError::Io(err) } 150 | } 151 | 152 | 153 | impl From<io::Error> for ImageError { 154 | fn from(err: io::Error) -> ImageError { ImageError::AviError(AviError::Io(err)) } 155 | } 156 | 157 | 158 | impl From<AviError> for ImageError { 159 | fn from(err: AviError) -> ImageError { ImageError::AviError(err) } 160 | } 161 | 162 | 163 | fn is_dib(avi_pix_fmt: AviPixelFormat) -> bool { 164 | match avi_pix_fmt { 165 | AviPixelFormat::DibRGB8 | 166 | AviPixelFormat::DibPal8 | 167 | AviPixelFormat::DibMono8 => true, 168 | 169 | _ => false 170 | } 171 | } 172 | 173 | 174 | fn avi_to_image_pix_fmt(avi_pix_fmt: AviPixelFormat) -> PixelFormat { 175 | match avi_pix_fmt { 176 | AviPixelFormat::DibMono8 | AviPixelFormat::Y800 => PixelFormat::Mono8, 177 | AviPixelFormat::DibRGB8 => PixelFormat::RGB8, 178 | AviPixelFormat::DibPal8 => PixelFormat::Pal8 179 | } 180 | } 181 | 182 | 183 | pub struct AviFile { 184 | file_name: String, 185 | 186 | /// Becomes empty after calling `deactivate()`. 187 | file: Option<File>, 188 | 189 | /// Absolute file offsets (point to each frame's `AVI_chunk`). 190 | frame_offsets: Vec<u64>, 191 | 192 | /// Valid for an AVI with palette. 193 | palette: Option<Palette>, 194 | 195 | avi_pix_fmt: AviPixelFormat, 196 | 197 | num_images: usize, 198 | 199 | width: u32, 200 | height: u32 201 | } 202 | 203 | 204 | impl AviFile { 205 | pub fn new(file_name: &str) -> Result<Box<AviFile>, AviError> { 206 | // Expected AVI file structure: 207 | // 208 | // RIFF/AVI // AVI_file_header 209 | // LIST: hdrl 210 | // | avih // AVI_main_header 211 | // | LIST: strl 212 | // | | strh // AVI_stream_header 213 | // | | for DIB: strf // bitmap_info_header 214 | // | | for DIB/8-bit: BMP palette // BMP_palette 215 | // | | ...
216 | // | | (ignored) 217 | // | | ... 218 | // | |_____ 219 | // |_________ 220 | // ... 221 | // (ignored; possibly 'JUNK' chunks, LIST:INFO) 222 | // ... 223 | // LIST: movi 224 | // | ... 225 | // | (frames) 226 | // | ... 227 | // |_________ 228 | // ... 229 | // (ignored) 230 | // ... 231 | // idx1 232 | // ... 233 | // (index entries) // AVI_old_index 234 | // ... 235 | 236 | let mut chunk: AviChunk; 237 | let mut list: AviList; 238 | let mut last_chunk_pos: u64; 239 | let mut last_chunk_size: u32; 240 | 241 | let mut file = OpenOptions::new().read(true) 242 | .write(false) 243 | .open(file_name)?; 244 | 245 | let fheader: AviFileHeader = utils::read_struct(&mut file)?; 246 | 247 | if !fcc_equals(&fheader.riff, "RIFF".as_bytes()) || 248 | !fcc_equals(&fheader.avi, "AVI ".as_bytes()) { 249 | 250 | return Err(AviError::MalformedFile); 251 | } 252 | 253 | let header_list_pos = file.seek(SeekFrom::Current(0)).unwrap(); 254 | let header_list: AviList = utils::read_struct(&mut file)?; 255 | 256 | if !fcc_equals(&header_list.list, "LIST".as_bytes()) || 257 | !fcc_equals(&header_list.list_type, "hdrl".as_bytes()) { 258 | 259 | return Err(AviError::MalformedFile); 260 | } 261 | 262 | // Returns on error 263 | macro_rules! 
read_chunk { () => { 264 | last_chunk_pos = file.seek(SeekFrom::Current(0)).unwrap(); 265 | chunk = utils::read_struct(&mut file)?; 266 | last_chunk_size = u32::from_le(chunk.ck_size); 267 | }}; 268 | 269 | read_chunk!(); 270 | 271 | if !fcc_equals(&chunk.ck_id, "avih".as_bytes()) { 272 | return Err(AviError::MalformedFile); 273 | } 274 | 275 | let avi_header: AviMainHeader = utils::read_struct(&mut file)?; 276 | 277 | // This may be zero; if so, we'll use the stream header's `length` field 278 | let mut num_images = u32::from_le(avi_header.total_frames) as usize; 279 | let width = u32::from_le(avi_header.width); 280 | let height = u32::from_le(avi_header.height); 281 | 282 | if u32::from_le(avi_header.flags) & AVIF_HAS_INDEX == 0 { 283 | return Err(AviError::IndexNotPresent); 284 | } 285 | 286 | macro_rules! seek_to_next_list_or_chunk { () => { 287 | file.seek(SeekFrom::Start( 288 | last_chunk_pos + 289 | last_chunk_size as u64 + 290 | size_of_val(&{let x = chunk.ck_id; x}) as u64 + 291 | size_of_val(&{let x = chunk.ck_size; x}) as u64) 292 | )?; 293 | }}; 294 | 295 | seek_to_next_list_or_chunk!(); 296 | 297 | // Read the stream list 298 | list = utils::read_struct(&mut file)?; 299 | 300 | if !fcc_equals(&list.list, "LIST".as_bytes()) || 301 | !fcc_equals(&list.list_type, "strl".as_bytes()) { 302 | 303 | return Err(AviError::MalformedFile); 304 | } 305 | 306 | // Read the stream header 307 | read_chunk!(); 308 | 309 | if !fcc_equals(&chunk.ck_id, "strh".as_bytes()) { 310 | return Err(AviError::MalformedFile); 311 | } 312 | 313 | let mut stream_header: AviStreamHeader = utils::read_struct(&mut file)?; 314 | 315 | if !fcc_equals(&stream_header.fcc_type, "vids".as_bytes()) { 316 | return Err(AviError::MalformedFile); 317 | } 318 | 319 | if fcc_equals(&stream_header.fcc_handler, "\0\0\0".as_bytes()) { 320 | // Empty 'fcc_handler' means DIB by default 321 | stream_header.fcc_handler[0] = 'D' as u8; 322 | stream_header.fcc_handler[1] = 'I' as u8; 323 | 
stream_header.fcc_handler[2] = 'B' as u8; 324 | stream_header.fcc_handler[3] = ' ' as u8; 325 | } 326 | 327 | if !fcc_equals(&stream_header.fcc_handler, "DIB ".as_bytes()) && 328 | !fcc_equals(&stream_header.fcc_handler, "Y800".as_bytes()) && 329 | !fcc_equals(&stream_header.fcc_handler, "Y8 ".as_bytes()) { 330 | 331 | return Err(AviError::UnsupportedFormat); 332 | } 333 | let is_dib = fcc_equals(&stream_header.fcc_handler, "DIB ".as_bytes()); 334 | 335 | if num_images == 0 { num_images = u32::from_le(stream_header.length) as usize; } 336 | 337 | // Seek to and read the stream format 338 | seek_to_next_list_or_chunk!(); 339 | read_chunk!(); 340 | 341 | if !fcc_equals(&chunk.ck_id, "strf".as_bytes()) { 342 | return Err(AviError::MalformedFile); 343 | } 344 | 345 | let bmp_hdr: bmp::BitmapInfoHeader = utils::read_struct(&mut file)?; 346 | 347 | let mut palette: Option<Palette> = None; 348 | 349 | if is_dib && u32::from_le(bmp_hdr.compression) != bmp::BI_BITFIELDS && u32::from_le(bmp_hdr.compression) != bmp::BI_RGB || 350 | u16::from_le(bmp_hdr.planes) != 1 || 351 | u16::from_le(bmp_hdr.bit_count) != 8 && u16::from_le(bmp_hdr.bit_count) != 24 { 352 | 353 | return Err(AviError::UnsupportedFormat); 354 | } 355 | 356 | let avi_pix_fmt; 357 | 358 | if is_dib && u16::from_le(bmp_hdr.bit_count) == 8 { 359 | let bmp_palette: bmp::BmpPalette = utils::read_struct(&mut file)?; 360 | 361 | let mut clr_used = u32::from_le(bmp_hdr.clr_used); 362 | if clr_used == 0 { clr_used = 256; } 363 | palette = Some(bmp::convert_bmp_palette(clr_used, &bmp_palette)); 364 | 365 | avi_pix_fmt = if bmp::is_mono8_palette(palette.iter().next().unwrap()) { AviPixelFormat::DibMono8 } else { AviPixelFormat::DibPal8 } 366 | } else if is_dib && u16::from_le(bmp_hdr.bit_count) == 24 { 367 | avi_pix_fmt = AviPixelFormat::DibRGB8; 368 | } else { 369 | avi_pix_fmt = AviPixelFormat::Y800; 370 | } 371 | 372 | // Jump to the location immediately after `hdrl` 373 | file.seek(SeekFrom::Start( 374 | header_list_pos + 375 |
u32::from_le(header_list.list_size) as u64 + 376 | size_of_val(&header_list.list) as u64 + 377 | size_of_val(&{let x = header_list.list_size; x}) as u64) 378 | )?; 379 | 380 | // Skip any additional fragments (e.g. `JUNK` chunks) 381 | let mut stored_pos: u64; 382 | loop { 383 | stored_pos = file.seek(SeekFrom::Current(0)).unwrap(); 384 | 385 | let fragment: AviFragment = utils::read_struct(&mut file)?; 386 | 387 | if fcc_equals(&fragment.fcc, "LIST".as_bytes()) { 388 | let list_type: FourCC = utils::read_struct(&mut file)?; 389 | 390 | // Found a list; if it is the `movi` list, move the file pointer back; 391 | // the list will be re-read after the current `loop` 392 | if fcc_equals(&list_type, "movi".as_bytes()) { 393 | file.seek(SeekFrom::Start(stored_pos))?; 394 | break; 395 | } else { 396 | // Not the `movi` list; skip it. 397 | // Must rewind back by the length of the `size` field, 398 | // because in a list it is not counted in `size`. 399 | file.seek(SeekFrom::Current(-(size_of_val(&{let x = fragment.size; x}) as i64)))?; 400 | } 401 | } 402 | 403 | // Skip the current fragment, whatever it is 404 | file.seek(SeekFrom::Current(u32::from_le(fragment.size) as i64))?; 405 | } 406 | 407 | list = utils::read_struct(&mut file)?; 408 | 409 | if !fcc_equals(&list.list, "LIST".as_bytes()) || 410 | !fcc_equals(&list.list_type, "movi".as_bytes()) { 411 | 412 | return Err(AviError::MalformedFile); 413 | } 414 | 415 | let frame_chunks_start_ofs = file.seek(SeekFrom::Current(0)).unwrap() - size_of_val(&list.list_type) as u64; 416 | 417 | // Jump to the old-style AVI index 418 | file.seek(SeekFrom::Current(u32::from_le(list.list_size) as i64 - size_of_val(&{ let x = list.list_size; x}) as i64))?; 419 | 420 | read_chunk!(); 421 | 422 | if !fcc_equals(&chunk.ck_id, "idx1".as_bytes()) || 423 | (u32::from_le(chunk.ck_size) as usize) < num_images * size_of::<AviOldIndex>() { 424 | 425 | return Err(AviError::MalformedFile); 426 | } 427 | 428 | // Index may contain bogus entries, this will
make it longer than num_images * sizeof(AviOldIndex) 429 | let index_length = u32::from_le(chunk.ck_size); 430 | if index_length % size_of::<AviOldIndex>() as u32 != 0 { 431 | 432 | return Err(AviError::MalformedFile); 433 | } 434 | 435 | // Absolute byte offsets of each frame's contents in the file 436 | let mut frame_offsets = Vec::<u64>::with_capacity(num_images); 437 | 438 | // We have just checked it is divisible 439 | let mut avi_old_index = utils::alloc_uninitialized::<AviOldIndex>(index_length as usize / size_of::<AviOldIndex>()); 440 | file.read_exact( unsafe { slice::from_raw_parts_mut(avi_old_index.as_mut_slice().as_ptr() as *mut u8, index_length as usize) })?; 441 | 442 | let mut line_byte_count = width as usize * bytes_per_pixel(avi_to_image_pix_fmt(avi_pix_fmt)); 443 | if is_dib { line_byte_count = upmult!(line_byte_count, 4); } 444 | 445 | let frame_byte_count = line_byte_count * height as usize; 446 | 447 | for entry in &avi_old_index { 448 | // Ignore bogus entries (they may have "7Fxx" as their ID) 449 | if fcc_equals(&entry.chunk_id, "00db".as_bytes()) || 450 | fcc_equals(&entry.chunk_id, "00dc".as_bytes()) { 451 | 452 | if u32::from_le(entry.frame_size) as usize != frame_byte_count { 453 | return Err(AviError::MalformedFile); 454 | } else { 455 | frame_offsets.push(frame_chunks_start_ofs + u32::from_le(entry.offset) as u64); 456 | } 457 | } 458 | } 459 | 460 | // We assumed the frame offsets in the index were relative to the `movi` list; however, they may actually be 461 | // absolute file offsets. Check and update the offsets array.
462 | 463 | // Try to read the first frame's preamble 464 | file.seek(SeekFrom::Start(u32::from_le(avi_old_index[0].offset) as u64))?; 465 | read_chunk!(); 466 | 467 | if (fcc_equals(&chunk.ck_id, "00db".as_bytes()) || fcc_equals(&chunk.ck_id, "00dc".as_bytes())) && 468 | u32::from_le(chunk.ck_size) as usize == frame_byte_count { 469 | 470 | // Indeed, index frame offsets are absolute; must correct the values 471 | for ofs in &mut frame_offsets { 472 | *ofs -= frame_chunks_start_ofs; 473 | } 474 | } 475 | 476 | Ok(Box::new( 477 | AviFile{ file_name: String::from(file_name), 478 | file: None, 479 | frame_offsets, 480 | palette, 481 | avi_pix_fmt, 482 | num_images, 483 | width, 484 | height } 485 | )) 486 | } 487 | } 488 | 489 | 490 | impl ImageProvider for AviFile { 491 | fn get_img(&mut self, idx: usize) -> Result<Image, ImageError> { 492 | if self.file.is_none() { 493 | self.file = Some( 494 | OpenOptions::new() 495 | .read(true) 496 | .write(false) 497 | .open(&self.file_name)? 498 | ); 499 | } 500 | 501 | let file: &mut File = self.file.iter_mut().next().unwrap(); 502 | 503 | file.seek(SeekFrom::Start(self.frame_offsets[idx]))?; 504 | 505 | let chunk: AviChunk = utils::read_struct(file)?; 506 | 507 | let mut src_line_byte_count = self.width as usize * bytes_per_pixel(avi_to_image_pix_fmt(self.avi_pix_fmt)); 508 | let is_dib = is_dib(self.avi_pix_fmt); 509 | if is_dib { src_line_byte_count = upmult!(src_line_byte_count, 4); } 510 | 511 | if !fcc_equals(&chunk.ck_id, "00db".as_bytes()) && !fcc_equals(&chunk.ck_id, "00dc".as_bytes()) || 512 | u32::from_le(chunk.ck_size) as usize != src_line_byte_count * self.height as usize { 513 | 514 | return Err(ImageError::AviError(AviError::InvalidFrame(idx))); 515 | } 516 | 517 | let mut img = Image::new(self.width, self.height, avi_to_image_pix_fmt(self.avi_pix_fmt), self.palette.clone(), false); 518 | 519 | let mut src_line = utils::alloc_uninitialized::<u8>(src_line_byte_count); 520 | 521 | for y in 0..self.height { 522 | file.read_exact(&mut
src_line)?; 523 | 524 | let bpl = img.get_bytes_per_line(); 525 | 526 | // Line order in a DIB is reversed 527 | let img_line = if is_dib { img.get_line_raw_mut(self.height - y - 1) } 528 | else { img.get_line_raw_mut(y) }; 529 | 530 | if self.avi_pix_fmt == AviPixelFormat::DibRGB8 { 531 | // Rearrange channels to RGB order 532 | for x in 0..self.width as usize { 533 | img_line[3*x + 0] = src_line[3*x + 2]; 534 | img_line[3*x + 1] = src_line[3*x + 1]; 535 | img_line[3*x + 2] = src_line[3*x + 0]; 536 | } 537 | } 538 | else { 539 | img_line.copy_from_slice(&src_line[0..bpl]); 540 | } 541 | } 542 | 543 | Ok(img) 544 | } 545 | 546 | 547 | fn get_img_metadata(&self, _: usize) -> Result<(u32, u32, PixelFormat, Option<Palette>), ImageError> { 548 | Ok((self.width, self.height, avi_to_image_pix_fmt(self.avi_pix_fmt), self.palette.clone())) 549 | } 550 | 551 | 552 | fn img_count(&self) -> usize { 553 | self.num_images 554 | } 555 | 556 | 557 | fn deactivate(&mut self) { 558 | self.file = None; 559 | } 560 | } 561 | -------------------------------------------------------------------------------- /src/bin/example1.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek <ga.software@yahoo.com> 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // Example showing basic usage of libskry_r. 11 | // 12 | 13 | extern crate skry; 14 | use skry::defs::{ProcessingError, ProcessingPhase}; 15 | use std::io::Write; 16 | 17 | 18 | /// Returns false on failure.
19 | fn execute_processing_phase(phase_processor: &mut ProcessingPhase) -> bool { 20 | loop { 21 | match phase_processor.step() { 22 | Err(err) => match err { 23 | ProcessingError::NoMoreSteps => break, 24 | _ => { println!("Error during processing: {:?}", err); return false; } 25 | }, 26 | _ => () 27 | } 28 | } 29 | 30 | true 31 | } 32 | 33 | 34 | fn time_elapsed_str(tstart: std::time::Instant) -> String { 35 | let elapsed = std::time::Instant::now() - tstart; 36 | format!("{:.*}", 3, (elapsed.as_secs() as f64) + (elapsed.subsec_nanos() as f64) * 1.0e-9) 37 | } 38 | 39 | 40 | fn main() { 41 | let mut tstart = std::time::Instant::now(); 42 | let tstart0 = tstart.clone(); 43 | 44 | let input_file_name = "doc/sun01.avi"; 45 | print!("Processing \"{}\"... ", input_file_name); 46 | let img_seq_result = skry::img_seq::ImageSequence::new_avi_video(input_file_name); 47 | match img_seq_result { 48 | Err(ref err) => { println!("Error opening file: {:?}", err); }, 49 | Ok(_) => () 50 | } 51 | 52 | let mut img_seq = img_seq_result.unwrap(); 53 | 54 | print!("\nImage alignment... "); std::io::stdout().flush(); 55 | 56 | let img_align_data; 57 | { // Nested scope needed to access `img_seq` mutably, here and later 58 | let mut img_align = skry::img_align::ImgAlignmentProc::init( 59 | &mut img_seq, 60 | skry::img_align::AlignmentMethod::Anchors(skry::img_align::AnchorConfig{ 61 | initial_anchors: None, 62 | block_radius: 32, 63 | search_radius: 32, 64 | placement_brightness_threshold: 0.33 65 | }) 66 | ).unwrap(); 67 | 68 | if !execute_processing_phase(&mut img_align) { 69 | return; 70 | } 71 | 72 | // `__Proc` structs hold a mutable reference to the image sequence being processed, 73 | // so need to be enclosed in a scope (because we need to subsequently create 4 74 | // of these structs). 75 | // 76 | // Each `__Proc` produces a `__Data` struct, which does not hold any references, 77 | // so can be freely used in the main scope. 
78 | 79 | img_align_data = img_align.get_data(); 80 | } 81 | println!(" {} s", time_elapsed_str(tstart)); 82 | 83 | print!("Quality estimation... "); std::io::stdout().flush(); 84 | tstart = std::time::Instant::now(); 85 | let qual_est_data; 86 | { 87 | let mut qual_est = skry::quality::QualityEstimationProc::init(&mut img_seq, &img_align_data, 40, 3); 88 | 89 | if !execute_processing_phase(&mut qual_est) { 90 | return; 91 | } 92 | 93 | qual_est_data = qual_est.get_data(); 94 | } 95 | println!(" {} s", time_elapsed_str(tstart)); 96 | 97 | print!("Reference point alignment... "); std::io::stdout().flush(); 98 | tstart = std::time::Instant::now(); 99 | 100 | let ref_pt_align_data: skry::ref_pt_align::RefPointAlignmentData; 101 | { 102 | let mut ref_pt_align = skry::ref_pt_align::RefPointAlignmentProc::init( 103 | &mut img_seq, 104 | &img_align_data, 105 | &qual_est_data, 106 | None, 107 | skry::ref_pt_align::QualityCriterion::PercentageBest(30), 108 | 32, 109 | 20, 110 | 0.33, 111 | 1.2, 112 | 1, 113 | 40).unwrap(); 114 | 115 | if !execute_processing_phase(&mut ref_pt_align) { 116 | return; 117 | } 118 | 119 | ref_pt_align_data = ref_pt_align.get_data(); 120 | } 121 | println!(" {} s", time_elapsed_str(tstart)); 122 | 123 | print!("Image stacking... "); std::io::stdout().flush(); 124 | tstart = std::time::Instant::now(); 125 | let mut stacking = skry::stacking::StackingProc::init( 126 | &mut img_seq, &img_align_data, &ref_pt_align_data, None).unwrap(); 127 | 128 | if !execute_processing_phase(&mut stacking) { 129 | return; 130 | } 131 | println!(" {} s", time_elapsed_str(tstart)); 132 | 133 | println!("\n\nTotal time: {} s", time_elapsed_str(tstart0)); 134 | print!("Saving \"out.tif\"... 
"); 135 | match stacking.get_image_stack().convert_pix_fmt(skry::image::PixelFormat::RGB16, None).save("out.tif", skry::image::FileType::Auto) { 136 | Err(err) => println!("error: {:?}", err), 137 | Ok(()) => println!("done.\n") 138 | } 139 | } 140 | -------------------------------------------------------------------------------- /src/blk_match.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // Block matching. 11 | // 12 | 13 | use defs::{Point, Rect}; 14 | use image::{Image, PixelFormat}; 15 | 16 | 17 | const MIN_FRACTION_OF_BLOCK_TO_MATCH: u32 = 4; 18 | 19 | 20 | /// Returns the sum of squared differences between pixels of `img` and `ref_block`. 21 | /// 22 | /// `ref_block`'s center is aligned on `pos` over `img`. The differences are 23 | /// calculated only for the `refblk_rect` portion of `ref_block`. 24 | /// `pos` is relative to `img`; `refblk_rect` is relative to `ref_blk`. 25 | /// 26 | /// Both `ref_block` and `img` must be `Mono8`. The result is 64-bit, so for 27 | /// 8-bit images it can accommodate a block of 2^(64-2*8) = 2^48 pixels. 28 | /// 29 | pub fn calc_sum_of_squared_diffs(img: &Image, ref_block: &Image, pos: &Point, refblk_rect: &Rect) -> u64 { 30 | assert!(img.get_pixel_format() == PixelFormat::Mono8); 31 | assert!(ref_block.get_pixel_format() == PixelFormat::Mono8); 32 | 33 | assert!(ref_block.get_img_rect().contains_rect(refblk_rect)); 34 | 35 | let mut result = 0u64; 36 | 37 | let blk_width = ref_block.get_width(); 38 | let blk_height = ref_block.get_height(); 39 | 40 | // Example: an 8x6 block, * = pos, . 
- block's pixels, 41 | // refblk_rect = { x: 2, y: 1, w: 5, h: 3 } 42 | // 43 | // 44 | // 0 pos.x 45 | // +---------------------- 46 | // 0| | 47 | // | | 48 | // | +--------+ 49 | // | |........| 50 | // | |..#####.| 51 | // | |..#####.| 52 | // pos.y|-------|..##*##.| 53 | // | |........| 54 | // | |........| 55 | // | +--------+ 56 | // | 57 | // 58 | 59 | // Loop bounds 60 | 61 | let xstart = pos.x - (blk_width as i32)/2 + refblk_rect.x; 62 | let ystart = pos.y - (blk_height as i32)/2 + refblk_rect.y; 63 | 64 | let xend = xstart + refblk_rect.width as i32; 65 | let yend = ystart + refblk_rect.height as i32; 66 | 67 | assert!(xstart >= 0); 68 | assert!(ystart >= 0); 69 | 70 | assert!(xend <= img.get_width() as i32); 71 | assert!(yend <= img.get_height() as i32); 72 | 73 | // Byte offsets in the pixel arrays 74 | 75 | let img_pix = img.get_mono8_pixels_from(Point{ x: xstart, y: ystart}); 76 | let rblk_pix = ref_block.get_mono8_pixels_from(Point{x: refblk_rect.x, y: refblk_rect.y}); 77 | 78 | let img_stride = img.get_width() as usize; 79 | let blk_stride = ref_block.get_width() as usize; 80 | 81 | let mut img_offs = 0; 82 | let mut blk_offs = 0; 83 | 84 | for _ in ystart..yend { 85 | for x in 0..(xend-xstart) { 86 | unsafe { 87 | result += sqr!(*img_pix.get_unchecked(img_offs + x as usize) as i32 - 88 | *rblk_pix.get_unchecked(blk_offs + x as usize) as i32) as u64; 89 | } 90 | } 91 | img_offs += img_stride; 92 | blk_offs += blk_stride; 93 | } 94 | 95 | result 96 | } 97 | 98 | 99 | /// Returns the position within `search_radius` of `ref_pos` at which `ref_block` best matches `image` (minimal sum of squared differences); the search starts with step `initial_search_step`, which is repeatedly halved down to 1. 100 | pub fn find_matching_position(ref_pos: Point, 101 | ref_block: &Image, 102 | image: &Image, 103 | search_radius: u32, 104 | initial_search_step: u32) -> Point { 105 | assert!(image.get_pixel_format() == PixelFormat::Mono8); 106 | assert!(ref_block.get_pixel_format() == PixelFormat::Mono8); 107 | 108 | let blkw = ref_block.get_width(); 109 | let blkh = ref_block.get_height(); 110 | let imgw = image.get_width(); 111 | let imgh = image.get_height(); 112 | 113 | 
// At first, use a coarse step when trying to match `ref_block` 114 | // with `img` at different positions. Once an approximate matching 115 | // position is determined, the search continues around it repeatedly 116 | // using a smaller step, until the step becomes 1. 117 | let mut search_step = initial_search_step; 118 | 119 | // Range of positions where `ref_block` will be match-tested with `img`. 120 | // Using signed type is necessary, as the positions may be negative 121 | // (then the block is appropriately clipped before comparison). 122 | 123 | struct SearchRange { 124 | // Inclusive 125 | pub xmin: i32, 126 | pub ymin: i32, 127 | 128 | // Exclusive 129 | pub xmax: i32, 130 | pub ymax: i32 131 | }; 132 | 133 | let mut search_range = SearchRange{ xmin: ref_pos.x - search_radius as i32, 134 | ymin: ref_pos.y - search_radius as i32, 135 | xmax: ref_pos.x + search_radius as i32, 136 | ymax: ref_pos.y + search_radius as i32 }; 137 | 138 | let mut best_pos = Point{ x: 0, y: 0 }; 139 | 140 | while search_step > 0 { 141 | // Min. sum of squared differences between pixel values of 142 | // the reference block and the image at candidate positions 143 | let mut min_sq_diff_sum = u64::max_value(); 144 | 145 | // (x, y) = position in `img` for which a block match test is performed 146 | let mut y = search_range.ymin; 147 | while y < search_range.ymax { 148 | let mut x = search_range.xmin; 149 | while x < search_range.xmax { 150 | 151 | // It is allowed for `ref_block` to not be entirely inside `image`. 
152 | // Before calling `calc_sum_of_squared_diffs()`, find a sub-rectangle 153 | // `refblk_rect` of `ref_block` which lies within `image`: 154 | // 155 | // +======== ref_block ========+ 156 | // | | 157 | // | +-------- img ----------|------- 158 | // | |.......................| 159 | // | |..........*............| 160 | // | |.......................| 161 | // | |.......................| 162 | // +===========================+ 163 | // | 164 | // | 165 | // 166 | // *: current search position (x, y); corresponds with the middle 167 | // of 'ref_block' during block matching 168 | // 169 | // Dotted area is the 'refblk_rect'. Start coordinates of 'refblk_rect' 170 | // are relative to the 'ref_block'; if whole 'ref_block' fits in 'image', 171 | // then refblk_rect = {0, 0, blkw, blkh}. 172 | 173 | 174 | let refblk_rect_x = if x >= (blkw as i32)/2 { 0 } else { (blkw as i32)/2 - x }; 175 | let refblk_rect_y = if y >= (blkh as i32)/2 { 0 } else { (blkh as i32)/2 - y }; 176 | 177 | let refblk_rect_xmax: i32 = 178 | if x + (blkw as i32)/2 <= imgw as i32 { blkw as i32 } else { blkw as i32 - (x + (blkw as i32)/2 - imgw as i32) }; 179 | let refblk_rect_ymax: i32 = 180 | if y + (blkh as i32)/2 <= imgh as i32 { blkh as i32 } else { blkh as i32 - (y + (blkh as i32)/2 - imgh as i32) }; 181 | 182 | let mut sum_sq_diffs: u64; 183 | 184 | if refblk_rect_x >= refblk_rect_xmax || 185 | refblk_rect_y >= refblk_rect_ymax { 186 | 187 | // Ref. block completely outside image 188 | sum_sq_diffs = u64::max_value(); 189 | } else { 190 | let refblk_rect = Rect{ x: refblk_rect_x, y: refblk_rect_y, 191 | width: (refblk_rect_xmax - refblk_rect_x) as u32, 192 | height: (refblk_rect_ymax - refblk_rect_y) as u32}; 193 | 194 | if refblk_rect.width < blkw / MIN_FRACTION_OF_BLOCK_TO_MATCH || 195 | refblk_rect.height < blkh / MIN_FRACTION_OF_BLOCK_TO_MATCH { 196 | 197 | // Ref. 
block too small to compare 198 | sum_sq_diffs = u64::max_value(); 199 | } else { 200 | sum_sq_diffs = calc_sum_of_squared_diffs(&image, &ref_block, &Point{ x, y }, &refblk_rect); 201 | 202 | // The sum must be normalized in order to be comparable with others 203 | sum_sq_diffs *= ((blkw as u32) * (blkh as u32)) as u64; 204 | sum_sq_diffs /= (refblk_rect.width * refblk_rect.height) as u64; 205 | } 206 | } 207 | 208 | if sum_sq_diffs < min_sq_diff_sum { 209 | min_sq_diff_sum = sum_sq_diffs; 210 | best_pos = Point{ x, y }; 211 | } 212 | 213 | x += search_step as i32; 214 | } 215 | y += search_step as i32; 216 | } 217 | 218 | search_range.xmin = best_pos.x - search_step as i32; 219 | search_range.ymin = best_pos.y - search_step as i32; 220 | search_range.xmax = best_pos.x + search_step as i32; 221 | search_range.ymax = best_pos.y + search_step as i32; 222 | 223 | search_step /= 2; 224 | } 225 | 226 | best_pos 227 | } 228 | -------------------------------------------------------------------------------- /src/bmp.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // BMP support. 
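The coarse-to-fine search in `find_matching_position` above can be illustrated on a 1-D signal. The sketch below is illustrative only (the helpers `ssd_1d` and `find_match_1d` are not part of the library); it mirrors the same logic: scan candidate positions with a coarse step, re-center the search range on the best candidate, halve the step, and repeat until the step reaches 0.

```rust
/// Sum of squared differences between `block` centered at `pos` and `signal`.
/// Returns `u64::MAX` when the block does not fit entirely inside `signal`
/// (the 2-D version instead clips the block and normalizes the sum).
fn ssd_1d(signal: &[u8], block: &[u8], pos: i32) -> u64 {
    let half = (block.len() / 2) as i32;
    let start = pos - half;
    if start < 0 || start + block.len() as i32 > signal.len() as i32 {
        return u64::MAX;
    }
    block.iter()
        .zip(&signal[start as usize..])
        .map(|(&b, &s)| {
            let d = b as i64 - s as i64;
            (d * d) as u64
        })
        .sum()
}

/// Coarse-to-fine search: test positions within `radius` of `ref_pos` using
/// `step`, then re-center around the best position and halve the step.
fn find_match_1d(signal: &[u8], block: &[u8], ref_pos: i32, radius: i32, mut step: i32) -> i32 {
    let (mut lo, mut hi) = (ref_pos - radius, ref_pos + radius); // `hi` exclusive, as in the 2-D code
    let mut best = ref_pos;
    while step > 0 {
        let mut best_ssd = u64::MAX;
        let mut pos = lo;
        while pos < hi {
            let ssd = ssd_1d(signal, block, pos);
            if ssd < best_ssd {
                best_ssd = ssd;
                best = pos;
            }
            pos += step;
        }
        lo = best - step;
        hi = best + step;
        step /= 2;
    }
    best
}

fn main() {
    // A flat signal with one bright feature centered at index 9.
    let signal = [0u8, 0, 0, 0, 0, 0, 0, 0, 10, 200, 10, 0, 0, 0, 0, 0];
    let block = [10u8, 200, 10];
    // Search within +/-6 of position 6, starting with step 3.
    let found = find_match_1d(&signal, &block, 6, 6, 3);
    assert_eq!(found, 9);
    println!("best match at index {}", found);
}
```

As with the 2-D version, too large an initial step can make the coarse pass skip a narrow optimum entirely; re-centering and halving only refines around whichever coarse candidate scored best.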
11 | // 12 | 13 | use image::{Image, Palette, PixelFormat, bytes_per_pixel}; 14 | use std::fs::{File, OpenOptions}; 15 | use std::io; 16 | use std::io::prelude::*; 17 | use std::io::{Seek, SeekFrom}; 18 | use std::mem::{size_of}; 19 | use utils; 20 | 21 | 22 | impl Default for BmpPalette { 23 | fn default() -> BmpPalette { BmpPalette{ pal: [0; BMP_PALETTE_SIZE] }} 24 | } 25 | 26 | 27 | #[repr(C, packed)] 28 | #[derive(Default)] 29 | struct BitmapFileHeader { 30 | pub ftype: [u8; 2], 31 | pub size: u32, 32 | pub reserved1: u16, 33 | pub reserved2: u16, 34 | pub off_bits: u32 35 | } 36 | 37 | 38 | pub const BMP_PALETTE_SIZE: usize = 256*4; 39 | pub const BI_RGB: u32 = 0; 40 | pub const BI_BITFIELDS: u32 = 3; 41 | 42 | 43 | #[repr(C, packed)] 44 | #[derive(Default)] 45 | pub struct BitmapInfoHeader { 46 | pub size: u32, 47 | pub width: i32, 48 | pub height: i32, 49 | pub planes: u16, 50 | pub bit_count: u16, 51 | pub compression: u32, 52 | pub size_image: u32, 53 | pub x_pels_per_meter: i32, 54 | pub y_pels_per_meter: i32, 55 | pub clr_used: u32, 56 | pub clr_important: u32 57 | } 58 | 59 | 60 | #[repr(C, packed)] 61 | pub struct BmpPalette { 62 | pub pal: [u8; BMP_PALETTE_SIZE] 63 | } 64 | 65 | 66 | pub fn convert_bmp_palette(num_used_pal_entries: u32, bmp_pal: &BmpPalette) -> Palette { 67 | let mut pal = Palette::default(); 68 | for i in 0..num_used_pal_entries as usize { 69 | pal.pal[3*i + 0] = bmp_pal.pal[i*4 + 2]; 70 | pal.pal[3*i + 1] = bmp_pal.pal[i*4 + 1]; 71 | pal.pal[3*i + 2] = bmp_pal.pal[i*4 + 0]; 72 | } 73 | 74 | pal 75 | } 76 | 77 | 78 | pub fn is_mono8_palette(palette: &Palette) -> bool { 79 | for i in 0..Palette::NUM_ENTRIES { 80 | if palette.pal[3*i + 0] as usize != i || 81 | palette.pal[3*i + 1] as usize != i || 82 | palette.pal[3*i + 2] as usize != i { 83 | 84 | return false; 85 | } 86 | } 87 | 88 | true 89 | } 90 | 91 | 92 | #[derive(Debug)] 93 | pub enum BmpError { 94 | Io(io::Error), 95 | MalformedFile, 96 | UnsupportedFormat 97 | } 98 | 99 | 100 
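The palette handling above, i.e. `convert_bmp_palette` (BMP stores each entry as a B, G, R, 0 quadruple) and `is_mono8_palette` (detects an identity grayscale palette), can be sketched with plain slices. The helpers `bgra_to_rgb` and `is_grayscale` below are illustrative stand-ins, not library API:

```rust
// Each BMP palette entry occupies 4 bytes on disk, in B, G, R, 0 order;
// the in-memory palette uses 3 bytes per entry in R, G, B order.
// (Input length is assumed to be a multiple of 4.)
fn bgra_to_rgb(bmp_pal: &[u8]) -> Vec<u8> {
    let mut rgb = Vec::with_capacity(bmp_pal.len() / 4 * 3);
    for entry in bmp_pal.chunks(4) {
        rgb.push(entry[2]); // R
        rgb.push(entry[1]); // G
        rgb.push(entry[0]); // B
    }
    rgb
}

// A palette is effectively 8-bit grayscale when entry `i` maps to (i, i, i);
// such images can be treated as `Mono8` instead of `Pal8`.
fn is_grayscale(rgb_pal: &[u8]) -> bool {
    rgb_pal.chunks(3).enumerate().all(|(i, e)| e.iter().all(|&c| c as usize == i))
}

fn main() {
    // An identity (grayscale) palette as stored in a BMP: B, G, R, 0 per entry.
    let mut bmp_pal = Vec::new();
    for i in 0..256u32 {
        bmp_pal.extend_from_slice(&[i as u8, i as u8, i as u8, 0]);
    }
    assert!(is_grayscale(&bgra_to_rgb(&bmp_pal)));

    // Altering one channel breaks the grayscale property.
    let mut tinted = bmp_pal.clone();
    tinted[4 * 10] = 0; // blue component of entry 10
    assert!(!is_grayscale(&bgra_to_rgb(&tinted)));
    println!("ok");
}
```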
| impl From<io::Error> for BmpError { 101 | fn from(err: io::Error) -> BmpError { BmpError::Io(err) } 102 | } 103 | 104 | 105 | /// Returns metadata (width, height, ...) without reading the pixel data. 106 | pub fn get_bmp_metadata(file_name: &str) -> Result<(u32, u32, PixelFormat, Option<Palette>), BmpError> { 107 | let mut file = OpenOptions::new().read(true).write(false).open(file_name)?; 108 | 109 | let (img_width, img_height, _, pix_fmt, palette) = get_bmp_metadata_priv(&mut file)?; 110 | 111 | Ok((img_width, img_height, pix_fmt, palette)) 112 | } 113 | 114 | 115 | /// Returns width, height, file bits per pixel, pixel format, palette. 116 | /// 117 | /// After return, `file`'s cursor will be positioned at the beginning of pixel data. 118 | /// 119 | fn get_bmp_metadata_priv(file: &mut File) -> Result<(u32, u32, usize, PixelFormat, Option<Palette>), BmpError> { 120 | let file_hdr: BitmapFileHeader = utils::read_struct(file)?; 121 | let info_hdr: BitmapInfoHeader = utils::read_struct(file)?; 122 | 123 | // Fields in a BMP are always little-endian, so remember to swap them 124 | 125 | let bits_per_pixel = u16::from_le(info_hdr.bit_count); 126 | let img_width = i32::from_le(info_hdr.width) as u32; 127 | let img_height = i32::from_le(info_hdr.height) as u32; 128 | 129 | if img_width == 0 || img_height == 0 || 130 | file_hdr.ftype[0] != 'B' as u8 || file_hdr.ftype[1] != 'M' as u8 || 131 | u16::from_le(info_hdr.planes) != 1 || 132 | bits_per_pixel != 8 && bits_per_pixel != 24 && bits_per_pixel != 32 || 133 | u32::from_le(info_hdr.compression) != BI_RGB && u32::from_le(info_hdr.compression) != BI_BITFIELDS { 134 | 135 | return Err(BmpError::UnsupportedFormat); 136 | } 137 | 138 | let mut pix_fmt; 139 | 140 | if bits_per_pixel == 8 { 141 | pix_fmt = PixelFormat::Pal8; 142 | } else if bits_per_pixel == 24 || bits_per_pixel == 32 { 143 | pix_fmt = PixelFormat::RGB8; 144 | } else { 145 | panic!(); // Cannot happen (due to the previous checks) 146 | } 147 | 148 | let mut pal: Option<Palette> = None; 149 | 150 
| if pix_fmt == PixelFormat::Pal8 { 151 | let mut num_used_pal_entries = u32::from_le(info_hdr.clr_used); 152 | 153 | if num_used_pal_entries == 0 { 154 | num_used_pal_entries = 256; 155 | } 156 | 157 | // Seek to the beginning of palette 158 | file.seek(SeekFrom::Start((size_of::<BitmapFileHeader>() + u32::from_le(info_hdr.size) as usize) as u64))?; 159 | 160 | let bmp_palette: BmpPalette = utils::read_struct(file)?; 161 | 162 | // Convert to an RGB-order palette 163 | let palette = convert_bmp_palette(num_used_pal_entries, &bmp_palette); 164 | 165 | if is_mono8_palette(&palette) { 166 | pix_fmt = PixelFormat::Mono8; 167 | } 168 | 169 | pal = Some(palette); 170 | } 171 | 172 | file.seek(SeekFrom::Start(u32::from_le(file_hdr.off_bits) as u64))?; 173 | 174 | Ok((img_width, img_height, bits_per_pixel as usize, pix_fmt, pal)) 175 | } 176 | 177 | 178 | pub fn load_bmp(file_name: &str) -> Result<Image, BmpError> { 179 | 180 | let mut file = OpenOptions::new().read(true).write(false).open(file_name)?; 181 | 182 | let (img_width, img_height, bits_per_pix, pix_fmt, palette) = get_bmp_metadata_priv(&mut file)?; 183 | 184 | let src_bytes_per_pixel = bits_per_pix / 8; 185 | 186 | let dest_bytes_per_line = img_width as usize * bytes_per_pixel(pix_fmt); 187 | let dest_byte_count = img_height as usize * dest_bytes_per_line; 188 | 189 | let mut pixels = utils::alloc_uninitialized(dest_byte_count); 190 | 191 | if [PixelFormat::Pal8, PixelFormat::Mono8].contains(&pix_fmt) { 192 | let bmp_stride: usize = upmult!(img_width as usize, 4); // Line length in bytes in the BMP file's pixel data 193 | let skip = bmp_stride - img_width as usize; // Number of padding bytes at the end of a line 194 | 195 | let mut y = img_height - 1; 196 | loop { 197 | file.read_exact(&mut pixels[range!(y as usize * dest_bytes_per_line, dest_bytes_per_line)])?; 198 | 199 | if skip > 0 { 200 | file.seek(SeekFrom::Current(skip as i64))?; 201 | } 202 | 203 | if y == 0 { 204 | break; 205 | } else { 206 | y -= 1; // Lines in BMP are stored 
bottom-to-top 207 | } 208 | } 209 | } else if pix_fmt == PixelFormat::RGB8 { 210 | let bmp_stride = upmult!(img_width as usize * src_bytes_per_pixel, 4); // Line length in bytes in the BMP file's pixel data 211 | let skip = bmp_stride - img_width as usize * src_bytes_per_pixel; // Number of padding bytes at the end of a line 212 | 213 | let mut src_line = utils::alloc_uninitialized(img_width as usize * src_bytes_per_pixel); 214 | 215 | let mut y = img_height - 1; 216 | loop { 217 | let dest_line = &mut pixels[range!(y as usize * dest_bytes_per_line, dest_bytes_per_line)]; 218 | 219 | file.read_exact(&mut src_line)?; 220 | 221 | if src_bytes_per_pixel == 3 { 222 | // Rearrange the channels to RGB order 223 | for x in 0..img_width as usize { 224 | dest_line[x*3 + 0] = src_line[x*3 + 2]; 225 | dest_line[x*3 + 1] = src_line[x*3 + 1]; 226 | dest_line[x*3 + 2] = src_line[x*3 + 0]; 227 | } 228 | } 229 | else if src_bytes_per_pixel == 4 { 230 | // Remove the unused 4th byte from each B, G, R, 0 pixel and rearrange the channels to RGB order 231 | for x in 0..img_width as usize { 232 | dest_line[x*3 + 0] = src_line[x*4 + 2]; 233 | dest_line[x*3 + 1] = src_line[x*4 + 1]; 234 | dest_line[x*3 + 2] = src_line[x*4 + 0]; 235 | } 236 | } 237 | 238 | if skip > 0 { 239 | file.seek(SeekFrom::Current(skip as i64))?; 240 | } 241 | 242 | if y == 0 { 243 | break; 244 | } else { 245 | y -= 1; // Lines in BMP are stored bottom-to-top 246 | } 247 | } 248 | } 249 | 250 | Ok(Image::new_from_pixels(img_width, img_height, pix_fmt, palette, pixels)) 251 | } 252 | 253 | 254 | pub fn save_bmp(img: &Image, file_name: &str) -> Result<(), BmpError> { 255 | let pix_fmt = img.get_pixel_format(); 256 | 257 | if ![PixelFormat::Pal8, PixelFormat::RGB8, PixelFormat::Mono8].contains(&pix_fmt) { 258 | return Err(BmpError::UnsupportedFormat); 259 | } 260 | 261 | let width = img.get_width(); 262 | let height = img.get_height(); 263 | let bytes_per_pix = bytes_per_pixel(pix_fmt); 264 | let bmp_line_width = upmult!(width as 
usize * bytes_per_pix, 4); 265 | 266 | let mut pix_data_offs = size_of::<BitmapFileHeader>() + 267 | size_of::<BitmapInfoHeader>(); 268 | if [PixelFormat::Pal8, PixelFormat::Mono8].contains(&pix_fmt) { 269 | pix_data_offs += size_of::<BmpPalette>(); 270 | } 271 | 272 | // Fields in a BMP are always little-endian 273 | 274 | let bmfh = BitmapFileHeader{ 275 | ftype: ['B' as u8, 'M' as u8], 276 | size: u32::to_le((pix_data_offs + height as usize * bmp_line_width) as u32), 277 | reserved1: 0, 278 | reserved2: 0, 279 | off_bits: u32::to_le(pix_data_offs as u32) 280 | }; 281 | 282 | let bmih = BitmapInfoHeader{ 283 | size: u32::to_le(size_of::<BitmapInfoHeader>() as u32), 284 | width: i32::to_le(width as i32), 285 | height: i32::to_le(height as i32), 286 | planes: u16::to_le(1), 287 | bit_count: u16::to_le((bytes_per_pix * 8) as u16), 288 | compression: u32::to_le(BI_RGB), 289 | size_image: u32::to_le(0), 290 | x_pels_per_meter: i32::to_le(1000), 291 | y_pels_per_meter: i32::to_le(1000), 292 | clr_used: u32::to_le(0), 293 | clr_important: u32::to_le(0) 294 | }; 295 | 296 | let mut file = OpenOptions::new().read(false).write(true).create(true).open(file_name)?; 297 | 298 | utils::write_struct(&bmfh, &mut file)?; 299 | utils::write_struct(&bmih, &mut file)?; 300 | 301 | if [PixelFormat::Pal8, PixelFormat::Mono8].contains(&pix_fmt) { 302 | let mut bmp_palette = BmpPalette::default(); 303 | 304 | if pix_fmt == PixelFormat::Pal8 { 305 | let img_pal: &Palette = img.get_palette().iter().next().unwrap(); 306 | 307 | for i in 0..256 { 308 | bmp_palette.pal[4*i + 0] = img_pal.pal[3*i + 2]; 309 | bmp_palette.pal[4*i + 1] = img_pal.pal[3*i + 1]; 310 | bmp_palette.pal[4*i + 2] = img_pal.pal[3*i + 0]; 311 | bmp_palette.pal[4*i + 3] = 0; 312 | } 313 | } else { 314 | for i in 0..256 { 315 | bmp_palette.pal[4*i + 0] = i as u8; 316 | bmp_palette.pal[4*i + 1] = i as u8; 317 | bmp_palette.pal[4*i + 2] = i as u8; 318 | bmp_palette.pal[4*i + 3] = 0 as u8; 319 | } 320 | } 321 | 322 | utils::write_struct(&bmp_palette, &mut file)?; 323 | } 324 | 325 | let 
pix_bytes_per_line = width as usize * bytes_per_pix; 326 | let line_padding = vec![0; bmp_line_width - pix_bytes_per_line]; 327 | let mut bmp_line = utils::alloc_uninitialized(pix_bytes_per_line); 328 | 329 | for i in 0..height { 330 | let src_line = img.get_line_raw(height - i - 1); 331 | 332 | if [PixelFormat::Pal8, PixelFormat::Mono8].contains(&pix_fmt) { 333 | file.write_all(&src_line)?; 334 | } else { 335 | // Rearrange the channels to BGR order 336 | for x in 0..width as usize { 337 | bmp_line[x*3 + 0] = src_line[x*3 + 2]; 338 | bmp_line[x*3 + 1] = src_line[x*3 + 1]; 339 | bmp_line[x*3 + 2] = src_line[x*3 + 0]; 340 | } 341 | file.write_all(&bmp_line)?; 342 | } 343 | if !line_padding.is_empty() { 344 | file.write_all(&line_padding)?; 345 | } 346 | } 347 | 348 | Ok(()) 349 | } -------------------------------------------------------------------------------- /src/defs.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // Common definitions. 
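Both `load_bmp` and `save_bmp` above rely on BMP pixel rows being padded to a multiple of 4 bytes (and stored bottom-to-top). A minimal sketch of the stride arithmetic, assuming the `upmult!(len, 4)` macro rounds up to the nearest multiple of 4 (`bmp_stride` below is an illustrative helper, not library API):

```rust
// BMP pixel rows are padded so that each row occupies a multiple of 4 bytes;
// the difference between the stride and the raw row length is the `skip`
// (padding) that the loader seeks over after every row.
fn bmp_stride(width: usize, bytes_per_pixel: usize) -> usize {
    let raw = width * bytes_per_pixel;
    (raw + 3) / 4 * 4
}

fn main() {
    // 8-bit paletted image, 5 px wide: 5 raw bytes, padded to 8.
    assert_eq!(bmp_stride(5, 1), 8);
    // 24-bit RGB, 2 px wide: 6 raw bytes, padded to 8.
    assert_eq!(bmp_stride(2, 3), 8);
    // Already a multiple of 4: no padding needed.
    assert_eq!(bmp_stride(4, 1), 4);
    println!("ok");
}
```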
11 | // 12 | 13 | use image; 14 | use std; 15 | use std::ops::{Add, AddAssign, Sub}; 16 | 17 | 18 | pub const WHITE_8BIT: u8 = 0xFF; 19 | 20 | 21 | pub struct PointFlt { 22 | pub x: f32, 23 | pub y: f32 24 | } 25 | 26 | 27 | impl std::fmt::Display for PointFlt { 28 | fn fmt(&self, f: &mut std::fmt::Formatter) -> Result<(), std::fmt::Error> { 29 | write!(f, "({:0.1}, {:0.1})", self.x, self.y) 30 | } 31 | } 32 | 33 | 34 | #[derive(Clone, Copy, Default, Eq, PartialEq)] 35 | pub struct Point { 36 | pub x: i32, 37 | pub y: i32 38 | } 39 | 40 | 41 | impl Add for Point { 42 | type Output = Point; 43 | 44 | fn add(self, other: Point) -> Point { 45 | Point { 46 | x: self.x + other.x, 47 | y: self.y + other.y, 48 | } 49 | } 50 | } 51 | 52 | 53 | impl Sub for Point { 54 | type Output = Point; 55 | 56 | fn sub(self, other: Point) -> Point { 57 | Point { 58 | x: self.x - other.x, 59 | y: self.y - other.y, 60 | } 61 | } 62 | } 63 | 64 | 65 | impl AddAssign for Point { 66 | fn add_assign(&mut self, other: Point) { 67 | *self = Point { 68 | x: self.x + other.x, 69 | y: self.y + other.y, 70 | }; 71 | } 72 | } 73 | 74 | 75 | impl std::fmt::Display for Point { 76 | fn fmt(&self, f: &mut std::fmt::Formatter) -> Result<(), std::fmt::Error> { 77 | write!(f, "({}, {})", self.x, self.y) 78 | } 79 | } 80 | 81 | 82 | impl Point { 83 | pub fn sqr_dist(p1: &Point, p2: &Point) -> i32 { 84 | sqr!(p1.x - p2.x) + sqr!(p1.y - p2.y) 85 | } 86 | } 87 | 88 | 89 | #[derive(Copy, Clone, Default)] 90 | pub struct Rect { 91 | pub x: i32, 92 | pub y: i32, 93 | pub width: u32, 94 | pub height: u32 95 | } 96 | 97 | 98 | impl Rect { 99 | pub fn contains_point(&self, p: &Point) -> bool { 100 | p.x >= self.x && p.x < self.x + self.width as i32 && p.y >= self.y && p.y < self.y + self.height as i32 101 | } 102 | 103 | 104 | pub fn contains_rect(&self, other: &Rect) -> bool { 105 | self.contains_point(&Point{ x: other.x, y: other.y }) && 106 | self.contains_point(&Point{ x: other.x + other.width as i32 - 1, 107 | 
y: other.y + other.height as i32 - 1 }) 108 | } 109 | 110 | 111 | pub fn get_pos(&self) -> Point { Point{ x: self.x, y: self.y } } 112 | } 113 | 114 | 115 | #[derive(Debug)] 116 | pub enum ProcessingError { 117 | /// There are no more steps in the current processing phase. 118 | NoMoreSteps, 119 | 120 | ImageError(image::ImageError) 121 | } 122 | 123 | 124 | impl From<image::ImageError> for ProcessingError { 125 | fn from(err: image::ImageError) -> ProcessingError { ProcessingError::ImageError(err) } 126 | } 127 | 128 | 129 | /// Represents a single processing phase. 130 | pub trait ProcessingPhase { 131 | /// Executes one processing step. 132 | fn step(&mut self) -> Result<(), ProcessingError>; 133 | 134 | 135 | /// Returns a copy of the image that was processed by the last call to `step()`. 136 | /// 137 | /// Can be used to show processing visualization. 138 | /// 139 | fn get_curr_img(&mut self) -> Result<Image, ProcessingError>; 140 | } -------------------------------------------------------------------------------- /src/filters.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // Image filters. 11 | // 12 | 13 | use image::{Image, PixelFormat}; 14 | use std::cmp::{min, max}; 15 | use std::convert::{From}; 16 | use std::mem::{swap}; 17 | use utils; 18 | 19 | 20 | // Result of 3 iterations is quite close to a Gaussian blur 21 | const QUALITY_ESTIMATE_BOX_BLUR_ITERATIONS: usize = 3; 22 | 23 | 24 | /// Returns a copy of `img` with box blur applied; `img` has to be `Mono8`. 
25 | pub fn apply_box_blur(img: &Image, box_radius: u32, iterations: usize) -> Image { 26 | assert!(img.get_pixel_format() == PixelFormat::Mono8); 27 | 28 | let mut blurred_img = Image::new(img.get_width(), img.get_height(), 29 | img.get_pixel_format(), None, false); 30 | 31 | box_blur(img.get_pixels::<u8>(), blurred_img.get_pixels_mut::<u8>(), 32 | img.get_width(), img.get_height(), img.get_width() as usize, 33 | box_radius, iterations); 34 | 35 | blurred_img 36 | } 37 | 38 | 39 | /// Performs a blurring pass of a range of `length` elements. 40 | /// 41 | /// `T` is `u8` or `u32`, `src` points to the range's beginning, 42 | /// `step` is the distance between subsequent elements. 43 | /// 44 | fn box_blur_pass<T>(src: &[T], pix_sum: &mut [u32], box_radius: u32, length: usize, step: usize) 45 | where T: Copy, u32: From<T> { 46 | 47 | // Sum for the first pixel in the current line/row (count the last pixel multiple times if needed) 48 | pix_sum[0] = (box_radius as u32 + 1) * u32::from(src[0]); 49 | 50 | let mut i = step; 51 | while i <= box_radius as usize * step { 52 | pix_sum[0] += u32::from(src[min(i, (length - 1) * step)]); 53 | i += step; 54 | } 55 | 56 | // Starting region 57 | i = step; 58 | while i <= min((length - 1) * step, box_radius as usize * step) { 59 | pix_sum[i] = pix_sum[i - step] - u32::from(src[0]) + 60 | u32::from(src[min((length - 1) * step, i + box_radius as usize * step)]); 61 | i += step; 62 | } 63 | 64 | if length > box_radius as usize { 65 | // Middle region 66 | i = (box_radius as usize + 1) * step; 67 | while i < (length - box_radius as usize) * step { 68 | pix_sum[i] = pix_sum[i - step] - u32::from(src[i - (box_radius as usize + 1) * step]) + 69 | u32::from(src[i + box_radius as usize * step]); 70 | i += step; 71 | } 72 | 73 | // End region 74 | i = (length - box_radius as usize) * step; 75 | while i < length * step { 76 | pix_sum[i] = pix_sum[i - step] - 77 | u32::from(src[if i > (box_radius as usize + 1) * step { i - (box_radius as usize + 1) * step 
} else { 0 }]) + 78 | u32::from(src[min(i + box_radius as usize * step, (length - 1) * step)]); 79 | i += step; 80 | } 81 | } 82 | } 83 | 84 | 85 | /// Fills `blurred` with box-blurred contents of `src`. 86 | /// 87 | /// Both `src` and `blurred` have `width`*`height` elements (8-bit grayscale). 88 | /// `src_line_stride` is the distance between lines in `src` (which may be a part of a larger image). 89 | /// Line stride in `blurred` equals `width`. 90 | /// 91 | fn box_blur(src: &[u8], blurred: &mut [u8], 92 | width: u32, height: u32, src_line_stride: usize, 93 | box_radius: u32, iterations: usize) { 94 | assert!(iterations > 0); 95 | assert!(box_radius > 0); 96 | 97 | if width == 0 || height == 0 { return; } 98 | 99 | // First the 32-bit unsigned sums of neighborhoods are calculated horizontally 100 | // and (incrementally) vertically. The max value of a (unsigned) sum is: 101 | // 102 | // (2^8-1) * (box_radius*2 + 1)^2 103 | // 104 | // In order for it to fit in 32 bits, box_radius must be below ca. 2^11 - 1. 105 | 106 | assert!((box_radius as u32) < (1u32 << 11) - 1); 107 | 108 | // We need 2 summation buffers to act as source/destination (and then vice versa) 109 | let mut pix_sum_1 = utils::alloc_uninitialized::<u32>((width * height) as usize); 110 | let mut pix_sum_2 = utils::alloc_uninitialized::<u32>((width * height) as usize); 111 | 112 | let divisor = sqr!(2 * box_radius + 1) as u32; 113 | 114 | // For pixels less than 'box_radius' away from image border, assume 115 | // the off-image neighborhood consists of copies of the border pixel. 
116 | 117 | let mut src_array = &mut pix_sum_1[..]; 118 | let mut dest_array = &mut pix_sum_2[..]; 119 | 120 | 121 | for n in 0..iterations { 122 | swap(&mut src_array, &mut dest_array); 123 | 124 | // Calculate horizontal neighborhood sums 125 | if n == 0 { 126 | // Special case: in iteration 0 the source is the 8-bit 'src' 127 | let mut s_offs = 0usize; 128 | let mut d_offs = 0usize; 129 | 130 | for _ in 0..height { 131 | box_blur_pass(&src[range!(s_offs, width as usize)], 132 | &mut dest_array[range!(d_offs, width as usize)], 133 | box_radius, width as usize, 1); 134 | s_offs += src_line_stride; 135 | d_offs += width as usize; 136 | } 137 | } else { 138 | let mut offs = 0usize; 139 | 140 | for _ in 0..height { 141 | box_blur_pass(&src_array[range!(offs, width as usize)], 142 | &mut dest_array[range!(offs, width as usize)], 143 | box_radius, width as usize, 1); 144 | offs += width as usize; 145 | } 146 | } 147 | 148 | swap(&mut src_array, &mut dest_array); 149 | 150 | // Calculate vertical neighborhood sums 151 | for offs in 0..width as usize { 152 | box_blur_pass(&src_array[offs..], 153 | &mut dest_array[offs..], 154 | box_radius, height as usize, width as usize); 155 | } 156 | 157 | // Divide to obtain normalized result. We choose not to divide just once 158 | // after completing all iterations, because the 32-bit intermediate values 159 | // would overflow in as little as 3 iterations with 8-pixel box radius 160 | // for an all-white input image. In such case the final sums would be: 161 | // 162 | // 255 * ((2*8+1)^2)^3 = 6'155'080'095 163 | // 164 | // (where the exponent 3 = number of iterations) 165 | 166 | for i in dest_array.iter_mut() { 167 | *i /= divisor; 168 | } 169 | } 170 | 171 | // 'dest_array' is where the last summation results were stored, 172 | // now use it as source for producing the final 8-bit image in 'blurred' 173 | 174 | for i in 0 .. 
(width*height) as usize { 175 | blurred[i] = dest_array[i] as u8; 176 | } 177 | } 178 | 179 | 180 | /// Estimates quality of the specified area (8 bits per pixel). 181 | /// 182 | /// Quality is the sum of differences between input image and its blurred version. 183 | /// In other words, sum of values of the high-frequency component. 184 | /// The sum is normalized by dividing by the number of pixels. 185 | /// 186 | /// `pixels` starts at the beginning of a `width`x`height` area 187 | /// in an image with `line_stride` distance between lines. 188 | /// 189 | pub fn estimate_quality(pixels: &[u8], width: u32, height: u32, line_stride: usize, box_blur_radius: u32) -> f32 { 190 | let mut blurred = utils::alloc_uninitialized::<u8>((width*height) as usize); 191 | box_blur(pixels, &mut blurred[..], width, height, line_stride, box_blur_radius, QUALITY_ESTIMATE_BOX_BLUR_ITERATIONS); 192 | 193 | let mut quality = 0.0; 194 | 195 | let mut src_offs = 0usize; 196 | let mut blur_offs = 0usize; 197 | for _ in 0..height { 198 | for x in 0..width { 199 | quality += i32::abs(pixels[src_offs + x as usize] as i32 - blurred[blur_offs + x as usize] as i32) as f32; 200 | } 201 | src_offs += line_stride; 202 | blur_offs += width as usize; 203 | } 204 | 205 | quality / ((width*height) as f32) 206 | } 207 | 208 | 209 | /// Finds `remove_val` in the sorted `array`, replaces it with `new_val` and ensures `array` remains sorted. 
210 | fn shift_sorted_window(array: &mut [f64], remove_val: f64, new_val: f64) { 211 | // Locate 'remove_val' in 'array' 212 | let mut curr_idx = array.binary_search_by(|x| x.partial_cmp(&remove_val).unwrap()).unwrap(); 213 | 214 | // Insert 'new_val' into 'array' and (if needed) move it to keep the array sorted 215 | array[curr_idx] = new_val; 216 | while curr_idx <= array.len() - 2 && array[curr_idx] > array[curr_idx + 1] { 217 | array.swap(curr_idx, curr_idx + 1); 218 | curr_idx += 1; 219 | } 220 | while curr_idx > 0 && array[curr_idx] < array[curr_idx - 1] { 221 | array.swap(curr_idx - 1, curr_idx); 222 | curr_idx -= 1; 223 | } 224 | } 225 | 226 | /// Returns contents of `array` after median filtering. 227 | pub fn median_filter(array: &[f64], window_radius: usize) -> Vec<f64> { 228 | assert!(window_radius < array.len()); 229 | 230 | let wnd_len = 2 * window_radius + 1; 231 | 232 | // A sorted array 233 | let mut window = utils::alloc_uninitialized::<f64>(wnd_len); 234 | 235 | // Set initial window contents 236 | for i in 0..window_radius+1 { 237 | // upper half 238 | window[window_radius + i] = array[i]; 239 | 240 | // lower half 241 | if i < window_radius { 242 | window[i] = array[0]; 243 | } 244 | } 245 | 246 | window.sort_by(|x, y| x.partial_cmp(y).unwrap()); 247 | 248 | let mut output = utils::alloc_uninitialized::<f64>(array.len()); 249 | 250 | // Replace every 'array' element in 'output' with window's median and shift the window 251 | for i in 0..array.len() { 252 | output[i] = window[window_radius]; 253 | shift_sorted_window(&mut window, 254 | array[max(i as isize - window_radius as isize, 0) as usize], 255 | array[min(i + 1 + window_radius, array.len() - 1)]); 256 | } 257 | 258 | output 259 | } 260 | -------------------------------------------------------------------------------- /src/img_align.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip 
Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // Processing phase: image alignment (video stabilization). 11 | // 12 | 13 | use blk_match; 14 | use defs::{Point, ProcessingPhase, ProcessingError, Rect, WHITE_8BIT}; 15 | use filters; 16 | use img_seq; 17 | use img_seq::ImageSequence; 18 | use image; 19 | use image::{DemosaicMethod, Image, PixelFormat}; 20 | use std::cmp::{max, min}; 21 | use utils; 22 | 23 | 24 | pub struct AnchorConfig { 25 | /// Anchor points to use for alignment; if `None`, an anchor will be placed automatically. 26 | /// 27 | /// Coordinates are relative to the current image's origin. 28 | /// 29 | pub initial_anchors: Option<Vec<Point>>, 30 | 31 | /// Radius (in pixels) of anchors' reference blocks 32 | pub block_radius: u32, 33 | 34 | /// Images are aligned by matching blocks that are offset (horz. and vert.) by up to this radius (in pixels). 35 | pub search_radius: u32, 36 | 37 | /// Min. image brightness that an anchor can be placed at (values: [0; 1]). 38 | /// 39 | /// Value is relative to the image's darkest (0.0) and brightest (1.0) pixels. 40 | /// 41 | pub placement_brightness_threshold: f32 42 | } 43 | 44 | pub enum AlignmentMethod { 45 | /// Alignment via block-matching around the specified anchor points 46 | Anchors(AnchorConfig), 47 | 48 | /// Alignment using the image centroid 49 | Centroid 50 | } 51 | 52 | 53 | const QUALITY_EST_BOX_BLUR_RADIUS: u32 = 2; 54 | 55 | 56 | struct AnchorData { 57 | /// Current position 58 | pub pos: Point, 59 | 60 | pub is_valid: bool, 61 | 62 | /// Square image fragment (of the best quality so far) centered (after alignment) on `pos` 63 | pub ref_block: Image, 64 | 65 | /// Quality of `ref_block` 66 | pub ref_block_qual: f32 67 | } 68 | 69 | 70 | /// Set-theoretic intersection of all images after alignment (i.e. the fragment which is visible in all images). 
71 | #[derive(Default)]
72 | struct ImgIntersection {
73 |     /// Offset, relative to the first image's origin.
74 |     pub offset: Point,
75 |
76 |     /// Coordinates of the bottom right corner (belongs to the intersection), relative to the first image's origin.
77 |     pub bottom_right: Point,
78 |
79 |     pub width: u32,
80 |     pub height: u32
81 | }
82 |
83 |
84 | /// Contains results of processing performed by `ImgAlignmentProc`.
85 | #[derive(Default)]
86 | pub struct ImgAlignmentData {
87 |     /// Images' intersection
88 |     intersection: ImgIntersection,
89 |
90 |     /// Image offsets (relative to each image's origin) necessary for them to be aligned.
91 |     ///
92 |     /// Concerns only active images of the sequence.
93 |     ///
94 |     img_offsets: Vec<Point>
95 | }
96 |
97 |
98 | impl ImgAlignmentData {
99 |     /// Returns the images' intersection; its position is relative to the first image's origin.
100 |     pub fn get_intersection(&self) -> Rect {
101 |         Rect{ x: self.intersection.offset.x,
102 |               y: self.intersection.offset.y,
103 |               width: self.intersection.width,
104 |               height: self.intersection.height }
105 |     }
106 |
107 |
108 |     /// Returns offsets (relative to each image's origin) required for images to be aligned.
109 |     pub fn get_image_ofs(&self) -> &[Point] {
110 |         &self.img_offsets[..]
111 |     }
112 | }
113 |
114 | struct AnchorsState {
115 |     anchors: Vec<AnchorData>,
116 |     config: AnchorConfig,
117 |     active_anchor_idx: usize,
118 | }
119 |
120 | enum State {
121 |     Anchors(AnchorsState),
122 |     /// Contains current centroid position.
123 |     Centroid(Point)
124 | }
125 |
126 | /// Performs image alignment (video stabilization).
127 | ///
128 | /// Completed alignment results in determining the images' intersection,
129 | /// i.e. the common rectangular area visible in all frames. Due to the likely
130 | /// image drift, this area is usually smaller than the smallest image in the sequence.
131 | ///
132 | pub struct ImgAlignmentProc<'a> {
133 |     data_returned: bool,
134 |
135 |     is_complete: bool,
136 |
137 |     img_seq: &'a mut ImageSequence,
138 |
139 |     /// Current image index (within the active images' subset).
140 |     curr_img_idx: usize,
141 |
142 |     state: State,
143 |
144 |     data: ImgAlignmentData
145 | }
146 |
147 |
148 | impl<'a> ImgAlignmentProc<'a> {
149 |     /// Returns image alignment data determined during processing. May be called only once.
150 |     pub fn get_data(&mut self) -> ImgAlignmentData {
151 |         assert!(!self.data_returned && self.is_complete());
152 |         self.data_returned = true;
153 |         ::std::mem::replace(&mut self.data, ImgAlignmentData::default())
154 |     }
155 |
156 |
157 |     /// Initializes image alignment (video stabilization).
158 |     ///
159 |     /// If `align_method` is `AlignmentMethod::Anchors` and its `initial_anchors` field
160 |     /// is `None`, an anchor will be placed automatically. Meaning of the `AnchorConfig` fields:
161 |     ///
162 |     /// `block_radius` - radius (in pixels) of square blocks used for matching images
163 |     /// `search_radius` - max offset in pixels (horizontal and vertical) of blocks during matching
164 |     /// `placement_brightness_threshold` - min. image brightness that an anchor can be placed at (values: [0; 1]);
165 |     ///     value is relative to the image's darkest (0.0) and brightest (1.0) pixels
166 |     ///
167 |     pub fn init(
168 |         img_seq: &'a mut ImageSequence,
169 |         align_method: AlignmentMethod
170 |     ) -> Result<ImgAlignmentProc<'a>, ProcessingError> {
171 |         assert!(img_seq.get_active_img_count() > 0);
172 |
173 |         img_seq.seek_start();
174 |         let first_img = img_seq.get_curr_img()?;
175 |
176 |         let img_offsets = Vec::<Point>::with_capacity(img_seq.get_active_img_count());
177 |
178 |         let intersection = ImgIntersection{
179 |             offset: Point{ x: 0, y: 0 },
180 |             bottom_right: Point { x: i32::max_value(), y: i32::max_value() },
181 |             width: 0,
182 |             height: 0
183 |         };
184 |
185 |         let state;
186 |
187 |         match align_method {
188 |             AlignmentMethod::Anchors(anchor_cfg) => {
189 |                 assert!(anchor_cfg.block_radius > 0 && anchor_cfg.search_radius > 0);
190 |
191 |                 let mut anchor_data: Vec<AnchorData> = vec![];
192 |                 let mut anchor_positions: Vec<Point> = vec![];
193 |
194 |                 match &anchor_cfg.initial_anchors {
195 |                     None => anchor_positions.push(ImgAlignmentProc::suggest_anchor_pos(
196 |                         &first_img,
197 |                         anchor_cfg.placement_brightness_threshold,
198 |                         2 * anchor_cfg.block_radius
199 |                     )),
200 |
201 |                     Some(positions) => anchor_positions = positions.clone()
202 |                 }
203 |
204 |                 for anchor_pos in anchor_positions {
205 |                     let ref_block = first_img.convert_pix_fmt_of_subimage(
206 |                         PixelFormat::Mono8,
207 |                         Point{
208 |                             x: anchor_pos.x - anchor_cfg.block_radius as i32,
209 |                             y: anchor_pos.y - anchor_cfg.block_radius as i32
210 |                         },
211 |                         2 * anchor_cfg.block_radius,
212 |                         2 * anchor_cfg.block_radius,
213 |                         Some(DemosaicMethod::Simple)
214 |                     );
215 |
216 |                     let ref_block_qual = filters::estimate_quality(ref_block.get_pixels(),
217 |                                                                    ref_block.get_width(),
218 |                                                                    ref_block.get_height(),
219 |                                                                    ref_block.get_width() as usize,
220 |                                                                    QUALITY_EST_BOX_BLUR_RADIUS);
221 |                     anchor_data.push(
222 |                         AnchorData{ pos: anchor_pos,
223 |                                     is_valid: true,
224 |                                     ref_block,
225 |                                     ref_block_qual }
226 |                     );
227 |                 }
228 |
229 |                 state = State::Anchors(
230 |                     AnchorsState{ anchors: anchor_data, config: anchor_cfg, active_anchor_idx: 0}
231 |                 );
232 |             },
233 |
234 |             AlignmentMethod::Centroid => state = State::Centroid(first_img.get_centroid(first_img.get_img_rect()))
235 |         }
236 |
237 |         Ok(ImgAlignmentProc{
238 |             data_returned: false,
239 |             is_complete: false,
240 |             img_seq,
241 |             curr_img_idx: 0,
242 |             state,
243 |             data: ImgAlignmentData{ intersection, img_offsets }
244 |         })
245 |     }
246 |
247 |     pub fn is_complete(&self) -> bool { self.is_complete }
248 |
249 |     /// Returns the current number of anchors.
250 |     ///
251 |     /// The return value may increase during processing (when all existing
252 |     /// anchors become invalid and new ones have to be automatically created).
253 |     ///
254 |     pub fn get_anchor_count(&self) -> usize {
255 |         match &self.state {
256 |             State::Anchors(anchors) => anchors.anchors.len(),
257 |             _ => panic!("Alignment mode is not anchors.")
258 |         }
259 |     }
260 |
261 |     /// Returns current positions of anchor points.
262 |     pub fn get_anchors(&self) -> Vec<Point> {
263 |         match &self.state {
264 |             State::Anchors(anchors) => anchors.anchors.iter().map(|ref a| a.pos).collect(),
265 |             _ => panic!("Alignment mode is not anchors.")
266 |         }
267 |     }
268 |
269 |
270 |     pub fn is_anchor_valid(&self, anchor_idx: usize) -> bool {
271 |         match &self.state {
272 |             State::Anchors(anchors) => anchors.anchors[anchor_idx].is_valid,
273 |             _ => panic!("Alignment mode is not anchors.")
274 |         }
275 |     }
276 |
277 |
278 |     /// Returns the optimal position of a video stabilization anchor in `image`.
279 |     ///
280 |     /// `placement_brightness_threshold` - min. image brightness that an anchor point can be placed at;
281 |     ///     value is relative to the image's darkest (0.0) and brightest (1.0) pixels
282 |     ///
283 |     fn suggest_anchor_pos(image: &Image, placement_brightness_threshold: f32, ref_block_size: u32) -> Point {
284 |         let width = image.get_width();
285 |         let height = image.get_height();
286 |
287 |         let img8_storage: Box<Image>;
288 |         let mut img8: &Image = image;
289 |
290 |         if image.get_pixel_format() != PixelFormat::Mono8 {
291 |             img8_storage = Box::new(image.convert_pix_fmt(PixelFormat::Mono8, Some(DemosaicMethod::Simple)));
292 |             img8 = &img8_storage;
293 |         }
294 |
295 |         let (bmin, bmax) = utils::find_min_max_brightness(&img8);
296 |
297 |         let mut result = Point{ x: (width / 2) as i32, y: (height / 2) as i32 };
298 |         let mut best_qual = 0.0;
299 |
300 |         let num_pixels_in_block = sqr!(ref_block_size) as usize;
301 |
302 |         // Consider only the middle 3/4 of `image`
303 |         let mut y = height/8 + ref_block_size/2;
304 |         while y < 7*height/8 - ref_block_size/2 {
305 |             let mut x = width/8 + ref_block_size/2;
306 |             while x < 7*width/8 - ref_block_size {
307 |                 let mut num_above_thresh = 0usize;
308 |                 let mut num_white = 0usize;
309 |
310 |                 for ny in range!(y - ref_block_size/2, ref_block_size) {
311 |                     let line = img8.get_line_raw(ny);
312 |
313 |                     for nx in range!(x - ref_block_size/2, ref_block_size) {
314 |                         if line[nx as usize] == WHITE_8BIT { num_white += 1; }
315 |
316 |                         if line[nx as usize] != WHITE_8BIT && line[nx as usize] >= bmin + (placement_brightness_threshold * (bmax - bmin) as f32) as u8 {
317 |                             num_above_thresh += 1;
318 |                         }
319 |                     }
320 |                 }
321 |
322 |                 if num_above_thresh > num_pixels_in_block / 5 &&
323 |                    // Reject locations at the limb of an overexposed (fully white) disc; the white pixels
324 |                    // would weigh heavily during block matching and the point would tend to jump along the limb
325 |                    num_white < num_pixels_in_block/10 &&
326 |                    utils::assess_gradients_for_block_matching(
327 |                        img8,
328 |                        Point{ x: x as i32, y: y as i32},
329 |                        max(ref_block_size/2, 32)) {
330 |
331 |                     let qual = filters::estimate_quality(img8.get_mono8_pixels_from(Point{ x: (x - ref_block_size/2) as i32,
332 |                                                                                            y: (y - ref_block_size/2) as i32 }),
333 |                                                          ref_block_size, ref_block_size, img8.get_width() as usize, 4);
334 |
335 |                     if qual > best_qual {
336 |                         best_qual = qual;
337 |                         result = Point{ x: x as i32, y: y as i32 };
338 |                     }
339 |                 }
340 |
341 |                 x += ref_block_size/3;
342 |             }
343 |
344 |             y += ref_block_size/3;
345 |         }
346 |
347 |         result
348 |     }
349 |
350 |     /// Returns the current centroid position
351 |     pub fn get_current_centroid_pos(&self) -> Point {
352 |         match &self.state {
353 |             State::Centroid(centroid) => *centroid,
354 |             _ => panic!("Alignment mode is not centroid.")
355 |         }
356 |     }
357 | }
358 |
359 | fn determine_img_offset_using_centroid(current_centroid_pos: Point, img: &Image) -> Point {
360 |     let new_centroid_pos = img.get_centroid(img.get_img_rect());
361 |
362 |     Point{
363 |         x: new_centroid_pos.x - current_centroid_pos.x,
364 |         y: new_centroid_pos.y - current_centroid_pos.y
365 |     }
366 | }
367 |
368 | fn determine_img_offset_using_anchors(state: &mut AnchorsState, img: &Image) -> Point {
369 |     assert!(img.get_pixel_format() == PixelFormat::Mono8);
370 |
371 |     let mut active_anchor_offset = Point::default();
372 |
373 |     for (i, anchor) in state.anchors.iter_mut().enumerate() {
374 |         if anchor.is_valid {
375 |             let s_rad = state.config.search_radius;
376 |
377 |             let new_pos = blk_match::find_matching_position(
378 |                 anchor.pos,
379 |                 &anchor.ref_block,
380 |                 &img,
381 |                 s_rad,
382 |                 4
383 |             );
384 |
385 |             let blkw = anchor.ref_block.get_width();
386 |             let blkh = anchor.ref_block.get_height();
387 |
388 |             if new_pos.x < (blkw + s_rad) as i32 ||
389 |                new_pos.x > (img.get_width() - blkw - s_rad) as i32 ||
390 |                new_pos.y < (blkh + s_rad) as i32 ||
391 |                new_pos.y > (img.get_height() - blkh - s_rad) as i32 {
392 |
393 |                 anchor.is_valid = false;
394 |                 continue;
395 |             }
396 |
397 |             let new_qual = filters::estimate_quality(img.get_mono8_pixels_from(new_pos),
398 |                                                      blkw, blkh, img.get_width() as usize, QUALITY_EST_BOX_BLUR_RADIUS);
399 |
400 |             if new_qual > anchor.ref_block_qual {
401 |                 anchor.ref_block_qual = new_qual;
402 |
403 |                 // Refresh the reference block using the current image at the block's new position
404 |                 img.convert_pix_fmt_of_subimage_into(&mut anchor.ref_block,
405 |                                                      Point{ x: new_pos.x - (blkw/2) as i32, y: new_pos.y - (blkh/2) as i32 },
406 |                                                      Point{ x: 0, y: 0 },
407 |                                                      blkw, blkh,
408 |                                                      Some(DemosaicMethod::Simple));
409 |             }
410 |
411 |             if i == state.active_anchor_idx {
412 |                 active_anchor_offset.x = new_pos.x - anchor.pos.x;
413 |                 active_anchor_offset.y = new_pos.y - anchor.pos.y;
414 |             }
415 |
416 |             anchor.pos = new_pos;
417 |         }
418 |     }
419 |
420 |     if !state.anchors[state.active_anchor_idx].is_valid {
421 |         // select the next available valid anchor as "active"
422 |         let mut new_active_idx = state.active_anchor_idx + 1;
423 |
424 |         while new_active_idx < state.anchors.len() {
425 |             if state.anchors[new_active_idx].is_valid {
426 |                 break;
427 |             } else {
428 |                 new_active_idx += 1;
429 |             }
430 |         }
431 |
432 |         if new_active_idx >= state.anchors.len() {
433 |             // there are no more existing valid anchors; choose and add a new one
434 |
435 |             let new_pos = ImgAlignmentProc::suggest_anchor_pos(
436 |                 &img, state.config.placement_brightness_threshold, 2 * state.config.search_radius
437 |             );
438 |
439 |
440 |             let ref_block = img.get_fragment_copy(Point{ x: new_pos.x - state.config.block_radius as i32,
441 |                                                          y: new_pos.y - state.config.block_radius as i32 },
442 |                                                   2 * state.config.block_radius,
443 |                                                   2 * state.config.block_radius,
444 |                                                   false);
445 |             let ref_block_qual = filters::estimate_quality(&ref_block.get_pixels(),
446 |                                                            ref_block.get_width(),
447 |                                                            ref_block.get_height(),
448 |                                                            ref_block.get_width() as usize,
449 |                                                            QUALITY_EST_BOX_BLUR_RADIUS);
450 |             state.anchors.push(
451 |                 AnchorData{
452 |                     pos: new_pos,
453 |                     ref_block,
454 |                     ref_block_qual,
455 |                     is_valid: true,
456 |                 }
457 |             );
458 |
459 |             state.active_anchor_idx = state.anchors.len() - 1;
460 |         }
461 |     }
462 |
463 |     active_anchor_offset
464 | }
465 |
466 |
467 |
468 | impl<'a> ProcessingPhase for ImgAlignmentProc<'a> {
469 |     fn get_curr_img(&mut self) -> Result<Image, image::ImageError> {
470 |         self.img_seq.get_curr_img()
471 |     }
472 |
473 |
474 |     fn step(&mut self) -> Result<(), ProcessingError> {
475 |         if self.curr_img_idx == 0 {
476 |             self.data.img_offsets.push(Point{ x: 0, y: 0 });
477 |
478 |             let (width, height, _, _) = self.img_seq.get_curr_img_metadata()?;
479 |
480 |             self.data.intersection.bottom_right.x = width as i32 - 1;
481 |             self.data.intersection.bottom_right.y = height as i32 - 1;
482 |             self.curr_img_idx += 1;
483 |
484 |             Ok(())
485 |         } else {
486 |             match self.img_seq.seek_next() {
487 |                 Err(img_seq::SeekResult::NoMoreImages) => {
488 |                     self.data.intersection.width = (self.data.intersection.bottom_right.x - self.data.intersection.offset.x + 1) as u32;
489 |                     self.data.intersection.height = (self.data.intersection.bottom_right.y - self.data.intersection.offset.y + 1) as u32;
490 |
491 |                     self.is_complete = true;
492 |
493 |                     return Err(ProcessingError::NoMoreSteps);
494 |                 },
495 |
496 |                 _ => { }
497 |             }
498 |
499 |             let img = self.img_seq.get_curr_img()?;
500 |
501 |             let detected_img_offset;
502 |
503 |             match &mut self.state {
504 |                 State::Anchors(anchors) => {
505 |                     let img8_storage: Box<Image>;
506 |                     let mut img8 = &img;
507 |                     if img.get_pixel_format() != PixelFormat::Mono8 {
508 |                         img8_storage = Box::new(img.convert_pix_fmt(PixelFormat::Mono8, Some(DemosaicMethod::Simple)));
509 |                         img8 = &img8_storage;
510 |                     }
511 |
512 |                     detected_img_offset = determine_img_offset_using_anchors(anchors, img8);
513 |                 },
514 |
515 |                 State::Centroid(centroid_pos) => {
516 |                     detected_img_offset = determine_img_offset_using_centroid(*centroid_pos, &img);
517 |                     *centroid_pos += detected_img_offset;
518 |                 }
519 |             }
520 |
521 |             // `img_offsets` contains offsets relative to the first frame, so the detected offset is accumulated onto the previous image's offset
522 |             let new_ofs = *self.data.img_offsets.last().unwrap() + detected_img_offset;
523 |             self.data.img_offsets.push(new_ofs);
524 |
525 |             self.data.intersection.offset.x = max(self.data.intersection.offset.x, -new_ofs.x);
526 |             self.data.intersection.offset.y = max(self.data.intersection.offset.y, -new_ofs.y);
527 |             self.data.intersection.bottom_right.x = min(self.data.intersection.bottom_right.x, -new_ofs.x + img.get_width() as i32 - 1);
528 |             self.data.intersection.bottom_right.y = min(self.data.intersection.bottom_right.y, -new_ofs.y + img.get_height() as i32 - 1);
529 |             self.curr_img_idx += 1;
530 |
531 |             Ok(())
532 |         }
533 |     }
534 | }
535 |
--------------------------------------------------------------------------------
/src/img_list.rs:
--------------------------------------------------------------------------------
1 | //
2 | // libskry_r - astronomical image stacking
3 | // Copyright (c) 2017 Filip Szczerek
4 | //
5 | // This project is licensed under the terms of the MIT license
6 | // (see the LICENSE file for details).
7 | //
8 | //
9 | // File description:
10 | //   Image provider: list of image files.
11 | //
12 |
13 | use image::{FileType, Image, ImageError, Palette, PixelFormat};
14 | use img_seq_priv::{ImageProvider};
15 |
16 |
17 | pub struct ImageList {
18 |     file_names: Vec<String>
19 | }
20 |
21 |
22 | impl ImageList {
23 |     pub fn new(file_names: &[&str]) -> Box<ImageList> {
24 |         Box::new(
25 |             ImageList {
26 |                 file_names: {
27 |                     let mut v = vec![];
28 |                     for fname in file_names {
29 |                         v.push(fname.to_string())
30 |                     }
31 |
32 |                     v
33 |                 }
34 |             }
35 |         )
36 |     }
37 | }
38 |
39 |
40 | impl ImageProvider for ImageList {
41 |     fn get_img(&mut self, idx: usize) -> Result<Image, ImageError> {
42 |         Image::load(&self.file_names[idx], FileType::Auto)
43 |     }
44 |
45 |
46 |     fn get_img_metadata(&self, idx: usize) -> Result<(u32, u32, PixelFormat, Option<Palette>), ImageError> {
47 |         Image::get_metadata(&self.file_names[idx], FileType::Auto)
48 |     }
49 |
50 |
51 |     fn img_count(&self) -> usize {
52 |         self.file_names.len()
53 |     }
54 |
55 |
56 |     fn deactivate(&mut self) {
57 |         // Do nothing
58 |     }
59 | }
--------------------------------------------------------------------------------
/src/img_seq.rs:
--------------------------------------------------------------------------------
1 | //
2 | // libskry_r - astronomical image stacking
3 | // Copyright (c) 2017 Filip Szczerek
4 | //
5 | // This project is licensed under the terms of the MIT license
6 | // (see the LICENSE file for details).
7 | //
8 | //
9 | // File description:
10 | //   Image sequence.
11 | //
12 |
13 | use avi;
14 | use image::{Image, ImageError, Palette, PixelFormat};
15 | use img_list;
16 | use img_seq_priv::ImageProvider;
17 | use ser;
18 |
19 |
20 | pub enum SeekResult {
21 |     NoMoreImages
22 | }
23 |
24 |
25 | pub struct ImageSequence {
26 |     img_provider: Box<ImageProvider>,
27 |
28 |     active_flags: Vec<bool>,
29 |
30 |     curr_img_idx: usize,
31 |
32 |     curr_img_idx_within_active_subset: usize,
33 |
34 |     num_active_imgs: usize,
35 |
36 |     last_active_idx: usize,
37 |
38 |     last_loaded_img_idx: usize,
39 |
40 |     last_loaded_img: Option<Image>
41 | }
42 |
43 |
44 | impl ImageSequence {
45 |     fn init(img_provider: Box<ImageProvider>) -> ImageSequence {
46 |         let num_images = img_provider.img_count();
47 |
48 |         ImageSequence {
49 |             img_provider,
50 |             active_flags: vec![true; num_images],
51 |             curr_img_idx: 0,
52 |             curr_img_idx_within_active_subset: 0,
53 |             num_active_imgs: num_images,
54 |             last_active_idx: num_images - 1,
55 |             last_loaded_img_idx: 0,
56 |             last_loaded_img: None
57 |         }
58 |     }
59 |
60 |     pub fn new_image_list(file_names: &[&str]) -> ImageSequence {
61 |         ImageSequence::init(img_list::ImageList::new(file_names))
62 |     }
63 |
64 |
65 |     pub fn new_avi_video(file_name: &str) -> Result<ImageSequence, ImageError> {
66 |         Ok(ImageSequence::init(avi::AviFile::new(file_name)?))
67 |     }
68 |
69 |
70 |     pub fn new_ser_video(file_name: &str) -> Result<ImageSequence, ImageError> {
71 |         Ok(ImageSequence::init(ser::SerFile::new(file_name)?))
72 |     }
73 |
74 |
75 |     fn get_img(&mut self, idx: usize) -> Result<Image, ImageError> {
76 |         if idx == self.last_loaded_img_idx {
77 |             match &self.last_loaded_img {
78 |                 &Some(ref img) => return Ok(img.clone()),
79 |                 &None => ()
80 |             }
81 |         }
82 |
83 |         self.last_loaded_img = Some(self.img_provider.get_img(idx)?);
84 |         self.last_loaded_img_idx = idx;
85 |
86 |         Ok(self.last_loaded_img.iter().next().unwrap().clone())
87 |     }
88 |
89 |
90 |     pub fn get_curr_img_idx(&self) -> usize {
91 |         self.curr_img_idx
92 |     }
93 |
94 |
95 |     pub fn get_curr_img_idx_within_active_subset(&self) -> usize {
96 |         self.curr_img_idx_within_active_subset
97 |     }
98 |
99 |
100 |     pub fn get_img_count(&self) -> usize {
101 |         self.img_provider.img_count()
102 |     }
103 |
104 |
105 |     /// Seeks to the first active image
106 |     pub fn seek_start(&mut self) {
107 |         self.curr_img_idx = 0;
108 |         while !self.active_flags[self.curr_img_idx] {
109 |             self.curr_img_idx += 1;
110 |         }
111 |
112 |         self.curr_img_idx_within_active_subset = 0;
113 |     }
114 |
115 |
116 |     /// Seeks forward to the next active image
117 |     pub fn seek_next(&mut self) -> Result<(), SeekResult> {
118 |         if self.curr_img_idx < self.last_active_idx {
119 |             self.curr_img_idx += 1;
120 |             while !self.active_flags[self.curr_img_idx] {
121 |                 self.curr_img_idx += 1;
122 |             }
123 |             self.curr_img_idx_within_active_subset += 1;
124 |
125 |             Ok(())
126 |         } else {
127 |             Err(SeekResult::NoMoreImages)
128 |         }
129 |     }
130 |
131 |
132 |     pub fn get_curr_img(&mut self) -> Result<Image, ImageError> {
133 |         let idx_to_load = self.curr_img_idx;
134 |         self.get_img(idx_to_load)
135 |     }
136 |
137 |
138 |     /// Returns (width, height, pixel format, palette)
139 |     pub fn get_curr_img_metadata(&mut self) -> Result<(u32, u32, PixelFormat, Option<Palette>), ImageError> {
140 |         if self.curr_img_idx == self.last_loaded_img_idx {
141 |             match &self.last_loaded_img {
142 |                 &Some(ref img) => return Ok((img.get_width(), img.get_height(), img.get_pixel_format(), img.get_palette().clone())),
143 |                 &None => ()
144 |             }
145 |         }
146 |
147 |         self.img_provider.get_img_metadata(self.curr_img_idx)
148 |     }
149 |
150 |
151 |     pub fn get_img_by_index(&mut self, idx: usize) -> Result<Image, ImageError> {
152 |         self.get_img(idx)
153 |     }
154 |
155 |
156 |     /// Should be called when `img_seq` will not be read for some time.
157 |     ///
158 |     /// In case of image lists, the function does nothing. For video files, it closes them.
159 |     /// Video files are opened automatically (and kept open) every time a frame is loaded.
160 |     ///
161 |     pub fn deactivate(&mut self) {
162 |         self.img_provider.deactivate()
163 |     }
164 |
165 |
166 |     /// Marks images as active. Element count of `is_active` must equal the number of images in the sequence.
167 |     pub fn set_active_imgs(&mut self, is_active: &[bool]) {
168 |         assert!(is_active.len() == self.active_flags.len());
169 |         self.active_flags.clear();
170 |         self.active_flags.extend_from_slice(is_active);
171 |
172 |         self.num_active_imgs = 0;
173 |         for i in 0 .. self.img_provider.img_count() {
174 |             if self.active_flags[i] {
175 |                 self.last_active_idx = i;
176 |                 self.num_active_imgs += 1;
177 |             }
178 |         }
179 |     }
180 |
181 |
182 |     pub fn is_img_active(&self, img_idx: usize) -> bool {
183 |         self.active_flags[img_idx]
184 |     }
185 |
186 |
187 |     /// Element count of the result equals the number of images in the sequence.
188 |     pub fn get_img_active_flags(&self) -> &[bool] {
189 |         &self.active_flags[..]
190 |     }
191 |
192 |
193 |     pub fn get_active_img_count(&self) -> usize {
194 |         self.num_active_imgs
195 |     }
196 |
197 |
198 |     /// Translates index in the active images' subset into absolute index.
199 |     pub fn get_absolute_img_idx(&self, active_img_idx: usize) -> usize {
200 |         let mut abs_idx = 0usize;
201 |         let mut active_img_counter = 0usize;
202 |         while abs_idx < self.img_provider.img_count() {
203 |             if self.active_flags[abs_idx] {
204 |                 if active_img_counter == active_img_idx {
205 |                     break;
206 |                 }
207 |                 active_img_counter += 1;
208 |             }
209 |             abs_idx += 1;
210 |         }
211 |
212 |         abs_idx
213 |     }
214 | }
--------------------------------------------------------------------------------
/src/img_seq_priv.rs:
--------------------------------------------------------------------------------
1 | //
2 | // libskry_r - astronomical image stacking
3 | // Copyright (c) 2017 Filip Szczerek
4 | //
5 | // This project is licensed under the terms of the MIT license
6 | // (see the LICENSE file for details).
7 | //
8 | //
9 | // File description:
10 | //   Image provider trait.
11 | //
12 |
13 | use image::{Image, ImageError, Palette, PixelFormat};
14 |
15 |
16 | /// Image provider used by `ImageSequence`.
17 | pub trait ImageProvider {
18 |     fn img_count(&self) -> usize;
19 |
20 |     fn get_img(&mut self, idx: usize) -> Result<Image, ImageError>;
21 |
22 |     /// Returns width, height, pixel format, palette.
23 |     fn get_img_metadata(&self, idx: usize) -> Result<(u32, u32, PixelFormat, Option<Palette>), ImageError>;
24 |
25 |     fn deactivate(&mut self);
26 | }
--------------------------------------------------------------------------------
/src/lib.rs:
--------------------------------------------------------------------------------
1 | #![crate_type = "lib"]
2 | #![crate_name = "skry"]
3 |
4 | #[macro_use]
5 | mod utils;
6 |
7 | pub mod defs;
8 | pub mod filters;
9 | pub mod image;
10 | pub mod img_align;
11 | pub mod img_seq;
12 | pub mod quality;
13 | pub mod ref_pt_align;
14 | pub mod triangulation;
15 | pub mod stacking;
16 |
17 | mod avi;
18 | mod blk_match;
19 | mod bmp;
20 | mod img_list;
21 | mod img_seq_priv;
22 | mod ser;
23 | mod tiff;
--------------------------------------------------------------------------------
/src/ref_pt_align.rs:
--------------------------------------------------------------------------------
1 | //
2 | // libskry_r - astronomical image stacking
3 | // Copyright (c) 2017 Filip Szczerek
4 | //
5 | // This project is licensed under the terms of the MIT license
6 | // (see the LICENSE file for details).
7 | //
8 | //
9 | // File description:
10 | //   Processing phase: reference point alignment.
11 | //
12 |
13 | use blk_match;
14 | use defs::{Point, PointFlt, ProcessingError, ProcessingPhase, Rect};
15 | use image::{DemosaicMethod, Image, ImageError, PixelFormat};
16 | use img_align::{ImgAlignmentData};
17 | use img_seq::{ImageSequence, SeekResult};
18 | use quality::QualityEstimationData;
19 | use triangulation::Triangulation;
20 |
21 |
22 | /// Value in pixels.
23 | const BLOCK_MATCHING_INITIAL_SEARCH_STEP: u32 = 2;
24 |
25 | const ADDITIONAL_FIXED_PTS_PER_BORDER: usize = 4;
26 | const ADDITIONAL_FIXED_PT_OFFSET_DIV: u32 = 4;
27 |
28 |
29 |
30 | /// Position of a reference point in image.
31 | #[derive(Clone, Copy)]
32 | pub struct RefPtPosition {
33 |     pub pos: Point,
34 |     /// True if the quality criteria for the image are met.
35 |     pub is_valid: bool
36 | }
37 |
38 |
39 | struct ReferencePoint {
40 |     /// Index of the associated quality estimation area.
41 |     pub qual_est_area_idx: Option<usize>,
42 |
43 |     /// Reference block used for block matching.
44 |     pub ref_block: Option<Image>,
45 |
46 |     /// Positions (and their validity) in every active image.
47 |     pub positions: Vec<RefPtPosition>,
48 |
49 |     /// Index of the last valid position in `positions`.
50 |     pub last_valid_pos_idx: Option<usize>,
51 |
52 |     /// Length of the last translation vector.
53 |     pub last_transl_vec_len: f64,
54 |     /// Squared length of the last translation vector.
55 |     pub last_transl_vec_sqlen: f64
56 | }
57 |
58 |
59 | /// Sum of reference points' translation vector lengths in an image.
60 | #[derive(Default, Clone, Copy)]
61 | struct TVecSum {
62 |     sum_len: f64,
63 |     sum_sq_len: f64,
64 |     num_terms: usize
65 | }
66 |
67 |
68 | /// Number of the most recent images used to keep a "sliding window" average of ref. pt. translation vector lengths.
69 | const TVEC_SUM_NUM_IMAGES: usize = 10;
70 |
71 |
72 | /// Selection criterion used for reference point alignment and stacking.
73 | ///
74 | /// A "fragment" is a triangular patch.
75 | ///
76 | #[derive(Clone, Copy)]
77 | pub enum QualityCriterion {
78 |     /// Percentage of best-quality fragments.
79 |     PercentageBest(u32),
80 |
81 |     /// Minimum relative quality (%).
82 |     ///
83 |     /// Only fragments with quality above specified threshold (% relative to [min,max] of
84 |     /// the corresponding quality estimation area) will be used.
85 |     ///
86 |     MinRelative(f32),
87 |
88 |     /// Number of best-quality fragments.
89 |     NumberBest(usize)
90 | }
91 |
92 |
93 | #[derive(Default)]
94 | pub struct RefPointAlignmentData {
95 |     reference_pts: Vec<ReferencePoint>,
96 |
97 |     /// Delaunay triangulation of the reference points
98 |     triangulation: Triangulation,
99 |
100 |     /// Number of valid positions of all points in all images.
101 |     num_valid_positions: u64,
102 |
103 |     /// Number of rejected positions of all points in all images.
104 |     ///
105 |     /// Concerns positions rejected by outlier testing, not by
106 |     /// too low image quality.
107 |     ///
108 |     num_rejected_positions: u64
109 | }
110 |
111 |
112 | impl RefPointAlignmentData {
113 |     /// Returns the number of reference points.
114 |     pub fn get_num_ref_points(&self) -> usize {
115 |         self.reference_pts.len()
116 |     }
117 |
118 |
119 |     /// Returns the final (i.e. averaged over all images) positions of reference points.
120 |     pub fn get_final_positions(&self) -> Vec<PointFlt> {
121 |
122 |         let mut result = Vec::<PointFlt>::with_capacity(self.reference_pts.len());
123 |         let num_images = self.reference_pts[0].positions.len();
124 |         for ref_pt in &self.reference_pts {
125 |             let mut valid_pos_count = 0usize;
126 |             result.push(PointFlt{ x: 0.0, y: 0.0 });
127 |             let last = result.last_mut().unwrap();
128 |             for img_idx in 0..num_images {
129 |                 // Due to how 'update_ref_pt_positions()' works, it is guaranteed that at least one element "is valid"
130 |                 if ref_pt.positions[img_idx].is_valid {
131 |                     last.x += ref_pt.positions[img_idx].pos.x as f32;
132 |                     last.y += ref_pt.positions[img_idx].pos.y as f32;
133 |                     valid_pos_count += 1;
134 |                 }
135 |             }
136 |             last.x /= valid_pos_count as f32;
137 |             last.y /= valid_pos_count as f32;
138 |         }
139 |
140 |         result
141 |     }
142 |
143 |
144 |     pub fn get_ref_pts_triangulation(&self) -> &Triangulation {
145 |         &self.triangulation
146 |     }
147 |
148 |
149 |     //TODO: make it also available from RefPointAlignmentProc, but only for the current image (for visualization)
150 |     /// Returns a reference point's position in the specified image and the position's "valid" flag.
151 |     pub fn get_ref_pt_pos(&self, point_idx: usize, img_idx: usize) -> &RefPtPosition {
152 |         &self.reference_pts[point_idx].positions[img_idx]
153 |     }
154 | }
155 |
156 |
157 | #[derive(Clone)]
158 | struct TriangleQuality {
159 |     // Min and max sum of triangle vertices' quality in an image.
160 |     pub qmin: f32,
161 |     pub qmax: f32,
162 |
163 |     /// i-th element is the sorted quality index in i-th image (quality index 0 == worst).
164 |     pub sorted_idx: Vec<usize>
165 | }
166 |
167 |
168 | pub struct RefPointAlignmentProc<'a> {
169 |     img_seq: &'a mut ImageSequence,
170 |
171 |     img_align_data: &'a ImgAlignmentData,
172 |
173 |     qual_est_data: &'a QualityEstimationData,
174 |
175 |     quality_criterion: QualityCriterion,
176 |
177 |     /// Size of (square) reference blocks used for block matching.
178 |     ///
179 |     /// Some ref. points (near image borders) may have smaller blocks (but always square).
180 |     ///
181 |     ref_block_size: u32,
182 |
183 |     /// Search radius used during block matching.
184 |     search_radius: u32,
185 |
186 |     /// Array of boolean flags indicating if a ref. point has been updated during the current step.
187 |     update_flags: Vec<bool>,
188 |
189 |     /// Contains one element for each triangle in `triangulation`.
190 |     tri_quality: Vec<TriangleQuality>,
191 |
192 |     is_complete: bool,
193 |
194 |     data_returned: bool,
195 |
196 |     /// Translation vectors of ref. points in recent images.
197 |     ///
198 |     /// Summary (for all ref. points) of the translation vectors between subsequent
199 |     /// "valid" positions within the most recent `TVEC_SUM_NUM_IMAGES`.
200 |     /// Used for clipping outliers in `update_ref_pt_positions()`.
201 |     ///
202 |     tvec_img_sum: [TVecSum; TVEC_SUM_NUM_IMAGES],
203 |
204 |     /// Index in `tvec_img_sum` to store the next sum at.
205 |     tvec_next_entry: usize,
206 |
207 |     data: RefPointAlignmentData,
208 | }
209 |
210 |
211 | impl<'a> RefPointAlignmentProc<'a> {
212 |
213 |     /// Returns reference point alignment data determined during processing. May be called only once.
214 |     pub fn get_data(&mut self) -> RefPointAlignmentData {
215 |         assert!(!self.data_returned && self.is_complete);
216 |         self.data_returned = true;
217 |         ::std::mem::replace(&mut self.data, RefPointAlignmentData::default())
218 |     }
219 |
220 |
221 |     /// Initializes reference point alignment processor.
222 |     ///
223 |     /// # Parameters
224 |     ///
225 |     /// * `img_seq` - Image sequence to process.
226 |     ///
227 |     /// * `first_img_ofs` - First active image's offset relative to the images' intersection.
228 |     ///
229 |     /// * `points` - Reference point positions; if none, points will be placed automatically.
230 |     ///   Positions are specified within the images' intersection and must not lie outside it.
231 |     ///
232 |     /// * `quality_criterion` - Criterion for updating ref. point position (and later for stacking).
233 |     ///
234 |     /// * `ref_block_size` - Size (in pixels) of reference blocks used for block matching.
235 |     ///
236 |     /// * `search_radius` - Search radius (in pixels) used during block matching.
237 |     ///
238 |     /// * `placement_brightness_threshold` - Min. image brightness that a reference point can be placed at.
239 |     ///   Value (from [0; 1]) is relative to the image's darkest (0.0) and brightest (1.0) pixels.
240 |     ///
241 |     /// * `structure_threshold` - Structure detection threshold; value of 1.2 is recommended.
242 |     ///   The greater the value, the more local contrast is required to place a ref. point.
243 |     ///
244 |     /// * `structure_scale` - Corresponds to pixel size of smallest structures. Should equal 1
245 |     ///   for optimally-sampled or undersampled images. Use higher values for oversampled (blurry) material.
246 |     ///
247 |     /// * `spacing` - Spacing in pixels between reference points.
248 | /// 249 | pub fn init( 250 | img_seq: &'a mut ImageSequence, 251 | img_align_data: &'a ImgAlignmentData, 252 | qual_est_data: &'a QualityEstimationData, 253 | points: Option<Vec<Point>>, 254 | quality_criterion: QualityCriterion, 255 | ref_block_size: u32, 256 | search_radius: u32, 257 | 258 | // Parameters used if `points`=None (i.e., using automatic placement of ref. points) 259 | 260 | placement_brightness_threshold: f32, 261 | structure_threshold: f32, 262 | structure_scale: u32, 263 | spacing: u32) 264 | -> Result<RefPointAlignmentProc<'a>, ImageError> { 265 | 266 | // FIXME: detect if image size < reference block size and return an error; THE SAME goes for image alignment phase 267 | 268 | img_seq.seek_start(); 269 | 270 | let intersection = qual_est_data.get_intersection(); 271 | 272 | let mut first_img = img_seq.get_curr_img()?; 273 | 274 | first_img = first_img.convert_pix_fmt_of_subimage( 275 | PixelFormat::Mono8, 276 | intersection.get_pos() + img_align_data.get_image_ofs()[0], 277 | intersection.width, 278 | intersection.height, 279 | Some(DemosaicMethod::Simple)); 280 | 281 | let actual_points: Vec<Point>; 282 | 283 | if points.is_none() { 284 | actual_points = qual_est_data.suggest_ref_point_positions( 285 | placement_brightness_threshold, 286 | structure_threshold, 287 | structure_scale, 288 | spacing, 289 | ref_block_size); 290 | } else { 291 | actual_points = points.unwrap(); 292 | } 293 | 294 | let mut reference_pts = Vec::<ReferencePoint>::new(); 295 | 296 | for point in actual_points { 297 | assert!(point.x >= 0 && point.x < intersection.width as i32 && 298 | point.y >= 0 && point.y < intersection.height as i32); 299 | 300 | // Not initializing the reference block yet, as we do not know if the current area 301 | // meets the quality criteria in the current (first) image 302 | 303 | reference_pts.push( 304 | ReferencePoint{ 305 | qual_est_area_idx: Some(qual_est_data.get_area_idx_at_pos(&point)), 306 | ref_block: None, 307 | positions: vec![RefPtPosition{ pos: point, is_valid: false};
img_seq.get_active_img_count()], 308 | last_valid_pos_idx: None, 309 | last_transl_vec_len: 0.0, 310 | last_transl_vec_sqlen: 0.0 }); 311 | } 312 | 313 | RefPointAlignmentProc::append_surrounding_fixed_points(&mut reference_pts, intersection, img_seq); 314 | 315 | // Envelope of all reference points (including the fixed ones) 316 | let envelope = Rect{ 317 | x: -(intersection.width as i32) / ADDITIONAL_FIXED_PT_OFFSET_DIV as i32, 318 | y: -(intersection.height as i32) / ADDITIONAL_FIXED_PT_OFFSET_DIV as i32, 319 | width: intersection.width + 2 * intersection.width / ADDITIONAL_FIXED_PT_OFFSET_DIV, 320 | height: intersection.height + 2 * intersection.height / ADDITIONAL_FIXED_PT_OFFSET_DIV }; 321 | 322 | // Find the Delaunay triangulation of the reference points 323 | 324 | let initial_positions: Vec<Point> = reference_pts.iter().map(|p| p.positions[0].pos).collect(); 325 | 326 | let triangulation = Triangulation::find_delaunay_triangulation(&initial_positions, &envelope); 327 | 328 | // The triangulation object contains 3 additional points comprising a triangle that covers all the other points. 329 | // These 3 points shall have fixed position and are not associated with any quality estimation area. 330 | // Add them to the list now and fill their position for all images.
331 | 332 | for i in range!(triangulation.get_vertices().len() - 3, 3) { 333 | reference_pts.push(RefPointAlignmentProc::create_fixed_point(triangulation.get_vertices()[i], img_seq)); 334 | } 335 | 336 | let update_flags = vec![false; reference_pts.len()]; 337 | 338 | let tri_quality = Vec::<TriangleQuality>::with_capacity(triangulation.get_triangles().len()); 339 | 340 | let mut ref_pt_align = RefPointAlignmentProc{ 341 | img_seq, 342 | img_align_data, 343 | qual_est_data, 344 | quality_criterion, 345 | ref_block_size, 346 | search_radius, 347 | update_flags, 348 | tri_quality, 349 | is_complete: false, 350 | data_returned: false, 351 | tvec_img_sum: [TVecSum::default(); TVEC_SUM_NUM_IMAGES], 352 | tvec_next_entry: 0, 353 | data: RefPointAlignmentData{ reference_pts, triangulation, num_valid_positions: 0, num_rejected_positions: 0 } 354 | }; 355 | 356 | ref_pt_align.calc_triangle_quality(); 357 | 358 | ref_pt_align.update_ref_pt_positions( 359 | &first_img, 360 | 0, 361 | // `first_img` is already just an intersection-sized fragment of the first image, 362 | // so pass a full-image "intersection" and a zero offset 363 | &first_img.get_img_rect(), &Point{ x: 0, y: 0 }); 364 | 365 | Ok(ref_pt_align) 366 | } 367 | 368 | 369 | fn update_ref_pt_positions( 370 | &mut self, 371 | img: &Image, 372 | img_idx: usize, 373 | intersection: &Rect, 374 | img_alignment_ofs: &Point) { 375 | 376 | // Reminder: positions of reference points and quality estimation areas are specified 377 | // within the intersection of all images after alignment. Therefore all accesses 378 | // to the current image `img` have to take it into account and use `intersection` 379 | // and `img_alignment_ofs` to apply proper offsets.
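The coordinate convention spelled out in the reminder above can be illustrated in isolation: a reference point's position is stored relative to the images' intersection, and sampling the current image requires adding the intersection's origin plus that image's alignment offset. The `Point` type and function below are simplified stand-ins for illustration only, not the library's definitions from `defs.rs`:

```rust
// Illustration of mapping an intersection-relative position into the current
// image's coordinate space, as done before block matching. The `Point` type
// here is a simplified stand-in, not the one from `defs.rs`.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point { x: i32, y: i32 }

fn to_image_space(pos: Point, intersection_origin: Point, img_alignment_ofs: Point) -> Point {
    Point {
        x: pos.x + intersection_origin.x + img_alignment_ofs.x,
        y: pos.y + intersection_origin.y + img_alignment_ofs.y,
    }
}

fn main() {
    // A point at (10, 20) within the intersection, whose origin in the current
    // image is (5, 5), with an alignment offset of (-2, 3):
    let p = to_image_space(Point { x: 10, y: 20 }, Point { x: 5, y: 5 }, Point { x: -2, y: 3 });
    assert_eq!(p, Point { x: 13, y: 28 });
}
```

A matching position found in image space is converted back by subtracting the same two offsets, which is what `update_ref_pt_positions()` does when computing `new_pos` from `new_pos_in_img`.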
380 | 381 | for i in &mut self.update_flags { 382 | *i = false; 383 | } 384 | 385 | let mut curr_step_tvec = TVecSum{ sum_len: 0.0, sum_sq_len: 0.0, num_terms: 0 }; 386 | 387 | let num_active_imgs = self.img_seq.get_active_img_count(); 388 | 389 | // #pragma omp parallel for 390 | for (tri_idx, tri) in self.data.triangulation.get_triangles().iter().enumerate() { 391 | // Update positions of reference points belonging to triangle `[tri_idx]` iff the sum of their quality est. areas 392 | // is at least the specified threshold (relative to the min and max sum) 393 | 394 | let mut qsum = 0.0; 395 | 396 | let tri_pts = [tri.v0, tri.v1, tri.v2]; 397 | 398 | for tri_p in tri_pts.iter() { 399 | match self.data.reference_pts[*tri_p].qual_est_area_idx { 400 | Some(qarea) => qsum += self.qual_est_data.get_area_quality(qarea, img_idx), 401 | _ => () 402 | } 403 | } 404 | 405 | let curr_tri_q = &self.tri_quality[tri_idx]; 406 | 407 | let is_quality_sufficient; 408 | 409 | match self.quality_criterion { 410 | QualityCriterion::PercentageBest(percentage) => { 411 | is_quality_sufficient = curr_tri_q.sorted_idx[img_idx] >= (0.01 * ((100 - percentage) * (num_active_imgs as u32)) as f32) as usize; 412 | }, 413 | 414 | QualityCriterion::MinRelative(threshold) => { 415 | is_quality_sufficient = qsum >= curr_tri_q.qmin + 0.01 * threshold * (curr_tri_q.qmax - curr_tri_q.qmin); 416 | }, 417 | 418 | QualityCriterion::NumberBest(threshold) => { 419 | is_quality_sufficient = threshold > num_active_imgs || curr_tri_q.sorted_idx[img_idx] >= num_active_imgs - threshold; 420 | } 421 | } 422 | 423 | for tri_p in tri_pts.iter() { 424 | 425 | let ref_pt = &mut self.data.reference_pts[*tri_p]; 426 | 427 | if ref_pt.qual_est_area_idx.is_none() || self.update_flags[*tri_p] { 428 | continue; 429 | } 430 | 431 | let mut found_new_valid_pos = false; 432 | 433 | if is_quality_sufficient { 434 | let mut is_first_update = false; 435 | 436 | if ref_pt.ref_block.is_none() { 437 | // This is the first time 
this point meets the quality criteria; 438 | // initialize its reference block 439 | if img_idx > 0 { 440 | // Point's position in the current image has not been filled in yet, do it now 441 | ref_pt.positions[img_idx].pos = ref_pt.positions[img_idx-1].pos; 442 | } 443 | 444 | ref_pt.ref_block = Some(self.qual_est_data.create_reference_block( 445 | ref_pt.positions[img_idx].pos, 446 | self.ref_block_size)); 447 | is_first_update = true; 448 | } 449 | 450 | let current_ref_pos = ref_pt.positions[if 0 == img_idx { 0 } else { img_idx - 1 }].pos; 451 | 452 | let new_pos_in_img = blk_match::find_matching_position( 453 | current_ref_pos + intersection.get_pos() + *img_alignment_ofs, 454 | ref_pt.ref_block.iter().next().unwrap(), 455 | img, 456 | self.search_radius, 457 | BLOCK_MATCHING_INITIAL_SEARCH_STEP); 458 | 459 | let new_pos = new_pos_in_img - intersection.get_pos() - *img_alignment_ofs; 460 | 461 | // Additional rejection criterion: ignore the first position update if the new position is too distant. 462 | // Otherwise the point would be moved too far at the very start of the ref. point alignment 463 | // phase and might not recover, i.e. its subsequent position updates might be getting rejected 464 | // by the additional check after the current outermost `for` loop. 465 | if !is_first_update || 466 | sqr!(new_pos.x - current_ref_pos.x) + sqr!(new_pos.y - current_ref_pos.y) <= sqr!(self.ref_block_size as i32 / 3 /*TODO: make it adaptive somehow? or use the current avg. 
deviation*/) { 467 | 468 | ref_pt.positions[img_idx].pos = new_pos; 469 | ref_pt.positions[img_idx].is_valid = true; 470 | 471 | match ref_pt.last_valid_pos_idx { 472 | Some(lvi) => { 473 | ref_pt.last_transl_vec_sqlen = Point::sqr_dist(&ref_pt.positions[img_idx].pos, 474 | &ref_pt.positions[lvi].pos) as f64; 475 | 476 | ref_pt.last_transl_vec_len = f64::sqrt(ref_pt.last_transl_vec_sqlen); 477 | 478 | curr_step_tvec.sum_len += ref_pt.last_transl_vec_len; 479 | curr_step_tvec.sum_sq_len += ref_pt.last_transl_vec_sqlen; 480 | curr_step_tvec.num_terms += 1; 481 | }, 482 | _ => () 483 | } 484 | 485 | found_new_valid_pos = true; 486 | } 487 | } 488 | 489 | if !found_new_valid_pos && img_idx > 0 { 490 | ref_pt.positions[img_idx].pos = ref_pt.positions[img_idx-1].pos; 491 | } 492 | 493 | self.update_flags[*tri_p] = true; 494 | } 495 | } 496 | 497 | let mut prev_num_terms = 0usize; 498 | let mut prev_sum_len = 0.0f64; 499 | let mut prev_sum_sq_len = 0.0f64; 500 | for tis in &self.tvec_img_sum { 501 | prev_num_terms += tis.num_terms; 502 | prev_sum_len += tis.sum_len; 503 | prev_sum_sq_len += tis.sum_sq_len; 504 | } 505 | 506 | if curr_step_tvec.num_terms > 0 { 507 | let sum_len_avg: f64 = (prev_sum_len + curr_step_tvec.sum_len) / (prev_num_terms + curr_step_tvec.num_terms) as f64; 508 | let sum_sq_len_avg: f64 = (prev_sum_sq_len + curr_step_tvec.sum_sq_len) / (prev_num_terms + curr_step_tvec.num_terms) as f64; 509 | 510 | let std_deviation: f64 = 511 | if sum_sq_len_avg - sqr!(sum_len_avg) >= 0.0 { f64::sqrt(sum_sq_len_avg - sqr!(sum_len_avg)) } else { 0.0 }; 512 | 513 | // Iterate over the points found "valid" in the current step and if their current translation 514 | // lies too far from the "sliding window" translation average, clear their "valid" flag. 
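The clipping rule applied below rejects a point whose latest translation length exceeds the sliding-window mean by 1.5 standard deviations, with the deviation computed as sqrt(E[x²] − E[x]²) from the accumulated sums. A self-contained sketch of that rule (illustrative code, not part of the library):

```rust
// Illustrative sketch of the 1.5-sigma clipping rule used above (not library code).
// `lens` stands for translation vector lengths accumulated over the recent images.
fn outlier_threshold(lens: &[f64]) -> f64 {
    let n = lens.len() as f64;
    let mean = lens.iter().sum::<f64>() / n;
    let mean_sq = lens.iter().map(|x| x * x).sum::<f64>() / n;
    // Guard against a tiny negative difference caused by rounding,
    // as the library does before taking the square root.
    let variance = (mean_sq - mean * mean).max(0.0);
    mean + 1.5 * variance.sqrt()
}

fn main() {
    let lens = [1.0, 1.2, 0.9, 1.1, 6.0];
    let threshold = outlier_threshold(&lens);
    // The length-6.0 translation lies above the threshold and would be rejected;
    // the ordinary ones stay below it.
    assert!(6.0 > threshold);
    assert!(1.2 < threshold);
}
```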
515 | for ref_pt in &mut self.data.reference_pts { 516 | if ref_pt.qual_est_area_idx.is_some() && ref_pt.positions[img_idx].is_valid && 517 | ref_pt.last_transl_vec_len > sum_len_avg + 1.5 * std_deviation { 518 | 519 | ref_pt.positions[img_idx].is_valid = false; 520 | ref_pt.positions[img_idx].pos = ref_pt.positions[img_idx-1].pos; 521 | 522 | curr_step_tvec.sum_len -= ref_pt.last_transl_vec_len; 523 | curr_step_tvec.num_terms -= 1; 524 | 525 | self.data.num_rejected_positions += 1; 526 | } else { 527 | ref_pt.last_valid_pos_idx = Some(img_idx); 528 | self.data.num_valid_positions += 1; 529 | } 530 | } 531 | 532 | self.tvec_img_sum[self.tvec_next_entry] = curr_step_tvec; 533 | self.tvec_next_entry = (self.tvec_next_entry + 1) % TVEC_SUM_NUM_IMAGES; 534 | } else { 535 | for ref_pt in &mut self.data.reference_pts { 536 | if ref_pt.positions[img_idx].is_valid { 537 | ref_pt.last_valid_pos_idx = Some(img_idx); 538 | self.data.num_valid_positions += 1; 539 | } 540 | } 541 | } 542 | } 543 | 544 | 545 | fn calc_triangle_quality(&mut self) { 546 | let num_active_imgs = self.img_seq.get_active_img_count(); 547 | 548 | #[derive(Default, Clone, Copy)] 549 | struct ImgIdxToQuality { 550 | img_idx: usize, 551 | quality: f32 552 | } 553 | 554 | let mut img_to_qual: Vec<ImgIdxToQuality> = vec![Default::default(); num_active_imgs]; 555 | 556 | for tri in self.data.triangulation.get_triangles() { 557 | let tri_points = [&self.data.reference_pts[tri.v0], 558 | &self.data.reference_pts[tri.v1], 559 | &self.data.reference_pts[tri.v2]]; 560 | 561 | let mut curr_tri_qual = TriangleQuality{ 562 | qmin: ::std::f32::MAX, 563 | qmax: 0.0, 564 | sorted_idx: vec![0usize; num_active_imgs] 565 | }; 566 | 567 | for img_idx in 0..num_active_imgs { 568 | let mut qsum = 0.0; 569 | 570 | for tri_p in tri_points.iter() { 571 | match tri_p.qual_est_area_idx { 572 | Some(qarea_idx) => qsum += self.qual_est_data.get_area_quality(qarea_idx, img_idx), 573 | _ => () // else it is one of the fixed boundary points; does not
affect triangle's quality 574 | } 575 | } 576 | 577 | if qsum < curr_tri_qual.qmin { 578 | curr_tri_qual.qmin = qsum; 579 | } 580 | if qsum > curr_tri_qual.qmax { 581 | curr_tri_qual.qmax = qsum; 582 | } 583 | 584 | img_to_qual[img_idx] = ImgIdxToQuality{ img_idx, quality: qsum }; 585 | } 586 | 587 | img_to_qual.sort_unstable_by(|x, y| x.quality.partial_cmp(&y.quality).unwrap()); 588 | 589 | // See comment at `TriangleQuality::sorted_idx` declaration for details 590 | for img_idx in 0..num_active_imgs { 591 | curr_tri_qual.sorted_idx[img_to_qual[img_idx].img_idx] = img_idx; 592 | } 593 | 594 | self.tri_quality.push(curr_tri_qual); 595 | } 596 | } 597 | 598 | 599 | /// Adds a fixed reference point (not tracked during processing). 600 | fn create_fixed_point(pos: Point, img_seq: &ImageSequence) -> ReferencePoint { 601 | ReferencePoint{ 602 | qual_est_area_idx: None, 603 | ref_block: None, 604 | positions: vec![RefPtPosition{ pos, is_valid: true }; img_seq.get_active_img_count()], 605 | last_valid_pos_idx: Some(0), 606 | last_transl_vec_len: 0.0, 607 | last_transl_vec_sqlen: 0.0 } 608 | } 609 | 610 | 611 | /// Adds a few fixed points along and just outside intersection's borders. 612 | /// 613 | /// This way after triangulation the near-border points will not generate skinny triangles, 614 | /// which would result in locally degraded stack quality. 615 | /// 616 | /// Example of triangulation without the additional points: 617 | /// 618 | /// ```text 619 | /// o o 620 | /// 621 | /// +--------------+ 622 | /// | * * * | 623 | /// | |<--images' intersection 624 | /// | | 625 | /// +--------------+ 626 | /// 627 | /// 628 | /// 629 | /// o 630 | /// 631 | /// ``` 632 | /// `o` = external fixed points added by `Triangulation::find_delaunay_triangulation()` 633 | /// 634 | /// The internal near-border points (`*`) would generate skinny triangles with the upper (`o`) points.
635 | /// With additional fixed points: 636 | /// 637 | /// ```text 638 | /// o o 639 | /// 640 | /// o o o 641 | /// 642 | /// +--------------+ 643 | /// o | * * * | o 644 | /// | | 645 | /// o | | o 646 | /// 647 | /// ``` 648 | /// 649 | fn append_surrounding_fixed_points( 650 | ref_points: &mut Vec<ReferencePoint>, 651 | intersection: &Rect, 652 | img_seq: &mut ImageSequence) { 653 | 654 | for i in 1 .. ADDITIONAL_FIXED_PTS_PER_BORDER + 1 { 655 | // Along top border 656 | ref_points.push(RefPointAlignmentProc::create_fixed_point( 657 | Point{ x: (i as u32 * intersection.width) as i32 / (ADDITIONAL_FIXED_PTS_PER_BORDER as i32 + 1), 658 | y: -(intersection.height as i32) / ADDITIONAL_FIXED_PT_OFFSET_DIV as i32 }, 659 | &img_seq)); 660 | 661 | // Along bottom border 662 | ref_points.push(RefPointAlignmentProc::create_fixed_point( 663 | Point{ x: (i as u32 * intersection.width) as i32 / (ADDITIONAL_FIXED_PTS_PER_BORDER as i32 + 1), 664 | y: (intersection.height + intersection.height / ADDITIONAL_FIXED_PT_OFFSET_DIV) as i32 }, 665 | &img_seq)); 666 | 667 | // Along left border 668 | ref_points.push(RefPointAlignmentProc::create_fixed_point( 669 | Point{ x: -(intersection.width as i32) / ADDITIONAL_FIXED_PT_OFFSET_DIV as i32, 670 | y: (i as u32 * intersection.height) as i32 / (ADDITIONAL_FIXED_PTS_PER_BORDER + 1) as i32 }, 671 | &img_seq)); 672 | 673 | // Along right border 674 | ref_points.push(RefPointAlignmentProc::create_fixed_point( 675 | Point{ x: (intersection.width + intersection.width / ADDITIONAL_FIXED_PT_OFFSET_DIV) as i32, 676 | y: (i as u32 * intersection.height) as i32 / (ADDITIONAL_FIXED_PTS_PER_BORDER + 1) as i32 }, 677 | &img_seq)); 678 | } 679 | } 680 | 681 | 682 | /// Makes sure that for every triangle there is at least 1 image where all 3 vertices are "valid".
683 | fn ensure_tris_are_valid(&mut self) { 684 | let num_active_imgs = self.img_seq.get_active_img_count(); 685 | 686 | let triangles = self.data.triangulation.get_triangles(); 687 | 688 | for tri in triangles { 689 | let tri_v = [tri.v0, tri.v1, tri.v2]; 690 | let ref_pts = &mut self.data.reference_pts; 691 | 692 | // Best quality and associated img index where the triangle's vertices are not all "valid" 693 | let mut best_tri_qual = 0.0; 694 | let mut best_tri_qual_img_idx: Option<usize> = None; 695 | 696 | for img_idx in 0..num_active_imgs { 697 | if ref_pts[tri.v0].positions[img_idx].is_valid && 698 | ref_pts[tri.v1].positions[img_idx].is_valid && 699 | ref_pts[tri.v2].positions[img_idx].is_valid { 700 | 701 | best_tri_qual_img_idx = None; 702 | break; 703 | } else { 704 | let mut tri_qual = 0.0; 705 | for v in &tri_v { 706 | match ref_pts[*v].qual_est_area_idx { 707 | Some(qa_idx) => tri_qual += self.qual_est_data.get_area_quality(qa_idx, img_idx), 708 | _ => () 709 | } 710 | } 711 | if tri_qual > best_tri_qual { 712 | best_tri_qual = tri_qual; 713 | best_tri_qual_img_idx = Some(img_idx); 714 | } 715 | } 716 | } 717 | 718 | match best_tri_qual_img_idx { 719 | Some(best_idx) => { 720 | // The triangle's vertices turned out not to be simultaneously "valid" in any image, 721 | // which is required (in at least one image) during stacking phase. 722 | // 723 | // Mark them "valid" anyway in the image where their quality sum is highest.
724 | for v in &tri_v { 725 | ref_pts[*v].positions[best_idx].is_valid = true; 726 | } 727 | }, 728 | _ => () 729 | } 730 | } 731 | } 732 | 733 | } 734 | 735 | 736 | impl<'a> ProcessingPhase for RefPointAlignmentProc<'a> { 737 | fn get_curr_img(&mut self) -> Result<Image, ImageError> 738 | { 739 | self.img_seq.get_curr_img() 740 | } 741 | 742 | 743 | fn step(&mut self) -> Result<(), ProcessingError> { 744 | match self.img_seq.seek_next() { 745 | Err(SeekResult::NoMoreImages) => { 746 | self.ensure_tris_are_valid(); 747 | self.is_complete = true; 748 | return Err(ProcessingError::NoMoreSteps); 749 | }, 750 | Ok(()) => () 751 | } 752 | 753 | let img_idx = self.img_seq.get_curr_img_idx_within_active_subset(); 754 | 755 | let mut img = self.img_seq.get_curr_img()?; 756 | if img.get_pixel_format() != PixelFormat::Mono8 { 757 | img = img.convert_pix_fmt(PixelFormat::Mono8, Some(DemosaicMethod::Simple)); 758 | } 759 | 760 | self.update_ref_pt_positions( 761 | &img, img_idx, 762 | &self.img_align_data.get_intersection(), 763 | &self.img_align_data.get_image_ofs()[img_idx]); 764 | 765 | Ok(()) 766 | } 767 | } 768 | -------------------------------------------------------------------------------- /src/ser.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // SER support.
11 | // 12 | 13 | use image::{bytes_per_channel, bytes_per_pixel, Image, ImageError, Palette, PixelFormat}; 14 | use img_seq_priv::ImageProvider; 15 | use std::fs::{File, OpenOptions}; 16 | use std::io; 17 | use std::io::{Read, Seek, SeekFrom}; 18 | use std::mem::size_of; 19 | use utils; 20 | 21 | 22 | #[derive(PartialEq)] 23 | enum SerColorFormat { 24 | Mono = 0, 25 | BayerRGGB = 8, 26 | BayerGRBG = 9, 27 | BayerGBRG = 10, 28 | BayerBGGR = 11, 29 | BayerCYYM = 16, 30 | BayerYCMY = 17, 31 | BayerYMCY = 18, 32 | BayerMYYC = 19, 33 | RGB = 100, 34 | BGR = 101 35 | } 36 | 37 | 38 | #[derive(Debug)] 39 | pub enum SerError { 40 | Io(io::Error), 41 | UnsupportedFormat, 42 | InvalidBitDepth 43 | } 44 | 45 | 46 | impl From<io::Error> for SerError { 47 | fn from(err: io::Error) -> SerError { SerError::Io(err) } 48 | } 49 | 50 | 51 | fn get_ser_color_fmt(color_id: u32) -> Result<SerColorFormat, SerError> { 52 | 53 | match color_id { 54 | color_id if color_id == SerColorFormat::Mono as u32 => Ok(SerColorFormat::Mono), 55 | //TODO: uncomment once demosaicing is ported 56 | // color_id if color_id == SerColorFormat::BayerRGGB as u32 => Ok(SerColorFormat::BayerRGGB), 57 | // color_id if color_id == SerColorFormat::BayerGRBG as u32 => Ok(SerColorFormat::BayerGRBG), 58 | // color_id if color_id == SerColorFormat::BayerGBRG as u32 => Ok(SerColorFormat::BayerGBRG), 59 | // color_id if color_id == SerColorFormat::BayerBGGR as u32 => Ok(SerColorFormat::BayerBGGR), 60 | color_id if color_id == SerColorFormat::RGB as u32 => Ok(SerColorFormat::RGB), 61 | color_id if color_id == SerColorFormat::BGR as u32 => Ok(SerColorFormat::BGR), 62 | _ => Err(SerError::UnsupportedFormat) 63 | } 64 | } 65 | 66 | 67 | /// See comment for SerHeader::little_endian 68 | const SER_LITTLE_ENDIAN: u32 = 0; 69 | 70 | 71 | #[repr(C, packed)] 72 | struct SerHeader { 73 | signature: [u8; 14], 74 | camera_series_id: u32, 75 | color_id: u32, 76 | // Online documentation claims this is 0 when 16-bit pixel data 77 | // is big-endian, but the meaning is
actually reversed. 78 | little_endian: u32, 79 | img_width: u32, 80 | img_height: u32, 81 | bits_per_channel: u32, 82 | frame_count: u32, 83 | observer: [u8; 40], 84 | instrument: [u8; 40], 85 | telescope: [u8; 40], 86 | date_time: i64, 87 | date_time_utc: i64 88 | } 89 | 90 | 91 | pub struct SerFile { 92 | file_name: String, 93 | /// Becomes `None` after calling `deactivate()`. 94 | file: Option<File>, 95 | /// Concerns 16-bit pixel data 96 | little_endian_data: bool, 97 | ser_color_fmt: SerColorFormat, 98 | pix_fmt: PixelFormat, 99 | num_images: usize, 100 | width: u32, 101 | height: u32 102 | } 103 | 104 | 105 | /// Reverses RGB<->BGR. 106 | fn reverse_rgb<T>(line: &mut [T]) { 107 | for x in 0 .. line.len()/3 { 108 | line.swap(3*x, 3*x + 2); 109 | } 110 | } 111 | 112 | 113 | impl SerFile { 114 | 115 | pub fn new(file_name: &str) -> Result<Box<SerFile>, SerError> { 116 | 117 | let mut file = OpenOptions::new().read(true).write(false).open(file_name)?; 118 | 119 | let fheader: SerHeader = utils::read_struct(&mut file)?; 120 | 121 | let ser_color_fmt = get_ser_color_fmt(u32::from_le(fheader.color_id))?; 122 | 123 | let bits_per_channel = u32::from_le(fheader.bits_per_channel); 124 | if bits_per_channel > 16 { 125 | return Err(SerError::InvalidBitDepth); 126 | } 127 | 128 | let pix_fmt = match ser_color_fmt { 129 | SerColorFormat::Mono => if bits_per_channel <= 8 { PixelFormat::Mono8 } else { PixelFormat::Mono16 }, 130 | SerColorFormat::RGB | SerColorFormat::BGR => if bits_per_channel <= 8 { PixelFormat::RGB8 } else { PixelFormat::RGB16 }, 131 | //TODO: uncomment once demosaicing is ported 132 | // SerColorFormat::BayerBGGR => if bits_per_channel <= 8 { PixelFormat::CfaBGGR8 } else { PixelFormat::CfaBGGR16 }, 133 | // SerColorFormat::BayerGBRG => if bits_per_channel <= 8 { PixelFormat::CfaGBRG8 } else { PixelFormat::CfaGBRG16 }, 134 | // SerColorFormat::BayerGRBG => if bits_per_channel <= 8 { PixelFormat::CfaGRBG8 } else { PixelFormat::CfaGRBG16 }, 135 | // SerColorFormat::BayerRGGB => if
bits_per_channel <= 8 { PixelFormat::CfaRGGB8 } else { PixelFormat::CfaRGGB16 }, 136 | _ => panic!() // cannot happen, thanks to get_ser_color_fmt() 137 | }; 138 | 139 | let little_endian_data = u32::from_le(fheader.little_endian) == SER_LITTLE_ENDIAN; 140 | let width = u32::from_le(fheader.img_width); 141 | let height = u32::from_le(fheader.img_height); 142 | let num_images = u32::from_le(fheader.frame_count) as usize; 143 | 144 | Ok(Box::new( 145 | SerFile{ file_name: String::from(file_name), 146 | file: None, 147 | ser_color_fmt, 148 | little_endian_data, 149 | pix_fmt, 150 | num_images, 151 | width, 152 | height } 153 | )) 154 | } 155 | } 156 | 157 | impl ImageProvider for SerFile { 158 | fn get_img(&mut self, idx: usize) -> Result<Image, ImageError> { 159 | assert!(idx < self.num_images); 160 | 161 | if self.file.is_none() { 162 | self.file = Some(OpenOptions::new().read(true).write(false).open(&self.file_name)?); 163 | } 164 | 165 | let file: &mut File = self.file.iter_mut().next().unwrap(); 166 | 167 | let mut img = Image::new(self.width, self.height, self.pix_fmt, None, false); 168 | 169 | let frame_size = (self.width * self.height) as usize * bytes_per_pixel(self.pix_fmt); 170 | 171 | file.seek(SeekFrom::Start((size_of::<SerHeader>() + idx * frame_size) as u64))?; 172 | 173 | for y in 0..self.height { 174 | file.read_exact(img.get_line_raw_mut(y))?; 175 | 176 | if self.ser_color_fmt == SerColorFormat::BGR { 177 | match self.pix_fmt { 178 | PixelFormat::RGB8 => reverse_rgb(img.get_line_mut::<u8>(y)), 179 | PixelFormat::RGB16 => reverse_rgb(img.get_line_mut::<u16>(y)), 180 | _ => panic!() // cannot happen 181 | } 182 | } 183 | } 184 | 185 | if bytes_per_channel(self.pix_fmt) > 1 && (utils::is_machine_big_endian() ^ !self.little_endian_data) { 186 | utils::swap_words16(&mut img); 187 | } 188 | 189 | Ok(img) 190 | } 191 | 192 | 193 | fn get_img_metadata(&self, _: usize) -> Result<(u32, u32, PixelFormat, Option<Palette>), ImageError> { 194 | Ok((self.width, self.height, self.pix_fmt, None)) 195 | } 196 |
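The endianness handling in `get_img()` above swaps the bytes of every 16-bit sample when the machine's byte order differs from the file's. The library's own `swap_words16` lives in `utils.rs` and operates on a whole `Image`; the sketch below only illustrates the per-sample operation on a plain slice:

```rust
// Sketch of 16-bit sample byte-swapping as applied after reading SER frame data.
// Illustrative only; the library's `utils::swap_words16` works on an `Image`.
fn swap_words16(samples: &mut [u16]) {
    for s in samples.iter_mut() {
        *s = s.swap_bytes();
    }
}

fn main() {
    let mut samples = [0x1234u16, 0xABCD];
    swap_words16(&mut samples);
    assert_eq!(samples, [0x3412, 0xCDAB]);
}
```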
197 | 198 | fn img_count(&self) -> usize { 199 | self.num_images 200 | } 201 | 202 | 203 | fn deactivate(&mut self) { 204 | self.file = None; 205 | } 206 | } 207 | -------------------------------------------------------------------------------- /src/stacking.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // Processing phase: image stacking (shift-and-add summation). 11 | // 12 | 13 | use defs::{Point, PointFlt, ProcessingPhase, ProcessingError, Rect}; 14 | use image; 15 | use image::{DemosaicMethod, Image, ImageError, PixelFormat}; 16 | use img_align::ImgAlignmentData; 17 | use img_seq::{ImageSequence, SeekResult}; 18 | use ref_pt_align::RefPointAlignmentData; 19 | use std::cmp::{min, max}; 20 | 21 | 22 | struct StackTrianglePoint { 23 | // Image coordinates in the stack 24 | pub x: i32, 25 | pub y: i32, 26 | 27 | // Barycentric coordinates in the parent triangle 28 | pub u: f32, 29 | pub v: f32 30 | } 31 | 32 | 33 | /// Performs image stacking (shift-and-add summation). 34 | pub struct StackingProc<'a> { 35 | img_seq: &'a mut ImageSequence, 36 | 37 | img_align_data: &'a ImgAlignmentData, 38 | 39 | ref_pt_align_data: &'a RefPointAlignmentData, 40 | 41 | is_complete: bool, 42 | 43 | /// For each triangle in `ref_pt_align_data.triangulation`, contains a list of points comprising it. 44 | rasterized_tris: Vec<Vec<StackTrianglePoint>>, 45 | 46 | /// Final positions (within the images' intersection) of the reference points, 47 | /// i.e. the average over all images where the points are valid. 48 | final_ref_pt_pos: Vec<PointFlt>, 49 | 50 | /// Element [i] = number of images that were stacked to produce the i-th pixel in `image_stack`. 51 | added_img_count: Vec<usize>, 52 | 53 | /// Format: Mono32f or RGB32f.
54 | image_stack: Image, 55 | 56 | first_step_complete: bool, 57 | 58 | /// Triangle indices (from `ref_pt_align_data.triangulation`) stacked in the current step. 59 | curr_step_stacked_triangles: Vec<usize>, 60 | 61 | /// Contains inverted flat-field values (1/flat-field) 62 | flatfield_inv: Option<Image> 63 | } 64 | 65 | 66 | impl<'a> StackingProc<'a> { 67 | pub fn init(img_seq: &'a mut ImageSequence, 68 | img_align_data: &'a ImgAlignmentData, 69 | ref_pt_align_data: &'a RefPointAlignmentData, 70 | flatfield: Option<Image>) 71 | -> Result<StackingProc<'a>, ImageError> { 72 | 73 | img_seq.seek_start(); 74 | 75 | let final_ref_pt_pos = ref_pt_align_data.get_final_positions(); 76 | 77 | let triangulation = &ref_pt_align_data.get_ref_pts_triangulation(); 78 | let mut rasterized_tris = Vec::<Vec<StackTrianglePoint>>::with_capacity(triangulation.get_triangles().len()); 79 | let curr_step_stacked_triangles = Vec::<usize>::with_capacity(triangulation.get_triangles().len()); 80 | 81 | let intersection = img_align_data.get_intersection(); 82 | let mut pixel_occupied = vec![false; (intersection.width * intersection.height) as usize]; 83 | 84 | for tri in triangulation.get_triangles() { 85 | rasterized_tris.push( 86 | rasterize_triangle( 87 | &final_ref_pt_pos[tri.v0], 88 | &final_ref_pt_pos[tri.v1], 89 | &final_ref_pt_pos[tri.v2], 90 | Rect{ x: 0, y: 0, width: intersection.width, height: intersection.height }, 91 | &mut pixel_occupied)); 92 | } 93 | //TODO: see if after rasterization there are any pixels not belonging to any triangle and assign them 94 | 95 | let (_, _, pix_fmt, _) = img_seq.get_curr_img_metadata()?; 96 | 97 | let stack_pix_fmt = if image::get_num_channels(pix_fmt) == 1 && !pix_fmt.is_cfa() { 98 | PixelFormat::Mono32f 99 | } else { 100 | PixelFormat::RGB32f 101 | }; 102 | 103 | let image_stack = Image::new(intersection.width, intersection.height, stack_pix_fmt, None, true); 104 | 105 | let added_img_count = vec![0usize; (intersection.width * intersection.height) as usize]; 106 | 107 | let mut flatfield_inv: Option<Image> = None;
108 | 109 | if flatfield.is_some() { 110 | let ffield = flatfield.unwrap(); 111 | 112 | let mut ffield_inv = 113 | if ffield.get_pixel_format() == PixelFormat::Mono32f { 114 | ffield.get_copy() 115 | } else { 116 | ffield.convert_pix_fmt(PixelFormat::Mono32f, Some(DemosaicMethod::HqLinear)) 117 | }; 118 | 119 | let pixels = ffield_inv.get_pixels_mut::<f32>(); 120 | let max_val = pixels.iter().max_by(|x, y| x.partial_cmp(y).unwrap()).unwrap().clone(); 121 | 122 | for pix in pixels.iter_mut() { 123 | if *pix > 0.0 { 124 | *pix = max_val / *pix; 125 | } 126 | } 127 | 128 | flatfield_inv = Some(ffield_inv); 129 | } 130 | 131 | Ok(StackingProc{ 132 | img_seq, 133 | img_align_data, 134 | ref_pt_align_data, 135 | is_complete: false, 136 | rasterized_tris, 137 | final_ref_pt_pos, 138 | added_img_count, 139 | image_stack, 140 | first_step_complete: false, 141 | curr_step_stacked_triangles, 142 | flatfield_inv 143 | }) 144 | } 145 | 146 | 147 | /// Returns the image stack; can be used only after stacking completes. 148 | pub fn get_image_stack(&self) -> &Image { 149 | assert!(self.is_complete); 150 | &self.image_stack 151 | } 152 | 153 | 154 | /// Returns an incomplete image stack, updated after every stacking step. 155 | pub fn get_partial_image_stack(&self) -> Image { 156 | let mut pstack = self.image_stack.get_copy(); 157 | normalize_image_stack(&self.added_img_count, &mut pstack, self.flatfield_inv.is_some()); 158 | pstack 159 | } 160 | 161 | 162 | pub fn is_complete(&self) -> bool { 163 | self.is_complete 164 | } 165 | 166 | 167 | /// Returns an array of triangle indices stacked in current step. 168 | /// 169 | /// Meant to be called right after `step()`. Values are indices into triangle array 170 | /// of the triangulation returned by `RefPointAlignmentData.get_ref_pts_triangulation()`. 171 | /// Vertex coordinates do not correspond to the triangulation, but to the array 172 | /// returned by `get_ref_pt_stacking_pos()`.
173 | /// 174 | pub fn get_curr_step_stacked_triangles(&self) -> &[usize] { 175 | &self.curr_step_stacked_triangles 176 | } 177 | 178 | /// Returns reference point positions as used during stacking. 179 | pub fn get_ref_pt_stacking_pos(&self) -> &[PointFlt] { 180 | &self.final_ref_pt_pos 181 | } 182 | } 183 | 184 | 185 | impl<'a> ProcessingPhase for StackingProc<'a> 186 | { 187 | fn get_curr_img(&mut self) -> Result<Image, ImageError> { 188 | self.img_seq.get_curr_img() 189 | } 190 | 191 | 192 | fn step(&mut self) -> Result<(), ProcessingError> { 193 | if self.first_step_complete { 194 | match self.img_seq.seek_next() { 195 | Err(SeekResult::NoMoreImages) => { 196 | normalize_image_stack(&self.added_img_count, &mut self.image_stack, self.flatfield_inv.is_some()); 197 | self.is_complete = true; 198 | return Err(ProcessingError::NoMoreSteps); 199 | }, 200 | _ => () 201 | } 202 | } 203 | 204 | let mut img = self.img_seq.get_curr_img()?; 205 | let curr_img_idx = self.img_seq.get_curr_img_idx_within_active_subset(); 206 | 207 | let intersection = self.img_align_data.get_intersection(); 208 | let alignment_ofs = self.img_align_data.get_image_ofs()[curr_img_idx]; 209 | 210 | if img.get_pixel_format() != self.image_stack.get_pixel_format() { 211 | img = img.convert_pix_fmt(self.image_stack.get_pixel_format(), Some(DemosaicMethod::HqLinear)); 212 | } 213 | 214 | let num_channels = image::get_num_channels(img.get_pixel_format()); 215 | 216 | // For each triangle, check if its vertices are valid in the current image. If they are, 217 | // add the triangle's contents to the corresponding triangle patch in the stack.
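Further down in this function, the source coordinates for every stack pixel are computed from the barycentric coordinates (u, v, 1−u−v) stored with each rasterized triangle point, weighted by the triangle's current vertex positions. A self-contained sketch of that interpolation (with tuples standing in for the library's point types):

```rust
// Illustrative barycentric interpolation, as used to map a stack pixel back
// into the current image via its triangle's vertex positions (simplified types).
fn interpolate(u: f32, v: f32, p0: (f32, f32), p1: (f32, f32), p2: (f32, f32)) -> (f32, f32) {
    let w = 1.0 - u - v;
    (u * p0.0 + v * p1.0 + w * p2.0,
     u * p0.1 + v * p1.1 + w * p2.1)
}

fn main() {
    // At u = v = 1/3 the result is the triangle's centroid.
    let c = interpolate(1.0 / 3.0, 1.0 / 3.0, (0.0, 0.0), (3.0, 0.0), (0.0, 3.0));
    assert!((c.0 - 1.0).abs() < 1e-6 && (c.1 - 1.0).abs() < 1e-6);
}
```

Because the barycentric weights are fixed per rasterized point, only the three vertex positions change from image to image, which keeps the per-pixel work cheap.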
218 | 219 | self.curr_step_stacked_triangles.clear(); 220 | 221 | let triangulation = self.ref_pt_align_data.get_ref_pts_triangulation(); 222 | 223 | let envelope = Rect{ x: 0, y: 0, width: intersection.width, height: intersection.height }; 224 | 225 | // First, find the list of triangles valid in the current step 226 | 227 | for (tri_idx, tri) in triangulation.get_triangles().iter().enumerate() { 228 | let p0 = self.ref_pt_align_data.get_ref_pt_pos(tri.v0, curr_img_idx); 229 | let p1 = self.ref_pt_align_data.get_ref_pt_pos(tri.v1, curr_img_idx); 230 | let p2 = self.ref_pt_align_data.get_ref_pt_pos(tri.v2, curr_img_idx); 231 | 232 | if p0.is_valid && p1.is_valid && p2.is_valid { 233 | // Due to the way reference point alignment works, it is allowed for a point 234 | // to be outside the image intersection at some times. Must be careful not to 235 | // try interpolating pixel values from outside the current image. 236 | // (Cannot use `intersection` here directly, because its origin may not be (0,0), 237 | // and `p0`, `p1`, `p2` have coordinates relative to intersection's origin.) 
238 |                 let p0_inside = envelope.contains_point(&p0.pos); 239 |                 let p1_inside = envelope.contains_point(&p1.pos); 240 |                 let p2_inside = envelope.contains_point(&p2.pos); 241 | 242 |                 if p0_inside || p1_inside || p2_inside { 243 |                     self.curr_step_stacked_triangles.push(tri_idx); 244 |                 } 245 |             } 246 |         } 247 | 248 |         // Second, stack the triangles 249 | 250 |         let src_pixels = img.get_pixels::<f32>(); 251 | 252 |         let stack_stride = self.image_stack.get_width() as usize * num_channels; 253 |         let stack_pixels = self.image_stack.get_pixels_mut::<f32>(); 254 | 255 |         let mut flatf_pixels: &[f32] = &[]; 256 |         let mut flatf_stride = 0; 257 | 258 |         if self.flatfield_inv.is_some() { 259 |             let ff_inv: &Image = self.flatfield_inv.as_ref().unwrap(); 260 |             flatf_pixels = ff_inv.get_pixels::<f32>(); 261 |             flatf_stride = num_channels * ff_inv.get_width() as usize; 262 |         } 263 | 264 |         // #pragma omp parallel for (in the C version; multi-threading is not ported yet) 265 |         for tri_idx in &self.curr_step_stacked_triangles { 266 |             let tri = &triangulation.get_triangles()[*tri_idx]; 267 | 268 |             let p0 = self.ref_pt_align_data.get_ref_pt_pos(tri.v0, curr_img_idx); 269 |             let p1 = self.ref_pt_align_data.get_ref_pt_pos(tri.v1, curr_img_idx); 270 |             let p2 = self.ref_pt_align_data.get_ref_pt_pos(tri.v2, curr_img_idx); 271 | 272 |             let p0_inside = envelope.contains_point(&p0.pos); 273 |             let p1_inside = envelope.contains_point(&p1.pos); 274 |             let p2_inside = envelope.contains_point(&p2.pos); 275 |             let all_inside = p0_inside && p1_inside && p2_inside; 276 | 277 |             for stp in &self.rasterized_tris[*tri_idx] { 278 |                 let srcx = stp.u * p0.pos.x as f32 + 279 |                            stp.v * p1.pos.x as f32 + 280 |                            (1.0 - stp.u - stp.v) * p2.pos.x as f32; 281 |                 let srcy = stp.u * p0.pos.y as f32 + 282 |                            stp.v * p1.pos.y as f32 + 283 |                            (1.0 - stp.u - stp.v) * p2.pos.y as f32; 284 | 285 |                 if all_inside || 286 |                    (srcx >= 0.0 && srcx <= (intersection.width - 1) as f32 && 287 |                     srcy >= 0.0 && srcy <= (intersection.height - 1) as f32) { 288 | 289 |                     let mut ffx = 0; 290 |                     let mut ffy = 0;
291 |                     if self.flatfield_inv.is_some() { 292 |                         ffx = min(srcx as i32 + intersection.x + alignment_ofs.x, self.flatfield_inv.as_ref().unwrap().get_width() as i32 - 1); 293 |                         ffy = min(srcy as i32 + intersection.y + alignment_ofs.y, self.flatfield_inv.as_ref().unwrap().get_height() as i32 - 1); 294 |                     } 295 | 296 |                     for ch in 0..num_channels { 297 |                         let mut src_val = 298 |                             interpolate_pixel_value(src_pixels, img.get_width(), img.get_height(), 299 |                                                     srcx + (intersection.x + alignment_ofs.x) as f32, 300 |                                                     srcy + (intersection.y + alignment_ofs.y) as f32, 301 |                                                     ch, num_channels); 302 | 303 |                         if self.flatfield_inv.is_some() { 304 |                             // `flatfield_inv` contains inverted flat-field values, 305 |                             // so we multiply instead of dividing 306 |                             src_val *= flatf_pixels[ffy as usize * flatf_stride + ffx as usize * num_channels]; 307 |                         } 308 | 309 |                         stack_pixels[stp.y as usize * stack_stride + stp.x as usize * num_channels + ch] += src_val; 310 |                     } 311 | 312 |                     self.added_img_count[(stp.x + stp.y * intersection.width as i32) as usize] += 1; 313 |                 } 314 |             } 315 |         } 316 | 317 |         if !self.first_step_complete { 318 |             self.first_step_complete = true; 319 |         } 320 | 321 |         Ok(()) 322 |     } 323 | } 324 | 325 | 326 | /// Returns list of pixels belonging to triangle `(v0, v1, v2)`. 327 | /// 328 | /// # Parameters 329 | /// 330 | /// * `envelope` - Image region corresponding to `pixel_occupied`. 331 | /// * `pixel_occupied` - Pixels of `envelope` (row-major order). If a pixel belongs 332 | ///   to the rasterized triangle, it will be set to `true`. 333 | /// 334 | fn rasterize_triangle( 335 |     v0: &PointFlt, 336 |     v1: &PointFlt, 337 |     v2: &PointFlt, 338 |     envelope: Rect, 339 |     pixel_occupied: &mut [bool]) -> Vec<StackTrianglePoint> { 340 | 341 |     // Test every point of the rectangular axis-aligned bounding box of 342 |     // the triangle (v0, v1, v2) and if it is inside the triangle, add it 343 |     // to the returned list.
344 | 345 |     let mut points: Vec<StackTrianglePoint> = vec![]; 346 | 347 |     let xmin = *[v0.x as i32, v1.x as i32, v2.x as i32].iter().min().unwrap(); 348 |     let xmax = *[v0.x as i32, v1.x as i32, v2.x as i32].iter().max().unwrap(); 349 |     let ymin = *[v0.y as i32, v1.y as i32, v2.y as i32].iter().min().unwrap(); 350 |     let ymax = *[v0.y as i32, v1.y as i32, v2.y as i32].iter().max().unwrap(); 351 | 352 |     for y in ymin .. ymax + 1 { 353 |         for x in xmin .. xmax + 1 { 354 |             if envelope.contains_point(&Point{ x, y }) { 355 |                 let is_pix_occupied = &mut pixel_occupied[(x - envelope.x + (y - envelope.y) * envelope.width as i32) as usize]; 356 |                 if !*is_pix_occupied { 357 |                     let (u, v) = calc_barycentric_coords!(Point{ x, y }, v0, v1, v2); 358 |                     if u >= 0.0 && u <= 1.0 && 359 |                        v >= 0.0 && v <= 1.0 && 360 |                        u + v <= 1.0 { 361 | 362 |                         points.push(StackTrianglePoint{ x, y, u, v }); 363 |                         *is_pix_occupied = true; 364 |                     } 365 |                 } 366 |             } 367 |         } 368 |     } 369 | 370 |     points 371 | } 372 | 373 | 374 | /// Normalizes `img_stack` using the specified counts of stacked images for each pixel. 375 | fn normalize_image_stack(added_img_count: &[usize], img_stack: &mut Image, uses_flatfield: bool) { 376 |     let mut max_stack_value = 0.0; 377 | 378 |     let num_channels = image::get_num_channels(img_stack.get_pixel_format()); 379 | 380 |     for (i, pix) in img_stack.get_pixels_mut::<f32>().iter_mut().enumerate() { 381 | 382 |         *pix /= max(1usize, added_img_count[i / num_channels]) as f32; 383 | 384 |         if uses_flatfield && *pix > max_stack_value { 385 |             max_stack_value = *pix; 386 |         } 387 | 388 |     } 389 |     if uses_flatfield && max_stack_value > 0.0 { 390 |         for pix in img_stack.get_pixels_mut::<f32>() { 391 |             *pix /= max_stack_value; 392 |         } 393 |     } 394 | } 395 | 396 | 397 | /// Performs bilinear interpolation of floating-point pixel values.
398 | pub fn interpolate_pixel_value( 399 | pixels: &[f32], 400 | img_width: u32, 401 | img_height: u32, 402 | x: f32, 403 | y: f32, 404 | channel: usize, 405 | num_channels: usize) -> f32 { 406 | 407 | if x < 0.0 || x >= (img_width - 1) as f32 || y < 0.0 || y >= (img_height-1) as f32 { 408 | return 0.0; 409 | } 410 | 411 | let vals_per_line = img_width as usize * num_channels; 412 | 413 | let tx = x.fract(); 414 | let ty = y.fract(); 415 | let x0 = x.floor() as usize; 416 | let y0 = y.floor() as usize; 417 | 418 | let line_lo = &pixels[y0 * vals_per_line as usize..]; 419 | let line_hi = &pixels[(y0+1) * vals_per_line as usize..]; 420 | 421 | let v00 = line_lo[x0 * num_channels + channel]; 422 | let v10 = line_lo[(x0 + 1) * num_channels + channel]; 423 | let v01 = line_hi[x0 * num_channels + channel]; 424 | let v11 = line_hi[(x0 + 1) * num_channels + channel]; 425 | 426 | (1.0 - ty) * ((1.0 - tx) * v00 + tx * v10) + ty * ((1.0 - tx) * v01 + tx * v11) 427 | } -------------------------------------------------------------------------------- /src/tiff.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // TIFF support. 
11 | // 12 | 13 | use image::{Image, Palette, PixelFormat, bytes_per_channel, bytes_per_pixel, get_num_channels}; 14 | use std::fs::{File, OpenOptions}; 15 | use std::io; 16 | use std::io::prelude::*; 17 | use std::io::{Seek, SeekFrom}; 18 | use std::mem::{size_of, size_of_val}; 19 | use utils; 20 | 21 | 22 | #[derive(Debug)] 23 | pub enum TiffError { 24 |     Io(io::Error), 25 |     UnknownVersion, 26 |     UnsupportedPixelFormat, 27 |     ChannelBitDepthsDiffer, 28 |     CompressionNotSupported, 29 |     UnsupportedPlanarConfig 30 | } 31 | 32 | 33 | impl From<io::Error> for TiffError { 34 |     fn from(err: io::Error) -> TiffError { TiffError::Io(err) } 35 | } 36 | 37 | 38 | #[repr(C, packed)] 39 | struct TiffField { 40 |     tag: u16, 41 |     ftype: u16, 42 |     count: u32, 43 |     value: u32 44 | } 45 | 46 | #[repr(C, packed)] 47 | struct TiffHeader { 48 |     id: u16, 49 |     version: u16, 50 |     dir_offset: u32 51 | } 52 | 53 | 54 | const TAG_TYPE_BYTE    : u16 = 1; 55 | const TAG_TYPE_ASCII   : u16 = 2; 56 | const TAG_TYPE_WORD    : u16 = 3; // 16 bits 57 | const TAG_TYPE_DWORD   : u16 = 4; // 32 bits 58 | const TAG_TYPE_RATIONAL: u16 = 5; 59 | 60 | const TIFF_VERSION: u16 = 42; 61 | 62 | const TAG_IMAGE_WIDTH                  : u16 = 0x100; 63 | const TAG_IMAGE_HEIGHT                 : u16 = 0x101; 64 | const TAG_BITS_PER_SAMPLE              : u16 = 0x102; 65 | const TAG_COMPRESSION                  : u16 = 0x103; 66 | const TAG_PHOTOMETRIC_INTERPRETATION   : u16 = 0x106; 67 | const TAG_STRIP_OFFSETS                : u16 = 0x111; 68 | const TAG_SAMPLES_PER_PIXEL            : u16 = 0x115; 69 | const TAG_ROWS_PER_STRIP               : u16 = 0x116; 70 | const TAG_STRIP_BYTE_COUNTS            : u16 = 0x117; 71 | const TAG_PLANAR_CONFIGURATION         : u16 = 0x11C; 72 | 73 | const NO_COMPRESSION: u32 = 1; 74 | const PLANAR_CONFIGURATION_CHUNKY: u32 = 1; 75 | const INTEL_BYTE_ORDER: u16 = (('I' as u16) << 8) + 'I' as u16; 76 | const MOTOROLA_BYTE_ORDER: u16 = (('M' as u16) << 8) + 'M' as u16; 77 | 78 | const PHMET_WHITE_IS_ZERO: u32 = 0; 79 | const PHMET_BLACK_IS_ZERO: u32 = 1; 80 | const PHMET_RGB: u32 = 2; 81 | 82 | 83 | /// Reverses 8-bit grayscale
values. 84 | fn negate_grayscale_8(img: &mut Image) { 85 |     for p in img.get_pixels_mut::<u8>() { 86 |         *p = 0xFF - *p; 87 |     } 88 | } 89 | 90 | 91 | /// Reverses 16-bit grayscale values. 92 | fn negate_grayscale_16(img: &mut Image) { 93 |     for p in img.get_pixels_mut::<u16>() { 94 |         *p = 0xFFFF - *p; 95 |     } 96 | } 97 | 98 | 99 | /// If `do_swap` is true, returns `x` with bytes swapped; otherwise, returns `x`. 100 | macro_rules! cnd_swap { ($x:expr, $do_swap:expr) => { if $do_swap { $x.swap_bytes() } else { $x } }} 101 | 102 | /// If `do_swap` is true, returns `x` with its two lower bytes swapped; otherwise, returns `x`. 103 | fn cnd_swap_16_in_32(x: u32, do_swap: bool) -> u32 { 104 |     if do_swap { 105 |         ((x & 0xFF) << 8) | (x >> 8) 106 |     } else { 107 |         x 108 |     } 109 | } 110 | 111 | 112 | /// Returns bits per sample. 113 | /// 114 | /// # Parameters 115 | /// 116 | /// * `tiff_field` - The "bits per sample" TIFF field 117 | /// * `endianess_diff` - `true` if the file and machine endianess differs 118 | /// 119 | fn parse_tag_bits_per_sample(file: &mut File, 120 |                              tiff_field: &TiffField, 121 |                              endianess_diff: bool) -> Result<usize, TiffError> { 122 | 123 |     assert!(tiff_field.tag == TAG_BITS_PER_SAMPLE); 124 |     assert!(tiff_field.ftype == TAG_TYPE_WORD); 125 | 126 |     let bits_per_sample; 127 | 128 |     if tiff_field.count == 1 { 129 |         bits_per_sample = tiff_field.value as u16; // already byte-swapped by `preprocess_tiff_field()` 130 |     } else { 131 |         // Some files may have as many "bits per sample" values specified 132 |         // as there are channels. Make sure they are all the same.
133 | 134 |         file.seek(SeekFrom::Start(tiff_field.value as u64))?; 135 | 136 |         let field_buf = utils::read_vec::<u16>(file, tiff_field.count as usize)?; 137 | 138 |         let first = field_buf[0]; 139 |         for val in &field_buf { 140 |             if *val != first { return Err(TiffError::ChannelBitDepthsDiffer); } 141 |         } 142 | 143 |         bits_per_sample = cnd_swap!(first, endianess_diff); 144 |     } 145 | 146 |     if bits_per_sample != 8 && bits_per_sample != 16 { 147 |         Err(TiffError::UnsupportedPixelFormat) 148 |     } else { 149 |         Ok(bits_per_sample as usize) 150 |     } 151 | } 152 | 153 | 154 | /// Sets correct byte order in `tiff_field`'s data. 155 | fn preprocess_tiff_field(tiff_field: &mut TiffField, endianess_diff: bool) { 156 |     tiff_field.tag = cnd_swap!(tiff_field.tag, endianess_diff); 157 |     tiff_field.ftype = cnd_swap!(tiff_field.ftype, endianess_diff); 158 |     tiff_field.count = cnd_swap!(tiff_field.count, endianess_diff); 159 |     if tiff_field.count > 1 || tiff_field.ftype == TAG_TYPE_DWORD { 160 |         tiff_field.value = cnd_swap!(tiff_field.value, endianess_diff); 161 |     } else if tiff_field.count == 1 && tiff_field.ftype == TAG_TYPE_WORD { 162 |         // This is a special case where a 16-bit value is stored in 163 |         // a 32-bit field, always in the lower-address bytes. So if 164 |         // the machine is big-endian, the value always has to be 165 |         // shifted right by 16 bits first, regardless of the file's 166 |         // endianess, and only then swapped, if the machine and file 167 |         // endianesses differ.
168 |         if utils::is_machine_big_endian() { 169 |             tiff_field.value >>= 16; 170 |         } 171 | 172 |         tiff_field.value = cnd_swap_16_in_32(tiff_field.value, endianess_diff); 173 |     } 174 | } 175 | 176 | 177 | fn determine_pixel_format(samples_per_pixel: usize, bits_per_sample: usize) -> PixelFormat { 178 |     match samples_per_pixel { 179 |         1 => match bits_per_sample { 180 |             8 => PixelFormat::Mono8, 181 |             16 => PixelFormat::Mono16, 182 |             _ => panic!() 183 |         }, 184 | 185 |         3 => match bits_per_sample { 186 |             8 => PixelFormat::RGB8, 187 |             16 => PixelFormat::RGB16, 188 |             _ => panic!() 189 |         }, 190 | 191 |         _ => panic!() 192 |     } 193 | } 194 | 195 | 196 | fn validate_tiff_format(samples_per_pixel: usize, photometric_interpretation: u32) -> Result<(), TiffError> { 197 |     if (samples_per_pixel == 1 && photometric_interpretation != PHMET_BLACK_IS_ZERO && photometric_interpretation != PHMET_WHITE_IS_ZERO) || 198 |        (samples_per_pixel == 3 && photometric_interpretation != PHMET_RGB) || 199 |        (samples_per_pixel != 1 && samples_per_pixel != 3) { 200 | 201 |         Err(TiffError::UnsupportedPixelFormat) 202 |     } else { 203 |         Ok(()) 204 |     } 205 | } 206 | 207 | 208 | pub fn load_tiff(file_name: &str) -> Result<Image, TiffError> { 209 |     let mut file = OpenOptions::new().read(true).write(false).open(file_name)?; 210 | 211 |     let tiff_header: TiffHeader = utils::read_struct(&mut file)?; 212 | 213 |     // Swap bytes whenever the file's byte order differs from the machine's 214 |     let endianess_diff = utils::is_machine_big_endian() != (tiff_header.id == MOTOROLA_BYTE_ORDER); 215 | 216 |     if cnd_swap!(tiff_header.version, endianess_diff) != TIFF_VERSION { 217 |         return Err(TiffError::UnknownVersion); 218 |     } 219 | 220 |     // Seek to the first TIFF directory 221 |     file.seek(SeekFrom::Start(cnd_swap!(tiff_header.dir_offset, endianess_diff) as u64))?; 222 | 223 |     let mut num_dir_entries: u16 = utils::read_struct(&mut file)?; 224 |     num_dir_entries = cnd_swap!(num_dir_entries, endianess_diff); 225 | 226 |     // All the `Option`s below need to be read; if any is missing, we will panic 227 |     let mut img_width: Option<u32> = None; 228 |     let mut img_height: Option<u32> = None; 229 |     let mut num_strips: Option<usize> = None; 230 |     let mut bits_per_sample: Option<usize> = None; 231 |     let mut rows_per_strip: Option<usize> = None; 232 |     let mut photometric_interpretation: u32 = PHMET_BLACK_IS_ZERO; 233 |     let mut samples_per_pixel: Option<usize> = None; 234 |     let mut strip_offsets: Option<Vec<u32>> = None; 235 | 236 |     let mut next_field_pos = file.seek(SeekFrom::Current(0)).unwrap(); 237 |     for _ in 0..num_dir_entries { 238 |         file.seek(SeekFrom::Start(next_field_pos))?; 239 | 240 |         let mut tiff_field: TiffField = utils::read_struct(&mut file)?; 241 | 242 |         next_field_pos = file.seek(SeekFrom::Current(0)).unwrap(); 243 | 244 |         preprocess_tiff_field(&mut tiff_field, endianess_diff); 245 | 246 |         match tiff_field.tag { 247 |             TAG_IMAGE_WIDTH => img_width = Some(tiff_field.value), 248 | 249 |             TAG_IMAGE_HEIGHT => img_height = Some(tiff_field.value), 250 | 251 |             TAG_BITS_PER_SAMPLE => bits_per_sample = Some(parse_tag_bits_per_sample(&mut file, &tiff_field, endianess_diff)?), 252 | 253 |             TAG_COMPRESSION => if tiff_field.value != NO_COMPRESSION { return Err(TiffError::CompressionNotSupported); }, 254 | 255 |             TAG_PHOTOMETRIC_INTERPRETATION => photometric_interpretation = tiff_field.value, 256 | 257 |             TAG_STRIP_OFFSETS => { 258 |                 num_strips = Some(tiff_field.count as usize); 259 |                 if num_strips.unwrap() == 1 { 260 |                     strip_offsets = Some(vec![tiff_field.value]); 261 |                 } else { 262 |                     file.seek(SeekFrom::Start(tiff_field.value as u64))?; 263 |                     strip_offsets = Some(utils::read_vec(&mut file, num_strips.unwrap())?); 264 |                     for sofs in strip_offsets.as_mut().unwrap() { 265 |                         *sofs = cnd_swap!(*sofs, endianess_diff); 266 |                     } 267 |                 } 268 |             }, 269 | 270 |             TAG_SAMPLES_PER_PIXEL => samples_per_pixel = Some(tiff_field.value as usize), 271 | 272 |             TAG_ROWS_PER_STRIP => rows_per_strip = Some(tiff_field.value as usize), 273 | 274 |             TAG_PLANAR_CONFIGURATION => if tiff_field.value != PLANAR_CONFIGURATION_CHUNKY { return Err(TiffError::UnsupportedPlanarConfig); },
275 | 276 |             _ => { } // Ignore unknown tags 277 |         } 278 |     } 279 | 280 |     if 0 == rows_per_strip.unwrap() && 1 == num_strips.unwrap() { 281 |         // If there is only 1 strip, it contains all the rows 282 |         rows_per_strip = Some(img_height.unwrap() as usize); 283 |     } 284 | 285 |     validate_tiff_format(samples_per_pixel.unwrap(), photometric_interpretation)?; 286 | 287 |     let pix_fmt = determine_pixel_format(samples_per_pixel.unwrap(), bits_per_sample.unwrap()); 288 | 289 |     let bytes_per_line = img_width.unwrap() as usize * bytes_per_pixel(pix_fmt); 290 |     let mut pixels = utils::alloc_uninitialized::<u8>(img_height.unwrap() as usize * bytes_per_line); 291 | 292 |     let mut curr_line = 0; 293 |     for strip_ofs in strip_offsets.unwrap() { 294 |         file.seek(SeekFrom::Start(strip_ofs as u64))?; 295 | 296 |         let mut strip_row = 0; 297 |         while strip_row < rows_per_strip.unwrap() && curr_line < img_height.unwrap() { 298 |             let img_line = &mut pixels[range!(curr_line as usize * bytes_per_line, bytes_per_line)]; 299 | 300 |             file.read_exact(img_line)?; 301 | 302 |             strip_row += 1; 303 |             curr_line += 1; 304 |         } 305 |     } 306 | 307 |     let mut img = Image::new_from_pixels(img_width.unwrap(), img_height.unwrap(), pix_fmt, None, pixels); 308 | 309 |     if photometric_interpretation == PHMET_WHITE_IS_ZERO { 310 |         // Reverse the values so that "black" is zero, "white" is 255 or 65535. 311 |         match pix_fmt { 312 |             PixelFormat::Mono8 => negate_grayscale_8(&mut img), 313 |             PixelFormat::Mono16 => negate_grayscale_16(&mut img), 314 |             _ => panic!() 315 |         } 316 |     } 317 | 318 |     if (pix_fmt == PixelFormat::Mono16 || pix_fmt == PixelFormat::RGB16) && endianess_diff { 319 |         utils::swap_words16(&mut img); 320 |     } 321 | 322 |     Ok(img) 323 | } 324 | 325 | 326 | /// Returns metadata (width, height, ...) without reading the pixel data.
327 | pub fn get_tiff_metadata(file_name: &str) -> Result<(u32, u32, PixelFormat, Option<Palette>), TiffError> { 328 |     let mut file = OpenOptions::new().read(true).write(false).open(file_name)?; 329 | 330 |     let tiff_header: TiffHeader = utils::read_struct(&mut file)?; 331 | 332 |     // Swap bytes whenever the file's byte order differs from the machine's 333 |     let endianess_diff = utils::is_machine_big_endian() != (tiff_header.id == MOTOROLA_BYTE_ORDER); 334 | 335 |     if cnd_swap!(tiff_header.version, endianess_diff) != TIFF_VERSION { 336 |         return Err(TiffError::UnknownVersion); 337 |     } 338 | 339 |     // Seek to the first TIFF directory 340 |     file.seek(SeekFrom::Start(cnd_swap!(tiff_header.dir_offset, endianess_diff) as u64))?; 341 | 342 |     let mut num_dir_entries: u16 = utils::read_struct(&mut file)?; 343 |     num_dir_entries = cnd_swap!(num_dir_entries, endianess_diff); 344 | 345 |     // All the `Option`s below need to be read; if any is missing, we will panic 346 |     let mut img_width: Option<u32> = None; 347 |     let mut img_height: Option<u32> = None; 348 |     let mut bits_per_sample: Option<usize> = None; 349 |     let mut photometric_interpretation: u32 = PHMET_BLACK_IS_ZERO; 350 |     let mut samples_per_pixel: Option<usize> = None; 351 | 352 |     let mut next_field_pos = file.seek(SeekFrom::Current(0)).unwrap(); 353 |     for _ in 0..num_dir_entries { 354 |         file.seek(SeekFrom::Start(next_field_pos))?; 355 | 356 |         let mut tiff_field: TiffField = utils::read_struct(&mut file)?; 357 | 358 |         next_field_pos = file.seek(SeekFrom::Current(0)).unwrap(); 359 | 360 |         preprocess_tiff_field(&mut tiff_field, endianess_diff); 361 | 362 |         match tiff_field.tag { 363 |             TAG_IMAGE_WIDTH => img_width = Some(tiff_field.value), 364 | 365 |             TAG_IMAGE_HEIGHT => img_height = Some(tiff_field.value), 366 | 367 |             TAG_BITS_PER_SAMPLE => bits_per_sample = Some(parse_tag_bits_per_sample(&mut file, &tiff_field, endianess_diff)?), 368 | 369 |             TAG_COMPRESSION => if tiff_field.value != NO_COMPRESSION { return Err(TiffError::CompressionNotSupported); }, 370 | 371 |             TAG_PHOTOMETRIC_INTERPRETATION => photometric_interpretation = tiff_field.value, 372 | 373 |             TAG_SAMPLES_PER_PIXEL => samples_per_pixel = Some(tiff_field.value as usize), 374 | 375 |             TAG_PLANAR_CONFIGURATION => if tiff_field.value != PLANAR_CONFIGURATION_CHUNKY { return Err(TiffError::UnsupportedPlanarConfig); }, 376 | 377 |             _ => { } // Ignore unknown tags 378 |         } 379 |     } 380 | 381 |     validate_tiff_format(samples_per_pixel.unwrap(), photometric_interpretation)?; 382 | 383 |     let pix_fmt = determine_pixel_format(samples_per_pixel.unwrap(), bits_per_sample.unwrap()); 384 | 385 |     Ok((img_width.unwrap(), img_height.unwrap(), pix_fmt, None)) 386 | } 387 | 388 | 389 | pub fn save_tiff(img: &Image, file_name: &str) -> Result<(), TiffError> { 390 |     match img.get_pixel_format() { 391 |         PixelFormat::Mono8 | 392 |         PixelFormat::Mono16 | 393 |         PixelFormat::RGB8 | 394 |         PixelFormat::RGB16 => { }, 395 | 396 |         _ => panic!() 397 |     } 398 | 399 |     // Truncate, so a pre-existing longer file does not leave stale trailing bytes 400 |     let mut file = OpenOptions::new().read(false).write(true).create(true).truncate(true).open(file_name)?; 401 |     let is_be = utils::is_machine_big_endian(); 402 | 403 |     // Note: a 16-bit value (TAG_TYPE_WORD) stored in the 32-bit `tiff_field.value` has to be 404 |     // always "left-aligned", i.e. stored in the lower-address two bytes in the file, 405 |     // regardless of the file's and machine's endianess. 406 |     // 407 |     // This means that on a big-endian machine it has to be always shifted left by 16 bits 408 |     // prior to writing to file.
409 | 410 |     let tiff_header = TiffHeader { id: if is_be { MOTOROLA_BYTE_ORDER } else { INTEL_BYTE_ORDER }, 411 |                                    version: TIFF_VERSION, 412 |                                    dir_offset: size_of::<TiffHeader>() as u32 }; 413 | 414 |     utils::write_struct(&tiff_header, &mut file)?; 415 | 416 |     let num_dir_entries: u16 = 10; 417 |     utils::write_struct(&num_dir_entries, &mut file)?; 418 | 419 |     let next_dir_offset = 0u32; 420 | 421 |     let mut field = TiffField { tag: TAG_IMAGE_WIDTH, 422 |                                 ftype: TAG_TYPE_WORD, 423 |                                 count: 1, 424 |                                 value: img.get_width() as u32 }; 425 |     if is_be { field.value <<= 16; } 426 |     utils::write_struct(&field, &mut file)?; 427 | 428 |     field = TiffField { tag: TAG_IMAGE_HEIGHT, 429 |                         ftype: TAG_TYPE_WORD, 430 |                         count: 1, 431 |                         value: img.get_height() as u32 }; 432 |     if is_be { field.value <<= 16; } 433 |     utils::write_struct(&field, &mut file)?; 434 | 435 |     field = TiffField { tag: TAG_BITS_PER_SAMPLE, 436 |                         ftype: TAG_TYPE_WORD, 437 |                         count: 1, 438 |                         value: bytes_per_channel(img.get_pixel_format()) as u32 * 8 }; 439 |     if is_be { field.value <<= 16; } 440 |     utils::write_struct(&field, &mut file)?; 441 | 442 |     field = TiffField { tag: TAG_COMPRESSION, 443 |                         ftype: TAG_TYPE_WORD, 444 |                         count: 1, 445 |                         value: NO_COMPRESSION }; 446 |     if is_be { field.value <<= 16; } 447 |     utils::write_struct(&field, &mut file)?; 448 | 449 |     field = TiffField { tag: TAG_PHOTOMETRIC_INTERPRETATION, 450 |                         ftype: TAG_TYPE_WORD, 451 |                         count: 1, 452 |                         value: match img.get_pixel_format() 453 |                                { 454 |                                    PixelFormat::Mono8 | PixelFormat::Mono16 => PHMET_BLACK_IS_ZERO, 455 |                                    PixelFormat::RGB8 | PixelFormat::RGB16 => PHMET_RGB, 456 |                                    _ => panic!() 457 |                                } 458 |                       }; 459 |     if is_be { field.value <<= 16; } 460 |     utils::write_struct(&field, &mut file)?; 461 | 462 |     field = TiffField { tag: TAG_STRIP_OFFSETS, 463 |                         ftype: TAG_TYPE_WORD, 464 |                         count: 1, 465 |                         // We write the header, num.
of directory entries, 10 fields and a next directory offset (==0); pixel data starts next 463 | value: (size_of_val(&tiff_header) + 464 | size_of_val(&num_dir_entries) + 465 | 10 * size_of_val(&field) + 466 | size_of_val(&next_dir_offset)) as u32 467 | }; 468 | if is_be { field.value <<= 16; } 469 | utils::write_struct(&field, &mut file)?; 470 | 471 | field = TiffField { tag: TAG_SAMPLES_PER_PIXEL, 472 | ftype: TAG_TYPE_WORD, 473 | count: 1, 474 | value: get_num_channels(img.get_pixel_format()) as u32 }; 475 | if is_be { field.value <<= 16; } 476 | utils::write_struct(&field, &mut file)?; 477 | 478 | field = TiffField { tag: TAG_ROWS_PER_STRIP, 479 | ftype: TAG_TYPE_WORD, 480 | count: 1, 481 | value: img.get_height() as u32 }; // There is only one strip for the whole image 482 | if is_be { field.value <<= 16; } 483 | utils::write_struct(&field, &mut file)?; 484 | 485 | field = TiffField { tag: TAG_STRIP_BYTE_COUNTS, 486 | ftype: TAG_TYPE_DWORD, 487 | count: 1, 488 | value: img.get_bytes_per_line() as u32 * img.get_height() }; // There is only one strip for the whole image 489 | utils::write_struct(&field, &mut file)?; 490 | 491 | field = TiffField { tag: TAG_PLANAR_CONFIGURATION, 492 | ftype: TAG_TYPE_WORD, 493 | count: 1, 494 | value: PLANAR_CONFIGURATION_CHUNKY }; 495 | if is_be { field.value <<= 16; } 496 | utils::write_struct(&field, &mut file)?; 497 | 498 | // Write the next directory offset (0 = no other directories) 499 | utils::write_struct(&next_dir_offset, &mut file)?; 500 | 501 | file.write_all(img.get_raw_pixels())?; 502 | 503 | Ok(()) 504 | } -------------------------------------------------------------------------------- /src/triangulation.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 
7 | // 8 | // 9 | // File description: 10 | // Delaunay triangulation. 11 | // 12 | 13 | use defs::{Point, Rect}; 14 | 15 | 16 | //TODO: change to usize::max_value(); once const functions are available on stable 17 | const EMPTY: usize = 1 << 30; 18 | 19 | 20 | #[derive(Copy, Clone, Default)] 21 | pub struct Edge { 22 | /// First vertex 23 | pub v0: usize, 24 | /// Second vertex 25 | pub v1: usize, 26 | 27 | /// First adjacent triangle (if EMPTY, `t1` is not EMPTY) 28 | pub t0: usize, 29 | /// Second adjacent triangle (if EMPTY, `t0` is not EMPTY) 30 | pub t1: usize, 31 | 32 | /// First opposite vertex (if EMPTY, `w1` is not EMPTY) 33 | pub w0: usize, 34 | /// Second opposite vertex (if EMPTY, `w0` is not EMPTY) 35 | pub w1: usize 36 | } 37 | 38 | 39 | /// Vertices and edges are specified in CCW order. 40 | /// 41 | /// Note: in `Triangulation.edges` the edges' vertices may not be specified in the order 42 | /// mentioned below for `e0`, `e1`, `e2`. 43 | /// 44 | #[derive(Copy, Clone, Default)] 45 | pub struct Triangle { 46 | /// First vertex 47 | pub v0: usize, 48 | /// Second vertex 49 | pub v1: usize, 50 | /// Third vertex 51 | pub v2: usize, 52 | 53 | /// First edge (contains v0, v1) 54 | pub e0: usize, 55 | /// Second edge (contains v1, v2) 56 | pub e1: usize, 57 | /// Third edge (contains v2, v0) 58 | pub e2: usize 59 | } 60 | 61 | 62 | impl Triangle { 63 | pub fn contains(&self, vertex: usize) -> bool { 64 | vertex == self.v0 || 65 | vertex == self.v1 || 66 | vertex == self.v2 67 | } 68 | 69 | 70 | pub fn next_vertex(&self, vertex: usize) -> usize { 71 | if vertex == self.v0 { 72 | return self.v1; 73 | } else if vertex == self.v1 { 74 | return self.v2; 75 | } else if vertex == self.v2 { 76 | return self.v0; 77 | } else { 78 | panic!("Attempted to get next vertex after {} in triangle ({}, {}, {}).", 79 | vertex, self.v0, self.v1, self.v2); 80 | } 81 | } 82 | 83 | 84 | /// Returns the 'leading' edge of a vertex. 
85 |     /// 86 |     /// Each vertex has a 'leading' and a 'trailing' edge (corresponding to CCW order). 87 |     /// The 'leading' edge is the one which contains the vertex and a vertex which succeeds it in CCW order. 88 |     /// 89 |     pub fn get_leading_edge_containing_vertex(&self, vertex: usize) -> usize { 90 |         if vertex == self.v0 { 91 |             return self.e0; 92 |         } else if vertex == self.v1 { 93 |             return self.e1; 94 |         } else if vertex == self.v2 { 95 |             return self.e2; 96 |         } else { 97 |             panic!("Attempted to get leading edge containing vertex {} in triangle ({}, {}, {}).", 98 |                    vertex, self.v0, self.v1, self.v2); 99 |         } 100 |     } 101 | } 102 | 103 | 104 | #[derive(Clone, Default)] 105 | pub struct Triangulation { 106 |     vertices: Vec<Point>, 107 |     edges: Vec<Edge>, 108 |     triangles: Vec<Triangle> 109 | } 110 | 111 | 112 | impl Triangulation { 113 |     pub fn get_vertices(&self) -> &[Point] { 114 |         &self.vertices[..] 115 |     } 116 | 117 | 118 |     pub fn get_edges(&self) -> &[Edge] { 119 |         &self.edges[..] 120 |     } 121 | 122 | 123 |     pub fn get_triangles(&self) -> &[Triangle] { 124 |         &self.triangles[..] 125 |     } 126 | 127 | 128 |     /// Checks if vertex 'pidx' lies inside triangle 'tidx'. 129 |     pub fn is_inside_triangle(&self, pidx: usize, tidx: usize) -> bool { 130 |         let t = &self.triangles[tidx]; 131 |         let (u, v) = calc_barycentric_coords!(&self.vertices[pidx], 132 |                                               &self.vertices[t.v0], 133 |                                               &self.vertices[t.v1], 134 |                                               &self.vertices[t.v2]); 135 | 136 |         let w = 1.0 - u - v; 137 | 138 |         u >= 0.0 && u <= 1.0 && v >= 0.0 && v <= 1.0 && w >= 0.0 && w <= 1.0 139 |     } 140 | 141 |     /// Finds the Delaunay triangulation of the specified point set (all points have to be different). 142 |     /// 143 |     /// Also adds three additional points for the initial triangle which covers the whole set 144 |     /// and `envelope`; `envelope` has to contain all `points`.
145 | /// 146 | pub fn find_delaunay_triangulation(points: &[Point], envelope: &Rect) -> Triangulation { 147 | let mut tri = Triangulation{ vertices: vec![], edges: vec![], triangles: vec![] }; 148 | 149 | tri.vertices.extend_from_slice(points); 150 | 151 | // Create the initial triangle which covers `envelope` (which in turn must contain all of `points`); 152 | // append its vertices `all_` at the end of the array 153 | 154 | let all0 = Point{ x: envelope.x - 15*(envelope.height as i32)/10 - 16, 155 | y: envelope.y - (envelope.height as i32)/10 - 16 }; 156 | 157 | let all1 = Point{ x: envelope.x + (envelope.width + 15*envelope.height/10 + 16) as i32, 158 | y: all0.y }; 159 | 160 | let all2 = Point{ x: envelope.x + (envelope.width as i32)/2, 161 | y: envelope.y + (envelope.height + 15*envelope.width/10 + 16) as i32 }; 162 | 163 | tri.vertices.push(all0); 164 | tri.vertices.push(all1); 165 | tri.vertices.push(all2); 166 | 167 | // Initial triangle's edges 168 | 169 | tri.edges.push(Edge{ v0: points.len() + 0, v1: points.len() + 1, 170 | t0: 0, t1: EMPTY, 171 | w0: points.len() + 2, w1: EMPTY }); 172 | 173 | tri.edges.push(Edge{ v0: points.len() + 1, v1: points.len() + 2, 174 | t0: 0, t1: EMPTY, 175 | w0: points.len() + 0, w1: EMPTY }); 176 | 177 | tri.edges.push(Edge{ v0: points.len() + 2, v1: points.len() + 0, 178 | t0: 0, t1: EMPTY, 179 | w0: points.len() + 1, w1: EMPTY }); 180 | 181 | tri.triangles.push(Triangle{ v0: points.len() + 0, 182 | v1: points.len() + 1, 183 | v2: points.len() + 2, 184 | e0: 0, e1: 1, e2: 2 }); 185 | 186 | // Process subsequent points and incrementally refresh the triangulation 187 | for pidx in 0..points.len() { 188 | // 1) Find an existing triangle 't' with index 'tidx' to which 'pidx' belongs 189 | let mut tidx: usize = EMPTY; 190 | 191 | for j in 0..tri.triangles.len() { 192 | if tri.is_inside_triangle(pidx, j) { 193 | tidx = j; 194 | break; 195 | } 196 | } 197 | assert!(tidx != EMPTY); // Will never happen, unless 'envelope' does not 
contain all 'points' 198 | 199 | // Check if point 'pidx' belongs to one of triangle 'tidx's edges 200 | let mut insertion_edge = EMPTY; 201 | 202 | let mut edges_to_check: [usize; 3] = [0; 3]; { 203 | let t = &tri.triangles[tidx]; 204 | 205 | // All items in 'points' have to be different 206 | assert!(tri.vertices[t.v0] != points[pidx] && 207 | tri.vertices[t.v1] != points[pidx] && 208 | tri.vertices[t.v2] != points[pidx]); 209 | 210 | 211 | edges_to_check.copy_from_slice(&[t.e0, t.e1, t.e2]); 212 | } 213 | 214 | for i in 0..3 { 215 | if point_belongs_to_line(&points[pidx], &tri.vertices[tri.edges[edges_to_check[i]].v0], 216 | &tri.vertices[tri.edges[edges_to_check[i]].v1]) { 217 | 218 | insertion_edge = edges_to_check[i]; 219 | break; 220 | } 221 | } 222 | 223 | if insertion_edge != EMPTY { 224 | tri.add_point_on_edge(pidx, insertion_edge); 225 | } else { 226 | tri.add_point_inside_triangle(pidx, tidx); 227 | } 228 | } 229 | 230 | tri 231 | } 232 | 233 | /// Adds new point `pidx` that lies on an existing edge `eidx`. 234 | fn add_point_on_edge(&mut self, pidx: usize, eidx: usize) { 235 | // Starting configuration: (| = edge 'e') 236 | // 237 | // k0 238 | // .|. 239 | // . | . 240 | // q0 | . 241 | // . | q3 242 | // . | . 243 | // wt0 p wt1 244 | // . t0 | t1 . 245 | // . | . 246 | // q1 | q2 247 | // . | . 248 | // . | . 249 | // k1 250 | // 251 | // 252 | // Point 'p' is inserted into edge 'e', which has adjacent triangles t0, t1 253 | // and the corresponding opposing vertices wt0, wt1. The adjacent triangles 254 | // form a quadrilateral (with edges q0-3) whose diagonal is the edge 'e'. 255 | // 256 | // Edge 'e' is subdivided into e0, e1, where e0 contains v0 and e1 contains v1. 257 | // This creates two more edges e2, e3, which subdivide triangle t0 into triangles t0a/t0b 258 | // and triangle t1 into t1a/t1b: 259 | // 260 | // k0 261 | // .|. 262 | // . | . 263 | // q0 | . 264 | // . e0 . q3 265 | // . | . 266 | // . t0a | t1b . 267 | // . | . 
268 | // wt0...e2..p...e3..wt1 269 | // . | . 270 | // . t0b | . 271 | // . | t1a . 272 | // . | . 273 | // q1 e1 q2 274 | // . | . 275 | // . | . 276 | // k1 277 | // 278 | // After subdivision, the Delaunay condition needs to be checked for edges e0..3, q0..3. 279 | 280 | 281 | // We subdivide the 2 triangles adjacent to 'e' into two new triangles each. Old triangles in the array 282 | // will be reused, so make room for just 2 new ones. 283 | 284 | self.triangles.push(Default::default()); 285 | self.triangles.push(Default::default()); 286 | 287 | // We subdivide 'e' into two edges at point 'pidx' and create 2 more edges connected to 'pidx'. 288 | // The array element storing 'e' will be reused, so make room for just 3 new ones. 289 | 290 | self.edges.push(Default::default()); 291 | self.edges.push(Default::default()); 292 | self.edges.push(Default::default()); 293 | 294 | macro_rules! e { () => { self.edges[eidx] } } 295 | 296 | let t0_idx = self.edges[eidx].t0; 297 | macro_rules! t0 { () => { self.triangles[t0_idx] } } 298 | let t1_idx = self.edges[eidx].t1; 299 | macro_rules! t1 { () => { self.triangles[t1_idx] } } 300 | 301 | let wt0_idx; // Vertex opposite of 'e' belonging to 't0' 302 | let wt1_idx; // Vertex opposite of 'e' belonging to 't1' 303 | 304 | if t0!().contains(self.edges[eidx].w0) { 305 | wt0_idx = e!().w0; 306 | wt1_idx = e!().w1; 307 | } else { 308 | wt0_idx = e!().w1; 309 | wt1_idx = e!().w0; 310 | } 311 | 312 | let k0 = t1!().next_vertex(wt1_idx); 313 | let k1 = t0!().next_vertex(wt0_idx); 314 | 315 | // Edges of the quadrilateral (k0, wt0_idx, k1, wt1_idx); 316 | // q0 contains (k0, wt0_idx); q1, q2, q3 are the subsequent edges in CCW order. 
317 | 318 | let q0_idx = t0!().get_leading_edge_containing_vertex(k0); 319 | let q1_idx = t0!().get_leading_edge_containing_vertex(wt0_idx); 320 | let q2_idx = t1!().get_leading_edge_containing_vertex(k1); 321 | let q3_idx = t1!().get_leading_edge_containing_vertex(wt1_idx); 322 | 323 | // 't0' gets subdivided into 't0a', 't0b' 324 | // 't0a' reuses the storage of 't0' 325 | let t0a_idx = e!().t0; 326 | let mut t0a: Triangle = Default::default(); 327 | 328 | let t0b_idx = self.triangles.len() - 2; // the 1st of the newly allocated triangles 329 | let t1b_idx = self.triangles.len() - 1; // the 2nd of the newly allocated triangles 330 | 331 | // 't1' gets subdivided into 't1a', 't1b' 332 | // 't1a' reuses the storage of 't1' 333 | let t1a_idx = e!().t1; 334 | let mut t1a: Triangle = Default::default(); 335 | 336 | // Edge 'e' (of 'eidx') gets subdivided into 'e0' and 'e1'. New edge 'e2' belongs to 't0', 337 | // new edge 'e3' belongs to 't1'. 338 | 339 | let e0_idx = eidx; 340 | let e1_idx = self.edges.len() - 3; 341 | let e2_idx = self.edges.len() - 2; 342 | let e3_idx = self.edges.len() - 1; 343 | 344 | let mut e0: Edge = Default::default(); 345 | 346 | e0.v0 = pidx; 347 | e0.v1 = k0; 348 | e0.t0 = t0a_idx; 349 | e0.t1 = t1b_idx; 350 | e0.w0 = wt0_idx; 351 | e0.w1 = wt1_idx; 352 | 353 | { 354 | let e1 = &mut self.edges[e1_idx]; 355 | 356 | e1.v0 = pidx; 357 | e1.v1 = k1; 358 | e1.t0 = t0b_idx; 359 | e1.t1 = t1a_idx; 360 | e1.w0 = wt0_idx; 361 | e1.w1 = wt1_idx; 362 | } 363 | 364 | { 365 | let e2 = &mut self.edges[e2_idx]; 366 | e2.v0 = pidx; 367 | e2.v1 = wt0_idx; 368 | e2.t0 = t0a_idx; 369 | e2.t1 = t0b_idx; 370 | e2.w0 = k0; 371 | e2.w1 = k1; 372 | } 373 | 374 | { 375 | let e3 = &mut self.edges[e3_idx]; 376 | 377 | e3.v0 = pidx; 378 | e3.v1 = wt1_idx; 379 | e3.t0 = t1a_idx; 380 | e3.t1 = t1b_idx; 381 | e3.w0 = k0; 382 | e3.w1 = k1; 383 | } 384 | 385 | t0a.v0 = pidx; 386 | t0a.v1 = k0; 387 | t0a.v2 = wt0_idx; 388 | t0a.e0 = e0_idx; 389 | t0a.e1 = q0_idx; 390 | 
t0a.e2 = e2_idx; 391 | 392 | { 393 | let t0b = &mut self.triangles[t0b_idx]; 394 | 395 | t0b.v0 = pidx; 396 | t0b.v1 = wt0_idx; 397 | t0b.v2 = k1; 398 | t0b.e0 = e2_idx; 399 | t0b.e1 = q1_idx; 400 | t0b.e2 = e1_idx; 401 | } 402 | 403 | t1a.v0 = pidx; 404 | t1a.v1 = k1; 405 | t1a.v2 = wt1_idx; 406 | t1a.e0 = e1_idx; 407 | t1a.e1 = q2_idx; 408 | t1a.e2 = e3_idx; 409 | 410 | { 411 | let t1b = &mut self.triangles[t1b_idx]; 412 | 413 | t1b.v0 = pidx; 414 | t1b.v1 = wt1_idx; 415 | t1b.v2 = k0; 416 | t1b.e0 = e3_idx; 417 | t1b.e1 = q3_idx; 418 | t1b.e2 = e0_idx; 419 | } 420 | 421 | 422 | // Update the edges of the quadrilateral (e.v0, wt0_idx, e.v2, wt1_idx): their adjacent triangles and opposite vertices 423 | 424 | replace_adjacent_triangle(&mut self.edges[q0_idx], t0_idx, t0a_idx); 425 | replace_opposing_vertex(&mut self.edges[q0_idx], k1, pidx); 426 | 427 | replace_adjacent_triangle(&mut self.edges[q1_idx], t0_idx, t0b_idx); 428 | replace_opposing_vertex(&mut self.edges[q1_idx], k0, pidx); 429 | 430 | replace_adjacent_triangle(&mut self.edges[q2_idx], t1_idx, t1a_idx); 431 | replace_opposing_vertex(&mut self.edges[q2_idx], k0, pidx); 432 | 433 | replace_adjacent_triangle(&mut self.edges[q3_idx], t1_idx, t1b_idx); 434 | replace_opposing_vertex(&mut self.edges[q3_idx], k1, pidx); 435 | 436 | // Overwrite old triangles and edges 437 | self.triangles[t0_idx] = t0a; 438 | self.triangles[t1_idx] = t1a; 439 | e!() = e0; 440 | 441 | // Check Delaunay condition for all affected edges 442 | for i in [e0_idx, e1_idx, e2_idx, e3_idx, 443 | q0_idx, q1_idx, q2_idx, q3_idx].iter() { 444 | 445 | self.test_and_swap_edge(*i, EMPTY, EMPTY); 446 | } 447 | } 448 | 449 | 450 | /// Adds a new point 'pidx' inside an existing triangle 'tidx'. 
451 | fn add_point_inside_triangle(&mut self, pidx: usize, tidx: usize) { 452 | // 453 | // Subdivide 't' into 3 sub-triangles 'tsub0', 'tsub1', 'tsub2' using 'pidx' 454 | // 455 | // The order of existing triangles has to be preserved (they are referenced by the existing edges), 456 | // so replace 't' by 'tsub0' and add 'tsub1' and 'tsub2' at the triangle array's end. 457 | 458 | let mut tsub0: Triangle = Default::default(); 459 | let tsub0idx = tidx; 460 | 461 | // Add 2 new triangles 462 | self.triangles.push(Default::default()); 463 | self.triangles.push(Default::default()); 464 | 465 | let tsub1idx = self.triangles.len() - 2; 466 | let tsub2idx = self.triangles.len() - 1; 467 | 468 | macro_rules! tsub1 { () => { self.triangles[tsub1idx] } } 469 | macro_rules! tsub2 { () => { self.triangles[tsub2idx] } } 470 | 471 | macro_rules! t { () => { self.triangles[tidx] } } 472 | 473 | // Add 3 new edges 'enew0', 'enew1', 'enew2' which connect 't.v0', 't.v1', 't.v2' with 'pidx' 474 | 475 | self.edges.push(Edge{ v0: t!().v0, 476 | v1: pidx, 477 | t0: tsub0idx, 478 | t1: tsub2idx, 479 | w0: t!().v1, 480 | w1: t!().v2 }); 481 | let enew0 = self.edges.len() - 1; 482 | 483 | self.edges.push(Edge{ v0: t!().v1, 484 | v1: pidx, 485 | t0: tsub0idx, 486 | t1: tsub1idx, 487 | w0: t!().v0, 488 | w1: t!().v2 }); 489 | let enew1 = self.edges.len() - 1; 490 | 491 | 492 | self.edges.push(Edge{ v0: t!().v2, 493 | v1: pidx, 494 | t0: tsub1idx, 495 | t1: tsub2idx, 496 | w0: t!().v1, 497 | w1: t!().v0 }); 498 | let enew2 = self.edges.len() - 1; 499 | 500 | // Fill the new triangles' data 501 | 502 | tsub0.v0 = pidx; 503 | tsub0.v1 = t!().v0; 504 | tsub0.v2 = t!().v1; 505 | tsub0.e0 = enew0; 506 | tsub0.e1 = t!().e0; 507 | tsub0.e2 = enew1; 508 | 509 | tsub1!().v0 = pidx; 510 | tsub1!().v1 = t!().v1; 511 | tsub1!().v2 = t!().v2; 512 | tsub1!().e0 = enew1; 513
| tsub1!().e1 = t!().e1; 514 | tsub1!().e2 = enew2; 515 | 516 | tsub2!().v0 = pidx; 517 | tsub2!().v1 = t!().v2; 518 | tsub2!().v2 = t!().v0; 519 | tsub2!().e0 = enew2; 520 | tsub2!().e1 = t!().e2; 521 | tsub2!().e2 = enew0; 522 | 523 | // Update adjacent triangle and opposing vertex data for 't's edges 524 | 525 | replace_opposing_vertex(&mut self.edges[t!().e0], t!().v2, pidx); 526 | replace_adjacent_triangle(&mut self.edges[t!().e0], tidx, tsub0idx); 527 | 528 | replace_opposing_vertex(&mut self.edges[t!().e1], t!().v0, pidx); 529 | replace_adjacent_triangle(&mut self.edges[t!().e1], tidx, tsub1idx); 530 | 531 | replace_opposing_vertex(&mut self.edges[t!().e2], t!().v1, pidx); 532 | replace_adjacent_triangle(&mut self.edges[t!().e2], tidx, tsub2idx); 533 | 534 | // Keep note of the 't's edges for the subsequent Delaunay check 535 | let te0 = t!().e0; 536 | let te1 = t!().e1; 537 | let te2 = t!().e2; 538 | 539 | // Original triangle 't' is no longer needed, replace it with 'tsub0' 540 | t!() = tsub0; 541 | 542 | // 3) Check Delaunay condition for the old 't's edges and swap them if necessary. 543 | // Also recursively check any edges affected by the swap. 544 | 545 | self.test_and_swap_edge(te0, enew0, enew1); 546 | self.test_and_swap_edge(te1, enew1, enew2); 547 | self.test_and_swap_edge(te2, enew2, enew0); 548 | } 549 | 550 | 551 | /// Checks if point `pidx` is inside the triangle's `tidx` circumcircle. 
552 | #[allow(non_snake_case)] 553 | fn is_inside_circumcircle(&self, pidx: usize, tidx: usize) -> bool { 554 | let p = &self.vertices[pidx]; 555 | let t = &self.triangles[tidx]; 556 | 557 | let A = &self.vertices[t.v0]; 558 | let B = &self.vertices[t.v1]; 559 | let C = &self.vertices[t.v2]; 560 | 561 | 562 | // Coordinates of the circumcenter 563 | let ux: f32; 564 | let uy: f32; 565 | 566 | // Squared radius of the circumcircle 567 | let radiusq: f32; 568 | 569 | // Note: the formulas below work correctly regardless of the handedness of the coordinate system used 570 | // (for triangulation of points in an image a left-handed system is used here, i.e. X grows to the right, Y downwards) 571 | 572 | let d = 2.0 * (A.x as f32 * (B.y - C.y) as f32 + 573 | B.x as f32 * (C.y - A.y) as f32 + 574 | C.x as f32 * (A.y - B.y) as f32); 575 | 576 | if d.abs() > 1.0e-8 { 577 | ux = ((sqr!(A.x as f32) + sqr!(A.y as f32)) * (B.y - C.y) as f32 + 578 | (sqr!(B.x as f32) + sqr!(B.y as f32)) * (C.y - A.y) as f32 + 579 | (sqr!(C.x as f32) + sqr!(C.y as f32)) * (A.y - B.y) as f32) / d; 580 | 581 | uy = ((sqr!(A.x as f32) + sqr!(A.y as f32)) * (C.x - B.x) as f32 + 582 | (sqr!(B.x as f32) + sqr!(B.y as f32)) * (A.x - C.x) as f32 + 583 | (sqr!(C.x as f32) + sqr!(C.y as f32)) * (B.x - A.x) as f32) / d; 584 | 585 | radiusq = sqr!(ux - A.x as f32) + sqr!(uy - A.y as f32); 586 | } else { 587 | // Degenerate triangle (collinear vertices) 588 | let dist_AB_sq = sqr!((A.x - B.x) as f32) + sqr!((A.y - B.y) as f32); 589 | let dist_AC_sq = sqr!((A.x - C.x) as f32) + sqr!((A.y - C.y) as f32); 590 | let dist_BC_sq = sqr!((B.x - C.x) as f32) + sqr!((B.y - C.y) as f32); 591 | 592 | // Extreme vertices of the degenerate triangle 593 | 594 | let ext1: &Point; 595 | let ext2: &Point; 596 | 597 | if dist_AB_sq >= dist_AC_sq && dist_AB_sq >= dist_BC_sq { 598 | ext1 = &A; 599 | ext2 = &B; 600 | } else if dist_AC_sq >= dist_AB_sq && dist_AC_sq >= dist_BC_sq { 601 | ext1 =
&A; 602 | ext2 = &C; 603 | } else { 604 | ext1 = &B; 605 | ext2 = &C; 606 | } 607 | 608 | ux = (ext1.x + ext2.x) as f32 * 0.5; 609 | uy = (ext1.y + ext2.y) as f32 * 0.5; 610 | 611 | radiusq = 0.25 * (sqr!((ext1.x - ext2.x) as f32) + sqr!((ext1.y - ext2.y) as f32)); 612 | } 613 | 614 | sqr!(p.x as f32 - ux) + sqr!(p.y as f32 - uy) < radiusq 615 | } 616 | 617 | 618 | 619 | /// Ensures the specified edge satisfies the Delaunay condition. 620 | /// 621 | /// If edge `e` violates the Delaunay condition, swaps it and recursively 622 | /// continues to test the 4 neighboring edges. 623 | /// 624 | /// Before: 625 | /// 626 | /// v3--e2---v2 627 | /// / t1 ___/ / 628 | /// e3 __e4 e1 629 | /// / _/ t0 / 630 | /// v0---e0--v1 631 | /// 632 | /// After swapping e4: 633 | /// 634 | /// v3--e2---v2 635 | /// / \ t0 / 636 | /// e3 e4 e1 637 | /// / t1 \ / 638 | /// v0--e0--v1 639 | /// 640 | /// How to decide which of the new triangles is now `t0` and which `t1`? 641 | /// For each of the triangles adjacent to `e4` before the swap, take their vertex opposite to `e4` and the next vertex. 642 | /// After edge swap, the new triangle still contains the same 2 vertices. From the example above: 643 | /// 644 | /// 1) For `t0` (`v0-v1-v2`), use `v1`, `v2`. After swap, the new `t0` is the triangle which still contains `v1`, `v2`. 645 | /// 2) For `t1` (`v0-v2-v3`), use `v3`, `v0`. After swap, the new `t1` is the triangle which still contains `v3`, `v0`. 646 | /// 647 | /// After the swap is complete, recursively test `e0`, `e1`, `e2` and `e3`. 
648 | /// 649 | fn test_and_swap_edge(&mut self, 650 | e: usize, 651 | // Edges to skip when checking what needs swapping (may be EMPTY) 652 | eskip1: usize, 653 | eskip2: usize) { 654 | 655 | // Edge 'e' before the swap 656 | let eprev: Edge = self.edges[e]; 657 | 658 | // 0) Check the Delaunay condition for 'e's adjacent triangles 659 | 660 | if self.edges[e].t0 == EMPTY || self.edges[e].t1 == EMPTY { 661 | return; 662 | } 663 | 664 | // Triangles which share edge 'e' before the swap 665 | let t0prev = eprev.t0; 666 | let t1prev = eprev.t1; 667 | 668 | //TODO: guarantee that always 'w0' belongs to 't0' and 'w1' to 't1' - then we can get rid of all the "contains" checks below 669 | 670 | let mut swap_needed = false; 671 | 672 | if !swap_needed && self.triangles[t0prev].contains(eprev.w0) && self.is_inside_circumcircle(eprev.w1, eprev.t0) { 673 | swap_needed = true; 674 | } 675 | 676 | if !swap_needed && self.triangles[t0prev].contains(eprev.w1) && self.is_inside_circumcircle(eprev.w0, eprev.t0) { 677 | swap_needed = true; 678 | } 679 | 680 | if !swap_needed && self.triangles[t1prev].contains(eprev.w0) && self.is_inside_circumcircle(eprev.w1, eprev.t1) { 681 | swap_needed = true; 682 | } 683 | 684 | if !swap_needed && self.triangles[t1prev].contains(eprev.w1) && self.is_inside_circumcircle(eprev.w0, eprev.t1) { 685 | swap_needed = true; 686 | } 687 | 688 | if !swap_needed { 689 | return; 690 | } 691 | 692 | // List of at most 4 edges that have to be checked recursively after 'e' is swapped 693 | // FIXME: do we have to check all the 4 neighboring edges?
694 | 695 | let mut num_edges_to_check: usize = 0; 696 | let mut edges_to_check: [usize; 4] = [0; 4]; 697 | 698 | if self.triangles[t0prev].e0 != e && self.triangles[t0prev].e0 != eskip1 && self.triangles[t0prev].e0 != eskip2 { 699 | edges_to_check[num_edges_to_check] = self.triangles[t0prev].e0; 700 | num_edges_to_check += 1; 701 | } 702 | 703 | if self.triangles[t0prev].e1 != e && self.triangles[t0prev].e1 != eskip1 && self.triangles[t0prev].e1 != eskip2 { 704 | edges_to_check[num_edges_to_check] = self.triangles[t0prev].e1; 705 | num_edges_to_check += 1; 706 | } 707 | 708 | if self.triangles[t0prev].e2 != e && self.triangles[t0prev].e2 != eskip1 && self.triangles[t0prev].e2 != eskip2 { 709 | edges_to_check[num_edges_to_check] = self.triangles[t0prev].e2; 710 | num_edges_to_check += 1; 711 | } 712 | 713 | if self.triangles[t1prev].e0 != e && self.triangles[t1prev].e0 != eskip1 && self.triangles[t1prev].e0 != eskip2 { 714 | edges_to_check[num_edges_to_check] = self.triangles[t1prev].e0; 715 | num_edges_to_check += 1; 716 | } 717 | 718 | if self.triangles[t1prev].e1 != e && self.triangles[t1prev].e1 != eskip1 && self.triangles[t1prev].e1 != eskip2 { 719 | edges_to_check[num_edges_to_check] = self.triangles[t1prev].e1; 720 | num_edges_to_check += 1; 721 | } 722 | 723 | if self.triangles[t1prev].e2 != e && self.triangles[t1prev].e2 != eskip1 && self.triangles[t1prev].e2 != eskip2 { 724 | edges_to_check[num_edges_to_check] = self.triangles[t1prev].e2; 725 | num_edges_to_check += 1; 726 | } 727 | 728 | // 1) Determine the reference vertices for each triangle 729 | // 730 | // 731 | // 732 | // D---------C 733 | // / t1 ___/ / 734 | // / __e / 735 | // / _/ t0 / 736 | // A---------B 737 | // 738 | // B becomes t0refV 739 | // D becomes t1refV 740 | // 741 | 742 | 743 | let t0refv; // The only vertex in 't0' which does not belong to 'e' 744 | let t1refv; // The only vertex in 't1' which does not belong to 'e' 745 | 746 | if self.triangles[t0prev].contains(eprev.w0) { 747 
| t0refv = eprev.w0; 748 | t1refv = eprev.w1; 749 | } else { 750 | t0refv = eprev.w1; 751 | t1refv = eprev.w0; 752 | } 753 | 754 | // 2) Update the triangles 755 | 756 | let mut t0new: Triangle = Default::default(); 757 | let mut t1new: Triangle = Default::default(); 758 | 759 | // For each of the new triangles, the reference vertex from step 1) and the next vertex stay the same as before the swap. 760 | // The third vertex (i.e. the one "previous" to the reference vertex) becomes the vertex opposite 'e', i.e. the other triangle's reference vertex. 761 | // Additionally, reorder the vertices such that the reference vertex becomes v0. 762 | // 763 | // 764 | // 765 | // Before: After: 766 | // 767 | // D---------C D-------C 768 | // / t1 ___/ / / \ t0 / 769 | // / __e / / e / 770 | // / _/ t0 / / t1 \ / 771 | // A---------B A-------B 772 | // 773 | // t0: v0=A, v1=B, v2=C t0: v0=B, v1=C, v2=D 774 | // t1: v0=D, v1=A, v2=C t1: v0=D, v1=A, v2=B 775 | 776 | t0new.v0 = t0refv; 777 | t0new.v1 = self.triangles[t0prev].next_vertex(t0refv); 778 | t0new.v2 = t1refv; 779 | 780 | t1new.v0 = t1refv; 781 | t1new.v1 = self.triangles[t1prev].next_vertex(t1refv); 782 | t1new.v2 = t0refv; 783 | 784 | // For each new triangle, update their edges. The 'leading' edge of the reference vertex (now: e0) stays the same. The second edge comes 785 | // from the other triangle. The third edge is the new 'e' (after the swap). 786 | 787 | t0new.e0 = self.triangles[t0prev].get_leading_edge_containing_vertex(t0new.v0); 788 | t0new.e1 = self.triangles[t1prev].get_leading_edge_containing_vertex(t0new.v1); 789 | t0new.e2 = e; 790 | 791 | t1new.e0 = self.triangles[t1prev].get_leading_edge_containing_vertex(t1new.v0); 792 | t1new.e1 = self.triangles[t0prev].get_leading_edge_containing_vertex(t1new.v1); 793 | t1new.e2 = e; 794 | 795 | // For each of the 5 edges involved, update their adjacent triangles and opposing vertices information. 796 | // For 'e' after swap update also the end vertices. 
797 | 798 | replace_opposing_vertex(&mut self.edges[t0new.e0], t1new.v1, t0new.v2); 799 | replace_opposing_vertex(&mut self.edges[t0new.e1], t1new.v1, t0new.v0); 800 | replace_opposing_vertex(&mut self.edges[t1new.e0], t0new.v1, t1new.v2); 801 | replace_opposing_vertex(&mut self.edges[t1new.e1], t0new.v1, t1new.v0); 802 | 803 | replace_adjacent_triangle(&mut self.edges[t0new.e1], eprev.t1, eprev.t0); 804 | replace_adjacent_triangle(&mut self.edges[t1new.e1], eprev.t0, eprev.t1); 805 | 806 | // Update edge 'e' after the swap 807 | self.edges[e].w0 = t0new.v1; 808 | self.edges[e].w1 = t1new.v1; 809 | 810 | self.edges[e].v0 = t0new.v0; 811 | self.edges[e].v1 = t1new.v0; 812 | 813 | // Overwrite the old triangles 814 | 815 | self.triangles[t0prev] = t0new; 816 | 817 | self.triangles[t1prev] = t1new; 818 | 819 | // Recursively check the affected edges 820 | for i in 0..num_edges_to_check { 821 | self.test_and_swap_edge(edges_to_check[i], e, EMPTY); 822 | } 823 | } 824 | 825 | } 826 | 827 | 828 | /// Checks if point `p` belongs to the line specified by `v0`, `v1`.
829 | fn point_belongs_to_line(p: &Point, v0: &Point, v1: &Point) -> bool { 830 | 0 == -v1.x*v0.y + p.x*v0.y + 831 | v0.x*v1.y - p.x*v1.y - 832 | v0.x*p.y + v1.x*p.y 833 | } 834 | 835 | 836 | fn replace_opposing_vertex(edge: &mut Edge, wold: usize, wnew: usize) { 837 | if edge.w0 == wold { 838 | edge.w0 = wnew; 839 | } else if edge.w1 == wold { 840 | edge.w1 = wnew; 841 | } else if edge.w0 == EMPTY { 842 | edge.w0 = wnew; 843 | } else if edge.w1 == EMPTY { 844 | edge.w1 = wnew; 845 | } 846 | } 847 | 848 | 849 | fn replace_adjacent_triangle(edge: &mut Edge, told: usize, tnew: usize) { 850 | if edge.t0 == told { 851 | edge.t0 = tnew; 852 | } else if edge.t1 == told { 853 | edge.t1 = tnew; 854 | } else if edge.t0 == EMPTY { 855 | edge.t0 = tnew; 856 | } else if edge.t1 == EMPTY { 857 | edge.t1 = tnew; 858 | } 859 | } 860 | -------------------------------------------------------------------------------- /src/utils.rs: -------------------------------------------------------------------------------- 1 | // 2 | // libskry_r - astronomical image stacking 3 | // Copyright (c) 2017 Filip Szczerek 4 | // 5 | // This project is licensed under the terms of the MIT license 6 | // (see the LICENSE file for details). 7 | // 8 | // 9 | // File description: 10 | // Utilities. 11 | // 12 | 13 | use defs::{Point, WHITE_8BIT}; 14 | use filters; 15 | use image::{Image, PixelFormat}; 16 | use std; 17 | use std::fs::File; 18 | use std::io::{self, Read, Write}; 19 | use std::slice; 20 | 21 | 22 | macro_rules! sqr { 23 | ($x:expr) => { ($x) * ($x) } 24 | } 25 | 26 | 27 | /// Rounds `x` up to the closest multiple of `n`. 28 | macro_rules! upmult { 29 | ($x:expr, $n:expr) => { (($x) + ($n) - 1) / ($n) * ($n) } 30 | } 31 | 32 | 33 | /// Produces a range of specified length. 34 | macro_rules! range { ($start:expr, $len:expr) => { $start .. $start + $len } } 35 | 36 | 37 | /// Returns ceil(a/b). 38 | macro_rules! 
updiv { 39 | ($a:expr, $b:expr) => { (($a) + ($b) - 1) / ($b) } 40 | } 41 | 42 | 43 | /// Returns barycentric coordinates `(u, v)` of point `p` in the triangle `(v0, v1, v2)` (`p` can be outside the triangle). 44 | macro_rules! calc_barycentric_coords { 45 | ($p:expr, $v0:expr, $v1:expr, $v2:expr) => { 46 | ((($v1.y - $v2.y) as f32 * ($p.x as f32 - $v2.x as f32) + ($v2.x - $v1.x) as f32 * ($p.y as f32 - $v2.y as f32)) as f32 / (($v1.y - $v2.y) * ($v0.x - $v2.x) + ($v2.x - $v1.x) * ($v0.y - $v2.y)) as f32, 47 | (($v2.y - $v0.y) as f32 * ($p.x as f32 - $v2.x as f32) + ($v0.x - $v2.x) as f32 * ($p.y as f32 - $v2.y as f32)) as f32 / (($v1.y - $v2.y) * ($v0.x - $v2.x) + ($v2.x - $v1.x) * ($v0.y - $v2.y)) as f32) 48 | } 49 | } 50 | 51 | 52 | pub fn read_struct<T, R: Read>(read: &mut R) -> io::Result<T> { 53 | let num_bytes = ::std::mem::size_of::<T>(); 54 | let mut s = std::mem::MaybeUninit::<T>::uninit(); 55 | let buffer = unsafe { slice::from_raw_parts_mut(s.as_mut_ptr() as *mut u8, num_bytes) }; 56 | match read.read_exact(buffer) { 57 | Ok(()) => Ok(unsafe { s.assume_init() }), 58 | Err(e) => { std::mem::forget(s); Err(e) } 59 | } 60 | } 61 | 62 | 63 | pub fn read_vec<T>(file: &mut File, len: usize) -> io::Result<Vec<T>> { 64 | let mut vec = alloc_uninitialized::<T>(len); 65 | let num_bytes = ::std::mem::size_of::<T>() * vec.len(); 66 | let buffer = unsafe{ slice::from_raw_parts_mut(vec[..].as_mut_ptr() as *mut u8, num_bytes) }; 67 | file.read_exact(buffer)?; 68 | Ok(vec) 69 | } 70 | 71 | 72 | pub fn write_struct<T, W: Write>(obj: &T, write: &mut W) -> Result<(), io::Error> { 73 | let num_bytes = ::std::mem::size_of::<T>(); 74 | unsafe { 75 | let buffer = slice::from_raw_parts(obj as *const T as *const u8, num_bytes); 76 | write.write_all(buffer) 77 | } 78 | } 79 | 80 | 81 | /// Allocates an uninitialized `Vec` having `len` elements. 82 | pub fn alloc_uninitialized<T>(len: usize) -> Vec<T> { 83 | let mut v = Vec::<T>::with_capacity(len); 84 | unsafe { v.set_len(len); } 85 | 86 | v 87 | } 88 | 89 | 90 | /// Returns the min.
and max. pixel values in a Mono8 image 91 | pub fn find_min_max_brightness(img: &Image) -> (u8, u8) { 92 | assert!(img.get_pixel_format() == PixelFormat::Mono8); 93 | 94 | let mut bmin: u8 = WHITE_8BIT; 95 | let mut bmax = 0u8; 96 | 97 | for val in img.get_pixels() { 98 | if *val < bmin { bmin = *val; } 99 | if *val > bmax { bmax = *val; } 100 | } 101 | 102 | (bmin, bmax) 103 | } 104 | 105 | 106 | /// Checks if the specified position `pos` in `img` (Mono8) is appropriate for block matching. 107 | /// 108 | /// Uses the distribution of gradient directions around `pos` to decide 109 | /// if the location is safe for block matching. It is not if the image 110 | /// is dominated by a single edge (e.g. the limb of overexposed solar disk, 111 | /// without prominences or resolved spicules). Should block matching be performed 112 | /// in such circumstances, the tracked point would jump along the edge. 113 | /// 114 | pub fn assess_gradients_for_block_matching(img: &Image, 115 | pos: Point, 116 | neighborhood_radius: u32) -> bool { 117 | 118 | let block_size = 2*neighborhood_radius + 1; 119 | 120 | let block = img.get_fragment_copy(Point{ x: pos.x - neighborhood_radius as i32, 121 | y: pos.y - neighborhood_radius as i32 }, 122 | block_size, block_size, false); 123 | 124 | // Blur to reduce noise impact 125 | let block_blurred = filters::apply_box_blur(&block, 1, 3); 126 | 127 | let mut line_m1 = block_blurred.get_line_raw(0); // Line at y-1 128 | let mut line_0 = block_blurred.get_line_raw(1); // Line at y 129 | let mut line_p1 = block_blurred.get_line_raw(2); // line at y+1 130 | 131 | // Determine the histogram of gradient directions within `block_blurred` 132 | 133 | const NUM_DIRS: usize = 512; 134 | 135 | let mut dirs = [0.0f64; NUM_DIRS]; // Contains sums of gradient lengths 136 | 137 | for y in 1 .. block_size-1 { 138 | for x in 1 .. 
(block_size-1) as usize { 139 | // Calculate gradient using Sobel filter 140 | let grad_x = 2 * (line_0[x+1] as i32 - line_0[x-1] as i32) 141 | + line_m1[x+1] as i32 - line_m1[x-1] as i32 142 | + line_p1[x+1] as i32 - line_p1[x-1] as i32; 143 | 144 | let grad_y = 2 * (line_p1[x] as i32 - line_m1[x] as i32) 145 | + line_p1[x+1] as i32 - line_m1[x+1] as i32 146 | + line_p1[x-1] as i32 - line_m1[x-1] as i32; 147 | 148 | let grad_len = f64::sqrt((sqr!(grad_x) + sqr!(grad_y)) as f64); 149 | if grad_len > 0.0 { 150 | let cos_dir = grad_x as f64 / grad_len; 151 | let mut dir = f64::acos(cos_dir); 152 | if grad_y < 0 { dir = -dir; } 153 | 154 | let mut index: i32 = NUM_DIRS as i32/2 + (dir * NUM_DIRS as f64 / (2.0 * std::f64::consts::PI)) as i32; 155 | 156 | if index < 0 { index = 0; } 157 | else if index >= NUM_DIRS as i32 { index = NUM_DIRS as i32 - 1; } 158 | 159 | dirs[index as usize] += grad_len; 160 | } 161 | } 162 | 163 | // Move line pointers up 164 | line_m1 = line_0; 165 | line_0 = line_p1; 166 | if y < block_size - 2 { 167 | line_p1 = block_blurred.get_line_raw(y + 2); 168 | } 169 | } 170 | 171 | // Smooth out the histogram to remove spikes (caused by Sobel filter's anisotropy) 172 | let dirs_smooth = filters::median_filter(&dirs[..], 1); 173 | 174 | // We declare that gradient variability is too low if there are 175 | // consecutive zeros over more than 1/3 of the histogram and 176 | // the longest non-zero sequence is shorter than 1/4 of the histogram 177 | 178 | let mut zero_count = 0usize; 179 | let mut nzero_count = 0usize; 180 | 181 | let mut max_zero_count = 0usize; 182 | let mut max_nzero_count = 0usize; 183 | 184 | for ds in dirs_smooth { 185 | if ds == 0.0 { 186 | zero_count += 1; 187 | if nzero_count > max_nzero_count { max_nzero_count = nzero_count; } 188 | nzero_count = 0; 189 | } else { 190 | if zero_count > max_zero_count { max_zero_count = zero_count; } 191 | zero_count = 0; 192 | nzero_count += 1; 193 | } 194 | } 195 | 196 | if max_zero_count >
NUM_DIRS/3 && max_nzero_count < NUM_DIRS/4 { false } else { true } 197 | } 198 | 199 | /// Changes endianness of 16-bit words. 200 | pub fn swap_words16(img: &mut Image) { 201 | for val in img.get_pixels_mut::<u16>() { 202 | *val = u16::swap_bytes(*val); 203 | } 204 | } 205 | 206 | 207 | pub fn is_machine_big_endian() -> bool { 208 | u16::to_be(0x1122u16) == 0x1122u16 209 | } --------------------------------------------------------------------------------
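As a standalone illustration of the Delaunay test at the heart of `src/triangulation.rs` (not part of the library), the sketch below reimplements the non-degenerate branch of `is_inside_circumcircle` with plain `f32` tuples instead of the library's integer `Point` type; the free function and coordinate representation are hypothetical, but the circumcenter formulas mirror the ones in the source:

```rust
/// Standalone sketch: does point `p` lie strictly inside the circumcircle
/// of triangle `(a, b, c)`? This is the condition `test_and_swap_edge` uses
/// to decide whether an edge must be swapped. Degenerate (collinear)
/// triangles are rejected here instead of handled, unlike the library.
fn sqr(x: f32) -> f32 { x * x }

fn is_inside_circumcircle(p: (f32, f32), a: (f32, f32), b: (f32, f32), c: (f32, f32)) -> bool {
    // Twice the signed area of the triangle; near zero means collinear vertices
    let d = 2.0 * (a.0 * (b.1 - c.1) + b.0 * (c.1 - a.1) + c.0 * (a.1 - b.1));
    assert!(d.abs() > 1.0e-8, "degenerate (collinear) triangle");

    // Circumcenter coordinates (same formulas as the non-degenerate branch
    // of `is_inside_circumcircle`; valid in both left- and right-handed systems)
    let ux = ((sqr(a.0) + sqr(a.1)) * (b.1 - c.1)
            + (sqr(b.0) + sqr(b.1)) * (c.1 - a.1)
            + (sqr(c.0) + sqr(c.1)) * (a.1 - b.1)) / d;
    let uy = ((sqr(a.0) + sqr(a.1)) * (c.0 - b.0)
            + (sqr(b.0) + sqr(b.1)) * (a.0 - c.0)
            + (sqr(c.0) + sqr(c.1)) * (b.0 - a.0)) / d;

    // Squared circumradius: squared distance from the center to any vertex
    let radius_sq = sqr(ux - a.0) + sqr(uy - a.1);

    sqr(p.0 - ux) + sqr(p.1 - uy) < radius_sq
}

fn main() {
    // Unit right triangle: circumcircle centered at (0.5, 0.5), radius sqrt(0.5)
    let (a, b, c) = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0));
    assert!(is_inside_circumcircle((0.5, 0.5), a, b, c));
    assert!(!is_inside_circumcircle((2.0, 2.0), a, b, c));
    println!("ok");
}
```

In `test_and_swap_edge`, an edge is swapped exactly when the vertex opposite it in one adjacent triangle falls inside the other adjacent triangle's circumcircle; after the swap, the neighboring edges are re-tested recursively.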