Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-31
... that is a unique combination of: (1) multi-gradient Single Point Imaging involving global phase...-encoding gradients. The combination approach of single point imaging with the spin-echo signal detection...
NASA Astrophysics Data System (ADS)
Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.
2016-06-01
This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, to within the accuracy of the RPAS platform and its inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image; this method assumes a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
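Strategy (iv) above reduces to rigid alignment of two point clouds. A minimal point-to-point ICP sketch (Python/NumPy/SciPy on toy synthetic clouds; not the authors' implementation, which also uses a plane-based variant) illustrates the basic loop of nearest-neighbour matching followed by a closed-form rigid update:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch/SVD) rigid transform mapping src onto dst (paired points)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=30):
    """Point-to-point ICP aligning `source` (e.g. the TIR cloud) to `target` (the RGB cloud)."""
    tree = cKDTree(target)
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = source.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                  # nearest-neighbour correspondences
        R, t = best_rigid_transform(cur, target[idx])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# toy example: the source cloud is a slightly rotated and shifted copy of the target cloud
rng = np.random.default_rng(0)
target = rng.uniform(size=(500, 3))
theta = 0.15
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.05, -0.03, 0.02])
R_est, t_est = icp(source, target)
print("mean residual:", np.linalg.norm(source @ R_est.T + t_est - target, axis=1).mean())
```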
NASA Astrophysics Data System (ADS)
Gustavsson, Anna-Karin; Petrov, Petar N.; Lee, Maurice Y.; Shechtman, Yoav; Moerner, W. E.
2018-02-01
To obtain a complete picture of subcellular nanostructures, cells must be imaged with high resolution in all three dimensions (3D). Here, we present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super localization of single molecules as well as 3D super-resolution imaging in thick cells. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet, and the light sheet can therefore be formed using simple optics. The result is flexible and user-friendly 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validated TILT3D for 3D superresolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSF for fiducial bead tracking and live axial drift correction. We envision TILT3D to become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.
Active point out-of-plane ultrasound calibration
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.
2015-03-01
Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to be a single physical point. In our approach, we minimize the distances between the circular subsets of each image, with them ideally intersecting at a single point. We performed simulations for both noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64 mm.
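A highly simplified sketch of that circle-constrained cost (Python/SciPy on synthetic tracker poses; only the calibration translation and the per-image elevation angles are treated as unknowns here, whereas the full method also estimates the calibration rotation):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)
n = 12                                                   # number of tracked poses / images

# simulate image-frame observations of one physical point (axes: lateral, elevational, axial)
phi_true = rng.uniform(-0.15, 0.15, n)                   # small out-of-plane angles
axial = rng.uniform(30.0, 60.0, n)                       # measured axial position (mm)
lateral = rng.uniform(-10.0, 10.0, n)                    # measured lateral position (mm)
p_img_true = np.stack([lateral, axial * np.sin(phi_true), axial * np.cos(phi_true)], axis=1)

# marker poses chosen so every image maps to the same world point (calibration rotation = I)
R_marker = [Rotation.random(random_state=i).as_matrix() for i in range(n)]
point_world = np.array([10.0, -4.0, 7.0])
t_cal_true = np.array([1.0, 2.0, 0.5])                   # ground-truth calibration translation
t_marker = [point_world - R_marker[i] @ (p_img_true[i] + t_cal_true) for i in range(n)]

def residuals(params):
    t_cal, phi = params[:3], params[3:]                  # unknown translation + elevation angles
    pts = np.empty((n, 3))
    for i in range(n):
        # circle constraint: the point lies at radius axial[i] in the axial-elevational plane
        p_img = np.array([lateral[i], axial[i] * np.sin(phi[i]), axial[i] * np.cos(phi[i])])
        pts[i] = R_marker[i] @ (p_img + t_cal) + t_marker[i]
    return (pts - pts.mean(axis=0)).ravel()              # all circles should meet at one point

sol = least_squares(residuals, np.zeros(3 + n))
print("estimated calibration translation:", sol.x[:3].round(2))
```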
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A. H.; Garson, A. B.; Anastasio, M. A.
In this study, we report initial demonstrations of the use of single crystals in indirect x-ray imaging with a benchtop implementation of propagation-based (PB) x-ray phase contrast imaging. Based on single Gaussian peak fits to the x-ray images, we observed a four times smaller system point-spread function (PSF) with the 50-μm thick single crystal scintillators than with the reference polycrystalline phosphor/scintillator. Fiber-optic plate depth-of-focus and Al reflective-coating aspects are also elucidated. Guided by the results from the 25-mm diameter crystal samples, we report additionally the first results with a unique 88-mm diameter single crystal bonded to a fiber optic plate and coupled to the large format CCD. Both PSF and x-ray phase contrast imaging data are quantified and presented.
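The quoted PSF comparison rests on fitting a single Gaussian to a measured profile and converting the fitted width to a FWHM. A small illustrative sketch (SciPy curve_fit on a synthetic profile; not the authors' data pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

# synthetic 1D profile through a point-source image (pixel pitch in microns)
pitch_um = 10.0
x = np.arange(50) * pitch_um
true_sigma = 21.0 / 2.355                              # a 21-um FWHM source, for illustration
y = gaussian(x, 1.0, 250.0, true_sigma, 0.05)
y += np.random.default_rng(0).normal(0, 0.01, x.size)

popt, _ = curve_fit(gaussian, x, y, p0=[y.max(), x[np.argmax(y)], 10.0, 0.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
print(f"fitted system PSF FWHM: {fwhm:.1f} um")
```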
Lucky Imaging: Improved Localization Accuracy for Single Molecule Imaging
Cronin, Bríd; de Wet, Ben; Wallace, Mark I.
2009-01-01
We apply the astronomical data-analysis technique, Lucky imaging, to improve resolution in single molecule fluorescence microscopy. We show that by selectively discarding data points from individual single-molecule trajectories, imaging resolution can be improved by a factor of 1.6 for individual fluorophores and up to 5.6 for more complex images. The method is illustrated using images of fluorescent dye molecules and quantum dots, and the in vivo imaging of fluorescently labeled linker for activation of T cells. PMID:19348772
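The selection step at the heart of Lucky imaging is to rank frames by a quality measure and keep only the best fraction before recombining. A toy sketch (NumPy; the photon-count ranking criterion is an assumption for illustration, not necessarily the exact metric used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 1000
photons = rng.poisson(300, n_frames)                     # detected photons per frame
sigma = 100.0 / np.sqrt(photons)                         # localization error ~ 1/sqrt(N), in nm
err = rng.normal(0.0, sigma[:, None], size=(n_frames, 2))

def precision(e):
    """RMS radial localization error of a set of 2D position estimates (nm)."""
    return np.sqrt(np.mean(np.sum(e ** 2, axis=1)))

keep = photons >= np.percentile(photons, 75)             # 'lucky' selection: best 25% of frames
print(f"all frames : {precision(err):.1f} nm")
print(f"best 25%   : {precision(err[keep]):.1f} nm")
```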
3D single-molecule super-resolution microscopy with a tilted light sheet.
Gustavsson, Anna-Karin; Petrov, Petar N; Lee, Maurice Y; Shechtman, Yoav; Moerner, W E
2018-01-09
Tilted light sheet microscopy with 3D point spread functions (TILT3D) combines a novel, tilted light sheet illumination strategy with long axial range point spread functions (PSFs) for low-background, 3D super-localization of single molecules as well as 3D super-resolution imaging in thick cells. Because the axial positions of the single emitters are encoded in the shape of each single-molecule image rather than in the position or thickness of the light sheet, the light sheet need not be extremely thin. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The result is simple and flexible 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validate TILT3D for 3D super-resolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed tetrapod PSFs for fiducial bead tracking and live axial drift correction.
Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J
2011-05-21
Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.
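Image-based resolution modelling amounts to inserting an image-space blur, and its adjoint, into the forward and backward steps of MLEM/OSEM. A compact 1D toy sketch (NumPy/SciPy; a shift-invariant Gaussian kernel is used for brevity, whereas the paper measures spatially-variant kernels from a printed point source array):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mlem_with_rm(sino, system, sigma_px, n_iter=50):
    """1D MLEM with image-space resolution modelling: effective system matrix = system @ blur."""
    blur = lambda v: gaussian_filter1d(v, sigma_px, mode="constant")  # symmetric => self-adjoint
    x = np.ones(system.shape[1])
    sens = blur(system.T @ np.ones(system.shape[0]))
    for _ in range(n_iter):
        fwd = system @ blur(x)                           # forward: blur the image, then project
        ratio = sino / np.maximum(fwd, 1e-12)
        x *= blur(system.T @ ratio) / np.maximum(sens, 1e-12)  # backproject, then apply adjoint blur
    return x

# toy problem: identity "projector", so the data are simply a blurred point source
n = 64
system = np.eye(n)
truth = np.zeros(n); truth[n // 2] = 100.0
sino = gaussian_filter1d(truth, 2.0, mode="constant")
recon = mlem_with_rm(sino, system, sigma_px=2.0)
print("peak value: truth", truth.max(), " recon", round(float(recon.max()), 1))
```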
Feasibility study and quality assessment of unmanned aircraft system-derived multispectral images
NASA Astrophysics Data System (ADS)
Chang, Kuo-Jen
2017-04-01
The purpose of this study is to explore the precision and applicability of UAS-derived multispectral images. In this study, the Micro-MCA6 multispectral camera was mounted on a quadcopter. The Micro-MCA6 shoots synchronized images of each single band. By means of geotagged images and control points, orthomosaic images of each single band were first generated at 14 cm resolution, and the six bands were then merged into a complete multispectral image. To improve the spatial resolution, the six-band image was fused with a 9 cm resolution image taken by an RGB camera. Image quality was evaluated for each single band using control points and check points. The standard deviations of the errors are within 1 to 2 pixels for each band. The quality of the multispectral image was also compared with a 3 cm resolution orthomosaic RGB image gathered by the UAV in the same mission. The standard deviations of the errors are within 2 to 3 pixels. The results show that the errors arise from image blur and from band misalignment at object edges. Finally, the normalized difference vegetation index (NDVI) was extracted from the image to explore the condition of the vegetation and the nature of the environment. This study demonstrates the feasibility and capability of high resolution multispectral imaging from a UAS.
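The NDVI used here is a simple per-pixel band ratio. A minimal sketch (NumPy; the band selection and scaling are assumptions, since they depend on the Micro-MCA6 filter configuration):

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Per-pixel normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# toy 2x2 example with digital numbers standing in for reflectance
red = np.array([[120, 40], [90, 60]])
nir = np.array([[130, 200], [95, 210]])
print(ndvi(red, nir).round(2))   # high values where vegetation (high NIR, low red) dominates
```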
Three-dimensional ocular kinematics underlying binocular single vision
Misslisch, H.
2016-01-01
We have analyzed the binocular coordination of the eyes during far-to-near refixation saccades based on the evaluation of distance ratios and angular directions of the projected target images relative to the eyes' rotation centers. By defining the geometric point of binocular single vision, called Helmholtz point, we found that disparities during fixations of targets at near distances were limited in the subject's three-dimensional visual field to the vertical and forward directions. These disparities collapsed to simple vertical disparities in the projective binocular image plane. Subjects were able to perfectly fuse the vertically disparate target images with respect to the projected Helmholtz point of single binocular vision, independent of the particular location relative to the horizontal plane of regard. Target image fusion was achieved by binocular torsion combined with corrective modulations of the differential half-vergence angles of the eyes in the horizontal plane. Our findings support the notion that oculomotor control combines vergence in the horizontal plane of regard with active torsion in the frontal plane to achieve fusion of the dichoptic binocular target images. PMID:27655969
Rioux, James A; Beyea, Steven D; Bowen, Chris V
2017-02-01
Purely phase-encoded techniques such as single point imaging (SPI) are generally unsuitable for in vivo imaging due to lengthy acquisition times. Reconstruction of highly undersampled data using compressed sensing allows SPI data to be quickly obtained from animal models, enabling applications in preclinical cellular and molecular imaging. TurboSPI is a multi-echo single point technique that acquires hundreds of images with microsecond spacing, enabling high temporal resolution relaxometry of large-R2* systems such as iron-loaded cells. TurboSPI acquisitions can be pseudo-randomly undersampled in all three dimensions to increase artifact incoherence, and can provide prior information to improve reconstruction. We evaluated the performance of CS-TurboSPI in phantoms, a rat ex vivo, and a mouse in vivo. An algorithm for iterative reconstruction of TurboSPI relaxometry time courses does not affect image quality or R2* mapping in vitro at acceleration factors up to 10. Imaging ex vivo is possible at similar acceleration factors, and in vivo imaging is demonstrated at an acceleration factor of 8, such that acquisition time is under 1 h. Accelerated TurboSPI enables preclinical R2* mapping without loss of data quality, and may show increased specificity to iron oxide compared to other sequences.
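For a mono-exponential decay, R2* mapping from a multi-echo time course reduces to a fit of S(TE) = S0·exp(-R2*·TE). A minimal log-linear fit sketch (NumPy, synthetic signal; not the authors' reconstruction code):

```python
import numpy as np

te = np.linspace(0.2e-3, 5e-3, 25)             # echo times in seconds (illustrative)
r2s_true, s0_true = 400.0, 1.0                 # a large-R2* system, e.g. iron-loaded cells
signal = s0_true * np.exp(-r2s_true * te)
signal += np.random.default_rng(0).normal(0, 0.005, te.size)

# log-linear least squares: ln S = ln S0 - R2* * TE
valid = signal > 0
slope, intercept = np.polyfit(te[valid], np.log(signal[valid]), 1)
print(f"R2* = {-slope:.0f} 1/s, S0 = {np.exp(intercept):.3f}")
```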
From Relativistic Electrons to X-ray Phase Contrast Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A. H.; Garson, A. B.; Anastasio, M. A.
2017-10-09
We report the initial demonstrations of the use of single crystals in indirect x-ray imaging for x-ray phase contrast imaging at the Washington University in St. Louis Computational Bioimaging Laboratory (CBL). Based on single Gaussian peak fits to the x-ray images, we observed a four times smaller system point spread function (21 μm (FWHM)) with the 25-mm diameter single crystals than the reference polycrystalline phosphor’s 80-μm value. Potential fiber-optic plate depth-of-focus aspects and 33-μm diameter carbon fiber imaging are also addressed.
Scanning Transmission Electron Microscopy at High Resolution
Wall, J.; Langmore, J.; Isaacson, M.; Crewe, A. V.
1974-01-01
We have shown that a scanning transmission electron microscope with a high brightness field emission source is capable of obtaining better than 3 Å resolution using 30 to 40 keV electrons. Elastic dark field images of single atoms of uranium and mercury are shown which demonstrate this fact as determined by a modified Rayleigh criterion. Point-to-point micrograph resolution between 2.5 and 3.0 Å is found in dark field images of micro-crystallites of uranium and thorium compounds. Furthermore, adequate contrast is available to observe single atoms as light as silver. PMID:4521050
MR imaging of ore for heap bioleaching studies using pure phase encode acquisition methods
NASA Astrophysics Data System (ADS)
Fagan, Marijke A.; Sederman, Andrew J.; Johns, Michael L.
2012-03-01
Various MRI techniques were considered with respect to imaging of aqueous flow fields in low grade copper ore. Spin echo frequency encoded techniques were shown to produce unacceptable image distortions, which led to pure phase encoded techniques being considered. Single point imaging multiple point acquisition (SPI-MPA) and spin echo single point imaging (SESPI) techniques were applied. By direct comparison with X-ray tomographic images, both techniques were found to be able to produce distortion-free images of the ore packings at 2 T. The signal to noise ratios (SNRs) of the SESPI images were found to be superior to SPI-MPA for equal total acquisition times; this was explained based on NMR relaxation measurements. SESPI was also found to produce suitable images for a range of particle sizes, whereas the SPI-MPA SNR deteriorated markedly as particle size was reduced. Comparisons on a 4.7 T magnet showed significant signal loss from the SPI-MPA images, the effect of which was accentuated in the case of unsaturated flowing systems. Hence it was concluded that SESPI was the most robust imaging method for the study of copper ore heap leaching hydrology.
A precise pointing nanopipette for single-cell imaging via electroosmotic injection.
Lv, Jian; Qian, Ruo-Can; Hu, Yong-Xu; Liu, Shao-Chuang; Cao, Yue; Zheng, Yong-Jie; Long, Yi-Tao
2016-11-24
The precise transportation of fluorescent probes to the designated location in living cells is still a challenge. Here, we present a new addition to nanopipettes as a powerful tool to deliver fluorescent molecules to a given place in a single cell by electroosmotic flow, indicating favorable potential for further application in single-cell imaging.
Single shot laser speckle based 3D acquisition system for medical applications
NASA Astrophysics Data System (ADS)
Khan, Danish; Shirazi, Muhammad Ayaz; Kim, Min Young
2018-06-01
The state-of-the-art techniques used by medical practitioners to extract the three-dimensional (3D) geometry of different body parts, such as laser line profiling or structured light scanning, require a series of images/frames. Movement of the patient during the scanning process often leads to inaccurate measurements because of the sequential image acquisition. Single-shot structured light techniques are robust to motion, but their prevalent challenges are low point density and algorithmic complexity. In this research, a single-shot 3D measurement system is presented that extracts the 3D point cloud of human skin by projecting a laser speckle pattern and using a single pair of images captured by two synchronized cameras. In contrast to conventional laser speckle 3D measurement systems that establish stereo correspondence by digital correlation of the projected speckle patterns, the proposed system employs the KLT tracking method to locate the corresponding points. The 3D point cloud contains no outliers and sufficient quality of 3D reconstruction is achieved. The 3D shape acquisition of human body parts validates the potential application of the proposed system in the medical industry.
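A minimal sketch of the correspondence-plus-triangulation step using OpenCV's pyramidal Lucas-Kanade (KLT) tracker; the image pair and the 3x4 projection matrices P1/P2 are placeholders that would come from the calibrated, synchronized stereo rig:

```python
import numpy as np
import cv2

def speckle_stereo_points(img_left, img_right, P1, P2, n_seeds=2000):
    """Match speckle points from the left to the right image with KLT, then triangulate."""
    seeds = cv2.goodFeaturesToTrack(img_left, maxCorners=n_seeds,
                                    qualityLevel=0.01, minDistance=5)
    matched, status, _ = cv2.calcOpticalFlowPyrLK(img_left, img_right, seeds, None,
                                                  winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    pts_l = seeds[ok].reshape(-1, 2).T             # 2xN arrays for triangulatePoints
    pts_r = matched[ok].reshape(-1, 2).T
    hom = cv2.triangulatePoints(P1, P2, pts_l, pts_r)   # 4xN homogeneous points
    return (hom[:3] / hom[3]).T                    # Nx3 point cloud

# usage sketch (P1, P2 are the 3x4 projection matrices from stereo calibration):
# cloud = speckle_stereo_points(cv2.imread("left.png", 0), cv2.imread("right.png", 0), P1, P2)
```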
Surface Imaging Skin Friction Instrument and Method
NASA Technical Reports Server (NTRS)
Brown, James L. (Inventor); Naughton, Jonathan W. (Inventor)
1999-01-01
A surface imaging skin friction instrument allows 2D resolution of a spatial image by means of a 2D Hilbert transform and a 2D inverse thin-oil-film solver, providing an innovation over prior-art single-point approaches. An incoherent, monochromatic light source can be used. The invention provides accurate, easy-to-use, economical measurement of larger regions of surface shear stress in a single test.
A single FPGA-based portable ultrasound imaging system for point-of-care applications.
Kim, Gi-Duck; Yoon, Changhan; Kye, Sang-Bum; Lee, Youngbae; Kang, Jeeun; Yoo, Yangmo; Song, Tai-kyong
2012-07-01
We present a cost-effective portable ultrasound system based on a single field-programmable gate array (FPGA) for point-of-care applications. In the portable ultrasound system developed, all the ultrasound signal and image processing modules, including an effective 32-channel receive beamformer with pseudo-dynamic focusing, are embedded in an FPGA chip. For overall system control, a mobile processor running Linux at 667 MHz is used. The scan-converted ultrasound image data from the FPGA are directly transferred to the system controller via external direct memory access without a video processing unit. The portable ultrasound system developed can provide real-time B-mode imaging with a maximum frame rate of 30 frames/s, and it has a battery life of approximately 1.5 h. These results indicate that the single FPGA-based portable ultrasound system developed is able to meet the processing requirements in medical ultrasound imaging while providing improved flexibility for adapting to emerging POC applications.
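The receive beamformer at the core of such a system is delay-and-sum: each channel's RF samples are delayed according to the echo geometry and summed. A simplified software sketch (NumPy, dynamic receive focusing for a single scan line, no apodization and none of the pseudo-dynamic approximations used on the FPGA):

```python
import numpy as np

def delay_and_sum(rf, elem_x, fs, c, depths):
    """Beamform one A-line at x = 0 from per-channel RF data rf (n_channels, n_samples)."""
    n_ch = rf.shape[0]
    line = np.zeros(depths.size)
    for k, z in enumerate(depths):
        # two-way time of flight: transmit to depth z, return to each element position
        tof = (z + np.sqrt(z ** 2 + elem_x ** 2)) / c
        idx = np.round(tof * fs).astype(int)
        valid = idx < rf.shape[1]
        line[k] = rf[np.arange(n_ch)[valid], idx[valid]].sum()   # coherent sum across channels
    return line

# toy usage: 32 elements at 0.3 mm pitch, 40 MHz sampling, 1540 m/s sound speed
elem_x = (np.arange(32) - 15.5) * 0.3e-3
rf = np.random.default_rng(0).normal(size=(32, 4096))            # placeholder channel data
depths = np.linspace(5e-3, 40e-3, 512)
print(delay_and_sum(rf, elem_x, fs=40e6, c=1540.0, depths=depths).shape)
```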
NASA Astrophysics Data System (ADS)
Nayak, M.; Beck, J.; Udrea, B.
This paper focuses on the aerospace application of a single beam laser rangefinder (LRF) for 3D imaging, shape detection, and reconstruction in the context of a space-based space situational awareness (SSA) mission scenario. The primary limitation to 3D imaging from LRF point clouds is the one-dimensional nature of the single beam measurements. A method that combines relative orbital motion and scanning attitude motion to generate point clouds has been developed and the design and characterization of multiple relative motion and attitude maneuver profiles are presented. The target resident space object (RSO) has the shape of a generic telecommunications satellite. The shape and attitude of the RSO are unknown to the chaser satellite; however, it is assumed that the RSO is un-cooperative and has fixed inertial pointing. All sensors in the metrology chain are assumed ideal. A previous study by the authors used pure Keplerian motion to perform a similar 3D imaging mission at an asteroid. A new baseline for proximity operations maneuvers for LRF scanning, based on a waypoint adaptation of the Hill-Clohessy-Wiltshire (HCW) equations, is examined. Propellant expenditure for each waypoint profile is discussed and combinations of relative motion and attitude maneuvers that minimize the propellant used to achieve a minimum required point cloud density are studied. Both LRF strike-point coverage and point cloud density are maximized; the capability for 3D shape registration and reconstruction from point clouds generated with a single beam LRF without catalog comparison is proven. Next, a method of using edge detection algorithms to process a point cloud into a 3D modeled image containing reconstructed shapes is presented. Weighted accuracy of edge reconstruction with respect to the true model is used to calculate a qualitative metric that evaluates the effectiveness of coverage. Both the edge recognition algorithms and the metric are independent of point cloud density; therefore, they are utilized to compare the quality of point clouds generated by various attitude and waypoint command profiles. The RSO model incorporates diverse irregular protruding shapes, such as open sensor covers, instrument pods and solar arrays, to test the limits of the algorithms. This analysis is used to mathematically prove that point clouds generated by a single-beam LRF can achieve sufficient edge recognition accuracy for SSA applications, with meaningful shape information extractable even from sparse point clouds. For all command profiles, reconstructions of RSO shapes from the point clouds generated with the proposed method are compared to the truth model and conclusions are drawn regarding their fidelity.
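The waypoint profiles referred to above build on the Hill-Clohessy-Wiltshire (HCW) relative-motion model. A brief sketch of the standard HCW closed-form state propagation (NumPy; radial/along-track/cross-track convention, with an illustrative low-Earth-orbit mean motion):

```python
import numpy as np

def hcw_propagate(state, t, n):
    """Propagate [x, y, z, vx, vy, vz] (radial, along-track, cross-track) for time t (s)
    under the Hill-Clohessy-Wiltshire equations; n is the target's mean motion (rad/s)."""
    s, c = np.sin(n * t), np.cos(n * t)
    Phi = np.array([
        [4 - 3 * c,       0, 0,        s / n,        2 * (1 - c) / n,       0],
        [6 * (s - n * t), 1, 0, 2 * (c - 1) / n, (4 * s - 3 * n * t) / n,   0],
        [0,               0, c,        0,            0,                 s / n],
        [3 * n * s,       0, 0,        c,            2 * s,                 0],
        [6 * n * (c - 1), 0, 0,   -2 * s,            4 * c - 3,             0],
        [0,               0, -n * s,   0,            0,                     c],
    ])
    return Phi @ state

# example: a 50 m radial offset from a hold point 200 m behind the RSO induces drift
n = np.sqrt(398600.4418 / 6778.0 ** 3)        # mean motion for a ~400 km altitude orbit (rad/s)
state0 = np.array([50.0, -200.0, 0.0, 0.0, 0.0, 0.0])
print(hcw_propagate(state0, 600.0, n).round(2))
```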
Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.
Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels
2013-01-01
Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective, sensitive, and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we ensure that focus points are set only on cells; second, we check the total slide focus quality. From a first analysis we found that superficial dust can be separated from the cell layer (the thin layer of cells on the glass slide) itself. We then analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values, and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides we achieved a scanning time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.
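The focus-point screening described above follows a common pattern: compute simple per-image features (edge count, colour statistics, object size) and feed them to a classifier. A compact sketch of that pattern with scikit-learn (synthetic feature vectors; the actual features, thresholds and SVM configuration in the paper may differ):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# toy feature vectors: [edge_count, mean_hue, object_area]; label 1 = focus point on cells
cells = np.column_stack([rng.normal(400, 60, 500), rng.normal(0.6, 0.05, 500),
                         rng.normal(900, 150, 500)])
artifacts = np.column_stack([rng.normal(120, 60, 500), rng.normal(0.3, 0.1, 500),
                             rng.normal(2500, 800, 500)])
X = np.vstack([cells, artifacts])
y = np.r_[np.ones(500), np.zeros(500)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```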
Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue
2017-01-01
With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. PMID:28672813
Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue
2017-06-24
With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.
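The word-length trade-off behind the partial fixed-point scheme can be explored by quantizing the FFT input to a given number of fractional bits and comparing against a double-precision reference. A simplified sketch (NumPy; it quantizes only the input samples, so it understates the error propagation that the paper models through the whole FFT):

```python
import numpy as np

def to_fixed_point(x, frac_bits):
    """Round to a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(0)
n = 16384
chirp = np.exp(1j * np.pi * 1e-4 * np.arange(n) ** 2)      # toy SAR-like chirp signal
chirp += 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))

ref = np.fft.fft(chirp)                                     # floating-point reference
for bits in (8, 12, 16):
    q = to_fixed_point(chirp.real, bits) + 1j * to_fixed_point(chirp.imag, bits)
    err = np.fft.fft(q) - ref
    snr_db = 10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(err) ** 2))
    print(f"{bits:2d} fractional bits: quantization SNR ~ {snr_db:.1f} dB")
```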
Optical aberration correction for simple lenses via sparse representation
NASA Astrophysics Data System (ADS)
Cui, Jinlin; Huang, Wei
2018-04-01
Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and can be easily processed. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions calibrated at different depths are successfully used to restore visual images in a short time; this can be applied generally to non-blind deconvolution methods to address the excessive processing time caused by the large number of point spread functions. The optical design software CODE V is used to examine the reliability of the proposed method in simulation. The simulation results reveal that the suggested method outperforms the traditional methods. Moreover, the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained with CODE V can be used to process real images from a single-lens camera, which provides an alternative approach to conveniently and accurately obtain point spread functions of single-lens cameras.
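A minimal non-blind deconvolution sketch in the spirit of restoring an image with a pre-calibrated, depth-dependent PSF (NumPy Wiener filtering rather than the sparse-representation prior used in the paper; the PSF and regularization constant are placeholders):

```python
import numpy as np
from scipy.signal import fftconvolve

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener deconvolution with a scalar noise-to-signal ratio k."""
    psf_pad = np.zeros_like(blurred)
    h, w = psf.shape
    psf_pad[:h, :w] = psf
    psf_pad = np.roll(psf_pad, (-(h // 2), -(w // 2)), axis=(0, 1))   # centre kernel at (0, 0)
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

# toy usage: blur a synthetic scene with the PSF calibrated for its (assumed known) depth
rng = np.random.default_rng(0)
sharp = np.zeros((128, 128)); sharp[40:90, 50:70] = 1.0
yy, xx = np.mgrid[-4:5, -4:5]
psf = np.exp(-(xx ** 2 + yy ** 2) / 6.0); psf /= psf.sum()
blurred = fftconvolve(sharp, psf, mode="same") + rng.normal(0, 1e-3, sharp.shape)
restored = wiener_deconvolve(blurred, psf, k=1e-3)
print("error blurred :", np.abs(blurred - sharp).mean())
print("error restored:", np.abs(restored - sharp).mean())
```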
A single scan skeletonization algorithm: application to medical imaging of trabecular bone
NASA Astrophysics Data System (ADS)
Arlicot, Aurore; Amouriq, Yves; Evenou, Pierre; Normand, Nicolas; Guédon, Jean-Pierre
2010-03-01
Shape description is an important step in image analysis. The skeleton is used as a simple, compact representation of a shape. A skeleton represents the line centered in the shape and must be homotopic and one point wide. Current skeletonization algorithms compute the skeleton over several image scans, using either thinning algorithms or distance transforms. The principle of thinning is to delete points as one goes along, preserving the topology of the shape. On the other hand, the maxima of the local distance transform identify the skeleton and are an equivalent way to calculate the medial axis. However, with this method, the skeleton obtained is disconnected, so the points of the medial axis must be connected to produce the skeleton. In this study we introduce a translated distance transform and adapt an existing distance-driven homotopic algorithm to perform skeletonization with a single scan, thus allowing the processing of unbounded images. This method is applied, in our study, to micro-scanner images of trabecular bone. We wish to characterize the bone microarchitecture in order to quantify bone integrity.
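For context, the distance-transform route to a skeleton can be sketched in a few lines with SciPy/scikit-image; the paper's actual contribution, a translated distance transform that yields the skeleton in a single scan of an unbounded image, is not reproduced here:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import medial_axis

# toy binary shape standing in for a trabecular-bone cross-section
shape = np.zeros((80, 80), dtype=bool)
shape[20:60, 10:70] = True
shape[35:45, 25:55] = False                       # a hole, as in porous bone

dist = ndimage.distance_transform_edt(shape)      # local distance transform
skeleton, dist_on_skel = medial_axis(shape, return_distance=True)

print("max distance to background:", dist.max())
print("skeleton pixels:", int(skeleton.sum()))
```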
Circular motion geometry using minimal data.
Jiang, Guang; Quan, Long; Tsui, Hung-Tat
2004-06-01
Circular motion or single axis motion is widely used in computer vision and graphics for 3D model acquisition. This paper describes a new and simple method for recovering the geometry of uncalibrated circular motion from a minimal set of only two points in four images. This problem has been previously solved using nonminimal data, either by computing the fundamental matrix and trifocal tensor in three images or by fitting conics to tracked points in five or more images. It is first established that two sets of tracked points in different images under circular motion for two distinct space points are related by a homography. Then, we compute a plane homography from a minimal two points in four images. After that, we show that the unique pair of complex conjugate eigenvectors of this homography are the images of the circular points of the parallel planes of the circular motion. Subsequently, all other motion and structure parameters are computed from this homography in a straightforward manner. The experiments on real image sequences demonstrate the simplicity, accuracy, and robustness of the new method.
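The key step, reading off the imaged circular points as the complex-conjugate eigenvectors of the recovered plane homography, is a one-line eigen-decomposition. A small sketch (NumPy, with an arbitrary illustrative homography of the right form):

```python
import numpy as np

# a plane homography estimated from two tracked points in four images (illustrative values:
# a planar rotation about an off-image centre)
H = np.array([[0.96, -0.28, 15.0],
              [0.28,  0.96, -7.0],
              [0.00,  0.00,  1.0]])

vals, vecs = np.linalg.eig(H)
complex_idx = np.where(np.abs(vals.imag) > 1e-9)[0]
circular_points = vecs[:, complex_idx].T          # images of the circular points I, J
real_idx = np.where(np.abs(vals.imag) <= 1e-9)[0]

print("imaged circular points (homogeneous):\n", circular_points)
print("real eigenvector (fixed point of the motion):", vecs[:, real_idx[0]].real)
```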
Toward a Global Bundle Adjustment of SPOT 5 - HRS Images
NASA Astrophysics Data System (ADS)
Massera, S.; Favé, P.; Gachet, R.; Orsoni, A.
2012-07-01
The HRS (High Resolution Stereoscopic) instrument carried on SPOT 5 enables quasi-simultaneous acquisition of stereoscopic images on wide segments - 120 km wide - with forward- and backward-looking telescopes observing the Earth at an angle of 20° ahead of and behind the vertical. For 8 years IGN (Institut Géographique National) has been developing techniques to achieve spatiotriangulation of these images. During this time the capacity for bundle adjustment of SPOT 5 HRS satellite images has largely improved. Today a global single block composed of about 20,000 images can be computed in reasonable calculation time. The progression was achieved step by step: the first computed blocks were composed of only 40 images, then bigger blocks were computed, and finally a single global block is now computed. At the same time, the calculation tools have improved: for example, the adjustment of 2,000 images of North Africa takes about 2 minutes, whereas 8 hours were needed two years ago. To reach such a result, a new independent software package was developed to compute fast and efficient bundle adjustments. Likewise, the equipment - GCPs (Ground Control Points) and tie points - and the techniques have also evolved over the last 10 years. Studies were made to obtain recommendations about the equipment needed to make an accurate single block. Tie points can now be quickly and automatically computed with SURF (Speeded Up Robust Features) techniques. Today the updated equipment is composed of about 500 GCPs, and studies show that the ideal configuration is around 100 tie points per square degree. With such equipment, the location of the global HRS block becomes accurate to a few meters, whereas non-adjusted images are only 15 m accurate. This paper will describe the methods used in IGN Espace to compute a global single block composed of almost 20,000 HRS images, 500 GCPs and several million tie points in reasonable calculation time. Such a block offers many advantages. Because the global block is unique, it becomes easier to manage the history and the different evolutions of the computations (new images, new GCPs or tie points). The location is now unique and consequently coherent all around the world, avoiding steps and artifacts on the borders of DSMs (Digital Surface Models) and OrthoImages historically calculated from different blocks. No extrapolation far from GCPs at the limits of images is needed anymore. Using the global block as a reference will allow new images from other sources to be easily located on this reference.
SPIRAL-SPRITE: a rapid single point MRI technique for application to porous media.
Szomolanyi, P; Goodyear, D; Balcom, B; Matheson, D
2001-01-01
This study presents the application of a new, rapid, single point MRI technique which samples k space with spiral trajectories. The general principles of the technique are outlined along with application to porous concrete samples, solid pharmaceutical tablets and gas phase imaging. Each sample was chosen to highlight specific features of the method.
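A spiral SPI readout can be pictured as interleaved Archimedean spirals through k-space. A brief illustrative sketch of generating such a trajectory (NumPy; the numbers of interleaves, turns and samples are arbitrary and not the acquisition parameters of the study):

```python
import numpy as np

def spiral_trajectory(n_points=2000, n_turns=16, k_max=1.0, n_interleaves=4):
    """Return (kx, ky) samples for interleaved Archimedean spirals covering k-space."""
    kx, ky = [], []
    t = np.linspace(0.0, 1.0, n_points)
    for i in range(n_interleaves):
        phase = 2.0 * np.pi * i / n_interleaves
        theta = 2.0 * np.pi * n_turns * t + phase
        r = k_max * t                        # radius grows linearly: Archimedean spiral
        kx.append(r * np.cos(theta))
        ky.append(r * np.sin(theta))
    return np.concatenate(kx), np.concatenate(ky)

kx, ky = spiral_trajectory()
print(kx.shape, float(np.hypot(kx, ky).max()))
```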
Landsat TM image maps of the Shirase and Siple Coast ice streams, West Antarctica
Ferrigno, Jane G.; Mullins, Jerry L.; Stapleton, Jo Anne; Bindschadler, Robert; Scambos, Ted A.; Bellisime, Lynda B.; Bowell, Jo-Ann; Acosta, Alex V.
1994-01-01
Fifteen 1:250,000-scale and one 1:1,000,000-scale Landsat Thematic Mapper (TM) image mosaic maps are currently being produced of the West Antarctic ice streams on the Shirase and Siple Coasts. Landsat TM images were acquired between 1984 and 1990 in an area bounded approximately by 78°-82.5° S and 120°-160° W. Landsat TM bands 2, 3 and 4 were combined to produce a single band, thereby maximizing data content and improving the signal-to-noise ratio. The summed single band was processed with a combination of high- and low-pass filters to remove longitudinal striping and normalize solar elevation-angle effects. The images were mosaicked and transformed to a Lambert conformal conic projection using a cubic-convolution algorithm. The projection transformation was controlled with ten weighted geodetic ground-control points and internal image-to-image pass points, with annotation of major glaciological features. The image maps are being published in two formats: conventional printed map sheets and on a CD-ROM.
Stereo multiplexed holographic particle image velocimeter
Adrian, Ronald J.; Barnhart, Donald H.; Papen, George A.
1996-01-01
A holographic particle image velocimeter employs stereoscopic recording of particle images, taken from two different perspectives and at two distinct points in time for each perspective, on a single holographic film plate. The different perspectives are provided by two optical assemblies, each including a collecting lens, a prism and a focusing lens. Collimated laser energy is pulsed through a fluid stream, with elements carried in the stream scattering light, some of which is collected by each collecting lens. The respective focusing lenses are configured to form images of the scattered light near the holographic plate. The particle images stored on the plate are reconstructed using the same optical assemblies employed in recording, by transferring the film plate and optical assemblies as a single integral unit to a reconstruction site. At the reconstruction site, reconstruction beams, phase conjugates of the reference beams used in recording the image, are directed to the plate, then selectively through either one of the optical assemblies, to form an image reflecting the chosen perspective at the two points in time.
Quantitative assessment of dynamic PET imaging data in cancer imaging.
Muzi, Mark; O'Sullivan, Finbarr; Mankoff, David A; Doot, Robert K; Pierce, Larry A; Kurland, Brenda F; Linden, Hannah M; Kinahan, Paul E
2012-11-01
Clinical imaging in positron emission tomography (PET) is often performed using single-time-point estimates of tracer uptake or static imaging that provides a spatial map of regional tracer concentration. However, dynamic tracer imaging can provide considerably more information about in vivo biology by delineating both the temporal and spatial pattern of tracer uptake. In addition, several potential sources of error that occur in static imaging can be mitigated. This review focuses on the application of dynamic PET imaging to measuring regional cancer biologic features and especially in using dynamic PET imaging for quantitative therapeutic response monitoring for cancer clinical trials. Dynamic PET imaging output parameters, particularly transport (flow) and overall metabolic rate, have provided imaging end points for clinical trials at single-center institutions for years. However, dynamic imaging poses many challenges for multicenter clinical trial implementations from cross-center calibration to the inadequacy of a common informatics infrastructure. Underlying principles and methodology of PET dynamic imaging are first reviewed, followed by an examination of current approaches to dynamic PET image analysis with a specific case example of dynamic fluorothymidine imaging to illustrate the approach. Copyright © 2012 Elsevier Inc. All rights reserved.
Beyea, S D; Balcom, B J; Bremner, T W; Prado, P J; Cross, A R; Armstrong, R L; Grattan-Bellew, P E
1998-11-01
The removal of water from pores in hardened cement paste smaller than 50 nm results in cracking of the cement matrix due to the tensile stresses induced by drying shrinkage. Cracks in the matrix fundamentally alter the permeability of the material, and therefore directly affect the drying behaviour. Using Single-Point Imaging (SPI), we obtain one-dimensional moisture profiles of hydrated White Portland cement cylinders as a function of drying time. The drying behaviour of White Portland cement is distinctly different from the drying behaviour of related concrete materials containing aggregates.
Scanning fluorescent microthermal imaging apparatus and method
Barton, Daniel L.; Tangyunyong, Paiboon
1998-01-01
A scanning fluorescent microthermal imaging (FMI) apparatus and method is disclosed, useful for integrated circuit (IC) failure analysis, that uses a scanned and focused beam from a laser to excite a thin fluorescent film disposed over the surface of the IC. By collecting fluorescent radiation from the film, and performing point-by-point data collection with a single-point photodetector, a thermal map of the IC is formed to measure any localized heating associated with defects in the IC.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
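The spot isolation and disparity measurement described above reduce to background subtraction, thresholding, a weighted centroid, and an offset from the chosen reference point. A simplified sketch (NumPy; the threshold and reference location are illustrative):

```python
import numpy as np

def laser_disparity(frame_off, frame_on, reference_xy, threshold=30):
    """Isolate the laser spot by differencing and return its centroid offset (disparity)."""
    diff = frame_on.astype(np.int32) - frame_off.astype(np.int32)
    mask = diff > threshold                          # keep only pixels brightened by the laser
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    weights = diff[ys, xs].astype(np.float64)
    cx = np.average(xs, weights=weights)
    cy = np.average(ys, weights=weights)
    return cx - reference_xy[0], cy - reference_xy[1]

# toy frames: the "laser-on" frame adds a bright spot near (row 120, col 200)
rng = np.random.default_rng(0)
off = rng.integers(0, 40, (240, 320)).astype(np.uint8)
on = off.copy(); on[118:123, 198:203] = 255
print(laser_disparity(off, on, reference_xy=(160, 120)))   # feeds the ranging analysis
```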
Estimating IMU heading error from SAR images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin Walter
Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.
Single-shot spiral imaging at 7 T.
Engel, Maria; Kasper, Lars; Barmet, Christoph; Schmid, Thomas; Vionnet, Laetitia; Wilm, Bertram; Pruessmann, Klaas P
2018-03-25
The purpose of this work is to explore the feasibility and performance of single-shot spiral MRI at 7 T, using an expanded signal model for reconstruction. Gradient-echo brain imaging is performed on a 7 T system using high-resolution single-shot spiral readouts and half-shot spirals that perform dual-image acquisition after a single excitation. Image reconstruction is based on an expanded signal model including the encoding effects of coil sensitivity, static off-resonance, and magnetic field dynamics. The latter are recorded concurrently with image acquisition, using NMR field probes. The resulting image resolution is assessed by point spread function analysis. Single-shot spiral imaging is achieved at a nominal resolution of 0.8 mm, using spiral-out readouts of 53-ms duration. High depiction fidelity is achieved without conspicuous blurring or distortion. Effective resolutions are assessed as 0.8, 0.94, and 0.98 mm in CSF, gray matter and white matter, respectively. High image quality is also achieved with half-shot acquisition yielding image pairs at 1.5-mm resolution. Use of an expanded signal model enables single-shot spiral imaging at 7 T with unprecedented image quality. Single-shot and half-shot spiral readouts deploy the sensitivity benefit of high field for rapid high-resolution imaging, particularly for functional MRI and arterial spin labeling. © 2018 International Society for Magnetic Resonance in Medicine.
Calibration of a polarimetric imaging SAR
NASA Technical Reports Server (NTRS)
Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.
1991-01-01
Calibration of polarimetric imaging Synthetic Aperture Radars (SARs) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.
St-Arnaud, Karl; Aubertin, Kelly; Strupler, Mathias; Madore, Wendy-Julie; Grosset, Andrée-Anne; Petrecca, Kevin; Trudel, Dominique; Leblond, Frédéric
2018-01-01
Raman spectroscopy is a promising cancer detection technique for surgical guidance applications. It can provide quantitative information relating to global tissue properties associated with structural, metabolic, immunological, and genetic biochemical phenomena in terms of molecular species including amino acids, lipids, proteins, and nucleic acid (DNA). To date, in vivo Raman spectroscopy systems have mostly included probes and biopsy needles typically limited to single-point tissue interrogation over a scale between 100 and 500 microns. The development of wider-field handheld systems could improve tumor localization for a range of open surgery applications including brain, ovarian, and skin cancers. Here we present a novel Raman spectroscopy implementation using a coherent imaging bundle of fibers to create a probe capable of reconstructing molecular images over mesoscopic fields of view. Detection is performed using linear scanning with a rotation mirror and an imaging spectrometer. Different slit widths were tested at the entrance of the spectrometer to optimize spatial and spectral resolution while preserving sufficient signal-to-noise ratios to detect the principal Raman tissue features. The nonbiological samples, calcite and polytetrafluoroethylene (PTFE), were used to characterize the performance of the system. The new wide-field probe was tested on ex vivo samples of calf brain and swine tissue. The Raman spectral content of both tissue types was validated with data from the literature and compared with data acquired with a single-point Raman spectroscopy probe. The single-point probe was used as the gold standard against which the new instrument was benchmarked, as it has already been thoroughly validated for biological tissue characterization. We have developed and characterized a practical noncontact handheld Raman imager providing tissue information at a spatial resolution of 115 microns over a field of view >14 mm² and a spectral resolution of 6 cm⁻¹ over the whole fingerprint region. Typical integration time to acquire an entire Raman image over swine tissue was set to approximately 100 s. Spectra acquired with both probes (single-point and wide-field) showed good agreement, with a Pearson correlation factor >0.85 over different tissue categories. Protein and lipid content of the imaged tissue was manifested in the measured spectra, which correlated well with previous findings in the literature. An example of a quantitative molecular map is presented for swine tissue and calf brain based on the ratio of protein-to-lipid content, showing clear delineations between white and gray matter as well as between adipose and muscle tissue. We presented the development of a Raman imaging probe with a field of view of a few millimeters and a spatial resolution consistent with standard surgical imaging methods using an imaging bundle. Spectra acquired with the newly developed system on swine tissue and calf brain correlated well with an established single-point probe, and the observed spectral features agreed with previous findings in the literature. The imaging probe has demonstrated its ability to reconstruct molecular images of soft tissues. The approach presented here has a lot of potential for the development of a surgical Raman imaging probe to guide the surgeon during cancer surgery. © 2017 American Association of Physicists in Medicine.
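The protein-to-lipid map and the probe-to-probe agreement above can be illustrated with a short sketch (NumPy/SciPy on a placeholder hyperspectral cube; the band windows standing in for a "protein" band and a "lipid" band are illustrative and not necessarily those used in the study):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
wn = np.linspace(400, 1800, 700)                 # wavenumber axis (cm^-1), fingerprint region
cube = rng.random((64, 64, wn.size))             # placeholder hyperspectral Raman cube

def band_area(cube, wn, lo, hi):
    sel = (wn >= lo) & (wn <= hi)
    return cube[..., sel].sum(axis=-1)

# illustrative band windows standing in for a protein band and a lipid band
protein = band_area(cube, wn, 990, 1015)
lipid = band_area(cube, wn, 1430, 1470)
ratio_map = protein / np.maximum(lipid, 1e-9)    # molecular map, e.g. white vs gray matter

# agreement between the wide-field probe and the single-point probe at one location
spectrum_widefield = cube[32, 32]
spectrum_pointprobe = spectrum_widefield + rng.normal(0, 0.05, wn.size)
r, _ = pearsonr(spectrum_widefield, spectrum_pointprobe)
print("ratio map shape:", ratio_map.shape, " Pearson r:", round(r, 3))
```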
NASA Astrophysics Data System (ADS)
Zhou, Anran; Xie, Weixin; Pei, Jihong; Chen, Yapei
2018-02-01
For ship target detection in cluttered infrared image sequences, a robust detection method based on a probabilistic single Gaussian model of the sea background in the Fourier domain is put forward. The amplitude spectrum sequences at each frequency point of the pure-seawater images in the Fourier domain, being more stable than the gray value sequences of each background pixel in the spatial domain, are modelled as Gaussian. Next, a probability weighted matrix is built based on the stability of the pure seawater's total energy spectrum in the row direction, to make the Gaussian model more accurate. Then, the foreground frequency points are separated from the background frequency points by the model. Finally, the false-alarm points are removed utilizing the ships' shape features. The performance of the proposed method is tested by visual and quantitative comparisons with other methods.
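A condensed sketch of the per-frequency Gaussian background model (NumPy; the probability-weighting matrix and the shape-based false-alarm removal are omitted, and the threshold is illustrative):

```python
import numpy as np

def detect_foreground_fft(frames, test_frame, k=3.0):
    """Model each frequency point of the amplitude spectra of pure-sea frames as Gaussian,
    then flag frequency points of a new frame that deviate by more than k sigma."""
    spectra = np.abs(np.fft.fft2(frames, axes=(-2, -1)))        # (n_frames, H, W)
    mu = spectra.mean(axis=0)
    sigma = spectra.std(axis=0) + 1e-9
    test = np.abs(np.fft.fft2(test_frame))
    return np.abs(test - mu) / sigma > k                        # foreground frequency points

rng = np.random.default_rng(0)
sea = rng.normal(100, 5, (50, 128, 128))                        # pure-seawater training frames
frame = rng.normal(100, 5, (128, 128))
frame[60:68, 60:80] += 40                                       # a ship-like warm target
mask = detect_foreground_fft(sea, frame)
print("flagged frequency points:", int(mask.sum()))
```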
Smartphone Microscopy of Parasite Eggs Accumulated into a Single Field of View
Sowerby, Stephen J.; Crump, John A.; Johnstone, Maree C.; Krause, Kurt L.; Hill, Philip C.
2016-01-01
A Nokia Lumia 1020 cellular phone (Microsoft Corp., Auckland, New Zealand) was configured to image the ova of Ascaris lumbricoides converged into a single field of view but on different focal planes. The phone was programmed to acquire images at different distances and, using public domain computer software, composite images were created that brought all the eggs into sharp focus. This proof of concept informs a framework for field-deployable, point of care monitoring of soil-transmitted helminths. PMID:26572870
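The compositing step is a standard focus-stacking operation: per pixel, keep the acquisition whose local sharpness is highest. A small sketch (NumPy/SciPy; the Laplacian-energy sharpness criterion is an assumption, since the abstract does not name the method used by the public-domain software):

```python
import numpy as np
from scipy import ndimage

def focus_stack(stack):
    """stack: (n_focus_planes, H, W) grayscale images taken at different focus distances."""
    # local sharpness: absolute Laplacian response, smoothed to suppress noise
    sharpness = np.array([ndimage.gaussian_filter(np.abs(ndimage.laplace(im.astype(float))), 3)
                          for im in stack])
    best = np.argmax(sharpness, axis=0)            # per-pixel index of the sharpest plane
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

rng = np.random.default_rng(0)
stack = rng.integers(0, 255, (5, 120, 160)).astype(np.uint8)   # placeholder focal series
allfocus = focus_stack(stack)
print(allfocus.shape, allfocus.dtype)
```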
Concrete/mortar water phase transition studied by single-point MRI methods.
Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W; Grattan-Bellew, P E
1998-01-01
A series of magnetic resonance imaging (MRI) water density and T2* profiles in hardened concrete and mortar samples has been obtained during freezing conditions (-50 degrees C < T < 11 degrees C). The single-point ramped imaging with T1 enhancement (SPRITE) sequence is optimal for this study given the characteristic short relaxation times of water in this porous medium (T2* < 200 microseconds and T1 < 3.6 ms). The frozen and evaporable water distribution was quantified through a position-based study of the profile magnitude. Submillimetric resolution of proton-density and T2*-relaxation parameters as a function of temperature has been achieved.
Single Point vs. Mapping Approach for Spectral Cytopathology (SCP)
Schubert, Jennifer M.; Mazur, Antonella I.; Bird, Benjamin; Miljković, Miloš; Diem, Max
2011-01-01
In this paper we describe the advantages of collecting infrared microspectral data in imaging mode as opposed to point mode. Imaging data are processed using the PapMap algorithm, which co-adds pixel spectra that have been scrutinized for R-Mie scattering effects as well as other constraints. The signal-to-noise quality of PapMap spectra will be compared to point spectra for oral mucosa cells deposited onto low-e slides. Also the effects of software atmospheric correction will be discussed. Combined with the PapMap algorithm, data collection in imaging mode proves to be a superior method for spectral cytopathology. PMID:20449833
Predict Brain MR Image Registration via Sparse Learning of Appearance and Transformation
Wang, Qian; Kim, Minjeong; Shi, Yonghong; Wu, Guorong; Shen, Dinggang
2014-01-01
We propose a new approach to register the subject image with the template by leveraging a set of intermediate images that are pre-aligned to the template. We argue that, if points in the subject and the intermediate images share similar local appearances, they may have a common correspondence in the template. In this way, we learn the sparse representation of a given subject point to reveal several similar candidate points in the intermediate images. Each selected intermediate candidate can bridge the correspondence from the subject point to the template space, thus predicting the transformation associated with the subject point at a confidence level that relates to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points and retain multiple predictions at each key point, instead of allowing only a single correspondence. Then, by utilizing all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. We further embed the prediction-reconstruction protocol above into a multi-resolution hierarchy. Finally, we refine the estimated transformation field with an existing registration method in an efficient manner. We apply our method to registering brain MR images, and conclude that the proposed framework substantially improves registration performance. PMID:25476412
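The prediction step can be pictured as a small sparse-coding problem per key point. The sketch below, assuming patches and displacement vectors stored as NumPy arrays and scikit-learn's Lasso as the sparse solver, is illustrative only; the regularization weight `alpha` and the non-negativity constraint are assumptions rather than the authors' exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def predict_displacement(subject_patch, candidate_patches, candidate_displacements, alpha=0.1):
    # Express the subject patch as a sparse, non-negative combination of
    # candidate patches taken from the pre-aligned intermediate images; the
    # sparse coefficients then act as confidences for fusing the displacement
    # vectors attached to those candidates.
    D = np.column_stack([p.ravel() for p in candidate_patches])   # dictionary
    y = subject_patch.ravel()
    coder = Lasso(alpha=alpha, positive=True, max_iter=5000).fit(D, y)
    w = coder.coef_
    if w.sum() == 0:
        return None                     # no reliable prediction at this key point
    w = w / w.sum()
    return (w[:, None] * np.stack(candidate_displacements)).sum(axis=0)
```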
Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2012-10-01
In this paper we study the image resolution that can be obtained with the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection that is necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for the point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation-maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple-coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.
Inspection with Robotic Microscopic Imaging
NASA Technical Reports Server (NTRS)
Pedersen, Liam; Deans, Matthew; Kunz, Clay; Sargent, Randy; Chen, Alan; Mungas, Greg
2005-01-01
Future Mars rover missions will require more advanced onboard autonomy for increased scientific productivity and reduced mission operations cost. One such form of autonomy can be achieved by targeting precise science measurements to be made in a single command uplink cycle. In this paper we present an overview of our solution to the subproblems of navigating a rover into place for microscopic imaging, mapping an instrument target point selected by an operator using far away science camera images to close up hazard camera images, verifying the safety of placing a contact instrument on a sample or finding nearby safe points, and analyzing the data that comes back from the rover. The system developed includes portions used in the Multiple Target Single Cycle Instrument Placement demonstration at NASA Ames in October 2004, and portions of the MI Toolkit delivered to the Athena Microscopic Imager Instrument Team for the MER mission still operating on Mars today. Some of the component technologies are also under consideration for MSL mission infusion.
Doblas, Ana; Sánchez-Ortiga, Emilio; Martínez-Corral, Manuel; Saavedra, Genaro; Garcia-Sucerquia, Jorge
2014-04-01
The advantages of using a telecentric imaging system in digital holographic microscopy (DHM) to study biological specimens are highlighted. To this end, the performance of nontelecentric DHM and telecentric DHM is evaluated from the quantitative phase imaging (QPI) point of view. The evaluated stability of the microscope allows single-shot QPI in DHM by using telecentric imaging systems. Quantitative phase maps of a section of the head of the Drosophila melanogaster fly and of red blood cells are obtained via single-shot DHM with no numerical postprocessing. With these maps we show that the use of telecentric DHM provides a larger field of view for a given magnification and permits more accurate QPI measurements with fewer computational operations.
Khare, Rahul; Sala, Guillaume; Kinahan, Paul; Esposito, Giuseppe; Banovac, Filip; Cleary, Kevin; Enquobahrie, Andinet
2013-01-01
Positron emission tomography-computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to problems caused by respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and attenuation correction errors. To address these problems, we present a method for motion correction that relies on respiratory-gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are the BSpline and the symmetric forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point. These deformations are then composed to find the deformation with respect to a reference time point. We evaluate these two algorithms and optimization approaches using respiratory-gated CT images obtained from 7 patients. Our results show that, overall, the BSpline registration algorithm with the reference optimization approach gives the best results.
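The second optimization approach chains per-phase deformations to reach the reference phase. The sketch below shows how two displacement fields can be composed; it assumes 2D fields stored as (2, H, W) NumPy arrays in voxel units and is a generic illustration, not the registration toolkit the authors used.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_displacements(d_ab, d_bc):
    # Compose two displacement fields so that the result maps a point x in
    # phase A to phase C: d_ac(x) = d_ab(x) + d_bc(x + d_ab(x)).
    grid = np.indices(d_ab.shape[1:]).astype(float)        # identity grid (2, H, W)
    warped = grid + d_ab                                    # x + d_ab(x)
    d_bc_at_warped = np.stack([
        map_coordinates(d_bc[i], warped, order=1, mode='nearest')
        for i in range(2)
    ])
    return d_ab + d_bc_at_warped
```

Chaining this composition across all adjacent phases yields the deformation of each phase with respect to the chosen reference.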
Lee, Dong-Hoon; Lee, Do-Wan; Han, Bong-Soo
2016-01-01
The purpose of this study is to apply the scale-invariant feature transform (SIFT) algorithm to stitch cervical-thoracic-lumbar (C-T-L) spine magnetic resonance (MR) images and provide a view of the entire spine in a single image. All MR images were acquired with a fast spin echo (FSE) pulse sequence using two MR scanners (1.5 T and 3.0 T). The stitching procedures for each part of the spine MR image were performed and implemented on a graphical user interface (GUI). The stitching process is performed in two modes: manual point-to-point (mPTP) selection, in which the user specifies corresponding matching points, and automated point-to-point (aPTP) selection, performed by the SIFT algorithm. The images stitched using the SIFT algorithm were well registered, and the quantitatively acquired values showed small errors compared with the stitching algorithms commercially installed on MRI systems. Our study presents a preliminary validation of applying the SIFT algorithm to spine MR images, and the results indicate that the proposed approach performs well and can improve diagnosis. We believe that our approach can be helpful in clinical applications and can be extended to image stitching in other medical imaging modalities. PMID:27064404
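For readers unfamiliar with the aPTP idea, the sketch below shows a generic SIFT-plus-homography stitch of two overlapping spine sections using OpenCV. It assumes 8-bit grayscale inputs of equal width; the ratio-test threshold and the simple paste of the upper image over the overlap are illustrative choices, not the authors' implementation.

```python
import cv2
import numpy as np

def stitch_pair(img_top, img_bottom, ratio=0.75):
    # Detect SIFT keypoints in both sections, match them, estimate a
    # homography with RANSAC, and warp the lower section into the frame
    # of the upper one.
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_top, None)
    k2, d2 = sift.detectAndCompute(img_bottom, None)
    matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]  # Lowe's ratio test
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    height = img_top.shape[0] + img_bottom.shape[0]
    canvas = cv2.warpPerspective(img_bottom, H, (img_top.shape[1], height))
    canvas[:img_top.shape[0], :] = img_top   # overwrite the overlap with the top image
    return canvas
```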
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using the single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. The conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains the various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain high classification accuracy in a large and heterogeneous crop area using the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
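As a point of reference, the point-by-point Gaussian maximum-likelihood rule mentioned above can be written compactly. The sketch below assumes training samples grouped per class as (n_pixels, n_bands) NumPy arrays; it omits class priors and any regularization of the covariance estimates, which are simplifying assumptions.

```python
import numpy as np

def fit_gaussian_classes(training):
    # training: dict mapping class name -> (n_pixels, n_bands) spectral samples.
    # Returns the per-class mean vector and covariance matrix.
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in training.items()}

def classify_pixel(x, stats):
    # Point-by-point Gaussian maximum-likelihood decision rule (equal priors).
    best, best_ll = None, -np.inf
    for c, (mu, cov) in stats.items():
        d = x - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
        if ll > best_ll:
            best, best_ll = c, ll
    return best
```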
Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2010-02-01
We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.
Method for measuring thermal properties using a long-wavelength infrared thermal image
Walker, Charles L [Albuquerque, NM; Costin, Laurence S [Albuquerque, NM; Smith, Jody L [Albuquerque, NM; Moya, Mary M [Albuquerque, NM; Mercier, Jeffrey A [Albuquerque, NM
2007-01-30
A method for estimating the thermal properties of surface materials using long-wavelength thermal imagery by exploiting the differential heating histories of ground points in the vicinity of shadows. The use of differential heating histories of different ground points of the same surface material allows the use of a single image acquisition step to provide the necessary variation in measured parameters for calculation of the thermal properties of surface materials.
Yuan, Tiezhu; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang
2017-01-01
Radar imaging based on an electromagnetic vortex can achieve azimuth resolution without relative motion. The present paper investigates this imaging technique with a single receiving antenna through theoretical analysis and experiments. Compared with the use of multiple receiving antennas, the echoes from a single receiver cannot be used directly for image reconstruction with the Fourier method. The reason is revealed by using the point spread function. An additional phase is compensated for each mode before the imaging process, based on the array parameters and the elevation of the targets. A proof-of-concept imaging system based on a circular phased array is created, and imaging experiments on corner-reflector targets are performed in an anechoic chamber. The azimuthal image is reconstructed by means of the Fourier transform and spectral estimation methods. The azimuth resolution of the two methods is analyzed and compared using the experimental data. The experimental results verify the principle of azimuth resolution and the proposed phase compensation method. PMID:28335487
Evaluation of a high framerate multi-exposure laser speckle contrast imaging setup
NASA Astrophysics Data System (ADS)
Hultman, Martin; Fredriksson, Ingemar; Strömberg, Tomas; Larsson, Marcus
2018-02-01
We present a first evaluation of a new multi-exposure laser speckle contrast imaging (MELSCI) system for assessing spatial variations in microcirculatory perfusion. The MELSCI system is based on a 1000 frames per second, 1-megapixel camera connected to a field-programmable gate array (FPGA) capable of producing MELSCI data in real time. The imaging system is evaluated against a single-point laser Doppler flowmetry (LDF) system during occlusion-release provocations of the arm in five subjects. Perfusion is calculated from MELSCI data using current state-of-the-art inverse models. The analysis showed good agreement between measured and modeled data, with an average error below 6%. This strongly indicates that the applied model is capable of accurately describing the MELSCI data and that the acquired data are of high quality. Comparing readings from the occlusion-release provocation showed that the MELSCI perfusion was significantly correlated (R=0.83) with the single-point LDF perfusion, clearly outperforming perfusion estimates based on a single exposure time. We conclude that the MELSCI system provides blood flow images of enhanced quality, taking us one step closer to a system that can accurately monitor dynamic changes in skin perfusion over a large area in real time.
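The core MELSCI quantity is the local speckle contrast K = sigma/mean evaluated for several synthetic exposure times. The sketch below is a simplified illustration assuming 1 ms frames from the 1000 fps camera stored as NumPy arrays; the window size, the exposure set, and the summation of consecutive frames are assumptions, and the inverse model that converts contrast to perfusion is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img, window=7):
    # Spatial speckle contrast K = sigma / mean in a sliding window.
    img = img.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)
    return np.sqrt(var) / (mean + 1e-12)

def multi_exposure_contrast(frames_1ms, exposures=(1, 2, 4, 8, 16)):
    # Synthesize longer exposures by summing consecutive 1 ms frames and
    # compute one contrast image per exposure time.
    return {t: speckle_contrast(np.sum(frames_1ms[:t], axis=0)) for t in exposures}
```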
Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging
NASA Astrophysics Data System (ADS)
Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi
2016-10-01
In this paper, we compare single-pixel imaging using the Hadamard transform (HT) with ghost imaging (GI) from the viewpoint of visibility under weak-light conditions. To compare the two methods, we discuss image quality on the basis of experimental results and numerical analysis. In the HT method, images are detected by illuminating Hadamard-pattern masks and applying the orthogonal transform; the GI method, in contrast, detects images by illuminating random patterns and performing a correlation measurement. To compare the two methods under weak light, we controlled the illumination intensity of a DMD projector to a signal-to-noise ratio of about 0.1. Although the processing speed of the HT method was faster than that of GI, the GI method has an advantage for detection under weak-light conditions. The essential difference between the HT and GI methods is discussed in terms of the reconstruction process. Finally, we also show a typical application of single-pixel imaging, namely hyperspectral imaging using dual optical frequency combs. The optical setup consists of two fiber lasers, a spatial light modulator for generating pattern illumination, and a single-pixel detector. We successfully detected hyperspectral images in the range from 1545 to 1555 nm at 0.01 nm resolution.
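To make the HT reconstruction concrete, the sketch below simulates differential single-pixel measurements with a full set of Hadamard patterns and inverts them; it assumes a square scene with a power-of-two number of pixels and noiseless detection, which is a simplification of the experiment.

```python
import numpy as np
from scipy.linalg import hadamard

def simulate_hadamard_spi(scene):
    # Single-pixel imaging: each Hadamard pattern (+1/-1) yields one detector
    # value; the image is recovered with the inverse Hadamard transform.
    # `scene` must have a power-of-two number of pixels (e.g. 32 x 32).
    x = scene.ravel().astype(float)
    n = x.size
    H = hadamard(n)
    y = H @ x                # one differential single-pixel measurement per pattern
    x_rec = (H @ y) / n      # H is symmetric and H @ H = n * I
    return x_rec.reshape(scene.shape)
```

In practice each +1/-1 pattern would be realised as a pattern/complement pair on the DMD, with the two detector readings subtracted to form the differential measurement.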
Saito, Kenta; Kobayashi, Kentaro; Tani, Tomomi; Nagai, Takeharu
2008-01-01
Multi-point scanning confocal microscopy using a Nipkow disk enables the acquisition of fluorescence images with high spatial and temporal resolution. Like other single-point scanning confocal systems that use galvanometer mirrors, a commercially available Nipkow spinning disk confocal unit, the Yokogawa CSU10, requires lasers as the excitation light source. The choice of fluorescent dyes is strongly restricted, however, because only a limited number of laser lines can be introduced into a single confocal system. To overcome this problem, we developed an illumination system in which light from a mercury arc lamp is scrambled into homogeneous light by passing it through a multi-mode optical fiber. This illumination system provides incoherent light with continuous wavelengths, enabling the observation of a wide range of fluorophores. Using this optical system, we demonstrate both high-speed imaging (up to 100 Hz) of intracellular Ca(2+) propagation and multi-color imaging of Ca(2+) and PKC-gamma dynamics in living cells.
NASA Astrophysics Data System (ADS)
He, Qiang; Schultz, Richard R.; Wang, Yi; Camargo, Aldo; Martel, Florent
2008-01-01
In traditional super-resolution methods, researchers generally assume that accurate subpixel image registration parameters are given a priori. In reality, accurate image registration on a subpixel grid is the single most critically important step for the accuracy of super-resolution image reconstruction. In this paper, we introduce affine invariant features to improve subpixel image registration, which considerably reduces the number of mismatched points and hence makes traditional image registration more efficient and more accurate for super-resolution video enhancement. Affine invariant interest points include those corners that are invariant to affine transformations, including scale, rotation, and translation. They are extracted from the second moment matrix through the integration and differentiation covariance matrices. Our tests are based on two sets of real video captured by a small Unmanned Aircraft System (UAS) aircraft, which is highly susceptible to vibration from even light winds. The experimental results from real UAS surveillance video show that affine invariant interest points are more robust to perspective distortion and present more accurate matching than traditional Harris/SIFT corners. In our experiments on real video, all matching affine invariant interest points are found correctly. In addition, for the same super-resolution problem, we can use many fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution results.
Metasurface optics for full-color computational imaging.
Colburn, Shane; Zhan, Alan; Majumdar, Arka
2018-02-01
Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
Spatial and spectral imaging of point-spread functions using a spatial light modulator
NASA Astrophysics Data System (ADS)
Munagavalasa, Sravan; Schroeder, Bryce; Hua, Xuanwen; Jia, Shu
2017-12-01
We develop a point-spread function (PSF) engineering approach to imaging the spatial and spectral information of molecular emissions using a spatial light modulator (SLM). We show that a dispersive grating pattern imposed upon the emission reveals spectral information. We also propose a deconvolution model that allows the decoupling of the spectral and 3D spatial information in engineered PSFs. The work is readily applicable to single-molecule measurements and fluorescent microscopy.
Burnette, Dylan T; Sengupta, Prabuddha; Dai, Yuhai; Lippincott-Schwartz, Jennifer; Kachar, Bechara
2011-12-27
Superresolution imaging techniques based on the precise localization of single molecules, such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), achieve high resolution by fitting images of single fluorescent molecules with a theoretical Gaussian to localize them with a precision on the order of tens of nanometers. PALM/STORM rely on photoactivated proteins or photoswitching dyes, respectively, which makes them technically challenging. We present a simple and practical way of producing point localization-based superresolution images that does not require photoactivatable or photoswitching probes. Called bleaching/blinking assisted localization microscopy (BaLM), the technique relies on the intrinsic bleaching and blinking behaviors characteristic of all commonly used fluorescent probes. To detect single fluorophores, we simply acquire a stream of fluorescence images. Fluorophore bleach or blink-off events are detected by subtracting from each image of the series the subsequent image. Similarly, blink-on events are detected by subtracting from each frame the previous one. After image subtractions, fluorescence emission signals from single fluorophores are identified and the localizations are determined by fitting the fluorescence intensity distribution with a theoretical Gaussian. We also show that BaLM works with a spectrum of fluorescent molecules in the same sample. Thus, BaLM extends single molecule-based superresolution localization to samples labeled with multiple conventional fluorescent probes.
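A toy version of the BaLM pipeline is sketched below: consecutive frames are subtracted, positive peaks (bleach or blink-off events) are thresholded, and each event is localized by fitting a 2D Gaussian to a small region of interest. The function names, threshold, and ROI size are assumptions for illustration; blink-on events would use the reversed subtraction.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset).ravel()

def localize_off_events(frames, threshold, roi=5):
    # Bleach/blink-off events appear as positive peaks in frame[i] - frame[i+1];
    # each peak is localized by a 2D Gaussian fit to a small ROI around it.
    positions, half = [], roi // 2
    for a, b in zip(frames[:-1], frames[1:]):
        diff = a.astype(float) - b.astype(float)
        for r, c in np.argwhere(diff > threshold):
            if half <= r < diff.shape[0] - half and half <= c < diff.shape[1] - half:
                patch = diff[r - half:r + half + 1, c - half:c + half + 1]
                yy, xx = np.mgrid[:roi, :roi]
                p0 = (patch.max(), half, half, 1.5, patch.min())
                try:
                    popt, _ = curve_fit(gauss2d, (xx, yy), patch.ravel(), p0=p0)
                    positions.append((r - half + popt[2], c - half + popt[1]))
                except RuntimeError:
                    pass   # fit failed; skip this candidate event
    return positions
```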
NASA Astrophysics Data System (ADS)
Digman, Michelle
Fluorescence fluctuation spectroscopy has evolved from single-point detection of molecular diffusion to a family of image correlation tools (i.e., ICS, RICS, STICS, and kICS) useful for deriving the spatio-temporal dynamics of proteins in living cells. The advantage of the imaging techniques is the simultaneous measurement of all points in an image, with frame rates that keep increasing thanks to more sensitive cameras and new microscopy modalities such as sheet illumination. A new frontier in this area is the mapping of diffusion rates and protein dynamics in two and three dimensions. In this talk, I will discuss the evolution of fluctuation analysis from single-point detection to mapping diffusion in whole cells, and the technology behind this technique. In particular, new methods of analysis exploit correlations of molecular fluctuations measured at distant points (pair correlation analysis) and methods that exploit spatial averaging of fluctuations in small regions (iMSD). For example, the pair correlation function (pCF) analysis performed between adjacent pixels in all possible radial directions provides a window into anisotropic molecular diffusion. Similar to the connectivity atlas of neuronal connections from MRI diffusion tensor imaging, these new tools will be used to map the connectome of protein diffusion in living cells. For biological reaction-diffusion systems, live single-cell spatio-temporal analysis of protein dynamics provides a means to observe stochastic biochemical signaling in the context of the intracellular environment, which may lead to a better understanding of cancer cell invasion, stem cell differentiation, and other fundamental biological processes. National Institutes of Health Grant P41-RRO3155.
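The pair correlation idea reduces, for a single pixel pair, to a delayed cross-correlation of the two intensity traces. The sketch below is a bare-bones illustration assuming uniformly sampled traces stored as NumPy arrays; normalization conventions vary in the literature, so this is one possible form.

```python
import numpy as np

def pair_correlation(trace_a, trace_b, max_lag):
    # pCF between the fluctuations recorded at two pixels:
    # G(tau) = <dA(t) dB(t + tau)> / (<A> <B>).
    # A peak at a nonzero delay indicates molecules moving from pixel A to B.
    a = np.asarray(trace_a, dtype=float)
    b = np.asarray(trace_b, dtype=float)
    da, db = a - a.mean(), b - b.mean()
    g = np.empty(max_lag)
    for tau in range(max_lag):
        g[tau] = np.mean(da[:len(da) - tau] * db[tau:]) / (a.mean() * b.mean())
    return g
```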
Double peacock eye optical element for extended focal depth imaging with ophthalmic applications.
Romero, Lenny A; Millán, María S; Jaroszewicz, Zbigniew; Kolodziejczyk, Andrzej
2012-04-01
The aged human eye is commonly affected by presbyopia, and therefore, it gradually loses its capability to form images of objects placed at different distances. Extended depth of focus (EDOF) imaging elements can overcome this inability, despite the introduction of a certain amount of aberration. This paper evaluates the EDOF imaging performance of the so-called peacock eye phase diffractive element, which focuses an incident plane wave into a segment of the optical axis and explores the element's potential use for ophthalmic presbyopia compensation optics. Two designs of the element are analyzed: the single peacock eye, which produces one focal segment along the axis, and the double peacock eye, which is a spatially multiplexed element that produces two focal segments with partial overlapping along the axis. The performances of the peacock eye elements are compared with those of multifocal lenses through numerical simulations as well as optical experiments in the image space. The results demonstrate that the peacock eye elements form sharper images along the focal segment than the multifocal lenses and, therefore, are more suitable for presbyopia compensation. The extreme points of the depth of field in the object space, which represent the remote and the near object points, have been experimentally obtained for both the single and the double peacock eye optical elements. The double peacock eye element has better imaging quality for relatively short and intermediate distances than the single peacock eye, whereas the latter seems better for far distance vision.
Classification of spatially unresolved objects
NASA Technical Reports Server (NTRS)
Nalepka, R. F.; Horwitz, H. M.; Hyde, P. D.; Morgenstern, J. P.
1972-01-01
A proportion estimation technique for the classification of multispectral scanner images is reported that uses data-point averaging to extract and compute estimated proportions for a single averaged data point, in order to classify spatially unresolved areas. Example extraction calculations of spectral signatures for bare soil, weeds, alfalfa, and barley prove quite accurate.
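The single-averaged-point idea can be illustrated as a constrained linear unmixing of the mean pixel against the class signatures. The sketch below uses non-negative least squares as the solver; the Image-100 system's actual estimator is not described here, so the function name and solver choice are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_proportions(mean_pixel, signatures):
    # Solve mean_pixel ~= signatures @ p with p >= 0, then normalize so the
    # proportions sum to 1. `signatures` is an (n_bands, n_classes) matrix of
    # class spectral signatures (e.g. bare soil, weeds, alfalfa, barley).
    p, _ = nnls(signatures, mean_pixel)
    return p / p.sum() if p.sum() > 0 else p
```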
Accurate geometrical optics model for single-lens stereovision system using a prism.
Cui, Xiaoyu; Lim, Kah Bin; Guo, Qiyong; Wang, DaoLei
2012-09-01
In this paper, we propose a new method for analyzing the image formation of a prism. The prism is considered as a single optical system composed of several planes. By analyzing each plane individually and then combining them, we derive a transformation matrix that expresses the relationship between an object point and its image formed by refraction through the prism. We also explain how to use this matrix for epipolar geometry and three-dimensional point reconstruction. Our method is based on geometrical optics and can be used with a multiocular prism. Experimental results show that the accuracy of our method is better than that of previous work and comparable to that of a multi-camera stereovision system.
Single Fluorescent Molecules as Nano-Illuminators for Biological Structure and Function
NASA Astrophysics Data System (ADS)
Moerner, W. E.
2011-03-01
Since the first optical detection and spectroscopy of a single molecule in a solid (Phys. Rev. Lett. 62, 2535 (1989)), much has been learned about the ability of single molecules to probe local nanoenvironments and individual behavior in biological and nonbiological materials in the absence of the ensemble averaging that can obscure heterogeneity. Because each single fluorophore acts as a light source roughly 1 nm in size, microscopic imaging of individual fluorophores leads naturally to superlocalization, or determination of the position of the molecule with precision beyond the optical diffraction limit, simply by digitization of the point-spread function from the single emitter. For example, the shape of single filaments in a living cell can be extracted simply by allowing a single molecule to move through the filament (PNAS 103, 10929 (2006)). The addition of photoinduced control of single-molecule emission allows imaging beyond the diffraction limit (super-resolution), and a new array of acronyms (PALM, STORM, F-PALM, etc.) and advances have appeared. We have used the native blinking and switching of a common yellow-emitting variant of green fluorescent protein (EYFP), reported more than a decade ago (Nature 388, 355 (1997)), to achieve sub-40 nm super-resolution imaging of several protein structures in the bacterium Caulobacter crescentus: the quasi-helix of the actin-like protein MreB (Nat. Meth. 5, 947 (2008)), the cellular distribution of the DNA binding protein HU (submitted), and the recently discovered division spindle composed of ParA filaments (Nat. Cell Biol. 12, 791 (2010)). Even with these advances, better emitters would provide more photons and improved resolution, and a new photoactivatable small-molecule emitter has recently been synthesized and targeted to specific structures in living cells to provide super-resolution images (JACS 132, 15099 (2010)). Finally, a new optical method for extracting three-dimensional position information based on a double-helix point spread function enables quantitative tracking of single mRNA particles in living yeast cells with 15 ms time resolution and 25-50 nm spatial precision (PNAS 107, 17864 (2010)). These examples illustrate the power of single-molecule optical imaging in extracting new structural and functional information in living cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.
We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.″2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1″ in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
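The weighting scheme described above is, at its core, an inverse-variance weighted mean per pixel. The sketch below is a schematic illustration assuming registered single-epoch frames and matching per-pixel variance maps as NumPy arrays; in the released products the weights fold seeing, transparency, and sky noise into those variances.

```python
import numpy as np

def coadd(frames, variances):
    # Inverse-variance weighted co-addition of registered single-epoch frames.
    # `frames` and `variances` have the same shape (n_epochs, H, W).
    # Returns the co-added image and the summed weight map (the "weight image").
    frames = np.asarray(frames, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    wsum = weights.sum(axis=0)
    return (weights * frames).sum(axis=0) / wsum, wsum
```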
3D Reconstruction with a Collaborative Approach Based on Smartphones and a Cloud-Based Server
NASA Astrophysics Data System (ADS)
Nocerino, E.; Poiesi, F.; Locher, A.; Tefera, Y. T.; Remondino, F.; Chippendale, P.; Van Gool, L.
2017-11-01
The paper presents a collaborative image-based 3D reconstruction pipeline in which image acquisition is performed with smartphones and geometric 3D reconstruction on a server, during concurrent or disjoint acquisition sessions. Images are selected from the video feed of the smartphone's camera based on their quality and novelty. The smartphone app provides on-the-fly reconstruction feedback to the users involved in the acquisitions. The server runs an incremental SfM algorithm that processes the received images by seamlessly merging them into a single sparse point cloud using bundle adjustment. A dense image matching algorithm can then be launched to derive denser point clouds. The reconstruction details, experiments, and performance evaluation are presented and discussed.
Microscopy with multimode fibers
NASA Astrophysics Data System (ADS)
Moser, Christophe; Papadopoulos, Ioannis; Farahi, Salma; Psaltis, Demetri
2013-04-01
Microscopes are usually thought of as comprising imaging elements such as objectives and eyepiece lenses. A different type of microscope, used for endoscopy, consists of waveguiding elements such as fiber bundles, where each fiber in the bundle transports the light corresponding to one pixel in the image. Recently, a new type of microscope has emerged that exploits the large number of propagating modes in a single multimode fiber. We have successfully produced fluorescence images of neural cells with sub-micrometer resolution through a 200 micrometer core multimode fiber. The method for achieving imaging consists of using digital phase conjugation to produce a focal spot at the tip of the multimode fiber. The image is formed by scanning the focal spot digitally and collecting the fluorescence point by point.
Feature-based US to CT registration of the aortic root
NASA Astrophysics Data System (ADS)
Lang, Pencilla; Chen, Elvis C. S.; Guiraudon, Gerard M.; Jones, Doug L.; Bainbridge, Daniel; Chu, Michael W.; Drangova, Maria; Hata, Noby; Jain, Ameet; Peters, Terry M.
2011-03-01
A feature-based registration was developed to align biplane and tracked ultrasound images of the aortic root with a preoperative CT volume. In transcatheter aortic valve replacement, a prosthetic valve is inserted into the aortic annulus via a catheter. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to significant morbidity and mortality. Registration of pre-operative CT to transesophageal ultrasound and fluoroscopy images is a major step towards providing augmented image guidance for this procedure. The proposed registration approach uses an iterative closest point algorithm to register a surface mesh generated from CT to 3D US points reconstructed from a single biplane US acquisition, or multiple tracked US images. The use of a single simultaneous acquisition biplane image eliminates reconstruction error introduced by cardiac gating and TEE probe tracking, creating potential for real-time intra-operative registration. A simple initialization procedure is used to minimize changes to operating room workflow. The algorithm is tested on images acquired from excised porcine hearts. Results demonstrate a clinically acceptable accuracy of 2.6mm and 5mm for tracked US to CT and biplane US to CT registration respectively.
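For context, a plain point-to-surface ICP (as opposed to the authors' exact surface-mesh implementation) can be sketched as below, assuming the CT surface is represented by its vertex cloud and the US points are already roughly initialized; the convergence tolerance and iteration count are arbitrary choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    # Least-squares rigid transform (R, t) mapping src onto dst (Kabsch method).
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(us_points, ct_vertices, iters=50, tol=1e-6):
    # Iteratively match each reconstructed 3D US point to its nearest CT
    # surface vertex and re-estimate the rigid transform.
    tree = cKDTree(ct_vertices)
    pts = np.asarray(us_points, dtype=float).copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(pts)
        R, t = best_rigid_transform(pts, ct_vertices[idx])
        pts = pts @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        if abs(prev_err - dist.mean()) < tol:
            break
        prev_err = dist.mean()
    return R_total, t_total
```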
A new imaging technique based on resonance for arterial vessels
NASA Astrophysics Data System (ADS)
Zhang, Xiaoming; Fatemi, Mostafa; Greenleaf, James F.
2003-04-01
Vibro-acoustography is a new noncontact imaging method based on the radiation force of ultrasound. We extend this technique to the imaging of arterial vessels based on vibration resonance. The arterial vessel is excited remotely by ultrasound at a resonant frequency, at which the vibration of the vessel, as well as its transmission to the body surface, is large enough to be measured. By scanning the ultrasound beam across the vessel plane and measuring the vibration at one single point on the body or vessel surface, an image of the interior artery can be mapped. A theory is developed that predicts that the measured velocity is proportional to the value of the mode shape at resonance. Experimental studies were carried out on a silicone tube embedded in a cylindrical gel phantom of large radius, which simulates a large artery and the surrounding body. The fundamental frequency was measured while the ultrasound transducer scanned across the tube plane, with the velocity measured by laser at a single point on the tube or on the phantom. The images obtained clearly show the interior tube and the modal shape of the tube. The present technique offers a new imaging method for arterial vessels.
Single-Pulse Multi-Point Multi-Component Interferometric Rayleigh Scattering Velocimeter
NASA Technical Reports Server (NTRS)
Bivolaru, Daniel; Danehy, Paul M.; Lee, Joseph W.; Gaffney, Richard L., Jr.; Cutler, Andrew D.
2006-01-01
A simultaneous multi-point, multi-component velocimeter using interferometric detection of the Doppler shift of Rayleigh, Mie, and Rayleigh-Brillouin scattered light in supersonic flow is described. The system uses up to three sets of collection optics and one beam combiner for the reference laser light to form a single collimated beam. The planar Fabry-Perot interferometer, used in the imaging mode for frequency detection, preserves the spatial distribution of the signal reasonably well. Single-pulse multi-point measurements of up to two orthogonal and one non-orthogonal components of velocity in a Mach 2 free jet were performed to demonstrate the technique. The average velocity measurements show close agreement with CFD calculations using the VULCAN code.
Differential Multiphoton Laser Scanning Microscopy
Field, Jeffrey J.; Sheetz, Kraig E.; Chandler, Eric V.; Hoover, Erich E.; Young, Michael D.; Ding, Shi-you; Sylvester, Anne W.; Kleinfeld, David; Squier, Jeff A.
2016-01-01
Multifocal multiphoton microscopy (MMM) in the biological and medical sciences has become an important tool for obtaining high resolution images at video rates. While current implementations of MMM achieve very high frame rates, they are limited in their applicability to essentially those biological samples that exhibit little or no scattering. In this paper, we report on a method for MMM in which imaging detection is not necessary (single element point detection is implemented), and is therefore fully compatible for use in imaging through scattering media. Further, we demonstrate that this method leads to a new type of MMM wherein it is possible to simultaneously obtain multiple images and view differences in excitation parameters in a single shot. PMID:27390511
Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy
Cohen, E. A. K.; Ober, R. J.
2014-01-01
We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variable problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distribution for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes
NASA Astrophysics Data System (ADS)
Teppati Losè, L.; Chiabrando, F.; Spanò, A.
2018-05-01
The research presented in this paper is focused on a preliminary evaluation of a 360 multi-camera rig: the possibility of using the images acquired by the system in a photogrammetric workflow and for the creation of spherical images is investigated, and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for the estimation of the interior orientation parameters of the cameras, from both an operative and a theoretical point of view. The consistency of the six cameras that compose the 360 system was analysed in depth by adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was designed and created, and several topographic measurements were performed in order to have a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras was analysed both in the different phases of the photogrammetric workflow (reprojection errors on the single tie points, dense cloud generation, geometrical description of the surveyed object, etc.) and in the stitching of the different images into a single spherical panorama (some considerations on the influence of the camera parameters on the overall quality of the spherical image are also reported in that section).
NASA Astrophysics Data System (ADS)
Ihsani, Alvin; Farncombe, Troy
2016-02-01
The modelling of the projection operator in tomographic imaging is of critical importance, especially when working with algebraic methods of image reconstruction. This paper proposes a distance-driven projection method targeted at single-pinhole single-photon emission computed tomography (SPECT) imaging, since it accounts for the finite size of the pinhole and the possible tilting of the detector surface, in addition to other collimator-specific factors such as geometric sensitivity. The accuracy and execution time of the proposed method are evaluated by comparison to a ray-driven approach in which the pinhole is sub-sampled with various sampling schemes. A point-source phantom, whose projections were generated using OpenGATE, was first used to compare the resolution of images reconstructed with each method using the full width at half maximum (FWHM). Furthermore, a high-activity Mini Deluxe Phantom (Data Spectrum Corp., Durham, NC, USA) SPECT resolution phantom was scanned using a Gamma Medica X-SPECT system, and the signal-to-noise ratio (SNR) and structural similarity of reconstructed images were compared at various projection counts. Based on the reconstructed point-source phantom, the proposed distance-driven approach results in a lower FWHM than the ray-driven approach, even when using a smaller detector resolution. Furthermore, based on the Mini Deluxe Phantom, it is shown that the distance-driven approach has consistently higher SNR and structural similarity than the ray-driven approach as the counts in the measured projections decrease.
Fu, Yu; Pedrini, Giancarlo
2014-01-01
In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured at every instant, and the Nyquist sampling theorem has to be satisfied along the time axis at each measurement point. Two types of techniques were developed for such measurements: one based on high-speed cameras and the other using a single photodetector. The limited measurement range along the time axis in camera-based technology is mainly due to the low capture rate, while photodetector-based technology can only perform the measurement at a single point. In this paper, several aspects of these two technologies are discussed. For camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For detector-based interferometry, the discussion mainly focuses on single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results illustrate the efforts made by researchers to improve the measurement capabilities of interferometry-based techniques so as to cover the requirements of industrial applications. PMID:24963503
Capability of long distance 100 GHz FMCW using a single GDD lamp sensor.
Levanon, Assaf; Rozban, Daniel; Aharon Akram, Avihai; Kopeika, Natan S; Yitzhaky, Yitzhak; Abramovich, Amir
2014-12-20
Millimeter wave (MMW)-based imaging systems are required for applications in medicine, homeland security, concealed weapon detection, and space technology. The lack of inexpensive room-temperature imaging sensors makes it difficult to provide a suitable MMW system for many of the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The radar system requires that the millimeter wave detector be able to operate as a heterodyne detector. Since the source of radiation is a frequency-modulated continuous wave (FMCW), the signal obtained by heterodyne detection carries the object's depth information in the value of the difference frequency, in addition to the reflectance of the 2D image. New experiments show the capability of long-distance FMCW detection by using a large-scale Cassegrain projection system, described first (to our knowledge) in this paper. The system demonstrates operation at a distance of at least 20 m with a low-cost plasma-based glow discharge detector (GDD) focal plane array (FPA). Each point on the object corresponds to a point in the image and includes the distance information. This will enable relatively inexpensive 3D MMW imaging.
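For a linear FMCW chirp, the depth information mentioned above comes from the beat (difference) frequency produced by heterodyne detection. The helper below states the standard relation R = c * f_b * T / (2 * B); the parameter names are illustrative and no system-specific calibration is included.

```python
def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_time_s, c=3.0e8):
    # Range of a reflecting point from the beat frequency of a linear FMCW
    # chirp of bandwidth B swept over time T: R = c * f_b * T / (2 * B).
    return c * beat_freq_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)
```

For example, a 1 MHz beat frequency with a 10 GHz sweep over 1 ms corresponds to R = 3e8 * 1e6 * 1e-3 / (2 * 1e10) = 15 m.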
NASA Astrophysics Data System (ADS)
Lee, Daeho; Lee, Seohyung
2017-11-01
We propose an image stitching method that removes ghost effects and corrects the structural misalignments that occur in common image stitching methods. To reduce the artifacts caused by different parallaxes, an optimal seam pair is selected by comparing the cross correlations from multiple seams detected with variable cost weights. Along the optimal seam pair, a histogram of oriented gradients is calculated, and feature points for matching are detected. The homography is refined using the matching points, and the remaining misalignment is eliminated using the propagation of deformation vectors calculated from the matching points. In multiband blending, the overlapping regions are determined from the distance between the matching points to remove overlapping artifacts. The experimental results show that the proposed method eliminates misalignments and overlapping artifacts more robustly than an existing method that uses single seam detection and gradient features.
Magnetic resonance imaging (MRI) and relaxation time mapping of concrete
NASA Astrophysics Data System (ADS)
Beyea, Steven Donald
2001-07-01
The use of Magnetic Resonance Imaging (MRI) of water in concrete is presented. This thesis approaches the problem of MR imaging of concrete by attempting to design new methods suited to concrete materials, rather than attempting to force the material to suit the method. A number of techniques were developed which allow the spatial observation of water in concrete in up to three dimensions, and permit the determination of space-resolved moisture content as well as local NMR relaxation times. These methods are all based on the Single-Point Imaging (SPI) method. The development of these new methods is described, and the techniques are validated using phantom studies. The study of one-dimensional moisture transport in drying concrete was performed using SPI. This work examined the effect of initial mixture proportions and hydration time on the drying behaviour of concrete over a period of three months. Studies of drying concrete were also performed using spatial mapping of the spin-lattice (T1) and effective spin-spin (T2*) relaxation times, thereby permitting the observation of changes in the water-occupied pore surface-to-volume ratio (S/V) as a function of drying. Results of this work demonstrated changes in the S/V due to drying, hydration and drying-induced microcracking. Three-dimensional MRI of concrete was performed using SPRITE (Single-Point Ramped Imaging with T1 Enhancement) and turboSPI (turbo Single Point Imaging). While SPRITE allows for weighting of MR images using T1 and T2*, turboSPI allows T2 weighting of the resulting images. Using relaxation weighting it was shown to be possible to discriminate between water contained within a hydrated cement matrix and water in highly porous aggregates used to produce low-density concrete. Three-dimensional experiments performed using SPRITE and turboSPI examined the role of self-desiccation, drying, initial aggregate saturation and initial mixture conditions on the transport of moisture between porous aggregates and the hydrated matrix. The results demonstrate that water is both added to and removed from the aggregates, depending upon the physical conditions. The images also appear to show an influx of cement products into cracks in the solid aggregate. (Abstract shortened by UMI.)
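The relaxation-parameter mapping described in the thesis boils down to fitting an exponential decay pixel-by-pixel to images acquired at several encoding times. The sketch below fits S(tp) = S0 * exp(-tp / T2*) with SciPy; it is a generic illustration (single-component decay, arbitrary initial guesses) rather than the exact processing used for the concrete data.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_t2star_map(images, encoding_times):
    # Fit S(tp) = S0 * exp(-tp / T2*) pixel-by-pixel from images acquired at
    # several phase-encoding times tp, yielding spin-density (S0) and T2* maps.
    decay = lambda tp, s0, t2s: s0 * np.exp(-tp / t2s)
    tp = np.asarray(encoding_times, dtype=float)
    stack = np.stack(images).reshape(len(tp), -1)
    s0_map = np.zeros(stack.shape[1])
    t2s_map = np.zeros(stack.shape[1])
    for i in range(stack.shape[1]):
        try:
            (s0, t2s), _ = curve_fit(decay, tp, stack[:, i],
                                     p0=(stack[0, i], tp.mean()), maxfev=2000)
            s0_map[i], t2s_map[i] = s0, t2s
        except RuntimeError:
            pass   # fit failed (e.g. background pixel); leave zeros
    shape = images[0].shape
    return s0_map.reshape(shape), t2s_map.reshape(shape)
```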
Electron paramagnetic resonance (EPR) is a technique for studying chemical species that have one or more unpaired electrons. The current invention describes Echo-based Single Point Imaging (ESPI), a novel EPR image formation strategy that allows in vivo imaging of physiological function. The National Cancer Institute's Radiation Biology Branch is seeking statements of capability or interest from parties interested in in-licensing an in vivo imaging technology that uses EPR to measure active oxygen species.
Acquiring 4D Thoracic CT Scans Using Ciné CT Acquisition
NASA Astrophysics Data System (ADS)
Low, Daniel
One method for acquiring 4D thoracic CT scans is to use ciné acquisition. Ciné acquisition is conducted by rotating the gantry and acquiring x-ray projections while keeping the couch stationary. After a complete rotation, a single set of CT slices is produced, with the number of slices corresponding to the number of CT detector rows. The rotation period is typically sub-second, so each image set corresponds to a single point in time. The ciné image acquisition is repeated for at least one breathing cycle to acquire images throughout the breathing cycle. Once the images are acquired at a single couch position, the couch is moved to the abutting position and the acquisition is repeated. Post-processing typically resorts the image sets into breathing phases, stacking images from a specific phase to produce a thoracic CT scan at that phase. Benefits of the ciné acquisition protocol include the ability to precisely identify the phase associated with each acquired image, the ability to resort images after reconstruction, and the ability to acquire images over arbitrarily long times and for arbitrarily many images (within dose constraints).
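A schematic of the phase-resorting step is given below. It assumes each couch position yields a list of (breathing phase in [0, 1), image slab) pairs and simply picks, per phase bin, the acquisition closest to the bin centre; real implementations handle irregular breathing and interpolation, which are ignored here.

```python
import numpy as np

def resort_by_phase(cine_sets, n_bins=10):
    # cine_sets: list over couch positions; each entry is a list of
    # (phase, image_slab) pairs acquired at that couch position.
    # Returns one stacked volume per breathing-phase bin.
    centres = (np.arange(n_bins) + 0.5) / n_bins
    volumes = []
    for centre in centres:
        slabs = []
        for couch in cine_sets:
            phases = np.array([p for p, _ in couch])
            k = int(np.argmin(np.abs(phases - centre)))   # acquisition nearest the bin centre
            slabs.append(couch[k][1])
        volumes.append(np.concatenate(slabs, axis=0))     # stack slabs along the couch axis
    return volumes
```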
Lin, Zhimin; Zeng, Ying; Tong, Li; Zhang, Hangming; Zhang, Chi
2017-01-01
The application of electroencephalogram (EEG) signals generated while humans view images is a new thrust in image retrieval technology. A P300 component is induced in the EEG when subjects see their point of interest in a target image under the rapid serial visual presentation (RSVP) experimental paradigm. We detected the single-trial P300 component to determine whether a subject was interested in an image. In practice, the latency and amplitude of the P300 component may vary with different experimental parameters, such as target probability and stimulus semantics. Thus, we proposed a novel method, the Target Recognition using Image Complexity Priori (TRICP) algorithm, in which image information is introduced into the calculation of the interest score in the RSVP paradigm. The method combines information from the image and the EEG to enhance the accuracy of single-trial P300 detection relative to traditional single-trial P300 detection algorithms. We defined an image complexity parameter based on the features of the different layers of a convolutional neural network (CNN). We used the TRICP algorithm to compute the complexity of each image, quantified the effect of images of different complexity on the P300 component, and trained specialized classifiers according to image complexity. We compared TRICP with the HDCA algorithm. Results show that TRICP performs significantly better than the HDCA algorithm (Wilcoxon signed-rank test, p<0.05). Thus, the proposed method can also be used for other visual-task-related single-trial event-related potential detection. PMID:29283998
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1989-01-01
A method and apparatus is developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
Birefringence of single and bundled microtubules.
Oldenbourg, R; Salmon, E D; Tran, P T
1998-01-01
We have measured the birefringence of microtubules (MTs) and of MT-based macromolecular assemblies in vitro and in living cells by using the new Pol-Scope. A single microtubule in aqueous suspension and imaged with a numerical aperture of 1.4 had a peak retardance of 0.07 nm. The peak retardance of a small bundle increased linearly with the number of MTs in the bundle. Axonemes (prepared from sea urchin sperm) had a peak retardance 20 times higher than that of single MTs, in accordance with the nine doublets and two singlets arrangement of parallel MTs in the axoneme. Measured filament retardance decreased when the filament was defocused or the numerical aperture of the imaging system was decreased. However, the retardance "area," which we defined as the image retardance integrated along a line perpendicular to the filament axis, proved to be independent of focus and of numerical aperture. These results are in good agreement with a theory that we developed for measuring retardances with imaging optics. Our theoretical concept is based on Wiener's theory of mixed dielectrics, which is well established for nonimaging applications. We extend its use to imaging systems by considering the coherence region defined by the optical set-up. Light scattered from within that region interferes coherently in the image point. The presence of a filament in the coherence region leads to a polarization dependent scattering cross section and to a finite retardance measured in the image point. Similar to resolution measurements, the linear dimension of the coherence region for retardance measurements is on the order lambda/(2 NA), where lambda is the wavelength of light and NA is the numerical aperture of the illumination and imaging lenses. PMID:9449366
Tahmasbi, Amir; Ward, E. Sally; Ober, Raimund J.
2015-01-01
Fluorescence microscopy is a photon-limited imaging modality that allows the study of subcellular objects and processes with high specificity. The best possible accuracy (standard deviation) with which an object of interest can be localized when imaged using a fluorescence microscope is typically calculated using the Cramér-Rao lower bound, that is, the inverse of the Fisher information. However, the current approach for the calculation of the best possible localization accuracy relies on an analytical expression for the image of the object. This can pose practical challenges since it is often difficult to find appropriate analytical models for the images of general objects. In this study, we instead develop an approach that directly uses an experimentally collected image set to calculate the best possible localization accuracy for a general subcellular object. In this approach, we fit splines, i.e. smoothly connected piecewise polynomials, to the experimentally collected image set to provide a continuous model of the object, which can then be used for the calculation of the best possible localization accuracy. Due to its practical importance, we investigate in detail the application of the proposed approach in single molecule fluorescence microscopy. In this case, the object of interest is a point source and, therefore, the acquired image set pertains to an experimental point spread function. PMID:25837101
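The core idea of the abstract, computing a localization bound directly from a spline model fitted to a measured PSF, can be illustrated with a short numerical sketch. This is only an illustration under simplifying assumptions (no background, ideal pixelation, Poisson noise, shift of the spline model along x); the function name and arguments are hypothetical, not the authors' software.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def localization_bound_x(psf_image, pixel_size, photons):
    """Cramer-Rao bound on the x-position of a point source whose image is an
    experimentally measured PSF. A minimal sketch: background, normalization
    over the support and pixel integration are ignored."""
    ny, nx = psf_image.shape
    y = np.arange(ny) * pixel_size
    x = np.arange(nx) * pixel_size
    # Continuous model of the measured PSF via an interpolating bivariate spline
    model = RectBivariateSpline(y, x, psf_image / psf_image.sum(), s=0)
    yy, xx = np.meshgrid(y, x, indexing="ij")
    mu = photons * model.ev(yy.ravel(), xx.ravel())            # expected counts per pixel
    dmu_dx = photons * model.ev(yy.ravel(), xx.ravel(), dy=1)  # derivative w.r.t. image x
    mu = np.clip(mu, 1e-12, None)
    fisher = np.sum(dmu_dx ** 2 / mu)    # Fisher information under Poisson noise
    return 1.0 / np.sqrt(fisher)         # best achievable std. dev. of the x estimate
```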
Centric scan SPRITE for spin density imaging of short relaxation time porous materials.
Chen, Quan; Halse, Meghan; Balcom, Bruce J
2005-02-01
The single-point ramped imaging with T1 enhancement (SPRITE) imaging technique has proven to be a very robust and flexible method for the study of a wide range of systems with short signal lifetimes. As a pure phase encoding technique, SPRITE is largely immune to image distortions generated by susceptibility variations, chemical shift and paramagnetic impurities. In addition, it avoids the line width restrictions on resolution common to time-based sampling, frequency encoding methods. The standard SPRITE technique is however a longitudinal steady-state imaging method; the image intensity is related to the longitudinal steady state, which not only decreases the signal-to-noise ratio, but also introduces many parameters into the image signal equation. A centric scan strategy for SPRITE removes the longitudinal steady state from the image intensity equation and increases the inherent image intensity. Two centric scan SPRITE methods, that is, Spiral-SPRITE and Conical-SPRITE, with fast acquisition and greatly reduced gradient duty cycle, are outlined. Multiple free induction decay (FID) points may be acquired during SPRITE sampling for signal averaging to increase signal-to-noise ratio or for T2* and spin density mapping without an increase in acquisition time. Experimental results show that most porous sedimentary rock and concrete samples have a single exponential T2* decay due to susceptibility difference-induced field distortion. Inhomogeneous broadening thus dominates, which suggests that spin density imaging can be easily obtained by SPRITE.
Hillman, Elizabeth Mc; Voleti, Venkatakaushik; Patel, Kripa; Li, Wenze; Yu, Hang; Perez-Campos, Citlali; Benezra, Sam E; Bruno, Randy M; Galwaduge, Pubudu T
2018-06-01
As optical reporters and modulators of cellular activity have become increasingly sophisticated, the amount that can be learned about the brain via high-speed cellular imaging has increased dramatically. However, despite fervent innovation, point-scanning microscopy is facing a fundamental limit in achievable 3D imaging speeds and fields of view. A range of alternative approaches are emerging, some of which are moving away from point-scanning to use axially-extended beams or sheets of light, for example swept confocally aligned planar excitation (SCAPE) microscopy. These methods are proving effective for high-speed volumetric imaging of the nervous system of small organisms such as Drosophila (fruit fly) and D. rerio (zebrafish), and are showing promise for imaging activity in the living mammalian brain using both single- and two-photon excitation. This article describes these approaches and presents a simple model that demonstrates key advantages of axially-extended illumination over point-scanning strategies for high-speed volumetric imaging, including longer integration times per voxel, improved photon efficiency and reduced photodamage. Copyright © 2018 Elsevier Ltd. All rights reserved.
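The "longer integration time per voxel" argument can be made concrete with back-of-envelope numbers. The volume rate and voxel counts below are illustrative assumptions, not values from the article.

```python
# Illustrative per-voxel integration time: point scanning vs. axially extended illumination.
volume_rate = 10           # volumes per second (assumed)
nx, ny, nz = 200, 200, 50  # voxels per volume (assumed)
voxels = nx * ny * nz

# Point scanning: only one voxel is illuminated and recorded at a time.
t_point = 1.0 / (volume_rate * voxels)

# Sheet/axially extended illumination: a whole nx*ny plane is integrated in
# parallel on a camera, so only nz sequential steps are needed per volume.
t_sheet = 1.0 / (volume_rate * nz)

print(f"point scanning  : {t_point * 1e9:.0f} ns per voxel")
print(f"axially extended: {t_sheet * 1e6:.0f} us per voxel")
```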
Farnell, D J J; Popat, H; Richmond, S
2016-06-01
Methods used in image processing should reflect any multilevel structures inherent in the image dataset or they run the risk of functioning inadequately. We wish to test the feasibility of multilevel principal components analysis (PCA) to build active shape models (ASMs) for cases relevant to medical and dental imaging. Multilevel PCA was used to carry out model fitting to sets of landmark points and it was compared to the results of "standard" (single-level) PCA. Proof of principle was tested by applying mPCA to model basic peri-oral expressions (happy, neutral, sad) approximated to the junction between the mouth/lips. Monte Carlo simulations were used to create this data which allowed exploration of practical implementation issues such as the number of landmark points, number of images, and number of groups (i.e., "expressions" for this example). To further test the robustness of the method, mPCA was subsequently applied to a dental imaging dataset utilising landmark points (placed by different clinicians) along the boundary of mandibular cortical bone in panoramic radiographs of the face. Changes of expression that varied between groups were modelled correctly at one level of the model and changes in lip width that varied within groups at another for the Monte Carlo dataset. Extreme cases in the test dataset were modelled adequately by mPCA but not by standard PCA. Similarly, variations in the shape of the cortical bone were modelled by one level of mPCA and variations between the experts at another for the panoramic radiographs dataset. Results for mPCA were found to be comparable to those of standard PCA for point-to-point errors via miss-one-out testing for this dataset. These errors reduce with increasing number of eigenvectors/values retained, as expected. We have shown that mPCA can be used in shape models for dental and medical image processing. mPCA was found to provide more control and flexibility when compared to standard "single-level" PCA. Specifically, mPCA is preferable to "standard" PCA when multiple levels occur naturally in the dataset. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
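One simple reading of the two-level idea, PCA of group means for between-group variation and PCA of the residuals for within-group variation, can be sketched as follows. The array shapes, the assumption of pre-aligned landmarks, and the function name are illustrative; the published mPCA formulation may differ in detail.

```python
import numpy as np

def two_level_pca(shapes, groups):
    """shapes: (n_samples, 2*n_landmarks) pre-aligned landmark vectors.
    groups: (n_samples,) integer labels (e.g. expression, or annotating clinician).
    Returns (eigenvalues, eigenvectors) for a between-group level and a
    within-group level -- a minimal two-level PCA sketch."""
    shapes = np.asarray(shapes, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    grand_mean = shapes.mean(axis=0)
    group_means = np.array([shapes[groups == g].mean(axis=0) for g in labels])
    residuals = np.vstack([shapes[groups == g] - group_means[i]
                           for i, g in enumerate(labels)])

    def pca(data):
        cov = np.cov(data, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)
        order = np.argsort(vals)[::-1]
        return vals[order], vecs[:, order]

    between = pca(group_means - grand_mean)  # level 1: variation between groups
    within = pca(residuals)                  # level 2: variation within groups
    return between, within
```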
Femtosecond few- to single-electron point-projection microscopy for nanoscale dynamic imaging
Bainbridge, A. R.; Barlow Myers, C. W.; Bryan, W. A.
2016-01-01
Femtosecond electron microscopy produces real-space images of matter in a series of ultrafast snapshots. Pulses of electrons self-disperse under space-charge broadening, so without compression, the ideal operation mode is a single electron per pulse. Here, we demonstrate femtosecond single-electron point projection microscopy (fs-ePPM) in a laser-pump fs-e-probe configuration. The electrons have an energy of only 150 eV and take tens of picoseconds to propagate to the object under study. Nonetheless, we achieve a temporal resolution with a standard deviation of 114 fs (equivalent to a full-width at half-maximum of 269 ± 40 fs) combined with a spatial resolution of 100 nm, applied to a localized region of charge at the apex of a nanoscale metal tip induced by 30 fs 800 nm laser pulses at 50 kHz. These observations demonstrate real-space imaging of reversible processes, such as tracking charge distributions, is feasible whilst maintaining femtosecond resolution. Our findings could find application as a characterization method, which, depending on geometry, could resolve tens of femtoseconds and tens of nanometres. Dynamically imaging electric and magnetic fields and charge distributions on sub-micron length scales opens new avenues of ultrafast dynamics. Furthermore, through the use of active compression, such pulses are an ideal seed for few-femtosecond to attosecond imaging applications which will access sub-optical cycle processes in nanoplasmonics. PMID:27158637
Detection of kinetic change points in piece-wise linear single molecule motion
NASA Astrophysics Data System (ADS)
Hill, Flynn R.; van Oijen, Antoine M.; Duderstadt, Karl E.
2018-03-01
Single-molecule approaches present a powerful way to obtain detailed kinetic information at the molecular level. However, the identification of small rate changes is often hindered by the considerable noise present in such single-molecule kinetic data. We present a general method to detect such kinetic change points in trajectories of motion of processive single molecules having Gaussian noise, with a minimum number of parameters and without the need of an assumed kinetic model beyond piece-wise linearity of motion. Kinetic change points are detected using a likelihood ratio test in which the probability of no change is compared to the probability of a change occurring, given the experimental noise. A predetermined confidence interval minimizes the occurrence of false detections. Applying the method recursively to all sub-regions of a single molecule trajectory ensures that all kinetic change points are located. The algorithm presented allows rigorous and quantitative determination of kinetic change points in noisy single molecule observations without the need for filtering or binning, which reduce temporal resolution and obscure dynamics. The statistical framework for the approach and implementation details are discussed. The detection power of the algorithm is assessed using simulations with both single kinetic changes and multiple kinetic changes that typically arise in observations of single-molecule DNA-replication reactions. Implementations of the algorithm are provided in ImageJ plugin format written in Java and in the Julia language for numeric computing, with accompanying Jupyter Notebooks to allow reproduction of the analysis presented here.
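The single-change-point core of such a likelihood-ratio test is compact enough to sketch. The threshold construction and parameter names below are assumptions for illustration; the published algorithm applies the test recursively and derives its confidence bound more carefully.

```python
import numpy as np
from scipy.stats import chi2

def best_change_point(t, x, sigma, confidence=0.99, min_pts=5):
    """Locate the most likely kinetic change point in a piecewise-linear
    trajectory x(t) with Gaussian noise of known std sigma, via a
    likelihood-ratio test against a single-line fit (single-change sketch)."""
    def rss(ti, xi):
        slope, intercept = np.polyfit(ti, xi, 1)
        return np.sum((xi - (slope * ti + intercept)) ** 2)

    n = len(t)
    rss_single = rss(t, x)
    best_k, best_llr = None, -np.inf
    for k in range(min_pts, n - min_pts):
        llr = (rss_single - rss(t[:k], x[:k]) - rss(t[k:], x[k:])) / (2.0 * sigma ** 2)
        if llr > best_llr:
            best_k, best_llr = k, llr
    # Threshold from the confidence level, with a Bonferroni-style correction
    # for testing every candidate position (an assumed, not exact, form).
    threshold = 0.5 * chi2.ppf(confidence ** (1.0 / n), df=2)
    return (best_k, best_llr) if best_llr > threshold else (None, best_llr)
```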
CMOS imager for pointing and tracking applications
NASA Technical Reports Server (NTRS)
Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)
2006-01-01
Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.
Kuusk, Teele; De Bruijn, Roderick; Brouwer, Oscar R; De Jong, Jeroen; Donswijk, Maarten; Grivas, Nikolaos; Hendricksen, Kees; Horenblas, Simon; Prevoo, Warner; Valdés Olmos, Renato A; Van Der Poel, Henk G; Van Rhijn, Bas W G; Wit, Esther M; Bex, Axel
2018-06-01
Lymphatic drainage from renal tumors is unpredictable. In vivo drainage studies of primary lymphatic landing sites may reveal the variability and dynamics of lymphatic connections. The purpose of this study was to investigate the lymphatic drainage pattern of renal tumors in vivo with single photon emission/computerized tomography after intratumor radiotracer injection. We performed a phase II, prospective, single arm study to investigate the distribution of sentinel nodes from renal tumors on single photon emission/computerized tomography. Patients with cT1-3 (less than 10 cm) cN0M0 renal tumors of any subtype were enrolled in analysis. After intratumor ultrasound guided injection of 0.4 ml 99m Tc-nanocolloid we performed preoperative imaging of sentinel nodes with lymphoscintigraphy and single photon emission/computerized tomography. Sentinel and locoregional nonsentinel nodes were resected with a γ probe combined with a mobile γ camera. The primary study end point was the location of sentinel nodes outside the locoregional retroperitoneal templates on single photon emission/computerized tomography. Using a Simon minimax 2-stage design to detect a 25% extralocoregional retroperitoneal template location of sentinel nodes on imaging at α = 0.05 and 80% power at least 40 patients with sentinel node imaging on single photon emission/computerized tomography were needed. Of the 68 patients 40 underwent preoperative single photon emission/computerized tomography of sentinel nodes and were included in primary end point analysis. Lymphatic drainage outside the locoregional retroperitoneal templates was observed in 14 patients (35%). Eight patients (20%) had supradiaphragmatic sentinel nodes. Sentinel nodes from renal tumors were mainly located in the respective locoregional retroperitoneal templates. Simultaneous sentinel nodes were located outside the suggested lymph node dissection templates, including supradiaphragmatic sentinel nodes in more than a third of the patients. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Localization and force analysis at the single virus particle level using atomic force microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chih-Hao; Horng, Jim-Tong; Chang, Jeng-Shian
2012-01-06
Highlights: localization of single virus particles; force measurements; force mapping. Abstract: Atomic force microscopy (AFM) is a vital instrument in nanobiotechnology. In this study, we developed a method that enables AFM to simultaneously measure specific unbinding force and map the viral glycoprotein at the single virus particle level. The average diameter of virus particles from AFM images and the specificity between the viral surface antigen and antibody probe were integrated to design a three-stage method that sets the measuring area to a single virus particle before obtaining the force measurements, where the influenza virus was used as the object of measurements. Based on the proposed method and performed analysis, several findings can be derived from the results. The mean unbinding force of a single virus particle can be quantified, and no significant difference exists in this value among virus particles. Furthermore, the repeatability of the proposed method is demonstrated. The force mapping images reveal that the distributions of surface viral antigens recognized by the antibody probe were dispersed over the whole surface of individual virus particles under the proposed method and experimental criteria; meanwhile, the binding probabilities are similar among particles. This approach can be easily applied to most AFM systems without specific components or configurations. These results help understand force-based analysis at the single virus particle level and can therefore reinforce the capability of AFM to investigate a specific type of viral surface protein and its distribution.
A digital library for medical imaging activities
NASA Astrophysics Data System (ADS)
dos Santos, Marcelo; Furuie, Sérgio S.
2007-03-01
This work presents the development of an electronic infrastructure to make available a free, online, multipurpose and multimodality medical image database. The proposed infrastructure implements a distributed architecture for the medical image database, authoring tools, and a repository for multimedia documents. It also includes a peer-review model that assures the quality of the dataset. This public repository provides a single point of access for medical images and related information to facilitate retrieval tasks. The proposed approach has been used as an electronic teaching system in Radiology as well.
The vectorization of a ray tracing program for image generation
NASA Technical Reports Server (NTRS)
Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.
1984-01-01
Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
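The batched intersection idea translates directly into array operations. The sketch below intersects many rays with a single sphere in one set of vectorized numpy operations; it is an illustration of the principle, not the CYBER 205 implementation, and all names are hypothetical.

```python
import numpy as np

def intersect_sphere(origin, directions, center, radius):
    """Vectorized ray-sphere intersection: one origin, many unit direction
    vectors of shape (N, 3). Returns the distance t to the nearest hit in
    front of the origin, or np.inf where the ray misses."""
    oc = origin - center                      # (3,)
    b = 2.0 * directions @ oc                 # (N,)
    c = oc @ oc - radius ** 2                 # scalar
    disc = b ** 2 - 4.0 * c
    hit = disc >= 0.0
    t = np.full(directions.shape[0], np.inf)
    sqrt_disc = np.sqrt(disc[hit])
    t_near = (-b[hit] - sqrt_disc) / 2.0
    t_far = (-b[hit] + sqrt_disc) / 2.0
    t[hit] = np.where(t_near > 0.0, t_near, np.where(t_far > 0.0, t_far, np.inf))
    return t
```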
3D Surface Reconstruction and Volume Calculation of Rills
NASA Astrophysics Data System (ADS)
Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.
2015-04-01
We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, which is implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18 meter long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only yields a huge time advantage; it also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each 15-frame interval. The sharpness was estimated using a derivative-based metric. Then, VisualSfM detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions, and finally reconstructs a point cloud of the rill surface by triangulation of camera positions and feature points. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre and post models a visualization of the changes (erosion and accumulation areas) and a quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low contrast of the surface, too much motion blur), the sharpness algorithm leads to many more matching features. Hence the point densities of the 3D models are increased, which in turn improves the calculations.
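A sharpest-frame selection step of this kind is easy to prototype. The sketch below uses the variance of the Laplacian as a common derivative-based sharpness score; this is an assumed stand-in for the authors' exact metric, and the function name and window size are illustrative.

```python
import cv2

def sharpest_frames(video_path, window=15):
    """Pick the sharpest frame from every `window` consecutive video frames,
    scoring sharpness by the variance of the Laplacian."""
    cap = cv2.VideoCapture(video_path)
    selected, buffer = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        buffer.append((cv2.Laplacian(gray, cv2.CV_64F).var(), frame))
        if len(buffer) == window:
            selected.append(max(buffer, key=lambda s: s[0])[1])
            buffer = []
    cap.release()
    return selected  # frames to feed into the SfM reconstruction
```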
Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long
2018-01-01
With the development of satellite load technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of the on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to the computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, float-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single-processing node requires 10.6 s and consumes 17 W to focus on 25-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of one single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637
Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting
NASA Astrophysics Data System (ADS)
Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang
In this paper we studied the geometry of a three-dimensional tableau from a single realist painting - Scott Fraser’s Three way vanitas (2006). The tableau contains a carefully chosen complex arrangement of objects including a moth, egg, cup, and strand of string, glass of water, bone, and hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror and symmetric to the artist’s viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and the images in three plane mirrors depicted within the painting.
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2010-01-01
With the use of adaptive optics (AO), high-resolution microscopic imaging of the living human retina at the single-cell level has been achieved. In an adaptive optics confocal scanning laser ophthalmoscope (AOSLO) system, with a small field size (about 1 degree, 280 μm), the motion of the eye severely affects the stabilization of the real-time video images and results in significant distortions of the retina images. In this paper, the Scale-Invariant Feature Transform (SIFT) is used to extract stable point features from the retina images. The Kanade-Lucas-Tomasi (KLT) algorithm is applied to track the features. With the tracked features, the image distortion in each frame is removed by a second-order polynomial transformation, and 10 successive frames are co-added to enhance the image quality. Features of special interest in an image can also be selected manually and tracked by KLT. A point on a cone is selected manually, and the cone is tracked from frame to frame. PMID:21258443
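The SIFT-detection plus KLT-tracking step can be sketched with OpenCV. This is a minimal illustration, not the authors' pipeline: `SIFT_create` requires a recent OpenCV build, the frames are assumed to be 8-bit grayscale, and real use would also filter by tracking error before fitting the polynomial warp.

```python
import cv2
import numpy as np

def track_retina_features(frame0, frame1):
    """Detect SIFT keypoints in one AOSLO frame and track them into the next
    frame with pyramidal Lucas-Kanade (KLT). Returns matched point pairs."""
    sift = cv2.SIFT_create()
    keypoints = sift.detect(frame0, None)
    pts0 = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    pts1, status, err = cv2.calcOpticalFlowPyrLK(frame0, frame1, pts0, None)
    good0 = pts0[status.ravel() == 1]
    good1 = pts1[status.ravel() == 1]
    return good0, good1  # e.g. inputs to a second-order polynomial warp fit
```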
An update of commercial infrared sensing and imaging instruments
NASA Technical Reports Server (NTRS)
Kaplan, Herbert
1989-01-01
A classification of infrared sensing instruments by type and application, listing commercially available instruments, from single point thermal probes to on-line control sensors, to high speed, high resolution imaging systems is given. A review of performance specifications follows, along with a discussion of typical thermographic display approaches utilized by various imager manufacturers. An update report on new instruments, new display techniques and newly introduced features of existing instruments is given.
Resolution of Transverse Electron Beam Measurements using Optical Transition Radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ischebeck, Rasmus; Decker, Franz-Josef; Hogan, Mark
2005-06-22
In the plasma wakefield acceleration experiment E-167, optical transition radiation is used to measure the transverse profile of the electron bunches before and after the plasma acceleration. The distribution of the electric field from a single electron does not give a point-like distribution on the detector, but has a certain extension. Additionally, the resolution of the imaging system is affected by aberrations. The transverse profile of the bunch is thus convolved with a point spread function (PSF). Algorithms that deconvolve the image can help to improve the resolution. Imaged test patterns are used to determine the modulation transfer function of the lens. From this, the PSF can be reconstructed. The Lucy-Richardson algorithm is used to deconvolute this PSF from test images.
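For reference, the Lucy-Richardson iteration itself is short enough to write out. The sketch below is a generic implementation of the algorithm under the usual assumptions (known, normalized PSF; non-negative data), not the experiment's analysis code.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30, eps=1e-12):
    """Classic Lucy-Richardson deconvolution of `image` with a known `psf`."""
    estimate = np.full(image.shape, image.mean(), dtype=float)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)          # data / current model prediction
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```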
Mass Spectrometry Using Nanomechanical Systems: Beyond the Point-Mass Approximation.
Sader, John E; Hanay, M Selim; Neumann, Adam P; Roukes, Michael L
2018-03-14
The mass measurement of single molecules, in real time, is performed routinely using resonant nanomechanical devices. This approach models the molecules as point particles. A recent development now allows the spatial extent (and, indeed, image) of the adsorbate to be characterized using multimode measurements (Hanay, M. S., Nature Nanotechnol., 10, 2015, pp. 339-344). This "inertial imaging" capability is achieved through virtual re-engineering of the resonator's vibrating modes, by linear superposition of their measured frequency shifts. Here, we present a complementary and simplified methodology for the analysis of these inertial imaging measurements that exhibits similar performance while streamlining implementation. This development, together with the software that we provide, enables the broad implementation of inertial imaging that opens the door to a range of novel characterization studies of nanoscale adsorbates.
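For context, the point-mass baseline that inertial imaging goes beyond relates each mode's fractional frequency shift to the adsorbate mass and landing position: Δf_n/f_n ≈ -(m/2M)·φ_n²(a), with φ_n the normalized mode shape. The sketch below recovers (m, a) from two measured shifts using illustrative sinusoidal mode shapes, which are a simplification and not those of a real doubly clamped beam; all names and values are assumptions.

```python
import numpy as np

def solve_point_mass(shift1, shift2, M, L=1.0, n_grid=2000):
    """Recover adsorbate mass and position from the fractional frequency
    shifts of two modes, under the point-mass approximation."""
    phi1 = lambda x: np.sqrt(2.0) * np.sin(np.pi * x / L)
    phi2 = lambda x: np.sqrt(2.0) * np.sin(2.0 * np.pi * x / L)
    xs = np.linspace(0.01 * L, 0.99 * L, n_grid)
    # Each mode gives its own mass estimate at a trial position; the solution
    # is the position where the two estimates agree.
    m1 = -2.0 * M * shift1 / phi1(xs) ** 2
    m2 = -2.0 * M * shift2 / phi2(xs) ** 2
    i = np.argmin(np.abs(m1 - m2))
    return 0.5 * (m1[i] + m2[i]), xs[i]  # (mass estimate, position estimate)
```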
Concrete thawing studied by single-point ramped imaging.
Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W
1997-12-01
A series of two-dimensional images of proton distribution in a hardened concrete sample has been obtained during the thawing process (from -50 degrees C up to 11 degrees C). The SPRITE sequence is optimal for this study given the characteristic short relaxation times of water in this porous medium (T2* < 200 μs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple-point acquisition method is presented and the signal sensitivity improvement is discussed.
Single-shot polarimetry imaging of multicore fiber.
Sivankutty, Siddharth; Andresen, Esben Ravn; Bouwmans, Géraud; Brown, Thomas G; Alonso, Miguel A; Rigneault, Hervé
2016-05-01
We report an experimental test of single-shot polarimetry applied to the problem of real-time monitoring of the output polarization states in each core within a multicore fiber bundle. The technique uses a stress-engineered optical element, together with an analyzer, and provides a point spread function whose shape unambiguously reveals the polarization state of a point source. We implement this technique to monitor, simultaneously and in real time, the output polarization states of up to 180 single-mode fiber cores in both conventional and polarization-maintaining fiber bundles. We demonstrate also that the technique can be used to fully characterize the polarization properties of each individual fiber core, including eigen-polarization states, phase delay, and diattenuation.
High-performance imaging of stem cells using single-photon emissions
NASA Astrophysics Data System (ADS)
Wagenaar, Douglas J.; Moats, Rex A.; Hartsough, Neal E.; Meier, Dirk; Hugg, James W.; Yang, Tang; Gazit, Dan; Pelled, Gadi; Patt, Bradley E.
2011-10-01
Radiolabeled cells have been imaged for decades in the field of autoradiography. Recent advances in detector and microelectronics technologies have enabled the new field of "digital autoradiography," which remains limited to ex vivo specimens of thin tissue slices. The 3D field-of-view (FOV) of single cell imaging can be extended to millimeters if the low energy (10-30 keV) photon emissions of radionuclides are used for single-photon nuclear imaging. This new microscope uses a coded aperture foil made of highly attenuating elements such as gold or platinum to form the image as a kind of "lens". The detectors used for single-photon emission microscopy are typically silicon detectors with a pixel pitch less than 60 μm. The goal of this work is to image radiolabeled mesenchymal stem cells in vivo in an animal model of tendon repair processes. Single-photon nuclear imaging is an attractive modality for translational medicine since the labeled cells can be imaged simultaneously with the reparative processes by using the dual-isotope imaging technique. The details of our microscope's two-layer gold aperture and the operation of the energy-dispersive, pixellated silicon detector are presented along with the first demonstration of energy discrimination with a 57Co source. Cell labeling techniques have been augmented by genetic engineering with the sodium-iodide symporter, a type of reporter gene imaging method that enables in vivo uptake of free 99mTc or an iodine isotope at a time point days or weeks after the insertion of the genetically modified stem cells into the animal model. This microscopy work in animal research may expand to the imaging of reporter-enabled stem cells simultaneously with the expected biological repair process in human clinical trials of stem cell therapies.
An evaluation of attention models for use in SLAM
NASA Astrophysics Data System (ADS)
Dodge, Samuel; Karam, Lina
2013-12-01
In this paper we study the application of visual saliency models for the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it is unclear which attention model is best for this purpose. To this aim, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
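A keypoint repeatability score of the kind used for such comparisons can be computed as follows. The exact criterion in the paper may differ (e.g. overlap of detector regions); this sketch uses the simpler point-distance version under a known homography, with illustrative names and tolerance.

```python
import numpy as np

def repeatability(pts_a, pts_b, H, tol=3.0):
    """Fraction of keypoints detected in image A that, after mapping through
    the known homography H, fall within `tol` pixels of a keypoint in image B."""
    pts_a = np.asarray(pts_a, dtype=float)  # (N, 2)
    pts_b = np.asarray(pts_b, dtype=float)  # (M, 2)
    homog = np.hstack([pts_a, np.ones((len(pts_a), 1))])
    proj = homog @ H.T
    proj = proj[:, :2] / proj[:, 2:3]       # back to inhomogeneous coordinates
    dists = np.linalg.norm(proj[:, None, :] - pts_b[None, :, :], axis=2)
    matched = (dists.min(axis=1) <= tol).sum()
    return matched / min(len(pts_a), len(pts_b))
```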
Astatine-211 imaging by a Compton camera for targeted radiotherapy.
Nagao, Yuto; Yamaguchi, Mitsutaka; Watanabe, Shigeki; Ishioka, Noriko S; Kawachi, Naoki; Watabe, Hiroshi
2018-05-24
Astatine-211 is a promising radionuclide for targeted radiotherapy. Imaging the distribution of targeted radiotherapeutic agents in a patient's body is required for optimization of treatment strategies. We proposed to image 211At with high-energy photons to overcome some problems in conventional planar or single-photon emission computed tomography imaging. We performed an imaging experiment of a point-like 211At source using a Compton camera, and demonstrated the capability of imaging 211At with the high-energy photons for the first time. Copyright © 2018 Elsevier Ltd. All rights reserved.
Fast imaging of filaments in the X-point region of Alcator C-Mod
Terry, J. L.; Ballinger, S.; Brunner, D.; ...
2017-01-27
A rich variety of field-aligned fluctuations has been revealed using fast imaging of Dα emission from Alcator C-Mod's lower X-point region. Field-aligned filamentary fluctuations are observed along the inner divertor leg, within the Private-Flux-Zone (PFZ), in the Scrape-Off Layer (SOL) outside the outer divertor leg, and, under some conditions, at or above the X-point. The locations and dynamics of the filaments in these regions are strikingly complex in C-Mod. Changes in the filaments' generation appear to be ordered by plasma density and magnetic configuration. Filaments are not observed for plasmas with n/nGreenwald ≲ 0.12 nor are they observed in Upper Single Null configurations. In a Lower Single Null with 0.12 ≲ n/nGreenwald ≲ 0.45 and B×∇B directed down, filaments typically move up the inner divertor leg toward the X-point. Reversing the field direction results in the appearance of filaments outside of the outer divertor leg. With the divertor targets "detached", filaments inside the LCFS are seen. Lastly, these studies were motivated by observations of filaments in the X-point and PFZ regions in MAST, and comparisons with those observations are made.
A rapid and robust gradient measurement technique using dynamic single-point imaging.
Jang, Hyungseok; McMillan, Alan B
2017-09-01
We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space position that can be used for image reconstruction. The gradient measurement technique also can be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
von Diezmann, Alex; Shechtman, Yoav; Moerner, W. E.
2017-01-01
Single-molecule super-resolution fluorescence microscopy and single-particle tracking are two imaging modalities that illuminate the properties of cells and materials on spatial scales down to tens of nanometers, or with dynamical information about nanoscale particle motion in the millisecond range, respectively. These methods generally use wide-field microscopes and two-dimensional camera detectors to localize molecules to much higher precision than the diffraction limit. Given the limited total photons available from each single-molecule label, both modalities require careful mathematical analysis and image processing. Much more information can be obtained about the system under study by extending to three-dimensional (3D) single-molecule localization: without this capability, visualization of structures or motions extending in the axial direction can easily be missed or confused, compromising scientific understanding. A variety of methods for obtaining both 3D super-resolution images and 3D tracking information have been devised, each with their own strengths and weaknesses. These include imaging of multiple focal planes, point-spread-function engineering, and interferometric detection. These methods may be compared based on their ability to provide accurate and precise position information of single-molecule emitters with limited photons. To successfully apply and further develop these methods, it is essential to consider many practical concerns, including the effects of optical aberrations, field-dependence in the imaging system, fluorophore labeling density, and registration between different color channels. Selected examples of 3D super-resolution imaging and tracking are described for illustration from a variety of biological contexts and with a variety of methods, demonstrating the power of 3D localization for understanding complex systems. PMID:28151646
Li, Jing; Zhang, Miao; Chen, Lin; Cai, Congbo; Sun, Huijun; Cai, Shuhui
2015-06-01
We employ an amplitude-modulated chirp pulse to selectively excite spins in one or more regions of interest (ROIs) to realize reduced field-of-view (rFOV) imaging based on a single-shot spatiotemporally encoded (SPEN) sequence and Fourier transform reconstruction. The proposed rFOV imaging method was theoretically analyzed and illustrated with numerical simulation and tested with phantom experiments and in vivo rat experiments. In addition, the point spread function was used to demonstrate the feasibility of the proposed method. To evaluate the proposed method, the rFOV results were compared with those obtained using the EPI method with orthogonal RF excitation. The simulation and experimental results show that the proposed method can image one or two separated ROIs along the SPEN dimension in a single shot with higher spatial resolution, lower sensitivity to field inhomogeneity, and practically no aliasing artifacts. In addition, the proposed method may produce rFOV images with comparable signal-to-noise ratio to the rFOV EPI images. The proposed method is promising for applications under severe susceptibility heterogeneity and for imaging separate ROIs simultaneously. Copyright © 2015 Elsevier Inc. All rights reserved.
Implementation and image processing of a multi-focusing bionic compound eye
NASA Astrophysics Data System (ADS)
Wang, Xin; Guo, Yongcai; Luo, Jiasai
2018-01-01
In this paper, a new BCE with a multi-focusing microlens array (MLA) is proposed. The BCE consists of a detachable micro-hole array (MHA), a multi-focusing MLA, and a spherical substrate, allowing it to have a large FOV without crosstalk and stray light. The MHA was fabricated by precision machining, and the parameters of each microlens vary depending on the aperture of the corresponding micro-hole, through which the multi-focusing MLA was realized under negative pressure. Without pattern transfer or substrate reshaping, the whole fabrication process can be accomplished within several minutes using microinjection technology. Furthermore, the method is cost-effective and easy to operate, thus providing a feasible route to mass production of the BCE. The corresponding image processing was used to stitch the sub-images of the individual microlenses, yielding an integral image with a large FOV. The image stitching exploits the overlap between adjacent sub-images, and the feature points between adjacent sub-images were captured by Harris corner detection. By using adaptive non-maximal suppression, numerous potential mismatching points were eliminated and the algorithm efficiency was improved effectively. Following this, random sample consensus (RANSAC) was used for feature point matching, from which the projective transformation relating the images is obtained. Accurate image matching was then realized after smooth transitions were produced by a weighted-average method. Experimental results indicate that the image-stitching algorithm can be applied to the curved BCE over a large field of view.
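A pairwise stitching step of this kind can be prototyped with OpenCV. In the sketch below, ORB features and a RANSAC homography stand in for the paper's Harris + adaptive non-maximal suppression + RANSAC pipeline, and the simple overlay at the end replaces the weighted-average blending; function names and canvas size are illustrative only.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    """Stitch two overlapping sub-images from adjacent microlenses by
    estimating a RANSAC homography from matched features."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = img_b.shape[:2]
    warped = cv2.warpPerspective(img_a, H, (w * 2, h))  # crude canvas size
    warped[0:h, 0:w] = img_b                            # naive overlay; blend in practice
    return warped
```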
Simultaneous imaging of oxygen tension and blood flow in animals using a digital micromirror device.
Ponticorvo, Adrien; Dunn, Andrew K
2010-04-12
In this study we present a novel imaging method that combines high resolution cerebral blood flow imaging with a highly flexible map of absolute pO(2). In vivo measurements of pO(2) in animals using phosphorescence quenching is a well established method, and is preferable over electrical probes which are inherently invasive and are limited to single point measurements. However, spatially resolved pO(2) measurements using phosphorescence lifetime quenching typically require expensive cameras to obtain images of pO(2) and often suffer from poor signal to noise. Our approach enables us to retain the high temporal resolution and sensitivity of single point detection of phosphorescence by using a digital micromirror device (DMD) to selectively illuminate arbitrarily shaped regions of tissue. In addition, by simultaneously using Laser Speckle Contrast Imaging (LSCI) to measure relative blood flow, we can better examine the relationship between blood flow and absolute pO(2). We successfully used this instrument to study changes that occur during ischemic conditions in the brain with enough spatial resolution to clearly distinguish different regions. This novel instrument will provide researchers with an inexpensive and improved technique to examine multiple hemodynamic parameters simultaneously in the brain as well as other tissues.
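The conversion from a phosphorescence decay to absolute pO2 relies on a lifetime fit and the Stern-Volmer relation 1/τ = 1/τ0 + kq·pO2. The sketch below illustrates that conversion only; τ0 and kq are probe-specific calibration constants and the values used here are placeholders, not the study's numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

def po2_from_decay(t, signal, tau0=650e-6, kq=300.0):
    """Estimate pO2 (mmHg) from a phosphorescence decay trace by fitting a
    mono-exponential lifetime and applying the Stern-Volmer relation.
    tau0 [s] and kq [1/(s*mmHg)] are placeholder calibration constants."""
    decay = lambda t, a, tau, c: a * np.exp(-t / tau) + c
    p0 = (signal.max() - signal.min(), tau0 / 2.0, signal.min())
    (a, tau, c), _ = curve_fit(decay, t, signal, p0=p0)
    return (1.0 / tau - 1.0 / tau0) / kq
```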
Infrared imaging results of an excited planar jet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrington, R.B.
1991-12-01
Planar jets are used for many applications including heating, cooling, and ventilation. Generally such a jet is designed to provide good mixing within an enclosure. In building applications, the jet provides both thermal comfort and adequate indoor air quality. Increased mixing rates may lead to lower short-circuiting of conditioned air, elimination of dead zones within the occupied zone, reduced energy costs, increased occupant comfort, and higher indoor air quality. This paper discusses using an infrared imaging system to show the effect of excitation of a jet on the spread angle and on the jet mixing efficiency. Infrared imaging captures a large number of data points in real time (over 50,000 data points per image) providing significant advantages over single-point measurements. We used a screen mesh with a time constant of approximately 0.3 seconds as a target for the infrared camera to detect temperature variations in the jet. The infrared images show increased jet spread due to excitation of the jet. Digital data reduction and analysis show changes in jet isotherms and quantify the increased mixing caused by excitation. 17 refs., 20 figs.
2012-02-10
Then and Now: These images illustrate the dramatic improvement in NASA computing power over the last 23 years, and its effect on the number of grid points used for flow simulations. At left, an image from the first full-body Navier-Stokes simulation (1988) of an F-16 fighter jet showing pressure on the aircraft body, and fore-body streamlines at Mach 0.90. This steady-state solution took 25 hours using a single Cray X-MP processor to solve the 500,000 grid-point problem. Investigator: Neal Chaderjian, NASA Ames Research Center At right, a 2011 snapshot from a Navier-Stokes simulation of a V-22 Osprey rotorcraft in hover. The blade vortices interact with the smaller turbulent structures. This very detailed simulation used 660 million grid points, and ran on 1536 processors on the Pleiades supercomputer for 180 hours. Investigator: Neal Chaderjian, NASA Ames Research Center; Image: Tim Sandstrom, NASA Ames Research Center
Rodríguez, Jaime; Martín, María T; Herráez, José; Arias, Pedro
2008-12-10
Photogrammetry is a science with many fields of application in civil engineering where image processing is used for different purposes. In most cases, multiple images are used simultaneously for the reconstruction of 3D scenes. However, the use of isolated images is becoming more and more frequent, for which it is necessary to calculate the orientation of the image with respect to the object space (exterior orientation); this is usually done through three rotations (Euler angles) defined by known points in the object space. We describe the resolution of this problem by means of a single rotation through the vanishing line of the image space, entirely external to the object, that is, without any contact with it. The results obtained appear to be optimal, and the procedure is simple and of great utility, since no points on the object are required, which is very useful in situations where access is difficult.
Visualization and imaging methods for flames in microgravity
NASA Technical Reports Server (NTRS)
Weiland, Karen J.
1993-01-01
The visualization and imaging of flames has long been acknowledged as the starting point for learning about and understanding combustion phenomena. It provides an essential overall picture of the time and length scales of processes and guides the application of other diagnostics. It is perhaps even more important in microgravity combustion studies, where it is often the only non-intrusive diagnostic measurement easily implemented. Imaging also aids in the interpretation of single-point measurements, such as temperature, provided by thermocouples, and velocity, by hot-wire anemometers. This paper outlines the efforts of the Microgravity Combustion Diagnostics staff at NASA Lewis Research Center in the area of visualization and imaging of flames, concentrating on methods applicable for reduced-gravity experimentation. Several techniques are under development: intensified array camera imaging, and two-dimensional temperature and species concentrations measurements. A brief summary of results in these areas is presented and future plans mentioned.
Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Images with Sparse GCPs
NASA Astrophysics Data System (ADS)
Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.
2018-04-01
Geometric calibration is able to provide high-accuracy geometric coordinates of spaceborne SAR images through accurate geometric parameters in the Range-Doppler model determined by ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only used for single-platform SAR images and can't support hybrid geometric calibration for multi-platform images. To solve the above problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image that contains GCPs. Second, a point tracking algorithm is used to obtain the tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example to study the hybrid geometric calibration method using 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. The GPS data extracted from a GNSS receiver are used to assess the plane accuracy after calibration. The results after geometric calibration with sparse GCPs show that the geometric positioning accuracy is 3 m for TSX/TDX images and 7.5 m for GF-3 images.
Yoon, Se Jin; Noh, Si Cheol; Choi, Heung Ho
2007-01-01
The infrared diagnosis device provides two-dimensional images and patient-oriented results that can be easily understood by the inspection target by using infrared cameras; however, it has disadvantages such as large size, high price, and inconvenient maintenance. In this regard, this study proposed a small-sized diagnosis device for body heat using a single infrared sensor and implemented an infrared detection system, based on a single infrared sensor and an algorithm that renders thermography from the obtained point-source temperature data. The developed system had a temperature resolution of 0.1 degree and a reproducibility of +/-0.1 degree. The accuracy was 90.39% at the error bound of +/-0 degree and 99.98% at that of +/-0.1 degree. In order to evaluate the proposed algorithm and system, the results were compared with infrared images from a camera-based method. Thermal images that have clinical meaning were obtained from a patient who has a lesion to verify its clinical applicability.
Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.
Feng, Bing; Zeng, Gengsheng L
2014-04-10
A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting between a point-source and the centers of sub-pixels inside each crystal pitch area. For each line ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO 2 ) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
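The attenuation ray-tracing at the heart of the normalization can be caricatured in a few lines. The sketch below keeps only a single central ray per crystal pixel and a simple slab path length; the paper's model additionally ray-traces 60 × 60 sub-pixels per crystal, the neighbouring crystals, and the SiO2 reflector, and the attenuation coefficient and thickness here are illustrative numbers.

```python
import numpy as np

def ideal_flood(source, pixel_centers, mu=0.34, thickness=10.0):
    """Very simplified ideal flood image for a point source: detection
    probability per crystal pixel along its central ray, 1 - exp(-mu * path),
    with the slab path length scaled by the incidence angle.
    mu [1/mm] and thickness [mm] are placeholder values."""
    source = np.asarray(source, dtype=float)           # (3,)
    centers = np.asarray(pixel_centers, dtype=float)   # (N, 3) on the detector face
    rays = centers - source
    lengths = np.linalg.norm(rays, axis=1)
    cos_incidence = np.abs(rays[:, 2]) / lengths       # detector normal taken as z
    path = thickness / np.clip(cos_incidence, 1e-6, None)
    return 1.0 - np.exp(-mu * path)
```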
NASA Technical Reports Server (NTRS)
Nabors, Sammy
2015-01-01
NASA offers companies an optical system that provides a unique panoramic perspective with a single camera. NASA's Marshall Space Flight Center has developed a technology that combines a panoramic refracting optic (PRO) lens with a unique detection system to acquire a true 360-degree field of view. Although current imaging systems can acquire panoramic images, they must use up to five cameras to obtain the full field of view. MSFC's technology obtains its panoramic images from one vantage point.
NASA Astrophysics Data System (ADS)
Bosman, Peter A. N.; Alderliesten, Tanja
2016-03-01
We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a-priori grid alignment with image structures/areas that are expected to deform more. This allows (far) less grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume with and without a multi-resolution scheme and find a substantial benefit of using smart grid initialization.
Kim, Tae-Hyung; Baek, Moon-Young; Park, Ji Eun; Ryu, Young Jin; Cheon, Jung-Eun; Kim, In-One; Choi, Young Hun
2018-06-01
The purpose of this study is to compare DWI for pediatric brain evaluation using single-shot echo-planar imaging (EPI), periodically rotated overlapping parallel lines with enhanced reconstruction (Blade), and readout-segmented EPI (Resolve). Blade, Resolve, and single-shot EPI were performed for 27 pediatric patients (median age, 9 years), and three datasets were independently reviewed by two radiologists. Qualitative analyses were performed for perceptive coarseness, image distortion, susceptibility-related changes, motion artifacts, and lesion conspicuity using a 5-point Likert scale. Quantitative analyses were conducted for spatial distortion and signal uniformity of each sequence. Mean scores were 2.13, 3.17, and 3.76 for perceptive coarseness; 4.85, 3.96, and 2.19 for image distortion; 4.76, 3.96, and 2.30 for susceptibility-related change; 4.96, 3.83, and 4.69 for motion artifacts; and 2.71, 3.75, and 1.92 for lesion conspicuity, for Blade, Resolve, and single-shot EPI, respectively. Blade and Resolve showed better quality than did single-shot EPI for image distortion, susceptibility-related changes, and lesion conspicuity. Blade showed less image distortion, fewer susceptibility-related changes, and fewer motion artifacts than did Resolve, whereas lesion conspicuity was better with Resolve. Blade showed increased signal variation compared with Resolve and single-shot EPI (coefficients of variation were 0.10, 0.08, and 0.05 for lateral ventricle; 0.13, 0.09, and 0.05 for centrum semiovale; and 0.16, 0.09, and 0.06 for pons in Blade, Resolve, and single-shot EPI, respectively). DWI with Resolve or Blade yields better quality regarding distortion, susceptibility-related changes, and lesion conspicuity, compared with single-shot EPI. Blade is less susceptible to motion artifacts than is Resolve, whereas Resolve yields less noise and better lesion conspicuity than does Blade.
The utility of polarized heliospheric imaging for space weather monitoring.
DeForest, C E; Howard, T A; Webb, D F; Davies, J A
2016-01-01
A polarizing heliospheric imager is a critical next generation tool for space weather monitoring and prediction. Heliospheric imagers can track coronal mass ejections (CMEs) as they cross the solar system, using sunlight scattered by electrons in the CME. This tracking has been demonstrated to improve the forecasting of impact probability and arrival time for Earth-directed CMEs. Polarized imaging allows locating CMEs in three dimensions from a single vantage point. Recent advances in heliospheric imaging have demonstrated that a polarized imager is feasible with current component technology. Developing this technology to a high technology readiness level is critical for space weather relevant imaging from either a near-Earth or deep-space mission. In this primarily technical review, we develop preliminary hardware requirements for a space weather polarizing heliospheric imager system and outline possible ways to flight qualify and ultimately deploy the technology operationally on upcoming specific missions. We consider deployment as an instrument on NOAA's Deep Space Climate Observatory follow-on near the Sun-Earth L1 Lagrange point, as a stand-alone constellation of smallsats in low Earth orbit, or as an instrument located at the Sun-Earth L5 Lagrange point. The critical first step is the demonstration of the technology, in either a science or prototype operational mission context.
NASA Astrophysics Data System (ADS)
Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.
2018-02-01
In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a detailed analysis of the LiveWire interactive boundary extraction algorithm, this paper proposes a new algorithm that focuses on improving the speed of LiveWire. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the resulting low-resolution image. Secondly, the LiveWire shortest path is computed with a direction-guided search over the control point set, exploiting the spatial relationship between the two control points that the user provides in real time. Thirdly, the search order of the neighboring points of the starting node is fixed in advance, and an ordinary queue rather than a priority queue is used as the storage pool when relaxing shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling converts the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform, which offers fast image decomposition and reconstruction and is consistent with the texture features of the image, with those of the direction-guided optimal path search, which reduces the time complexity of the original algorithm. As a result, the algorithm speeds up interactive boundary extraction while reflecting the boundary information of the image more comprehensively, improving both execution efficiency and robustness.
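The coarse stage of the scheme above runs on the low-resolution image produced by the Haar wavelet transform. The following sketch shows only that first step, a single-level 2D Haar approximation band computed by block averaging (up to a constant normalization); the shortest-path search itself is omitted, and nothing here reproduces the authors' implementation.

```python
import numpy as np

def haar_ll(image):
    """Single-level 2D Haar approximation (LL band), up to normalization.

    Averages non-overlapping 2x2 blocks, halving each dimension; the coarse
    boundary search runs on this low-resolution image before the result is
    projected back to full resolution.
    """
    h, w = image.shape
    img = image[: h - h % 2, : w - w % 2].astype(float)   # make dims even
    return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2]
                   + img[1::2, 0::2] + img[1::2, 1::2])

img = np.arange(64, dtype=float).reshape(8, 8)
print(haar_ll(img).shape)  # (4, 4) low-resolution approximation image
```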
Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images
Frey, Eric C.; Humm, John L.; Ljungberg, Michael
2012-01-01
The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429
Ion photon emission microscope
Doyle, Barney L.
2003-04-22
An ion beam analysis system that creates microscopic multidimensional image maps of the effects of high energy ions from an unfocussed source upon a sample by correlating the exact entry point of an ion into a sample by projection imaging of the ion-induced photons emitted at that point with a signal from a detector that measures the interaction of that ion within the sample. The emitted photons are collected in the lens system of a conventional optical microscope, and projected on the image plane of a high resolution single photon position sensitive detector. Position signals from this photon detector are then correlated in time with electrical effects, including the malfunction of digital circuits, detected within the sample that were caused by the individual ion that created these photons initially.
Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime
Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke; ...
2017-06-29
X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining and labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.
Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.
2018-05-01
Deep learning has been widely used for image classification in recent years, and its use for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image, which leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of the CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error, while the type II error is slightly higher. The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.
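A minimal sketch of converting a whole point cloud into a single raster for pixel-wise classification; the choice of the minimum height per cell as the pixel feature and the cell size are assumptions for illustration, not the feature set used in the paper.

```python
import numpy as np

def point_cloud_to_image(points, cell=1.0):
    """Rasterise a LIDAR point cloud (N x 3, columns x, y, z) into one image.

    Each pixel stores the minimum height of the points falling in its cell
    (empty cells remain NaN); the whole cloud becomes a single raster that a
    fully convolutional network can classify pixel-wise.
    """
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    img = np.full((ij[:, 1].max() + 1, ij[:, 0].max() + 1), np.nan)
    # Sort by height descending so the last write per cell is the minimum.
    order = np.argsort(-points[:, 2])
    img[ij[order, 1], ij[order, 0]] = points[order, 2]
    return img

pts = np.random.default_rng(0).uniform(0, 50, size=(10_000, 3))
print(point_cloud_to_image(pts, cell=2.0).shape)  # one raster for the whole cloud
```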
Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.
Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija
2017-12-02
This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.
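Cylindrical fitting of a trunk section can be illustrated by an algebraic least-squares circle fit (the Kasa fit) to one horizontal slice of points; this is a generic sketch, not the photogrammetric software's implementation, and the slice extraction step is omitted.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to a 2D slice of trunk points.

    Solves x^2 + y^2 + a*x + b*y + c = 0 in a linear least-squares sense and
    returns centre (cx, cy) and radius r; stacking such fits over height
    slices approximates the cylinder axis and the diameter profile.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (a1, a2, a3), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -a1 / 2, -a2 / 2
    r = np.sqrt(cx ** 2 + cy ** 2 - a3)
    return cx, cy, r

# Noisy points on a synthetic 0.15 m radius trunk section.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([0.15 * np.cos(t), 0.15 * np.sin(t)]) + rng.normal(0, 0.005, (200, 2))
print(fit_circle(pts))  # centre near (0, 0), radius near 0.15 m
```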
An improved ASIFT algorithm for indoor panorama image matching
NASA Astrophysics Data System (ADS)
Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong
2017-07-01
The generation of 3D models of indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in one single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm for implementing these functions. Compared with the SIFT algorithm, the ASIFT algorithm generates more feature points and achieves higher matching accuracy, even for panoramic images with obvious distortions. However, the algorithm is very time-consuming because of its complex operations and does not perform well for some indoor scenes under poor light or without rich textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from applying tilt-and-rotation affine transformations to the images to applying only the tilt affine transformation. Finally, the results are re-projected into the panoramic image space. Experiments in different environments show that this method not only ensures the precision of feature point extraction and matching, but also greatly reduces the computing time.
Wang, Yunsheng; Weinacker, Holger; Koch, Barbara
2008-01-01
A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point clouds is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud whose point heights represent the absolute heights of the ground objects is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at the different height levels is then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at the different height levels can be derived. PMID:27879916
The plasma filling factor of coronal bright points. II. Combined EIS and TRACE results
NASA Astrophysics Data System (ADS)
Dere, K. P.
2009-04-01
Aims: In a previous paper, the volumetric plasma filling factor of coronal bright points was determined from spectra obtained with the Extreme ultraviolet Imaging Spectrometer (EIS). The analysis of these data showed that the median plasma filling factor was 0.015. One interpretation of this result was that the small filling factor was consistent with a single coronal loop with a width of 1-2″, somewhat below the apparent width. In this paper, higher spatial resolution observations with the Transition Region and Corona Explorer (TRACE) are used to test this interpretation. Methods: Rastered spectra of regions of the quiet Sun were recorded by the EIS during operations with the Hinode satellite. Many of these regions were simultaneously observed with TRACE. Calibrated intensities of Fe XII lines were obtained and images of the quiet corona were constructed from the EIS measurements. Emission measures were determined from the EIS spectra and geometrical widths of coronal bright points were obtained from the TRACE images. Electron densities were determined from density-sensitive line ratios measured with EIS. A comparison of the emission measures and bright point widths with the electron densities yielded the plasma filling factor. Results: The median electron density of coronal bright points is 3 × 10⁹ cm⁻³ at a temperature of 1.6 × 10⁶ K. The volumetric plasma filling factor of coronal bright points was found to vary from 3 × 10⁻³ to 0.3 with a median value of 0.04. Conclusions: The current set of EIS and TRACE coronal bright-point observations indicates a median plasma filling factor of 0.04. This can be interpreted as evidence of considerable subresolution structure in coronal bright points, or as the result of a single completely filled plasma loop with a width on the order of 0.2-1.5″ that has not been spatially resolved in these measurements.
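The filling factor reported above presumably follows from the standard relation between column emission measure, electron density, and path length, EM = f·n_e²·L. The sketch below evaluates f = EM/(n_e²·L) using the density quoted in the abstract; the emission measure and the assumed 1-arcsec width are illustrative values chosen for the example, not numbers from the paper.

```python
def filling_factor(em_column, n_e, width_cm):
    """Volumetric filling factor from the standard relation EM = f * n_e^2 * L.

    em_column : column emission measure along the line of sight (cm^-5)
    n_e       : electron density from a density-sensitive line ratio (cm^-3)
    width_cm  : apparent bright-point width, used as the path length L (cm)
    """
    return em_column / (n_e ** 2 * width_cm)

arcsec_cm = 7.25e7                 # roughly 1 arcsec on the Sun, in cm
n_e = 3e9                          # cm^-3, the median density quoted above
L = 1.0 * arcsec_cm                # assume a structure about 1 arcsec wide
em = 2.6e25                        # cm^-5, assumed purely for illustration
print(f"f = {filling_factor(em, n_e, L):.3f}")   # about 0.04 with these inputs
```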
Blind image deconvolution using the Fields of Experts prior
NASA Astrophysics Data System (ADS)
Dong, Wende; Feng, Huajun; Xu, Zhihai; Li, Qi
2012-11-01
In this paper, we present a method for single image blind deconvolution. To mitigate its ill-posedness, we formulate the problem in a Bayesian probabilistic framework and use a prior named Fields of Experts (FoE), learnt from natural images, to regularize the latent image. Furthermore, due to the sparse distribution of the point spread function (PSF), we adopt a Student-t prior to regularize it. An improved alternating minimization (AM) approach is proposed to solve the resulting optimization problem. Experiments on both synthetic and real-world blurred images show that the proposed method can achieve results of high quality.
[Development of the automatic dental X-ray film processor].
Bai, J; Chen, H
1999-07-01
This paper introduces a multiple-point technique for detecting the density of dental X-ray films. With this infrared multiple-point detection technique, a single-chip microcomputer control system analyzes the effectiveness of film developing in real time in order to achieve a good image. Based on this technology, we designed an intelligent automatic dental X-ray film processor.
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
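The benefit of Gaussian positioning and weighting of the integration (ray-end) points over equally spaced points can be seen already in a 1D toy integral. The sketch below is generic numerical quadrature, not the YAP-(S)PET II system-matrix code, and the integrand is an arbitrary smooth stand-in for a detector response profile.

```python
import math
import numpy as np

def gauss_points(a, b, n):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

def trapezoid_points(a, b, n):
    """Equally spaced nodes with trapezoidal weights on [a, b]."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[[0, -1]] *= 0.5
    return x, w

f = lambda x: 1.0 / (1.0 + x)        # smooth stand-in for a response profile
exact = math.log(2.0)                # exact integral of f over [0, 1]
for n in (3, 5):
    xg, wg = gauss_points(0.0, 1.0, n)
    xt, wt = trapezoid_points(0.0, 1.0, n)
    print(n, abs(np.sum(wg * f(xg)) - exact), abs(np.sum(wt * f(xt)) - exact))
# Gaussian node placement reaches a given accuracy with far fewer points.
```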
High Resolution X-ray-Induced Acoustic Tomography
Xiang, Liangzhong; Tang, Shanshan; Ahmad, Moiz; Xing, Lei
2016-01-01
Absorption-based CT imaging has been an invaluable tool in medical diagnosis, biology, and materials science. However, CT requires a large set of projection data and a high radiation dose to achieve superior image quality. In this letter, we report a new imaging modality, X-ray Induced Acoustic Tomography (XACT), which takes advantage of high sensitivity to X-ray absorption and high ultrasonic resolution in a single modality. A single projection X-ray exposure is sufficient to generate acoustic signals in 3D space because the X-ray generated acoustic waves are of a spherical nature and propagate in all directions from their point of generation. We demonstrate the successful reconstruction of gold fiducial markers with a spatial resolution of about 350 μm. XACT reveals a new imaging mechanism and provides uncharted opportunities for structural determination with X-rays. PMID:27189746
Martial, Franck P.; Hartell, Nicholas A.
2012-01-01
Confocal microscopy is routinely used for high-resolution fluorescence imaging of biological specimens. Most standard confocal systems scan a laser across a specimen and collect emitted light passing through a single pinhole to produce an optical section of the sample. Sequential scanning on a point-by-point basis limits the speed of image acquisition, and even the fastest commercial instruments struggle to resolve the temporal dynamics of rapid cellular events such as calcium signals. Various approaches have been introduced that increase the speed of confocal imaging. Nipkow disk microscopes, for example, use arrays of pinholes or slits on a spinning disk to achieve parallel scanning, which significantly increases the speed of acquisition. Here we report the development of a microscope module that utilises a digital micromirror device as a spatial light modulator to provide programmable confocal optical sectioning with a single camera, at high spatial and axial resolution, at speeds limited by the frame rate of the camera. The digital micromirror acts as a solid-state Nipkow disk but with the added ability to change the pinhole size and separation and to control the light intensity on a mirror-by-mirror basis. The use of an arrangement of concave and convex mirrors in the emission pathway instead of lenses overcomes the astigmatism inherent with DMD devices, increases light collection efficiency and ensures image collection is achromatic so that images are perfectly aligned at different wavelengths. Combined with non-laser light sources, this allows low cost, high-speed, multi-wavelength image acquisition without the need for complex wavelength-dependent image alignment. The micromirror can also be used for programmable illumination allowing spatially defined photoactivation of fluorescent proteins. We demonstrate the use of this system for high-speed calcium imaging using both a single wavelength calcium indicator and a genetically encoded, ratiometric, calcium sensor. PMID:22937130
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Tuell, Grady
2010-04-01
The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.
Imaging Transcriptional Regulation of Eukaryotic mRNA Genes: Advances and Outlook.
Yao, Jie
2017-01-06
Regulation of eukaryotic transcription in vivo occurs at distinct stages. Previous research has identified many active or repressive transcription factors (TFs) and core transcription components and studied their functions in vitro and in vivo. Nonetheless, how individual TFs act in concert to regulate mRNA gene expression in a single cell remains poorly understood. Direct observation of TF assembly and disassembly and of the various biochemical reactions during transcription of a single-copy gene in vivo is the ideal approach to study this problem. Research in this area requires developing novel techniques for single-cell transcription imaging and integrating imaging studies into understanding the molecular biology of transcription. In the past decade, advanced cell imaging has enabled unprecedented capabilities to visualize individual TF molecules, to track single transcription sites, and to detect individual mRNAs in fixed and living cells. These studies have provided several novel insights into transcriptional regulation, such as the "hit-and-run" model and transcription bursting, that could not be obtained by in vitro biochemical analysis. At this point, the key question is how to achieve a deeper understanding or discover novel mechanisms of eukaryotic transcriptional regulation by imaging transcription in single cells. Meanwhile, further technical advancements are likely required for visualizing distinct kinetic steps of transcription on a single-copy gene in vivo. This review article summarizes recent progress in the field and describes the challenges and opportunities ahead. Copyright © 2016 Elsevier Ltd. All rights reserved.
Single particle maximum likelihood reconstruction from superresolution microscopy images
Verdier, Timothée; Gunzenhauser, Julia; Manley, Suliana; Castelnovo, Martin
2017-01-01
Point localization superresolution microscopy enables fluorescently tagged molecules to be imaged beyond the optical diffraction limit, reaching single-molecule localization precisions down to a few nanometers. For small objects whose sizes are a few times this precision, localization uncertainty prevents the straightforward extraction of a structural model from the reconstructed images. We demonstrate in the present work that this limitation can be overcome at the single particle level, requiring no particle averaging, by using a maximum likelihood reconstruction (MLR) method perfectly suited to the stochastic nature of such superresolution imaging. We validate this method by extracting structural information from both simulated and experimental PALM data of immature virus-like particles of the Human Immunodeficiency Virus (HIV-1). MLR allows us to measure the radii of individual viruses with a precision of a few nanometers and confirms the incomplete closure of the viral protein lattice. The quantitative results of our analysis are consistent with previous cryoelectron microscopy characterizations. Our study establishes the framework for a method that can be broadly applied to PALM data to determine the structural parameters of an existing structural model, and is particularly well suited to heterogeneous features due to its single particle implementation. PMID:28253349
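A heavily simplified sketch of single-particle maximum-likelihood fitting: 2D localizations are assumed to scatter with Gaussian precision around a circular shell of unknown centre and radius. This radial-noise model and the synthetic particle below are illustrative assumptions and do not reproduce the authors' full imaging model.

```python
import numpy as np
from scipy.optimize import minimize

def fit_ring_mle(locs, sigma):
    """Maximum-likelihood estimate of centre and radius from localizations.

    Simplified model: each localization's radial distance from the (unknown)
    centre is Normal(R, sigma), with sigma the localization precision.
    """
    def nll(params):
        cx, cy, r = params
        d = np.hypot(locs[:, 0] - cx, locs[:, 1] - cy)
        return np.sum((d - r) ** 2) / (2 * sigma ** 2)

    x0 = np.r_[locs.mean(axis=0), locs.std()]          # crude initial guess
    res = minimize(nll, x0, method="Nelder-Mead")
    return res.x                                        # (cx, cy, R)

# Synthetic shell: 65 nm radius, 10 nm localization precision, 300 localizations.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 300)
locs = 65.0 * np.column_stack([np.cos(theta), np.sin(theta)]) + rng.normal(0, 10.0, (300, 2))
print(fit_ring_mle(locs, sigma=10.0))    # centre near (0, 0), radius near 65 nm
```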
IIPImage: Large-image visualization
NASA Astrophysics Data System (ADS)
Pillay, Ruven
2014-08-01
IIPImage is an advanced high-performance feature-rich image server system that enables online access to full resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.
Capturing the plenoptic function in a swipe
NASA Astrophysics Data System (ADS)
Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi
2016-09-01
Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.
Development of new photon-counting detectors for single-molecule fluorescence microscopy.
Michalet, X; Colyer, R A; Scalia, G; Ingargiola, A; Lin, R; Millaud, J E; Weiss, S; Siegmund, Oswald H W; Tremsin, Anton S; Vallerga, John V; Cheng, A; Levi, M; Aharoni, D; Arisaka, K; Villa, F; Guerrieri, F; Panzeri, F; Rech, I; Gulinatti, A; Zappa, F; Ghioni, M; Cova, S
2013-02-05
Two optical configurations are commonly used in single-molecule fluorescence microscopy: point-like excitation and detection to study freely diffusing molecules, and wide field illumination and detection to study surface immobilized or slowly diffusing molecules. Both approaches have common features, but also differ in significant aspects. In particular, they use different detectors, which share some requirements but also have major technical differences. Currently, two types of detectors best fulfil the needs of each approach: single-photon-counting avalanche diodes (SPADs) for point-like detection, and electron-multiplying charge-coupled devices (EMCCDs) for wide field detection. However, there is room for improvements in both cases. The first configuration suffers from low throughput owing to the analysis of data from a single location. The second, on the other hand, is limited to relatively low frame rates and loses the benefit of single-photon-counting approaches. During the past few years, new developments in point-like and wide field detectors have started addressing some of these issues. Here, we describe our recent progresses towards increasing the throughput of single-molecule fluorescence spectroscopy in solution using parallel arrays of SPADs. We also discuss our development of large area photon-counting cameras achieving subnanosecond resolution for fluorescence lifetime imaging applications at the single-molecule level.
Trading efficiency for effectiveness in similarity-based indexing for image databases
NASA Astrophysics Data System (ADS)
Barros, Julio E.; French, James C.; Martin, Worthy N.; Kelly, Patrick M.
1995-11-01
Image databases typically manage feature data that can be viewed as points in a feature space. Some features, however, can be better expressed as a collection of points or described by a probability distribution function (PDF) rather than as a single point. In earlier work we introduced a similarity measure and a method for indexing and searching the PDF descriptions of these items that guarantees an answer equivalent to sequential search. Unfortunately, certain properties of the data can restrict the efficiency of that method. In this paper we extend that work and examine trade-offs between efficiency and answer quality or effectiveness. These trade-offs reduce the amount of work required during a search by reducing the number of undesired items fetched without excluding an excessive number of the desired ones.
Fusion of light-field and photogrammetric surface form data
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard K.
2017-08-01
Photogrammetry based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information, as well as the photogrammetric point cloud. Compared to a traditional camera that only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map can be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to improve the measurement uncertainty of a millimetre scale 3D object, compared to that from the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.
Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.
Arikan, Murat; Preiner, Reinhold; Wimmer, Michael
2016-02-01
With the enormous advances in acquisition technology over the last years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method for texturing a set of depth maps in a preprocessing step and stitching them at runtime has been proposed to represent large scenes. However, the rendering performance of this method is strongly dependent on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method to break these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from image cameras and then perform a graph-cut based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify for each view ray which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.
Study of coherent reflectometer for imaging internal structures of highly scattering media
NASA Astrophysics Data System (ADS)
Poupardin, Mathieu; Dolfi, Agnes
1996-01-01
Optical reflectometers are potentially useful tools for imaging the internal structures of turbid media, particularly biological media. To get a point-by-point image, an active imaging system has to distinguish between light scattered from a sample volume and light scattered from other locations in the medium. With coherence-based reflectometers, this discrimination of light can be realized in two ways: through geometric selection or through temporal selection. In this paper we present both methods, showing in each case the influence of the different parameters on the size of the sample volume under the assumption of single scattering. We also study the influence on the detection efficiency of the coherence loss of the incident light resulting from multiple scattering. We adapt a model, first developed for atmospheric lidar in a turbulent atmosphere, to obtain an analytical expression of this detection efficiency as a function of the optical coefficients of the medium.
PRESBYOPIA OPTOMETRY METHOD BASED ON DIOPTER REGULATION AND CHARGE COUPLE DEVICE IMAGING TECHNOLOGY.
Zhao, Q; Wu, X X; Zhou, J; Wang, X; Liu, R F; Gao, J
2015-01-01
With the development of photoelectric technology and single-chip microcomputer technology, objective optometry, also known as automatic optometry, is becoming increasingly precise. This paper proposes a presbyopia optometry method based on diopter regulation and Charge Coupled Device (CCD) imaging technology and, in the meantime, designs a light path for the measurement system. The method projects a test figure onto the fundus, and the image reflected from the fundus is detected by the CCD. The image is then automatically identified by computer, and the far-point and near-point diopters are determined to calculate the lens parameters. This is a fully automatic objective optometry method that eliminates subjective factors of the tested subject. Furthermore, it can acquire the lens parameters of presbyopia accurately and quickly and can also be used to measure the lens parameters of hyperopia, myopia and astigmatism.
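For context, far- and near-point distances translate into diopters through standard vergence relations; the sketch below uses these textbook formulas with illustrative distances and is not the paper's measurement algorithm.

```python
def diopters(distance_m):
    """Vergence (in diopters) of a point at the given distance in metres."""
    return 1.0 / distance_m

def accommodation_amplitude(far_point_m, near_point_m):
    """Amplitude of accommodation: vergence difference of near and far points."""
    return diopters(near_point_m) - diopters(far_point_m)

# Illustrative values only: far point at 2 m, near point at 0.8 m.
far_m, near_m = 2.0, 0.8
print(f"distance correction: {-diopters(far_m):+.2f} D")    # -0.50 D (myopic error)
print(f"accommodation amplitude: {accommodation_amplitude(far_m, near_m):.2f} D")
```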
X-ray imaging crystal spectrometer for extended X-ray sources
Bitter, Manfred L.; Fraenkel, Ben; Gorman, James L.; Hill, Kenneth W.; Roquemore, A. Lane; Stodiek, Wolfgang; von Goeler, Schweickhard E.
2001-01-01
Spherically or toroidally curved, double-focusing crystals are used in a spectrometer for X-ray diagnostics of an extended X-ray source, such as a hot plasma produced in a tokamak fusion experiment, to provide spatially and temporally resolved data on plasma parameters using the imaging properties at Bragg angles near 45°. For a Bragg angle of 45°, the spherical crystal focuses a bundle of near-parallel X-rays (the cross section of which is determined by the cross section of the crystal) from the plasma to a point on a detector, with parallel rays inclined to the main plane of diffraction focused to different points on the detector. Thus, it is possible to radially image the plasma X-ray emission at different wavelengths simultaneously with a single crystal.
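The 45° geometry ties the diffracted wavelength to the lattice spacing through Bragg's law, n·λ = 2d·sin θ. The snippet below evaluates this relation; the crystal 2d spacing used is an assumed, illustrative value, not one specified in the text.

```python
import math

def bragg_wavelength(two_d_angstrom, theta_deg, order=1):
    """Bragg's law: n * lambda = 2 d sin(theta); returns lambda in angstroms."""
    return two_d_angstrom * math.sin(math.radians(theta_deg)) / order

# Assumed, illustrative crystal with 2d = 6.69 angstroms, Bragg angles near 45 degrees.
for theta in (44.0, 45.0, 46.0):
    print(theta, round(bragg_wavelength(6.69, theta), 3), "angstrom")
```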
McEvoy, Linda K; Holland, Dominic; Hagler, Donald J; Fennema-Notestine, Christine; Brewer, James B; Dale, Anders M
2011-06-01
To assess whether single-time-point and longitudinal volumetric magnetic resonance (MR) imaging measures provide predictive prognostic information in patients with amnestic mild cognitive impairment (MCI). This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Written informed consent was obtained from all participants or the participants' legal guardians. Cross-validated discriminant analyses of MR imaging measures were performed to differentiate 164 Alzheimer disease (AD) cases from 203 healthy control cases. Separate analyses were performed by using data from MR images obtained at one time point or by combining single-time-point measures with 1-year change measures. The resulting discriminant functions were applied to 317 MCI cases to derive individual patient risk scores. Risk of conversion to AD was estimated as a continuous function of risk score percentile. Kaplan-Meier survival curves were computed for risk score quartiles. Odds ratios (ORs) for conversion to AD were computed between the highest and lowest quartile scores. Individualized risk estimates from baseline MR examinations indicated that the 1-year risk of conversion to AD ranged from 3% to 40% (average group risk, 17%; OR, 7.2 for highest vs lowest score quartiles). Including measures of 1-year change in global and regional volumes significantly improved risk estimates (P = .001), with the risk of conversion to AD in the subsequent year ranging from 3% to 69% (average group risk, 27%; OR, 12.0 for highest vs lowest score quartiles). Relative to the risk of conversion to AD conferred by the clinical diagnosis of MCI alone, MR imaging measures yield substantially more informative patient-specific risk estimates. Such predictive prognostic information will be critical if disease-modifying therapies become available.
3D Surface Reconstruction of Rills in a Spanish Olive Grove
NASA Astrophysics Data System (ADS)
Brings, Christine; Gronz, Oliver; Seeger, Manuel; Wirtz, Stefan; Taguas, Encarnación; Ries, Johannes B.
2016-04-01
The low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique is used for 3D surface reconstruction and difference calculation of an 18-meter-long rill in southern Spain (Andalusia, Puente Genil). The images were taken with a Canon HD video camera before and after a rill experiment in an olive grove. Compared to a photo camera, recording with a video camera offers a huge time advantage, and the method also guarantees more than adequately overlapping sharp images. For each model, approximately 20 minutes of video were taken. As SfM needs single images, the sharpest image was automatically selected from each interval of 8 frames. The sharpness was estimated using a derivative-based metric. Then, VisualSfM detects feature points in each image, searches for matching feature points in all image pairs and recovers the camera and feature positions. Finally, by triangulation of the camera positions and feature points, the software reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre- and post-experiment models allow a visualization of the changes (erosion and accumulation areas) and a quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models and must therefore be converted to real-world values via reference measurements. The results show that rills in olive groves are highly dynamic due to the lack of vegetation cover under the trees, so that the rill can incise down to the bedrock. Another reason for the high activity is the intensive use of machinery.
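The frame selection step can be sketched as follows; the variance of the Laplacian is used here as one possible derivative-based sharpness metric (the abstract does not name the exact metric), and the synthetic frames are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def sharpness(frame):
    """Derivative-based sharpness score: variance of the Laplacian."""
    return laplace(frame.astype(float)).var()

def select_sharpest(frames, interval=8):
    """Keep the sharpest frame from every block of `interval` video frames."""
    selected = []
    for start in range(0, len(frames), interval):
        block = frames[start:start + interval]
        selected.append(block[int(np.argmax([sharpness(f) for f in block]))])
    return selected

# Synthetic video: 32 noise frames, increasingly blurred within each block of 8,
# so the first (unblurred) frame of each block should be selected.
rng = np.random.default_rng(3)
frames = [gaussian_filter(rng.normal(size=(64, 64)), sigma=(i % 8) * 0.5) for i in range(32)]
print(len(select_sharpest(frames, interval=8)))   # 4 selected frames
```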
Performance analysis of grazing incidence imaging systems. [X ray telescope aberrations
NASA Technical Reports Server (NTRS)
Winkler, C. E.; Korsch, D.
1977-01-01
An exact expression relating the coordinates of a point on the incident ray, a point of reflection from an arbitrary surface, and a point on the reflected ray is derived. The exact relation is then specialized for the case of grazing incidence, and first-order and third-order systematic analyses are carried out for a single reflective surface and then for a combination of two surfaces. The third-order treatment yields a complete set of primary aberrations for single-element and two-element systems. The importance of a judicious choice of coordinate system in clearly showing field curvature to be the predominant aberration for a two-element system is discussed. The validity of the theory is verified through comparisons with exact ray trace results for the case of the telescope.
Pohl, Lydia; Kölbl, Angelika; Werner, Florian; Mueller, Carsten W; Höschen, Carmen; Häusler, Werner; Kögel-Knabner, Ingrid
2018-04-30
Aluminium (Al)-substituted goethite is ubiquitous in soils and sediments. The extent of Al substitution affects the physicochemical properties of the mineral and influences its macroscale properties. Bulk analysis only provides total Al/Fe ratios without providing information on the Al substitution of single minerals. Here, we demonstrate that nanoscale secondary ion mass spectrometry (NanoSIMS) enables the precise determination of the Al content of single minerals, while simultaneously visualising the variation of the Al/Fe ratio. Al-substituted goethite samples were synthesized with increasing Al concentrations of 0.1, 3, and 7 % and analysed by NanoSIMS in combination with established bulk spectroscopic methods (XRD, FTIR, Mössbauer spectroscopy). The high spatial resolution (50-150 nm) of NanoSIMS is accompanied by a high number of single-point measurements. We statistically evaluated the Al/Fe ratios derived from NanoSIMS, while maintaining the spatial information and reassigning it to its original localization. XRD analyses confirmed increasing concentrations of incorporated Al within the goethite structure. Mössbauer spectroscopy revealed that 11 % of the goethite samples generated at high Al concentrations consisted of hematite. The NanoSIMS data show that the Al/Fe ratios are in agreement with bulk data derived from total digestion and exhibit small spatial variability between single-point measurements. More advantageously, statistical analysis and reassignment of single-point measurements allowed us to identify distinct spots with significantly higher or lower Al/Fe ratios. NanoSIMS measurements confirmed the capacity to produce images indicating the uniform increase of Al concentration in goethite. Using a combination of statistical analysis and information from complementary spectroscopic techniques (XRD, FTIR and Mössbauer spectroscopy), we were further able to identify spots with lower Al/Fe ratios as hematite. Copyright © 2018 John Wiley & Sons, Ltd.
The effects of spatial sampling choices on MR temperature measurements.
Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L
2011-02-01
The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and the position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle), with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative to the temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm³ resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm³ resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm³ resolution for the 4-mm circle heating case. Copyright © 2010 Wiley-Liss, Inc.
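The partial-volume effect behind these numbers can be illustrated by averaging an idealized 1D Gaussian hot spot over voxels of different widths; the peak temperature rise and spot size below are illustrative assumptions, not values from the study.

```python
import numpy as np

def measured_peak(true_peak_c, fwhm_mm, voxel_mm, n=2001):
    """Peak of a 1D Gaussian temperature profile after voxel averaging.

    Averages the profile over a voxel centred on the hot spot, mimicking the
    partial-volume effect of coarse in-plane sampling.
    """
    sigma = fwhm_mm / 2.355
    x = np.linspace(-voxel_mm / 2, voxel_mm / 2, n)
    return true_peak_c * np.mean(np.exp(-x ** 2 / (2 * sigma ** 2)))

# Assumed 20 deg C rise with a 2 mm FWHM focal spot, sampled at three voxel widths.
for voxel in (0.5, 1.0, 2.0):
    print(f"{voxel} mm voxel -> {measured_peak(20.0, 2.0, voxel):.1f} deg C rise")
# Larger voxels increasingly underestimate the focal temperature rise.
```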
Calibration Procedures on Oblique Camera Setups
NASA Astrophysics Data System (ADS)
Kemper, G.; Melykuti, B.; Yu, C.
2016-06-01
Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the aerial triangulation (AT) process as individually pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed on the basis of a special calibration flight with 351 shots of all 5 cameras and the registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first step, with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process, the radial and tangential parameters were switched on individually for the camera heads, and after that the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others. This must in any case be performed on a complete mission to obtain stability between the single camera heads. Determining the lever arms from the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angles. Having prepared all these previous steps, one obtains a highly accurate sensor that enables fully automated data extraction with a rapid update of existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.
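The radial and tangential parameters mentioned above are commonly modelled with the Brown-Conrady distortion polynomial; the sketch below applies that generic model to a few points and is not the project's calibration software.

```python
import numpy as np

def brown_conrady(xy, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion.

    xy are normalised image coordinates relative to the principal point;
    during calibration these coefficients (plus camera constant and principal
    point) are estimated within the bundle adjustment.
    """
    x, y = xy[:, 0], xy[:, 1]
    r2 = x ** 2 + y ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
    yd = y * radial + p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
    return np.column_stack([xd, yd])

pts = np.array([[0.1, 0.0], [0.3, 0.2], [-0.4, 0.1]])
print(brown_conrady(pts, k1=-0.05, k2=0.01, p1=1e-4, p2=-2e-4))
```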
A compressed sensing approach for resolution improvement in fiber-bundle based endomicroscopy
NASA Astrophysics Data System (ADS)
Dumas, John P.; Lodhi, Muhammad A.; Bajwa, Waheed U.; Pierce, Mark C.
2018-02-01
Endomicroscopy techniques such as confocal, multi-photon, and wide-field imaging have all been demonstrated using coherent fiber-optic imaging bundles. While the narrow diameter and flexibility of fiber bundles are clinically advantageous, the number of resolvable points in an image is conventionally limited to the number of individual fibers within the bundle. We introduce concepts from the compressed sensing (CS) field to fiber-bundle based endomicroscopy, allowing images to be recovered with more resolvable points than fibers in the bundle. The distal face of the fiber bundle is treated as a low-resolution sensor with circular pixels (fibers) arranged in a hexagonal lattice. A spatial light modulator is located conjugate to the object and distal face, applying multiple high resolution masks to the intermediate image prior to propagation through the bundle. We acquire images of the proximal end of the bundle for each (known) mask pattern and then apply CS inversion algorithms to recover a single high-resolution image. We first developed a theoretical forward model describing image formation through the mask and fiber bundle. We then imaged objects through a rigid fiber bundle and demonstrated that our CS endomicroscopy architecture can recover intra-fiber details while filling inter-fiber regions with interpolation. Finally, we examined the relationship between reconstruction quality and the ratio of the number of mask elements to the number of fiber cores, finding that images could be generated with approximately 28,900 resolvable points for a 1,000 fiber region in our platform.
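A toy sketch of the compressed-sensing recovery idea is given below, assuming a sparse object, random binary masks, and simple block averaging as a stand-in for the fiber cores; the authors' actual forward model accounts for circular cores on a hexagonal lattice and uses their own inversion algorithms, so this is only an illustrative simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32                      # high-resolution grid (n x n)
f = 8                       # "fiber" super-pixels per side (block size n // f)
k_masks = 12                # number of SLM mask patterns

# sparse toy object: a few bright points on a dark background
x_true = np.zeros((n, n))
x_true[rng.integers(0, n, 10), rng.integers(0, n, 10)] = 1.0

masks = rng.integers(0, 2, size=(k_masks, n, n)).astype(float)   # binary mask patterns

def forward(x):
    """Mask the high-res image, then average over each fiber core (block pixel)."""
    mx = masks * x
    return mx.reshape(k_masks, f, n // f, f, n // f).mean(axis=(2, 4))

def adjoint(y):
    """Adjoint of `forward`: spread each block measurement back, reweight by the masks."""
    up = np.repeat(np.repeat(y, n // f, axis=1), n // f, axis=2) / (n // f) ** 2
    return np.sum(masks * up, axis=0)

y = forward(x_true)

# ISTA: minimize ||forward(x) - y||^2 + lam * ||x||_1
x, lam, step = np.zeros((n, n)), 1e-3, 1.0
for _ in range(500):
    x = x - step * adjoint(forward(x) - y)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft threshold

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```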
Xu, Lijun; Chen, Lulu; Li, Xiaolu; He, Tao
2014-10-01
In this paper, we propose a projective rectification method for infrared images obtained from the measurement of temperature distribution on an air-cooled condenser (ACC) surface by using projection profile features and cross-ratio invariability. In the research, the infrared (IR) images acquired by the four IR cameras utilized are distorted to different degrees. To rectify the distorted IR images, the sizes of the acquired images are first enlarged by means of bicubic interpolation. Then, uniformly distributed control points are extracted in the enlarged images by constructing quadrangles with detected vertical lines and detected or constructed horizontal lines. The corresponding control points in the anticipated undistorted IR images are extracted by using projection profile features and cross-ratio invariability. Finally, a third-order polynomial rectification model is established and the coefficients of the model are computed with the mapping relationship between the control points in the distorted and anticipated undistorted images. Experimental results obtained from an industrial ACC unit show that the proposed method performs much better than any previous method we have adopted. Furthermore, all rectified images are stitched together to obtain a complete image of the whole ACC surface with a much higher spatial resolution than that obtained by using a single camera, which is not only useful but also necessary for more accurate and comprehensive analysis of ACC performance and more reliable optimization of ACC operations.
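A minimal sketch of the third-order polynomial rectification step is shown below: a design matrix of all monomials up to degree three is fitted by least squares to pairs of control points in the distorted and anticipated undistorted images (at least ten well-distributed point pairs are needed); function and variable names are illustrative, not those of the paper.

```python
import numpy as np

def poly3_design(x, y):
    """Design matrix of all monomials x**i * y**j with i + j <= 3 (10 terms)."""
    return np.stack([x**i * y**j for i in range(4) for j in range(4 - i)], axis=1)

def fit_rectification(src_pts, dst_pts):
    """Least-squares fit of a third-order polynomial mapping from control points in
    the distorted image (src) to their anticipated undistorted positions (dst)."""
    A = poly3_design(src_pts[:, 0], src_pts[:, 1])
    cx, *_ = np.linalg.lstsq(A, dst_pts[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst_pts[:, 1], rcond=None)
    return cx, cy

def apply_rectification(cx, cy, x, y):
    """Map arbitrary distorted coordinates through the fitted polynomial model."""
    A = poly3_design(np.ravel(x), np.ravel(y))
    return (A @ cx).reshape(np.shape(x)), (A @ cy).reshape(np.shape(x))
```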
Single image non-uniformity correction using compressive sensing
NASA Astrophysics Data System (ADS)
Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu
2016-05-01
A non-uniformity correction (NUC) method for an infrared focal plane array imaging system was proposed. The algorithm, based on compressive sensing (CS) of a single image, overcomes the "ghost artifact" and heavy computational cost drawbacks of traditional NUC algorithms. A point-sampling matrix was designed to validate the CS measurements in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were recovered with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image from only 25% of the pixels. Only a small difference was found between the correction results using 100% of the pixels and the reconstruction results using 40% of the pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.
Mechanical stability of a microscope setup working at a few kelvins for single-molecule localization
NASA Astrophysics Data System (ADS)
Hinohara, Takuya; Hamada, Yuki I.; Nakamura, Ippei; Matsushita, Michio; Fujiyoshi, Satoru
2013-06-01
A great advantage of single-molecule fluorescence imaging is the localization of molecules with a precision beyond the diffraction limit. Although longer signal acquisition yields higher precision, acquisition time at room temperature is normally limited by photobleaching, thermal diffusion, and so on. At low temperatures of a few kelvins, much longer acquisition is possible and will improve precision if the sample and the objective are held stably enough. The present work examined the holding stability of the sample and objective at 1.5 K in superfluid helium in a helium bath. The stability was evaluated by the localization precision of a point scattering source, a polymer bead. Scattered light was collected by the objective and imaged by a home-built rigid imaging unit. The standard deviation of the centroid position determined for 800 images taken continuously in 17 min was 0.5 nm in the horizontal and 0.9 nm in the vertical direction.
NASA Astrophysics Data System (ADS)
Weng, Jiawen; Clark, David C.; Kim, Myung K.
2016-05-01
A numerical reconstruction method based on compressive sensing (CS) for self-interference incoherent digital holography (SIDH) is proposed to achieve sectional imaging by single-shot in-line self-interference incoherent hologram. The sensing operator is built up based on the physical mechanism of SIDH according to CS theory, and a recovery algorithm is employed for image restoration. Numerical simulation and experimental studies employing LEDs as discrete point-sources and resolution targets as extended sources are performed to demonstrate the feasibility and validity of the method. The intensity distribution and the axial resolution along the propagation direction of SIDH by angular spectrum method (ASM) and by CS are discussed. The analysis result shows that compared to ASM the reconstruction by CS can improve the axial resolution of SIDH, and achieve sectional imaging. The proposed method may be useful to 3D analysis of dynamic systems.
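For reference, a standard angular spectrum propagation routine of the kind used for the ASM reconstruction discussed above is sketched below; the grid size, pixel pitch, and wavelength are assumptions supplied by the caller, and the CS reconstruction itself is not reproduced here.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the angular spectrum method.

    field      : 2D complex array sampled on a grid with pixel pitch dx (meters)
    wavelength : wavelength in meters
    z          : propagation distance in meters (positive or negative)
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # keep propagating components only; evanescent waves are suppressed
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Scanning z and inspecting the reconstructed intensity is the usual way to compare the axial response of ASM against a CS reconstruction, as done in the study.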
NASA Astrophysics Data System (ADS)
Solli, Martin; Lenz, Reiner
In this paper we describe how to include high-level semantic information, such as aesthetics and emotions, into Content Based Image Retrieval. We present a color-based emotion-related image descriptor that can be used for describing the emotional content of images. The color emotion metric used is derived from psychophysical experiments and based on three variables: activity, weight and heat. It was originally designed for single colors, but recent research has shown that the same emotion estimates can be applied in the retrieval of multi-colored images. Here we describe a new approach, based on the assumption that perceived color emotions in images are mainly affected by homogeneous regions, defined by the emotion metric, and by transitions between regions. RGB coordinates are converted to emotion coordinates, and for each emotion channel, statistical measurements of gradient magnitudes within a stack of low-pass filtered images are used for finding interest points corresponding to homogeneous regions and transitions between regions. Emotion characteristics are derived for patches surrounding each interest point and saved in a bag-of-emotions that can, for instance, be used for retrieving images based on emotional content.
Rossa, Carlos; Sloboda, Ron; Usmani, Nawaid; Tavakoli, Mahdi
2016-07-01
This paper proposes a method to predict the deflection of a flexible needle inserted into soft tissue based on the observation of deflection at a single point along the needle shaft. We model the needle-tissue as a discretized structure composed of several virtual, weightless, rigid links connected by virtual helical springs whose stiffness coefficient is found using a pattern search algorithm that only requires the force applied at the needle tip during insertion and the needle deflection measured at an arbitrary insertion depth. Needle tip deflections can then be predicted for different insertion depths. Verification of the proposed method in synthetic and biological tissue shows a deflection estimation error of [Formula: see text]2 mm for images acquired at 35 % or more of the maximum insertion depth, and decreases to 1 mm for images acquired closer to the final insertion depth. We also demonstrate the utility of the model for prostate brachytherapy, where in vivo needle deflection measurements obtained during early stages of insertion are used to predict the needle deflection further along the insertion process. The method can predict needle deflection based on the observation of deflection at a single point. The ultrasound probe can be maintained at the same position during insertion of the needle, which avoids complications of tissue deformation caused by the motion of the ultrasound probe.
Single cell HaloChip assay on paper for point-of-care diagnosis.
Ma, Liyuan; Qiao, Yong; Jones, Ross; Singh, Narendra; Su, Ming
2016-11-01
This article describes a paper-based, low-cost single cell HaloChip assay that can be used to assess drug- and radiation-induced DNA damage at the point of care. Printing ink on paper effectively blocks the fluorescence of paper materials, provides high affinity to charged polyelectrolytes, and prevents penetration of water into the paper. After exposure to a drug or ionizing radiation, cells are patterned on paper to create discrete and ordered single cell arrays, embedded inside an agarose gel, and lysed with an alkaline solution to allow damaged DNA fragments to diffuse out of the nuclear cores and form diffuse halos in the gel matrix. After staining the DNA with a fluorescent dye, characteristic halos form around the cells, and the level of DNA damage can be quantified by determining the sizes of the halos and nuclei with a MATLAB-based image processing program. With its low fabrication cost and easy operation, this HaloChip-on-paper platform will be attractive for rapidly and accurately determining DNA damage in point-of-care evaluation of drug efficacy and radiation conditions. Graphical Abstract Single cell HaloChip on paper.
Large-angle illumination STEM: Toward three-dimensional atom-by-atom imaging
Ishikawa, Ryo; Lupini, Andrew R.; Hinuma, Yoyo; ...
2014-11-26
To completely understand and control materials and their properties, it is of critical importance to determine their atomic structures in all three dimensions. Recent revolutionary advances in electron optics – the inventions of geometric and chromatic aberration correctors as well as electron source monochromators – have provided fertile ground for performing optical depth sectioning at atomic-scale dimensions. In this study we theoretically demonstrate the imaging of top/sub-surface atomic structures and identify the depth of single dopants, single vacancies and other point defects within materials by large-angle illumination scanning transmission electron microscopy (LAI-STEM). The proposed method also allows us to measure specimen properties such as thickness or three-dimensional surface morphology using observations from a single crystallographic orientation.
Scialpi, Michele; Schiavone, Raffaele; D'Andrea, Alfredo; Palumbo, Isabella; Magli, Michelle; Gravante, Sabrina; Falcone, Giuseppe; De Filippi, Claudio; Manganaro, Lucia; Palumbo, Barbara
2015-05-01
To evaluate the image quality and the diagnostic efficacy of single-phase whole-body 64-slice multidetector CT (MDCT) for pediatric oncology. Chest-abdomen-pelvis CT examinations with a single-phase split-bolus technique were evaluated for T: detection and delineation of the primary tumor (assessment of the extent of the lesion into neighboring tissues), N: regional lymph nodes and M: distant metastasis. Quality scores (5-point scale) were assessed by two radiologists on parenchymal and vascular enhancement. Accurate TNM staging in terms of detection and delineation of the primary tumor, regional lymph nodes and distant metastasis was obtained in all cases. For image quality and artifact severity, the kappa value for interobserver agreement was 0.754 (p<0.001), indicating very good agreement between observers. The single-pass total-body CT split-bolus technique reached the highest overall image quality and accurate TNM staging in pediatric patients with cancer. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
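A minimal sketch of the (unweighted) Cohen's kappa computation underlying the reported interobserver agreement is given below; the example scores are hypothetical, and the study may have used a weighted variant, which is common for ordinal 5-point scales.

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items (e.g., 5-point quality scores)."""
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    cats = np.union1d(a, b)
    n = len(a)
    conf = np.array([[np.sum((a == i) & (b == j)) for j in cats] for i in cats], float)
    po = np.trace(conf) / n                                   # observed agreement
    pe = np.sum(conf.sum(axis=1) * conf.sum(axis=0)) / n**2   # chance agreement
    return (po - pe) / (1.0 - pe)

# e.g., two radiologists scoring ten hypothetical examinations on the 5-point scale
print(cohens_kappa([5, 4, 4, 3, 5, 5, 2, 4, 3, 5],
                   [5, 4, 3, 3, 5, 5, 2, 4, 4, 5]))
```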
AUGUSTO'S Sundial: Image-Based Modeling for Reverse Engineering Purposes
NASA Astrophysics Data System (ADS)
Baiocchi, V.; Barbarella, M.; Del Pizzo, S.; Giannone, F.; Troisi, S.; Piccaro, C.; Marcantonio, D.
2017-02-01
A photogrammetric survey of a unique archaeological site is reported in this paper. The survey was performed using both a panoramic image-based solution and a classical procedure. The panoramic image-based solution was carried out employing a commercial solution: the Trimble V10 Imaging Rover (IR). This instrument is an integrated camera system that captures 360-degree digital panoramas, composed of 12 images, with a single push. The direct comparison of the point clouds obtained with the traditional photogrammetric procedure and with the V10 stations, using the same GCP coordinates, was carried out in CloudCompare, an open-source software package that compares two point clouds and provides all the main statistical data. The site is a portion of the dial plate of the "Horologium Augusti" inaugurated in 9 B.C.E. in the area of Campo Marzio and still present intact in the same position, in a cellar of a building in Rome, around 7 meters below the present ground level.
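The cloud-to-cloud comparison performed in CloudCompare can be approximated by nearest-neighbour distances between the two clouds, as in the hedged sketch below; the sample data are synthetic, and the real comparison also reports more elaborate statistics and local modelling.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference, compared):
    """Nearest-neighbour (cloud-to-cloud) distances from each point of `compared`
    to the `reference` cloud, both given as (N, 3) arrays in the same frame."""
    tree = cKDTree(reference)
    d, _ = tree.query(compared, k=1)
    return d

# summary statistics similar to those reported by point cloud comparison tools
ref = np.random.rand(10000, 3)
other = ref + np.random.normal(scale=0.002, size=ref.shape)
d = cloud_to_cloud_distances(ref, other)
print("mean %.4f  std %.4f  max %.4f" % (d.mean(), d.std(), d.max()))
```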
NASA Astrophysics Data System (ADS)
Shi, Yeyin; Thomasson, J. Alex; Yang, Chenghai; Cope, Dale; Sima, Chao
2017-05-01
Though they share many commonalities, one of the major differences between conventional high-altitude airborne remote sensing and low-altitude unmanned aerial system (UAS)-based remote sensing is that the latter has a much smaller ground footprint for each image shot. To cover the same area on the ground, the low-altitude UAS-based platform must take many highly overlapped images to produce a good mosaic, instead of just one or a few image shots by the high-altitude aerial platform. Such a UAS flight usually takes 10 to 30 minutes or even longer to complete; environmental lighting changes during this time span cannot be ignored, especially when spectral variations across a field are of interest. In this case study, we compared the visible reflectance of two aerial images - one generated from mosaicked UAS images, the other from a single image taken by a manned aircraft - over the same agricultural field to quantitatively evaluate their spectral variations caused by the different data acquisition strategies. Specifically, we (1) developed our customized ground calibration points (GCPs) and an associated radiometric calibration method for UAS data processing based on the camera's sensitivity characteristics; (2) developed a basic comparison method for radiometrically calibrated data from the two aerial platforms based on regions of interest. We see this study as a starting point for a series of follow-up studies to understand the environmental influence on UAS data and investigate solutions to minimize such influence and ensure data quality.
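The customized radiometric calibration method is not detailed in the abstract, but a common approach consistent with the description is the empirical line method sketched below, which fits a per-band gain and offset from calibration targets of known reflectance; the function names and array shapes are assumptions, not the paper's actual workflow.

```python
import numpy as np

def empirical_line(dn_targets, reflectance_targets):
    """Fit gain/offset per band so that reflectance ≈ gain * DN + offset,
    using digital numbers extracted over calibration targets of known reflectance."""
    dn = np.asarray(dn_targets, float)            # shape (n_targets, n_bands)
    rho = np.asarray(reflectance_targets, float)  # shape (n_targets, n_bands)
    gains, offsets = [], []
    for b in range(dn.shape[1]):
        g, o = np.polyfit(dn[:, b], rho[:, b], 1)
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)

def calibrate(image_dn, gains, offsets):
    """Apply the per-band linear model to a (rows, cols, bands) image."""
    return image_dn * gains + offsets
```

At least two targets per band spanning dark and bright reflectances are needed for the fit to be meaningful; more targets make the regression robust to sensor noise.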
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional methods of linear imaging limit the viewer to a single fixed-point perspective. By means of a single-lens multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters on the mirror, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining the data with scene rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual interpolated and constructed data is also discussed.
Hamazawa, Yoshimasa; Koyama, Koichi; Okamura, Terue; Wada, Yasuhiro; Wakasa, Tomoko; Okuma, Tomohisa; Watanabe, Yasuyoshi; Inoue, Yuichi
2007-01-01
We investigated the optimum time for differentiating tumor from inflammation using dynamic FDG-microPET scans obtained by a MicroPET P4 scanner in animal models. Forty-six rabbits with 92 inflammatory lesions that were induced 2, 5, 7, 14, 30 and 60 days after injection of 0.2 ml (Group 1) or 1.0 ml (Group 2) of turpentine oil were used as inflammatory models. Five rabbits with 10 VX2 tumors were used as the tumor model. Helical CT scans were performed before the PET studies. In the PET study, after 4 hours of fasting, transmission scans were followed by dynamic emission data acquisitions performed until 2 hours after intravenous FDG injection. Images were reconstructed every 10 minutes using a filtered back-projection method. PET images were analyzed visually with reference to the CT images. For quantitative analysis, the inflammation-to-muscle (I/M) ratio and tumor-to-muscle (T/M) ratio were calculated after regions of interest were set in tumors and muscles with reference to the CT images, and time-I/M ratio and time-T/M ratio curves (TRCs) were prepared to show the change in these ratios over time. The histological appearance of both the inflammatory lesions and the tumor lesions was examined and compared with the CT and FDG-microPET images. In the visual and quantitative analyses, all the I/M and T/M ratios increased over time, except that Day 60 of Group 1 showed an almost flat curve. The TRC of the T/M ratio showed a linear increase over time, while that of the I/M ratios showed at most a parabolic increase over time. FDG uptake in the inflammatory lesions reflected the histological findings. For differentiating tumors from inflammatory lesions with the early image acquired at 40 min for dual-time imaging, the delayed image must be acquired 30 min after the early image, whereas imaging at 90 min or later after intravenous FDG injection was necessary in single-time-point imaging. Our results suggest the possibility of shortening the overall testing time in clinical practice by adopting dual-time-point rather than single-time-point imaging.
Monitoring of Building Construction by 4D Change Detection Using Multi-temporal SAR Images
NASA Astrophysics Data System (ADS)
Yang, C. H.; Pang, Y.; Soergel, U.
2017-05-01
Monitoring urban changes is important for city management, urban planning, updating of cadastral map, etc. In contrast to conventional field surveys, which are usually expensive and slow, remote sensing techniques are fast and cost-effective alternatives. Spaceborne synthetic aperture radar (SAR) sensors provide radar images captured rapidly over vast areas at fine spatiotemporal resolution. In addition, the active microwave sensors are capable of day-and-night vision and independent of weather conditions. These advantages make multi-temporal SAR images suitable for scene monitoring. Persistent scatterer interferometry (PSI) detects and analyses PS points, which are characterized by strong, stable, and coherent radar signals throughout a SAR image sequence and can be regarded as substructures of buildings in built-up cities. Attributes of PS points, for example, deformation velocities, are derived and used for further analysis. Based on PSI, a 4D change detection technique has been developed to detect disappearance and emergence of PS points (3D) at specific times (1D). In this paper, we apply this 4D technique to the centre of Berlin, Germany, to investigate its feasibility and application for construction monitoring. The aims of the three case studies are to monitor construction progress, business districts, and single buildings, respectively. The disappearing and emerging substructures of the buildings are successfully recognized along with their occurrence times. The changed substructures are then clustered into single construction segments based on DBSCAN clustering and α-shape outlining for object-based analysis. Compared with the ground truth, these spatiotemporal results have proven able to provide more detailed information for construction monitoring.
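A minimal sketch of the clustering step applied to the changed PS points is shown below using scikit-learn's DBSCAN; the eps and min_samples values and the sample coordinates are illustrative, not those of the Berlin case studies, and the subsequent α-shape outlining step is omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# changed PS points: planimetric coordinates in metres (hypothetical sample data)
points = np.random.rand(500, 2) * 400.0

# eps (neighbourhood radius, m) and min_samples control the segment granularity;
# the values below are purely illustrative
labels = DBSCAN(eps=15.0, min_samples=10).fit_predict(points)

for lab in set(labels) - {-1}:          # label -1 marks noise points
    seg = points[labels == lab]
    print("segment %d: %d PS points, centroid (%.1f, %.1f)"
          % (lab, len(seg), *seg.mean(axis=0)))
```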
Ultra-widefield retinal MHz-OCT imaging with up to 100 degrees viewing angle.
Kolb, Jan Philip; Klein, Thomas; Kufner, Corinna L; Wieser, Wolfgang; Neubauer, Aljoscha S; Huber, Robert
2015-05-01
We evaluate strategies to maximize the field of view (FOV) of in vivo retinal OCT imaging of human eyes. Three imaging modes are tested: Single volume imaging with 85° FOV as well as with 100° and stitching of five 60° images to a 100° mosaic (measured from the nodal point). We employ a MHz-OCT system based on a 1060nm Fourier domain mode locked (FDML) laser with a depth scan rate of 1.68MHz. The high speed is essential for dense isotropic sampling of the large areas. Challenges caused by the wide FOV are discussed and solutions to most issues are presented. Detailed information on the design and characterization of our sample arm optics is given. We investigate the origin of an angle dependent signal fall-off which we observe towards larger imaging angles. It is present in our 85° and 100° single volume images, but not in the mosaic. Our results suggest that 100° FOV OCT is possible with current swept source OCT technology.
Sensitivity quantification of remote detection NMR and MRI
NASA Astrophysics Data System (ADS)
Granwehr, J.; Seeley, J. A.
2006-04-01
A sensitivity analysis is presented of the remote detection NMR technique, which facilitates the spatial separation of encoding and detection of spin magnetization. Three different cases are considered: remote detection of a transient signal that must be encoded point-by-point like a free induction decay, remote detection of an experiment where the transient dimension is reduced to one data point like phase encoding in an imaging experiment, and time-of-flight (TOF) flow visualization. For all cases, the sensitivity enhancement is proportional to the relative sensitivity between the remote detector and the circuit that is used for encoding. It is shown for the case of an encoded transient signal that the sensitivity does not scale unfavorably with the number of encoded points compared to direct detection. Remote enhancement scales as the square root of the ratio of corresponding relaxation times in the two detection environments. Thus, remote detection especially increases the sensitivity of imaging experiments of porous materials with large susceptibility gradients, which cause a rapid dephasing of transverse spin magnetization. Finally, TOF remote detection, in which the detection volume is smaller than the encoded fluid volume, allows partial images corresponding to different time intervals between encoding and detection to be recorded. These partial images, which contain information about the fluid displacement, can be recorded, in an ideal case, with the same sensitivity as the full image detected in a single step with a larger coil.
Gabara, Grzegorz; Sawicki, Piotr
2018-03-06
The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011.
Infrared dim and small target detecting and tracking method inspired by Human Visual System
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian
2014-01-01
Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, the HVS involves at least three mechanisms, including the contrast mechanism, visual attention, and eye movement. However, most of the existing algorithms simulate only a single one of the HVS mechanisms, resulting in several drawbacks. A novel method which combines the three mechanisms of the HVS is proposed in this paper. First, a group of Difference of Gaussians (DoG) filters, which simulate the contrast mechanism, is used to filter the input image. Second, visual attention, simulated by a Gaussian window, is added at a point near the target in order to further enhance the dim small target; this point is named the attention point. Finally, the Proportional-Integral-Derivative (PID) algorithm is introduced for the first time to predict the attention point of the next frame, which simulates human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
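A hedged sketch of the first two mechanisms (DoG contrast filtering and a Gaussian attention window) is given below; the filter scales and window width are illustrative choices rather than the paper's parameters, and the PID-based attention-point prediction is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_enhance(image, sigma_pairs=((1.0, 2.0), (1.5, 3.0), (2.0, 4.0))):
    """Enhance dim small targets with a bank of Difference-of-Gaussians filters
    (centre-surround contrast), keeping the strongest response per pixel."""
    img = image.astype(float)
    responses = [gaussian_filter(img, s1) - gaussian_filter(img, s2)
                 for s1, s2 in sigma_pairs]
    return np.maximum.reduce(responses)

def add_attention(response, attention_point, sigma=20.0):
    """Weight the DoG response by a Gaussian window centred at the attention point."""
    yy, xx = np.indices(response.shape)
    y0, x0 = attention_point
    w = np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * sigma ** 2))
    return response * w
```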
Quadratic grating apodized photon sieves for simultaneous multiplane microscopy
NASA Astrophysics Data System (ADS)
Cheng, Yiguang; Zhu, Jiangping; He, Yu; Tang, Yan; Hu, Song; Zhao, Lixin
2017-10-01
We present a new type of imaging device, named quadratic grating apodized photon sieve (QGPS), used as the objective for simultaneous multiplane imaging in X-rays. The proposed QGPS is structured based on the combination of two concepts: photon sieves and quadratic gratings. Its design principles are also expounded in detail. Analysis of imaging properties of QGPS in terms of point-spread function shows that QGPS can image multiple layers within an object field onto a single image plane. Simulated and experimental results in visible light both demonstrate the feasibility of QGPS for simultaneous multiplane imaging, which is extremely promising to detect dynamic specimens by X-ray microscopy in the physical and life sciences.
Um, Ji-Yong; Kim, Yoon-Jee; Cho, Seong-Eun; Chae, Min-Kyun; Kim, Byungsub; Sim, Jae-Yoon; Park, Hong-June
2015-02-01
A single-chip 32-channel analog beamformer is proposed. It achieves a delay resolution of 4 ns and a maximum delay range of 768 ns. It has a focal-point based architecture, which consists of 7 sub-analog beamformers (sub-ABF). Each sub-ABF performs a RX focusing operation for a single focal point. Seven sub-ABFs perform a time-interleaving operation to achieve the maximum delay range of 768 ns. Phase interpolators are used in sub-ABFs to generate sampling clocks with the delay resolution of 4 ns from a low frequency system clock of 5 MHz. Each sub-ABF samples 32 echo signals at different times into sampling capacitors, which work as analog memory cells. The sampled 32 echo signals of each sub-ABF are originated from one target focal point at one instance. They are summed at one instance in a sub-ABF to perform the RX focusing for the target focal point. The proposed ABF chip has been fabricated in a 0.13-μm CMOS process with an active area of 16 mm². The total power consumption is 287 mW. In measurement, the digital echo signals from a commercial ultrasound medical imaging machine were applied to the fabricated chip through commercial DAC chips. Due to the speed limitation of the DAC chips, the delay resolution was relaxed to 10 ns for the real-time measurement. A linear array transducer with no steering operation is used in this work.
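To illustrate the focal-point delay computation that each sub-ABF effectively performs, the sketch below derives per-channel receive delays for one focal point of a linear array and quantizes them to the 4 ns resolution; the element pitch, focal depth, and sound speed are assumed values, not the chip's actual configuration.

```python
import numpy as np

def rx_focus_delays(n_elements=32, pitch=0.3e-3, focal_point=(0.0, 30e-3),
                    c=1540.0, resolution=4e-9):
    """Per-channel receive delays for one focal point of a linear array,
    quantized to the beamformer delay resolution (4 ns here)."""
    x = (np.arange(n_elements) - (n_elements - 1) / 2.0) * pitch   # element positions (m)
    fx, fz = focal_point
    dist = np.sqrt((x - fx) ** 2 + fz ** 2)      # element-to-focus path lengths
    delays = (dist.max() - dist) / c             # nearest (earliest-arriving) channel waits longest
    return np.round(delays / resolution) * resolution

d = rx_focus_delays()
print(d * 1e9)   # delays in ns; the span must stay within the 768 ns delay range
```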
Analytical three-point Dixon method: With applications for spiral water-fat imaging.
Wang, Dinghui; Zwart, Nicholas R; Li, Zhiqiang; Schär, Michael; Pipe, James G
2016-02-01
The goal of this work is to present a new three-point analytical approach with flexible even or uneven echo increments for water-fat separation and to evaluate its feasibility with spiral imaging. Two sets of possible solutions of water and fat are first found analytically. Then, two field maps of the B0 inhomogeneity are obtained by linear regression. The initial identification of the true solution is facilitated by the root-mean-square error of the linear regression and the incorporation of a fat spectrum model. The resolved field map after a region-growing algorithm is refined iteratively for spiral imaging. The final water and fat images are recalculated using a joint water-fat separation and deblurring algorithm. Successful implementations were demonstrated with three-dimensional gradient-echo head imaging and single breath-hold abdominal imaging. Spiral, high-resolution T1-weighted brain images were shown with comparable sharpness to the reference Cartesian images. With appropriate choices of uneven echo increments, it is feasible to resolve the aliasing of the field map voxel-wise. High-quality water-fat spiral imaging can be achieved with the proposed approach. © 2015 Wiley Periodicals, Inc.
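For context, the classic symmetric three-point Dixon separation (echoes with water-fat phase differences of 0, π, and 2π) is sketched below; this is not the flexible-echo-increment analytical method of the paper, and in practice the π ambiguity of the field-map phase requires region growing or unwrapping, which is omitted here.

```python
import numpy as np

def three_point_dixon(s0, s1, s2):
    """Classic three-point Dixon separation for complex echo images acquired with
    water-fat phase differences of 0, pi and 2*pi.

    The B0-induced phase accrued per echo spacing is estimated from s2/s0 (up to a
    pi ambiguity) and removed from the opposed-phase echo s1 before combining."""
    phi = 0.5 * np.angle(s2 * np.conj(s0))      # field-map phase per echo spacing
    s1c = s1 * np.exp(-1j * phi)                # demodulated opposed-phase echo
    water = 0.5 * np.abs(s0 + s1c)
    fat = 0.5 * np.abs(s0 - s1c)
    return water, fat
```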
Adding polarimetric imaging to depth map using improved light field camera 2.0 structure
NASA Astrophysics Data System (ADS)
Zhang, Xuanzhe; Yang, Yi; Du, Shaojun; Cao, Yu
2017-06-01
Polarization imaging plays an important role in various fields, especially skylight navigation and target identification, whose imaging systems are usually required to have high resolution, broad band, and a single-lens structure. This paper describes such an imaging system based on the light field 2.0 camera structure, which can calculate the polarization state and the depth from a reference plane for every object point within a single shot. This structure, comprising a modified main lens, a multi-quadrant Polaroid, a honeycomb-like microlens array, and a high-resolution CCD, is equivalent to an "eye array" with 3 or more polarization imaging "glasses" in front of each "eye". Therefore, depth can be calculated by matching the relative offset of corresponding patches on neighboring "eyes", while the polarization state is obtained from their relative intensity differences, and the two resolutions are approximately equal to each other. An application to navigation under a clear sky shows that this method has high accuracy and strong robustness.
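Assuming the multi-quadrant Polaroid provides intensities behind analyzers at 0°, 45°, 90°, and 135° (a common but here assumed arrangement), the linear Stokes parameters and the degree and angle of linear polarization can be computed per object point as sketched below.

```python
import numpy as np

def stokes_from_quadrants(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities behind polarizers at
    0, 45, 90 and 135 degrees (one scalar or one image per orientation)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                          # angle of polarization (rad)
    return s0, s1, s2, dolp, aolp
```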
Spatial imaging of UV emission from Jupiter and Saturn
NASA Technical Reports Server (NTRS)
Clarke, J. T.; Moos, H. W.
1981-01-01
Spatial imaging with the IUE is accomplished both by moving one of the apertures in a series of exposures and within the large aperture in a single exposure. The image of the field of view subtended by the large aperture is focussed directly onto the detector camera face at each wavelength; since the spatial resolution of the instrument is 5 to 6 arc sec and the aperture extends 23.0 by 10.3 arc sec, imaging both parallel and perpendicular to dispersion is possible in a single exposure. The correction for the sensitivity variation along the slit at 1216 A is obtained from exposures of diffuse geocoronal H Ly alpha emission. The relative size of the aperture superimposed on the apparent discs of Jupiter and Saturn in typical observation is illustrated. By moving the planet image 10 to 20 arc sec along the major axis of the aperture (which is constrained to point roughly north-south) maps of the discs of these planets are obtained with 6 arc sec spatial resolution.
Railway clearance intrusion detection method with binocular stereo vision
NASA Astrophysics Data System (ADS)
Zhou, Xingfang; Guo, Baoqing; Wei, Wei
2018-03-01
In the stages of railway construction and operation, objects intruding into the railway clearance greatly threaten the safety of railway operation, so real-time intrusion detection is of great importance. To address the shortcomings of depth insensitivity and shadow interference in single-image methods, an intrusion detection method based on binocular stereo vision is proposed that reconstructs the 3D scene to locate objects and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. In order to improve the 3D reconstruction speed, a suspicious region is first determined by a background-difference method applied to a single camera's image sequence. Image rectification, stereo matching, and 3D reconstruction are only executed when there is a suspicious region. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS; the 3D point clouds are then used to calculate the object position and intrusion in the TCS. Experiments in a railway scene show that the position precision is better than 10 mm. The method is an effective way to detect clearance intrusion and can satisfy the requirements of railway applications.
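A minimal sketch of the depth computation and the CCS-to-TCS transfer is given below for a rectified stereo pair; the rotation and translation would come from the gauge-constant-based calibration described above, and all symbols here are generic placeholders rather than the paper's actual parameters.

```python
import numpy as np

def triangulate(disparity_px, focal_px, baseline_m, x_px, y_px, cx, cy):
    """3D point in the left camera frame from a rectified stereo pair.

    Depth Z = f * B / d with focal length f (pixels), baseline B (m), disparity d (pixels)."""
    z = focal_px * baseline_m / disparity_px
    x = (x_px - cx) * z / focal_px
    y = (y_px - cy) * z / focal_px
    return np.array([x, y, z])

def camera_to_track(p_ccs, R, t):
    """Transfer a point from the Camera Coordinate System to the Track Coordinate
    System using the calibrated rotation R (3x3) and translation t (3,)."""
    return R @ p_ccs + t
```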
Quantitative evaluation of software packages for single-molecule localization microscopy.
Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael
2015-08-01
The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
Ristanović, Zoran; Kerssens, Marleen M; Kubarev, Alexey V; Hendriks, Frank C; Dedecker, Peter; Hofkens, Johan; Roeffaers, Maarten B J; Weckhuysen, Bert M
2015-02-02
Fluid catalytic cracking (FCC) is a major process in oil refineries to produce gasoline and base chemicals from crude oil fractions. The spatial distribution and acidity of zeolite aggregates embedded within the 50-150 μm-sized FCC spheres heavily influence their catalytic performance. Single-molecule fluorescence-based imaging methods, namely nanometer accuracy by stochastic chemical reactions (NASCA) and super-resolution optical fluctuation imaging (SOFI), were used to study the catalytic activity of sub-micrometer zeolite ZSM-5 domains within real-life FCC catalyst particles. The formation of fluorescent product molecules taking place at Brønsted acid sites was monitored with single-turnover sensitivity and high spatiotemporal resolution, providing detailed insight into the dispersion and catalytic activity of zeolite ZSM-5 aggregates. The results point towards substantial differences in turnover frequencies between the zeolite aggregates, revealing significant intraparticle heterogeneities in Brønsted reactivity. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Implicit multiplane 3D camera calibration matrices for stereo image processing
NASA Astrophysics Data System (ADS)
McKee, James W.; Burgett, Sherrie J.
1997-12-01
By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high-speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
PIV-Based Examination of Dynamic Stall on an Oscillating Airfoil
2008-03-01
...vectors at a very large number of points simultaneously" (Adrian R., 2005). PIV is accomplished by tracking indiscriminate particles in the flow at... Particle image velocimetry (PIV) theory has been discussed, developed, and used for over 20 years (Adrian R., 2005) as a tool for researchers to... stream flow. It is important to note that single image pair solutions can have anomalies (i.e., due to turbulence, blooming, particle debris) that...
Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas
2018-04-01
Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWR, the starting and ending frequencies of a linear frequency modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI leads to a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by conventional undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and a higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWR images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera technology frame rates.
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
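A compact sketch of the Gaussian-process interpolation idea is given below: the posterior mean gives the resampled intensity and the posterior variance its interpolation uncertainty; the RBF kernel, length scale, and noise level are illustrative choices rather than the model used in the paper.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, sigma_f=1.0):
    """Squared-exponential covariance between point sets a (n, d) and b (m, d)."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return sigma_f**2 * np.exp(-0.5 * d2 / length_scale**2)

def gp_interpolate(grid_pts, grid_vals, query_pts, noise=1e-3, length_scale=1.0):
    """Gaussian-process interpolation of image intensities: posterior mean at the
    resampled points plus a per-point variance quantifying interpolation uncertainty."""
    K = rbf_kernel(grid_pts, grid_pts, length_scale) + noise * np.eye(len(grid_pts))
    Ks = rbf_kernel(query_pts, grid_pts, length_scale)
    Kss = rbf_kernel(query_pts, query_pts, length_scale)
    alpha = np.linalg.solve(K, grid_vals)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

The posterior variance is largest for query points far from the base grid, which is exactly the spatially varying interpolation uncertainty the registration similarity measure is meant to account for.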
Active fire detection using a peat fire radiance model
NASA Astrophysics Data System (ADS)
Kushida, K.; Honma, T.; Kaku, K.; Fukuda, M.
2011-12-01
The fire fractional area and the radiances at 4 and 11 μm of active fires in Indonesia were estimated using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images. Based on this fire information, a stochastic fire model was used to evaluate two fire detection algorithms for the Moderate Resolution Imaging Spectroradiometer (MODIS): single-image stochastic fire detection and multitemporal stochastic fire detection (Kushida, 2010 - IEEE Geosci. Remote Sens. Lett.). The average fire fractional area per 1 km × 1 km pixel was 1.7%; this value corresponds to 32% of that of Siberian and Mongolian boreal forest fires. The average radiances at 4 and 11 μm of active fires were 7.2 W/(m²·sr·μm) and 11.1 W/(m²·sr·μm); these values correspond to 47% and 91% of those of Siberian and Mongolian boreal forest fires, respectively. In order to keep false alarms below 20 points per 10⁶ km² area, for the Siberian and Mongolian boreal forest fires, omission errors (OE) of 50-60% and about 40% were expected for detection using the single and multitemporal images, respectively. For Indonesian peat fires, an OE of 80-90% was expected for detection using single images. For peat-fire detection using multitemporal images, an OE of about 40% was expected, provided that the background radiances were estimated from past multitemporal images with a standard deviation of less than 1 K. The analyses indicated that it is difficult to obtain sufficient active-fire information on Indonesian peat fires from single MODIS images for fire fighting, and that the use of multitemporal images is important.
NASA Astrophysics Data System (ADS)
Renaud, Olivier; Heintzmann, Rainer; Sáez-Cirión, Asier; Schnelle, Thomas; Mueller, Torsten; Shorte, Spencer
2007-02-01
Three dimensional imaging provides high-content information from living intact biology, and can serve as a visual screening cue. In the case of single cell imaging the current state of the art uses so-called "axial through-stacking". However, three-dimensional axial through-stacking requires that the object (i.e. a living cell) be adherently stabilized on an optically transparent surface, usually glass; evidently precluding use of cells in suspension. Aiming to overcome this limitation we present here the utility of dielectric field trapping of single cells in three-dimensional electrode cages. Our approach allows gentle and precise spatial orientation and vectored rotation of living, non-adherent cells in fluid suspension. Using various modes of widefield, and confocal microscope imaging we show how so-called "microrotation" can provide a unique and powerful method for multiple point-of-view (three-dimensional) interrogation of intact living biological micro-objects (e.g. single-cells, cell aggregates, and embryos). Further, we show how visual screening by micro-rotation imaging can be combined with micro-fluidic sorting, allowing selection of rare phenotype targets from small populations of cells in suspension, and subsequent one-step single cell cloning (with high-viability). Our methodology combining high-content 3D visual screening with one-step single cell cloning, will impact diverse paradigms, for example cytological and cytogenetic analysis on haematopoietic stem cells, blood cells including lymphocytes, and cancer cells.
Filter Function for Wavefront Sensing Over a Field of View
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A filter function has been derived as a means of optimally weighting the wavefront estimates obtained in image-based phase retrieval performed at multiple points distributed over the field of view of a telescope or other optical system. When the data obtained in wavefront sensing and, more specifically, image-based phase retrieval, are used for controlling the shape of a deformable mirror or other optic used to correct the wavefront, the control law obtained by use of the filter function gives a more balanced optical performance over the field of view than does a wavefront-control law obtained by use of a wavefront estimate obtained from a single point in the field of view.
NASA Astrophysics Data System (ADS)
Santos, Sergio; Barcons, Victor; Christenson, Hugo K.; Billingsley, Daniel J.; Bonass, William A.; Font, Josep; Thomson, Neil H.
2013-08-01
A way to operate fundamental mode amplitude modulation atomic force microscopy is introduced which optimizes stability and resolution for a given tip size and shows negligible tip wear over extended time periods (˜24 h). In small amplitude small set-point (SASS) imaging, the cantilever oscillates with sub-nanometer amplitudes in the proximity of the sample, without the requirement of using large drive forces, as the dynamics smoothly lead the tip to the surface through the water layer. SASS is demonstrated on single molecules of double-stranded DNA in ambient conditions where sharp silicon tips (R ˜ 2-5 nm) can resolve the right-handed double helix.
Pulse-Echo Ultrasonic Imaging Method for Eliminating Sample Thickness Variation Effects
NASA Technical Reports Server (NTRS)
Roth, Don J. (Inventor)
1997-01-01
A pulse-echo, immersion method for ultrasonic evaluation of a material which accounts for and eliminates nonlevelness in the equipment set-up and sample thickness variation effects employs a single transducer and automatic scanning and digital imaging to obtain an image of a property of the material, such as pore fraction. The nonlevelness and thickness variation effects are accounted for by pre-scan adjustments of the time window to insure that the echoes received at each scan point are gated in the center of the window. This information is input into the scan file so that, during the automatic scanning for the material evaluation, each received echo is centered in its time window. A cross-correlation function calculates the velocity at each scan point, which is then proportionalized to a color or grey scale and displayed on a video screen.
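A hedged sketch of the cross-correlation velocity estimate mentioned above is given below, assuming the delay is measured between two echoes separated by one round trip through a sample of known thickness; variable names and the sampling setup are illustrative, not the patented implementation.

```python
import numpy as np

def echo_velocity(echo1, echo2, fs, thickness_m):
    """Ultrasonic velocity at one scan point from two gated echoes of an A-scan.

    The delay between the echoes is found at the peak of their cross-correlation;
    velocity = 2 * thickness / delay for one round trip through the sample."""
    xc = np.correlate(echo2, echo1, mode="full")
    lag = np.argmax(xc) - (len(echo1) - 1)      # lag in samples
    delay = lag / fs                            # seconds
    return 2.0 * thickness_m / delay
```

Mapping the per-point velocity to a color or grey scale, as the method describes, then produces the material-property image (e.g., pore fraction) independent of sample thickness variation.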
Models of the strongly lensed quasar DES J0408−5354
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agnello, A.; et al.
We present gravitational lens models of the multiply imaged quasar DES J0408-5354, recently discovered in the Dark Energy Survey (DES) footprint, with the aim of interpreting its remarkable quad-like configuration. We first model the DES single-epoch grizY images as a superposition of a lens galaxy and four point-like objects, obtaining spectral energy distributions (SEDs) and relative positions for the objects. Three of the point sources (A, B, D) have SEDs compatible with the discovery quasar spectra, while the faintest point-like image (G2/C) shows significant reddening and a `grey' dimming of ≈0.8 mag. In order to understand the lens configuration, we fit different models to the relative positions of A, B, D. Models with just a single deflector predict a fourth image at the location of G2/C but considerably brighter and bluer. The addition of a small satellite galaxy (R_E ≈ 0.2") in the lens plane near the position of G2/C suppresses the flux of the fourth image and can explain both the reddening and grey dimming. All models predict a main deflector with Einstein radius between 1.7" and 2.0", velocity dispersion 267-280 km/s and enclosed mass ≈6×10^11 M_⊙, even though higher resolution imaging data are needed to break residual degeneracies in model parameters. The longest time-delay (B-A) is estimated as ≈85 (resp. ≈125) days by models with (resp. without) a perturber near G2/C. The configuration and predicted time-delays of J0408-5354 make it an excellent target for follow-up aimed at understanding the source quasar host galaxy and substructure in the lens, and measuring cosmological parameters. We also discuss some lessons learnt from J0408-5354 on lensed quasar finding strategies, due to its chromaticity and morphology.
A 4DCT imaging-based breathing lung model with relative hysteresis
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2016-01-01
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811
A 4DCT imaging-based breathing lung model with relative hysteresis
NASA Astrophysics Data System (ADS)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2016-12-01
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.
Mohrs, Oliver K; Petersen, Steffen E; Voigtlaender, Thomas; Peters, Jutta; Nowak, Bernd; Heinemann, Markus K; Kauczor, Hans-Ulrich
2006-10-01
The aim of this study was to evaluate the diagnostic value of time-resolved contrast-enhanced MR angiography in adults with congenital heart disease. Twenty patients with congenital heart disease (mean age, 38 +/- 14 years; range, 16-73 years) underwent contrast-enhanced turbo fast low-angle shot MR angiography. Thirty consecutive coronal 3D slabs with a frame rate of 1-second duration were acquired. The mask defined as the first data set was subtracted from subsequent images. Image quality was evaluated using a 5-point scale (from 1, not assessable, to 5, excellent image quality). Twelve diagnostic parameters yielded 1 point each in case of correct diagnosis (binary analysis into normal or abnormal) and were summarized into three categories: anatomy of the main thoracic vessels (maximum, 5 points), sequential cardiac anatomy (maximum, 5 points), and shunt detection (maximum, 2 points). The results were compared with a combined clinical reference comprising medical or surgical reports and other imaging studies. Diagnostic accuracies were calculated for each of the parameters as well as for the three categories. The mean image quality was 3.7 +/- 1.0. Using a binary approach, 220 (92%) of the 240 single diagnostic parameters could be analyzed. The percentage of maximum diagnostic points, the sensitivity, the specificity, and the positive and the negative predictive values were all 100% for the anatomy of the main thoracic vessels; 97%, 87%, 100%, 100%, and 96% for sequential cardiac anatomy; and 93%, 93%, 92%, 88%, and 96% for shunt detection. Time-resolved contrast-enhanced MR angiography provides, in one breath-hold, anatomic and qualitative functional information in adult patients with congenital heart disease. The high diagnostic accuracy allows the investigator to tailor subsequent specific MR sequences within the same session.
Chen, Song; Li, Xuena; Chen, Meijie; Yin, Yafu; Li, Na; Li, Yaming
2016-10-01
This study aimed to compare the diagnostic power of quantitative analysis and visual analysis with single time point imaging (STPI) PET/CT and dual time point imaging (DTPI) PET/CT for the classification of solitary pulmonary nodule (SPN) lesions in granuloma-endemic regions. SPN patients who received early and delayed (18)F-FDG PET/CT at 60 min and 180 min post-injection were retrospectively reviewed. Diagnoses were confirmed by pathological results or follow-up. Three quantitative metrics, early SUVmax, delayed SUVmax and retention index (RI, the percentage change between the early SUVmax and delayed SUVmax), were measured for each lesion. Three 5-point scale scores were given by blinded interpretation performed by physicians based on STPI PET/CT images, DTPI PET/CT images and CT images, respectively. ROC analysis was performed on the three quantitative metrics and the three visual interpretation scores. One hundred forty-nine patients were retrospectively included. The areas under the curve (AUC) of the ROC curves for early SUVmax, delayed SUVmax, RI, STPI PET/CT score, DTPI PET/CT score and CT score were 0.73, 0.74, 0.61, 0.77, 0.75 and 0.76, respectively. There were no significant differences between the AUCs for visual interpretation of STPI PET/CT images and DTPI PET/CT images, nor between early SUVmax and delayed SUVmax. The sensitivity, specificity and accuracy of STPI PET/CT and DTPI PET/CT did not differ significantly in either quantitative analysis or visual interpretation. In granuloma-endemic regions, DTPI PET/CT did not offer significant improvement over STPI PET/CT in differentiating malignant SPNs in either quantitative analysis or visual interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
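As a minimal sketch of the quantitative metric used above (assuming the common definition of the retention index as the percentage change between the two SUVmax values; the function name is hypothetical):

```python
def retention_index(early_suvmax: float, delayed_suvmax: float) -> float:
    """Retention index (RI): percentage change from early to delayed SUVmax."""
    return 100.0 * (delayed_suvmax - early_suvmax) / early_suvmax

# Example: a lesion rising from SUVmax 4.0 at 60 min to 5.0 at 180 min has RI = 25.0
print(retention_index(4.0, 5.0))
```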
Distributed decision making in action: diagnostic imaging investigations within the bigger picture.
Makanjee, Chandra R; Bergh, Anne-Marie; Hoffmann, Willem A
2018-03-01
Decision making in the health care system - specifically with regard to diagnostic imaging investigations - occurs at multiple levels. Professional role players from various backgrounds are involved in making these decisions, from the point of referral to the outcomes of the imaging investigation. The aim of this study was to map the decision-making processes and pathways involved when patients are referred for diagnostic imaging investigations and to explore distributed decision-making events at the points of contact with patients within a health care system. A two-phased qualitative study was conducted in an academic public health complex with the district hospital as entry point. The first phase included case studies of 24 conveniently selected patients, and the second phase involved 12 focus group interviews with health care providers. Data analysis was based on Rapley's interpretation of decision making as being distributed across time, situations and actions, and including different role players and technologies. Clinical decisions incorporating imaging investigations are distributed across the three vital points of contact or decision-making events, namely the initial patient consultation, the diagnostic imaging investigation and the post-investigation consultation. Each of these decision-making events is made up of a sequence of discrete decision-making moments based on the transfer of retrospective, current and prospective information and its transformation into knowledge. This paper contributes to the understanding of the microstructural processes (the 'when' and 'where') involved in the distribution of decisions related to imaging investigations. It also highlights the interdependency in decision-making events of medical and non-medical providers within a single medical encounter. © 2017 The Authors. Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.
Height Accuracy Based on Different Rtk GPS Method for Ultralight Aircraft Images
NASA Astrophysics Data System (ADS)
Tahar, K. N.
2015-08-01
Height accuracy is one of the important elements in surveying work, especially for control point establishment, which requires accurate measurement. There are many methods that can be used to acquire height values, such as tacheometry, levelling and the Global Positioning System (GPS). This study investigated the effect on height accuracy of different observation methods, namely single-based and network-based GPS. The GPS network data are acquired from a local network, namely the Iskandar network. This network has been set up to provide real-time correction data to the rover GPS station, while the single-based method relies on a known GPS station. Nine ground control points were established evenly over the study area. Each ground control point was observed for about two minutes and ten minutes. It was found that the height accuracy gives different results for each observation.
Detecting Planar Surfaces in Outdoor Urban Environments
2008-09-01
coplanar or parallel scene points and lines. Sturm and Maybank [18] perform 3D reconstruction given user-provided coplanarity, perpendicularity, and... Maybank, S. J. A method for interactive 3D reconstruction of piecewise planar objects from single images. In BMVC, 1999, 265-274. [19] Schaffalitzky, F
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
NASA Astrophysics Data System (ADS)
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.
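A common way to report a target registration error (TRE) of the kind quoted above is the mean Euclidean distance between corresponding measured and reference 3D points; the short helper below is an assumed, generic implementation, not the authors' code.

```python
import numpy as np

def target_registration_error(measured_pts, reference_pts):
    """Mean Euclidean distance (e.g., in mm) between corresponding 3D points."""
    measured = np.asarray(measured_pts, dtype=float)
    reference = np.asarray(reference_pts, dtype=float)
    return float(np.mean(np.linalg.norm(measured - reference, axis=1)))

# Example with two corresponding points, each off by 1 mm along one axis -> TRE = 1.0
print(target_registration_error([[0, 0, 0], [10, 0, 0]], [[1, 0, 0], [10, 1, 0]]))
```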
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2017-01-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery. PMID:28943703
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.
High-performance floating-point image computing workstation for medical applications
NASA Astrophysics Data System (ADS)
Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin
1990-07-01
The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D Inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.
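The storage and throughput figures quoted above can be checked with simple arithmetic; the sketch below only reproduces that arithmetic (the per-processor figure is inferred by dividing the stated 160 MFLOPS peak over four processors).

```python
# Frame buffer: 2048 x 2048 pixels at 32 bits per pixel
width, height, bits_per_pixel = 2048, 2048, 32
frame_buffer_mbytes = width * height * bits_per_pixel / 8 / 2**20
print(frame_buffer_mbytes)        # 16.0 -> matches "16 Mbytes of image storage"

# Coprocessor board: four processors, 160 MFLOPS peak in total
processors, peak_mflops = 4, 160
print(peak_mflops / processors)   # 40.0 MFLOPS per processor (inferred, not stated)
```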
Acute interstitial edematous pancreatitis: Findings on non-enhanced MR imaging
Zhang, Xiao-Ming; Feng, Zhi-Song; Zhao, Qiong-Hui; Xiao, Chun-Ming; Mitchell, Donald G; Shu, Jian; Zeng, Nan-Lin; Xu, Xiao-Xue; Lei, Jun-Yang; Tian, Xiao-Bing
2006-01-01
AIM: To study the appearances of acute interstitial edematous pancreatitis (IEP) on non-enhanced MR imaging. METHODS: A total of 53 patients with IEP diagnosed by clinical features and laboratory findings underwent MR imaging. MR imaging sequences included fast spoiled gradient echo (FSPGR) fat saturation axial T1-weighted imaging, gradient echo T1-weighted (in phase), single shot fast spin echo (SSFSE) T2-weighted, respiratory triggered (R-T) T2-weighted with fat saturation, and MR cholangiopancreatography. Using the MR severity score index, pancreatitis was graded as mild (0-2 points), moderate (3-6 points) and severe (7-10 points). RESULTS: Among the 53 patients, IEP was graded as mild in 37 patients and as moderate in 16 patients. Forty-seven of 53 (89%) patients had at least one abnormality on MR images. Pancreas was hypointense relative to liver on FSPGR T1-weighted images in 18.9% of patients, and hyperintense in 25% and 30% on SSFSE T2-weighted and R-T T2-weighted images, respectively. The prevalences of the findings of IEP on R-T T2-weighted images were, respectively, 85% for pancreatic fascial plane, 77% for left renal fascial plane, 55% for peripancreatic fat stranding, 42% for right renal fascial plane, 45% for perivascular fluid, 40% for thickened pancreatic lobular septum and 25% for peripancreatic fluid, which were markedly higher than those on in-phase or SSFSE T2-weighted images (P < 0.001). CONCLUSION: IEP primarily manifests on non-enhanced MR images as thickened pancreatic fascial plane, left renal fascial plane, peripancreatic fat stranding, and peripancreatic fluid. R-T T2-weighted imaging is more sensitive than in-phase and SSFSE T2-weighted imaging for depicting IEP. PMID:17007053
Acute interstitial edematous pancreatitis: Findings on non-enhanced MR imaging.
Zhang, Xiao-Ming; Feng, Zhi-Song; Zhao, Qiong-Hui; Xiao, Chun-Ming; Mitchell, Donald-G; Shu, Jian; Zeng, Nan-Lin; Xu, Xiao-Xue; Lei, Jun-Yang; Tian, Xiao-Bing
2006-09-28
To study the appearances of acute interstitial edematous pancreatitis (IEP) on non-enhanced MR imaging. A total of 53 patients with IEP diagnosed by clinical features and laboratory findings underwent MR imaging. MR imaging sequences included fast spoiled gradient echo (FSPGR) fat saturation axial T1-weighted imaging, gradient echo T1-weighted (in phase), single shot fast spin echo (SSFSE) T2-weighted, respiratory triggered (R-T) T2-weighted with fat saturation, and MR cholangiopancreatography. Using the MR severity score index, pancreatitis was graded as mild (0-2 points), moderate (3-6 points) and severe (7-10 points). Among the 53 patients, IEP was graded as mild in 37 patients and as moderate in 16 patients. Forty-seven of 53 (89%) patients had at least one abnormality on MR images. Pancreas was hypointense relative to liver on FSPGR T1-weighted images in 18.9% of patients, and hyperintense in 25% and 30% on SSFSE T2-weighted and R-T T2-weighted images, respectively. The prevalences of the findings of IEP on R-T T2-weighted images were, respectively, 85% for pancreatic fascial plane, 77% for left renal fascial plane, 55% for peripancreatic fat stranding, 42% for right renal fascial plane, 45% for perivascular fluid, 40% for thickened pancreatic lobular septum and 25% for peripancreatic fluid, which were markedly higher than those on in-phase or SSFSE T2-weighted images (P < 0.001). IEP primarily manifests on non-enhanced MR images as thickened pancreatic fascial plane, left renal fascial plane, peripancreatic fat stranding, and peripancreatic fluid. R-T T2-weighted imaging is more sensitive than in-phase and SSFSE T2-weighted imaging for depicting IEP.
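The grading rule used in both versions of this abstract (0-2 mild, 3-6 moderate, 7-10 severe) maps directly to a small helper; this is an illustrative sketch with a hypothetical function name, not code from the study.

```python
def mr_severity_grade(score: int) -> str:
    """Grade pancreatitis from the MR severity score index (0-10)."""
    if not 0 <= score <= 10:
        raise ValueError("MR severity score index is defined on 0-10")
    if score <= 2:
        return "mild"
    if score <= 6:
        return "moderate"
    return "severe"

print(mr_severity_grade(4))  # -> "moderate"
```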
NASA Astrophysics Data System (ADS)
Korte, Andrew R.; Lee, Young Jin
2013-06-01
We have recently developed a multiplex mass spectrometry imaging (MSI) method which incorporates high mass resolution imaging and MS/MS and MS3 imaging of several compounds in a single data acquisition utilizing a hybrid linear ion trap-Orbitrap mass spectrometer (Perdian and Lee, Anal. Chem. 82, 9393-9400, 2010). Here we extend this capability to obtain positive and negative ion MS and MS/MS spectra in a single MS imaging experiment through polarity switching within spiral steps of each raster step. This methodology was demonstrated for the analysis of various lipid class compounds in a section of mouse brain. This allows for simultaneous imaging of compounds that are readily ionized in positive mode (e.g., phosphatidylcholines and sphingomyelins) and those that are readily ionized in negative mode (e.g., sulfatides, phosphatidylinositols and phosphatidylserines). MS/MS imaging was also performed for a few compounds in both positive and negative ion mode within the same experimental set-up. Insufficient stabilization time for the Orbitrap high voltage leads to slight deviations in observed masses, but these deviations are systematic and were easily corrected with a two-point calibration to background ions.
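A two-point mass calibration of the kind mentioned (correcting systematic m/z deviations using two known background ions) amounts to fitting a linear correction through the two reference points; the helper below is a generic sketch under that assumption, not the authors' implementation, and the example values are hypothetical.

```python
import numpy as np

def two_point_mz_calibration(observed_refs, true_refs):
    """Fit a linear m/z correction from two background ions of known mass.
    observed_refs, true_refs: the two observed and the two true m/z values."""
    slope, intercept = np.polyfit(observed_refs, true_refs, 1)  # exact fit through two points
    return lambda mz: slope * np.asarray(mz, dtype=float) + intercept

correct = two_point_mz_calibration([500.0012, 900.0021], [500.0000, 900.0000])  # hypothetical ions
print(correct(700.0015))  # corrected m/z for an arbitrary peak
```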
NASA Astrophysics Data System (ADS)
Brown, Christopher M.; Maggio-Price, Lillian; Seibel, Eric J.
2007-02-01
Scanning fiber endoscope (SFE) technology has shown promise as a minimally invasive optical imaging tool. To date, it is capable of capturing full-color 500-line images, at 15 Hz frame rate in vivo, as a 1.6 mm diameter endoscope. The SFE uses a single-mode optical fiber actuated at mechanical resonance to scan a light spot over tissue while backscattered or fluorescent light at each pixel is detected in time series using several multimode optical fibers. We are extending the capability of the SFE from an RGB reflectance imaging device to a diagnostic tool by imaging laser induced fluorescence (LIF) in tissue, allowing for correlation of endogenous fluorescence to tissue state. Design of the SFE for diagnostic imaging is guided by a comparison of single point spectra acquired from an inflammatory bowel disease (IBD) model to tissue histology evaluated by a pathologist. LIF spectra were acquired by illuminating tissue with a 405 nm light source and detecting intrinsic fluorescence with a multimode optical fiber. The IBD model used in this study was mdr1a-/- mice, where IBD was modulated by infection with Helicobacter bilis. IBD lesions in the mouse model ranged from mild to marked hyperplasia and dysplasia, from the distal colon to the cecum. A principal components analysis (PCA) was conducted on single point spectra of control and IBD tissue. PCA allowed for differentiation between healthy and dysplastic tissue, indicating that emission wavelengths from 620-650 nm were best able to differentiate diseased tissue and inflammation from normal healthy tissue.
Wang, E; Babbey, C M; Dunn, K W
2005-05-01
Fluorescence microscopy of the dynamics of living cells presents a special challenge to a microscope imaging system, simultaneously requiring both high spatial resolution and high temporal resolution, but with illumination levels low enough to prevent fluorophore damage and cytotoxicity. We have compared the high-speed Yokogawa CSU10 spinning disc confocal system with several conventional single-point scanning confocal (SPSC) microscopes, using the relationship between image signal-to-noise ratio and fluorophore photobleaching as an index of system efficiency. These studies demonstrate that the efficiency of the CSU10 consistently exceeds that of the SPSC systems. The high efficiency of the CSU10 means that quality images can be collected with much lower levels of illumination; the CSU10 was capable of achieving the maximum signal-to-noise of an SPSC system at illumination levels that incur photobleaching at only 1/15th of the rate of the SPSC system. Although some of the relative efficiency of the CSU10 system may be attributed to the use of a CCD rather than a photomultiplier detector system, our analyses indicate that high-speed imaging with the SPSC system is limited by fluorescence saturation at the high levels of illumination frequently needed to collect images at high frame rates. The high speed, high efficiency and freedom from fluorescence saturation combine to make the CSU10 effective for extended imaging of living cells at rates capable of capturing the three-dimensional motion of endosomes moving up to several micrometres per second.
Rapid Protein Separations in Microfluidic Devices
NASA Technical Reports Server (NTRS)
Fan, Z. H.; Das, Champak; Xia, Zheng; Stoyanov, Alexander V.; Fredrickson, Carl K.
2004-01-01
This paper describes fabrication of glass and plastic microfluidic devices for protein separations. Although the long-term goal is to develop a microfluidic device for two-dimensional gel electrophoresis, this paper focuses on the first dimension-isoelectric focusing (IEF). A laser-induced fluorescence (LIF) imaging system has been built for imaging an entire channel in an IEF device. The whole-channel imaging eliminates the need to migrate focused protein bands, which is required if a single-point detector is used. Using the devices and the imaging system, we are able to perform IEF separations of proteins within minutes rather than hours in traditional bench-top instruments.
Image Analysis of a Negatively Curved Graphitic Sheet Model for Amorphous Carbon
NASA Astrophysics Data System (ADS)
Bursill, L. A.; Bourgeois, Laure N.
High-resolution electron micrographs are presented which show essentially curved single sheets of graphitic carbon. Image calculations are then presented for the random surface schwarzite-related model of Townsend et al. (Phys. Rev. Lett. 69, 921-924, 1992). Comparison with experimental images does not rule out the contention that such models, containing surfaces of negative curvature, may be useful for predicting some physical properties of specific forms of nanoporous carbon. Some difficulties of the model predictions, when compared with the experimental images, are pointed out. The range of application of this model, as well as competing models, is discussed briefly.
Laser focus compensating sensing and imaging device
Vann, Charles S.
1993-01-01
A laser focus compensating sensing and imaging device permits the focus of a single focal point of different frequency laser beams emanating from the same source point. In particular it allows the focusing of laser beam originating from the same laser device but having differing intensities so that a low intensity beam will not convert to a higher frequency when passing through a conversion crystal associated with the laser generating device. The laser focus compensating sensing and imaging device uses a cassegrain system to fold the lower frequency, low intensity beam back upon itself so that it will focus at the same focal point as a high intensity beam. An angular tilt compensating lens is mounted about the secondary mirror of the cassegrain system to assist in alignment. In addition cameras or CCD's are mounted with the primary mirror to sense the focused image. A convex lens is positioned co-axial with the cassegrain system on the side of the primary mirror distal of the secondary for use in aligning a target with the laser beam. A first alternate embodiment includes a cassegrain system using a series of shutters and an internally mounted dichroic mirror. A second alternate embodiment uses two laser focus compensating sensing and imaging devices for aligning a moving tool with a work piece.
Laser focus compensating sensing and imaging device
Vann, C.S.
1993-08-31
A laser focus compensating sensing and imaging device permits the focus of a single focal point of different frequency laser beams emanating from the same source point. In particular it allows the focusing of laser beam originating from the same laser device but having differing intensities so that a low intensity beam will not convert to a higher frequency when passing through a conversion crystal associated with the laser generating device. The laser focus compensating sensing and imaging device uses a Cassegrain system to fold the lower frequency, low intensity beam back upon itself so that it will focus at the same focal point as a high intensity beam. An angular tilt compensating lens is mounted about the secondary mirror of the Cassegrain system to assist in alignment. In addition cameras or CCD's are mounted with the primary mirror to sense the focused image. A convex lens is positioned co-axial with the Cassegrain system on the side of the primary mirror distal of the secondary for use in aligning a target with the laser beam. A first alternate embodiment includes a Cassegrain system using a series of shutters and an internally mounted dichroic mirror. A second alternate embodiment uses two laser focus compensating sensing and imaging devices for aligning a moving tool with a work piece.
Moens, Pierre D.J.; Gratton, Enrico; Salvemini, Iyrri L.
2010-01-01
Fluorescence correlation spectroscopy (FCS) was developed in 1972 by Magde, Elson and Webb (Magde et al., 1972). Photon counting detectors and avalanche photodiodes have become standards in FCS to the point that there is a widespread belief that these detectors are essential to perform FCS experiments, despite the fact that FCS was developed using analog detectors. Spatial and temporal intensity fluctuation correlations using analog detection on a commercial Olympus Fluoview 300 microscope has been reported by Brown et al. (2008). However, each analog instrument has its own idiosyncrasies that need to be understood before using the instrument for FCS. In this work we explore the capabilities of the Nikon C1, a low cost confocal microscope, to obtain single point FCS, Raster-scan Image Correlation Spectroscopy (RICS) and Number & Brightness data both in solution and incorporated into the membrane of Giant Unilamellar Vesicles (GUVs). We show that it is possible to obtain dynamic information about fluorescent molecules from single point FCS, RICS and Number & Brightness using the Nikon C1. We highlighted the fact that care should be taken in selecting the acquisition parameters in order to avoid possible artifacts due to the detector noise. However, due to relatively large errors in determining the distribution of digital levels for a given microscope setting, the system is probably only adequate for determining relative brightness within the same image. PMID:20734406
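Single-point FCS of the kind described reduces to computing the normalized intensity autocorrelation G(τ) from an intensity or photon-count trace; the following is a minimal, assumed sketch of that computation, not the authors' analysis code.

```python
import numpy as np

def fcs_autocorrelation(intensity, max_lag):
    """Normalized autocorrelation G(tau) = <dI(t) dI(t+tau)> / <I>^2 for integer lags 1..max_lag."""
    I = np.asarray(intensity, dtype=float)
    dI = I - I.mean()
    norm = I.mean() ** 2
    return np.array([np.mean(dI[:-lag] * dI[lag:]) / norm for lag in range(1, max_lag + 1)])

# Example with synthetic data: uncorrelated shot noise gives G(tau) ~ 0 at all lags
rng = np.random.default_rng(0)
print(fcs_autocorrelation(rng.poisson(10, size=100_000), max_lag=5))
```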
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volker, Arno; Hunter, Alan
Anisotropic materials are being used increasingly in high performance industrial applications, particularly in the aeronautical and nuclear industries. Some important examples of these materials are composites, single-crystal and heavy-grained metals. Ultrasonic array imaging in these materials requires exact knowledge of the anisotropic material properties. Without this information, the images can be adversely affected, causing a reduction in defect detection and characterization performance. The imaging operation can be formulated in two consecutive and reciprocal focusing steps, i.e., focusing the sources and then focusing the receivers. Applying just one of these focusing steps yields an interesting intermediate domain. The resulting common focus point gather (CFP-gather) can be interpreted to determine the propagation operator. After focusing the sources, the observed travel-time in the CFP-gather describes the propagation from the focus point to the receivers. If the correct propagation operator is used, the measured travel-times should be the same as the time-reversed focusing operator due to reciprocity. This makes it possible to iteratively update the focusing operator using the data only and allows the material to be imaged without explicit knowledge of the anisotropic material parameters. Furthermore, the determined propagation operator can also be used to invert for the anisotropic medium parameters. This paper details the proposed technique and demonstrates its use on simulated array data from a specimen of Inconel single-crystal alloy commonly used in the aeronautical and nuclear industries.
Baums, Mike H; Spahn, Gunter; Buchhorn, Gottfried H; Schultz, Wolfgang; Hofmann, Lars; Klinger, Hans-Michael
2012-06-01
To investigate the biomechanical and magnetic resonance imaging (MRI)-derived morphologic changes between single- and double-row rotator cuff repair at different time points after fixation. Eighteen mature female sheep were randomly assigned to either a single-row treatment group using arthroscopic Mason-Allen stitches or a double-row treatment group using a combination of arthroscopic Mason-Allen and mattress stitches. Each group was analyzed at 1 of 3 survival points (6 weeks, 12 weeks, and 26 weeks). We evaluated the integrity of the cuff repair using MRI and biomechanical properties using a mechanical testing machine. The mean load to failure was significantly higher in the double-row group compared with the single-row group at 6 and 12 weeks (P = .018 and P = .002, respectively). At 26 weeks, the differences were not statistically significant (P = .080). However, the double-row group achieved a mean load to failure similar to that of a healthy infraspinatus tendon, whereas the single-row group reached only 70% of the load of a healthy infraspinatus tendon. No significant morphologic differences were observed based on the MRI results. This study confirms that in an acute repair model, double-row repair may enhance the speed of mechanical recovery of the tendon-bone complex when compared with single-row repair in the early postoperative period. Double-row rotator cuff repair enables higher mechanical strength that is especially sustained during the early recovery period and may therefore improve clinical outcome. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
Comparison of Resorbable Plating Systems: Complications During Degradation.
Nguyen, Dennis C; Woo, Albert S; Farber, Scott J; Skolnick, Gary B; Yu, Jenny; Naidoo, Sybill D; Patel, Kamlesh B
2017-01-01
Several bioresorbable plating systems have become standard in pediatric craniosynostosis reconstruction. A comparison of these systems is needed to aid surgeons in the preoperative planning process. The authors aim to evaluate 1 institution's experience using Resorb-X by KLS Martin and Delta Resorbable Fixation System by Stryker (Stryker Craniomaxillofacial, Kalamazoo, MI). A sample of patients with single-suture nonsyndromic craniosynostosis treated at St Louis Children's Hospital between 2007 and 2014 using either Resorb-X or Delta bioresorbable plating systems were reviewed. Only patients with preoperative, immediate, and long-term 3-dimensional photographic images or computed tomography scans were included. A comparison of plating system outcomes was performed to determine the need for clinic and emergency room visits, imaging obtained, and incidence of subsequent surgical procedures due to complications. Forty-six patients (24 Resorb-X and 22 Delta) underwent open repair with bioabsorbable plating for single suture craniosynostosis. The mean age at each imaging time point was similar between the 2 plating systems (P > 0.717). Deformity-specific measures for sagittal (cranial index), metopic (interfrontotemporale), and unicoronal (frontal asymmetry) synostosis were equivalent between the systems at all time points (0.05 < P < 0.904). A single Delta patient developed bilateral scalp cellulitis and abscesses and subsequently required operative intervention and antibiotics. Bioabsorbable plating for craniosynostosis in children is effective and has low morbidity. In our experience, the authors did not find a difference between the outcomes and safety profiles between Resorb-X and Delta.
NASA Astrophysics Data System (ADS)
Cao, Liji; Peter, Jörg
2013-06-01
The adoption of axially oriented line illumination patterns for fluorescence excitation in small animals for fluorescence surface imaging (FSI) and fluorescence optical tomography (FOT) is being investigated. A trimodal single-photon-emission-computed-tomography/computed-tomography/optical-tomography (SPECT-CT-OT) small animal imaging system is being modified for employment of point- and line-laser excitation sources. These sources can be arbitrarily positioned around the imaged object. The line source is set to illuminate the object along its entire axial direction. Comparative evaluation of point and line illumination patterns for FSI and FOT is provided involving phantom as well as mouse data. Given the trimodal setup, CT data are used to guide the optical approaches by providing boundary information. Furthermore, FOT results are also being compared to SPECT. Results show that line-laser illumination yields a larger axial field of view (FOV) in FSI mode, hence faster data acquisition, and practically acceptable FOT reconstruction throughout the whole animal. Also, superimposed SPECT and FOT data provide additional information on similarities as well as differences in the distribution and uptake of both probe types. Fused CT data enhance further the anatomical localization of the tracer distribution in vivo. The feasibility of line-laser excitation for three-dimensional fluorescence imaging and tomography is demonstrated for initiating further research, however, not with the intention to replace one by the other.
Yi-Qun, Xu; Wei, Liu; Xin-Ye, Ni
2016-10-01
This study employs dual-source computed tomography single-spectrum imaging to evaluate contrast agent artifact removal and the resulting improvement in the computational accuracy of radiotherapy treatment planning. The phantom, including the contrast agent, was used in all experiments. The amounts of iodine in the contrast agent were 30, 15, 7.5, and 0.75 g/100 mL. Two images with different energy values were scanned and captured using dual-source computed tomography (80 and 140 kV). To obtain a fused image, 2 groups of images were processed using single-energy spectrum imaging technology. The Pinnacle planning system was used to measure the computed tomography values of the contrast agent and the surrounding phantom tissue. The difference between radiotherapy treatment planning based on the 80 kV, 140 kV, and energy spectrum images was analyzed. For the image with high iodine concentration, the quality of the energy spectrum-fused image was the highest, followed by that of the 140-kV image. That of the 80-kV image was the worst. The difference in the radiotherapy treatment results among the 3 models was significant. When the concentration of iodine was 30 g/100 mL and the distance from the contrast agent at the dose measurement point was 1 cm, the deviation values (P) were 5.95% and 2.20% when image treatment planning was based on 80 and 140 kV, respectively. When the concentration of iodine was 15 g/100 mL, deviation values (P) were -2.64% and -1.69%. Dual-source computed tomography single-energy spectral imaging technology can remove contrast agent artifacts to improve the calculated dose accuracy in radiotherapy treatment planning. © The Author(s) 2015.
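The deviation values quoted above appear to be percent differences between a dose computed on a given image set and a reference; the one-liner below is a sketch under that assumed definition, with hypothetical example values.

```python
def dose_deviation_percent(dose_image_based: float, dose_reference: float) -> float:
    """Percent deviation P of an image-based planned dose from the reference dose (assumed definition)."""
    return 100.0 * (dose_image_based - dose_reference) / dose_reference

print(dose_deviation_percent(2.12, 2.00))  # hypothetical doses in Gy -> 6.0
```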
NASA Astrophysics Data System (ADS)
Griesbaum, Luisa; Marx, Sabrina; Höfle, Bernhard
2017-07-01
In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Particularly pluvial urban floods, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3-D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest as well as an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth is achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracies.
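Once the flood water elevation has been mapped onto the photogrammetric point cloud, the building inundation depth is simply the difference between that elevation and the local ground elevation at the façade; the helper below sketches this final step under that assumption (names and values are hypothetical).

```python
import numpy as np

def building_inundation_depth(flood_elevation_m, ground_point_elevations_m):
    """Inundation depth (m): mapped flood water elevation minus a robust local ground elevation."""
    ground = float(np.median(ground_point_elevations_m))  # median of ground points at the facade
    return flood_elevation_m - ground

print(building_inundation_depth(112.45, [112.31, 112.29, 112.35, 112.30]))  # hypothetical elevations
```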
Lasing in optimized two-dimensional iron-nail-shaped rod photonic crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Soon-Yong; Moon, Seul-Ki; Yang, Jin-Kyu, E-mail: jinkyuyang@kongju.ac.kr
2016-03-15
We demonstrated lasing at the Γ-point band-edge (BE) modes in optimized two-dimensional iron-nail-shaped rod photonic crystals by optical pulse pumping at room temperature. As the radius of the rod increased quadratically toward the edge of the pattern, the quality factor of the Γ-point BE mode increased up to three times, and the modal volume decreased to 56% compared with the values of the original Γ-point BE mode because of the reduction of the optical loss in the horizontal direction. Single-mode lasing from an optimized iron-nail-shaped rod array with an InGaAsP multiple quantum well embedded in the nail heads was observed at a low threshold pump power of 160 μW. Real-image-based numerical simulations showed that the lasing actions originated from the optimized Γ-point BE mode and agreed well with the measurement results, including the lasing polarization, wavelength, and near-field image.
Dose fractionation theorem in 3-D reconstruction (tomography)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glaeser, R.M.
It is commonly assumed that the large number of projections for single-axis tomography precludes its application to most beam-labile specimens. However, Hegerl and Hoppe have pointed out that the total dose required to achieve statistical significance for each voxel of a computed 3-D reconstruction is the same as that required to obtain a single 2-D image of that isolated voxel, at the same level of statistical significance. Thus a statistically significant 3-D image can be computed from statistically insignificant projections, as long as the total dosage that is distributed among these projections is high enough that it would have resulted in a statistically significant projection, if applied to only one image. We have tested this critical theorem by simulating the tomographic reconstruction of a realistic 3-D model created from an electron micrograph. The simulations verify the basic conclusions of the theorem in the presence of high absorption, signal-dependent noise, varying specimen contrast and missing angular range. Furthermore, the simulations demonstrate that individual projections in the series of fractionated-dose images can be aligned by cross-correlation because they contain significant information derived from the summation of features from different depths in the structure. This latter information is generally not useful for structural interpretation prior to 3-D reconstruction, owing to the complexity of most specimens investigated by single-axis tomography. These results, in combination with dose estimates for imaging single voxels and measurements of radiation damage in the electron microscope, demonstrate that it is feasible to use single-axis tomography with soft X-ray microscopy of frozen-hydrated specimens.
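The statistical core of the theorem can be sketched with a simple Poisson-counting argument (an illustration consistent with the abstract, not the paper's derivation): splitting a total dose among N projections leaves the per-voxel signal-to-noise ratio of the summed data unchanged.

```latex
n_i = \frac{D}{N}, \qquad
\mathrm{SNR} \;=\; \frac{\sum_{i=1}^{N} n_i}{\sqrt{\operatorname{Var}\!\left(\sum_{i=1}^{N} n_i\right)}}
\;=\; \frac{D}{\sqrt{D}} \;=\; \sqrt{D},
\quad \text{independent of } N .
```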
Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images
NASA Astrophysics Data System (ADS)
Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.
2017-05-01
Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are topologically defect-laden, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds remains a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level consolidates the original points by refining their orientation and position using linear priors. The points are then grouped into local segments by forward searching. At the global level, regularities are enforced through a labeling process that encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.
NASA Astrophysics Data System (ADS)
Wang, Shuping; Shibahara, Nanae; Kuramashi, Daishi; Okawa, Shinpei; Kakuta, Naoto; Okada, Eiji; Maki, Atsushi; Yamada, Yukio
2010-07-01
In order to investigate the effects of anatomical variation in human heads on the optical mapping of brain activity, we perform simulations of optical mapping by solving the photon diffusion equation for layered-models simulating human heads using the finite element method (FEM). Particularly, the effects of the spatial variations in the thicknesses of the skull and cerebrospinal fluid (CSF) layers on mapping images are investigated. Mapping images of single active regions in the gray matter layer are affected by the spatial variations in the skull and CSF layer thicknesses, although the effects are smaller than those of the positions of the active region relative to the data points. The increase in the skull thickness decreases the sensitivity of the images to active regions, while the increase in the CSF layer thickness increases the sensitivity in general. The images of multiple active regions are also influenced by their positions relative to the data points and by their depths from the skin surface.
NASA Astrophysics Data System (ADS)
Quirin, Sean Albert
The joint application of tailored optical Point Spread Functions (PSF) and estimation methods is an important tool for designing quantitative imaging and sensing solutions. By enhancing the information transfer encoded by the optical waves into an image, matched post-processing algorithms are able to complete tasks with improved performance relative to conventional designs. In this thesis, new engineered PSF solutions with image processing algorithms are introduced and demonstrated for quantitative imaging using information-efficient signal processing tools and/or optical-efficient experimental implementations. The use of a 3D engineered PSF, the Double-Helix (DH-PSF), is applied as one solution for three-dimensional, super-resolution fluorescence microscopy. The DH-PSF is a tailored PSF which was engineered to have enhanced information transfer for the task of localizing point sources in three dimensions. Both an information- and optical-efficient implementation of the DH-PSF microscope are demonstrated here for the first time. This microscope is applied to image single-molecules and micro-tubules located within a biological sample. A joint imaging/axial-ranging modality is demonstrated for application to quantifying sources of extended transverse and axial extent. The proposed implementation has improved optical-efficiency relative to prior designs due to the use of serialized cycling through select engineered PSFs. This system is demonstrated for passive-ranging, extended Depth-of-Field imaging and digital refocusing of random objects under broadband illumination. Although the serialized engineered PSF solution is an improvement over prior designs for the joint imaging/passive-ranging modality, it requires the use of multiple PSFs---a potentially significant constraint. Therefore an alternative design is proposed, the Single-Helix PSF, where only one engineered PSF is necessary and the chromatic behavior of objects under broadband illumination provides the necessary information transfer. The matched estimation algorithms are introduced along with an optically-efficient experimental system to image and passively estimate the distance to a test object. An engineered PSF solution is proposed for improving the sensitivity of optical wave-front sensing using a Shack-Hartmann Wave-front Sensor (SHWFS). The performance limits of the classical SHWFS design are evaluated and the engineered PSF system design is demonstrated to enhance performance. This system is fabricated and the mechanism for additional information transfer is identified.
Face landmark point tracking using LK pyramid optical flow
NASA Astrophysics Data System (ADS)
Zhang, Gang; Tang, Sikan; Li, Jiaquan
2018-04-01
LK pyramid optical flow is an effective method for object tracking in a video, and it is used here for face landmark point tracking. The landmark points considered are the outer corner of the left eye, the inner corner of the left eye, the inner corner of the right eye, the outer corner of the right eye, the tip of the nose, the left corner of the mouth and the right corner of the mouth. In the first frame the landmark points are marked by hand; for subsequent frames, tracking performance is analyzed. Two kinds of conditions are considered: single factors, such as the normalized case, pose variation with slow movement, expression variation, illumination variation, occlusion, frontal face with rapid movement, and posed face with rapid movement; and combinations of factors, such as pose and illumination variation, pose and expression variation, pose variation and occlusion, illumination and expression variation, and expression variation and occlusion. Global and local measures are introduced to evaluate tracking performance under the different factors or combinations of factors. The global measures comprise the number of images aligned successfully, the average alignment error, and the number of images aligned before failure; the local measures comprise the number of images aligned successfully for each facial component and the average alignment error for the components. To test the performance of face landmark point tracking under the different cases, experiments are carried out on image sequences gathered by the authors. Results show that the LK pyramid optical flow method can track face landmark points under the normalized case, expression variation, illumination variation that does not affect facial details, and pose variation, and that different factors or combinations of factors affect alignment performance differently for different landmark points.
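For reference, a minimal OpenCV sketch of pyramidal Lucas-Kanade point tracking of the kind described is given below; the window size, pyramid levels, file name and landmark coordinates are assumptions, not parameters reported by the paper.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("face_sequence.mp4")          # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Hand-marked landmarks in the first frame (hypothetical pixel coordinates), shape (N, 1, 2), float32
pts = np.array([[[120, 85]], [[150, 85]], [[200, 85]], [[230, 85]],
                [[175, 130]], [[150, 170]], [[205, 170]]], dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal LK optical flow from the previous frame to the current one
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    pts, prev_gray = next_pts, gray                   # status could be used to drop lost points
cap.release()
```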
Actuator-Assisted Calibration of Freehand 3D Ultrasound System.
Koo, Terry K; Silvia, Nathaniel
2018-01-01
Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need of imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and compared the results with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified "collinear point target" phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration.
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision with broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis, and industrial inspection. In this paper, research is conducted on binocular stereo camera calibration, image feature extraction, and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using the checkerboard pattern of Zhang Zhengyou's method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After feature point matching is completed, the correspondence between matched image points and 3D object points can be established using the calibrated camera parameters, which yields the 3D information.
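The pipeline in this abstract can be sketched with standard OpenCV calls: Zhang Zhengyou's checkerboard calibration for the intrinsic parameters of a single camera, followed by SGBM dense matching on a rectified stereo pair. The file names, checkerboard size, and SGBM parameters below are assumptions, not the settings used in the paper.

```python
import glob
import cv2
import numpy as np

board = (9, 6)                                   # inner corners of the checkerboard (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in glob.glob("calib/*.png"):           # hypothetical calibration images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsic parameters of a single camera (Zhang Zhengyou's method).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)

# Dense disparity on a rectified pair with semi-global block matching (SGBM).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0   # OpenCV stores 16x disparity
```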
Kinnunen, Kirsi M; Cash, David M; Poole, Teresa; Frost, Chris; Benzinger, Tammie L S; Ahsan, R Laila; Leung, Kelvin K; Cardoso, M Jorge; Modat, Marc; Malone, Ian B; Morris, John C; Bateman, Randall J; Marcus, Daniel S; Goate, Alison; Salloway, Stephen P; Correia, Stephen; Sperling, Reisa A; Chhatwal, Jasmeer P; Mayeux, Richard P; Brickman, Adam M; Martins, Ralph N; Farlow, Martin R; Ghetti, Bernardino; Saykin, Andrew J; Jack, Clifford R; Schofield, Peter R; McDade, Eric; Weiner, Michael W; Ringman, John M; Thompson, Paul M; Masters, Colin L; Rowe, Christopher C; Rossor, Martin N; Ourselin, Sebastien; Fox, Nick C
2018-01-01
Identifying at what point atrophy rates first change in Alzheimer's disease is important for informing design of presymptomatic trials. Serial T1-weighted magnetic resonance imaging scans of 94 participants (28 noncarriers, 66 carriers) from the Dominantly Inherited Alzheimer Network were used to measure brain, ventricular, and hippocampal atrophy rates. For each structure, nonlinear mixed-effects models estimated the change-points when atrophy rates deviate from normal and the rates of change before and after this point. Atrophy increased after the change-point, which occurred 1-1.5 years (assuming a single step change in atrophy rate) or 3-8 years (assuming gradual acceleration of atrophy) before expected symptom onset. At expected symptom onset, estimated atrophy rates were at least 3.6 times those before the change-point. Atrophy rates are pathologically increased up to seven years before "expected onset". During this period, atrophy rates may be useful for inclusion and tracking of disease progression. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
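For a concrete picture of the step-change model, the following minimal sketch locates a single change-point in atrophy rate by a grid search over candidate change-points, with constant rates before and after. It is a simplified fixed-effects stand-in for the nonlinear mixed-effects models used in the study, and the data arrays are hypothetical.

```python
import numpy as np

eyo = np.load("eyo.npy")                 # estimated years to symptom onset (negative = before onset)
rates = np.load("atrophy_rates.npy")     # measured atrophy rates, same length

best = (np.inf, None)
for cp in np.arange(-10.0, 0.0, 0.25):   # candidate change-points [years before onset]
    before, after = rates[eyo < cp], rates[eyo >= cp]
    if len(before) < 3 or len(after) < 3:
        continue
    # Sum of squared errors when each segment is fit with its own constant rate.
    sse = ((before - before.mean()) ** 2).sum() + ((after - after.mean()) ** 2).sum()
    if sse < best[0]:
        best = (sse, cp)

change_point = best[1]                   # estimated onset of accelerated atrophy
```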
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to display a real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire the depth information around the object. Then, the 3D point cloud representations of the real object are reconstructed according to the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and the elemental image arrays are generated for the newly synthesized 3D point cloud model at the angular step given by the anamorphic optic system. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.
Helmer, Karl G.; Pasternak, Ofer; Fredman, Eli; Preciado, Ronny I.; Koerte, Inga K.; Sasaki, Takeshi; Mayinger, Michael; Johnson, Andrew M.; Holmes, Jeffrey D.; Forwell, Lorie; Skopelja, Elaine N.; Shenton, Martha E.; Echlin, Paul S.
2015-01-01
Object: Concussion, or mild traumatic brain injury (mTBI), is a commonly occurring sports-related injury, especially in contact sports such as hockey. Cerebral microbleeds (CMBs), which are small, hypointense lesions on T2*-weighted images, can result from TBI. The authors use susceptibility-weighted imaging (SWI) to automatically detect small hypointensities that may be subtle signs of chronic and acute damage due to both subconcussive and concussive injury. The goal was to investigate how the burden of these hypointensities changes over time, over a playing season, and postconcussion, compared with subjects who did not suffer a medically observed and diagnosed concussion. Methods: Images were obtained in 45 university-level adult male and female ice hockey players before and after a single Canadian Interuniversity Sports season. In addition, 11 subjects (5 men and 6 women) underwent imaging at 72 hours, 2 weeks, and 2 months after concussion. To identify subtle changes in brain tissue and potential CMBs, nonvessel clusters of hypointensities on SWI were automatically identified and a hypointensity burden index was calculated for all subjects at the beginning of the season (BOS) and the end of the season (EOS), in addition to postconcussion time points (where applicable). Results: A statistically significant increase in the hypointensity burden, relative to the BOS, was observed for male subjects at the 2-week postconcussion time point. A smaller, nonsignificant rise in the burden for all female subjects was also observed within the same time period. The difference in hypointensity burden was also statistically significant for men with concussions between the 2-week time point and the BOS. There were no significant changes in burden for nonconcussed subjects of either sex between the BOS and EOS time points. However, there was a statistically significant difference in the burden between male and female subjects in the nonconcussed group at both the BOS and EOS time points, with males having a higher burden. Conclusions: This method extends the utility of SWI from the enhancement and detection of larger (> 5 mm) CMBs that are often observed in more severe TBI, to concussion in which visual detection of injury is difficult. The hypointensity burden metric proposed here shows statistically significant changes over time in the male subjects. A smaller, nonsignificant increase in the burden metric was observed in the female subjects. PMID:24490839
Pulse-echo ultrasonic imaging method for eliminating sample thickness variation effects
NASA Technical Reports Server (NTRS)
Roth, Don J. (Inventor)
1995-01-01
A pulse-echo, immersion method for ultrasonic evaluation of a material is discussed. It accounts for and eliminates nonlevelness in the equipment set-up and sample thickness variation effects, and employs a single transducer, automatic scanning, and digital imaging to obtain an image of a property of the material, such as pore fraction. The nonlevelness and thickness variation effects are accounted for by pre-scan adjustments of the time window to ensure that the echoes received at each scan point are gated in the center of the window. This information is input into the scan file so that, during the automatic scanning for the material evaluation, each received echo is centered in its time window. A cross-correlation function calculates the velocity at each scan point, which is then proportionalized to a color or grey scale and displayed on a video screen.
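The velocity calculation referred to above can be illustrated with a short sketch (not the patented implementation): the front- and back-surface echoes are gated from a digitized A-scan, cross-correlated to find the round-trip transit time, and converted to velocity. The sampling rate, gate indices, and sample thickness below are assumptions.

```python
import numpy as np
from scipy.signal import correlate

fs = 100e6                      # sampling rate [Hz], assumed
thickness = 5.0e-3              # nominal sample thickness [m], assumed

a_scan = np.load("a_scan.npy")  # hypothetical digitized echo record
front = a_scan[1000:1500]       # gated front-surface echo (assumed gate)
back = a_scan[2500:3000]        # gated back-surface echo (assumed gate)

# Lag of the correlation peak gives the residual shift between the two gated echoes.
xcorr = correlate(back, front, mode="full")
lag = np.argmax(xcorr) - (len(front) - 1)

dt = (2500 - 1000 + lag) / fs   # total delay = gate offset plus correlation lag [s]
velocity = 2.0 * thickness / dt # pulse-echo: the sound traverses the thickness twice
```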
Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology
NASA Technical Reports Server (NTRS)
Larson, Stephen M.; Slaughter, Charles D.
1992-01-01
Digital enhancement of cometary images is a necessary tool in studying cometary morphology. Many image processing algorithms, some developed specifically for comets, have been used to enhance the subtle, low contrast coma and tail features. We compare some of the most commonly used algorithms on two different images to evaluate their strong and weak points, and conclude that there currently exists no single 'ideal' algorithm, although the radial gradient spatial filter gives the best overall result. This comparison should aid users in selecting the best algorithm to enhance particular features of interest.
Accuracy assessment of fluoroscopy-transesophageal echocardiography registration
NASA Astrophysics Data System (ADS)
Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.
2011-03-01
This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5mm, which is within the clinical accuracy requirements of 5mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.
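The single-perspective pose estimation idea can be illustrated with a generic PnP stand-in (the study's image-based probe tracking is more involved): known 3D points on the tracked object are paired with their detected 2D projections and passed to cv2.solvePnP. The point coordinates and camera intrinsics below are placeholders, not values from the study.

```python
import cv2
import numpy as np

# Known 3D marker coordinates on the tracked object [mm] - placeholder geometry.
object_pts = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0],
                       [10, 10, 0], [5, 5, 8]], dtype=np.float64)
# Their detected 2D projections in one image [px] - placeholder detections.
image_pts = np.array([[320, 240], [380, 238], [322, 300],
                      [382, 298], [352, 260]], dtype=np.float64)

K = np.array([[1500.0, 0.0, 320.0],
              [0.0, 1500.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed camera intrinsics

# Single-perspective pose: rotation and translation of the object in the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
```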
Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking
NASA Technical Reports Server (NTRS)
Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.
2005-01-01
This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on lower-resolution Hazcam for Navcam-to-Hazcam handoff.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping, (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is for rover-based robotic applications for localization within GPS-denied environments.
Lin, Wen-Yen; Chou, Wen-Cheng; Chang, Po-Cheng; Chou, Chung-Chuan; Wen, Ming-Shien; Ho, Ming-Yun; Lee, Wen-Chen; Hsieh, Ming-Jer; Lin, Chung-Chih; Tsai, Tsai-Hsuan; Lee, Ming-Yih
2018-03-01
Seismocardiography (SCG), or mechanocardiography, is a noninvasive cardiac diagnostic method; however, previous studies used only a single sensor to detect cardiac mechanical activities, which cannot identify location-specific feature points in a cardiac cycle corresponding to the four valvular auscultation locations. In this study, a multichannel SCG spectrum measurement system was proposed and examined for cardiac activity monitoring to overcome problems such as position dependency, time delay, and signal attenuation that occur in traditional single-channel SCG systems. ECG and multichannel SCG signals were simultaneously recorded in 25 healthy subjects. Cardiac echocardiography was conducted at the same time. SCG traces were analyzed and compared with echocardiographic images for feature point identification. Fifteen feature points were identified in the corresponding SCG traces. Among them, six feature points, including left ventricular lateral wall contraction peak velocity, septal wall contraction peak velocity, transaortic peak flow, transpulmonary peak flow, transmitral ventricular relaxation flow, and transmitral atrial contraction flow, were identified. These new feature points were not observed in previous studies because single-channel SCG could not detect the location-specific signals from other locations due to time delay and signal attenuation. As a result, the multichannel SCG spectrum measurement system can record the corresponding cardiac mechanical activities with location-specific SCG signals, and six new feature points were identified with the system. This new modality may help clinical diagnoses of valvular heart diseases and heart failure in the future.
Ultrahigh resolution multicolor colocalization of single fluorescent probes
Weiss, Shimon; Michalet, Xavier; Lacoste, Thilo D.
2005-01-18
A novel optical ruler based on ultrahigh-resolution colocalization of single fluorescent probes is described. Two unique families of fluorophores are used, namely energy-transfer fluorescent beads and semiconductor nanocrystal (NC) quantum dots, that can be excited by a single laser wavelength but emit at different wavelengths. A novel multicolor sample-scanning confocal microscope was constructed which allows one to image each fluorescent light emitter, free of chromatic aberrations, by scanning the sample with nanometer scale steps using a piezo-scanner. The resulting spots are accurately localized by fitting them to the known shape of the excitation point-spread-function of the microscope.
Streak camera imaging of single photons at telecom wavelength
NASA Astrophysics Data System (ADS)
Allgaier, Markus; Ansari, Vahid; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Donohue, John Matthew; Czerniuk, Thomas; Aßmann, Marc; Bayer, Manfred; Brecht, Benjamin; Silberhorn, Christine
2018-01-01
Streak cameras are powerful tools for temporal characterization of ultrafast light pulses, even at the single-photon level. However, the low signal-to-noise ratio in the infrared range prevents measurements on weak light sources in the telecom regime. We present an approach to circumvent this problem, utilizing an up-conversion process in periodically poled waveguides in Lithium Niobate. We convert single photons from a parametric down-conversion source in order to reach the point of maximum detection efficiency of commercially available streak cameras. We explore phase-matching configurations to apply the up-conversion scheme in real-world applications.
Edge Extraction by an Exponential Function Considering X-ray Transmission Characteristics
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Youp Synn, Sang; Cho, Sung Man; Jong Joo, Won
2011-04-01
3-D radiographic methodology has come into the spotlight for quality inspection of mass products and in-service inspection of aging products. To locate a target object in 3-D space, its characteristic contours, such as edge length, edge angle, and vertices, are very important. Even for a product with simple geometry, it is very difficult to obtain clear shape contours from a single radiographic image: the image contains scattering noise at the edges and ambiguity arising from X-ray absorption within the body. This article suggests a concise method to extract whole edges from a single X-ray image. At the edge point of the object, the intensity of the X-ray decays exponentially as the X-ray penetrates the object. Considering this X-ray decay property, edges are extracted using least squares fitting controlled by the coefficient of determination.
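A minimal sketch of the edge test described above, under an assumed model form: an exponential decay I(x) = a*exp(-mu*x) + c is fit to the intensity profile across a candidate edge, and the point is accepted as an edge only when the coefficient of determination exceeds a threshold. The profile file, the exact model form, and the threshold are illustrative, not taken from the article.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(x, a, mu, c):
    # Assumed attenuation model along the profile crossing the edge.
    return a * np.exp(-mu * x) + c

profile = np.load("edge_profile.npy")          # hypothetical X-ray intensity profile (32 samples)
x = np.arange(len(profile), dtype=float)       # pixel positions along the profile

params, _ = curve_fit(decay, x, profile, p0=(profile[0], 0.1, profile[-1]))

# Coefficient of determination (R^2) controls acceptance of the fitted edge point.
residuals = profile - decay(x, *params)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((profile - profile.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

is_edge = r_squared > 0.95                     # assumed acceptance threshold
```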
Phase-shifting point diffraction interferometer mask designs
Goldberg, Kenneth Alan
2001-01-01
In a phase-shifting point diffraction interferometer, different image-plane mask designs can improve the operation of the interferometer. By keeping the test beam window of the mask small compared to the separation distance between the beams, the problem of energy from the reference beam leaking through the test beam window is reduced. By rotating the grating and mask 45°, only a single one-dimensional translation stage is required for phase-shifting. By keeping two reference pinholes in the same orientation about the test beam window, only a single grating orientation, and thus a single one-dimensional translation stage, is required. The use of a two-dimensional grating allows for a multiplicity of pinholes to be used about the pattern of diffracted orders of the grating at the mask. Orientation marks on the mask can be used to orient the device and indicate the position of the reference pinholes.
Practical aspects of monochromators developed for transmission electron microscopy
Kimoto, Koji
2014-01-01
A few practical aspects of monochromators recently developed for transmission electron microscopy are briefly reviewed. The basic structures and properties of four monochromators, a single Wien filter monochromator, a double Wien filter monochromator, an omega-shaped electrostatic monochromator and an alpha-shaped magnetic monochromator, are outlined. The advantages and side effects of these monochromators in spectroscopy and imaging are pointed out. A few properties of the monochromators in imaging, such as spatial or angular chromaticity, are also discussed. PMID:25125333
GPU-Accelerated Hybrid Algorithm for 3D Localization of Fluorescent Emitters in Dense Clusters
NASA Astrophysics Data System (ADS)
Jung, Yoon; Barsic, Anthony; Piestun, Rafael; Fakhri, Nikta
In stochastic switching-based super-resolution imaging, a random subset of fluorescent emitters is imaged and localized in each frame to construct a single high-resolution image. However, the condition of non-overlapping point spread functions (PSFs) imposes constraints on experimental parameters. Recent developments in post-processing methods, such as dictionary-based sparse support recovery using compressive sensing, have shown up to an order of magnitude higher recall rate than single-emitter fitting methods. However, the computational complexity of this approach scales poorly with the grid size and requires long runtimes. Here, we introduce a fast and accurate compressive sensing algorithm for localizing fluorescent emitters at high density in 3D, namely sparse support recovery using Orthogonal Matching Pursuit (OMP) and the L1-Homotopy algorithm for reconstructing STORM images (SOLAR STORM). SOLAR STORM combines OMP with L1-Homotopy to reduce computational complexity, which is further accelerated by parallel implementation using GPUs. This method can be used in a variety of experimental conditions for both in vitro and live-cell fluorescence imaging.
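The sparse-support step can be illustrated with a minimal OMP sketch: a low-resolution frame y is modeled as y = A x, where the dictionary A maps candidate grid positions to camera pixels through the PSF, and a sparse emitter vector x is recovered. The dictionary, frame, and sparsity level below are assumptions; the L1-Homotopy refinement and GPU acceleration of SOLAR STORM are not shown.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

A = np.load("psf_dictionary.npy")     # (n_pixels, n_grid_points), hypothetical PSF dictionary
y = np.load("camera_frame.npy")       # (n_pixels,), one acquired frame, hypothetical

# Greedy sparse recovery: at most 20 active emitters per frame (assumed).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=20)
omp.fit(A, y)

x = omp.coef_                          # sparse emitter amplitudes on the 3D grid
support = np.flatnonzero(x)            # candidate emitter positions for subsequent refinement
```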
Wavefront correction using machine learning methods for single molecule localization microscopy
NASA Astrophysics Data System (ADS)
Tehrani, Kayvan F.; Xu, Jianquan; Kner, Peter
2015-03-01
Optical aberrations are a major challenge in imaging biological samples. In particular, in single molecule localization (SML) microscopy techniques (STORM, PALM, etc.), a high Strehl ratio point spread function (PSF) is necessary to achieve sub-diffraction resolution. Distortions in the PSF shape directly reduce the resolution of SML microscopy. The system aberrations caused by imperfections in the optics and instruments can be compensated using Adaptive Optics (AO) techniques prior to imaging. However, aberrations caused by the biological sample, both static and dynamic, have to be dealt with in real time. A challenge for wavefront correction in SML microscopy is a robust optimization approach in the presence of noise, because of the naturally high fluctuations in photon emission from single molecules. Here we demonstrate particle swarm optimization for real-time correction of the wavefront using an intensity-independent metric. We show that the particle swarm algorithm converges faster than the genetic algorithm for bright fluorophores.
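A minimal sketch of the particle swarm search follows: the swarm explores deformable-mirror (Zernike) mode amplitudes and maximizes an image-quality metric. The real hardware loop (apply correction, acquire image, compute an intensity-independent metric) is replaced here by a simulated objective, and the mode count and swarm hyperparameters are assumptions.

```python
import numpy as np

n_modes, n_particles, n_iters = 12, 20, 50
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights (assumed)

# Simulated stand-in for "apply correction -> acquire image -> compute metric":
# the objective peaks when the applied mode amplitudes cancel a hidden aberration.
true_aberration = np.random.uniform(-0.5, 0.5, n_modes)

def evaluate(amplitudes):
    return -np.sum((amplitudes + true_aberration) ** 2)

pos = np.random.uniform(-1, 1, (n_particles, n_modes))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([evaluate(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)]

for _ in range(n_iters):
    r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([evaluate(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmax(pbest_val)]

# gbest now holds the mode amplitudes that best correct the simulated wavefront.
```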
Image resolution enhancement via image restoration using neural network
NASA Astrophysics Data System (ADS)
Zhang, Shuangteng; Lu, Yihong
2011-04-01
Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point spread function blurring as well as additive noise, and therefore generates high-resolution images with more preserved or restored image details. Experimental results demonstrate that the high-resolution images obtained by this technique have a very high quality in terms of PSNR and visually look more pleasant.
Single DNA imaging and length quantification through a mobile phone microscope
NASA Astrophysics Data System (ADS)
Wei, Qingshan; Luo, Wei; Chiang, Samuel; Kappel, Tara; Mejia, Crystal; Tseng, Derek; Chan, Raymond Yan L.; Yan, Eddie; Qi, Hangfei; Shabbir, Faizan; Ozkan, Haydar; Feng, Steve; Ozcan, Aydogan
2016-03-01
The development of sensitive optical microscopy methods for the detection of single DNA molecules has become an active research area which cultivates various promising applications including point-of-care (POC) genetic testing and diagnostics. Direct visualization of individual DNA molecules usually relies on sophisticated optical microscopes that are mostly available in well-equipped laboratories. For POC DNA testing/detection, there is an increasing need for the development of new single DNA imaging and sensing methods that are field-portable, cost-effective, and accessible for diagnostic applications in resource-limited or field-settings. For this aim, we developed a mobile-phone integrated fluorescence microscopy platform that allows imaging and sizing of single DNA molecules that are stretched on a chip. This handheld device contains an opto-mechanical attachment integrated onto a smartphone camera module, which creates a high signal-to-noise ratio dark-field imaging condition by using an oblique illumination/excitation configuration. Using this device, we demonstrated imaging of individual linearly stretched λ DNA molecules (48 kilobase-pair, kbp) over 2 mm2 field-of-view. We further developed a robust computational algorithm and a smartphone app that allowed the users to quickly quantify the length of each DNA fragment imaged using this mobile interface. The cellphone based device was tested by five different DNA samples (5, 10, 20, 40, and 48 kbp), and a sizing accuracy of <1 kbp was demonstrated for DNA strands longer than 10 kbp. This mobile DNA imaging and sizing platform can be very useful for various diagnostic applications including the detection of disease-specific genes and quantification of copy-number-variations at POC settings.
Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing
Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin
2016-01-01
A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The result shows that acceptable accuracy of EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process. PMID:27338410
Enoki, Ryosuke; Ono, Daisuke; Hasan, Mazahir T; Honma, Sato; Honma, Ken-Ichi
2012-05-30
Single-point laser scanning confocal imaging produces signals with high spatial resolution in living organisms. However, photo-induced toxicity, bleaching, and focus drift remain challenges, especially when recording over several days for monitoring circadian rhythms. Bioluminescence imaging is a tool widely used for this purpose, and does not cause photo-induced difficulties. However, bioluminescence signals are dimmer than fluorescence signals, and are potentially affected by levels of cofactors, including ATP, O(2), and the substrate, luciferin. Here we describe a novel time-lapse confocal imaging technique to monitor circadian rhythms in living tissues. The imaging system comprises a multipoint scanning Nipkow spinning disk confocal unit and a high-sensitivity EM-CCD camera mounted on an inverted microscope with auto-focusing function. Brain slices of the suprachiasmatic nucleus (SCN), the central circadian clock, were prepared from transgenic mice expressing a clock gene, Period 1 (Per1), and fluorescence reporter protein (Per1::d2EGFP). The SCN slices were cut out together with membrane, flipped over, and transferred to the collagen-coated glass dishes to obtain signals with a high signal-to-noise ratio and to minimize focus drift. The imaging technique and improved culture method enabled us to monitor the circadian rhythm of Per1::d2EGFP from optically confirmed single SCN neurons without noticeable photo-induced effects or focus drift. Using recombinant adeno-associated virus carrying a genetically encoded calcium indicator, we also monitored calcium circadian rhythms at a single-cell level in a large population of SCN neurons. Thus, the Nipkow spinning disk confocal imaging system developed here facilitates long-term visualization of circadian rhythms in living cells. Copyright © 2012 Elsevier B.V. All rights reserved.
Heye, Tobias; Sommer, Gregor; Miedinger, David; Bremerich, Jens; Bieri, Oliver
2015-09-01
To evaluate the anatomical details offered by a new single breath-hold ultrafast 3D balanced steady-state free precession (uf-bSSFP) sequence in comparison to low-dose chest computed tomography (CT). This was an Institutional Review Board (IRB)-approved, Health Insurance Portability and Accountability Act (HIPAA)-compliant prospective study. A total of 20 consecutive patients enrolled in a lung cancer screening trial underwent same-day low-dose chest CT and 1.5T MRI. The presence of pulmonary nodules and anatomical details on 1.9 mm isotropic uf-bSSFP images was compared to 2 mm lung window reconstructions by two readers. The number of branching points on six predefined pulmonary arteries and the distance between the most peripheral visible vessel segment to the pleural surface on thin slices and 50 mm maximum intensity projections (MIP) were assessed. Image quality and sharpness of the pulmonary vasculature were rated on a 5-point scale. The uf-bSSFP detection rate of pulmonary nodules (32 nodules visible on CT and MRI, median diameter 3.9 mm) was 45.5% with 21 false-positive findings (pooled data of both readers). Uf-bSSFP detected 71.2% of branching points visible on CT data. The mean distance between peripheral vasculature and pleural surface was 13.0 ± 4.2 mm (MRI) versus 8.5 ± 3.3 mm (CT) on thin slices and 8.6 ± 3.9 mm (MRI) versus 4.6 ± 2.5 mm (CT) on MIPs. Median image quality and sharpness were rated 4 each. Although CT is superior to MRI, uf-bSSFP imaging provides good anatomical details with sufficient image quality and sharpness obtainable in a single breath-hold covering the entire chest. © 2014 Wiley Periodicals, Inc.
Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems
NASA Astrophysics Data System (ADS)
Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.
2017-01-01
A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that they appear as a single, unresolved object in images, according to the Rayleigh criterion. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we will use this method to identify binary BD systems in the Hubble Space Telescope archive. This method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method to compare model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges, and other possible uses in this poster.
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in some longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points should be small, and another output smoothness term which requires the differences between outputs of two successive models from adjacent time-points should also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
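A schematic form of the objective described above is given below in assumed notation (W = [w^1, ..., w^T] are the regression weights for T time-points, X^t and y^t the imaging data and targets at time-point t); the exact formulation and regularizer weights used in the paper may differ.

```latex
\min_{W}\;\sum_{t=1}^{T}\bigl\| y^{t} - X^{t} w^{t} \bigr\|_{2}^{2}
\;+\;\lambda_{1}\sum_{j=1}^{d}\bigl\| W_{j\cdot} \bigr\|_{2}
\;+\;\lambda_{2}\sum_{t=1}^{T-1}\bigl\| w^{t+1}-w^{t} \bigr\|_{2}^{2}
\;+\;\lambda_{3}\sum_{t=1}^{T-1}\bigl\| X^{t+1}w^{t+1}-X^{t}w^{t} \bigr\|_{2}^{2}
```

The first term is the data fit, the second is the group term that ties each brain region's weights together across time-points (rows of W), the third is the fused smoothness term on successive weight vectors, and the fourth is the output smoothness term on successive model outputs.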
Hyperspectral laser-induced autofluorescence imaging of dental caries
NASA Astrophysics Data System (ADS)
Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan
2012-01-01
Dental caries is a disease characterized by demineralization of enamel crystals leading to the penetration of bacteria into the dentine and pulp. Early detection of enamel demineralization resulting in increased enamel porosity, commonly known as white spots, is a difficult diagnostic task. Laser-induced autofluorescence was shown to be a useful method for early detection of demineralization. The existing studies involved either single-point spectroscopic measurements or imaging at a single spectral band. In the case of spectroscopic measurements, very little or no spatial information is acquired and the measured autofluorescence signal strongly depends on the position and orientation of the probe. On the other hand, single-band spectral imaging can be substantially affected by local spectral artefacts. Such effects can significantly interfere with automated methods for detection of early caries lesions. In contrast, hyperspectral imaging effectively combines the spatial information of imaging methods with the spectral information of spectroscopic methods, providing an excellent basis for development of robust and reliable algorithms for automated classification and analysis of hard dental tissues. In this paper, we employ 405 nm laser excitation of natural caries lesions. The fluorescence signal is acquired by a state-of-the-art hyperspectral imaging system consisting of a high-resolution acousto-optic tunable filter (AOTF) and a highly sensitive Scientific CMOS camera in the spectral range from 550 nm to 800 nm. The results are compared to the contrast obtained by the near-infrared hyperspectral imaging technique employed in the existing studies on early detection of dental caries.
NASA Astrophysics Data System (ADS)
Duling, Irl N.
2016-05-01
Terahertz energy, with its ability to penetrate clothing and non-conductive materials, has held much promise in the area of security scanning. Millimeter wave systems (300 GHz and below) have been widely deployed. These systems have used full two-dimensional surface imaging and have resulted in privacy concerns. Pulsed terahertz imaging can detect the presence of unwanted objects without the need for two-dimensional photographic imaging. With high-speed waveform acquisition it is possible to create handheld tools that can be used to locate anomalies under clothing or headgear, looking exclusively at either single-point waveforms or cross-sectional images, which do not pose a privacy concern. Identification of the anomaly to classify it as a potential threat or a benign object is also possible.
Performance Evaluation of 18F Radioluminescence Microscopy Using Computational Simulation
Wang, Qian; Sengupta, Debanti; Kim, Tae Jin; Pratx, Guillem
2017-01-01
Purpose: Radioluminescence microscopy can visualize the distribution of beta-emitting radiotracers in live single cells with high resolution. Here, we perform a computational simulation of 18F positron imaging using this modality to better understand how radioluminescence signals are formed and to assist in optimizing the experimental setup and image processing. Methods: First, the transport of charged particles through the cell and scintillator and the resulting scintillation is modeled using the GEANT4 Monte-Carlo simulation. Then, the propagation of the scintillation light through the microscope is modeled by a convolution with a depth-dependent point-spread function, which models the microscope response. Finally, the physical measurement of the scintillation light using an electron-multiplying charge-coupled device (EMCCD) camera is modeled using a stochastic numerical photosensor model, which accounts for various sources of noise. The simulated output of the EMCCD camera is further processed using our ORBIT image reconstruction methodology to evaluate the endpoint images. Results: The EMCCD camera model was validated against experimentally acquired images and the simulated noise, as measured by the standard deviation of a blank image, was found to be accurate within 2% of the actual detection. Furthermore, point-source simulations found that a reconstructed spatial resolution of 18.5 μm can be achieved near the scintillator. As the source is moved away from the scintillator, spatial resolution degrades at a rate of 3.5 μm per μm distance. These results agree well with the experimentally measured spatial resolution of 30–40 μm (live cells). The simulation also shows that the system sensitivity is 26.5%, which is also consistent with our previous experiments. Finally, an image of a simulated sparse set of single cells is visually similar to the measured cell image. Conclusions: Our simulation methodology agrees with experimental measurements taken with radioluminescence microscopy. This in silico approach can be used to guide further instrumentation developments and to provide a framework for improving image reconstruction. PMID:28273348
NASA Astrophysics Data System (ADS)
Mawet, D.; Ruane, G.; Xuan, W.; Echeverri, D.; Klimovich, N.; Randolph, M.; Fucik, J.; Wallace, J. K.; Wang, J.; Vasisht, G.; Dekany, R.; Mennesson, B.; Choquet, E.; Delorme, J.-R.; Serabyn, E.
2017-04-01
High-dispersion coronagraphy (HDC) optimally combines high-contrast imaging techniques such as adaptive optics/wavefront control plus coronagraphy to high spectral resolution spectroscopy. HDC is a critical pathway toward fully characterizing exoplanet atmospheres across a broad range of masses from giant gaseous planets down to Earth-like planets. In addition to determining the molecular composition of exoplanet atmospheres, HDC also enables Doppler mapping of atmosphere inhomogeneities (temperature, clouds, wind), as well as precise measurements of exoplanet rotational velocities. Here, we demonstrate an innovative concept for injecting the directly imaged planet light into a single-mode fiber, linking a high-contrast adaptively corrected coronagraph to a high-resolution spectrograph (diffraction-limited or not). Our laboratory demonstration includes three key milestones: close-to-theoretical injection efficiency, accurate pointing and tracking, and on-fiber coherent modulation and speckle nulling of spurious starlight signal coupling into the fiber. Using the extreme modal selectivity of single-mode fibers, we also demonstrated speckle suppression gains that outperform conventional image-based speckle nulling by at least two orders of magnitude.
Accuracy Validation of Large-scale Block Adjustment without Control of ZY3 Images over China
NASA Astrophysics Data System (ADS)
Yang, Bo
2016-06-01
Mapping from optical satellite images without ground control is one of the goals of photogrammetry. Using 8802 three-linear-array stereo images (a total of 26406 images) of ZY3 over China, we propose a large-scale block adjustment method for optical satellite images without ground control, based on the RPC model, in which a single image is regarded as the adjustment unit to be organized. To overcome the block distortion caused by unstable adjustment without ground control and the excessive accumulation of errors, we use virtual control points created from the initial RPC models of the images as weighted observations and add them into the adjustment model to refine the adjustment. We use 8000 uniformly distributed high-precision check points to evaluate the geometric accuracy of the DOM (Digital Ortho Model) and DSM (Digital Surface Model) products, for which the standard deviations in plane and elevation are 3.6 m and 4.2 m respectively. The geometric accuracy is consistent across the whole block and the mosaic accuracy of neighboring DOMs is within a pixel, so a seamless mosaic is possible. This method achieves the goal of mapping without ground control with an accuracy better than 5 m for the whole of China from ZY3 satellite images.
A pseudoinverse deformation vector field generator and its applications
Yan, C.; Zhong, H.; Murphy, M.; Weiss, E.; Siebers, J. V.
2010-01-01
Purpose: To present, implement, and test a self-consistent pseudoinverse displacement vector field (PIDVF) generator, which preserves the location of information mapped back-and-forth between image sets. Methods: The algorithm is an iterative scheme based on nearest neighbor interpolation and a subsequent iterative search. Performance of the algorithm is benchmarked using a lung 4DCT data set with six CT images from different breathing phases and eight CT images for a single prostate patient acquired on different days. A diffeomorphic deformable image registration is used to validate our PIDVFs. Additionally, the PIDVF is used to measure the self-consistency of two nondiffeomorphic algorithms which do not use a self-consistency constraint: the ITK Demons algorithm for the lung patient images and an in-house B-Spline algorithm for the prostate patient images. Both Demons and B-Spline have been QAed through contour comparison. Self-consistency is determined by using a DIR to generate a displacement vector field (DVF) between reference image R and study image S (DVF_R-S). The same DIR is used to generate DVF_S-R. Additionally, our PIDVF generator is used to create PIDVF_S-R. Back-and-forth mapping of a set of points (used as surrogates of contours) using DVF_R-S and DVF_S-R is compared to back-and-forth mapping performed with DVF_R-S and PIDVF_S-R. The Euclidean distances between the original unmapped points and the mapped points are used as a self-consistency measure. Results: Test results demonstrate that the consistency error observed in back-and-forth mappings can be reduced two to nine times in point mapping and 1.5 to three times in dose mapping when the PIDVF is used in place of the B-Spline algorithm. These self-consistency improvements are not affected by exchanging R and S. It is also demonstrated that differences between DVF_S-R and PIDVF_S-R can be used as a criterion to check the quality of the DVF. Conclusions: Use of a DVF and its PIDVF will improve the self-consistency of point, contour, and dose mappings in image-guided adaptive therapy. PMID:20384247
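A simplified illustration of producing a pseudoinverse DVF is sketched below as a 2D fixed-point iteration, where the inverse displacement at a point y satisfies v_inv(y) = -v(y + v_inv(y)). This conveys the back-and-forth consistency idea; the published generator instead uses nearest-neighbor interpolation followed by an iterative search, so the code is an assumption-laden stand-in rather than the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def pseudoinverse_dvf(vx, vy, n_iter=20):
    """vx, vy: forward displacement components on a regular 2D grid [pixels]."""
    ny, nx = vx.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    inv_x = np.zeros_like(vx)
    inv_y = np.zeros_like(vy)
    for _ in range(n_iter):
        # Sample the forward field at y + v_inv(y) and negate it.
        sx = xx + inv_x
        sy = yy + inv_y
        inv_x = -map_coordinates(vx, [sy, sx], order=1, mode="nearest")
        inv_y = -map_coordinates(vy, [sy, sx], order=1, mode="nearest")
    return inv_x, inv_y
```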
Twin nucleation and migration in FeCr single crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patriarca, L.; Abuzaid, Wael; Sehitoglu, Huseyin, E-mail: huseyin@illinois.edu
2013-01-15
Tension and compression experiments were conducted on body-centered cubic Fe-47.8 at. pct. Cr single crystals. The critical resolved shear stress (CRSS) magnitudes for slip nucleation, twin nucleation and twin migration were established. We show that the nucleation of slip occurs at a CRSS of about 88 MPa, while twinning nucleates at a CRSS of about 191 MPa with an associated load drop. Following twin nucleation, twin migration proceeds at a CRSS that is lower than the initiation stress (≈114-153 MPa). The experimental results of the nucleation stresses indicate that the Schmid law holds to a first approximation for the slip and twin nucleation cases, but to a lesser extent for twin migration, particularly when considerable slip strains preceded twinning. The CRSSs were determined experimentally using digital image correlation (DIC) in conjunction with electron backscatter diffraction (EBSD). The DIC measurements enabled pinpointing the precise stress on the stress-strain curves where twins or slip were activated. The crystal orientations were obtained using EBSD and used to determine the activated twin and slip systems through trace analysis. Highlights: Digital image correlation allows capture of slip/twin initiation for bcc FeCr. Crystal orientations from EBSD allow slip/twin system indexing. Nucleation of slip always precedes twinning. Twin growth is sustained at a lower stress than required for nucleation. Twin-slip interactions provide high hardening at the onset of plasticity.
A mitral annulus tracking approach for navigation of off-pump beating heart mitral valve repair.
Li, Feng P; Rajchl, Martin; Moore, John; Peters, Terry M
2015-01-01
To develop and validate a real-time mitral valve annulus (MVA) tracking approach based on biplane transesophageal echocardiogram (TEE) data and magnetic tracking systems (MTS) to be used in minimally invasive off-pump beating heart mitral valve repair (MVR). The authors' guidance system consists of three major components: TEE, magnetic tracking system, and an image guidance software platform. TEE provides real-time intraoperative images to show the cardiac motion and intracardiac surgical tools. The magnetic tracking system tracks the TEE probe and the surgical tools. The software platform integrates the TEE image planes and the virtual model of the tools and the MVA model on the screen. The authors' MVA tracking approach, which aims to update the MVA model in near real-time, comprises three steps: image based gating, predictive reinitialization, and registration based MVA tracking. The image based gating step uses a small patch centered at each MVA point in the TEE images to identify images at optimal cardiac phases for updating the position of the MVA. The predictive reinitialization step uses the position and orientation of the TEE probe provided by the magnetic tracking system to predict the position of the MVA points in the TEE images and uses them for the initialization of the registration component. The registration based MVA tracking step aims to locate the MVA points in the images selected by the image based gating component by performing image based registration. The validation of the MVA tracking approach was performed in a phantom study and a retrospective study on porcine data. In the phantom study, controlled translations were applied to the phantom and the tracked MVA was compared to its "true" position estimated based on a magnetic sensor attached to the phantom. The MVA tracking accuracy was 1.29 ± 0.58 mm when the translation distance is about 1 cm, and increased to 2.85 ± 1.19 mm when the translation distance is about 3 cm. In the study on porcine data, the authors compared the tracked MVA to a manually segmented MVA. The overall accuracy is 2.37 ± 1.67 mm for single plane images and 2.35 ± 1.55 mm for biplane images. The interoperator variation in manual segmentation was 2.32 ± 1.24 mm for single plane images and 1.73 ± 1.18 mm for biplane images. The computational efficiency of the algorithm on a desktop computer with an Intel Xeon CPU @ 3.47 GHz and an NVIDIA GeForce 690 graphics card is such that the time required for registering four MVA points was about 60 ms. The authors developed a rapid MVA tracking algorithm for use in the guidance of off-pump beating heart transapical mitral valve repair. This approach uses 2D biplane TEE images and was tested on a dynamic heart phantom and interventional porcine image data. Results regarding the accuracy and efficiency of the authors' MVA tracking algorithm are promising, and fulfill the requirements for surgical navigation.
O'Reilly, Meaghan A; Hough, Olivia; Hynynen, Kullervo
2017-03-01
Microbubble-mediated focused ultrasound (US) opening of the blood-brain barrier (BBB) has shown promising results for the treatment of brain tumors and conditions such as Alzheimer disease. Practical clinical implementation of focused US treatments would aim to treat a substantial portion of the brain; thus, the safety of opening large volumes must be investigated. This study investigated whether the opened volume affects the time for the BBB to be restored after treatment. Sprague Dawley rats (n = 5) received bilateral focused US treatments. One hemisphere received a single sonication, and the contralateral hemisphere was targeted with 4 overlapping foci. Contrast-enhanced T1-weighted magnetic resonance imaging was used to assess the integrity of the BBB at 0, 6, and 24 hours after focused US. At time 0, there was no significant difference in the mean enhancement between the single- and multi-point sonications (mean ± SD, 29.7% ± 18.4% versus 29.7% ± 24.1%; P = .9975). The mean cross-sectional area of the BBB opening resulting from the multi-point sonication was approximately 3.5-fold larger than that of the single-point case (14.2 ± 4.7 versus 4.1 ± 3.3 mm2; P < .0001). The opened volumes in 9 of 10 hemispheres were closed by 6 hours after focused US. The remaining treatment location had substantially reduced enhancement at 6 hours and was closed by 24 hours. Histologic analysis revealed small morphologic changes associated with this location. T2-weighted images at 6 and 24 hours showed no signs of edema. T2*-weighted images obtained at 6 hours also showed no signs of hemorrhage in any animal. The time for the BBB to close after focused US was independent of the opening volume on the time scale investigated. No differences in treatment effects were observable by magnetic resonance imaging follow-up between larger- and smaller-volume sonications, suggesting that larger-volume BBB opening can be performed safely. © 2017 by the American Institute of Ultrasound in Medicine.
Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.
Xiao, Dan; Balcom, Bruce J
2012-07-01
Spin-echo single point imaging has been employed for 1D T2 distribution mapping, but a simple extension to 2D is challenging since the time increase is n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T2 mapping in fluid-saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well-defined intensity distributions in k-space that may be efficiently determined by new k-space sampling patterns that are developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T2-weighted images are fit to extract T2 distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high-quality results. Copyright © 2012 Elsevier Inc. All rights reserved.
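The pixel-wise inverse Laplace step can be illustrated with a common surrogate: the T2-weighted decay at one pixel is fit to a discrete distribution over log-spaced T2 values with non-negative least squares. The echo times, T2 grid, and data file below are assumptions, not the acquisition parameters of the paper.

```python
import numpy as np
from scipy.optimize import nnls

echo_times = np.linspace(0.5e-3, 64e-3, 32)        # assumed echo times [s]
t2_grid = np.logspace(-3.5, -0.5, 60)              # candidate T2 values [s]

# Kernel: signal(TE) = sum_j f_j * exp(-TE / T2_j)
K = np.exp(-echo_times[:, None] / t2_grid[None, :])

signal = np.load("pixel_decay.npy")                # hypothetical decay curve for one pixel
f, residual = nnls(K, signal)                      # non-negative T2 amplitude spectrum
```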
Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas
2012-01-01
Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
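As a rough illustration of the filtering idea, the sketch below discards spatially inaccurate block-matching displacement vectors by fitting an affine displacement model under a least-median-of-squares criterion and flagging vectors with large residuals. It is a simplified stand-in for the LFC filter, which additionally uses the forward search method; the model form, trial count, and inlier threshold are assumptions.

```python
import numpy as np

def lms_filter(points, displacements, n_trials=200, inlier_factor=2.5, seed=0):
    """Least-median-of-squares screening of displacement vectors.
    points: (N, 2) voxel positions; displacements: (N, 2) grid-search matches.
    Returns a boolean inlier mask and the fitted affine model."""
    rng = np.random.default_rng(seed)
    P = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
    best_model, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(points), size=3, replace=False)
        model, *_ = np.linalg.lstsq(P[idx], displacements[idx], rcond=None)
        med = np.median(np.sum((P @ model - displacements) ** 2, axis=1))
        if med < best_med:
            best_model, best_med = model, med
    res = np.sum((P @ best_model - displacements) ** 2, axis=1)
    return res <= inlier_factor ** 2 * best_med, best_model
```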
Radiometric Correction of Multitemporal Hyperspectral Uas Image Mosaics of Seedling Stands
NASA Astrophysics Data System (ADS)
Markelin, L.; Honkavaara, E.; Näsi, R.; Viljanen, N.; Rosnell, T.; Hakala, T.; Vastaranta, M.; Koivisto, T.; Holopainen, M.
2017-10-01
Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction is required when data sets from different dates and sensors, collected in varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated with a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction and image-wise relative correction with BRDF. Radiometric datasets are converted to reflectance using reference panels, and changes in reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5% to 25%. The results show that the evaluated method can produce consistent reflectance mosaics and reflectance spectral shapes across different areas and dates.
Imaging samples in silica aerogel using an experimental point spread function.
White, Amanda J; Ebel, Denton S
2015-02-01
Light microscopy is a powerful tool that allows for many types of samples to be examined in a rapid, easy, and nondestructive manner. Subsequent image analysis, however, is compromised by distortion of signal by instrument optics. Deconvolution of images prior to analysis allows for the recovery of lost information by procedures that utilize either a theoretically or experimentally calculated point spread function (PSF). Using a laser scanning confocal microscope (LSCM), we have imaged whole impact tracks of comet particles captured in silica aerogel, a low density, porous SiO2 solid, by the NASA Stardust mission. In order to understand the dynamical interactions between the particles and the aerogel, precise grain location and track volume measurement are required. We report a method for measuring an experimental PSF suitable for three-dimensional deconvolution of imaged particles in aerogel. Using fluorescent beads manufactured into Stardust flight-grade aerogel, we have applied a deconvolution technique standard in the biological sciences to confocal images of whole Stardust tracks. The incorporation of an experimentally measured PSF allows for better quantitative measurements of the size and location of single grains in aerogel and more accurate measurements of track morphology.
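The abstract does not name the deconvolution algorithm, so the sketch below uses Richardson-Lucy deconvolution from scikit-image (a recent version with the `num_iter` argument is assumed) as one common choice for restoring a confocal z-stack with an experimentally measured bead PSF; the array shapes, iteration count, and normalization are assumptions.

```python
import numpy as np
from skimage import restoration

def deconvolve_stack(stack, bead_psf, num_iter=30):
    """Richardson-Lucy deconvolution of a confocal (z, y, x) stack using an
    experimentally measured PSF, e.g. from fluorescent beads embedded in
    flight-grade aerogel."""
    psf = bead_psf.astype(float)
    psf /= psf.sum()                      # PSF should integrate to one
    img = stack.astype(float)
    img /= img.max()                      # rescale for numerical stability
    return restoration.richardson_lucy(img, psf, num_iter=num_iter, clip=False)
```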
Two-dimensional thermography image retrieval from zig-zag scanned data with TZ-SCAN
NASA Astrophysics Data System (ADS)
Okumura, Hiroshi; Yamasaki, Ryohei; Arai, Kohei
2008-10-01
TZ-SCAN is a simple and low cost thermal imaging device which consists of a single point radiation thermometer on a tripod with a pan-tilt rotator, a DC motor controller board with a USB interface, and a laptop computer for rotator control, data acquisition, and data processing. TZ-SCAN acquires a series of zig-zag scanned data and stores the data as a CSV file. A 2-D thermal distribution image can be retrieved by using the second quefrency peak calculated from the TZ-SCAN data. An experiment is conducted to confirm the validity of the thermal retrieval algorithm. The experimental result shows sufficient accuracy for 2-D thermal distribution image retrieval.
Photothermal measurements of high Tc superconductors
NASA Astrophysics Data System (ADS)
Fanton, J. T.; Mitzi, D. B.; Kapitulnik, A.; Khuri-Yakub, B. T.; Kino, G. S.; Gazit, D.; Feigelson, R. S.
1989-08-01
We demonstrate a photothermal method for making point measurements of the thermal conductivities of high Tc superconductors. Images made at room temperature on polycrystalline materials show the thermal inhomogeneities. Measurements on single-crystal Bi2Sr2CaCu2Ox compounds reveal a very large anisotropy of about 7:1 in the thermal conductivity.
Intraocular scattering compensation in retinal imaging
Christaras, Dimitrios; Ginis, Harilaos; Pennos, Alexandros; Artal, Pablo
2016-01-01
Intraocular scattering affects fundus imaging in a similar way to how it affects vision: it causes a decrease in contrast that depends both on the intrinsic scattering of the eye and on the dynamic range of the image. Consequently, in cases where the absolute intensity in the fundus image is important, scattering can lead to a wrong estimation. In this paper, a setup capable of acquiring fundus images and objectively estimating intraocular scattering was built, and the acquired images were then used for scattering compensation in fundus imaging. The method consists of two parts: first, the individual's wide-angle Point Spread Function (PSF) is reconstructed at a specific wavelength; second, it is used within an enhancement algorithm on an acquired fundus image to compensate for scattering. As a proof of concept, a single pass measurement with a scatter filter was carried out first and the complete algorithm of PSF reconstruction and scattering compensation was applied. The advantage of the single pass test is that one can compare the reconstructed image with the original one and assess the validity, thus testing the efficiency of the method. Following the test, the algorithm was applied to actual fundus images of human eyes and the contrast of the image before and after the compensation was compared. The comparison showed that, depending on the wavelength, contrast can be reduced by 8.6% under certain conditions. PMID:27867710
Single-Chip CMUT-on-CMOS Front-End System for Real-Time Volumetric IVUS and ICE Imaging
Gurun, Gokce; Tekes, Coskun; Zahorian, Jaime; Xu, Toby; Satir, Sarp; Karaman, Mustafa; Hasler, Jennifer; Degertekin, F. Levent
2014-01-01
Intravascular ultrasound (IVUS) and intracardiac echography (ICE) catheters with real-time volumetric ultrasound imaging capability can provide unique benefits to many interventional procedures used in the diagnosis and treatment of coronary and structural heart diseases. Integration of CMUT arrays with front-end electronics in single-chip configuration allows for implementation of such catheter probes with reduced interconnect complexity, miniaturization, and high mechanical flexibility. We implemented a single-chip forward-looking (FL) ultrasound imaging system by fabricating a 1.4-mm-diameter dual-ring CMUT array using CMUT-on-CMOS technology on a front-end IC implemented in 0.35-µm CMOS process. The dual-ring array has 56 transmit elements and 48 receive elements on two separate concentric annular rings. The IC incorporates a 25-V pulser for each transmitter and a low-noise capacitive transimpedance amplifier (TIA) for each receiver, along with digital control and smart power management. The final shape of the silicon chip is a 1.5-mm-diameter donut with a 430-µm center hole for a guide wire. The overall front-end system requires only 13 external connections and provides 4 parallel RF outputs while consuming an average power of 20 mW. We measured RF A-scans from the integrated single-chip array which show full functionality at 20.1 MHz with 43% fractional bandwidth. We also tested and demonstrated the image quality of the system on a wire phantom and an ex-vivo chicken heart sample. The measured axial and lateral point resolutions are 92 µm and 251 µm, respectively. We successfully acquired volumetric imaging data from the ex-vivo chicken heart with 60 frames per second without any signal averaging. These demonstrative results indicate that single-chip CMUT-on-CMOS systems have the potential to produce real-time volumetric images with image quality and speed suitable for catheter based clinical applications. PMID:24474131
Quantitative Image Restoration in Bright Field Optical Microscopy.
Gutiérrez-Medina, Braulio; Sánchez Miranda, Manuel de Jesús
2017-11-07
Bright field (BF) optical microscopy is regarded as a poor method to observe unstained biological samples due to intrinsic low image contrast. We introduce quantitative image restoration in bright field (QRBF), a digital image processing method that restores out-of-focus BF images of unstained cells. Our procedure is based on deconvolution, using a point spread function modeled from theory. By comparing with reference images of bacteria observed in fluorescence, we show that QRBF faithfully recovers shape and enables quantifying the size of individual cells, even from a single input image. We applied QRBF in a high-throughput image cytometer to assess shape changes in Escherichia coli during hyperosmotic shock, finding size heterogeneity. We demonstrate that QRBF is also applicable to eukaryotic cells (yeast). Altogether, digital restoration emerges as a straightforward alternative to methods designed to generate contrast in BF imaging for quantitative analysis. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Long-tao; Jiang, Ning; Lv, Ming-shan
2015-10-01
With the emergence of anti-ship missiles with infrared imaging guidance, traditional single jamming measures, because of flaws in their jamming mechanisms and technology or unsuitable use, greatly reduce the survival probability of warships in future naval battles. Integrated jamming that combines infrared signature weakening with smoke screens can jam both the search and the tracking of an infrared imaging guidance system, is feasible to employ in combination, and achieves the best jamming effect. This conclusion has important practical implications for raising the antimissile capability of surface ships. With the development of guidance technology, infrared guidance has expanded from point-source homing to infrared imaging guidance, which has made breakthrough progress. An infrared imaging guidance system can use two-dimensional infrared image information of the target to achieve precise tracking, and offers higher guidance precision, better concealment, stronger anti-jamming ability, and the capability to aim at the key parts of a target. Traditional single infrared smoke-screen jamming or infrared decoy flares cannot impose effective interference on such seekers. Therefore, how to effectively counter the threat of infrared imaging guided weapons, and thereby improve the antimissile capability of surface ships, is an urgent problem to be solved.
Reconstruction of three-dimensional porous media using a single thin section
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Sahimi, Muhammad
2012-06-01
The purpose of any reconstruction method is to generate realizations of two- or multiphase disordered media that honor limited data for them, with the hope that the realizations provide accurate predictions for those properties of the media for which there are no data available, or their measurement is difficult. An important example of such stochastic systems is porous media for which the reconstruction technique must accurately represent their morphology—the connectivity and geometry—as well as their flow and transport properties. Many of the current reconstruction methods are based on low-order statistical descriptors that fail to provide accurate information on the properties of heterogeneous porous media. On the other hand, due to the availability of high resolution two-dimensional (2D) images of thin sections of a porous medium, and at the same time, the high cost, computational difficulties, and even unavailability of complete 3D images, the problem of reconstructing porous media from 2D thin sections remains an outstanding unsolved problem. We present a method based on multiple-point statistics in which a single 2D thin section of a porous medium, represented by a digitized image, is used to reconstruct the 3D porous medium to which the thin section belongs. The method utilizes a 1D raster path for inspecting the digitized image, and combines it with a cross-correlation function, a grid splitting technique for deciding the resolution of the computational grid used in the reconstruction, and the Shannon entropy as a measure of the heterogeneity of the porous sample, in order to reconstruct the 3D medium. It also utilizes an adaptive technique for identifying the locations and optimal number of hard (quantitative) data points that one can use in the reconstruction process. The method is tested on high resolution images for Berea sandstone and a carbonate rock sample, and the results are compared with the data. To make the comparison quantitative, two sets of statistical tests consisting of the autocorrelation function, histogram matching of the local coordination numbers, the pore and throat size distributions, multiple-points connectivity, and single- and two-phase flow permeabilities are used. The comparison indicates that the proposed method reproduces the long-range connectivity of the porous media, with the computed properties being in good agreement with the data for both porous samples. The computational efficiency of the method is also demonstrated.
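One core ingredient of such a cross-correlation-based, multiple-point reconstruction is finding, for each position along the raster path, the training-image patch that best matches the already-simulated overlap region. The fragment below sketches only that patch search, using an FFT-based masked sum of squared differences; the raster path, grid splitting, Shannon-entropy measure and hard-data conditioning of the full method are omitted, and all names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def best_patch(training_image, template, mask):
    """Return the training-image patch minimizing the sum of squared
    differences to `template` over the known region (mask == 1)."""
    ti = training_image.astype(float)
    m = mask.astype(float)
    t = template.astype(float) * m
    # expand sum((TI - t)^2 * m) into correlation terms computed via FFT
    ti2 = fftconvolve(ti ** 2, m[::-1, ::-1], mode='valid')
    cross = fftconvolve(ti, t[::-1, ::-1], mode='valid')
    ssd = ti2 - 2.0 * cross + (t ** 2).sum()
    r, c = np.unravel_index(np.argmin(ssd), ssd.shape)
    h, w = template.shape
    return ti[r:r + h, c:c + w]
```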
Optical comparison of multizone and single-zone photorefractive keratectomy
NASA Astrophysics Data System (ADS)
Gonzalez-Cirre, Xochitl; Manns, Fabrice; Rol, Pascal O.; Parel, Jean-Marie A.
1997-05-01
The purpose is to calculate and compare the point-spread function and the central ablation depth (CAD) of a paraxial eye model after photo-refractive keratectomy (PRK), with single and multizone treatments. A modified Le Grand-El Hage paraxial eye model, with a pupil diameter ranging from 2 to 8 mm, was used. Ray-tracing was performed for initial myopia ranging from 1 to 10 D, after single zone PRK, after double zone PRK, and after triple zone PRK. The ray-tracing of a parallel incident beam was calculated by using the paraxial matrix method. At equal CAD, the optical image quality is better after single zone treatments. Multizone treatments do not seem to be advantageous optically.
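Paraxial matrix ray tracing of this kind reduces every surface and gap to a 2x2 matrix acting on a (height, angle) ray vector. The sketch below shows the basic machinery; the corneal radii and refractive indices in the commented example are merely illustrative values, not the parameters of the modified Le Grand-El Hage model.

```python
import numpy as np

def translation(d):
    """Free propagation over an axial distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def refraction(R, n1, n2):
    """Paraxial refraction at a spherical surface of radius R (n1 -> n2),
    acting on a (height y, angle u) ray vector."""
    return np.array([[1.0, 0.0], [-(n2 - n1) / (R * n2), n1 / n2]])

def trace(ray, elements):
    """Propagate a paraxial ray through the listed elements in order."""
    M = np.eye(2)
    for E in elements:
        M = E @ M
    return M @ ray

# illustrative two-surface cornea; an ablation changes the anterior radius and
# hence the traced ray height/angle on the image side (values are assumed):
# out = trace(np.array([1.0e-3, 0.0]),
#             [refraction(7.8e-3, 1.000, 1.377), translation(0.5e-3),
#              refraction(6.5e-3, 1.377, 1.336)])
```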
Frood, R; Baren, J; McDermott, G; Bottomley, D; Patel, C; Scarsbrook, A
2018-04-30
To evaluate the efficacy of single time-point half-body (skull base to thighs) fluorine-18 choline positron emission tomography-computed tomography (PET-CT) compared to a triple-phase acquisition protocol in the detection of prostate carcinoma recurrence. Consecutive choline PET-CT studies performed at a single tertiary referral centre in patients with biochemical recurrence of prostate carcinoma between September 2012 and March 2017 were reviewed retrospectively. The indication for the study, the imaging protocol used, the imaging findings, whether management was influenced by the PET-CT, and the subsequent patient outcome were recorded. Ninety-one examinations were performed during the study period; 42 were carried out using a triple-phase protocol (dynamic pelvic imaging for 20 minutes after tracer injection, half-body acquisition at 60 minutes, and delayed pelvic scan at 90 minutes) between 2012 and August 2015. Subsequently, following an interim review of diagnostic performance, a streamlined protocol and appropriate-use criteria were introduced. Forty-nine examinations were carried out using the single-phase protocol between 2015 and 2017. Twenty-nine (69%) of the triple-phase studies were positive for recurrence compared to 38 (78%) of the single-phase studies. Only one patient who had a single-phase study would have benefited from a dynamic acquisition; that patient has required no further treatment or imaging and is currently under prostate-specific antigen (PSA) surveillance. Choline PET-CT remains a useful tool for the detection of prostate recurrence when used in combination with appropriate-use criteria. Removal of the dynamic and delayed acquisition phases reduces study time without adversely affecting accuracy. Benefits include a shorter imaging time, which improves patient comfort, reduced cost, and improved scanner efficiency. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
The depth estimation of 3D face from single 2D picture based on manifold learning constraints
NASA Astrophysics Data System (ADS)
Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia
2018-04-01
The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE approach based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset to reconstruct the 3D face depth information greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the pre-reduction feature information of each cluster center is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimation results; the computational complexity is thus greatly reduced. Compared with the traditional traversal search estimation method, although the error rate of the proposed method is reduced by 0.49, the number of searches decreases with the change of category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
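A minimal sketch of the subset-selection stage, assuming scikit-learn and hypothetical feature arrays: the training feature vectors are embedded with t-SNE, clustered with K-means, and a query face is assigned to the cluster whose mean (pre-reduction) feature vector is nearest in Euclidean distance. The depth-estimation step itself (the method of Kong D) is not reproduced here.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

def build_subsets(train_feats, n_clusters=5, random_state=0):
    """Embed the 3D-face feature vectors with t-SNE and split the training
    database into subsets by K-means clustering of the 2-D embedding."""
    emb = TSNE(n_components=2, random_state=random_state).fit_transform(train_feats)
    return KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=random_state).fit_predict(emb)

def select_subset(query_feat, train_feats, labels):
    """Pick the subset whose mean feature vector (in the original space) is
    closest to the query's feature points; returns training indices."""
    centers = {k: train_feats[labels == k].mean(axis=0) for k in np.unique(labels)}
    best = min(centers, key=lambda k: np.linalg.norm(query_feat - centers[k]))
    return np.where(labels == best)[0]
```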
Adhikari, Srikar
2014-06-01
To compare images obtained using two linear transducers with different frequency ranges, and to determine whether there is a significant difference in the quality of images between the two transducers for medical decision-making. This was a single-blinded, cross-sectional study at an academic medical center. Twenty-five emergency medicine clinical scenarios with ultrasound images (using both 10-5 and 14-5 MHz transducers) covering a variety of point-of-care ultrasound applications were presented to four emergency physician sonographers, who were blinded to the study hypothesis and the type of transducer used to obtain the images. On a scale of 1-10, the mean image quality rating was 7.09 (95% CI 6.73-7.45) for the 10-5 MHz transducer and 6.49 (95% CI 5.99-6.99) for the 14-5 MHz transducer. In the majority of cases (84%, 95% CI 75.7-92.3%), sonographers indicated that images obtained with the 10-5 MHz transducer were satisfactory for medical decision-making. They preferred images obtained with the 10-5 MHz transducer over the 14-5 MHz transducer in 39% (95% CI 30-50%) of cases; images obtained with the 14-5 MHz transducer were preferred in only 16% (95% CI 7.7-24.3%) of cases. The 14-5 MHz transducer has a slight advantage over the 10-5 MHz transducer for ocular, upper airway, and musculoskeletal (tendon) ultrasound applications. A 10-5 MHz linear transducer is adequate to obtain images that can be used for medical decision-making in a variety of point-of-care ultrasound applications.
Evaluation of PACS in a multihospital environment
NASA Astrophysics Data System (ADS)
Siegel, Eliot L.; Reiner, Bruce I.; Protopapas, Zenon
1998-07-01
Although a number of authors have described the challenges and benefits of filmless operation using a hospital-wide Picture Archival and Communication System (PACS), there have been few descriptions of a multi-hospital wide area PACS. The purpose of this paper is to describe our two and a half year experience with PACS in an integrated multi-facility health care environment, the Veterans Affairs Maryland Health Care System (VAMHCS). On June 17, 1995, the Radiology and Nuclear Medicine services of four medical centers were integrated to form the VA Maryland Health Care System, creating a single multi-facility imaging department. The facilities consisted of the Baltimore VA (acute and outpatient care, tertiary referral center), Ft. Howard (primarily long-term care), Perry Point (primarily psychiatric care), and the Baltimore Rehabilitation and Extended Care facility (nursing home). The combined number of studies at all four sites is slightly more than 80,000 examinations per year. In addition to residents and fellows, the number of radiologists at Baltimore was approximately seven, with two at Perry Point, one at Ft. Howard, and no radiologists at the Rehabilitation and Extended Care facility. A single HIS/RIS, which is physically located at the Baltimore VAMC, is utilized for all four medical centers. The multi-facility image management and communication system utilizes two separate PAC Systems that are physically located at the Baltimore VA Medical Center (BVAMC). The commercial system (GE Medical Systems) has been in place in Baltimore for more than 4 1/2 years and is utilized primarily in the acquisition, storage, distribution and display of radiology and nuclear medicine studies. The second PACS is the VISTA Imaging System, which has been developed as a module of the VA's HIS/RIS by and for the Department of Veterans Affairs. All of the radiology images obtained on the commercial PACS are requested by the VISTA Imaging System using DICOM query/retrieve commands and are stored on a separate server and optical jukebox. Additionally, the VISTA system is used to store all images obtained by all specialties in the medical center including pathology, dermatology, GI medicine, surgery, podiatry, ophthalmology, etc. Using this two-PAC-system approach, the hospital is able to achieve redundancy with regard to image storage, retrieval, and display of radiology images. The transition to a 'virtual' multi-facility imaging department was accomplished over a period of two years. Initially, Perry Point and Ft. Howard replaced their general radiographic film processors with Computed Radiography (CR) units. The CR units and, subsequently, the CT and ultrasound systems at Perry Point were interfaced (DeJarnette Research Systems) with the commercial PACS located in Baltimore. A HIS/RIS-to-modality interface was developed (DeJarnette and Fuji Medical Systems) between the computed radiography and CT units and the VISTA Information System at Baltimore. A digital dictation system was recently implemented across the multi-facility network. The integration of the three radiology departments into a single virtual imaging department serving four medical centers has resulted in a number of benefits. Economically, there has been the elimination via attrition of one and a half radiologist FTEs (full-time equivalents) and an administrative position, resulting in annual savings of more than $375,000.
Additionally, the expenditures for moonlighter coverage for vacation, meeting, and sick leave have been eliminated. There is now subspecialty coverage for primary or secondary interpretation and for peer review.
Magnetic resonance imaging-ultrasound fusion biopsy for prediction of final prostate pathology.
Le, Jesse D; Stephenson, Samuel; Brugger, Michelle; Lu, David Y; Lieu, Patricia; Sonn, Geoffrey A; Natarajan, Shyam; Dorey, Frederick J; Huang, Jiaoti; Margolis, Daniel J A; Reiter, Robert E; Marks, Leonard S
2014-11-01
We explored the impact of magnetic resonance imaging-ultrasound fusion prostate biopsy on the prediction of final surgical pathology. A total of 54 consecutive men undergoing radical prostatectomy at UCLA after fusion biopsy were included in this prospective, institutional review board approved pilot study. Using magnetic resonance imaging-ultrasound fusion, tissue was obtained from a 12-point systematic grid (mapping biopsy) and from regions of interest detected by multiparametric magnetic resonance imaging (targeted biopsy). A single radiologist read all magnetic resonance imaging, and a single pathologist independently rereviewed all biopsy and whole mount pathology, blinded to prior interpretation and matched specimen. Gleason score concordance between biopsy and prostatectomy was the primary end point. Mean patient age was 62 years and median prostate specific antigen was 6.2 ng/ml. Final Gleason score at prostatectomy was 6 (13%), 7 (70%) and 8-9 (17%). A tertiary pattern was detected in 17 (31%) men. Of 45 high suspicion (image grade 4-5) magnetic resonance imaging targets 32 (71%) contained prostate cancer. The per core cancer detection rate was 20% by systematic mapping biopsy and 42% by targeted biopsy. The highest Gleason pattern at prostatectomy was detected by systematic mapping biopsy in 54%, targeted biopsy in 54% and a combination in 81% of cases. Overall 17% of cases were upgraded from fusion biopsy to final pathology and 1 (2%) was downgraded. The combination of targeted biopsy and systematic mapping biopsy was needed to obtain the best predictive accuracy. In this pilot study magnetic resonance imaging-ultrasound fusion biopsy allowed for the prediction of final prostate pathology with greater accuracy than that reported previously using conventional methods (81% vs 40% to 65%). If confirmed, these results will have important clinical implications. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Spatio-temporal Variability of Albedo and its Impact on Glacier Melt Modelling
NASA Astrophysics Data System (ADS)
Kinnard, C.; Mendoza, C.; Abermann, J.; Petlicki, M.; MacDonell, S.; Urrutia, R.
2017-12-01
Albedo is an important variable for the surface energy balance of glaciers, yet its representation within distributed glacier mass-balance models is often greatly simplified. Here we study the spatio-temporal evolution of albedo on Glacier Universidad, central Chile (34°S, 70°W), using time-lapse terrestrial photography, and investigate its effect on the shortwave radiation balance and modelled melt rates. A 12 megapixel digital single-lens reflex camera was set up overlooking the glacier and programmed to take three daily images of the glacier during a two-year period (2012-2014). One image was chosen for each day with no cloud shading on the glacier. The RAW images were projected onto a 10 m resolution digital elevation model (DEM), using the IMGRAFT software (Messerli and Grinsted, 2015). A six-parameter camera model was calibrated using a single image and a set of 17 ground control points (GCPs), yielding a georeferencing accuracy of <1 pixel in image coordinates. The camera rotation was recalibrated for new images based on a set of common tie points over stable terrain, thus accounting for possible camera movement over time. The reflectance values from the projected image were corrected for topographic and atmospheric influences using a parametric solar irradiation model, following a modified algorithm based on Corripio (2004), and then converted to albedo using reference albedo measurements from an on-glacier automatic weather station (AWS). The image-based albedo was found to compare well with independent albedo observations from a second AWS in the glacier accumulation area. Analysis of the albedo maps showed that the albedo is more spatially variable than the incoming solar radiation, making albedo a more important factor in the spatial variability of the energy balance. The incorporation of albedo maps within an enhanced temperature index melt model revealed that the spatio-temporal variability of albedo is an important factor for the calculation of glacier-wide meltwater fluxes.
Delorme, Arnaud; Miyakoshi, Makoto; Jung, Tzyy-Ping; Makeig, Scott
2014-01-01
With the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings including electroencephalography (EEG) has become of increasing interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects. We have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs is represented as color-coded horizontal lines that are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or by some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source), can reveal aspects of the multifold complexities of trial-to-trial EEG data variability. This study demonstrates new methods for computing and visualizing grand ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute. PMID:25447029
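The basic ERP-image construction, sorting single-trial epochs, smoothing across neighbouring trials with a moving window, and displaying the stack as a colour-coded image, can be sketched as follows; the smoothing width and colour map are arbitrary choices and this is not the EEGLAB implementation.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import convolve

def erp_image(epochs, sort_by=None, smooth=10):
    """epochs: (n_trials, n_times) array of potential or power values.
    Trials are optionally sorted (e.g. by reaction time), smoothed with a
    moving window across trials, and shown one horizontal line per trial."""
    data = epochs if sort_by is None else epochs[np.argsort(sort_by)]
    kernel = np.ones((smooth, 1)) / smooth        # across-trial moving window
    smoothed = convolve(data, kernel, mode='nearest')
    plt.imshow(smoothed, aspect='auto', origin='lower', cmap='RdBu_r')
    plt.xlabel('time (samples)')
    plt.ylabel('trials (sorted)')
    plt.colorbar(label='amplitude (a.u.)')
    return smoothed
```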
Accurate label-free 3-part leukocyte recognition with single cell lens-free imaging flow cytometry.
Li, Yuqian; Cornelis, Bruno; Dusa, Alexandra; Vanmeerbeeck, Geert; Vercruysse, Dries; Sohn, Erik; Blaszkiewicz, Kamil; Prodanov, Dimiter; Schelkens, Peter; Lagae, Liesbet
2018-05-01
Three-part white blood cell differentials, which are key to routine blood workups, are typically performed in centralized laboratories on conventional hematology analyzers operated by highly trained staff. With the trend of developing miniaturized blood analysis tools for point-of-need use on the rise, in order to accelerate turnaround times and move routine blood testing away from centralized facilities, our group has developed a highly miniaturized holographic imaging system for generating lens-free images of white blood cells in suspension. Analysis and classification of its output data constitute the final crucial step in ensuring appropriate accuracy of the system. In this work, we implement reference holographic images of single white blood cells in suspension in order to establish an accurate ground truth and increase classification accuracy. We also automate the entire workflow for analyzing the output and demonstrate a clear improvement in the accuracy of the 3-part classification. High-dimensional optical and morphological features are extracted from reconstructed digital holograms of single cells using the ground-truth images, and advanced machine learning algorithms are investigated and implemented to obtain 99% classification accuracy. Representative features of the three white blood cell subtypes are selected and give comparable results, with a focus on rapid cell recognition and decreased computational cost. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Self-correcting multi-atlas segmentation
NASA Astrophysics Data System (ADS)
Gao, Yi; Wilford, Andrew; Guo, Liang
2016-03-01
In multi-atlas segmentation, one typically registers several atlases to the new image, and their respective segmented label images are transformed and fused to form the final segmentation. After each registration, the quality of the registration is reflected by a single global value: the final registration cost. Ideally, if the quality of the registration could be evaluated at each point, independently of the registration process, and if that evaluation also provided a direction in which the deformation could be further improved, the overall segmentation performance could be improved. We propose such a self-correcting multi-atlas segmentation method. The method is applied to hippocampus segmentation from brain images and a statistically significant improvement is observed.
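For context, a plain multi-atlas fusion step (before any self-correction) can be as simple as weighted majority voting over the warped atlas label maps, as in the sketch below; the per-atlas scalar weights stand in for the point-wise, registration-independent quality evaluation that the proposed method introduces.

```python
import numpy as np

def fuse_labels(warped_label_maps, weights=None):
    """Weighted majority voting over integer label maps that have already been
    warped into the target image space (one array per atlas)."""
    maps = np.stack(warped_label_maps)              # (n_atlases, z, y, x)
    if weights is None:
        weights = np.ones(maps.shape[0])
    labels = np.unique(maps)
    votes = np.zeros((len(labels),) + maps.shape[1:])
    for i, lab in enumerate(labels):
        votes[i] = np.tensordot(weights, (maps == lab).astype(float), axes=1)
    return labels[np.argmax(votes, axis=0)]
```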
Mechanical Strains Induced in Osteoblasts by Use of Point Femtosecond Laser Targeting
Bomzon, Ze'ev; Day, Daniel; Gu, Min; Cartmell, Sarah
2006-01-01
A study demonstrating how ultrafast laser radiation stimulates osteoblasts is presented. The study employed a custom made optical system that allowed for simultaneous confocal cell imaging and targeted femtosecond pulse laser irradiation. When femtosecond laser light was focused onto a single cell, a rise in intracellular Ca2+ levels was observed followed by contraction of the targeted cell. This contraction caused deformation of neighbouring cells leading to a heterogeneous strain field throughout the monolayer. Quantification of the strain fields in the monolayer using digital image correlation revealed local strains much higher than threshold values typically reported to stimulate extracellular bone matrix production in vitro. This use of point targeting with femtosecond pulse lasers could provide a new method for stimulating cell activity in orthopaedic tissue engineering. PMID:23165014
Aerial Images and Convolutional Neural Network for Cotton Bloom Detection.
Xu, Rui; Li, Changying; Paterson, Andrew H; Jiang, Yu; Sun, Shangpeng; Robertson, Jon S
2017-01-01
Monitoring flower development can provide useful information for production management, estimating yield and selecting specific genotypes of crops. The main goal of this study was to develop a methodology to detect and count cotton flowers, or blooms, using color images acquired by an unmanned aerial system. The aerial images were collected from two test fields in 4 days. A convolutional neural network (CNN) was designed and trained to detect cotton blooms in raw images, and their 3D locations were calculated using the dense point cloud constructed from the aerial images with the structure from motion method. The quality of the dense point cloud was analyzed and plots with poor quality were excluded from data analysis. A constrained clustering algorithm was developed to register the same bloom detected from different images based on the 3D location of the bloom. The accuracy and incompleteness of the dense point cloud were analyzed because they affected the accuracy of the 3D location of the blooms and thus the accuracy of the bloom registration result. The constrained clustering algorithm was validated using simulated data, showing good efficiency and accuracy. The bloom count from the proposed method was comparable with the number counted manually with an error of -4 to 3 blooms for the field with a single plant per plot. However, more plots were underestimated in the field with multiple plants per plot due to hidden blooms that were not captured by the aerial images. The proposed methodology provides a high-throughput method to continuously monitor the flowering progress of cotton.
Techniques of noninvasive optical tomographic imaging
NASA Astrophysics Data System (ADS)
Rosen, Joseph; Abookasis, David; Gokhler, Mark
2006-01-01
Recently invented methods of optical tomographic imaging through scattering and absorbing media are presented. In one method, the three-dimensional structure of an object hidden between two biological tissues is recovered from many noisy speckle pictures obtained at the output of a multi-channeled optical imaging system. Objects are recovered from many speckled images observed by a digital camera through two stereoscopic microlens arrays. Each microlens in each array generates a speckle image of the object buried between the layers. In the computer, each image is Fourier transformed jointly with an image of the speckled point-like source captured under the same conditions. A set of the squared magnitudes of the Fourier-transformed pictures is accumulated to form a single average picture. This final picture is again Fourier transformed, resulting in the three-dimensional reconstruction of the hidden object. In the other method, the effect of spatial longitudinal coherence is used for imaging through an absorbing layer with varying thickness, or varying index of refraction, along the layer. The technique is based on the synthesis of a multiple-peak spatial degree of coherence. This degree of coherence enables us to scan simultaneously different sample points at different altitudes, and thus decreases the acquisition time. The same multi-peak degree of coherence is also used for imaging through the absorbing layer. All our experiments are performed with a quasi-monochromatic light source. Therefore problems of dispersion and inhomogeneous absorption are avoided.
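The reconstruction recipe in the first method maps directly onto a joint-transform-correlator style computation. The sketch below is a simplified two-dimensional, single-slice version with assumed array shapes: each object speckle image is Fourier transformed jointly with the point-source speckle image, the squared magnitudes are averaged over the microlens channels, and a final Fourier transform yields the correlation plane that contains the reconstruction.

```python
import numpy as np

def reconstruct(object_speckles, source_speckles):
    """object_speckles, source_speckles: sequences of same-shaped 2-D arrays,
    one pair per microlens channel."""
    acc = None
    for obj, src in zip(object_speckles, source_speckles):
        joint = np.concatenate([src, obj], axis=1)       # side-by-side input plane
        power = np.abs(np.fft.fft2(joint)) ** 2          # joint power spectrum
        acc = power if acc is None else acc + power
    acc /= len(object_speckles)
    return np.abs(np.fft.fftshift(np.fft.fft2(acc)))     # correlation plane
```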
Textureless Macula Swelling Detection with Multiple Retinal Fundus Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul
2010-01-01
Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras can be employed by operators with limited training for telemedicine or Point-of-Care applications. We propose a novel technique that uses uncalibrated multiple-view fundus images to analyse the swelling of the macula. This innovation enables the detection and quantitative measurement of swollen areas by remote ophthalmologists. This capability is not available with a single image and is prone to error with stereo fundus cameras. We also present automatic algorithms to measure features from the reconstructed image which are useful in Point-of-Care automated diagnosis of early macular edema, e.g., before the appearance of exudation. The technique presented is divided into three parts: first, a preprocessing technique simultaneously enhances the dark microstructures of the macula and equalises the image; second, all available views are registered using non-morphological sparse features; finally, a dense pyramidal optical flow is calculated for all the images and statistically combined to build a naive height-map of the macula. Results are presented on three sets of synthetic images and two sets of real world images. These preliminary tests show the ability to infer a minimum swelling of 300 microns and to correlate the reconstruction with the swollen location.
Accessing the exceptional points of parity-time symmetric acoustics
Shi, Chengzhi; Dubois, Marc; Chen, Yun; Cheng, Lei; Ramezani, Hamidreza; Wang, Yuan; Zhang, Xiang
2016-01-01
Parity-time (PT) symmetric systems experience a phase transition between PT-exact and PT-broken phases at an exceptional point. These PT phase transitions contribute significantly to the design of single-mode lasers, coherent perfect absorbers, isolators, and diodes. However, such exceptional points are extremely difficult to access in practice because of the dispersive behaviour of most loss and gain materials required in PT symmetric systems. Here we introduce a method to systematically tame these exceptional points and control PT phases. Our experimental demonstration hinges on an active acoustic element that realizes a complex-valued potential and simultaneously controls the multiple interference in the structure. The manipulation of exceptional points offers new routes to broaden applications for PT symmetric physics in acoustics, optics, microwaves and electronics, which are essential for sensing, communication and imaging. PMID:27025443
Cartographic analyses of geographic information available on Google Earth Images
NASA Astrophysics Data System (ADS)
Oliveira, J. C.; Ramos, J. R.; Epiphanio, J. C.
2011-12-01
The purpose was to evaluate the planimetric accuracy of satellite images available in the Google Earth database. These images cover the vicinity of the Federal University of Viçosa, Minas Gerais, Brazil. The methodology evaluated the geographic information of three groups of images defined according to the level of detail presented on the screen (zoom). These groups of images were labeled Zoom 1000 (a single image for the entire study area), Zoom 100 (a mosaic of 73 images) and Zoom 100 with geometric correction (the same mosaic after a geometric correction applied through control points). For each group of images, cartographic accuracy was measured based on statistical analyses and the parameters of Brazilian legislation for planimetric mapping. For this evaluation, 22 points were identified in each group of images, and the coordinates of each point were compared to field coordinates obtained by GPS (Global Positioning System). Table 1 shows the results for accuracy (based on a threshold equal to 0.5 mm * mapping scale) and tendency (abscissa and ordinate) between the image coordinates and the field coordinates. The geometric correction applied to the Zoom 100 group reduced the tendencies identified earlier, and the statistical tests indicated that the data are usable for mapping at a scale of 1/5000 with errors smaller than 0.5 mm * scale. The analyses confirmed the quality of the cartographic data provided by Google, as well as the possibility of reducing the positional divergences present in the data. It can be concluded that geographic information can be obtained from the database available on Google Earth; however, the level of detail (zoom) used at the time of viewing and capturing information on the screen influences the cartographic quality of the mapping. Despite the cartographic and thematic potential of the database, it is important to note that both the software and the data distributed by Google Earth are subject to policies for use and distribution.
Table 1 - PLANIMETRIC ANALYSIS
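A simplified version of such an accuracy-and-tendency check, assuming coordinate arrays in metres and using an RMSE criterion with one-sample t-tests rather than the exact procedure of the Brazilian cartographic accuracy standard, might look like this:

```python
import numpy as np
from scipy import stats

def planimetric_check(img_xy, gps_xy, scale=5000, threshold_mm=0.5, alpha=0.10):
    """img_xy, gps_xy: (n_points, 2) arrays of easting/northing in metres.
    Returns whether the RMSE meets 0.5 mm at the mapping scale and whether a
    significant tendency (bias) is detected in each axis."""
    diff = img_xy - gps_xy
    tol = threshold_mm / 1000.0 * scale                 # e.g. 2.5 m at 1:5,000
    rmse = np.sqrt(np.mean(np.hypot(diff[:, 0], diff[:, 1]) ** 2))
    tendency = {axis: stats.ttest_1samp(diff[:, i], 0.0).pvalue < alpha
                for i, axis in enumerate(('X', 'Y'))}   # True => significant bias
    return rmse <= tol, tendency
```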
Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2018-05-01
360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
Anatomy guided automated SPECT renal seed point estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar; Kumar, Sailendra
2010-04-01
Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting ROIs from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, though the challenge involved is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating the seed point locations of both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is based on the premise that the anatomical location of the bladder relative to the kidneys will not differ much. A model is generated based on manual segmentation of the bladder and both kidneys on 10 patient datasets (including sum and max images). Centroids are estimated for the manually segmented bladder and kidneys. The relatively easier bladder segmentation is performed first, and the bladder centroid coordinates are then fed into the model to generate seed points for the kidneys. The percentage errors observed between the ground-truth centroid coordinates of the organs and the values estimated by our approach are acceptable. Percentage errors of approximately 1%, 6% and 2% are observed in the X coordinates, and approximately 2%, 5% and 8% in the Y coordinates, of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, the ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.
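The regression from bladder centroid to kidney seed points can be sketched with an ordinary linear model; the function and array names below are hypothetical, and the actual model may use additional anatomical predictors or separate models for sum and max images.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_seed_model(bladder_centroids, kidney_centroids):
    """bladder_centroids: (n_cases, 2); kidney_centroids: (n_cases, 4) holding
    (xL, yL, xR, yR) from the manually segmented training datasets."""
    return LinearRegression().fit(bladder_centroids, kidney_centroids)

def kidney_seeds(model, bladder_centroid):
    """Predict left and right kidney seed points for a new study from the
    centroid of its (easier) bladder segmentation."""
    pred = model.predict(np.asarray(bladder_centroid, float).reshape(1, -1))[0]
    return tuple(pred[:2]), tuple(pred[2:])
```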
Development of a single-axis ultrasonic levitator and the study of the radial particle oscillations
NASA Astrophysics Data System (ADS)
Baer, Sebastian; Andrade, Marco A. B.; Esen, Cemal; Adamowski, Julio Cezar; Ostendorf, Andreas
2012-05-01
This work describes the development and analysis of a new single-axis acoustic levitator, which consists of a 38 kHz Langevin-type piezoelectric transducer with a concave radiating surface and a concave reflector. The new levitator design makes it possible to significantly reduce the electric power necessary to levitate particles and to stabilize the levitated sample in both the radial and axial directions. In this investigation the lateral oscillations of a levitated particle were measured with a single-point Laser Doppler Vibrometer (LDV) and an image evaluation technique. The lateral oscillations were measured for different values of particle diameter, particle density and applied electrical power.
Technical Considerations on Scanning and Image Analysis for Amyloid PET in Dementia.
Akamatsu, Go; Ohnishi, Akihito; Aita, Kazuki; Ikari, Yasuhiko; Yamamoto, Yasuji; Senda, Michio
2017-01-01
Brain imaging techniques, such as computed tomography (CT), magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT), and positron emission tomography (PET), can provide essential and objective information for the early and differential diagnosis of dementia. Amyloid PET is especially useful to evaluate the amyloid-β pathological process as a biomarker of Alzheimer's disease. This article reviews critical points about technical considerations on the scanning and image analysis methods for amyloid PET. Each amyloid PET agent has its own proper administration instructions and recommended uptake time, scan duration, and the method of image display and interpretation. In addition, we have introduced general scanning information, including subject positioning, reconstruction parameters, and quantitative and statistical image analysis. We believe that this article could make amyloid PET a more reliable tool in clinical study and practice.
Multifocus confocal Raman microspectroscopy for fast multimode vibrational imaging of living cells.
Okuno, Masanari; Hamaguchi, Hiro-o
2010-12-15
We have developed a multifocus confocal Raman microspectroscopic system for the fast multimode vibrational imaging of living cells. It consists of an inverted microscope equipped with a microlens array, a pinhole array, a fiber bundle, and a multichannel Raman spectrometer. Forty-eight Raman spectra from 48 foci under the microscope are simultaneously obtained by using multifocus excitation and image-compression techniques. The multifocus confocal configuration suppresses the background generated from the cover glass and the cell culturing medium so that high-contrast images are obtainable with a short accumulation time. The system enables us to obtain multimode (10 different vibrational modes) vibrational images of living cells in tens of seconds with only 1 mW laser power at one focal point. This image acquisition time is more than 10 times faster than that in conventional single-focus Raman microspectroscopy.
HUBBLE VIEWS DISTANT GALAXIES THROUGH A COSMIC LENS
NASA Technical Reports Server (NTRS)
2002-01-01
Near-infrared image of Jupiter taken in a 2.22 micron filter from the Apache Point Observatory 3.5-meter telescope at 05:35 UT July 19. The G and D impact sites appear in this spectral region of strong methane absorption as a single white cloud over 14,000 km in diameter. At higher contrast, the impact regions can be resolved into an intensely bright core about 4,000 km in diameter embedded within the larger cloud. Mark Marley and Nancy Chanover, Department of Astronomy, New Mexico State University
Application of information theory to the design of line-scan imaging systems
NASA Technical Reports Server (NTRS)
Huck, F. O.; Park, S. K.; Halyo, N.; Stallman, S.
1981-01-01
Information theory is used to formulate a single figure of merit for assessing the performance of line scan imaging systems as a function of their spatial response (point spread function or modulation transfer function), sensitivity, sampling and quantization intervals, and the statistical properties of a random radiance field. Computational results for the information density and efficiency (i.e., the ratio of information density to data density) are intuitively satisfying and compare well with experimental and theoretical results obtained by earlier investigators concerned with the performance of TV systems.
Trelease, R B
1996-01-01
Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures.
NASA Astrophysics Data System (ADS)
Kurek, A. R.; Stachowski, A.; Banaszek, K.; Pollo, A.
2018-05-01
High-angular-resolution imaging is crucial for many applications in modern astronomy and astrophysics. The fundamental diffraction limit constrains the resolving power of both ground-based and spaceborne telescopes. The recent idea of a quantum telescope based on the optical parametric amplification (OPA) of light aims to bypass this limit for the imaging of extended sources by an order of magnitude or more. We present an updated scheme of an OPA-based device and a more accurate model of the signal amplification by such a device. The semiclassical model that we present predicts that the noise in such a system will form so-called light speckles as a result of light interference in the optical path. Based on this model, we analysed the efficiency of OPA in increasing the angular resolution of the imaging of extended targets and the precise localization of a distant point source. According to our new model, OPA offers a gain in resolved imaging in comparison to classical optics. For a given time-span, we found that OPA can be more efficient in localizing a single distant point source than classical telescopes.
Life After Press: The Role of the Picture Library in Communicating Astronomy to the Public
NASA Astrophysics Data System (ADS)
Evans, G. S.
2005-12-01
Science communication is increasingly led by the image, providing opportunities for 'visual' disciplines such as astronomy to receive greater public exposure. In consequence, there is a huge demand for good and exciting images within the publishing media. The picture library is a conduit linking image makers of all kinds to image buyers of all kinds. The image maker benefits from the exposure of their pictures to the people who want to use them, with minimal time investment, and with the safeguards of effective rights management. The image buyer benefits from a wide choice of images available from a single point of contact, stored in a database that offers a choice between subject-based and conceptual searches. By forming this link between astronomer, professional or amateur, and the publishing media, the picture library helps to make the wonders of astronomy visible to a wider public audience.
Handheld, point-of-care laser speckle imaging
NASA Astrophysics Data System (ADS)
Farraro, Ryan; Fathi, Omid; Choi, Bernard
2016-09-01
Laser speckle imaging (LSI) enables measurement of relative changes in blood flow in biological tissues. We postulate that a point-of-care form factor will lower barriers to routine clinical use of LSI. Here, we describe a first-generation handheld LSI device based on a tablet computer. The coefficient of variation of speckle contrast was <2% after averaging imaging data collected over an acquisition period of 5.3 s. With a single, experienced user, handheld motion artifacts had a negligible effect on data collection. With operation by multiple users, we did not identify any significant difference (p>0.05) between the measured speckle contrast values using either a handheld or mounted configuration. In vivo data collected during occlusion experiments demonstrate that a handheld LSI is capable of both quantitative and qualitative assessment of changes in blood flow. Finally, as a practical application of handheld LSI, we collected data from a 53-day-old neonate with confirmed compromised blood flow in the hand. We readily identified with LSI a region of diminished blood flow in the thumb of the affected hand. Our data collectively suggest that handheld LSI is a promising technique to enable clinicians to obtain point-of-care measurements of blood flow.
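As a rough illustration of the quantities reported above, the following Python sketch computes local speckle contrast (K = standard deviation / mean over a sliding window) and the temporal coefficient of variation of its frame means; the window size is an illustrative assumption, not the device's actual processing chain.

import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw_frame, window=7):
    # Local speckle contrast K = sigma / mean over a sliding window (window size assumed)
    raw = raw_frame.astype(np.float64)
    mean = uniform_filter(raw, window)
    mean_sq = uniform_filter(raw ** 2, window)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)
    return np.sqrt(var) / np.maximum(mean, 1e-9)

def coefficient_of_variation(contrast_frames):
    # Temporal coefficient of variation of the mean speckle contrast across frames
    means = np.array([k.mean() for k in contrast_frames])
    return means.std() / means.mean()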
Pérez-Cota, Fernando; Smith, Richard J; Moradi, Emilia; Marques, Leonel; Webb, Kevin F; Clark, Matt
2015-10-01
At low frequencies ultrasound is a valuable tool to mechanically characterize and image biological tissues. There is much interest in using high-frequency ultrasound to investigate single cells. Mechanical characterization of vegetal and biological cells by measurement of Brillouin oscillations has been demonstrated using ultrasound in the GHz range. This paper presents a method to extend this technique from the previously reported single-point measurements and line scans into a high-resolution acoustic imaging tool. Our technique uses a three-layered metal-dielectric-metal film as a transducer to launch acoustic waves into the cell we want to study. The design of this transducer and measuring system is optimized to overcome the vulnerability of a cell to the exposure of laser light and heat without sacrificing the signal-to-noise ratio. The transducer substrate shields the cell from the laser radiation, efficiently generates acoustic waves, facilitates optical detection in transmission, and aids with heat dissipation away from the cell. This paper discusses the design of the transducers and instrumentation and presents Brillouin frequency images on phantom, fixed, and living cells.
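For orientation, the Brillouin frequency measured in such time-domain experiments relates to the sound velocity through the standard backscattering relation f_B = 2nv/λ; the hedged sketch below simply inverts this relation, with all numerical values purely illustrative.

def sound_velocity_from_brillouin(f_brillouin_hz, wavelength_m, refractive_index):
    # Backscattering Brillouin relation: f_B = 2 * n * v / lambda  =>  v = f_B * lambda / (2 * n)
    return f_brillouin_hz * wavelength_m / (2.0 * refractive_index)

# Illustrative values only: a ~5 GHz oscillation probed at 780 nm in a medium with n ~ 1.38
print(sound_velocity_from_brillouin(5.0e9, 780e-9, 1.38))  # ~1413 m/s, water-like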
RANKING TEM CAMERAS BY THEIR RESPONSE TO ELECTRON SHOT NOISE
Grob, Patricia; Bean, Derek; Typke, Dieter; Li, Xueming; Nogales, Eva; Glaeser, Robert M.
2013-01-01
We demonstrate two ways in which the Fourier transforms of images that consist solely of randomly distributed electrons (shot noise) can be used to compare the relative performance of different electronic cameras. The principle is to determine how closely the Fourier transform of a given image does, or does not, approach that of an image produced by an ideal camera, i.e. one for which single-electron events are modeled as Kronecker delta functions located at the same pixels where the electrons were incident on the camera. Experimentally, the average width of the single-electron response is characterized by fitting a single Lorentzian function to the azimuthally averaged amplitude of the Fourier transform. The reciprocal of the spatial frequency at which the Lorentzian function falls to a value of 0.5 provides an estimate of the number of pixels at which the corresponding line-spread function falls to a value of 1/e. In addition, the excess noise due to stochastic variations in the magnitude of the response of the camera (for single-electron events) is characterized by the amount to which the appropriately normalized power spectrum does, or does not, exceed the total number of electrons in the image. These simple measurements provide an easy way to evaluate the relative performance of different cameras. To illustrate this point we present data for three different types of scintillator-coupled camera plus a silicon-pixel (direct detection) camera. PMID:23747527
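A minimal Python sketch of the measurement procedure described above follows: azimuthally average the Fourier amplitude of a shot-noise image, fit a single Lorentzian, and report the reciprocal of the half-amplitude frequency. The function names, normalisation and initial guess are assumptions for illustration, not the authors' code.

import numpy as np
from scipy.optimize import curve_fit

def azimuthal_average(amplitude):
    # Average a 2D Fourier amplitude over rings about the zero-frequency pixel
    h, w = amplitude.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=amplitude.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

def lorentzian(s, s_half):
    # Equals 0.5 at spatial frequency s = s_half
    return 1.0 / (1.0 + (s / s_half) ** 2)

def single_electron_response_width(shot_noise_image):
    # Fit a Lorentzian to the azimuthally averaged Fourier amplitude of a shot-noise image;
    # 1 / s_half estimates the width (in pixels) at which the line-spread function falls to 1/e.
    amp = np.abs(np.fft.fftshift(np.fft.fft2(shot_noise_image)))
    profile = azimuthal_average(amp)
    profile = profile / profile[1]                                  # normalise low frequencies to ~1
    s = np.arange(profile.size) / float(shot_noise_image.shape[0])  # approx. cycles per pixel
    popt, _ = curve_fit(lorentzian, s[1:], profile[1:], p0=[0.1])
    return 1.0 / popt[0]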
A novel in vitro image-based assay identifies new drug leads for giardiasis.
Hart, Christopher J S; Munro, Taylah; Andrews, Katherine T; Ryan, John H; Riches, Andrew G; Skinner-Adams, Tina S
2017-04-01
Giardia duodenalis is an intestinal parasite that causes giardiasis, a widespread human gastrointestinal disease. Treatment of giardiasis relies on a small arsenal of compounds that can suffer from limitations including side-effects, variable treatment efficacy and parasite drug resistance. Thus new anti-Giardia drug leads are required. The search for new compounds with anti-Giardia activity currently depends on assays that can be labour-intensive, expensive and restricted to measuring activity at a single time-point. Here we describe a new in vitro assay to assess anti-Giardia activity. This image-based assay utilizes the Perkin-Elmer Operetta® and permits automated assessment of parasite growth at multiple time points without cell-staining. Using this new approach, we assessed the "Malaria Box" compound set for anti-Giardia activity. Three compounds with sub-μM activity (IC50 0.6-0.9 μM) were identified as potential starting points for giardiasis drug discovery. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Identifying and Overcoming Obstacles to Point-of-Care Data Collection for Eye Care Professionals
Lobach, David F.; Silvey, Garry M.; Macri, Jennifer M.; Hunt, Megan; Kacmaz, Roje O.; Lee, Paul P.
2005-01-01
Supporting data entry by clinicians is considered one of the greatest challenges in implementing electronic health records. In this paper we describe a formative evaluation study using three different methodologies through which we identified obstacles to point-of-care data entry for eye care and then used the formative process to develop and test solutions to overcome these obstacles. The greatest obstacles were supporting free text annotation of clinical observations and accommodating the creation of detailed diagrams in multiple colors. To support free text entry, we arrived at an approach that captures an image of a free text note and associates this image with related data elements in an encounter note. The detailed diagrams included a color pallet that allowed changing pen color with a single stroke and also captured the diagrams as an image associated with related data elements. During observed sessions with simulated patients, these approaches satisfied the clinicians’ documentation needs by capturing the full range of clinical complexity that arises in practice. PMID:16779083
Elad, M; Feuer, A
1997-01-01
The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
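As a rough illustration of the ML branch of this framework (not the authors' implementation), the sketch below performs least-squares superresolution by gradient descent under a simplified warp-blur-decimate forward model; the shift-based warp, Gaussian blur, approximate adjoint and step size are all assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, shift

def forward(x, s, blur_sigma, factor):
    # Simplified forward model for one low-resolution frame: warp (shift), blur, decimate
    return gaussian_filter(shift(x, s, order=1), blur_sigma)[::factor, ::factor]

def ml_superresolution(lowres, shifts, blur_sigma=1.0, factor=2, iters=50, step=0.5):
    # Least-squares (ML under Gaussian noise) superresolution by gradient descent
    x = np.kron(np.mean(lowres, axis=0), np.ones((factor, factor)))   # initial high-res guess
    for _ in range(iters):
        grad = np.zeros_like(x)
        for yk, s in zip(lowres, shifts):
            resid = forward(x, s, blur_sigma, factor) - yk
            up = np.kron(resid, np.ones((factor, factor)))            # approximate adjoint of decimation
            grad += shift(gaussian_filter(up, blur_sigma), [-s[0], -s[1]], order=1)
        x -= step * grad / len(lowres)
    return x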
On the accuracy potential of focused plenoptic camera range determination in long distance operation
NASA Astrophysics Data System (ADS)
Sardemann, Hannes; Maas, Hans-Gerd
2016-04-01
Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, advances in digital photography, micro-lens fabrication technology and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery is a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
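The deterioration of accuracy with depth follows the usual disparity-based error model; as a hedged stand-in, the sketch below uses the standard stereo relation sigma_Z ≈ Z² σ_d / (f b), with the effective baseline of the micro-lens array and all numbers being illustrative assumptions rather than values from the paper.

def depth_error(depth_m, baseline_m, focal_px, disparity_noise_px):
    # Standard disparity-based range error model: sigma_Z ~ Z^2 * sigma_d / (f * b)
    return depth_m ** 2 * disparity_noise_px / (focal_px * baseline_m)

# Illustrative only: a few-mm effective baseline makes metre-level errors plausible at 50-100 m
print(depth_error(50.0, 0.005, 2000.0, 0.05))   # 12.5 m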
Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration
NASA Astrophysics Data System (ADS)
Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.
2012-02-01
The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.
Single-lens stereovision system using a prism: position estimation of a multi-ocular prism.
Cui, Xiaoyu; Lim, Kah Bin; Zhao, Yue; Kee, Wei Loon
2014-05-01
In this paper, a position estimation method using a prism-based single-lens stereovision system is proposed. A multi-faced prism was considered as a single optical system composed of a few refractive planes. A transformation matrix which relates the coordinates of an object point to its coordinates on the image plane through the refraction of the prism was derived based on geometrical optics. A mathematical model that describes the position of a prism with an arbitrary number of faces using only seven parameters is introduced. This model further extends the application of the single-lens stereovision system using a prism to other areas. Experimental results are presented to demonstrate the effectiveness and robustness of our proposed model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besemer, A; Marsh, I; Bednarz, B
Purpose: The calculation of 3D internal dose distributions in targeted radionuclide therapy requires the acquisition and temporal coregistration of serial PET/CT or SPECT/CT images. This work investigates the dosimetric impact of different temporal coregistration methods commonly used for 3D internal dosimetry. Methods: PET/CT images of four mice were acquired at 1, 24, 48, 72, 96, 144 hrs post-injection of {sup 124}I-CLR1404. The therapeutic {sup 131}I-CLR1404 absorbed dose rate (ADR) was calculated at each time point using a Geant4-based MC dosimetry platform with three temporal image coregistration methods: (1) no coregistration (NC), (2) whole-body sequential CT-CT affine coregistration (WBAC), and (3) individual sequential ROI-ROI affine coregistration (IRAC). For NC, only the ROI mean ADR was integrated to obtain ROI mean doses. For WBAC, the CT at each time point was coregistered to a single reference CT. The CT transformations were applied to the corresponding ADR images and the dose was calculated on a voxel basis within the whole CT volume. For IRAC, each individual ROI was isolated and sequentially coregistered to a single reference ROI. The ROI transformations were applied to the corresponding ADR images and the dose was calculated on a voxel basis within the ROI volumes. Results: The percent differences in the ROI mean doses were as large as 109%, 88%, and 32%, comparing the WBAC vs. IRAC, NC vs. IRAC, and NC vs. WBAC methods, respectively. The CoV in the mean dose across all three methods ranged from 2–36%. The pronounced curvature of the spinal cord was not adequately coregistered using WBAC, which resulted in large differences between the WBAC and IRAC methods. Conclusion: The method used for temporal image coregistration can result in large differences in 3D internal dosimetry calculations. Care must be taken to choose the most appropriate method depending on the imaging conditions, clinical site, and specific application. This work is partially funded by NIH Grant R21 CA198392-01.
Panier, Thomas; Romano, Sebastián A; Olive, Raphaël; Pietri, Thomas; Sumbre, Germán; Candelier, Raphaël; Debrégeas, Georges
2013-01-01
The optical transparency and the small dimensions of zebrafish at the larval stage make it a vertebrate model of choice for brain-wide in-vivo functional imaging. However, current point-scanning imaging techniques, such as two-photon or confocal microscopy, impose a strong limit on acquisition speed which in turn sets the number of neurons that can be simultaneously recorded. At 5 Hz, this number is of the order of one thousand, i.e., approximately 1-2% of the brain. Here we demonstrate that this limitation can be greatly overcome by using Selective-plane Illumination Microscopy (SPIM). Zebrafish larvae expressing the genetically encoded calcium indicator GCaMP3 were illuminated with a scanned laser sheet and imaged with a camera whose optical axis was oriented orthogonally to the illumination plane. This optical sectioning approach was shown to permit functional imaging of a very large fraction of the brain volume of 5-9-day-old larvae with single- or near single-cell resolution. The spontaneous activity of up to 5,000 neurons was recorded at 20 Hz for 20-60 min. By rapidly scanning the specimen in the axial direction, the activity of 25,000 individual neurons from 5 different z-planes (approximately 30% of the entire brain) could be simultaneously monitored at 4 Hz. Compared to point-scanning techniques, this imaging strategy thus yields a ≃20-fold increase in data throughput (number of recorded neurons times acquisition rate) without compromising the signal-to-noise ratio (SNR). The extended field of view offered by the SPIM method allowed us to directly identify large scale ensembles of neurons, spanning several brain regions, that displayed correlated activity and were thus likely to participate in common neural processes. The benefits and limitations of SPIM for functional imaging in zebrafish as well as future developments are briefly discussed.
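The quoted ≃20-fold throughput gain can be checked against the numbers given in the abstract (illustrative arithmetic only):

point_scanning = 1_000 * 5      # ~1,000 neurons at 5 Hz with point-scanning two-photon/confocal imaging
spim_volumetric = 25_000 * 4    # ~25,000 neurons at 4 Hz with axially scanned SPIM
print(spim_volumetric / point_scanning)   # 20.0, i.e. a ~20-fold increase in neurons x acquisition rate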
Transverse correlations in triphoton entanglement: Geometrical and physical optics
NASA Astrophysics Data System (ADS)
Wen, Jianming; Xu, P.; Rubin, Morton H.; Shih, Yanhua
2007-08-01
The transverse correlation of triphoton entanglement generated within a single crystal is analyzed. The many interesting features of the transverse correlation arise from the spectral function F of the triphoton state produced in the parametric processes. One consequence of transverse effects of entangled states is quantum imaging, which is theoretically studied in photon-counting measurements. Klyshko’s two-photon advanced-wave picture is found to be applicable to multiphoton entanglement with some modifications. We found that in two-photon coincidence counting measurements using triphoton entanglement, although the Gaussian thin lens equation (GTLE) holds, the image formed in coincidences is blurred and of poor quality. This is because the remaining transverse modes in the untouched beam are traced over. In the triphoton imaging experiments, two kinds of cases have been examined. For the case that only one object with one thin lens is placed in the system, we found that the GTLE holds as expected in the triphoton coincidences and that the effective distance between the lens and the imaging plane is the parallel combination of the two distances between the lens and the two detectors, weighted by wavelengths, analogous to the parallel combination of resistors in circuit theory. Only in this case is a point-to-point correspondence for forming an image achieved. However, when two objects or two lenses are inserted in the system, though the GTLEs are well satisfied, in general a point-to-point correspondence for imaging cannot be established. Under certain conditions, two blurred images may be observed in the coincidence counts. We have also studied ghost interference-diffraction experiments using double slits as apertures in triphoton entanglement. It was found that when two double slits are used in two optical beams, the interference-diffraction patterns show unusual features compared with the two-photon case. This unusual behavior arises from destructive interference between two amplitudes for the two photons crossing the two double slits.
Volumetric Two-photon Imaging of Neurons Using Stereoscopy (vTwINS)
Song, Alexander; Charles, Adam S.; Koay, Sue Ann; Gauthier, Jeff L.; Thiberge, Stephan Y.; Pillow, Jonathan W.; Tank, David W.
2017-01-01
Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large scale recording of neural activity in vivo. Here we introduce volumetric Two-photon Imaging of Neurons using Stereoscopy (vTwINS), a volumetric calcium imaging method that employs an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced “image pairs” in the resulting 2D image, and the separation distance between images is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a novel orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrate vTwINS by imaging neural population activity in mouse primary visual cortex and hippocampus. Our results demonstrate that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame-rate. PMID:28319111
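A minimal sketch of the depth readout implied by the V-shaped PSF is given below; the linear slope and offset are calibration parameters of the PSF arms and are assumptions here, not values from the paper.

def depth_from_separation(separation_um, slope, offset_um=0.0):
    # With a V-shaped PSF the two image copies of a neuron separate linearly with depth:
    # z ~ (separation - offset) / slope, where slope and offset come from a PSF calibration
    return (separation_um - offset_um) / slope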
An integrated eddy current detection and imaging system on a silicon chip
NASA Technical Reports Server (NTRS)
Henderson, H. Thurman; Kartalia, K. P.; Dury, Joseph D.
1991-01-01
Eddy current probes have been used for many years for numerous sensing applications including crack detection in metals. However, these applications have traditionally used the eddy current effect in the form of physically wound single probes or probe pairs, which of necessity must be made quite large compared to microelectronic dimensions. Also, the traditional wound probe can only take a point reading, although that point might include tens of individual cracks or crack arrays; thus, conventional eddy current probes are beset by two major problems: (1) no detailed information can be obtained about the crack or crack array; and (2) for applications such as quality assurance, a vast amount of time must be taken to scan a complete surface. Laboratory efforts have been made to fabricate linear arrays of single-turn probes in a thick-film format on a ceramic substrate as well as in a flexible cable format; however, such efforts inherently suffer from relatively large size requirements as well as sensitivity issues. Preliminary efforts to fully extend eddy current probing from a point or single-dimensional level to a two-dimensional micro-eddy-current format on a silicon chip, which might overcome all of the above problems, are presented.
Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding
2018-02-01
The photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shaped wave. PA signals of a complicated biological tissue can be considered as the combination of individual N-shaped waves. However, the N-shaped wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing methods that work directly on the raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals with two constraints, positive polarity and spectral consistency. With our proposed method, the reconstructed PA images yield more detailed structural information. Micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our method might hold potential for clinical PA imaging, as it can help distinguish micro-structures in the optimized images and even measure the size of objects from the deconvolved signals.
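As a hedged sketch of the deconvolution step (a simple Wiener formulation rather than the authors' exact procedure, and omitting the subsequent EMD stage), a raw A-line can be deconvolved with the pre-measured PSF as follows; the noise-to-signal constant is an illustrative assumption.

import numpy as np

def deconvolve_pa_signal(raw_a_line, psf, noise_to_signal=1e-2):
    # Wiener deconvolution of a raw photoacoustic A-line with a pre-measured system PSF
    n = len(raw_a_line)
    H = np.fft.rfft(psf, n)
    Y = np.fft.rfft(raw_a_line, n)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)   # Wiener filter
    return np.fft.irfft(Y * W, n)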
Image-guided automatic triggering of a fractional CO2 laser in aesthetic procedures.
Wilczyński, Sławomir; Koprowski, Robert; Wiernek, Barbara K; Błońska-Fajfrowska, Barbara
2016-09-01
Laser procedures in dermatology and aesthetic medicine are associated with the need for manual laser triggering. This leads to pulse overlapping and side effects. Automatic laser triggering based on image analysis can provide a secure fit for each successive dose of radiation. A fractional CO2 laser was used in the study. 500 images of the human skin of healthy subjects were acquired. Automatic triggering was initiated by an application together with a camera which tracks and analyses the skin in visible light. The tracking algorithm uses the methods of image analysis to overlap images. After locating the characteristic points in the analysed adjacent areas, the correspondence of graphs is found. The point coordinates derived from the images are the vertices of graphs with respect to which isomorphism is sought. When the correspondence of graphs is found, it is possible to overlap the neighbouring parts of the image. Owing to the automatic image-fitting method, the proposed laser triggering achieves 100% repeatability. To meet this requirement, there must be at least 13 graph vertices obtained from the image. For this number of vertices, the time of analysis of a single image is less than 0.5 s. The proposed method, applied in practice, may help reduce the number of side effects during dermatological laser procedures resulting from laser pulse overlapping. In addition, it reduces treatment time and enables new treatment techniques based on controlled, precise laser pulse overlapping. Copyright © 2016 Elsevier Ltd. All rights reserved.
Long-term High-Resolution Intravital Microscopy in the Lung with a Vacuum Stabilized Imaging Window
Rodriguez-Tirado, Carolina; Kitamura, Takanori; Kato, Yu; Pollard, Jeffery W.; Condeelis, John S.; Entenberg, David
2017-01-01
Metastasis to secondary sites such as the lung, liver and bone is a traumatic event with a mortality rate of approximately 90% [1]. Of these sites, the lung is the most difficult to assess using intravital optical imaging due to its enclosed position within the body, delicate nature and vital role in sustaining proper physiology. While clinical modalities (positron emission tomography (PET), magnetic resonance imaging (MRI) and computed tomography (CT)) are capable of providing noninvasive images of this tissue, they lack the resolution necessary to visualize the earliest seeding events, with a single pixel consisting of nearly a thousand cells. Current models of metastatic lung seeding postulate that events just after a tumor cell's arrival are deterministic for survival and subsequent growth. This means that real-time intravital imaging tools with single-cell resolution [2] are required in order to define the phenotypes of the seeding cells and test these models. While high-resolution optical imaging of the lung has been performed using various ex vivo preparations, these experiments are typically single time-point assays and are susceptible to artifacts and possible erroneous conclusions due to the dramatically altered environment (temperature, perfusion, cytokines, etc.) resulting from removal from the chest cavity and circulatory system [3]. Recent work has shown that time-lapse intravital optical imaging of the intact lung is possible using a vacuum stabilized imaging window [2,4,5]; however, typical imaging times have been limited to approximately 6 hr. Here we describe a protocol for performing long-term intravital time-lapse imaging of the lung utilizing such a window over a period of 12 hr. The time-lapse image sequences obtained using this method enable visualization and quantitation of cell-cell interactions, membrane dynamics and vascular perfusion in the lung. We further describe an image processing technique that gives an unprecedentedly clear view of the lung microvasculature. PMID:27768066
NASA Technical Reports Server (NTRS)
Chang, A. Y.; Battles, B. E.; Hanson, R. K.
1990-01-01
In high speed flows, laser induced fluorescence (LIF) on Doppler shifted transitions is an attractive technique for velocity measurement. LIF velocimetry was applied to combined single-point measurements of velocity, temperature, and pressure and 2-D imaging of velocity and pressure. Prior to recent research using NO, LIF velocimetry in combustion related flows relied largely on the use of seed molecules. Simultaneous, single-point LIF measurements of velocity, temperature, and pressure are reported using the naturally occurring combustion species OH. This experiment is an extension of earlier research in which a modified ring dye laser was used to make time resolved temperature measurements behind reflected shock waves by using OH absorption, and in postflame gases by using OH LIF. A pair of fused-silica rhombs mounted on a single galvanometer in an intracavity-doubled Spectra-Physics 380 ring laser permits the UV output to be swept continuously over a few wave numbers at an effective frequency of 3 kHz.
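The velocity readout in Doppler-shift LIF follows the standard relation v ≈ cΔν/ν0 projected onto the beam direction; the minimal sketch below encodes it, with the probing geometry treated as an assumed parameter.

import math

def velocity_from_doppler_shift(delta_nu_hz, nu0_hz, beam_angle_deg=0.0):
    # Line-of-sight velocity v = c * dnu / nu0; divide by cos(angle) to project onto the flow direction
    c = 2.998e8  # m/s
    return c * delta_nu_hz / nu0_hz / math.cos(math.radians(beam_angle_deg))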
The NIRCam Optical Telescope Simulator (NOTES)
NASA Technical Reports Server (NTRS)
Kubalak, David; Hakun, Claef; Greeley, Bradford; Eichorn, William; Leviton, Douglas; Guishard, Corina; Gong, Qian; Warner, Thomas; Bugby, David; Robinson, Frederick;
2007-01-01
The Near Infra-Red Camera (NIRCam), the 0.6-5.0 micron imager and wavefront sensing instrument for the James Webb Space Telescope (JWST), will be used on orbit both as a science instrument, and to tune the alignment of the telescope. The NIRCam Optical Telescope Element Simulator (NOTES) will be used during ground testing to provide an external stimulus to verify wavefront error, imaging characteristics, and wavefront sensing performance of this crucial instrument. NOTES is being designed and built by NASA Goddard Space Flight Center with the help of Swales Aerospace and Orbital Sciences Corporation. It is a single-point imaging system that uses an elliptical mirror to form an U20 image of a point source. The point source will be fed via optical fibers from outside the vacuum chamber. A tip/tilt mirror is used to change the chief ray angle of the beam as it passes through the aperture stop and thus steer the image over NIRCam's field of view without moving the pupil or introducing field aberrations. Interchangeable aperture stop elements allow us to simulate perfect JWST wavefronts for wavefront error testing, or introduce transmissive phase plates to simulate a misaligned JWST segmented mirror for wavefront sensing verification. NOTES will be maintained at an operating temperature of 80K during testing using thermal switches, allowing it to operate within the same test chamber as the NIRCam instrument. We discuss NOTES' current design status and on-going development activities.
Single-Cell Western Blotting after Whole-Cell Imaging to Assess Cancer Chemotherapeutic Response
2015-01-01
Intratumor heterogeneity remains a major obstacle to effective cancer therapy and personalized medicine. Current understanding points to differential therapeutic response among subpopulations of tumor cells as a key challenge to successful treatment. To advance our understanding of how this heterogeneity is reflected in cell-to-cell variations in chemosensitivity and expression of drug-resistance proteins, we optimize and apply a new targeted proteomics modality, single-cell western blotting (scWestern), to a human glioblastoma cell line. To acquire both phenotypic and proteomic data on the same, single glioblastoma cells, we integrate high-content imaging prior to the scWestern assays. The scWestern technique supports thousands of concurrent single-cell western blots, with each assay comprised of chemical lysis of single cells seated in microwells, protein electrophoresis from those microwells into a supporting polyacrylamide (PA) gel layer, and in-gel antibody probing. We systematically optimize chemical lysis and subsequent polyacrylamide gel electrophoresis (PAGE) of the single-cell lysate. The scWestern slides are stored for months then reprobed, thus allowing archiving and later analysis as relevant to sparingly limited, longitudinal cell specimens. Imaging and scWestern analysis of single glioblastoma cells dosed with the chemotherapeutic daunomycin showed both apoptotic (cleaved caspase 8- and annexin V-positive) and living cells. Intriguingly, living glioblastoma subpopulations show up-regulation of a multidrug resistant protein, P-glycoprotein (P-gp), suggesting an active drug efflux pump as a potential mechanism of drug resistance. Accordingly, linking of phenotype with targeted protein analysis with single-cell resolution may advance our understanding of drug response in inherently heterogeneous cell populations, such as those anticipated in tumors. PMID:25226230
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
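For readers unfamiliar with multi-term MFS outputs, the sketch below shows how a spectral-index image is conventionally derived from the first two Taylor-term images (α = I1/I0 for a power-law spectrum); the masking threshold is an illustrative assumption, and this is not the authors' pipeline.

import numpy as np

def spectral_index_map(taylor0, taylor1, intensity_floor):
    # For a power law I(nu) = I0 * (nu/nu0)**alpha, the first-order Taylor coefficient is
    # I1 = alpha * I0, so alpha = I1 / I0; mask faint pixels to avoid dividing by ~0
    alpha = np.full(taylor0.shape, np.nan)
    bright = taylor0 > intensity_floor
    alpha[bright] = taylor1[bright] / taylor0[bright]
    return alpha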
Hand-eye calibration for rigid laparoscopes using an invariant point.
Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2016-06-01
Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
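The sketch below illustrates the invariant-point idea in a simplified variant: assuming each frame provides the point's 3D position in camera coordinates (e.g., from a calibrated stereo laparoscope, which differs from the paper's monocular formulation), it jointly solves for the camera-to-marker transform and the unknown fixed point by least squares. The parameterisation and solver choices are assumptions.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def hand_eye_from_invariant_point(marker_poses, point_in_camera):
    # marker_poses: list of 4x4 marker->tracker matrices (marker poses reported by the tracker);
    # point_in_camera: Nx3 observations of the same stationary point in camera coordinates.
    def residuals(params):
        rvec, t, w = params[:3], params[3:6], params[6:9]
        R = Rotation.from_rotvec(rvec).as_matrix()
        res = []
        for T, p in zip(marker_poses, point_in_camera):
            p_marker = R @ p + t                           # camera -> marker (the unknown hand-eye X)
            p_tracker = T[:3, :3] @ p_marker + T[:3, 3]    # marker -> tracker
            res.append(p_tracker - w)                      # all frames should coincide at the fixed point w
        return np.concatenate(res)

    sol = least_squares(residuals, np.zeros(9))
    X = np.eye(4)
    X[:3, :3] = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    X[:3, 3] = sol.x[3:6]
    return X, sol.x[6:9]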
NASA Astrophysics Data System (ADS)
Mefleh, Fuad N.; Baker, G. Hamilton; Kwartowitz, David M.
2014-03-01
In our previous work we presented a novel image-guided surgery (IGS) system, Kit for Navigation by Image Focused Exploration (KNIFE) [1,2]. KNIFE has been demonstrated to be effective in guiding mock clinical procedures with the tip of an electromagnetically tracked catheter overlaid onto a pre-captured bi-plane fluoroscopic loop. Representation of the catheter in KNIFE differs greatly from what is captured by the fluoroscope, due to distortions and other properties of fluoroscopic images. When imaged by a fluoroscope, catheters can be visualized due to the inclusion of radiopaque materials (e.g., Bi, Ba, W) in the polymer blend [3]. However, in KNIFE, catheter location is determined using a single tracking seed located in the catheter tip, which is represented as a single point overlaid on pre-captured fluoroscopic images. To bridge the gap in catheter representation between KNIFE and traditional methods we constructed a catheter with five tracking seeds positioned along the distal 70 mm of the catheter. We have investigated the use of four spline interpolation methods for estimating the true catheter shape and have assessed the errors in their estimates. In this work we present a method for the evaluation of interpolation algorithms with respect to catheter shape determination.
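One spline-based shape estimate can be sketched as follows: fit a parametric cubic B-spline through the five seed positions and resample it densely. The smoothing value and sample count are illustrative assumptions, and this is only one plausible instance of the four interpolation variants mentioned.

import numpy as np
from scipy.interpolate import splprep, splev

def interpolate_catheter_shape(seed_positions, n_samples=100, smoothing=0.0):
    # Fit a parametric cubic B-spline through the 3D seed positions along the catheter tip
    pts = np.asarray(seed_positions, dtype=float)          # shape (N, 3); five seeds here
    tck, _ = splprep([pts[:, 0], pts[:, 1], pts[:, 2]], s=smoothing, k=min(3, len(pts) - 1))
    u = np.linspace(0.0, 1.0, n_samples)
    return np.column_stack(splev(u, tck))                  # densely resampled catheter centreline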
A computerized tomography system for transcranial ultrasound imaging.
Tang, Sai Chun; Clement, Gregory T
Hardware for tomographic imaging presents both challenge and opportunity for simplification when compared with traditional pulse-echo imaging systems. Specifically, point diffraction tomography does not require simultaneous powering of elements, in theory allowing just a single transmit channel and a single receive channel to be coupled with a switching or multiplexing network. In our ongoing work on transcranial imaging, we have developed a 512-channel system designed to transmit and/or receive a high voltage signal from/to arbitrary elements of an imaging array. The overall design follows a hierarchy of modules including a software interface, microcontroller, pulse generator, pulse amplifier, high-voltage power converter, switching mother board, switching daughter board, receiver amplifier, analog-to-digital converter, peak detector, memory, and USB communication. Two pulse amplifiers are included, each capable of producing up to 400Vpp via power MOSFETS. Switching is based around mechanical relays that allow passage of 200V, while still achieving switching times of under 2ms, with an operating frequency ranging from below 100kHz to 10MHz. The system is demonstrated through ex vivo human skulls using 1MHz transducers. The overall system design is applicable to planned human studies in transcranial image acquisition, and may have additional tomographic applications for other materials necessitating a high signal output.
Inversion domain boundaries in ZnO with additions of Fe2O3 studied by high-resolution ADF imaging.
Wolf, Frank; Freitag, Bert H; Mader, Werner
2007-01-01
Columns of metal atoms in the polytypoid compound Fe2O3(ZnO)15 could be resolved by high-angle annular dark field imaging in a transmission electron microscope operated in TEM/STEM mode, a result which could not be realized by high-resolution bright field imaging due to inherent strain from inversion domains and inversion domain boundaries (IDBs) in the crystals. The basal-plane IDB was imaged along [11̄00], yielding the spacing of the two adjacent ZnO domains, while imaging along [21̄1̄0] yields the positions of single metal ions. The images allow the construction of the entire domain structure including the stacking sequence and positions of the oxygen ions. The IDB consists of a single layer of octahedrally coordinated Fe3+ ions, and the inverted ZnO domains are related by point symmetry at the iron position. The FeO6 octahedra are compressed along the ZnO c-axis, resulting in an Fe-O bond length of 0.208 nm, which is in the range of Fe-O distances in iron-containing oxides. The model of the basal-plane boundary resembles that of the IDB in polytypoid ZnO-In2O3 compounds.
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
2008-07-31
Unlike the Lyrtech, each DSP on a Bittware board offers 3 MB of on-chip memory and 3 GFLOPs of 32-bit peak processing power. Based on the performance...Each NVIDIA 8800 Ultra features 576 GFLOPS on 128 612-MHz single-precision floating-point SIMD processors, arranged in 16 clusters of eight. Each
ERIC Educational Resources Information Center
Stoch, Yonit K.; Williams, Cori J.; Granich, Joanna; Hunt, Anna M.; Landau, Lou I.; Newnham, John P.; Whitehouse, Andrew J. O.
2012-01-01
An existing randomised controlled trial was used to investigate whether multiple ultrasound scans may be associated with the autism phenotype. From 2,834 single pregnancies, 1,415 were selected at random to receive ultrasound imaging and continuous wave Doppler flow studies at five points throughout pregnancy (Intensive) and 1,419 to receive a…
Navigation for fluoroscopy-guided cryo-balloon ablation procedures of atrial fibrillation
NASA Astrophysics Data System (ADS)
Bourier, Felix; Brost, Alexander; Kleinoeder, Andreas; Kurzendorfer, Tanja; Koch, Martin; Kiraly, Attila; Schneider, Hans-Juergen; Hornegger, Joachim; Strobel, Norbert; Kurzidim, Klaus
2012-02-01
Atrial fibrillation (AFib), the most common arrhythmia, has been identified as a major cause of stroke. The current standard in interventional treatment of AFib is pulmonary vein isolation (PVI). PVI is guided by fluoroscopy or non-fluoroscopic electro-anatomic mapping systems (EAMS). Either classic point-to-point radio-frequency (RF) catheter ablation or so-called single-shot devices like cryo-balloons are used to achieve electrical isolation of the pulmonary veins and the left atrium (LA). Fluoroscopy-based systems render overlay images from pre-operative 3-D data sets which are then merged with fluoroscopic imaging, thereby adding detailed 3-D information to conventional fluoroscopy. EAMS provide tracking and visualization of RF catheters by means of electro-magnetic tracking. Unfortunately, current navigation systems, fluoroscopy-based or EAMS, do not provide tools to localize and visualize single-shot devices like cryo-balloon catheters in 3-D. We present a prototype software for fluoroscopy-guided ablation procedures that is capable of superimposing 3-D datasets as well as reconstructing cryo-balloon catheters in 3-D. The 3-D cryo-balloon reconstruction was evaluated on 9 clinical data sets, yielding a reprojected 2-D error of 1.72 mm +/- 1.02 mm.
Platform control for space-based imaging: the TOPSAT mission
NASA Astrophysics Data System (ADS)
Dungate, D.; Morgan, C.; Hardacre, S.; Liddle, D.; Cropp, A.; Levett, W.; Price, M.; Steyn, H.
2004-11-01
This paper describes the imaging mode ADCS design for the TOPSAT satellite, an Earth observation demonstration mission targeted at military applications. The baselined orbit for TOPSAT is a 600-700 km sun-synchronous orbit from which images up to 30° off track can be captured. For this baseline, the imaging camera provides a resolution of 2.5 m and a nominal image size of 15x15 km. The ADCS design solution for the imaging mode uses a moving-demand approach to enable a single control algorithm solution for both the preparatory reorientation prior to image capture and the post-capture return to nadir pointing. During image capture proper, control is suspended to minimise the disturbances experienced by the satellite from the wheels. Prior to each imaging sequence, the moving-demand attitude and rate profiles are calculated such that the correct attitude and rate are achieved at the correct orbital position, enabling the correct target area to be captured.
NASA Astrophysics Data System (ADS)
Gonzaga, S.; Biretta, J.; Wiggs, M. S.; Hsu, J. C.; Smith, T. E.; Bergeron, L.
1998-12-01
The drizzle software combines dithered images while preserving photometric accuracy, enhancing resolution, and removing geometric distortion. A recent upgrade also allows removal of cosmic rays from single images at each dither pointing. This document gives detailed examples illustrating drizzling procedures for six cases: WFPC2 observations of a deep field, a crowded field, a large galaxy, a planetary nebula, STIS/CCD observations of a HDF-North field, and NICMOS/NIC2 observations of the Egg Nebula. Command scripts and input images for each example are available on the WFPC2 WWW website. Users are encouraged to retrieve the data for the case that most closely resembles their own data and then practice and experiment drizzling the example.
Automatic Camera Orientation and Structure Recovery with Samantha
NASA Astrophysics Data System (ADS)
Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.
2011-09-01
SAMANTHA is a software package capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, by producing compelling point clouds, and quantitatively, by comparing them with laser scans serving as ground truth.
NASA Astrophysics Data System (ADS)
Regmi, Raju; Mohan, Kavya; Mondal, Partha Pratim
2014-09-01
Visualization of intracellular organelles is achieved using a newly developed high-throughput imaging cytometry system. This system interrogates the microfluidic channel using a sheet of light rather than the existing point-based scanning techniques. The advantages of the developed system are many, including single-shot scanning of specimens flowing through the microfluidic channel at flow rates ranging from microliters to nanoliters per minute. Moreover, this opens up in-vivo imaging of sub-cellular structures and simultaneous cell counting in an imaging cytometry system. We recorded a maximum count of 2400 cells/min at a flow rate of 700 nl/min, and simultaneous visualization of the fluorescently-labeled mitochondrial network in HeLa cells during flow. The developed imaging cytometry system may find immediate application in biotechnology, fluorescence microscopy and nano-medicine.
Lung fissure detection in CT images using global minimal paths
NASA Astrophysics Data System (ADS)
Appia, Vikram; Patil, Uday; Das, Bipul
2010-03-01
Pulmonary fissures separate human lungs into five distinct regions called lobes. Detection of fissures is essential for localization of the lobar distribution of lung diseases, surgical planning and follow-up. Treatment planning also requires calculation of the lobe volume. This volume estimation mandates accurate segmentation of the fissures. The presence of other structures (such as vessels) near the fissure, along with its high variability in position and shape, makes lobe segmentation a challenging task. Also, false or incomplete fissures and the occurrence of disease add to the complications of fissure detection. In this paper, we propose a semi-automated fissure segmentation algorithm using a minimal path approach on CT images. An energy function is defined such that the path integral over the fissure is the global minimum. Based on a few user-defined points on a single slice of the CT image, the proposed algorithm minimizes a 2D energy function on the sagittal slice computed using (a) intensity, (b) distance from the vasculature, (c) curvature in 2D, and (d) continuity in 3D. The fissure is the infimum-energy path between a representative point on the fissure and the nearest lung boundary point in this energy domain. The algorithm has been tested on 10 CT volume datasets acquired from GE scanners at multiple clinical sites. The datasets span different pathological conditions and varying imaging artifacts.
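A hedged sketch of the minimal-path step is shown below: a 2D energy combining intensity and vessel-distance terms (the curvature and 3D-continuity terms of the paper are omitted), with the globally minimal path found by a shortest-path search between the user seed and a lung-boundary point. The weights and specific routines are assumptions, not the authors' implementation.

import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.graph import route_through_array

def fissure_energy(slice_hu, vessel_mask, w_intensity=1.0, w_vessel=1.0):
    # Low energy where the slice is bright (fissure-like) and far from segmented vessels
    intensity = (slice_hu - slice_hu.min()) / max(np.ptp(slice_hu), 1e-9)
    vessel_dist = distance_transform_edt(~vessel_mask)
    vessel_dist = vessel_dist / max(vessel_dist.max(), 1e-9)
    return w_intensity * (1.0 - intensity) + w_vessel * (1.0 - vessel_dist)

def fissure_minimal_path(energy, seed_rc, boundary_rc):
    # Globally minimal-cost path between a user seed point and a lung-boundary point
    path, cost = route_through_array(energy, seed_rc, boundary_rc, fully_connected=True)
    return np.array(path), cost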
Co-Registration of Terrestrial and Uav-Based Images - Experimental Results
NASA Astrophysics Data System (ADS)
Gerke, M.; Nex, F.; Jende, P.
2016-03-01
For many applications within urban environments the combined use of images taken from the ground and from unmanned aerial platforms seems interesting: while from the airborne perspective the upper parts of objects including roofs can be observed, the ground images can complement the data from lateral views to retrieve a complete visualisation or 3D reconstruction of interesting areas. The automatic co-registration of air- and ground-based images is still a challenge and cannot be considered solved. The main obstacle originates from the fact that objects are photographed from quite different angles, and hence state-of-the-art tie point measurement approaches cannot cope with the induced perspective transformation. A first important step towards a solution is to use airborne images taken from slant directions. Those oblique views not only help to connect vertical images and horizontal views but also provide image information from 3D structures not visible from the other two directions. In our experience, however, careful planning and many images taken under different viewing angles are still needed to support automatic matching across all images and a complete bundle block adjustment. Nevertheless, the entire process is still quite sensitive: the removal of a single image might lead to a completely different or wrong solution, or to separation of the image block. In this paper we analyse the impact different parameters and strategies have on the solution. These are (a) the tie point matcher used and (b) the software used for bundle adjustment. Using the data provided in the context of the ISPRS benchmark on multi-platform photogrammetry, we systematically address the mentioned influences. Concerning the tie point matching we test the standard SIFT point extractor and descriptor, but also the SURF and ASIFT approaches, the ORB technique, as well as (A)KAZE, which are based on a nonlinear scale space. In terms of pre-processing we analyse the Wallis filter. Results show that in more challenging situations, in this case for data captured from different platforms on different days, most approaches do not perform well. Wallis filtering emerged as the most helpful step, especially for the SIFT approach. The commercial software pix4dmapper succeeds in overall bundle adjustment only for some configurations, and in particular not for the entire image block provided.
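As an illustration of the kind of tie-point matching being compared (here only the SIFT variant with Lowe's ratio test; the paper also evaluates SURF, ASIFT, ORB and (A)KAZE as well as Wallis pre-filtering), a minimal OpenCV sketch might look as follows; the ratio threshold is an illustrative assumption.

import cv2
import numpy as np

def match_tie_points(img_a, img_b, ratio=0.8):
    # SIFT keypoints + descriptors, brute-force matching with Lowe's ratio test
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2) if m.distance < ratio * n.distance]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    return pts_a, pts_b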
Geiger-mode APD camera system for single-photon 3D LADAR imaging
NASA Astrophysics Data System (ADS)
Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir
2012-06-01
The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.
NASA Astrophysics Data System (ADS)
Kerr, Andrew D.
Determining optimal imaging settings and best practices related to the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant, contemporary literature on the utilization of consumer-grade DSLR cameras for remote sensing, and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (Exposure Value), WB (White Balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial rather than an airborne collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of the image edges in the application will in part dictate the lowest usable f-stop, and allow the user to select a more optimal shutter speed and ISO. The single most important camera capture variable is exposure bias (EV), with a full dynamic range, wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
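The apparent image motion (AIM) blur referred to above can be estimated from platform speed, exposure time and ground sample distance; the sketch below encodes this standard relation, with purely illustrative numbers.

def apparent_image_motion_blur(ground_speed_mps, altitude_m, focal_mm, pixel_pitch_um, shutter_s):
    # Ground sample distance GSD = altitude * pixel pitch / focal length;
    # blur in pixels = platform speed * exposure time / GSD
    gsd_m = altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)
    return ground_speed_mps * shutter_s / gsd_m

# Illustrative numbers: 15 m/s at 100 m altitude, 35 mm lens, 4.8 um pixels, 1/1000 s shutter
print(apparent_image_motion_blur(15.0, 100.0, 35.0, 4.8, 1.0 / 1000.0))   # ~1.1 pixels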
Vorticity field measurement using digital inline holography
NASA Astrophysics Data System (ADS)
Mallery, Kevin; Hong, Jiarong
2017-11-01
We demonstrate the direct measurement of a 3D vorticity field using digital inline holographic microscopy. Microfiber tracer particles are illuminated with a 532 nm continuous diode laser and imaged using a single CCD camera. The recorded holographic images are processed using a GPU-accelerated inverse problem approach to reconstruct the 3D structure of each microfiber in the imaged volume. The translation and rotation of each microfiber are measured using a time-resolved image sequence - yielding velocity and vorticity point measurements. The accuracy and limitations of this method are investigated using synthetic holograms. Measurements of solid body rotational flow are used to validate the accuracy of the technique under known flow conditions. The technique is further applied to a practical turbulent flow case for investigating its 3D velocity field and vorticity distribution.
NASA Astrophysics Data System (ADS)
Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas
2016-05-01
We present a 220 GHz 3D imaging `Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm3 volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.
Simultaneous narrowband ultrasonic strain-flow imaging
NASA Astrophysics Data System (ADS)
Tsou, Jean K.; Mai, Jerome J.; Lupotti, Fermin A.; Insana, Michael F.
2004-04-01
We summarize new research aimed at forming spatially and temporally registered combinations of strain and color-flow images using echo data recorded from a commercial ultrasound system. Applications include diagnosis of vascular diseases and tumor malignancies. The challenge is to meet the diverse needs of each measurement. The approach is to first apply eigenfilters that separate echo components from moving tissues and blood flow, and then estimate blood velocity and tissue displacement from the phase modulations of the filtered IQ signal. At the cost of a lower acquisition frame rate, we find the autocorrelation strain estimator yields a higher-resolution strain estimate than the cross-correlator, since estimates are made from ensembles at a single point in space. The technique is applied to in vivo carotid imaging to demonstrate its sensitivity for strain-flow vascular imaging.
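A minimal sketch of a lag-one (Kasai-type) autocorrelation phase estimator of the kind alluded to above, converting the ensemble phase shift of the filtered IQ data into axial displacement and differentiating along depth for strain; it is illustrative only and not the authors' exact implementation:

```python
import numpy as np

def axial_displacement_and_strain(iq, f0, dz, c=1540.0):
    """iq: complex IQ data (depth x ensemble); f0: centre frequency [Hz];
    dz: axial sample spacing [m]; c: sound speed [m/s].
    Returns per-firing axial displacement [m] and its depth gradient (strain)."""
    r1 = np.sum(iq[:, 1:] * np.conj(iq[:, :-1]), axis=1)  # lag-1 autocorrelation
    displacement = c * np.angle(r1) / (4.0 * np.pi * f0)  # phase -> axial shift
    strain = np.gradient(displacement, dz)                # spatial derivative
    return displacement, strain
```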
Magnetic topological analysis of coronal bright points
NASA Astrophysics Data System (ADS)
Galsgaard, K.; Madjarska, M. S.; Moreno-Insertis, F.; Huang, Z.; Wiegelmann, T.
2017-10-01
Context. We report on the first of a series of studies on coronal bright points which investigate the physical mechanism that generates these phenomena. Aims: The aim of this paper is to understand the magnetic-field structure that hosts the bright points. Methods: We use longitudinal magnetograms taken by the Solar Optical Telescope with the Narrowband Filter Imager. For a single case, magnetograms from the Helioseismic and Magnetic Imager were added to the analysis. The longitudinal magnetic field component is used to derive the potential magnetic fields of the large regions around the bright points. A magneto-static field extrapolation method is tested to verify the accuracy of the potential field modelling. The three-dimensional magnetic fields are investigated for the presence of magnetic null points and their influence on the local magnetic domain. Results: In nine out of ten cases the bright point resides in areas where the coronal magnetic field contains an opposite-polarity intrusion defining a magnetic null point above it. We find that X-ray bright points reside, in these nine cases, in a limited part of the projected fan-dome area, either fully inside the dome or over a limited area below which a dominant flux concentration typically resides. The tenth bright point is located in a bipolar loop system without an overlying null point. Conclusions: All bright points in coronal holes and two out of three bright points in quiet-Sun regions are seen to reside in regions containing a magnetic null point. An as-yet-unidentified process (or processes) generates the bright points in specific regions of the fan-dome structure. The movies are available at http://www.aanda.org
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.
2017-02-01
Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition result in a loss of sensitivity. We argue that our method provides an increase of between a few per cent and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method which is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
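The "filter each image with its own PSF, then sum with weights" recipe can be sketched in a few lines. The following is a minimal illustration assuming background-dominated white noise with per-image standard deviation sigma_i and flux zero point F_i (both supplied by the caller); it is not the authors' MATLAB implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def matched_filter_coadd(images, psfs, sigmas, zeropoints):
    """Cross-correlate every image with its own PSF, then sum with weights
    F_i / sigma_i**2 to build a detection-optimal coadd."""
    coadd = np.zeros_like(images[0], dtype=float)
    for img, psf, sigma, flux_zp in zip(images, psfs, sigmas, zeropoints):
        filtered = fftconvolve(img, psf[::-1, ::-1], mode="same")  # correlation
        coadd += (flux_zp / sigma**2) * filtered
    return coadd
```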
A 4DCT imaging-based breathing lung model with relative hysteresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.
Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui
2014-07-11
Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.
Spahn, Christoph; Glaesmann, Mathilda; Gao, Yunfeng; Foo, Yong Hwee; Lampe, Marko; Kenney, Linda J; Heilemann, Mike
2017-01-01
Despite their small size and the lack of compartmentalization, bacteria exhibit a striking degree of cellular organization, both in time and space. During the last decade, a group of new microscopy techniques emerged, termed super-resolution microscopy or nanoscopy, which facilitate visualizing the organization of proteins in bacteria at the nanoscale. Single-molecule localization microscopy (SMLM) is especially well suited to reveal a wide range of new information regarding protein organization, interaction, and dynamics in single bacterial cells. Recent developments in click chemistry facilitate the visualization of bacterial chromatin with a resolution of ~20 nm, providing valuable information about the ultrastructure of bacterial nucleoids, especially at short generation times. In this chapter, we describe a simple-to-realize protocol that allows determining precise structural information of bacterial nucleoids in fixed cells, using direct stochastic optical reconstruction microscopy (dSTORM). In combination with quantitative photoactivated localization microscopy (PALM), the spatial relationship of proteins with the bacterial chromosome can be studied. The position of a protein of interest with respect to the nucleoids and the cell cylinder can be visualized by super-resolving the membrane using point accumulation for imaging in nanoscale topography (PAINT). The combination of the different SMLM techniques in a sequential workflow maximizes the information that can be extracted from single cells, while maintaining optimal imaging conditions for each technique.
SU-E-QI-15: Single Point Dosimetry by Means of Cerenkov Radiation Energy Transfer (CRET)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volotskova, O; Jenkins, C; Xing, L
2014-06-15
Purpose: Cerenkov light is generated when a charged particle with energy greater than 250 keV moves faster than the speed of light in a given medium. Both x-ray photons and electrons produce optical Cerenkov photons during the static megavoltage linear accelerator (LINAC) operational mode. Recently, Cerenkov radiation has gained considerable interest as a possible candidate for a new imaging modality. Optical signals generated by Cerenkov radiation may act as a surrogate for the absorbed superficial radiation dose. We demonstrated a novel single point dosimetry method for megavoltage photon and electron therapy utilizing down-conversion of Cerenkov photons. Methods: A custom-built signal characterization system was used: a sample holder (probe) with adjacent light-tight compartments was connected via fiber-optic cables to a photon-counting photomultiplier tube (PMT). One compartment contains a medium only while the other contains medium and red-shifting nano-particles (Q-dots, nanoclusters). By taking the difference between the two signals (Cerenkov photons and CRET photons) we obtain a measure of the down-converted light, which we expect to be proportional to dose as measured with an adjacent ion chamber. Experimental results are compared to Monte Carlo simulations performed using the GEANT4 code. Results: The correlation between the CR signal, CRET readings, and dose produced by the LINAC at a single point was investigated. The experimental results were compared with simulations. The dose linearity, signal-to-noise ratio, and dose-rate dependence were tested with the custom-built CRET-based probe. Conclusion: Performance characteristics of the proposed single point CRET-based probe were evaluated. The direct use of the induced Cerenkov emission and CRET in an irradiated single point volume as an indirect surrogate for the imparted dose was investigated. We conclude that CRET is a promising optical dosimetry method that offers advantages over those already proposed.
Pump-probe micro-spectroscopy by means of an ultra-fast acousto-optics delay line.
Audier, Xavier; Balla, Naveen; Rigneault, Hervé
2017-01-15
We demonstrate femtosecond pump-probe transient absorption spectroscopy using a programmable dispersive filter as an ultra-fast delay line. Combined with fast synchronous detection, this delay line allows for recording of 6 ps decay traces at 34 kHz. With such acquisition speed, we perform single point pump-probe spectroscopy on bulk samples in 80 μs and hyperspectral pump-probe imaging over a field of view of 100 μm in less than a second. The usability of the method is illustrated in a showcase experiment to image and discriminate between two pigments in a mixture.
High-resolution seismic-reflection data offshore of Dana Point, southern California borderland
Sliter, Ray W.; Ryan, Holly F.; Triezenberg, Peter J.
2010-01-01
The U.S. Geological Survey collected high-resolution shallow seismic-reflection profiles in September 2006 in the offshore area between Dana Point and San Mateo Point in southern Orange and northern San Diego Counties, California. Reflection profiles were located to image folds and reverse faults associated with the San Mateo fault zone and high-angle strike-slip faults near the shelf break (the Newport-Inglewood fault zone) and at the base of the slope. Interpretations of these data were used to update the USGS Quaternary fault database and in shaking hazard models for the State of California developed by the Working Group for California Earthquake Probabilities. This cruise was funded by the U.S. Geological Survey Coastal and Marine Catastrophic Hazards project. Seismic-reflection data were acquired aboard the R/V Sea Explorer, which is operated by the Ocean Institute at Dana Point. A SIG ELC820 minisparker seismic source and a SIG single-channel streamer were used. More than 420 km of seismic-reflection data were collected. This report includes maps of the seismic-survey sections, linked to Google Earth software, and digital data files showing images of each transect in SEG-Y, JPEG, and TIFF formats.
PointCom: semi-autonomous UGV control with intuitive interface
NASA Astrophysics Data System (ADS)
Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham
2008-04-01
Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.
Expansion Mini-Microscopy: An Enabling Alternative in Point-of-Care Diagnostics
Zhang, Yu Shrike; Santiago, Grissel Trujillo-de; Alvarez, Mario Moisés; Schiff, Steven J.; Boyden, Edward S.; Khademhosseini, Ali
2017-01-01
Diagnostics play a significant role in health care. In the developing world and low-resource regions the utility of point-of-care (POC) diagnostics becomes even greater. This need has long been recognized, and diagnostic technology has seen tremendous progress with the development of portable instrumentation such as miniature imagers featuring low complexity and cost. However, such inexpensive devices have not been able to achieve a resolution sufficient for POC detection of pathogens at very small scales, such as single-cell parasites, bacteria, fungi, and viruses. To this end, expansion microscopy (ExM) is a recently developed technique that, by physically expanding preserved biological specimens through a chemical process, enables super-resolution imaging on conventional microscopes and improves the imaging resolution of a given microscope without the need to modify the existing microscope hardware. Here we review recent advances in ExM and portable imagers, respectively, and discuss the rational combination of the two technologies, which we term expansion mini-microscopy (ExMM). In ExMM, the physical expansion of a biological sample followed by imaging on a mini-microscope achieves a resolution as high as that attainable by conventional high-end microscopes imaging non-expanded samples, at a significant reduction in cost. We believe that this newly developed ExMM technique is likely to find widespread applications in POC diagnostics in resource-limited and remote regions by expanded-scale imaging of biological specimens that are otherwise not resolvable using low-cost imagers. PMID:29062977
Bok, Jan; Schauer, Petr
2014-01-01
In the paper, the SEM detector is evaluated by the modulation transfer function (MTF), which expresses the detector's influence on the SEM image contrast. This is a novel approach, since the MTF was previously used to describe only area imaging detectors or whole imaging systems. The measurement technique and calculation of the MTF for the SEM detector are presented. In addition, the measurement and calculation of the detective quantum efficiency (DQE) as a function of the spatial frequency for the SEM detector are described. In this technique, a time-modulated e-beam is used in order to create a well-defined input signal for the detector. The MTF and DQE measurements are demonstrated on the Everhart-Thornley scintillation detector. The detector was tested alternately with YAG:Ce, YAP:Ce, and CRY18 single-crystal scintillators. The presented MTF and DQE characteristics show good imaging properties of the detectors with the YAP:Ce or CRY18 scintillator, especially for a specific type of e-beam scan. The results demonstrate the great benefit of describing SEM detectors using the MTF and DQE. In addition, point-by-point and continual-sweep e-beam scans in SEM were discussed and their influence on the image quality was revealed using the MTF. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Lawrence, Kurt C.; Park, Bosoon; Windham, William R.; Mao, Chengye; Poole, Gavin H.
2003-03-01
A method to calibrate a pushbroom hyperspectral imaging system for "near-field" applications in agricultural and food safety has been demonstrated. The method consists of a modified geometric control-point correction applied to the focal plane array (FPA) to remove smile and keystone distortion from the system. Once the FPA correction was applied, single wavelength and distance calibrations were used to describe all points on the FPA. Finally, a percent reflectance calibration, applied on a pixel-by-pixel basis, was used for accurate measurements with the hyperspectral imaging system. The method was demonstrated with a stationary prism-grating-prism, pushbroom hyperspectral imaging system. For the system described, wavelength and distance calibrations reduced the wavelength errors to <0.5 nm and distance errors to <0.01 mm (across the entrance slit width). The pixel-by-pixel percent reflectance calibration, which was performed at all wavelengths with dark-current and 99% reflectance calibration-panel measurements, was verified with measurements on a certified gradient Spectralon panel with values ranging from about 14% reflectance to 99% reflectance, with errors generally less than 5% at the mid-wavelength measurements. Results from the calibration method indicate the hyperspectral imaging system has a usable range between 420 nm and 840 nm. Outside this range, errors increase significantly.
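The pixel-by-pixel percent reflectance calibration described above follows the standard two-point dark/white form. A minimal sketch, assuming co-registered sample, dark-current, and 99%-panel cubes of identical shape:

```python
import numpy as np

def percent_reflectance(sample, dark, white, panel_reflectance=0.99):
    """Two-point reflectance calibration applied pixel-by-pixel (and band-by-band)
    to a hyperspectral cube, using dark-current and reference-panel measurements."""
    sample = sample.astype(float)
    return panel_reflectance * (sample - dark) / (white - dark + 1e-12)
```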
de Boer, Johannes F.; Leitgeb, Rainer; Wojtkowski, Maciej
2017-01-01
Optical coherence tomography (OCT) has become one of the most successful optical technologies implemented in medicine and clinical practice mostly due to the possibility of non-invasive and non-contact imaging by detecting back-scattered light. OCT has gone through a tremendous development over the past 25 years. From its initial inception in 1991 [Science 254, 1178 (1991)] it has become an indispensable medical imaging technology in ophthalmology. Also in fields like cardiology and gastro-enterology the technology is envisioned to become a standard of care. A key contributor to the success of OCT has been the sensitivity and speed advantage offered by Fourier domain OCT. In this review paper the development of FD-OCT will be revisited, providing a single comprehensive framework to derive the sensitivity advantage of both SD- and SS-OCT. We point out the key aspects of the physics and the technology that has enabled a more than 2 orders of magnitude increase in sensitivity, and as a consequence an increase in the imaging speed without loss of image quality. This speed increase provided a paradigm shift from point sampling to comprehensive 3D in vivo imaging, whose clinical impact is still actively explored by a large number of researchers worldwide. PMID:28717565
A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Becker, D.; Cain, S.
Space object detection is of great importance in a space domain that is heavily relied upon yet increasingly competitive and congested. Detection algorithms play a crucial role in fulfilling the detection component of the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follow a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long-exposure images of small and/or dim space objects from ground-based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier-transformed images to determine whether an object is present based on the threshold criterion found in the likelihood ratio test. Using simulated data, the performance of the Fourier-domain detection algorithm is compared to the algorithm currently used in space situational awareness applications to evaluate its value.
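As context for the Fourier-domain test described above: under white Gaussian noise, the likelihood-ratio test for a point source of known shape reduces to thresholding a matched-filter statistic, which can be computed efficiently with FFTs. The sketch below shows that reduced form only; it is not the paper's exact Fourier-domain test, and the false-alarm level is a hypothetical parameter:

```python
import numpy as np
from scipy.stats import norm

def matched_filter_detect(image, psf, noise_sigma, p_fa=1e-6):
    """Per-pixel matched-filter detection map computed via the Fourier domain."""
    f_img = np.fft.fft2(image - image.mean())
    f_psf = np.fft.fft2(psf, s=image.shape)
    score = np.fft.ifft2(f_img * np.conj(f_psf)).real
    score /= noise_sigma * np.sqrt(np.sum(psf**2))  # unit variance under H0
    threshold = norm.isf(p_fa)                      # per-pixel false-alarm rate
    return score > threshold, score
```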
NASA Astrophysics Data System (ADS)
Preuss, R.
2014-12-01
This article discusses the current capabilities of automated processing of image data, using the example of Agisoft PhotoScan software. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or, more often, on UAVs is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. In such situations the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, provide image georeferencing in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. The image matching algorithm is also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software that divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential execution of the processing steps with predetermined control parameters. The paper presents practical results of fully automatic orthomosaic generation both for images obtained by a metric Vexell camera and for a block of images acquired by a non-metric UAV system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelada, O; Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg; Decker, R
2014-06-15
Purpose: Tumor hypoxia is correlated with treatment failure. To date, there are no published studies investigating hypoxia in non-small cell lung cancer (NSCLC) patients undergoing SBRT. We aim to use 18F-fluoromisonidazole (18F-FMISO) positron emission tomography (PET) imaging to non-invasively quantify the tumor hypoxic volume (HV), to elucidate potential roles of reoxygenation and tumor vascular response at high doses, and to identify an optimal prognostic imaging time point. Methods: SBRT-eligible patients with NSCLC tumors >1 cm were prospectively enrolled in an IRB-approved study. Computed tomography and dynamic PET images (0-120 min, 150-180 min, and 210-240 min post-injection) were acquired using a Siemens Biograph mCT PET/CT scanner. 18F-FMISO PET was performed on a single patient at 3 different time points around a single SBRT delivery of 18 Gy, and HVs were compared using a tumor-to-blood ratio (TBR) >1.2 and a rate of influx (Ki) >0.0015 (Patlak). Results: Results from our first patient showed substantial temporal changes in HV following SBRT. Using a TBR threshold >1.2 and summed images 210-240 min, the HVs were 19%, 31% and 13% of total tumor volume on day 0, day 2 (48 hours post-SBRT), and day 4 (96 hours post-SBRT). The absolute volume of hypoxia increased by nearly a factor of 2 after 18 Gy and then decreased almost to baseline 96 hours later. Selected imaging time points resulted in temporal changes in HV quantification obtained with TBR. Ki, calculated using 4-hour dynamic data, evaluated HVs as 22%, 75% and 21%, respectively. Conclusions: With the results of only one patient, this novel pilot study highlights the potential benefit of 18F-FMISO PET imaging, as results indicate substantial temporal changes in tumor HV post-SBRT. Analysis suggests that TBR is not a robust parameter for accurate HV quantification and is heavily influenced by imaging time-point selection. Kinetic modeling parameters are more sensitive and may aid in future treatment individualization based on patient-specific biological information.
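The TBR-threshold definition of the hypoxic volume above amounts to counting tumor voxels whose uptake exceeds 1.2 times the blood activity. A minimal sketch, with hypothetical inputs (an array of PET values inside the tumor contour and a scalar blood-pool activity):

```python
import numpy as np

def hypoxic_volume_fraction(tumor_uptake, blood_activity, tbr_threshold=1.2):
    """Fraction of tumor voxels whose tumor-to-blood ratio exceeds the threshold."""
    tbr = np.asarray(tumor_uptake, dtype=float) / blood_activity
    return float(np.mean(tbr > tbr_threshold))
```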
Witmer, Matthew T; Parlitsis, George; Patel, Sarju; Kiss, Szilárd
2013-01-01
To compare ultra-widefield fluorescein angiography imaging using the Optos® Optomap® and the Heidelberg Spectralis® noncontact ultra-widefield module. Five patients (ten eyes) underwent ultra-widefield fluorescein angiography using the Optos® panoramic P200Tx imaging system and the noncontact ultra-widefield module in the Heidelberg Spectralis® HRA+OCT system. The images were obtained as a single, nonsteered shot centered on the macula. The area of imaged retina was outlined and quantified using Adobe® Photoshop® C5 software. The total area and the area within each of four visualized quadrants were calculated and compared between the two imaging modalities. Three masked reviewers also evaluated each quadrant per eye (40 total quadrants) to determine which modality imaged the retinal vasculature most peripherally. Optos® imaging captured a total retinal area averaging 151,362 pixels, ranging from 116,998 to 205,833 pixels, while the area captured using the Heidelberg Spectralis® was 101,786 pixels, ranging from 73,424 to 116,319 (P = 0.0002). The average area per individual quadrant imaged by Optos® versus the Heidelberg Spectralis® was 32,373 vs 32,789 pixels superiorly (P = 0.91), 24,665 vs 26,117 pixels inferiorly (P = 0.71), 47,948 vs 20,645 pixels temporally (P = 0.0001), and 46,374 vs 22,234 pixels nasally (P = 0.0001). The Heidelberg Spectralis® was able to image the superior and inferior retinal vasculature to a more distal point than was the Optos®, in nine of ten eyes (18 of 20 quadrants). The Optos® was able to image the nasal and temporal retinal vasculature to a more distal point than was the Heidelberg Spectralis®, in ten of ten eyes (20 of 20 quadrants). The Optos® and Heidelberg Spectralis® ultra-widefield imaging systems are both excellent fluorescein angiography modalities that provide views of the peripheral retina. On a single nonsteered image, the Optos® Optomap® covered a significantly larger total retinal surface area, with greater image variability, than did the Heidelberg Spectralis® ultra-widefield module. The Optos® captured an appreciably wider view of the retina temporally and nasally, albeit with peripheral distortion, while the ultra-widefield Heidelberg Spectralis® module was able to image the superior and inferior retinal vasculature more peripherally. The clinical significance of these findings, as well as the area imaged on steered montaged images, remains to be determined.
NASA Astrophysics Data System (ADS)
Martinis, C.; Baumgardner, J.; Wroten, J.; Mendillo, M.
2018-04-01
Optical signatures of ionospheric disturbances exist at all latitudes on Earth, the best-known case being visible aurora at high latitudes. Sub-visual emissions that also indicate periods and locations of severe Space Weather effects occur equatorward of the auroral zones. These fall into three magnetic latitude domains in each hemisphere: (1) sub-auroral latitudes (∼40-60°), (2) mid-latitudes (20-40°), and (3) equatorial-to-low latitudes (0-20°). Boston University has established a network of all-sky imagers (ASIs) with sites at opposite ends of the same geomagnetic field lines in each hemisphere, called geomagnetic conjugate points. Our ASIs are autonomous instruments that operate in mini-observatories situated at four conjugate pairs in North and South America, plus one pair linking Europe and South Africa. In this paper, we describe instrument design, data-taking protocols, data transfer and archiving issues, image processing, science objectives, and early results for each latitude domain. This unique capability addresses how a single source of disturbance is transformed into similar or different effects based on the unique "receptor" conditions (seasonal effects) found in each hemisphere. Applying optical conjugate-point observations to Space Weather problems offers a new diagnostic approach for understanding the global system response functions operating in the Earth's upper atmosphere.
A Preliminary Work on Layout SLAM for Reconstruction of Indoor Corridor Environments
NASA Astrophysics Data System (ADS)
Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.
2017-09-01
We propose a real-time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption for indoor spaces and uses the detected single-image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real-time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3D space are successfully handled by matching vanishing directions of consecutive video frames on the Gaussian sphere. Using single-image-based indoor layout features to initialize the system permits the proposed method to perform real-time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopted features that are invariant under scale, translation, and rotation. We propose a new feature matching cost function that considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes at York University campus buildings and on the available RAWSEEDS dataset. The results show that the proposed method performs robustly, producing very limited position and orientation errors.
Height Control and Deposition Measurement for the Electron Beam Free Form Fabrication (EBF3) Process
NASA Technical Reports Server (NTRS)
Hafley, Robert A. (Inventor); Seufzer, William J. (Inventor)
2017-01-01
A method of controlling a height of an electron beam gun and wire feeder during an electron freeform fabrication process includes utilizing a camera to generate an image of the molten pool of material. The image generated by the camera is utilized to determine a measured height of the electron beam gun relative to the surface of the molten pool. The method further includes ensuring that the measured height is within the range of acceptable heights of the electron beam gun relative to the surface of the molten pool. The present invention also provides for measuring a height of a solid metal deposit formed upon cooling of a molten pool. The height of a single point can be measured, or a plurality of points can be measured to provide 2D or 3D surface height measurements.
NASA Astrophysics Data System (ADS)
Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.
2018-05-01
A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging system's calibration parameters. This is essential to validate the repeatability of the parameter estimation, to detect any behavioural changes in the camera/imaging system, and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, advantages, and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for single-camera analysis, and 0.07 to 0.19 mm for dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
Distinguishing one from many using super-resolution compressive sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.
2018-05-14
Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. We conclude by identifying an approach that is more robust to the types of PSF errors present in actual optical systems.
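As an illustration of the kind of sparsity-constrained recovery discussed above, here is a minimal sketch of l1-regularised deconvolution with a known PSF via iterative soft thresholding (ISTA). It is one of the constraint choices mentioned, not the specific solver or settings used in the study; lam and n_iter are hypothetical parameters:

```python
import numpy as np
from scipy.signal import fftconvolve

def ista_deconvolve(measured, psf, lam=0.05, n_iter=200):
    """l1-regularised recovery of sparse point sources, assuming the forward
    model is convolution of the (unknown) source map with a known PSF."""
    psf_flip = psf[::-1, ::-1]
    # Step size from the Lipschitz constant of the convolution operator.
    lipschitz = np.max(np.abs(np.fft.fft2(psf, s=measured.shape)))**2
    x = np.zeros_like(measured, dtype=float)
    for _ in range(n_iter):
        residual = fftconvolve(x, psf, mode="same") - measured
        grad = fftconvolve(residual, psf_flip, mode="same")
        x = x - grad / lipschitz
        x = np.sign(x) * np.maximum(np.abs(x) - lam / lipschitz, 0.0)  # soft threshold
    return x
```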
NASA Astrophysics Data System (ADS)
Attendu, Xavier; Crunelle, Camille; de Sivry-Houle, Martin Poinsinet; Maubois, Billie; Urbain, Joanie; Turrell, Chloe; Strupler, Mathias; Godbout, Nicolas; Boudoux, Caroline
2018-04-01
Previous works have demonstrated feasibility of combining optical coherence tomography (OCT) and hyper-spectral imaging (HSI) through a single double-clad fiber (DCF). In this proceeding we present the continued development of a system combining both modalities and capable of rapid imaging. We discuss the development of a rapidly scanning, dual-band, polygonal swept-source system which combines NIR (1260-1340 nm) and visible (450-800 nm) wavelengths. The NIR band is used for OCT imaging while visible light allows HSI. Scanning rates up to 24 kHz are reported. Furthermore, we present and discuss the fiber system used for light transport, delivery and collection, and the custom signal acquisition software. Key points include the use of a double-clad fiber coupler as well as important alignments and back-reflection management. Simultaneous and co-registered imaging with both modalities is presented in a bench-top system.
Real-time blind image deconvolution based on coordinated framework of FPGA and DSP
NASA Astrophysics Data System (ADS)
Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun
2015-10-01
Image restoration plays a crucial role in several important application domains. As computational requirements increase with growing algorithm complexity, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with a small-scale cascade and dedicated FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are presented. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
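For reference, the core Richardson-Lucy update that such a system accelerates is only a few lines; the non-blind form is sketched below (blind variants alternate this update between the image estimate and the PSF estimate). This is an illustrative CPU sketch, not the paper's FPGA/DSP implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Non-blind Richardson-Lucy deconvolution of a 2D image."""
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)                  # data / model
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```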
Optic probe for multiple angle image capture and optional stereo imaging
Malone, Robert M.; Kaufman, Morris I.
2016-11-29
A probe including a multiple lens array is disclosed to measure velocity distribution of a moving surface along many lines of sight. Laser light, directed to the moving surface is reflected back from the surface and is Doppler shifted, collected into the array, and then directed to detection equipment through optic fibers. The received light is mixed with reference laser light and using photonic Doppler velocimetry, a continuous time record of the surface movement is obtained. An array of single-mode optical fibers provides an optic signal to the multiple lens array. Numerous fibers in a fiber array project numerous rays to establish many measurement points at numerous different locations. One or more lens groups may be replaced with imaging lenses so a stereo image of the moving surface can be recorded. Imaging a portion of the surface during initial travel can determine whether the surface is breaking up.
Global Plasmaspheric Imaging: A New "Light" Focusing on Familiar Questions
NASA Technical Reports Server (NTRS)
Adrian, M. L.; Six, N. Frank (Technical Monitor)
2002-01-01
Until recently, plasmaspheric physics, and for that matter magnetospheric physics as a whole, has relied primarily on single-point in-situ measurements, theory, modeling, and a considerable amount of extrapolation in order to envision the global structure of the plasmasphere. This situation changed with the launch of the IMAGE satellite in March 2000. Using the Extreme Ultraviolet (EUV) imager on IMAGE, we can now view the global structure of the plasmasphere bathed in the glow of resonantly scattered 30.4 nm radiation, allowing the space physics community to view the dynamics of this global structure as never before. This talk will: (1) define the plasmasphere from the perspective of plasmaspheric physics prior to March 2000; (2) present a review of EUV imaging optics and the IMAGE mission; and (3) focus on efforts to understand an old and familiar feature of plasmaspheric physics, embedded plasmaspheric density troughs, in this new global light with the assistance of forward modeling.
Hybrid region merging method for segmentation of high-resolution remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo
2014-12-01
Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.
Mapping of sea ice and measurement of its drift using aircraft synthetic aperture radar images
NASA Technical Reports Server (NTRS)
Leberl, F.; Bryan, M. L.; Elachi, C.; Farr, T.; Campbell, W.
1979-01-01
Side-looking radar images of Arctic sea ice were obtained as part of the Arctic Ice Dynamics Joint Experiment. Repetitive coverages of a test site in the Arctic were used to measure sea ice drift, employing single images and blocks of overlapping radar image strips; the images were used in conjunction with data from the aircraft inertial navigation and altimeter. Also, independently measured, accurate positions of a number of ground control points were available. Initial tests of the method were carried out with repeated coverages of a land area on the Alaska coast (Prudhoe). Absolute accuracies achieved were essentially limited by the accuracy of the inertial navigation data. Errors of drift measurements were found to be about ±2.5 km. Relative accuracy is higher; its limits are set by the radar image geometry and the definition of identical features in sequential images. The drift of adjacent ice features with respect to one another could be determined with errors of less than ±0.2 km.
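Drift of an identifiable ice feature between two co-registered, georeferenced acquisitions can be estimated by locating the cross-correlation peak between small image chips. A minimal sketch under those assumptions (the study's own measurements relied on identified features, control points, and navigation data, so this is illustrative only):

```python
import numpy as np
from scipy.signal import fftconvolve

def estimate_drift(chip_t0, chip_t1, pixel_size_km, dt_hours):
    """Displacement (km) and mean drift speed (km/h) of an ice feature,
    from the cross-correlation peak between two image chips."""
    a = chip_t0 - chip_t0.mean()
    b = chip_t1 - chip_t1.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")   # cross-correlation
    peak_row, peak_col = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak_row - (a.shape[0] - 1)                    # pixel offsets
    dx = peak_col - (a.shape[1] - 1)
    displacement_km = np.hypot(dx, dy) * pixel_size_km
    return displacement_km, displacement_km / dt_hours
```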
Haneder, Stefan; Siedek, Florian; Doerner, Jonas; Pahn, Gregor; Grosse Hokamp, Nils; Maintz, David; Wybranski, Christian
2018-01-01
Background: A novel multi-energy, dual-layer spectral detector computed tomography (SDCT) system is now commercially available, with the vendor's claim that it yields the same or better quality of polychromatic, conventional CT images as modern single-energy CT scanners without any radiation dose penalty. Purpose: To intra-individually compare the quality of conventional polychromatic CT images acquired with a dual-layer spectral detector CT (SDCT) and the latest-generation 128-row single-energy-detector CT (CT128) from the same manufacturer. Material and Methods: Fifty patients underwent portal-venous phase, thoracic-abdominal CT scans with the SDCT and prior CT128 imaging. The SDCT scanning protocol was adapted to yield a similar estimated dose-length product (DLP) as the CT128. Patient dose optimization by automatic tube current modulation and CT image reconstruction with a state-of-the-art iterative algorithm were identical on both scanners. CT image contrast-to-noise ratio (CNR) was compared between the SDCT and CT128 in different anatomic structures. Image quality and noise were assessed independently by two readers with five-point Likert scales. Volume CT dose index (CTDIvol) and DLP were recorded and normalized to a 68 cm acquisition length (DLP68). Results: The SDCT yielded mean CNR values higher by 30.0% ± 2.0% (26.4-32.5%) in all anatomic structures (P < 0.001) and excellent scores for qualitative parameters surpassing the CT128 (all P < 0.0001) with substantial inter-rater agreement (κ ≥ 0.801). Despite adapted scan protocols, the SDCT yielded lower values for CTDIvol (-10.1 ± 12.8%), DLP (-13.1 ± 13.9%), and DLP68 (-15.3 ± 16.9%) than the CT128 (all P < 0.0001). Conclusion: The SDCT scanner yielded better CT image quality than the CT128 at lower radiation dose parameters.
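CNR definitions vary between studies; a common form compares the mean attenuation of a structure with an adjacent reference region, normalized by the noise (standard deviation) of the reference. A minimal sketch of that generic definition, not necessarily the exact formula used in the study:

```python
import numpy as np

def contrast_to_noise_ratio(roi_hu, reference_hu):
    """Generic CNR: |mean(ROI) - mean(reference)| / SD(reference), in HU."""
    roi_hu = np.asarray(roi_hu, dtype=float)
    reference_hu = np.asarray(reference_hu, dtype=float)
    return abs(roi_hu.mean() - reference_hu.mean()) / reference_hu.std()
```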
Lee, Ji Won; Kim, Chang Won; Lee, Geewon; Lee, Han Cheol; Kim, Sang-Pil; Choi, Bum Sung; Jeong, Yeon Joo
2018-02-01
Background: Using the hybrid electrocardiogram (ECG)-gated computed tomography (CT) technique, assessment of the entire aorta, coronary arteries, and aortic valve is possible with single-bolus contrast administration within a single acquisition. Purpose: To compare the image quality of hybrid ECG-gated and non-gated CT angiography of the aorta and evaluate the effect of a motion correction algorithm (MCA) on coronary artery image quality in the hybrid ECG-gated aorta CT group. Material and Methods: In total, 104 patients (76 men; mean age = 65.8 years) prospectively randomized into two groups (Group 1 = hybrid ECG-gated CT; Group 2 = non-gated CT) underwent wide-detector array aorta CT. Image quality, assessed using a four-point scale, was compared between the groups. Coronary artery image quality was compared between the conventional reconstruction and motion correction reconstruction subgroups in Group 1. Results: Group 1 showed significant advantages over Group 2 in aortic wall, cardiac chamber, aortic valve, coronary ostia, and main coronary artery image quality (all P < 0.001). All Group 1 patients had diagnostic image quality of the aortic wall and left ostium. The MCA significantly improved the image quality of the three main coronary arteries (P < 0.05). Moreover, per-vessel interpretability improved from 92.3% to 97.1% with the MCA (P = 0.013). Conclusion: Hybrid ECG-gated CT significantly improved the heart and aortic wall image quality, and the MCA can further improve the image quality and interpretability of the coronary arteries.
Galaxy clustering with photometric surveys using PDF redshift information
Asorey, J.; Carrasco Kind, M.; Sevilla-Noarbe, I.; ...
2016-03-28
Here, photometric surveys produce large-area maps of the galaxy distribution, but with less accurate redshift information than is obtained from spectroscopic methods. Modern photometric redshift (photo-z) algorithms use galaxy magnitudes, or colors, obtained through multi-band imaging to produce a probability density function (PDF) for each galaxy in the map. We used simulated data to study the effect of using different photo-z estimators to assign galaxies to redshift bins, in order to compare their effects on angular clustering and galaxy bias measurements. We found that if we use the entire PDF, rather than a single-point (mean or mode) estimate, the resulting measurements are less biased, especially when using narrow redshift bins. When the redshift bin width is Δz = 0.1, the use of the entire PDF reduces the typical measurement bias from 5%, when using single-point estimates, to 3%.
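Using the full PDF amounts to weighting each galaxy's contribution to a redshift bin by the probability mass of its PDF inside that bin, rather than assigning the galaxy wholly to the bin containing its mean or mode. A minimal sketch of that weighting, assuming each PDF is tabulated on a common redshift grid:

```python
import numpy as np

def pdf_bin_weights(z_grid, pdfs, bin_edges):
    """Fractional weight of each galaxy in each redshift bin, obtained by
    integrating its photo-z PDF over the bin.
    z_grid: (n_z,) redshift grid; pdfs: (n_gal, n_z) normalized PDFs."""
    n_bins = len(bin_edges) - 1
    weights = np.empty((pdfs.shape[0], n_bins))
    for j in range(n_bins):
        in_bin = (z_grid >= bin_edges[j]) & (z_grid < bin_edges[j + 1])
        weights[:, j] = np.trapz(pdfs[:, in_bin], z_grid[in_bin], axis=1)
    return weights
```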
Tuning charge and correlation effects for a single molecule on a graphene device
Wickenburg, Sebastian; Lu, Jiong; Lischner, Johannes; ...
2016-11-25
The ability to understand and control the electronic properties of individual molecules in a device environment is crucial for developing future technologies at the nanometre scale and below. Achieving this, however, requires the creation of three-terminal devices that allow single molecules to be both gated and imaged at the atomic scale. We have accomplished this by integrating a graphene field effect transistor with a scanning tunnelling microscope, thus allowing gate-controlled charging and spectroscopic interrogation of individual tetrafluoro-tetracyanoquinodimethane molecules. We observe a non-rigid shift in the molecule's lowest unoccupied molecular orbital energy (relative to the Dirac point) as a function of gate voltage due to graphene polarization effects. Our results show that electron-electron interactions play an important role in how molecular energy levels align to the graphene Dirac point, and may significantly influence charge transport through individual molecules incorporated in graphene-based nanodevices.
Geometric registration of images by similarity transformation using two reference points
NASA Technical Reports Server (NTRS)
Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)
2011-01-01
A method for registering a first image to a second image using a similarity transformation. Each image includes a plurality of pixels. The first image pixels are mapped to a set of first image coordinates and the second image pixels are mapped to a set of second image coordinates. The first image coordinates of two reference points in the first image are determined. The second image coordinates of these reference points in the second image are determined. A Cartesian translation of the set of second image coordinates is performed such that the second image coordinates of the first reference point match its first image coordinates. A similarity transformation of the translated set of second image coordinates is then performed. This transformation scales and rotates the second image coordinates about the first reference point such that the second image coordinates of the second reference point match its first image coordinates.
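Because a 2D similarity transformation has exactly four degrees of freedom (translation, rotation, uniform scale), two point correspondences determine it completely. A minimal sketch using complex arithmetic, in which the translate-then-scale-and-rotate steps described above collapse into one expression; variable names are illustrative:

```python
def similarity_from_two_points(p1, p2, q1, q2):
    """Return a function mapping second-image coordinates onto the first image,
    given two reference points seen in both images: p_i in image 1, q_i in image 2."""
    P1, P2 = complex(*p1), complex(*p2)
    Q1, Q2 = complex(*q1), complex(*q2)
    factor = (P2 - P1) / (Q2 - Q1)          # scale * exp(i * rotation angle)

    def transform(q):
        mapped = P1 + (complex(*q) - Q1) * factor   # translate, then rotate/scale about p1
        return mapped.real, mapped.imag

    return transform

# Example: q1 maps exactly onto p1 and q2 onto p2 by construction.
t = similarity_from_two_points((10, 20), (50, 20), (0, 0), (0, 40))
```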
Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique
NASA Technical Reports Server (NTRS)
Wargo, M. J.; Witt, A. F.
1992-01-01
A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.
Multimodal Microchannel and Nanowell-Based Microfluidic Platforms for Bioimaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Tao; Smallwood, Chuck R.; Zhu, Ying
2017-03-30
Modern live-cell imaging approaches permit real-time visualization of biological processes. However, limitations for unicellular organism trapping, culturing and long-term imaging can preclude complete understanding of how such microorganisms respond to perturbations in their local environment or linking single-cell variability to whole population dynamics. We have developed microfluidic platforms to overcome prior technical bottlenecks to allow both chemostat and compartmentalized cellular growth conditions using the same device. Additionally, a nanowell-based platform enables a high throughput approach to scale up compartmentalized imaging optimized within the microfluidic device. These channel and nanowell platforms are complementary, and both provide fine control over the local environment as well as the ability to add/replace media components at any experimental time point.
Linking brain, mind and behavior.
Makeig, Scott; Gramann, Klaus; Jung, Tzyy-Ping; Sejnowski, Terrence J; Poizner, Howard
2009-08-01
Cortical brain areas and dynamics evolved to organize motor behavior in our three-dimensional environment also support more general human cognitive processes. Yet traditional brain imaging paradigms typically allow and record only minimal participant behavior, then reduce the recorded data to single map features of averaged responses. To more fully investigate the complex links between distributed brain dynamics and motivated natural behavior, we propose the development of wearable mobile brain/body imaging (MoBI) systems that continuously capture the wearer's high-density electrical brain and muscle signals, three-dimensional body movements, audiovisual scene and point of regard, plus new data-driven analysis methods to model their interrelationships. The new imaging modality should allow new insights into how spatially distributed brain dynamics support natural human cognition and agency.
Feasibility of one-shot-per-crystal structure determination using Laue diffraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cornaby, Sterling; CHESS; Szebenyi, Doletha M. E.
Structure determination was successfully carried out using single Laue exposures from a group of lysozyme crystals. The Laue method may be a viable option for collection of one-shot-per-crystal data from microcrystals. Crystal size is an important factor in determining the number of diffraction patterns which may be obtained from a protein crystal before severe radiation damage sets in. As crystal dimensions decrease this number is reduced, eventually falling to one, at which point a complete data set must be assembled using data from multiple crystals. When only a single exposure is to be collected from each crystal, the polychromatic Laue technique may be preferable to monochromatic methods owing to its simultaneous recording of a large number of fully recorded reflections per image. To assess the feasibility of solving structures using single Laue images from multiple crystals, data were collected using a ‘pink’ beam at the CHESS D1 station from groups of lysozyme crystals with dimensions of the order of 20–30 µm mounted on MicroMesh grids. Single-shot Laue data were used for structure determination by molecular replacement and correct solutions were obtained even when as few as five crystals were used.
Lill, Yoriko; Martinez, Karen L; Lill, Markus A; Meyer, Bruno H; Vogel, Horst; Hecht, Bert
2005-08-12
We report on an in vivo single-molecule study of the signaling kinetics of G protein-coupled receptors (GPCR) performed using the neurokinin 1 receptor (NK1R) as a representative member. The NK1R signaling cascade is triggered by the specific binding of a fluorescently labeled agonist, substance P (SP). The diffusion of single receptor-ligand complexes in plasma membrane of living HEK 293 cells is imaged using fast single-molecule wide-field fluorescence microscopy at 100 ms time resolution. Diffusion trajectories are obtained which show intra- and intertrace heterogeneity in the diffusion mode. To investigate universal patterns in the diffusion trajectories we take the ligand-binding event as the common starting point. This synchronization allows us to observe changes in the character of the ligand-receptor-complex diffusion. Specifically, we find that the diffusion of ligand-receptor complexes is slowed down significantly and becomes more constrained as a function of time during the first 1000 ms. The decelerated and more constrained diffusion is attributed to an increasing interaction of the GPCR with cellular structures after the ligand-receptor complex is formed.
Dynamics of Single Hydrogen Bubbles at a Platinum Microelectrode.
Yang, Xuegeng; Karnbach, Franziska; Uhlemann, Margitta; Odenbach, Stefan; Eckert, Kerstin
2015-07-28
Bubble dynamics, including the formation, growth, and detachment of single H2 bubbles, was studied at a platinum microelectrode during the electrolysis of 1 M H2SO4 electrolyte. The bubbles were visualized through a microscope by a high-speed camera. Electrochemical measurements were conducted in parallel to measure the transient current. The periodic current oscillations, resulting from the periodic formation and detachment of single bubbles, allow the bubble lifetime and size to be predicted from the transient current. A comparison of the bubble volume calculated from the current and from the recorded bubble image shows a gas evolution efficiency increasing continuously with the growth of the bubble until it reaches 100%. Two different substrates, glass and epoxy, were used to embed the Pt wire. While nearly no difference was found with respect to the growth law for the bubble radius, the contact angle differs strongly for the two types of cell. Data provided for the contact point evolution further complete the picture of single hydrogen bubble growth. Finally, the velocity field driven by the detached bubble was measured by means of PIV, and the effects of the convection on the subsequent bubble were evaluated.
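The comparison between electrically and optically derived bubble volumes can be sketched as follows: Faraday's law converts the integrated transient current into moles of H2 (two electrons per molecule), the ideal-gas law converts that to a volume, and the ratio to the imaged spherical bubble volume gives a gas evolution efficiency. The temperature, pressure, and trapezoidal integration below are assumptions added for illustration, not values from the paper.

import numpy as np

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def gas_evolution_efficiency(t, current, bubble_radius, T=298.15, p=101325.0):
    charge = np.trapz(current, t)                    # charge passed during bubble growth, C
    n_h2 = charge / (2.0 * F)                        # mol of H2 expected from Faraday's law
    v_faraday = n_h2 * R * T / p                     # ideal-gas volume, m^3
    v_bubble = 4.0 / 3.0 * np.pi * bubble_radius**3  # volume from the recorded bubble image, m^3
    return v_bubble / v_faraday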
Ren, Zhou-Xin; Yu, Hai-Bin; Shen, Jun-Ling; Li, Ya; Li, Jian-Sheng
2015-06-01
To establish a preprocessing method for cell morphometry in microscopic images of A549 cells in epithelial-mesenchymal transition (EMT). Adobe Photoshop CS2 (Adobe Systems, Inc.) was used for preprocessing the images. First, all images were processed for size uniformity and high distinguishability between the cell and background areas. Then, a blank image of the same size was overlaid with grids, and the cross points of the grids were marked in a distinct color. The blank image was merged into a processed image. In the merged images, the cells containing 1 or more cross points were chosen, and the cell areas were then enclosed and filled with a distinct color. Except for the chosen cellular areas, all areas were changed into a unique hue. Three observers quantified the roundness of cells in images with the image preprocessing (IPP) method or without it (Controls). Furthermore, 1 observer measured the roundness 3 times with each of the 2 methods. The results from IPPs and Controls were compared for repeatability and reproducibility. As compared with the Control method, among the 3 observers, use of the IPP method resulted in a higher number and a higher percentage of same-chosen cells in an image. The relative average deviation values of roundness, either for 3 observers or 1 observer, were significantly higher in Controls than in IPPs (p < 0.01 or 0.001). The values of the intraclass correlation coefficient, both Single and Average, were higher in IPPs than in Controls for both 3 observers and 1 observer. Processed with Adobe Photoshop, a chosen cell from an image was more objective, regular, and accurate, increasing the reproducibility and repeatability of morphometry of A549 cells in epithelial to mesenchymal transition.
Single-Frame Terrain Mapping Software for Robotic Vehicles
NASA Technical Reports Server (NTRS)
Rankin, Arturo L.
2011-01-01
This software is a component in an unmanned ground vehicle (UGV) perception system that builds compact, single-frame terrain maps for distribution to other systems, such as a world model or an operator control unit, over a local area network (LAN). Each cell in the map encodes an elevation value, terrain classification, object classification, terrain traversability, terrain roughness, and a confidence value into four bytes of memory. The input to this software component is a range image (from a lidar or stereo vision system), and optionally a terrain classification image and an object classification image, both registered to the range image. The single-frame terrain map generates estimates of the support surface elevation, ground cover elevation, and minimum canopy elevation; generates terrain traversability cost; detects low overhangs and high-density obstacles; and can perform geometry-based terrain classification (ground, ground cover, unknown). A new origin is automatically selected for each single-frame terrain map in global coordinates such that it coincides with the corner of a world map cell. That way, single-frame terrain maps correctly line up with the world map, facilitating the merging of map data into the world map. Instead of using 32 bits to store the floating-point elevation for a map cell, the vehicle elevation is assigned to the map origin elevation, and each cell reports the change in elevation (from the origin elevation) as a number of discrete steps. The single-frame terrain map elevation resolution is 2 cm. At that resolution, terrain elevation from -20.5 to 20.5 m (with respect to the vehicle's elevation) is encoded into 11 bits. For each four-byte map cell, bits are assigned to encode elevation, terrain roughness, terrain classification, object classification, terrain traversability cost, and a confidence value. The vehicle's current position and orientation, the map origin, and the map cell resolution are all included in a header for each map. The map is compressed into a vector prior to delivery to another system.
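The four-byte cell encoding can be illustrated with the bit-packing sketch below. The 11-bit elevation field at 2 cm resolution over roughly -20.5 to 20.5 m follows the description above; the bit widths chosen for roughness, classifications, cost, and confidence are assumptions that simply fill the remaining 21 bits, since the text does not give them.

def pack_cell(delta_elev_m, roughness, terrain_cls, object_cls, cost, confidence):
    # 11-bit elevation in 2 cm steps, offset so -20.48 m maps to step 0 (assumed offset-binary layout).
    steps = max(0, min(2047, int(round((delta_elev_m + 20.48) / 0.02))))
    word = steps
    word |= (roughness   & 0x1F) << 11    # 5 bits, assumed width
    word |= (terrain_cls & 0x07) << 16    # 3 bits, assumed width
    word |= (object_cls  & 0x07) << 19    # 3 bits, assumed width
    word |= (cost        & 0x3F) << 22    # 6 bits, assumed width
    word |= (confidence  & 0x0F) << 28    # 4 bits, assumed width
    return word & 0xFFFFFFFF              # one four-byte map cell

def unpack_elevation(word):
    return (word & 0x7FF) * 0.02 - 20.48  # metres relative to the map origin elevation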
Adjustment of multi-CCD-chip-color-camera heads
NASA Astrophysics Data System (ADS)
Guyenot, Volker; Tittelbach, Guenther; Palme, Martin
1999-09-01
The principle of beam-splitter multi-chip cameras consists in splitting an image into multiple images of different spectral ranges and in distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity in comparison to conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF and STRACON GmbH (and, in the future, COBRA electronic GmbH) develop methods for designing the optics and dichroic mirror system of such prism color beam splitter devices. Additionally, techniques and equipment for the alignment and assembly of color-beam-splitter multi-CCD devices on the basis of gluing with UV-curable adhesives have been developed, too.
Base pair mismatch recognition using plasmon resonant particle labels.
Oldenburg, Steven J; Genick, Christine C; Clark, Keith A; Schultz, David A
2002-10-01
We demonstrate the use of silver plasmon resonant particles (PRPs), as reporter labels, in a microarray-based DNA hybridization assay in which we screen for a known polymorphic site in the breast cancer gene BRCA1. PRPs (40-100 nm in diameter) image as diffraction-limited points of colored light in a standard microscope equipped with dark-field illumination, and can be individually identified and discriminated against background scatter. Rather than overall intensity, the number of PRPs counted in a CCD image by a software algorithm serves as the signal in these assays. In a typical PRP hybridization assay, we achieve a detection sensitivity that is approximately 60 x greater than that achieved by using fluorescent labels. We conclude that single particle counting is robust, generally applicable to a wide variety of assay platforms, and can be integrated into low-cost and quantitative detection systems for single nucleotide polymorphism analysis.
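A minimal counting sketch in the spirit of the particle-counting readout described above (not the authors' algorithm): threshold the dark-field CCD image and count connected bright spots as individual PRPs. The threshold and library choice are assumptions.

import numpy as np
from scipy import ndimage

def count_prps(image, threshold):
    mask = np.asarray(image) > threshold
    labeled, num_spots = ndimage.label(mask)   # each connected bright region counted as one PRP
    return num_spots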
Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J
2012-01-01
A single click ensemble segmentation (SCES) approach based on an existing “Click&Grow” algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed. This facilitates processing large numbers of cases. Evaluation was done on a set of 129 CT lung tumor images using a similarity index (SI). The average SI is above 93% using 20 different start seeds, showing stability. The average SI for 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77% and 63.76%, respectively. We can conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated. PMID:23459617
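A hedged sketch of one common similarity index, the Dice coefficient between two binary segmentations; the abstract does not spell out its exact SI formula, so this definition is an assumption used only to make the comparison concrete.

import numpy as np

def similarity_index(seg_a, seg_b):
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())   # 1.0 = perfect overlap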
Image Tiling for Profiling Large Objects
NASA Technical Reports Server (NTRS)
Venkataraman, Ajit; Schock, Harold; Mercer, Carolyn R.
1992-01-01
Three-dimensional surface measurements of large objects are required in a variety of industrial processes. The nature of these measurements is changing as optical instruments are beginning to replace conventional contact probes scanned over the objects. A common characteristic of optical surface profilers is the trade-off between measurement accuracy and field of view. In order to measure a large object with high accuracy, multiple views are required. An accurate transformation between the different views is needed to bring about their registration. In this paper, we demonstrate how the transformation parameters can be obtained precisely by choosing control points which lie in the overlapping regions of the images. A good starting point for the transformation parameters is obtained by having a knowledge of the scanner position. The selection of the control points is independent of the object geometry. By successively recording multiple views and obtaining transformations with respect to a single coordinate system, a complete physical model of an object can be obtained. Since all data are in the same coordinate system, they can thus be used for building automatic models for free-form surfaces.
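One standard way to recover the view-to-view transformation from control points in the overlapping region is a least-squares (Kabsch-style) fit of rotation and translation, sketched below; the paper's exact solver is not stated, and the scanner position would only seed the initial guess in an iterative variant, so this is an illustrative assumption.

import numpy as np

def estimate_rigid_transform(src_pts, dst_pts):
    # src_pts, dst_pts: (N, 3) corresponding control points from two overlapping views.
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t                        # dst ≈ R @ src + t for every control point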
Okur, A; Kantarci, M; Karaca, L; Yildiz, S; Sade, R; Pirimoglu, B; Keles, M; Avci, A; Çankaya, E; Schmitt, P
2016-03-01
To assess the efficiency of a novel quiescent-interval single-shot (QISS) technique for non-contrast-enhanced magnetic resonance angiography (MRA) of haemodialysis fistulas. QISS MRA and colour Doppler ultrasound (CDU) images were obtained from 22 haemodialysis patients with end-stage renal disease (ESRD). A radiologist with extensive experience in vascular imaging initially assessed the fistulas using CDU. Two observers analysed each QISS MRA data set in terms of image quality, using a five-point scale ranging from 0 (non-diagnostic) to 4 (excellent), and lumen diameters of all segments were measured. One hundred vascular segments were analysed for QISS MRA. Two anastomosis segments were considered non-diagnostic. None of the arterial or venous segments were evaluated as non-diagnostic. The image quality was poorer for the anastomosis level compared to the other segments (p<0.001 for arterial segments, and p<0.05 for venous segments), while no significant difference was determined for other vascular segments. QISS MRA has the potential to provide valuable complementary information to CDU regarding the imaging of haemodialysis fistulas. In addition, QISS non-enhanced MRA represents an alternative for assessment of haemodialysis fistulas, in which the administration of iodinated or gadolinium-based contrast agents is contraindicated. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting
Huang, Xiwei; Jiang, Yu; Liu, Xu; Xu, Hang; Han, Zhi; Rong, Hailong; Yang, Haiping; Yan, Mei; Yu, Hao
2016-01-01
A lensless blood cell counting system integrating microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique to miniaturize the conventional optical lens based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution from the system-level using super-resolution (SR) processing. Yet, how to improve resolution towards better cell detection and recognition with low cost of processing resources and without degrading system throughput is still a challenge. In this article, two machine learning based single-frame SR processing types are proposed and compared for lensless blood cell counting, namely the Extreme Learning Machine based SR (ELMSR) and Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. When one captured low-resolution lensless cell image is input, an improved high-resolution cell image will be output. The experimental results show that the cell resolution is improved by 4×, and CNNSR has 9.5% improvement over the ELMSR on resolution enhancing performance. The cell counting results also match well with a commercial flow cytometer. Such ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems towards POCT applications. PMID:27827837
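A minimal single-frame super-resolution network in the spirit of the CNNSR approach described above: bicubic upsampling by the reported 4x factor followed by a small convolutional refinement. The layer sizes, framework, and omitted training loop are assumptions for illustration, not the authors' architecture.

import torch
import torch.nn as nn

class TinyCellSR(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=scale, mode='bicubic', align_corners=False)
        self.refine = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, low_res):
        # low_res: (N, 1, H, W) lensless cell image; output: (N, 1, 4H, 4W) enhanced image.
        return self.refine(self.upsample(low_res))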
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).
Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong
2016-02-06
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect the reproducibility of our sensor as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision under different circumstances.
A Multi-Camera System for Bioluminescence Tomography in Preclinical Oncology Research
Lewis, Matthew A.; Richer, Edmond; Slavine, Nikolai V.; Kodibagkar, Vikram D.; Soesbe, Todd C.; Antich, Peter P.; Mason, Ralph P.
2013-01-01
Bioluminescent imaging (BLI) of cells expressing luciferase is a valuable noninvasive technique for investigating molecular events and tumor dynamics in the living animal. Current usage is often limited to planar imaging, but tomographic imaging can enhance the usefulness of this technique in quantitative biomedical studies by allowing accurate determination of tumor size and attribution of the emitted light to a specific organ or tissue. Bioluminescence tomography based on a single camera with source rotation or mirrors to provide additional views has previously been reported. We report here in vivo studies using a novel approach with multiple rotating cameras that, when combined with image reconstruction software, provides the desired representation of point source metastases and other small lesions. Comparison with MRI validated the ability to detect lung tumor colonization in mouse lung. PMID:26824926
Muldoon, Timothy J; Polydorides, Alexandros D; Maru, Dipen M; Harpaz, Noam; Harris, Michael T; Hofstettor, Wayne; Hiotis, Spiros P; Kim, Sanghyun A; Ky, Alex J; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2012-01-01
Background: Confocal endomicroscopy has revolutionized endoscopy by offering sub-cellular images of gastrointestinal epithelium; however, field-of-view is limited. There is a need for multi-scale endoscopy platforms that use widefield imaging to better direct placement of high-resolution probes. Design: Feasibility study. Objective: This study evaluates the feasibility of a single agent, proflavine hemisulfate, as a contrast medium during both widefield and high resolution imaging to characterize morphologic changes associated with a variety of gastrointestinal conditions. Setting: U.T. M.D. Anderson Cancer Center (Houston, TX) and Mount Sinai Medical Center (New York, NY). Patients, Interventions, and Main Outcome Measurements: Surgical specimens were obtained from 15 patients undergoing esophagectomy/colectomy. Proflavine, a vital fluorescent dye, was applied topically. Specimens were imaged with a widefield multispectral microscope and a high-resolution microendoscope. Images were compared to histopathology. Results: Widefield-fluorescence imaging enhanced visualization of morphology, including the presence and spatial distribution of glands, glandular distortion, atrophy and crowding. High-resolution imaging of widefield-abnormal areas revealed that neoplastic progression corresponded to glandular heterogeneity and nuclear crowding in dysplasia, with glandular effacement in carcinoma. These widefield and high-resolution image features correlated well with histopathology. Limitations: This imaging approach must be validated in vivo with a larger sample size. Conclusions: Multi-scale proflavine-enhanced fluorescence imaging can delineate epithelial changes in a variety of gastrointestinal conditions. Distorted glandular features seen with widefield imaging could serve as a critical ‘bridge’ to high-resolution probe placement. An endoscopic platform combining the two modalities with a single vital-dye may facilitate point-of-care decision-making by providing real-time, in vivo diagnoses. PMID:22301343
Vital-dye enhanced fluorescence imaging of GI mucosa: metaplasia, neoplasia, inflammation.
Thekkek, Nadhi; Muldoon, Timothy; Polydorides, Alexandros D; Maru, Dipen M; Harpaz, Noam; Harris, Michael T; Hofstettor, Wayne; Hiotis, Spiros P; Kim, Sanghyun A; Ky, Alex Jenny; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2012-04-01
Confocal endomicroscopy has revolutionized endoscopy by offering subcellular images of the GI epithelium; however, the field of view is limited. Multiscale endoscopy platforms that use widefield imaging are needed to better direct the placement of high-resolution probes. Feasibility study. This study evaluated the feasibility of a single agent, proflavine hemisulfate, as a contrast medium during both widefield and high-resolution imaging to characterize the morphologic changes associated with a variety of GI conditions. The University of Texas MD Anderson Cancer Center, Houston, Texas, and Mount Sinai Medical Center, New York, New York. PATIENTS, INTERVENTIONS, AND MAIN OUTCOME MEASUREMENTS: Resected specimens were obtained from 15 patients undergoing EMR, esophagectomy, or colectomy. Proflavine hemisulfate, a vital fluorescent dye, was applied topically. The specimens were imaged with a widefield multispectral microscope and a high-resolution microendoscope. The images were compared with histopathologic examination. Widefield fluorescence imaging enhanced visualization of morphology, including the presence and spatial distribution of glands, glandular distortion, atrophy, and crowding. High-resolution imaging of widefield abnormal areas revealed that neoplastic progression corresponded to glandular heterogeneity and nuclear crowding in dysplasia, with glandular effacement in carcinoma. These widefield and high-resolution image features correlated well with the histopathologic features. This imaging approach must be validated in vivo with a larger sample size. Multiscale proflavine-enhanced fluorescence imaging can delineate epithelial changes in a variety of GI conditions. Distorted glandular features seen with widefield imaging could serve as a critical bridge to high-resolution probe placement. An endoscopic platform combining the two modalities with a single vital dye may facilitate point-of-care decision making by providing real-time, in vivo diagnoses. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Hydrodynamic interaction of two particles in confined linear shear flow at finite Reynolds number
NASA Astrophysics Data System (ADS)
Yan, Yiguang; Morris, Jeffrey F.; Koplik, Joel
2007-11-01
We discuss the hydrodynamic interactions of two solid bodies placed in linear shear flow between parallel plane walls in a periodic geometry at finite Reynolds number. The computations are based on the lattice Boltzmann method for particulate flow, validated here by comparison to previous results for a single particle. Most of our results pertain to cylinders in two dimensions but some examples are given for spheres in three dimensions. Either one mobile and one fixed particle or else two mobile particles are studied. The motion of a mobile particle is qualitatively similar in both cases at early times, exhibiting either trajectory reversal or bypass, depending upon the initial vector separation of the pair. At longer times, if a mobile particle does not approach a periodic image of the second, its trajectory tends to a stable limit point on the symmetry axis. The effect of interactions with periodic images is to produce nonconstant asymptotic long-time trajectories. For one free particle interacting with a fixed second particle within the unit cell, the free particle may either move to a fixed point or take up a limit cycle. Pairs of mobile particles starting from symmetric initial conditions are shown to asymptotically reach either fixed points, or mirror image limit cycles within the unit cell, or to bypass one another (and periodic images) indefinitely on a streamwise periodic trajectory. The limit cycle possibility requires finite Reynolds number and arises as a consequence of streamwise periodicity when the system length is sufficiently short.
Classification of footwear outsole patterns using Fourier transform and local interest points.
Richetelli, Nicole; Lee, Mackenzie C; Lasky, Carleen A; Gump, Madison E; Speir, Jacqueline A
2017-06-01
Successful classification of questioned footwear has tremendous evidentiary value; the result can minimize the potential suspect pool and link a suspect to a victim, a crime scene, or even multiple crime scenes to each other. With this in mind, several different automated and semi-automated classification models have been applied to the forensic footwear recognition problem, with superior performance commonly associated with two different approaches: correlation of image power (magnitude) or phase, and the use of local interest points transformed using the Scale Invariant Feature Transform (SIFT) and compared using Random Sample Consensus (RANSAC). Despite the distinction associated with each of these methods, all three have not been cross-compared using a single dataset, of limited quality (i.e., characteristic of crime scene-like imagery), and created using a wide combination of image inputs. To address this question, the research presented here examines the classification performance of the Fourier-Mellin transform (FMT), phase-only correlation (POC), and local interest points (transformed using SIFT and compared using RANSAC), as a function of inputs that include mixed media (blood and dust), transfer mechanisms (gel lifters), enhancement techniques (digital and chemical) and variations in print substrate (ceramic tiles, vinyl tiles and paper). Results indicate that POC outperforms both FMT and SIFT+RANSAC, regardless of image input (type, quality and totality), and that the difference in stochastic dominance detected for POC is significant across all image comparison scenarios evaluated in this study. Copyright © 2017 Elsevier B.V. All rights reserved.
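A compact sketch of phase-only correlation, the top-performing comparison in this study: whiten the cross-power spectrum so only the spectral phase is correlated, then read the correlation peak as the match score and translational offset. The variable names and the epsilon regularizer are assumptions.

import numpy as np

def phase_only_correlation(img_a, img_b, eps=1e-9):
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    poc = np.fft.ifft2(cross / (np.abs(cross) + eps)).real   # keep phase, discard magnitude
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    return float(poc.max()), peak    # similarity score and translational offset (rows, cols)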
Mapping and localization for extraterrestrial robotic explorations
NASA Astrophysics Data System (ADS)
Xu, Fengliang
In the exploration of an extraterrestrial environment such as Mars, orbital data, such as high-resolution Mars Orbital Camera-Narrow Angle (MOC-NA) imagery, Mars Orbital Laser Altimeter (MOLA) laser ranging data, and multi-spectral Thermal Emission Imaging System (THEMIS) imagery, play more and more important roles. However, these remote sensing techniques can never replace the role of landers and rovers, which can provide a close-up and inside view. Similarly, orbital mapping cannot compete with ground-level close-range mapping in resolution, precision, and speed. This dissertation addresses two tasks related to robotic extraterrestrial exploration: mapping and rover localization. Image registration is also discussed as an important aspect of both of them. Techniques from computer vision and photogrammetry are applied for automation and precision. Image registration is classified into three sub-categories: intra-stereo, inter-stereo, and cross-site, according to the relationship between stereo images. For intra-stereo registration, which is the most fundamental sub-category, interest point-based registration and verification by parallax continuity in the principal direction are proposed. Two other techniques, inter-scanline search with constrained dynamic programming for far-range matching and Markov Random Field (MRF) based registration for large terrain variation, are explored as possible improvements. Mapping using rover ground images mainly involves the generation of a Digital Terrain Model (DTM) and an ortho-rectified map (orthomap). The first task is to derive the spatial distribution statistics from the first panorama and model the DTM with a dual polynomial model. This model is used for interpolation of the DTM, using Kriging in the close range and a Triangular Irregular Network (TIN) in the far range. To generate a uniformly illuminated orthomap from the DTM, a least-squares-based automatic intensity balancing method is proposed. Finally, a seamless orthomap is constructed by a split-and-merge technique: the mapped area is split or subdivided into small regions of image overlap, then each small map piece is processed and all of the pieces are merged together to form a seamless map. Rover localization has three stages, all of which use a least-squares adjustment procedure: (1) an initial localization accomplished by adjustment over features common to rover images and orbital images, (2) an adjustment of image pointing angles at a single site through inter- and intra-stereo tie points, and (3) an adjustment of the rover traverse through manual cross-site tie points. The first stage is based on adjustment of observation angles of features. The second and third stages are based on bundle adjustment. In the third stage, an incremental adjustment method is proposed. Automation in rover localization includes automatic intra/inter-stereo tie point selection, computer-assisted cross-site tie point selection, and automatic verification of accuracy. (Abstract shortened by UMI.)
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
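The per-frame pose step described in this abstract, matching 2D image feature points against 3D model feature points and keeping only a self-consistent subset, closely resembles RANSAC-based PnP; the hedged sketch below uses OpenCV's solvePnPRansac as a stand-in, with the camera matrix and variable names as assumptions rather than the patent's actual method.

import numpy as np
import cv2

def estimate_pose(model_pts_3d, image_pts_2d, camera_matrix):
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(model_pts_3d, dtype=np.float32),   # candidate 3D model feature points
        np.asarray(image_pts_2d, dtype=np.float32),   # matched 2D image feature points
        camera_matrix, None)                          # None: no lens distortion assumed
    # inliers is the subset of correspondences consistent with one projection of the 3D model.
    return (rvec, tvec, inliers) if ok else (None, None, None)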
Considerations for the Use of STEREO -HI Data for Astronomical Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tappin, S. J., E-mail: james.tappin@stfc.ac.uk
Recent refinements to the photometric calibrations of the Heliospheric Imagers (HI) on board the Solar TErrestrial RElations Observatory (STEREO) have revealed a number of subtle effects in the measurement of stellar signals with those instruments. These effects need to be considered in the interpretation of STEREO-HI data for astronomy. In this paper we present an analysis of these effects and how to compensate for them when using STEREO-HI data for astronomical studies. We determine how saturation of the HI CCD detectors affects the apparent count rates of stars after the on-board summing of pixels and exposures. Single-exposure calibration images are analyzed and compared with binned and summed science images to determine the influence of saturation on the science images. We also analyze how the on-board cosmic-ray scrubbing algorithm affects stellar images. We determine how this interacts with the variations of instrument pointing to affect measurements of stars. We find that saturation is a significant effect only for the brightest stars, and that its onset is gradual. We also find that degraded pointing stability, whether of the entire spacecraft or of the imagers, leads to reduced stellar count rates and also increased variation thereof through interaction with the on-board cosmic-ray scrubbing algorithm. We suggest ways in which these effects can be mitigated for astronomical studies and also suggest how the situation can be improved for future imagers.
Polarimetric ISAR: Simulation and image reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, David H.
In polarimetric ISAR the illumination platform, typically airborne, carries a pair of antennas that are directed toward a fixed point on the surface as the platform moves. During platform motion, the antennas maintain their gaze on the point, creating an effective aperture for imaging any targets near that point. The interaction between the transmitted fields and targets (e.g. ships) is complicated since the targets are typically many wavelengths in size. Calculation of the field scattered from the target typically requires solving Maxwell’s equations on a large three-dimensional numerical grid. This is prohibitive to use in any real-world imaging algorithm, so the scattering process is typically simplified by assuming the target consists of a cloud of independent, non-interacting, scattering points (centers). Imaging algorithms based on this scattering model perform well in many applications. Since polarimetric radar is not very common, the scattering model is often derived for a scalar field (single polarization) where the individual scatterers are assumed to be small spheres. However, when polarization is important, we must generalize the model to explicitly account for the vector nature of the electromagnetic fields and its interaction with objects. In this note, we present a scattering model that explicitly includes the vector nature of the fields but retains the assumption that the individual scatterers are small. The response of the scatterers is described by electric and magnetic dipole moments induced by the incident fields. We show that the received voltages in the antennas are linearly related to the transmitting currents through a scattering impedance matrix that depends on the overall geometry of the problem and the nature of the scatterers.
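The linear transmit-receive relation described above can be written schematically as below; the notation is an assumption introduced for illustration, not the author's.

% Received voltages V at the two polarimetric antennas are linear in the transmit currents I
% through a scattering impedance matrix Z_s built from the incident fields at each point
% scatterer r_n and its induced electric/magnetic dipole polarizabilities alpha^e_n, alpha^m_n.
V = Z_s I, \qquad
Z_s^{(jk)} \propto \sum_n \Big[ \mathbf{E}_j(\mathbf{r}_n)\cdot\alpha^{e}_n\,\mathbf{E}_k(\mathbf{r}_n)
 + \mathbf{H}_j(\mathbf{r}_n)\cdot\alpha^{m}_n\,\mathbf{H}_k(\mathbf{r}_n) \Big]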
Highly multiplexed single-cell analysis of formalin-fixed, paraffin-embedded cancer tissue
Gerdes, Michael J.; Sevinsky, Christopher J.; Sood, Anup; Adak, Sudeshna; Bello, Musodiq O.; Bordwell, Alexander; Can, Ali; Corwin, Alex; Dinn, Sean; Filkins, Robert J.; Hollman, Denise; Kamath, Vidya; Kaanumalle, Sireesha; Kenny, Kevin; Larsen, Melinda; Lazare, Michael; Lowes, Christina; McCulloch, Colin C.; McDonough, Elizabeth; Pang, Zhengyu; Rittscher, Jens; Santamaria-Pang, Alberto; Sarachan, Brion D.; Seel, Maximilian L.; Seppo, Antti; Shaikh, Kashan; Sui, Yunxia; Zhang, Jingyu; Ginty, Fiona
2013-01-01
Limitations on the number of unique protein and DNA molecules that can be characterized microscopically in a single tissue specimen impede advances in understanding the biological basis of health and disease. Here we present a multiplexed fluorescence microscopy method (MxIF) for quantitative, single-cell, and subcellular characterization of multiple analytes in formalin-fixed paraffin-embedded tissue. Chemical inactivation of fluorescent dyes after each image acquisition round allows reuse of common dyes in iterative staining and imaging cycles. The mild inactivation chemistry is compatible with total and phosphoprotein detection, as well as DNA FISH. Accurate computational registration of sequential images is achieved by aligning nuclear counterstain-derived fiducial points. Individual cells, plasma membrane, cytoplasm, nucleus, tumor, and stromal regions are segmented to achieve cellular and subcellular quantification of multiplexed targets. In a comparison of pathologist scoring of diaminobenzidine staining of serial sections and automated MxIF scoring of a single section, human epidermal growth factor receptor 2, estrogen receptor, p53, and androgen receptor staining by diaminobenzidine and MxIF methods yielded similar results. Single-cell staining patterns of 61 protein antigens by MxIF in 747 colorectal cancer subjects reveals extensive tumor heterogeneity, and cluster analysis of divergent signaling through ERK1/2, S6 kinase 1, and 4E binding protein 1 provides insights into the spatial organization of mechanistic target of rapamycin and MAPK signal transduction. Our results suggest MxIF should be broadly applicable to problems in the fields of basic biological research, drug discovery and development, and clinical diagnostics. PMID:23818604
Highly multiplexed single-cell analysis of formalin-fixed, paraffin-embedded cancer tissue.
Gerdes, Michael J; Sevinsky, Christopher J; Sood, Anup; Adak, Sudeshna; Bello, Musodiq O; Bordwell, Alexander; Can, Ali; Corwin, Alex; Dinn, Sean; Filkins, Robert J; Hollman, Denise; Kamath, Vidya; Kaanumalle, Sireesha; Kenny, Kevin; Larsen, Melinda; Lazare, Michael; Li, Qing; Lowes, Christina; McCulloch, Colin C; McDonough, Elizabeth; Montalto, Michael C; Pang, Zhengyu; Rittscher, Jens; Santamaria-Pang, Alberto; Sarachan, Brion D; Seel, Maximilian L; Seppo, Antti; Shaikh, Kashan; Sui, Yunxia; Zhang, Jingyu; Ginty, Fiona
2013-07-16
Limitations on the number of unique protein and DNA molecules that can be characterized microscopically in a single tissue specimen impede advances in understanding the biological basis of health and disease. Here we present a multiplexed fluorescence microscopy method (MxIF) for quantitative, single-cell, and subcellular characterization of multiple analytes in formalin-fixed paraffin-embedded tissue. Chemical inactivation of fluorescent dyes after each image acquisition round allows reuse of common dyes in iterative staining and imaging cycles. The mild inactivation chemistry is compatible with total and phosphoprotein detection, as well as DNA FISH. Accurate computational registration of sequential images is achieved by aligning nuclear counterstain-derived fiducial points. Individual cells, plasma membrane, cytoplasm, nucleus, tumor, and stromal regions are segmented to achieve cellular and subcellular quantification of multiplexed targets. In a comparison of pathologist scoring of diaminobenzidine staining of serial sections and automated MxIF scoring of a single section, human epidermal growth factor receptor 2, estrogen receptor, p53, and androgen receptor staining by diaminobenzidine and MxIF methods yielded similar results. Single-cell staining patterns of 61 protein antigens by MxIF in 747 colorectal cancer subjects reveals extensive tumor heterogeneity, and cluster analysis of divergent signaling through ERK1/2, S6 kinase 1, and 4E binding protein 1 provides insights into the spatial organization of mechanistic target of rapamycin and MAPK signal transduction. Our results suggest MxIF should be broadly applicable to problems in the fields of basic biological research, drug discovery and development, and clinical diagnostics.
Hanson, G Jay; Michalak, Gregory J; Childs, Robert; McCollough, Brian; Kurup, Anil N; Hough, David M; Frye, Judson M; Fidler, Jeff L; Venkatesh, Sudhakar K; Leng, Shuai; Yu, Lifeng; Halaweish, Ahmed F; Harmsen, W Scott; McCollough, Cynthia H; Fletcher, J G
2018-06-01
Single-energy low tube potential (SE-LTP) and dual-energy virtual monoenergetic (DE-VM) CT images both increase the conspicuity of hepatic lesions by increasing iodine signal. Our purpose was to compare the conspicuity of proven liver lesions, artifacts, and radiologist preferences in dose-matched SE-LTP and DE-VM images. Thirty-one patients with 72 proven liver lesions (21 benign, 51 malignant) underwent full-dose contrast-enhanced dual-energy CT (DECT). Half-dose images were obtained using single tube reconstruction of the dual-source SE-LTP projection data (80 or 100 kV), and by inserting noise into dual-energy projection data, with DE-VM images reconstructed from 40 to 70 keV. Three blinded gastrointestinal radiologists evaluated half-dose SE-LTP and DE-VM images, ranking and grading liver lesion conspicuity and diagnostic confidence (4-point scale) on a per-lesion basis. Image quality (noise, artifacts, sharpness) was evaluated, and overall image preference was ranked on per-patient basis. Lesion-to-liver contrast-to-noise ratio (CNR) was compared between techniques. Mean lesion size was 1.5 ± 1.2 cm. Across the readers, the mean conspicuity ratings for 40, 45, and 50 keV half-dose DE-VM images were superior compared to other half-dose image sets (p < 0.0001). Per-lesion diagnostic confidence was similar between half-dose SE-LTP compared to half-dose DE-VM images (p ≥ 0.05; 1.19 vs. 1.24-1.32). However, SE-LTP images had less noise and artifacts and were sharper compared to DE-VM images less than 70 keV (p < 0.05). On a per-patient basis, radiologists preferred SE-LTP images the most and preferred 40-50 keV the least (p < 0.0001). Lesion CNR was also higher in SE-LTP images than DE-VM images (p < 0.01). For the same applied dose level, liver lesions were more conspicuous using DE-VM compared to SE-LTP; however, SE-LTP images were preferred more than any single DE-VM energy level, likely due to lower noise and artifacts.
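For reference, the lesion-to-liver contrast-to-noise ratio compared above is commonly defined as below; the paper does not spell out its exact formula, so this definition is an assumption.

\mathrm{CNR} = \frac{\left|\mu_{\text{lesion}} - \mu_{\text{liver}}\right|}{\sigma_{\text{noise}}}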
Ge, Jiajia; Zhu, Banghe; Regalado, Steven; Godavarty, Anuradha
2008-01-01
Hand-held based optical imaging systems are a recent development towards diagnostic imaging of breast cancer. To date, all the hand-held based optical imagers are used to perform only surface mapping and target localization, but are not capable of demonstrating tomographic imaging. Herein, a novel hand-held probe based optical imager is developed towards three-dimensional (3-D) optical tomography studies. The unique features of this optical imager, which primarily consists of a hand-held probe and an intensified charge coupled device detector, are its ability to: (i) image large tissue areas (5×10 sq. cm) in a single scan, (ii) perform simultaneous multiple point illumination and collection, thus reducing the overall imaging time; and (iii) adapt to varying tissue curvatures, from a flexible probe head design. Experimental studies are performed in the frequency domain on large slab phantoms (∼650 ml) using fluorescence target(s) under perfect uptake (1:0) contrast ratios, and varying target depths (1–2 cm) and X-Y locations. The effect of implementing simultaneous over sequential multiple point illumination towards 3-D tomography is experimentally demonstrated. The feasibility of 3-D optical tomography studies has been demonstrated for the first time using a hand-held based optical imager. Preliminary fluorescence-enhanced optical tomography studies are able to reconstruct 0.45 ml target(s) located at different target depths (1–2 cm). However, the depth recovery was limited as the actual target depth increased, since only reflectance measurements were acquired. Extensive tomography studies are currently carried out to determine the resolution and performance limits of the imager on flat and curved phantoms. PMID:18697559
Ge, Jiajia; Zhu, Banghe; Regalado, Steven; Godavarty, Anuradha
2008-07-01
Hand-held based optical imaging systems are a recent development towards diagnostic imaging of breast cancer. To date, all the hand-held based optical imagers are used to perform only surface mapping and target localization, but are not capable of demonstrating tomographic imaging. Herein, a novel hand-held probe based optical imager is developed towards three-dimensional (3-D) optical tomography studies. The unique features of this optical imager, which primarily consists of a hand-held probe and an intensified charge coupled device detector, are its ability to: (i) image large tissue areas (5 x 10 sq. cm) in a single scan, (ii) perform simultaneous multiple point illumination and collection, thus reducing the overall imaging time; and (iii) adapt to varying tissue curvatures, from a flexible probe head design. Experimental studies are performed in the frequency domain on large slab phantoms (approximately 650 ml) using fluorescence target(s) under perfect uptake (1:0) contrast ratios, and varying target depths (1-2 cm) and X-Y locations. The effect of implementing simultaneous over sequential multiple point illumination towards 3-D tomography is experimentally demonstrated. The feasibility of 3-D optical tomography studies has been demonstrated for the first time using a hand-held based optical imager. Preliminary fluorescence-enhanced optical tomography studies are able to reconstruct 0.45 ml target(s) located at different target depths (1-2 cm). However, the depth recovery was limited as the actual target depth increased, since only reflectance measurements were acquired. Extensive tomography studies are currently carried out to determine the resolution and performance limits of the imager on flat and curved phantoms.
Ferroelectric and multiferroic domain imaging by Laser-induced photoemission microscopy
NASA Astrophysics Data System (ADS)
Hoefer, Anke; Fechner, Michael; Duncker, Klaus; Mertig, Ingrid; Widdra, Wolf
2013-03-01
The ferroelectric as well as multiferroic surface domain structures of BaTiO3(001) and BiFeO3(001) are imaged based on photoemission electron microscopy (PEEM) by femtosecond laser threshold excitation under UHV conditions. For well-prepared BaTiO3(001), three ferroelectric domain types are clearly discriminable due to work function differences. At room temperature, the surface domains resemble the known ferroelectric domain structure of the bulk. Upon heating above the Curie point of 400 K, the specific surface domain pattern remains up to 500 K. Ab-initio calculations explain this observation by a remaining tetragonal distortion of the topmost unit cells stabilized by a surface relaxation. The (001) surface of the single-phase multiferroic BiFeO3 which is ferroelectric and antiferromagnetic, shows clear ferroelectric work function contrast in PEEM. Additionally, the multiferroic domains show significant linear dichroism. The observation of a varying dichroism for different ferroelectric domains can be explained based on the coupled ferroelectric-antiferromagnetic order in BiFeO3. It demonstrates multiferroic imaging of different domain types within a single, lab-based experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, X; Gao, H; Sharp, G
Purpose: Accurate image segmentation is a crucial step during image guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: As the first step, the algorithm performs affine registration combined with multiresolution B-Spline registration, using normalized mutual information as the similarity metric, and then fuses the atlas labels using the label fusion strategy in Plastimatch. As the second step, the following feature selection strategy is proposed to extract five feature components from reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C) and stable point (S). The box feature B is novel. It describes the relative position of each point with respect to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and S is the distance of the sample point to the landmarks. Then, we adopt a random forest (RF) classifier from scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with the reference box for the brainstem, and a single-atlas strategy with the reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual contours and automated contours were calculated: the proposed MAML method improved on the multi-atlas segmentation method (MA) from 0.79 to 0.83 for the brainstem and from 0.11 to 0.52 for the optic chiasm. Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides comparable results for the brainstem and improved results for the optic chiasm compared with MA. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
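A minimal sketch of the second-step classification described above: each sample point carries the five feature components (I, D, B, C, S) and a scikit-learn random forest predicts ROI membership. Feature extraction, registration, and label fusion are assumed to happen upstream; the parameter choices are illustrative only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_voxel_classifier(features, labels, n_trees=100):
    # features: (n_points, 5) array of [I, D, B, C, S]; labels: 1 inside the ROI, 0 outside.
    clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
    clf.fit(np.asarray(features), np.asarray(labels))
    return clf

def classify_points(clf, features):
    return clf.predict(np.asarray(features))   # predicted ROI membership per sample point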
Optoelectronic holographic otoscope for measurement of nano-displacements in tympanic membranes
Hernández-Montes, Maria del Socorro; Furlong, Cosme; Rosowski, John J.; Hulli, Nesim; Harrington, Ellery; Cheng, Jeffrey Tao; Ravicz, Michael E.; Santoyo, Fernando Mendoza
2009-01-01
Current methodologies for characterizing tympanic membrane (TM) motion are usually limited to either average acoustic estimates (admittance or reflectance) or single-point mobility measurements, neither of which suffices to characterize the detailed mechanical response of the TM to sound. Furthermore, while acoustic and single-point measurements may aid in diagnosing some middle-ear disorders, they are not always useful. Measurements of the motion of the entire TM surface can provide more information than these other techniques and may be superior for diagnosing pathology. This paper presents advances in our development of a new compact optoelectronic holographic otoscope (OEHO) system for full-field-of-view characterization of nanometer scale sound-induced displacements of the surface of the TM at video rates. The OEHO system consists of a fiber optic subsystem, a compact otoscope head, and a high-speed image processing computer with advanced software for recording and processing holographic images coupled to a computer-controlled sound-stimulation and recording system. A prototype OEHO system is in use in a medical-research environment to address basic-science questions regarding TM function. The prototype provides real-time observation of sound-induced TM displacement patterns over a broad-frequency range. Representative time-averaged and stroboscopic holographic interferometry results in animals and cadaveric human samples are shown, and their potential utility discussed. PMID:19566316
Optoelectronic holographic otoscope for measurement of nano-displacements in tympanic membranes
NASA Astrophysics Data System (ADS)
Del Socorro Hernández-Montes, Maria; Furlong, Cosme; Rosowski, John J.; Hulli, Nesim; Harrington, Ellery; Cheng, Jeffrey Tao; Ravicz, Michael E.; Santoyo, Fernando Mendoza
2009-05-01
Current methodologies for characterizing tympanic membrane (TM) motion are usually limited to either average acoustic estimates (admittance or reflectance) or single-point mobility measurements, neither of which suffices to characterize the detailed mechanical response of the TM to sound. Furthermore, while acoustic and single-point measurements may aid in diagnosing some middle-ear disorders, they are not always useful. Measurements of the motion of the entire TM surface can provide more information than these other techniques and may be superior for diagnosing pathology. We present advances in our development of a new compact optoelectronic holographic otoscope (OEHO) system for full field-of-view characterization of nanometer-scale sound-induced displacements of the TM surface at video rates. The OEHO system consists of a fiber optic subsystem, a compact otoscope head, and a high-speed image processing computer with advanced software for recording and processing holographic images coupled to a computer-controlled sound-stimulation and recording system. A prototype OEHO system is in use in a medical research environment to address basic science questions regarding TM function. The prototype provides real-time observation of sound-induced TM displacement patterns over a broad frequency range. Representative time-averaged and stroboscopic holographic interferometry results in animals and human cadaver samples are shown, and their potential utility is discussed.
Solution-based single molecule imaging of surface-immobilized conjugated polymers.
Dalgarno, Paul A; Traina, Christopher A; Penedo, J Carlos; Bazan, Guillermo C; Samuel, Ifor D W
2013-05-15
The photophysical behavior of conjugated polymers used in modern optoelectronic devices is strongly influenced by their structural dynamics and conformational heterogeneity, both of which are dependent on solvent properties. Single molecule studies of these polymer systems embedded in a host matrix have proven to be very powerful to investigate the fundamental fluorescent properties. However, such studies lack the possibility of examining the relationship between conformational dynamics and photophysical response in solution, which is the phase from which films for devices are deposited. By developing a synthetic strategy to incorporate a biotin moiety as a surface attachment point at one end of a polyalkylthiophene, we immobilize it, enabling us to make the first single molecule fluorescence measurements of conjugated polymers for long periods of time in solution. We identify fluctuation patterns in the fluorescence signal that can be rationalized in terms of photobleaching and stochastic transitions to reversible dark states. Moreover, by using the advantages of solution-based imaging, we demonstrate that the addition of oxygen scavengers improves optical stability by significantly decreasing the photobleaching rates.
Tuning Charge and Correlation Effects for a Single Molecule on a Graphene Device
NASA Astrophysics Data System (ADS)
Tsai, Hsin-Zon; Wickenburg, Sebastian; Lu, Jiong; Lischner, Johannes; Omrani, Arash A.; Riss, Alexander; Karrasch, Christoph; Jung, Han Sae; Khajeh, Ramin; Wong, Dillon; Watanabe, Kenji; Taniguchi, Takashi; Zettl, Alex; Louie, Steven G.; Crommie, Michael F.
Controlling electronic devices down to the single-molecule level is a grand challenge of nanotechnology. Single molecules have been integrated into devices capable of tuning electronic response, but a drawback of these systems is that their microscopic structure remains unknown due to the inability to image molecules in the junction region. Here we present a combined STM and nc-AFM study demonstrating gate-tunable control of the charge state of individual F4TCNQ molecules at the surface of a graphene field effect transistor. This is different from previous studies in that the Fermi level of the substrate was continuously tuned across the molecular orbital energy level. Using STS we have determined the resulting energy level evolution of the LUMO, its associated vibronic modes, and the graphene Dirac point (ED). We show that the energy difference between ED and the LUMO increases as EF is moved away from ED due to electron-electron interactions that renormalize the molecular quasiparticle energy. This is attributed to gate-tunable image-charge screening in graphene and corroborated by ab initio calculations.
Matsumoto, Keiichi; Kitamura, Keishi; Mizuta, Tetsuro; Shimizu, Keiji; Murase, Kenya; Senda, Michio
2006-02-20
Transmission scanning can be successfully performed with a Cs-137 single-photon-emitting point source for three-dimensional PET imaging. This method is effective for postinjection transmission scanning because of the difference in photon energies. However, scatter contamination in the transmission data lowers the measured attenuation coefficients. The purpose of this study was to investigate the influence of object scatter on the accuracy of attenuation coefficients measured on the transmission images. We also compared the results with the conventional germanium line source method. Two different types of PET scanner, the SET-3000 G/X (Shimadzu Corp.) and ECAT EXACT HR(+) (Siemens/CTI), were used. For transmission scanning, the SET-3000 G/X used the Cs-137 point source and the ECAT HR(+) used the Ge-68/Ga-68 line source. With the SET-3000 G/X, we performed transmission measurements at two energy gate settings, the standard 600-800 keV as well as 500-800 keV. The energy gate setting of the ECAT HR(+) was 350-650 keV. To assess the effects of scattering, uniform phantoms with cross-sectional areas of 201 cm(2), 314 cm(2), 628 cm(2) (apposition of two 20 cm diameter phantoms) and 943 cm(2) (stacking of three 20 cm diameter phantoms) were scanned without emission activity. First, we evaluated the attenuation coefficients of the two different types of transmission scanning using region of interest (ROI) analysis. In addition, we evaluated the attenuation coefficients with and without segmentation for Cs-137 transmission images using the same analysis. The segmentation method was a histogram-based soft-tissue segmentation process that can also be applied to reconstructed transmission images. In the Cs-137 experiment, the maximum underestimation was 3% without segmentation, which was reduced to less than 1% with segmentation at the center of the largest phantom. In the Ge-68/Ga-68 experiment, the difference in mean attenuation coefficients was stable across all phantoms. We evaluated the accuracy of attenuation coefficients of Cs-137 single-transmission scans. The results for Cs-137 suggest that the amount of scattered photons depends on object size. Although Cs-137 single-transmission scans contained scattered photons, the attenuation coefficient error could be reduced by using the segmentation method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A. H.; Macrander, A. T.
Using the 1-BM-C beamline at the Advanced Photon Source (APS), we have performed the initial indirect x-ray imaging point-spread-function (PSF) test of a unique 88-mm diameter YAG:Ce single crystal of only 100-micron thickness. The crystal was bonded to a fiber optic plate (FOP) for mechanical support and to allow the option of FO coupling to a large format camera. The resolution of this configuration was compared to that of self-supported 25-mm diameter crystals, with and without an Al reflective coating. An upstream monochromator was used to select 17-keV x-rays from the broadband APS bending magnet source of synchrotron radiation. The upstream, adjustable Mo collimators were then used to provide a series of x-ray source transverse sizes from 200 microns down to about 15-20 microns (FWHM) at the crystal surface. The emitted scintillator radiation was in this case lens coupled to the ANDOR Neo sCMOS camera, and the indirect x-ray images were processed offline by a MATLAB-based image processing program. Based on single Gaussian peak fits to the x-ray image projected profiles, we observed a 10.5-micron PSF. This sample thus exhibited superior spatial resolution to standard P43 polycrystalline phosphors of the same thickness, which would have about a 100-micron PSF. Lastly, this single-crystal resolution combined with the 88-mm diameter makes it a candidate to support future x-ray diffraction or wafer topography experiments.
Determination of piezo-optic coefficients of crystals by means of four-point bending.
Krupych, Oleg; Savaryn, Viktoriya; Krupych, Andriy; Klymiv, Ivan; Vlokh, Rostyslav
2013-06-10
A technique developed recently for determining piezo-optic coefficients (POCs) of isotropic optical media, which represents a combination of digital imaging laser interferometry and a classical four-point bending method, is generalized and applied to a single-crystalline anisotropic material. The peculiarities of measuring procedures and data processing for the case of optically uniaxial crystals are described in detail. The capabilities of the technique are tested on the example of canonical nonlinear optical crystal LiNbO3. The high precision achieved in determination of the POCs for isotropic and anisotropic materials testifies that the technique should be both versatile and reliable.
Effective image differencing with convolutional neural networks for real-time transient hunting
NASA Astrophysics Data System (ADS)
Sedaghat, Nima; Mahabal, Ashish
2018-06-01
Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
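As a rough illustration of folding the whole subtraction pipeline into a single convolutional network, the Python/PyTorch sketch below feeds the new image and the deeper reference as two input channels to a tiny fully convolutional model that emits a per-pixel transient score. The architecture, layer sizes, and all names are illustrative assumptions, not the network described by the authors.

import torch
import torch.nn as nn

class ToyDifferencer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),      # per-pixel transient score
        )

    def forward(self, new_image, reference):
        # stack the science image and the deeper reference as two channels
        x = torch.stack([new_image, reference], dim=1)   # (N, 2, H, W)
        return torch.sigmoid(self.net(x))                # (N, 1, H, W) map

model = ToyDifferencer()
scores = model(torch.rand(4, 64, 64), torch.rand(4, 64, 64))
print(scores.shape)   # torch.Size([4, 1, 64, 64])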
Contaminant characterization on hair and fiber surfaces using imaging TOF-SIMS
NASA Astrophysics Data System (ADS)
Groenewold, Gary S.; Gresham, Garold L.; Gianotto, Anita K.; Avci, Recep
1999-02-01
Imaging time-of-flight secondary ion mass spectrometry (SIMS) was used to evaluate the detection of contaminant chemicals on the surfaces of single synthetic textile and canine hair fibers. The results of the study showed that a variety of chemical classes can be detected. Both cocaine and heroin could be easily observed as intact protonated molecules ([M + H]+) in the cation spectra acquired from textile fibers. Two organophosphates were evaluated: malathion, which is a common pesticide, and pinacolyl methyl phosphonic acid (PMPA), which is the principal degradation product of the nerve agent soman (a close relative of sarin). Malathion could be observed as (CH3O)2P(=S)S-, which is formed by thiophosphate cleavage of the intact malathion. PMPA is observed as the conjugate base ([PMPA - H]-). Surfactant chemicals found in hair care products were successfully detected on single hair fibers. Specifically, alkyl sulfates, ethoxylated alkyl sulfates, silicones, and alkylammonium compounds could be readily identified in spectra acquired from single hair fiber samples exposed to shampoo and/or conditioner. Generally, the results of the study show that imaging SIMS is applicable to single fiber analysis, for a range of adsorbed compound types. The forensic application of this instrumental approach has not been widely recognized. However, the ability of the technique to acquire specific chemical information from trace samples clearly points to applications where the need for chemical analysis is great, but the amount of sample is limited.
NASA Astrophysics Data System (ADS)
Terabe, K.; Takekawa, S.; Nakamura, M.; Kitamura, K.; Higuchi, S.; Gotoh, Y.; Gruverman, A.
2002-09-01
We have investigated the ferroelectric domain structure formed in a Sr0.61Ba0.39Nb2O6 single crystal by cooling the crystal through the Curie point. Imaging the etched surface structure using a scanning force microscope (SFM) in both the topographic mode and the piezoresponse mode revealed that a multidomain structure of nanoscale islandlike domains was formed. The islandlike domains could be inverted by applying an appropriate voltage using a conductive SFM tip. Furthermore, a nanoscale periodically inverted-domain structure was artificially fabricated using the crystal which underwent poling treatment.
Calibration and Performance of the Michelson Doppler Imager on SOHO.
NASA Astrophysics Data System (ADS)
Zayer, I.; Morrison, M.; Tarbell, T. D.; Title, A.; Wolfson, C. J.; MDI Engineering Team; Bogart, R. S.; Bush, R. I.; Hoeksema, J. T.; Duvall, T.; Sa, L. A. D.; Scherrer, P. H.; Schou, J.
1996-05-01
The Michelson Doppler Imager (MDI) instrument probes the interior of the Sun by measuring the photospheric manifestations of solar oscillations. MDI was launched in December 1995 on the Solar and Heliospheric Observatory (SOHO) and has been successfully observing the Sun since then. The instrument images the Sun on a 1024 x 1024 pixel CCD camera through a series of increasingly narrow spectral filters. The final elements, a pair of tunable Michelson interferometers, enable MDI to record filtergrams with an FWHM bandwidth of 94 mÅ with a resolution of 4 arcseconds over the whole disk. Images can also be collected in MDI's higher resolution (1.25 arcsecond) field centered about 160 arcseconds north of the equator. An extensive calibration program has verified the end-to-end performance of the instrument in flight. MDI is working very well; we present the most important calibration results and a sample of early science observations. The Image Stabilization System (ISS) maintains overall pointing to better than ca. 0.01 arcsec, while the ISS' diagnostic mode allows us to measure spectrally narrow pointing jitter down to less than 1 milli-arcsec. We have confirmed the linearity of each CCD pixel to lie within 0.5% (the FWHM of the distribution is 0.2%), and have to date not detected any contamination on the detector, which is cooled to -72 °C. The noise in a single Dopplergram is of the order of 20 m/s, and initial measurements of transverse velocities are reliable to 100 m/s. The sensitivity of magnetograms reaches 5 G in a 10 minute average (15 G in a single magnetogram). MDI's primary observable, the p-modes from full-disk medium-l data, are of very high quality out to l=300 as seen in the initial l-nu diagram. The SOI-MDI program is supported by NASA contract NAG5-3077.
Double Photon Emission Coincidence Imaging using GAGG-SiPM pixel detectors
NASA Astrophysics Data System (ADS)
Shimazoe, K.; Uenomachi, M.; Mizumachi, Y.; Takahashi, H.; Masao, Y.; Shoji, Y.; Kamada, K.; Yoshikawa, A.
2017-12-01
Single photon emission computed tomography (SPECT) is a useful medical imaging modality based on the detection of single photons from radioactive tracers, such as 99Tc and 111In; however, methods to further increase image contrast are still under investigation. A novel method (Double Photon Emission CT / DPECT) using coincidence detection of two cascade gamma-rays from 111In is proposed and characterized in this study. 111In, which is well-known and commonly used as a SPECT tracer, emits two cascade photons of 171 keV and 245 keV with a short delay of approximately 85 ns. Coincidence detection of the two gamma-rays theoretically localizes the source to a single point, rather than to a line as in single photon detection, and drastically increases the signal-to-noise ratio. A fabricated pixel detector for this purpose consists of an 8 × 8 array of high-resolution type 1.5 mm thickness Ce:GAGG (3.9% @ 662 keV, 6.63 g/cm3, C&A Co. Ce:Gd3Ga2.7Al2.3O12 2.5 × 2.5 × 1.5 mm3) crystals coupled to a 3 mm pixel SiPM array (Hamamatsu MPPC S13361-2050NS-08). The signal from each pixel is processed and read out using a time-over-threshold (TOT) based parallel processing circuit to extract energy and timing information. Coincidences are detected by an FPGA running at 400 MHz. Two pixel detectors coupled to parallel-hole collimators are positioned 90° apart to determine the position, and coincidence events (time window = 1 μs) are detected and used to form a back-projection image. The basic principle of DPECT is characterized, including detection efficiency and timing resolution.
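The coincidence step described above amounts to pairing 171 keV and 245 keV time stamps that fall within the 1 μs window. The Python sketch below illustrates that pairing on toy data; the event-list layout, the simulated 85 ns cascade delay, and all variable names are assumptions for illustration, not the FPGA implementation used in the study.

import numpy as np

def find_coincidences(t_171, t_245, window=1e-6):
    # pair sorted time stamps (seconds) of the 171 keV and 245 keV photons
    pairs, j = [], 0
    for i, t in enumerate(t_171):
        while j < len(t_245) and t_245[j] < t - window:
            j += 1                      # skip 245 keV events that are too early
        if j < len(t_245) and abs(t_245[j] - t) <= window:
            pairs.append((i, j))        # coincidence within the time window
            j += 1                      # use each event at most once
    return pairs

rng = np.random.default_rng(0)
t_a = np.sort(rng.uniform(0.0, 1.0, 5000))                         # toy 171 keV stamps
t_b = np.sort(t_a[::3] + rng.normal(85e-9, 20e-9, t_a[::3].size))  # cascade partners
print(len(find_coincidences(t_a, t_b)))                            # ~1/3 of the events pair up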
NASA Astrophysics Data System (ADS)
Leberl, F.; Gruber, M.; Ponticelli, M.; Wiechert, A.
2012-07-01
The UltraCam project created a novel Large Format Digital Aerial Camera. It was inspired by the ISPRS Congress 2000 in Amsterdam. The search for a promising imaging idea succeeded in May 2001, defining a tiling approach with multiple lenses and multiple area CCD arrays to assemble a seamless and geometrically stable monolithic photogrammetric aerial large format image. First resources were spent on the project in September 2001. The initial UltraCam-D was announced and demonstrated in May 2003. By now the imaging principle has resulted in a 4th generation UltraCam Eagle, increasing the original swath width from 11,500 pixels to beyond 20,000. Inspired by the original imaging principle, alternatives have been investigated, and the UltraCam-G carries the swath width even further, namely to a frame image with nearly 30,000 pixels, however, with a modified tiling concept and optimized for orthophoto production. We explain the advent of digital aerial large format imaging and how it benefits from improvements in computing technology to cope with data flows at a rate of 3 Gigabits per second and a need to deal with Terabytes of imagery within a single aerial sortie. We also address the many benefits of a transition to a fully digital workflow with a paradigm shift away from minimizing a project's number of aerial photographs and towards maximizing the automation of photogrammetric workflows by means of high redundancy imaging strategies. The instant gratification from near-real-time aerial triangulations and dense image matching has led to a reassessment of the value of photogrammetric point clouds to successfully compete with direct point cloud measurements by LiDAR.
Flagella and motility behaviour of square bacteria.
Alam, M; Claviez, M; Oesterhelt, D; Kessel, M
1984-01-01
Square bacteria are shown to have right-handed helical (RH) flagella. They swim forward by clockwise (CW), and backwards by counterclockwise (CCW), rotation of their flagella. They are propelled by several or single filaments arising at several or single points on the cell surface. When there are several filaments, a stable bundle is formed that does not fly apart during the change from clockwise to counterclockwise rotation or vice versa. In addition to the flagella attached to the cells, large amounts of detached flagella, aggregated into thick super-flagella, can be observed at all phases of growth. PMID:6526006
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises
Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoration of an image contaminated by a mixture of Gaussian and impulse noise. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noise, as well as for single impulse noise and for single Gaussian noise. PMID:28692667
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises.
Jin, Qiyu; Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoration of an image contaminated by a mixture of Gaussian and impulse noise. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noise, as well as for single impulse noise and for single Gaussian noise.
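For reference, the classical ROAD statistic that ROADGI refines can be computed as the sum of the few smallest absolute differences between a pixel and its 3x3 neighbours; impulse-corrupted pixels stand out with large values. The Python/NumPy sketch below assumes the usual choice of the four smallest differences; the ROADGI modification and the weight-optimization step of OWMF are not reproduced here.

import numpy as np

def road(image, m=4):
    # Rank-Ordered Absolute Differences for every interior pixel (3x3 window)
    img = image.astype(float)
    H, W = img.shape
    diffs = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
            diffs.append(np.abs(img[1:H - 1, 1:W - 1] - neighbour))
    diffs = np.sort(np.stack(diffs, axis=0), axis=0)
    return diffs[:m].sum(axis=0)          # large values flag impulse-like pixels

noisy = np.random.rand(64, 64)
noisy[10, 10] = 10.0                      # one impulse outlier
print(road(noisy)[9, 9], road(noisy)[30, 30])   # outlier vs ordinary pixel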
Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone
NASA Astrophysics Data System (ADS)
Xia, G.; Hu, C.
2018-04-01
The digitization of cultural heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction from a complete point cloud and high-resolution images requires the matching of image and point cloud, the acquisition of corresponding feature points, data registration, etc. However, establishing the one-to-one correspondence between an image and its corresponding point cloud currently depends on an inefficient manual search. Effective classification and management of large numbers of images, and the matching of large-scale images to their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large-scale images and terrestrial LiDAR based on the app synergy of a mobile phone. Firstly, we develop an Android app to take pictures and record the related classification information. Secondly, all images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. Based on the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of each image with its corresponding LiDAR point cloud is realized. Finally, the mapping relationship between global image, local image and intensity image is established from corresponding feature points, so that a data structure linking the global image, the local images within it, and their corresponding point clouds can be built to support visualization, management and query of the images.
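The abstract does not name the matching algorithm, so the sketch below shows one plausible way to register a photograph to the point-cloud reflection-intensity image using ORB features and a RANSAC homography (Python/OpenCV). The function name and parameter values are illustrative assumptions, not the authors' implementation.

import cv2
import numpy as np

def match_photo_to_intensity_image(photo_gray, intensity_gray):
    # detect and describe features in both images
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(photo_gray, None)
    kp2, des2 = orb.detectAndCompute(intensity_gray, None)
    # brute-force Hamming matching with cross checking
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # robust homography mapping photo pixels into the intensity-image frame
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H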
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh M.; Meir, Rinat; Zalevsky, Zeev
2017-02-01
Optical sectioning microscopy can provide highly detailed three-dimensional (3D) images of biological samples. However, it requires the acquisition of many images per volume, is therefore time consuming, and may not be suitable for live-cell 3D imaging. We propose the use of a modified Gerchberg-Saxton phase retrieval algorithm to enable full 3D imaging of a gold-nanoparticle-tagged sample using only two images. The reconstructed field is free-space propagated to all other focus planes in post processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because the phase retrieval is applied to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. The proposed concept is then further extended to tracking of single fluorescent particles within a 3D cellular environment based on image processing algorithms that significantly increase the localization accuracy of the 3D point spread function with respect to regular Gaussian fitting. All proposed concepts are validated both on simulated data and experimentally.
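For orientation, the core of the classical two-plane Gerchberg-Saxton iteration that the modified algorithm builds on is sketched below in Python/NumPy. The authors' modification (free-space propagation between two defocused nanoparticle images) is replaced here by the textbook object-plane/Fourier-plane version; the iteration count and random initialization are arbitrary assumptions.

import numpy as np

def gerchberg_saxton(amp_object, amp_fourier, iterations=200):
    # recover a phase consistent with measured amplitudes in two planes
    field = amp_object * np.exp(1j * 2 * np.pi * np.random.rand(*amp_object.shape))
    for _ in range(iterations):
        F = np.fft.fft2(field)
        F = amp_fourier * np.exp(1j * np.angle(F))     # keep phase, impose amplitude
        field = np.fft.ifft2(F)
        field = amp_object * np.exp(1j * np.angle(field))
    return np.angle(field)                              # retrieved object-plane phase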
Hickling, Susannah; Lei, Hao; Hobson, Maritza; Léger, Pierre; Wang, Xueding; El Naqa, Issam
2017-02-01
The aim of this work was to experimentally demonstrate the feasibility of x-ray acoustic computed tomography (XACT) as a dosimetry tool in a clinical radiotherapy environment. The acoustic waves induced following a single pulse of linear accelerator irradiation in a water tank were detected with an immersion ultrasound transducer. By rotating the collimator and keeping the transducer stationary, acoustic signals at varying angles surrounding the field were detected and reconstructed to form an XACT image. Simulated XACT images were obtained using a previously developed simulation workflow. Profiles extracted from experimental and simulated XACT images were compared to profiles measured with an ion chamber. A variety of radiation field sizes and shapes were investigated. XACT images resembling the geometry of the delivered radiation field were obtained for fields ranging from simple squares to more complex shapes. When comparing profiles extracted from simulated and experimental XACT images of a 4 cm × 4 cm field, 97% of points were found to pass a 3%/3 mm gamma test. Agreement between simulated and experimental XACT images worsened when comparing fields with fine details. Profiles extracted from experimental XACT images were compared to profiles obtained through clinical ion chamber measurements, confirming that the intensity of XACT images is related to deposited radiation dose. Seventy-seven percent of the points in a profile extracted from an experimental XACT image of a 4 cm × 4 cm field passed a 7%/4 mm gamma test when compared to an ion chamber measured profile. In a complicated puzzle-piece shaped field, 86% of the points in an XACT extracted profile passed a 7%/4 mm gamma test. XACT images with intensity related to the spatial distribution of deposited dose in a water tank were formed for a variety of field sizes and shapes. XACT has the potential to be a useful tool for absolute, relative and in vivo dosimetry. © 2016 American Association of Physicists in Medicine.
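The gamma tests quoted above (3%/3 mm, 7%/4 mm) compare a measured profile against a reference by searching, for every evaluated point, for a nearby reference point that agrees within the dose and distance tolerances. A minimal 1-D version is sketched below in Python; the global normalisation, the brute-force search, and the toy profiles are assumptions for illustration only.

import numpy as np

def gamma_pass_rate(x_eval, d_eval, x_ref, d_ref, dose_tol=0.03, dist_tol=3.0):
    # percentage of evaluated points with gamma <= 1 (global normalisation)
    d_norm = dose_tol * d_ref.max()
    gammas = []
    for xe, de in zip(x_eval, d_eval):
        g2 = ((d_ref - de) / d_norm) ** 2 + ((x_ref - xe) / dist_tol) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-40, 40, 161)                         # position in mm
ref = np.exp(-(x / 25.0) ** 8)                        # toy flat-topped field profile
meas = ref * (1 + 0.02 * np.random.randn(x.size))     # noisy "measured" profile
print(gamma_pass_rate(x, meas, x, ref))               # percent of points passing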
Zghaib, Tarek; Keramati, Ali; Chrispin, Jonathan; Huang, Dong; Balouch, Muhammad A; Ciuffo, Luisa; Berger, Ronald D; Marine, Joseph E; Ashikaga, Hiroshi; Calkins, Hugh; Nazarian, Saman; Spragg, David D
2018-01-01
Bipolar voltage mapping, as part of atrial fibrillation (AF) ablation, is traditionally performed in a point-by-point (PBP) approach using single-tip ablation catheters. Alternative techniques for fibrosis delineation include fast anatomical mapping (FAM) with multi-electrode circular catheters, and late gadolinium-enhanced magnetic resonance imaging (LGE-MRI). The correlation between PBP, FAM, and LGE-MRI fibrosis assessment is unknown. In this study, we examined AF substrate using different modalities (PBP, FAM, and LGE-MRI mapping) in patients presenting for an AF ablation. LGE-MRI was performed pre-ablation in 26 patients (73% males, age 63 ± 8 years). Local image-intensity ratio (IIR) was used to normalize myocardial intensities. PBP and FAM voltage maps were acquired, in sinus rhythm, prior to ablation and co-registered to LGE-MRI. Mean bipolar voltage for all 19,087 FAM voltage points was 0.88 ± 1.27 mV and average IIR was 1.08 ± 0.18. In an adjusted mixed-effects model, each unit increase in local IIR was associated with a 57% decrease in bipolar voltage (p < 0.0001). An IIR of >0.74 corresponded to a bipolar voltage of <0.5 mV. A total of 1554 PBP mapping points were matched to the nearest FAM point. In an adjusted mixed-effects model, log-FAM bipolar voltage was significantly associated with log-PBP bipolar voltage (β = 0.36, p < 0.0001). At low voltages, the FAM-mapping distribution was shifted to the left compared to PBP mapping; at intermediate voltages, FAM and PBP voltages were overlapping; and at high voltages, FAM exceeded PBP voltages. LGE-MRI, FAM, and PBP mapping show good correlation in delineating the electro-anatomical AF substrate. Each approach has fundamental technical characteristics, the awareness of which allows proper assessment of atrial fibrosis.
NASA Astrophysics Data System (ADS)
Zhu, Likai; Radeloff, Volker C.; Ives, Anthony R.
2017-06-01
Mapping crop types is of great importance for assessing agricultural production, land-use patterns, and the environmental effects of agriculture. Indeed, both the radiometric and spatial resolution of Landsat's sensors are optimized for cropland monitoring. However, accurate mapping of crop types requires frequent cloud-free images during the growing season, which are often not available, and this raises the question of whether Landsat data can be combined with data from other satellites. Here, our goal is to evaluate to what degree fusing Landsat with MODIS Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) data can improve crop-type classification. Choosing either one or two images from all cloud-free Landsat observations available for the Arlington Agricultural Research Station area in Wisconsin from 2010 to 2014, we generated 87 combinations of images, and used each combination as input into the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to predict Landsat-like images at the nominal dates of each 8-day MODIS NBAR product. Both the original Landsat and STARFM-predicted images were then classified with a support vector machine (SVM), and we compared the classification errors of three scenarios: 1) classifying the one or two original Landsat images of each combination only, 2) classifying the one or two original Landsat images plus all STARFM-predicted images, and 3) classifying the one or two original Landsat images together with STARFM-predicted images for key dates. Our results indicated that using two Landsat images as the input to STARFM did not significantly improve the STARFM predictions compared to using only one, and predictions using Landsat images between July and August as input were most accurate. Including all STARFM-predicted images together with the Landsat images significantly increased average classification error by 4 percentage points (from 21% to 25%) compared to using only Landsat images. However, incorporating only STARFM-predicted images for key dates decreased average classification error by 2 percentage points (from 21% to 19%) compared to using only Landsat images. In particular, if only a single Landsat image was available, adding STARFM predictions for key dates significantly decreased the average classification error by 4 percentage points from 30% to 26% (p < 0.05). We conclude that adding STARFM-predicted images can be effective for improving crop-type classification when only limited Landsat observations are available, but carefully selecting images from a full set of STARFM predictions is crucial. We developed an approach to identify the optimal subsets of all STARFM predictions, which gives an alternative method of feature selection for future research.
Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
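A minimal sketch of the photon event distribution sampling idea: instead of binning photons into pixels, each detected photon position contributes a small probability density, and the densities are summed to form the image. The fixed-width Gaussian used below stands in for the intensity-related uncertainty function described in the abstract, and all names and sizes are illustrative assumptions.

import numpy as np

def peds_image(photon_xy, shape=(128, 128), sigma=1.5):
    # sum one small Gaussian per detected photon instead of binning into pixels
    img = np.zeros(shape)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for x, y in photon_xy:
        img += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2.0 * sigma ** 2))
    return img / (2.0 * np.pi * sigma ** 2)

photons = np.random.normal(64.0, 3.0, size=(200, 2))   # ~200 photons from one emitter
image = peds_image(photons)
print(np.unravel_index(image.argmax(), image.shape))   # peak near (64, 64)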
In vivo microscopy of human leucocytes(Conference Presentation)
NASA Astrophysics Data System (ADS)
Winer, Matan; Yeheskely-Hayon, Daniella; Zeidan, Adel; Yelin, Dvir
2017-02-01
White blood cell (WBC) analysis is an important part of the complete blood count, providing a good indication of the patient's immune system status. The most common types of WBCs are the neutrophils and lymphocytes, which comprise approximately 60% and 30% of the total WBC count, respectively; differentiating between these cells at the point of care would assist in accurate diagnosis of the possible source of infection (viral or bacterial) and in effective prescription of antibiotics. In this work, we demonstrate the potential of spectrally encoded flow cytometry (SEFC) to non-invasively image WBCs in human patients, allowing morphology characterization of the main types of WBCs. The optical setup includes a broadband light source whose light is diffracted and focused onto a single transverse line within the cross section of a small blood vessel at the inner patient lip. Light backscattered from the tissue was measured by a high-speed spectrometer, forming a two-dimensional reflectance confocal image of the flowing cells. By imaging at different depths into vessels of different diameters, we determine optimal imaging conditions (i.e. imaging geometry, speed and depth) for counting the total number of WBCs and for differentiating between their main types. The presented technology could serve for analyzing the immune system status at the point of care, and for studying the morphological and dynamical characteristics of these cells in vivo.
Use of routine clinical multimodality imaging in a rabbit model of osteoarthritis--part I.
Bouchgua, M; Alexander, K; d'Anjou, M André; Girard, C A; Carmel, E Norman; Beauchamp, G; Richard, H; Laverty, S
2009-02-01
To evaluate in vivo the evolution of osteoarthritis (OA) lesions temporally in a rabbit model of OA with clinically available imaging modalities: computed radiography (CR), helical single-slice computed tomography (CT), and 1.5 tesla (T) magnetic resonance imaging (MRI). Imaging was performed on knees of anesthetized rabbits [10 anterior cruciate ligament transection (ACLT) and contralateral sham joints and six control rabbits] at baseline and at intervals up to 12 weeks post-surgery. Osteophytosis, subchondral bone sclerosis, bone marrow lesions (BMLs), femoropatellar effusion and articular cartilage were assessed. CT had the highest sensitivity (90%) and specificity (91%) to detect osteophytes. A significant increase in total joint osteophyte score occurred at all time-points post-operatively in the ACLT group alone. BMLs were identified and occurred most commonly in the lateral femoral condyle of the ACLT joints and were not identified in the tibia. A significant increase in joint effusion was present in the ACLT joints until 8 weeks after surgery. Bone sclerosis or cartilage defects were not reliably assessed with the selected imaging modalities. Combined, clinically available CT and 1.5 T MRI allowed the assessment of most of the characteristic lesions of OA and at early time-points in the development of the disease. However, the selected 1.5 T MRI sequences and acquisition times did not permit the detection of cartilage lesions in this rabbit OA model.
Hand-held optical imager (Gen-2): improved instrumentation and target detectability
Gonzalez, Jean; DeCerce, Joseph; Erickson, Sarah J.; Martinez, Sergio L.; Nunez, Annie; Roman, Manuela; Traub, Barbara; Flores, Cecilia A.; Roberts, Seigbeh M.; Hernandez, Estrella; Aguirre, Wenceslao; Kiszonas, Richard
2012-01-01
Hand-held optical imagers have been developed by various researchers for reflectance-based spectroscopic imaging of breast cancer. Recently, a Gen-1 handheld optical imager was developed with capabilities to perform two-dimensional (2-D) spectroscopic as well as three-dimensional (3-D) tomographic imaging studies. However, the imager was bulky, had poor surface contact (∼30%) along curved tissues, and had limited sensitivity to detect targets consistently. Herein, a Gen-2 hand-held optical imager that overcame the above limitations of the Gen-1 imager has been developed and the instrumentation described. The Gen-2 hand-held imager is less bulky, portable, and has improved surface contact (∼86%) on curved tissues. Additionally, the forked probe head design is capable of simultaneous bilateral reflectance imaging of both breast tissues, and also transillumination imaging of a single breast tissue. Experimental studies were performed on tissue phantoms to demonstrate the improved sensitivity in detecting targets using the Gen-2 imager. The improved instrumentation of the Gen-2 imager allowed detection of targets independent of their location with respect to the illumination points, unlike in the Gen-1 imager. The developed imager has potential for future clinical breast imaging with enhanced sensitivity, via both reflectance and transillumination imaging. PMID:23224163
Differential Optical Synthetic Aperture Radar
Stappaerts, Eddy A.
2005-04-12
A new differential technique for forming optical images using a synthetic aperture is introduced. This differential technique utilizes a single aperture to obtain unique (N) phases that can be processed to produce a synthetic aperture image at points along a trajectory. This is accomplished by dividing the aperture into two equal "subapertures", each having a width that is less than the actual aperture, along the direction of flight. As the platform flies along a given trajectory, a source illuminates objects and the two subapertures are configured to collect return signals. The technique of the invention is designed to cancel common-mode errors, trajectory deviations from a straight line, and laser phase noise to provide the set of resultant (N) phases that can produce an image having a spatial resolution corresponding to a synthetic aperture.
Positron Emission Mammography with Multiple Angle Acquisition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mark F. Smith; Stan Majewski; Raymond R. Raylman
2002-11-01
Positron emission mammography (PEM) of F-18 fluorodeoxyglucose (FDG) uptake in breast tumors with dedicated detectors typically has been accomplished with two planar detectors in a fixed position with the breast under compression. The potential use of PEM imaging at two detector positions to guide stereotactic breast biopsy has motivated us to use PEM coincidence data acquired at two or more detector positions together in a single image reconstruction. Multiple angle PEM acquisition and iterative image reconstruction were investigated using point source and compressed breast phantom acquisitions with 5, 9, 12 and 15 mm diameter spheres and a simulated tumor:background activity concentration ratio of 6:1. Image reconstruction was performed with an iterative MLEM algorithm that used coincidence events between any two detector pixels on opposed detector heads at each detector position. The present study compared two acquisition protocols: 2 angle acquisition with detector angular positions of -15 and +15 degrees and 11 angle acquisition with detector positions spaced at 3 degree increments over the range -15 to +15 degrees. Three-dimensional image resolution was assessed for the point source acquisitions, and contrast and signal-to-noise metrics were evaluated for the compressed breast phantom with different simulated tumor sizes. Radial and tangential resolutions were similar for the two protocols, while normal resolution was better for the 2 angle acquisition. Analysis is complicated by the asymmetric point spread functions. Signal-to-noise vs. contrast tradeoffs were better for 11 angle acquisition for the smallest visible 9 mm sphere, while tradeoff results were mixed for the larger and more easily visible 12 mm and 15 mm diameter spheres. Additional study is needed to better understand the performance of limited angle tomography for PEM. PEM tomography experiments with complete angular sampling are planned.
Application of whole slide image markup and annotation for pathologist knowledge capture.
Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H
2013-01-01
The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μ to less than 4 μ in the x-axis and from 17 μ to 6 μ in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
Application of whole slide image markup and annotation for pathologist knowledge capture
Campbell, Walter S.; Foster, Kirk W.; Hinrichs, Steven H.
2013-01-01
Objective: The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Methods: Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Results: Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μ to less than 4 μ in the x-axis and from 17 μ to 6 μ in the y-axis (P < 0.025). Conclusion: This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use. PMID:23599902
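The trilateration step described above recovers an annotation's position in a new scan of the same slide from its stored distances to fixed reference points. A 2-D sketch is given below in Python/NumPy; the reference-point coordinates, distances, and function names are illustrative assumptions, not the vendor implementation.

import numpy as np

def trilaterate(p1, p2, p3, d1, d2, d3):
    # solve for (x, y) given distances to three non-collinear reference points
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    # subtracting the circle equations pairwise gives a linear system
    A = 2 * np.array([p2 - p1, p3 - p1], dtype=float)
    b = np.array([
        d1**2 - d2**2 + p2.dot(p2) - p1.dot(p1),
        d1**2 - d3**2 + p3.dot(p3) - p1.dot(p1),
    ])
    return np.linalg.solve(A, b)

# annotation 5 units right and 3 up from the first fiducial point
print(trilaterate((0, 0), (100, 0), (0, 100),
                  np.hypot(5, 3), np.hypot(95, 3), np.hypot(5, 97)))  # ~[5, 3]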
NASA Astrophysics Data System (ADS)
Rau, U.; Bhatnagar, S.; Owen, F. N.
2016-11-01
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
Photonic Doppler velocimetry lens array probe incorporating stereo imaging
Malone, Robert M.; Kaufman, Morris I.
2015-09-01
A probe including a multiple lens array is disclosed to measure velocity distribution of a moving surface along many lines of sight. Laser light, directed to the moving surface is reflected back from the surface and is Doppler shifted, collected into the array, and then directed to detection equipment through optic fibers. The received light is mixed with reference laser light and using photonic Doppler velocimetry, a continuous time record of the surface movement is obtained. An array of single-mode optical fibers provides an optic signal to the multiple lens array. Numerous fibers in a fiber array project numerous rays to establish many measurement points at numerous different locations. One or more lens groups may be replaced with imaging lenses so a stereo image of the moving surface can be recorded. Imaging a portion of the surface during initial travel can determine whether the surface is breaking up.
Local strain and damage mapping in single trabeculae during three-point bending tests
Jungmann, R.; Szabo, M.E.; Schitter, G.; Tang, Raymond Yue-Sing; Vashishth, D.; Hansma, P.K.; Thurner, P.J.
2012-01-01
The use of bone mineral density as a surrogate to diagnose bone fracture risk in individuals is of limited value. However, there is growing evidence that information on trabecular microarchitecture can improve the assessment of fracture risk. One current strategy is to exploit finite element analysis (FEA) applied to 3D image data of several mm-sized trabecular bone structures obtained from non-invasive imaging modalities for the prediction of apparent mechanical properties. However, there is a lack of FE damage models, based on solid experimental facts, which are needed to validate such approaches and to provide criteria marking elastic–plastic deformation transitions as well as microdamage initiation and accumulation. In this communication, we present a strategy that could elegantly lead to future damage models for FEA: direct measurements of local strains involved in microdamage initiation and plastic deformation in single trabeculae. We use digital image correlation to link stress whitening in bone, reported to be correlated to microdamage, to quantitative local strain values. Our results show that the whitening zones, i.e. damage formation, in the presented loading case of a three-point bending test correlate best with areas of elevated tensile strains oriented parallel to the long axis of the samples. The average local strains along this axis were determined to be (1.6 ± 0.9)% at whitening onset and (12 ± 4)% just prior to failure. Overall, our data suggest that damage initiation in trabecular bone is asymmetric in tension and compression, with failure originating and propagating over a large range of tensile strains. PMID:21396601
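Digital image correlation of the kind used here tracks a small subset of the reference image into the deformed image and reads the local displacement from the best-matching offset; strains then follow from the gradient of the displacement field. The integer-pixel sketch below (Python/NumPy) is a toy version under those assumptions; real DIC adds sub-pixel interpolation, and the window sizes and names are illustrative.

import numpy as np

def patch_displacement(ref, cur, y, x, half=10, search=5):
    # integer-pixel displacement of the subset centred at (y, x) via NCC search
    tpl = ref[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    tpl = (tpl - tpl.mean()) / tpl.std()
    best, disp = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[y + dy - half:y + dy + half + 1,
                      x + dx - half:x + dx + half + 1].astype(float)
            win = (win - win.mean()) / win.std()
            score = (tpl * win).mean()        # normalised cross-correlation
            if score > best:
                best, disp = score, (dy, dx)
    return disp

ref = np.random.rand(100, 100)
cur = np.roll(ref, 2, axis=1)                 # rigid 2-pixel shift along x
print(patch_displacement(ref, cur, 50, 50))   # (0, 2); strain follows from gradients of such shifts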
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Kuan-Hao; Hu, Lingzhi; Traughber, Melanie
Purpose: MR-based pseudo-CT has an important role in MR-based radiation therapy planning and PET attenuation correction. The purpose of this study is to establish a clinically feasible approach, including image acquisition, correction, and CT formation, for pseudo-CT generation of the brain using a single-acquisition, undersampled ultrashort echo time (UTE)-mDixon pulse sequence. Methods: Nine patients were recruited for this study. For each patient, a 190-s, undersampled, single acquisition UTE-mDixon sequence of the brain was acquired (TE = 0.1, 1.5, and 2.8 ms). A novel method of retrospective trajectory correction of the free induction decay (FID) signal was performed based on point-spread functions of three external MR markers. Two-point Dixon images were reconstructed using the first and second echo data (TE = 1.5 and 2.8 ms). R2* images (1/T2*) were then estimated and were used to provide bone information. Three image features, i.e., Dixon-fat, Dixon-water, and R2*, were used for unsupervised clustering. Five tissue clusters, i.e., air, brain, fat, fluid, and bone, were estimated using the fuzzy c-means (FCM) algorithm. A two-step, automatic tissue-assignment approach was proposed and designed according to the prior information of the given feature space. Pseudo-CTs were generated by a voxelwise linear combination of the membership functions of the FCM. A low-dose CT was acquired for each patient and was used as the gold standard for comparison. Results: The contrast and sharpness of the FID images were improved after trajectory correction was applied. The mean of the estimated trajectory delay was 0.774 μs (max: 1.350 μs; min: 0.180 μs). The FCM-estimated centroids of different tissue types showed a distinguishable pattern for different tissues, and significant differences were found between the centroid locations of different tissue types. Pseudo-CT can provide additional skull detail and has low bias and absolute error of estimated CT numbers of voxels (−22 ± 29 HU and 130 ± 16 HU) when compared to low-dose CT. Conclusions: The MR features generated by the proposed acquisition, correction, and processing methods may provide representative clustering information and could thus be used for clinical pseudo-CT generation.
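A compact sketch of the clustering and combination step is given below: fuzzy c-means memberships are computed from the three image features, and the pseudo-CT is formed as a voxelwise linear combination of the memberships. The feature values, the five representative CT numbers, and the omission of the paper's two-step tissue assignment are all simplifying assumptions for illustration.

import numpy as np

def fuzzy_cmeans(X, c=5, m=2.0, iters=100, seed=0):
    # X: (n_voxels, n_features); returns membership matrix U of shape (n_voxels, c)
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return U

features = np.random.rand(1000, 3)                 # toy Dixon-fat, Dixon-water, R2* features
U = fuzzy_cmeans(features)
hu_per_cluster = np.array([-1000.0, 40.0, -90.0, 10.0, 800.0])  # placeholder HU: air, brain, fat, fluid, bone
pseudo_ct = U @ hu_per_cluster                     # voxelwise linear combination of memberships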
NASA Astrophysics Data System (ADS)
De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo
2009-02-01
Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Detection of anatomical landmarks and bone surfaces is needed both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Surface point acquisition increases the invasiveness of the intervention and can be influenced by the interposed soft tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non-invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the need for US image segmentation, offer real-time signal processing, and require less invasive equipment. The system consists of an optically tracked single-transducer US probe, a pulser/receiver, an FPGA-based board responsible for logic control command generation and real-time signal processing, and three custom-made boards (signal acquisition, blanking and synchronization). We propose a new calibration method for the US system. Experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed from MR and CT image acquisitions of the phantom. Point acquisition on the bone surface with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
Probabilistic model for quick detection of dissimilar binary images
NASA Astrophysics Data System (ADS)
Mustafa, Adnan A. Y.
2015-09-01
We present a quick method to detect dissimilar binary images. The method is based on a "probabilistic matching model" for image matching. The matching model is used to predict the probability of occurrence of distinct-dissimilar image pairs (completely different images) when matching one image to another. Based on this model, distinct-dissimilar images can be detected by matching only a few points between two images with high confidence, namely 11 points for a 99.9% successful detection rate. For image pairs that are dissimilar but not distinct-dissimilar, more points need to be mapped. The number of points required to attain a certain successful detection rate or confidence depends on the amount of similarity between the compared images. As this similarity increases, more points are required. For example, images that differ by 1% can be detected by mapping fewer than 70 points on average. More importantly, the model is image size invariant; so, images of any sizes will produce high confidence levels with a limited number of matched points. As a result, this method does not suffer from the image size handicap that impedes current methods. We report on extensive tests conducted on real images of different sizes.
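A back-of-the-envelope version of the detection argument, assuming each sampled point of a dissimilar pair mismatches independently with probability d (the fraction of differing pixels): the number of points needed for a given confidence follows from requiring (1 - d)^n to drop below 1 - confidence. This crude independence model roughly reproduces the ~11 points quoted for distinct-dissimilar images, but it is not the paper's full probabilistic matching model, which reaches smaller average counts for nearly similar images.

import math

def points_needed(d, confidence=0.999):
    # smallest n with P(no mismatch in n samples) = (1 - d)**n <= 1 - confidence
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - d))

print(points_needed(0.5))    # distinct-dissimilar images: about 10-11 points
print(points_needed(0.01))   # 1% difference: several hundred points under this crude model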
Does touch inhibit visual imagery? A case study on acquired blindness.
von Trott Zu Solz, Jana; Paolini, Marco; Silveira, Sarita
2017-06-01
In a single-case study of acquired blindness, differential brain activation patterns for visual imagery of familiar objects with and without tactile exploration as well as of tactilely explored unfamiliar objects were observed. Results provide new insight into retrieval of visual images from episodic memory and point toward a potential tactile inhibition of visual imagery. © 2017 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
NASA Astrophysics Data System (ADS)
Reinhart, Anna Merle; Spindeldreier, Claudia Katharina; Jakubek, Jan; Martišíková, Mária
2017-06-01
Carbon ion beam radiotherapy enables a very localised dose deposition. However, even small changes in the patient geometry or positioning errors can significantly distort the dose distribution. A live, non-invasive monitoring system of the beam delivery within the patient is therefore highly desirable, and could improve patient treatment. We present a novel three-dimensional method for imaging the beam in the irradiated object, exploiting the measured tracks of single secondary ions emerging under irradiation. The secondary particle tracks are detected with a TimePix stack, a set of parallel pixelated semiconductor detectors. We developed a three-dimensional reconstruction algorithm based on maximum likelihood expectation maximization. We demonstrate the applicability of the new method in the irradiation of a cylindrical PMMA phantom of human head size with a carbon ion pencil beam of 226 MeV u-1. The beam image in the phantom is reconstructed from a set of nine discrete detector positions between -80° and 50° from the beam axis. Furthermore, we demonstrate the potential to visualize inhomogeneities by irradiating a PMMA phantom with an air gap as well as bone and adipose tissue surrogate inserts. We successfully reconstructed a three-dimensional image of the treatment beam in the phantom from single secondary ion tracks. The beam image corresponds well to the beam direction and energy. In addition, cylindrical inhomogeneities with a diameter of 2.85 cm and density differences down to 0.3 g cm-3 to the surrounding material are clearly visualized. This novel three-dimensional method to image a therapeutic carbon ion beam in the irradiated object does not interfere with the treatment and requires knowledge only of single secondary ion tracks. Even with detectors with only a small angular coverage, the three-dimensional reconstruction of the fragmentation points presented in this work was found to be feasible.
Reinhart, Anna Merle; Spindeldreier, Claudia Katharina; Jakubek, Jan; Martišíková, Mária
2017-06-21
Carbon ion beam radiotherapy enables a very localised dose deposition. However, even small changes in the patient geometry or positioning errors can significantly distort the dose distribution. A live, non-invasive monitoring system of the beam delivery within the patient is therefore highly desirable, and could improve patient treatment. We present a novel three-dimensional method for imaging the beam in the irradiated object, exploiting the measured tracks of single secondary ions emerging under irradiation. The secondary particle tracks are detected with a TimePix stack, a set of parallel pixelated semiconductor detectors. We developed a three-dimensional reconstruction algorithm based on maximum likelihood expectation maximization. We demonstrate the applicability of the new method in the irradiation of a cylindrical PMMA phantom of human head size with a carbon ion pencil beam of 226 MeV u-1. The beam image in the phantom is reconstructed from a set of nine discrete detector positions between -80° and 50° from the beam axis. Furthermore, we demonstrate the potential to visualize inhomogeneities by irradiating a PMMA phantom with an air gap as well as bone and adipose tissue surrogate inserts. We successfully reconstructed a three-dimensional image of the treatment beam in the phantom from single secondary ion tracks. The beam image corresponds well to the beam direction and energy. In addition, cylindrical inhomogeneities with a diameter of 2.85 cm and density differences down to 0.3 g cm-3 to the surrounding material are clearly visualized. This novel three-dimensional method to image a therapeutic carbon ion beam in the irradiated object does not interfere with the treatment and requires knowledge only of single secondary ion tracks. Even with detectors with only a small angular coverage, the three-dimensional reconstruction of the fragmentation points presented in this work was found to be feasible.
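The maximum likelihood expectation maximization reconstruction used above follows the standard multiplicative update, sketched below on a toy dense system matrix (Python/NumPy). The real forward model is built from the measured secondary-ion tracks and the detector geometry, which this sketch does not attempt to reproduce.

import numpy as np

def mlem(A, y, iterations=50):
    # A: (n_measurements, n_voxels) system matrix, y: measured counts
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + 1e-12              # sensitivity image A^T 1
    for _ in range(iterations):
        proj = A @ x + 1e-12                  # forward projection
        x *= (A.T @ (y / proj)) / sens        # multiplicative EM update
    return x

rng = np.random.default_rng(1)
A = rng.random((200, 50))                     # toy geometry, not real track data
x_true = np.zeros(50); x_true[20:30] = 1.0    # a "beam" segment
y = rng.poisson(50.0 * (A @ x_true))          # noisy measurements
recon = mlem(A, y)
print(np.round(recon[18:32] / recon.max(), 2))   # activity concentrated at the beam segment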
Muselaers, Constantijn H J; Rijpkema, Mark; Bos, Desirée L; Langenhuijsen, Johan F; Oyen, Wim J G; Mulders, Peter F A; Oosterwijk, Egbert; Boerman, Otto C
2015-08-01
Tumor targeted optical imaging using antibodies labeled with near infrared fluorophores is a sensitive imaging modality that might be used during surgery to assure complete removal of malignant tissue. We evaluated the feasibility of dual modality imaging and image guided surgery with the dual labeled anti-carbonic anhydrase IX antibody preparation (111)In-DTPA-G250-IRDye800CW in mice with intraperitoneal clear cell renal cell carcinoma. BALB/c nu/nu mice with intraperitoneal SK-RC-52 lesions received 10 μg DTPA-G250-IRDye800CW labeled with 15 MBq (111)In or 10 μg of the dual labeled irrelevant control antibody NUH-82 (20 mice each). To evaluate when tumors could be detected, 4 mice per group were imaged weekly during 5 weeks with single photon emission computerized tomography/computerized tomography and the fluorescence imaging followed by ex vivo biodistribution studies. As early as 1 week after tumor cell inoculation single photon emission computerized tomography and fluorescence images showed clear delineation of intraperitoneal clear cell renal cell carcinoma with good concordance between single photon emission computerized tomography/computerized tomography and fluorescence images. The high and specific accumulation of the dual labeled antibody conjugate in tumors was confirmed in the biodistribution studies. Maximum tumor uptake was observed 1 week after inoculation (mean ± SD 58.5% ± 18.7% vs 5.6% ± 2.3% injected dose per gm for DTPA-G250-IRDye800CW vs NUH-82, respectively). High tumor uptake was also observed at other time points. This study demonstrates the feasibility of dual modality imaging with dual labeled antibody (111)In-DTPA-G250-IRDye800CW in a clear cell renal cell carcinoma model. Results indicate that preoperative and intraoperative detection of carbonic anhydrase IX expressing tumors, positive resection margins and metastasis might be feasible with this approach. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Hyperpolarized (129) Xe imaging of the rat lung using spiral IDEAL.
Doganay, Ozkan; Wade, Trevor; Hegarty, Elaine; McKenzie, Charles; Schulte, Rolf F; Santyr, Giles E
2016-08-01
To implement and optimize a single-shot spiral encoding strategy for rapid 2D IDEAL projection imaging of hyperpolarized (Hp) (129) Xe in the gas phase and in the pulmonary tissue (PT) and red blood cell (RBC) compartments of the rat lung. A theoretical and experimental point spread function analysis was used to optimize the spiral k-space read-out time in a phantom. Hp (129) Xe IDEAL images from five healthy rats were used to: (i) optimize flip angles by a Bloch equation analysis using measured kinetics of gas exchange and (ii) investigate the feasibility of the approach to characterize the exchange of Hp (129) Xe. A read-out time equal to approximately 1.8 × T2* was found to provide the best trade-off between spatial resolution and signal-to-noise ratio (SNR). Spiral IDEAL approaches that use the entire dissolved phase magnetization should give an SNR improvement of a factor of approximately three compared with Cartesian approaches with similar spatial resolution. The IDEAL strategy allowed imaging of gas, PT, and RBC compartments with sufficient SNR and temporal resolution to permit regional gas exchange measurements in healthy rats. Single-shot spiral IDEAL imaging of gas, PT and RBC compartments and gas exchange is feasible in rat lung using Hp (129) Xe. Magn Reson Med 76:566-576, 2016. © 2015 Wiley Periodicals, Inc.
Atomic force microscopy imaging of macromolecular complexes.
Santos, Sergio; Billingsley, Daniel; Thomson, Neil
2013-01-01
This chapter reviews amplitude modulation (AM) AFM in air and its applications to high-resolution imaging and interpretation of macromolecular complexes. We discuss single DNA molecular imaging and DNA-protein interactions, such as those with topoisomerases and RNA polymerase. We show how relative humidity can have a major influence on resolution and contrast and how it can also affect conformational switching of supercoiled DNA. Four regimes of AFM tip-sample interaction in air are defined and described, and relate to water perturbation and/or intermittent mechanical contact of the tip with either the molecular sample or the surface. Precise control and understanding of the AFM operational parameters is shown to allow the user to switch between these different regimes: an interpretation of the origins of topographical contrast is given for each regime. Perpetual water contact is shown to lead to a high-resolution mode of operation, which we term SASS (small amplitude small set-point) imaging, and which maximizes resolution while greatly decreasing tip and sample wear and any noise due to perturbation of the surface water. Thus, this chapter provides sufficient information to reliably control the AFM in the AM AFM mode of operation in order to image both heterogeneous samples and single macromolecules including complexes, with high resolution and with reproducibility. A brief introduction to AFM, its versatility and applications to biology is also given while providing references to key work and general reviews in the field.
Witmer, Matthew T; Parlitsis, George; Patel, Sarju; Kiss, Szilárd
2013-01-01
Purpose: To compare ultra-widefield fluorescein angiography imaging using the Optos® Optomap® and the Heidelberg Spectralis® noncontact ultra-widefield module. Methods: Five patients (ten eyes) underwent ultra-widefield fluorescein angiography using the Optos® panoramic P200Tx imaging system and the noncontact ultra-widefield module in the Heidelberg Spectralis® HRA+OCT system. The images were obtained as a single, nonsteered shot centered on the macula. The area of imaged retina was outlined and quantified using Adobe® Photoshop® CS5 software. The total area and the area within each of four visualized quadrants were calculated and compared between the two imaging modalities. Three masked reviewers also evaluated each quadrant per eye (40 total quadrants) to determine which modality imaged the retinal vasculature most peripherally. Results: Optos® imaging captured a total retinal area averaging 151,362 pixels, ranging from 116,998 to 205,833 pixels, while the area captured using the Heidelberg Spectralis® was 101,786 pixels, ranging from 73,424 to 116,319 (P = 0.0002). The average area per individual quadrant imaged by Optos® versus the Heidelberg Spectralis® superiorly was 32,373 vs 32,789 pixels, respectively (P = 0.91), inferiorly was 24,665 vs 26,117 pixels, respectively (P = 0.71), temporally was 47,948 vs 20,645 pixels, respectively (P = 0.0001), and nasally was 46,374 vs 22,234 pixels, respectively (P = 0.0001). The Heidelberg Spectralis® was able to image the superior and inferior retinal vasculature to a more distal point than was the Optos®, in nine of ten eyes (18 of 20 quadrants). The Optos® was able to image the nasal and temporal retinal vasculature to a more distal point than was the Heidelberg Spectralis®, in ten of ten eyes (20 of 20 quadrants). Conclusion: The ultra-widefield fluorescein angiography images obtained with the Optos® and Heidelberg Spectralis® ultra-widefield imaging systems both provide excellent views of the peripheral retina. On a single nonsteered image, the Optos® Optomap® covered a significantly larger total retinal surface area, with greater image variability, than did the Heidelberg Spectralis® ultra-widefield module. The Optos® captured an appreciably wider view of the retina temporally and nasally, albeit with peripheral distortion, while the ultra-widefield Heidelberg Spectralis® module was able to image the superior and inferior retinal vasculature more peripherally. The clinical significance of these findings as well as the area imaged on steered montaged images remains to be determined. PMID:23458976
Lohkamp, Laura-Nanna; Vajkoczy, Peter; Budach, Volker; Kufeld, Markus
2018-05-01
To estimate the efficacy, safety and outcome of frameless image-guided robotic radiosurgery for the treatment of recurrent brain metastases after whole brain radiotherapy (WBRT). We performed a retrospective single-center analysis including patients with recurrent brain metastases after WBRT, who had been treated with single session radiosurgery using the CyberKnife® Radiosurgery System (CKRS) (Accuray Inc., CA) between 2011 and 2016. The primary end point was local tumor control, whereas secondary end points were distant tumor control, treatment-related toxicity and overall survival. 36 patients with 140 recurrent brain metastases underwent 46 single session CKRS treatments. Twenty-one patients had multiple brain metastases (58%). The mean interval between WBRT and CKRS was 2 years (range 0.2-7 years). The median number of treated metastases per treatment session was five (range 1-12), with a mean tumor volume of 1.26 cm³ and a median tumor dose of 18 Gy prescribed to the 70% isodose line. Two patients experienced local tumor recurrence within the first year after treatment and 13 patients (36%) developed new brain metastases. Nine of these patients underwent one to three additional CKRS treatments. Eight patients (22.2%) showed treatment-related radiation reactions on MRI, three with clinical symptoms. Median overall survival was 19 months after CKRS. The actuarial 1-year local control rate was 94.2%. CKRS has proven to be locally effective and safe, with high local tumor control rates and low toxicity. Thus, CKRS offers a reliable salvage treatment option for recurrent brain metastases after WBRT.
NASA Astrophysics Data System (ADS)
Matikainen, Leena; Karila, Kirsi; Hyyppä, Juha; Litkey, Paula; Puttonen, Eetu; Ahokas, Eero
2017-06-01
During the last 20 years, airborne laser scanning (ALS), often combined with passive multispectral information from aerial images, has shown its high feasibility for automated mapping processes. The main benefits have been achieved in the mapping of elevated objects such as buildings and trees. Recently, the first multispectral airborne laser scanners have been launched, and active multispectral information is for the first time available for 3D ALS point clouds from a single sensor. This article discusses the potential of this new technology in map updating, especially in automated object-based land cover classification and change detection in a suburban area. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from an object-based random forests analysis suggest that the multispectral ALS data are very useful for land cover classification, considering both elevated classes and ground-level classes. The overall accuracy of the land cover classification results with six classes was 96% compared with validation points. The classes under study included building, tree, asphalt, gravel, rocky area and low vegetation. Compared to classification of single-channel data, the main improvements were achieved for ground-level classes. According to feature importance analyses, multispectral intensity features based on several channels were more useful than those based on one channel. Automatic change detection for buildings and roads was also demonstrated by utilising the new multispectral ALS data in combination with old map vectors. In change detection of buildings, an old digital surface model (DSM) based on single-channel ALS data was also used. Overall, our analyses suggest that the new data have high potential for further increasing the automation level in mapping. Unlike passive aerial imaging commonly used in mapping, the multispectral ALS technology is independent of external illumination conditions, and there are no shadows on intensity images produced from the data. These are significant advantages in developing automated classification and change detection procedures.
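As a rough illustration of the object-based random forest workflow mentioned above, the sketch below trains a generic scikit-learn random forest on synthetic per-segment features and reports overall accuracy and feature importances. The feature set, class count and synthetic data are assumptions for illustration only, not the study's actual features or classes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 600
# Synthetic per-segment features: stand-ins for mean height and mean intensity in three channels.
X = rng.normal(size=(n, 4))
# Six pseudo-classes loosely tied to the features (stand-ins for the study's land cover classes).
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], bins=[-1.5, -0.5, 0.0, 0.5, 1.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("feature importances:", clf.feature_importances_)
```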
Mobile viewer system for virtual 3D space using infrared LED point markers and camera
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-09-01
The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems. We have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images. The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when the user watches the screen of a see-through 3D viewer. The goal of our research is to build a display system as follows: when users see the real world through the mobile viewer, the display system gives them virtual 3D images that float in the air, and the observers can touch and interact with these floating images, for example shaping virtual clay as children would. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is a simple method using a single camera rather than a stereo camera, and the results of our viewer system.
Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.
1999-01-01
This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10³ features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 × 10⁵ closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.
[Could we perform quality second trimester ultrasound among obese pregnant women?].
Fuchs, F; Voulgaropoulos, A; Houllier, M; Senat, M-V
2013-05-01
To compare the quality of second trimester ultrasound images and their anatomical quality scores among obese women and those with a normal body mass index (BMI). This prospective study, which took place from 2009 to 2011, included every obese pregnant woman (prepregnancy BMI greater than 30 kg/m(2)) who had an ultrasound examination at 20 to 24 weeks in our hospital and a control group with a normal BMI (20-24.9kg/m(2)) who had the same examination. A single operator evaluated the quality of all images, reviewing the standardized ultrasound planes - three biometric and six anatomical - required by French guidelines and scoring the quality of the six anatomical images. Each image was assessed according to 4-6 criteria, each worth one point. We sought excellent quality, defined as the frequency of maximum points for a given image. The obese group included 223 women and the control group 60. The completion rate for each image was at least 95 % in the control group and 90 % in the obese group, except for diaphragm and right outflow tract images. Overall, the excellence rate varied from 35 % to 92 % in the normal BMI group and 18 % to 58 % in the obese group and was significantly lower in the latter for all images except abdominal circumference (P=0.26) and the spine (P=0.06). Anatomical quality scores were also significantly lower in the obese group (22.3 vs. 27.2 ; P=0.001). Image quality and global anatomical scores in second trimester ultrasound scans were significantly lower among obese than normal-weight women. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Heaps, Charles W.; Schatz, George C.
2017-06-01
A computational method to model diffraction-limited images from super-resolution surface-enhanced Raman scattering microscopy is introduced. Despite significant experimental progress in plasmon-based super-resolution imaging, theoretical predictions of the diffraction-limited images remain a challenge. The method is used to calculate localization errors and image intensities for a single spherical gold nanoparticle-molecule system. The light scattering is calculated using a modification of generalized Mie (T-matrix) theory with a point dipole source, and diffraction-limited images are calculated using vectorial diffraction theory. The calculation produces the multipole expansion for each emitter and the coherent superposition of all fields. Imaging the constituent fields in addition to the total field provides new insight into the strong coupling between the molecule and the nanoparticle. Regardless of whether the molecular dipole moment is oriented parallel or perpendicular to the nanoparticle surface, the anisotropic excitation distorts the center of the nanoparticle as measured by the point spread function by approximately fifty percent of the particle radius toward the molecule. Inspection of the nanoparticle multipoles reveals that the distortion arises from a weak quadrupole resonance interfering with the dipole field in the nanoparticle. When the nanoparticle-molecule fields are in phase, the distorted nanoparticle field dominates the observed image. When they are out of phase, the nanoparticle and molecule are of comparable intensity and interference between the two emitters dominates the observed image. The method is also applied to different wavelengths and particle radii. At off-resonant wavelengths, the method predicts images closer to the molecule not because of relative intensities but because of greater distortion in the nanoparticle. The method is a promising approach to improving the understanding of plasmon-enhanced super-resolution experiments.
Holan, Scott H; Viator, John A
2008-06-21
Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicon tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in a 22% improvement in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.
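A minimal sketch of level-independent universal-threshold wavelet denoising in the spirit of the approach described above. For simplicity it uses an ordinary discrete wavelet transform from PyWavelets rather than the MODWT used in the paper, and the toy signal, wavelet choice and noise level are assumptions.

```python
import numpy as np
import pywt

def denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise estimate from the finest-scale detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))       # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

t = np.linspace(0.0, 1.0, 1000)
clean = np.exp(-((t - 0.3) / 0.02) ** 2)                      # toy photoacoustic-like pulse
noisy = clean + 0.2 * np.random.default_rng(0).standard_normal(t.size)
rmse = np.sqrt(np.mean((denoise(noisy) - clean) ** 2))
print("residual RMS error after denoising:", rmse)
```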
New photon-counting detectors for single-molecule fluorescence spectroscopy and imaging
Michalet, X.; Colyer, R. A.; Scalia, G.; Weiss, S.; Siegmund, Oswald H. W.; Tremsin, Anton S.; Vallerga, John V.; Villa, F.; Guerrieri, F.; Rech, I.; Gulinatti, A.; Tisa, S.; Zappa, F.; Ghioni, M.; Cova, S.
2013-01-01
Solution-based single-molecule fluorescence spectroscopy is a powerful new experimental approach with applications in all fields of natural sciences. Two typical geometries can be used for these experiments: point-like and widefield excitation and detection. In point-like geometries, the basic concept is to excite and collect light from a very small volume (typically femtoliter) and work in a concentration regime resulting in rare burst-like events corresponding to the transit of a single-molecule. Those events are accumulated over time to achieve proper statistical accuracy. Therefore the advantage of extreme sensitivity is somewhat counterbalanced by a very long acquisition time. One way to speed up data acquisition is parallelization. Here we will discuss a general approach to address this issue, using a multispot excitation and detection geometry that can accommodate different types of novel highly-parallel detector arrays. We will illustrate the potential of this approach with fluorescence correlation spectroscopy (FCS) and single-molecule fluorescence measurements. In widefield geometries, the same issues of background reduction and single-molecule concentration apply, but the duration of the experiment is fixed by the time scale of the process studied and the survival time of the fluorescent probe. Temporal resolution on the other hand, is limited by signal-to-noise and/or detector resolution, which calls for new detector concepts. We will briefly present our recent results in this domain. PMID:24729836
A simple method for multiday imaging of slice cultures.
Seidl, Armin H; Rubel, Edwin W
2010-01-01
The organotypic slice culture (Stoppini et al. A simple method for organotypic cultures of nervous tissue. 1991;37:173-182) has become the method of choice to answer a variety of questions in neuroscience. For many experiments, however, it would be beneficial to image or manipulate a slice culture repeatedly, for example, over the course of many days. We prepared organotypic slice cultures of the auditory brainstem of P3 and P4 mice and kept them in vitro for up to 4 weeks. Single cells in the auditory brainstem were transfected with plasmids expressing fluorescent proteins by way of electroporation (Haas et al. Single-cell electroporation for gene transfer in vivo. 2001;29:583-591). The culture was then placed in a chamber perfused with oxygenated ACSF and the labeled cell imaged with an inverted wide-field microscope repeatedly for multiple days, recording several time-points per day, before returning the slice to the incubator. We describe a simple method to image a slice culture preparation during the course of multiple days and over many continuous hours, without noticeable damage to the tissue or photobleaching. Our method uses a simple, inexpensive custom-built insulator constructed around the microscope to maintain controlled temperature and uses a perfusion chamber as used for in vitro slice recordings. (c) 2009 Wiley-Liss, Inc.
Single-Command Approach and Instrument Placement by a Robot on a Target
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Cheng, Yang
2005-01-01
AUTOAPPROACH is a computer program that enables a mobile robot to approach a target autonomously, starting from a distance of as much as 10 m, in response to a single command. AUTOAPPROACH is used in conjunction with (1) software that analyzes images acquired by stereoscopic cameras aboard the robot and (2) navigation and path-planning software that utilizes odometer readings along with the output of the image-analysis software. Intended originally for application to an instrumented, wheeled robot (rover) in scientific exploration of Mars, AUTOAPPROACH could be adapted to terrestrial applications, notably including the robotic removal of land mines and other unexploded ordnance. A human operator generates the approach command by selecting the target in images acquired by the robot cameras. The approach path consists of multiple legs. Feature points are derived from images that contain the target and are thereafter tracked to correct odometric errors and iteratively refine estimates of the position and orientation of the robot relative to the target on successive legs. The approach is terminated when the robot attains the position and orientation required for placing a scientific instrument at the target. The workspace of the robot arm is then autonomously checked for self/terrain collisions prior to the deployment of the scientific instrument onto the target.
Comparing Individual Tree Segmentation Based on High Resolution Multispectral Image and Lidar Data
NASA Astrophysics Data System (ADS)
Xiao, P.; Kelly, M.; Guo, Q.
2014-12-01
This study compares the use of high-resolution multispectral WorldView images and high density Lidar data for individual tree segmentation. The application focuses on coniferous and deciduous forests in the Sierra Nevada Mountains. The tree objects are obtained in two ways: a hybrid region-merging segmentation method with multispectral images, and a top-down and bottom-up region-growing method with Lidar data. The hybrid region-merging method is used to segment individual trees from multispectral images. It integrates the advantages of global-oriented and local-oriented region-merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region. The merging iterations are constrained within the local vicinity, thus the segmentation is accelerated and can reflect the local context. The top-down region-growing method is adopted in coniferous forest to delineate individual trees from Lidar data. It exploits the spacing between the tops of trees to identify and group points into a single tree based on simple rules of proximity and likely tree shape. The bottom-up region-growing method based on the intensity and 3D structure of Lidar data is applied in deciduous forest. It segments tree trunks based on the intensity and topological relationships of the points, and then allocates other points to the respective tree crowns according to distance. The accuracies for each method are evaluated with field survey data in several test sites, covering dense and sparse canopy. Three types of segmentation results are produced: a true positive represents a correctly segmented individual tree, a false negative represents a tree that is not detected and is instead assigned to a nearby tree, and a false positive represents a point or pixel cluster that is segmented as a tree that does not in fact exist. They respectively represent correct, under-, and over-segmentation. Three indices are compared for segmenting individual trees from multispectral images and Lidar data: recall, precision and F-score. This work explores the tradeoff between the expensive Lidar data and the inexpensive multispectral imagery. The conclusions will guide optimal data selection for individual tree segmentation in areas of differing canopy density, and contribute to the field of forest remote sensing.
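The following is a deliberately simplified stand-in for top-down tree delineation: it detects local maxima of a synthetic canopy height model as tree tops and grows crowns with a marker-based watershed. It is a generic illustration of the idea, not the authors' point-based region-growing algorithm; the grid size, window size and height threshold are arbitrary assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

rng = np.random.default_rng(1)
# Fake canopy height model (metres) standing in for a rasterized Lidar surface.
chm = ndimage.gaussian_filter(rng.random((200, 200)) * 20.0, sigma=5)

tops = (chm == ndimage.maximum_filter(chm, size=15)) & (chm > 2.0)   # local maxima = tree tops
markers, n_trees = ndimage.label(tops)
crowns = watershed(-chm, markers, mask=chm > 2.0)                    # grow crowns downhill from the tops
print("detected tree tops:", n_trees, "crown labels:", crowns.max())
```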
Multiscale study on stochastic reconstructions of shale samples
NASA Astrophysics Data System (ADS)
Lili, J.; Lin, M.; Jiang, W. B.
2016-12-01
Shales are known to have multiscale pore systems, composed of macroscale fractures, micropores, and nanoscale pores within gas- or oil-producing organic material. Shales are also fissile and laminated, and their heterogeneity in the horizontal direction is quite different from that in the vertical direction. Stochastic reconstructions are extremely useful in situations where three-dimensional information is costly and time consuming to acquire. The purpose of our paper is therefore to stochastically reconstruct equiprobable 3D models containing information from several scales. In this paper, macroscale and microscale images of shale structure in the Lower Silurian Longmaxi Formation are obtained by X-ray microtomography and nanoscale images are obtained by scanning electron microscopy. Each image is representative for all given scales and phases. In particular, the macroscale image is four times coarser than the microscale image, which in turn is four times lower in resolution than the nanoscale image. Secondly, the cross correlation-based simulation method (CCSIM) and the three-step sampling method are combined together to generate stochastic reconstructions for each scale. It is important to point out that the boundary points of pore and matrix are selected based on a multiple-point connectivity function in the sampling process, and thus the characteristics of the reconstructed image can be controlled indirectly. Thirdly, all images are brought to the same resolution through downscaling and upscaling by interpolation, and then we merge the multiscale categorical spatial data into a single 3D image with a predefined resolution (that of the microscale image). Thirty realizations are generated using the given images and the proposed method. The result reveals that the proposed method is capable of preserving the multiscale pore structure, both vertically and horizontally, which is necessary for accurate permeability prediction. The variogram curves and pore-size distribution for both the original 3D sample and the generated 3D realizations are compared. The result indicates that the agreement between the original 3D sample and the generated stochastic realizations is excellent. This work is supported by "973" Program (2014CB239004), the Key Instrument Developing Project of the CAS (ZDYZ2012-1-08-02) and the National Natural Science Foundation of China (Grant No. 41574129).
Shokouhi, Sepideh; Rogers, Baxter P; Kang, Hakmook; Ding, Zhaohua; Claassen, Daniel O; Mckay, John W; Riddle, William R
2015-01-01
Amyloid-beta (Aβ) imaging with positron emission tomography (PET) holds promise for detecting the presence of Aβ plaques in the cortical gray matter. Many image analyses focus on regional average measurements of tracer activity distribution; however, considerable additional information is available in the images. Metrics that describe the statistical properties of images, such as the two-point correlation function (S2), have found wide applications in astronomy and materials science. S2 provides a detailed characterization of spatial patterns in images typically referred to as clustering or flocculence. The objective of this study was to translate the two-point correlation method into Aβ-PET of the human brain using 11C-Pittsburgh compound B (11C-PiB) to characterize longitudinal changes in the tracer distribution that may reflect changes in Aβ plaque accumulation. We modified the conventional S2 metric, which is primarily used for binary images and formulated a weighted two-point correlation function (wS2) to describe nonbinary, real-valued PET images with a single statistical function. Using serial 11C-PiB scans, we calculated wS2 functions from two-dimensional PET images of different cortical regions as well as three-dimensional data from the whole brain. The area under the wS2 functions was calculated and compared with the mean/median of the standardized uptake value ratio (SUVR). For three-dimensional data, we compared the area under the wS2 curves with the subjects' cerebrospinal fluid measures. Overall, the longitudinal changes in wS2 correlated with the increase in mean SUVR but showed lower variance. The whole brain results showed a higher inverse correlation between the cerebrospinal Aβ and wS2 than between the cerebrospinal Aβ and SUVR mean/median. We did not observe any confounding of wS2 by region size or injected dose. The wS2 detects subtle changes and provides additional information about the binding characteristics of radiotracers and Aβ accumulation that are difficult to verify with mean SUVR alone.
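A hedged sketch of a weighted two-point correlation function for a real-valued 2D image is given below, taking wS2(r) as the average product of intensities at pixel pairs separated by lag r along the image axes. This follows the general definition of S2 extended to non-binary images; the exact normalization and radial averaging used in the paper may differ, and the random image is only a stand-in for a regional SUVR map.

```python
import numpy as np

def ws2(img, max_lag=20):
    """Average product of intensities at pixel pairs separated by lag r (x and y axes)."""
    img = img - img.min()                      # keep values non-negative
    vals = []
    for r in range(1, max_lag + 1):
        horiz = img[:, :-r] * img[:, r:]       # pairs offset by r along x
        vert = img[:-r, :] * img[r:, :]        # pairs offset by r along y
        vals.append(0.5 * (horiz.mean() + vert.mean()))
    return np.array(vals)

img = np.random.default_rng(0).random((64, 64))   # toy stand-in for a regional tracer image
curve = ws2(img)
print("area under the wS2 curve (discrete sum):", curve.sum())
```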
Object recognition and localization from 3D point clouds by maximum-likelihood estimation
NASA Astrophysics Data System (ADS)
Dantanarayana, Harshana G.; Huntley, Jonathan M.
2017-08-01
We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike 'interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
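As a toy analogue of the maximum-likelihood idea described above, the sketch below scans candidate 2D translations (a 2 d.f. problem) and scores each by the Gaussian log likelihood of nearest-neighbour residuals between the translated model points and the observed points. The point sets, dispersion sigma and search grid are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
model = rng.random((200, 2))                          # model point set
true_t = np.array([0.35, -0.20])
scene = model + true_t + 0.01 * rng.standard_normal(model.shape)
tree = cKDTree(scene)

sigma = 0.02                                          # assumed dispersion of residuals
best_t, best_ll = None, -np.inf
for tx in np.linspace(-0.5, 0.5, 41):
    for ty in np.linspace(-0.5, 0.5, 41):
        d, _ = tree.query(model + np.array([tx, ty]))
        ll = -0.5 * np.sum((d / sigma) ** 2)          # Gaussian log likelihood (up to a constant)
        if ll > best_ll:
            best_ll, best_t = ll, np.array([tx, ty])
print("estimated translation:", best_t, "true translation:", true_t)
```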
Pande, Paritosh; Shelton, Ryan L; Monroy, Guillermo L; Nolan, Ryan M; Boppart, Stephen A
2016-10-01
The thickness of the human tympanic membrane (TM) is known to vary considerably across different regions of the TM. Quantitative determination of the thickness distribution and mapping of the TM is of significant importance in hearing research, particularly in mathematical modeling of middle-ear dynamics. Change in TM thickness is also associated with several middle-ear pathologies. Determination of the TM thickness distribution could therefore also enable a more comprehensive diagnosis of various otologic diseases. Despite its importance, very limited data on human TM thickness distribution, obtained almost exclusively from ex vivo samples, are available in the literature. In this study, the thickness distribution for the in vivo human TM is reported for the first time. A hand-held imaging system, which combines a low coherence interferometry (LCI) technique for single-point thickness measurement, with video-otoscopy for recording the image of the TM, was used to collect the data used in this study. Data were acquired by pointing the imaging probe over different regions of the TM, while simultaneously recording the LCI and concomitant TM surface video image data from an average of 500 locations on the TM. TM thickness distribution maps were obtained by mapping the LCI imaging sites onto an anatomically accurate wide-field image of the TM, which was generated by mosaicking the sequence of multiple small field-of-view video-otoscopy images. Descriptive statistics of the thickness measurements obtained from the different regions of the TM are presented, and the general thickness distribution trends are discussed.
NASA Astrophysics Data System (ADS)
Carvalho, Diego D. B.; Akkus, Zeynettin; Bosch, Johan G.; van den Oord, Stijn C. H.; Niessen, Wiro J.; Klein, Stefan
2014-03-01
In this work, we investigate nonrigid motion compensation in simultaneously acquired (side-by-side) B-mode ultrasound (BMUS) and contrast enhanced ultrasound (CEUS) image sequences of the carotid artery. These images are acquired to study the presence of intraplaque neovascularization (IPN), which is a marker of plaque vulnerability. IPN is visualized and quantified by computing the maximum intensity projection (MIP) of the CEUS image sequence over time. As carotid images contain considerable motion, accurate global nonrigid motion compensation (GNMC) is required prior to the MIP. Moreover, we demonstrate that an improved lumen and plaque differentiation can be obtained by averaging the motion compensated BMUS images over time. We propose to use a previously published 2D+t nonrigid registration method, which is based on minimization of pixel intensity variance over time, using a spatially and temporally smooth B-spline deformation model. The validation compares displacements of plaque points with manual tracking by three experts in 11 carotid arteries. The average (± standard deviation) root mean square error (RMSE) was 99 ± 74 μm for longitudinal and 47 ± 18 μm for radial displacements. These results were comparable with the interobserver variability, and with results of a local rigid registration technique based on speckle tracking, which estimates motion in a single point, whereas our approach applies motion compensation to the entire image. In conclusion, the GNMC technique produces reliable results. Since this technique tracks global deformations, it can aid in the quantification of IPN and the delineation of lumen and plaque contours.
Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong
2016-01-01
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision systems under different circumstances. PMID:26861351
Baek, Seung Ok; Cho, Hee Kyung; Jung, Gil Su; Son, Su Min; Cho, Yun Woo; Ahn, Sang Ho
2014-09-01
Transcutaneous neuromuscular electrical stimulation (NMES) can stimulate contractions in deep lumbar stabilizing muscles. An optimal protocol has not been devised for the activation of these muscles by NMES, and information is lacking regarding an optimal stimulation point on the abdominal wall. The goal was to determine a single optimized stimulation point on the abdominal wall for transcutaneous NMES for the activation of deep lumbar stabilizing muscles. Ultrasound images of the spinal stabilizing muscles were captured during NMES at three sites on the lateral abdominal wall. After an optimal location for the placement of the electrodes was determined, changes in the thickness of the lumbar multifidus (LM) were measured during NMES. Three stimulation points were investigated using 20 healthy physically active male volunteers. A reference point R, 1 cm superior to the iliac crest along the midaxillary line, was used. Three study points were used: stimulation point S1 was located 2 cm superior and 2 cm medial to the anterior superior iliac spine, stimulation point S3 was 2 cm below the lowest rib along the same sagittal plane as S1, and stimulation point S2 was midway between S1 and S3. Sessions were conducted stimulating at S1, S2, or S3 using R for reference. Real-time ultrasound imaging (RUSI) of the abdominal muscles was captured during each stimulation session. In addition, RUSI images were captured of the LM during stimulation at S1. Thickness, as measured by RUSI, of the transverse abdominis (TrA), obliquus internus, and obliquus externus was greater during NMES than at rest for all three study points (p<.05). Transverse abdominis was significantly stimulated more by NMES at S1 than at the other points (p<.05). The LM thickness was also significantly greater during NMES at S1 than at rest (p<.05). Neuromuscular electrical stimulation at S1 optimally activated deep spinal stabilizing muscles, TrA and LM, as evidenced by RUSI. The authors recommend this optimal stimulation point be used for NMES in the course of lumbar spine stabilization training in patients having difficulty initiating contraction of these muscles. Copyright © 2014 Elsevier Inc. All rights reserved.
Zarkevich, Nikolai A.; Johnson, Duane D.
2015-01-09
The nudged-elastic band (NEB) method is modified with two concomitant climbing images (C2-NEB) to find a transition state (TS) in complex energy landscapes, such as those with a serpentine minimal energy path (MEP). If a single climbing image (C1-NEB) successfully finds the TS, then C2-NEB finds it too. The improved stability of C2-NEB makes it suitable for more complex cases, where C1-NEB misses the TS because the MEP and NEB directions near the saddle point are different. Generally, C2-NEB not only finds the TS, but guarantees, by construction, that the climbing images approach it from opposite sides along the MEP. In addition, C2-NEB provides an accuracy estimate from the three images: the highest-energy one and its climbing neighbors. C2-NEB is suitable for fixed-cell NEB and the generalized solid-state NEB.
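A small sketch of the climbing-image force rule at the heart of C1- and C2-NEB: a climbing image feels the true force with its component along the local path tangent reversed, so it moves uphill along the path and downhill perpendicular to it. In C2-NEB this rule is applied to the two images flanking the highest-energy one; the vectors below are toy values, not results from the paper.

```python
import numpy as np

def climbing_force(true_force, tangent):
    """True force with its component along the (unit) path tangent reversed."""
    tau = tangent / np.linalg.norm(tangent)
    return true_force - 2.0 * np.dot(true_force, tau) * tau

f = np.array([1.0, -0.5])         # toy true force acting on a climbing image
tau = np.array([1.0, 0.0])        # toy local tangent of the band
print(climbing_force(f, tau))     # parallel component inverted: [-1.0, -0.5]
```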
INSAR Study Of Landslides In The Region Of Lake Sevan-Armenia
NASA Astrophysics Data System (ADS)
Lazarov, A.; Minchev, D.
2012-01-01
The region of Lake Sevan in Armenia is of theoretical and practical interest due to its pronounced landslide phenomena caused by meteorological and hydrological factors. Under ESA Principal Investigator project C1P-6051, four single look complex images from the ASAR instrument of the ESA ENVISAT satellite, two from 2008 and two from 2009, covering the region of Lake Sevan in Armenia, were obtained and thoroughly investigated. One of the images is designated as the master and the remaining three as slaves; hence, three interferometric pairs are produced. Data from the NASA SRTM mission are then applied to the interferometric pairs in order to remove the topographic contribution from the interferograms. The three generated interferograms show decreasing coherence caused by strong temporal decorrelation, i.e. a decreasing level of correspondence between the SLCs in each interferometric pair as the time separation between acquisitions grows.
Lens-free imaging of magnetic particles in DNA assays.
Colle, Frederik; Vercruysse, Dries; Peeters, Sara; Liu, Chengxun; Stakenborg, Tim; Lagae, Liesbet; Del-Favero, Jurgen
2013-11-07
We present a novel opto-magnetic system for the fast and sensitive detection of nucleic acids. The system is based on a lens-free imaging approach resulting in a compact and cheap optical readout of surface hybridized DNA fragments. In our system magnetic particles are attracted towards the detection surface thereby completing the labeling step in less than 1 min. An optimized surface functionalization combined with magnetic manipulation was used to remove all nonspecifically bound magnetic particles from the detection surface. A lens-free image of the specifically bound magnetic particles on the detection surface was recorded by a CMOS imager. This recorded interference pattern was reconstructed in software, to represent the particle image at the focal distance, using little computational power. As a result we were able to detect DNA concentrations down to 10 pM with single particle sensitivity. The possibility of integrated sample preparation by manipulation of magnetic particles, combined with the cheap and highly compact lens-free detection makes our system an ideal candidate for point-of-care diagnostic applications.
Lunar single-scattering, porosity, and surface-roughness properties with SMART-1/AMIE
NASA Astrophysics Data System (ADS)
Parviainen, H.; Muinonen, K.; Näränen, J.; Josset, J.-L.; Beauvivre, S.; Pinet, P.; Chevrel, S.; Koschny, D.; Grieger, B.; Foing, B.
2009-04-01
We analyze the single-scattering albedo and phase function, local surface roughness and regolith porosity, and the coherent backscattering, single scattering, and shadowing contributions to the opposition effect for specific lunar mare regions imaged by the SMART-1/AMIE camera. We account for shadowing due to surface roughness and mutual shadowing among the regolith particles with ray-tracing computations for densely-packed particulate media with a fractional-Brownian-motion interface with free space. The shadowing modeling allows us to derive the hundred-micron-scale volume-element scattering phase function for the lunar mare regolith. We explain the volume-element phase function by a coherent-backscattering model, where the single scatterers are the submicron-to-micron-scale particle inhomogeneities and/or the smallest particles on the lunar surface. We express the single-scatterer phase function as a sum of three Henyey-Greenstein terms, accounting for increased backward scattering in both narrow and wide angular ranges. The Moon exhibits an opposition effect, that is, a nonlinear increase of disk-integrated brightness with decreasing solar phase angle, the angle between the Sun and the observer as seen from the object. Recently, the coherent-backscattering mechanism (CBM) has been introduced to explain the opposition effect. CBM is a multiple-scattering interference mechanism, where reciprocal waves propagating through the same scatterers in opposite directions always interfere constructively in the backward-scattering direction but with varying interference characteristics in other directions. In addition to CBM, mutual shadowing among regolith particles (SMp) and rough-surface shadowing (SMr) have their effect on the behavior of the observed lunar surface brightness. In order to accrue knowledge on the volume-element and, ultimately, single-scattering properties of the lunar regolith, both SMp and SMr need to be accurately accounted for. We included four different lunar mare regions in our study. Each of these regions covers several hundreds of square kilometers of lunar surface. When selecting the regions, we have required that they have been imaged by AMIE across a wide range of phase angles, including the opposition geometry. The phase-angle range covered is 0-109°, with incidence and emergence angles (ι and ε) ranging within 7-87° and 0-53°, respectively. The pixel scale varies from 288 m down to 29 m. Biases and dark currents were subtracted from the images in the usual way, followed by a flat-field correction. New dark-current reduction procedures have recently been derived from in-flight measurements to replace the ground-calibration images. The clear filter was chosen for the present study as it provides the largest field of view and is currently the best-calibrated channel. Off-nadir-pointing observations allowed for the extensive phase-angle coverage. In total, 220 images are used for the present study. The photometric data points were extracted as follows. First, on average, 50 sample areas of 10 × 10 pixels were chosen by hand from each image. Second, the surface normal, ι, ε, φ, and α were computed for each pixel in each sample area using the NASA/NAIF SPICE software toolkit with the latest and corrected SMART-1/AMIE SPICE kernels. Finally, the illumination angles and the observed intensity were averaged over each sample area. In total, the images used in the study resulted in approximately 11000 photometric sample points for the four mare regions.
We make use of fractional-Brownian-motion surfaces in modeling the interface between free space and regolith and a size distribution of spherical particles in modeling the particulate medium. We extract the effects of the stochastic geometry from the lunar photometry and, simultaneously, obtain the volume-element scattering phase function of the lunar regolith locations studied. The volume-element phase function allows us to constrain the physical properties of the regolith particles. Based on the present theoretical modeling of the lunar photometry from SMART-1/AMIE, we conclude that most of the lunar mare opposition effect is caused by coherent backscattering and single scattering within volume elements comparable to lunar particle sizes, with only a small contribution from shadowing effects. We thus suggest that the lunar single scatterers exhibit intensity enhancement towards the backward scattering direction in resemblance to the scattering characteristics experimentally measured and theoretically computed for realistic small particles. Further interpretations of the lunar volume-element phase function will be the subject of future research.
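The single-scatterer phase function described above is a weighted sum of three Henyey-Greenstein terms; a small sketch is given below. The asymmetry parameters g and weights w are arbitrary placeholders, not the values fitted to the lunar data.

```python
import numpy as np

def henyey_greenstein(theta, g):
    """Single Henyey-Greenstein term as a function of scattering angle theta (radians)."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * np.cos(theta)) ** 1.5)

def triple_hg(theta, g=(-0.9, -0.3, 0.5), w=(0.1, 0.4, 0.5)):
    """Weighted sum of three HG terms (placeholder parameters; weights sum to 1)."""
    return sum(wi * henyey_greenstein(theta, gi) for wi, gi in zip(w, g))

for ang in (0.0, np.pi / 2.0, np.pi):                 # forward, side, and backward scattering
    print(f"p(theta = {ang:.2f} rad) = {triple_hg(ang):.4f}")
```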
Registration of ophthalmic images using control points
NASA Astrophysics Data System (ADS)
Heneghan, Conor; Maguire, Paul
2003-03-01
A method for registering pairs of digital ophthalmic images of the retina is presented, using anatomical features present in both images as control points. The anatomical features chosen are blood vessel crossings and bifurcations. These control points are identified by a combination of local contrast enhancement and morphological processing. In general, however, the correspondence between control points is unknown, so an automated algorithm is used to determine the matching pairs of control points in the two images as follows. Using two control points from each image, rigid global transform (RGT) coefficients are calculated for all possible combinations of control point pairs, and the most consistent set of RGT coefficients is identified. Once control point pairs are established, registration of two images can be achieved by using linear regression to optimize an RGT, bilinear or second-order polynomial global transform. An example of cross-modal image registration using an optical image and a fluorescein angiogram of an eye is presented to illustrate the technique.
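As an illustration of the least-squares step, the sketch below estimates a 2D rigid global transform (rotation plus translation) from matched control points via the standard SVD-based (Kabsch/Procrustes) solution. The synthetic point sets are assumptions, and the abstract's full pipeline (automatic pair matching, bilinear and polynomial refinements) is not reproduced.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rotation R and translation t such that dst ~ src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                      # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cd - R @ cs

rng = np.random.default_rng(0)
src = rng.random((30, 2)) * 100.0                      # synthetic control points in image 1
angle = np.deg2rad(7.0)
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
dst = src @ R_true.T + np.array([12.0, -5.0]) + 0.3 * rng.standard_normal(src.shape)
R, t = fit_rigid(src, dst)
print("recovered rotation (deg):", np.rad2deg(np.arctan2(R[1, 0], R[0, 0])), "translation:", t)
```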
NASA Astrophysics Data System (ADS)
Besner, Sebastien; Shao, Peng; Scarcelli, Giuliano; Pineda, Roberto; Yun, Seok-Hyun (Andy)
2016-03-01
Keratoconus is a degenerative disorder of the eye characterized by thinning of the human cornea and a morphological change to a more conical shape. Current diagnosis of this disease relies on topographic imaging of the cornea. Early and differential diagnosis is difficult. In keratoconus, mechanical properties are found to be compromised. A clinically available, non-invasive technique capable of measuring the mechanical properties of the cornea is of significant importance for understanding the mechanism of keratoconus development and for improving detection and intervention in keratoconus. The capability of Brillouin imaging to detect the local longitudinal modulus in the human cornea has been demonstrated previously. We report our non-contact, non-invasive, clinically viable Brillouin imaging system engineered to evaluate the mechanical properties of the human cornea in vivo. The system takes advantage of a highly dispersive 2-stage virtually imaged phased array (VIPA) to detect the weak Brillouin scattering signal from biological samples. With a 1.5-mW light beam from a 780-nm single-wavelength laser source, the system is able to detect the Brillouin frequency shift at a single point in the human cornea in less than 0.3 s, at a 5 μm/30 μm lateral/axial resolution. The sensitivity of the system was quantified to be ~10 MHz. A-scans were acquired at different sample locations on the human cornea using a motorized human interface. We imaged both normal and keratoconic human corneas with this system. Whereas no significant difference was observed outside the keratoconic cones compared with normal corneas, a highly statistically significant decrease was found in the cone regions.
Bray, Mark-Anthony; Gustafsdottir, Sigrun M; Rohban, Mohammad H; Singh, Shantanu; Ljosa, Vebjorn; Sokolnicki, Katherine L; Bittker, Joshua A; Bodycombe, Nicole E; Dančík, Vlado; Hasaka, Thomas P; Hon, Cindy S; Kemp, Melissa M; Li, Kejie; Walpita, Deepika; Wawer, Mathias J; Golub, Todd R; Schreiber, Stuart L; Clemons, Paul A; Shamji, Alykhan F
2017-01-01
Background: Large-scale image sets acquired by automated microscopy of perturbed samples enable a detailed comparison of cell states induced by each perturbation, such as a small molecule from a diverse library. Highly multiplexed measurements of cellular morphology can be extracted from each image and subsequently mined for a number of applications. Findings: This microscopy dataset includes 919,265 five-channel fields of view, representing 30,616 tested compounds, available at "The Cell Image Library" (CIL) repository. It also includes data files containing morphological features derived from each cell in each image, both at the single-cell level and the population-averaged (i.e., per-well) level; the image analysis workflows that generated the morphological features are also provided. Quality-control metrics are provided as metadata, indicating fields of view that are out of focus or contain highly fluorescent material or debris. Lastly, chemical annotations are supplied for the compound treatments applied. Conclusions: Because computational algorithms and methods for handling single-cell morphological measurements are not yet routine, the dataset serves as a useful resource for the wider scientific community applying morphological (image-based) profiling. The dataset can be mined for many purposes, including small-molecule library enrichment and chemical mechanism-of-action studies, such as target identification. Integration with genetically perturbed datasets could enable identification of small-molecule mimetics of particular disease- or gene-related phenotypes that could be useful as probes or potential starting points for development of future therapeutics. PMID:28327978
Optimized Graph Learning Using Partial Tags and Multiple Features for Image and Video Annotation.
Song, Jingkuan; Gao, Lianli; Nie, Feiping; Shen, Heng Tao; Yan, Yan; Sebe, Nicu
2016-11-01
In multimedia annotation, due to the time constraints and the tediousness of manual tagging, it is quite common to utilize both tagged and untagged data to improve the performance of supervised learning when only limited tagged training data are available. This is often done by adding a geometry-based regularization term in the objective function of a supervised learning model. In this case, a similarity graph is indispensable to exploit the geometrical relationships among the training data points, and the graph construction scheme essentially determines the performance of these graph-based learning algorithms. However, most of the existing works construct the graph empirically and are usually based on a single feature without using the label information. In this paper, we propose a semi-supervised annotation approach by learning an optimized graph (OGL) from multi-cues (i.e., partial tags and multiple features), which can more accurately embed the relationships among the data points. Since OGL is a transductive method and cannot deal with novel data points, we further extend our model to address the out-of-sample issue. Extensive experiments on image and video annotation show the consistent superiority of OGL over the state-of-the-art methods.
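To illustrate the general idea of propagating a few known tags through a similarity graph, the sketch below uses scikit-learn's generic LabelSpreading on a single synthetic feature set. The OGL method above additionally learns the graph itself from multiple features and partial tags, which is not shown here; the dataset, kernel and gamma value are assumptions.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y = make_moons(n_samples=300, noise=0.08, random_state=0)
y_partial = np.full_like(y, -1)                        # -1 marks untagged items
labeled = np.random.default_rng(0).choice(len(y), size=10, replace=False)
y_partial[labeled] = y[labeled]                        # keep only a handful of tags

model = LabelSpreading(kernel="rbf", gamma=20).fit(X, y_partial)
print("transductive accuracy on all items:", (model.transduction_ == y).mean())
```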
Interactive-cut: Real-time feedback segmentation for translational research.
Egger, Jan; Lüddemann, Tobias; Schwarzenberg, Robert; Freisleben, Bernd; Nimsky, Christopher
2014-06-01
In this contribution, a scale-invariant image segmentation algorithm is introduced that hides the algorithm's parameters behind its interactive behavior, sparing the user from specifying "arbitrary" numbers that are hard to interpret. To this end, we designed a graph-based segmentation method that requires only a single seed point inside the target structure from the user and is therefore particularly suitable for immediate processing and interactive, real-time adjustments. In addition, the color or gray-value information needed by the approach can be extracted automatically around the user-defined seed point. Furthermore, the graph is constructed in such a way that a polynomial-time min-cut computation can provide the segmentation result within a second on an up-to-date computer. The algorithm has been evaluated with fixed seed points on 2D and 3D medical image data, such as brain tumors, cerebral aneurysms, and vertebral bodies. Direct comparison of the automatic segmentation results with costlier, manual slice-by-slice segmentations performed by trained physicians suggests a strong medical relevance of this interactive approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
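For orientation, the following toy sketch illustrates the general family of seeded graph-cut segmentation referred to above: pixels become graph nodes, neighbor edges are weighted by intensity similarity, terminal edges encode affinity to the single seed versus the background, and a polynomial-time min-cut yields the segmentation. The specific graph construction, edge weights, and scale invariance of the published method are not reproduced here.

    import numpy as np
    import networkx as nx

    img = np.zeros((20, 20)); img[6:14, 6:14] = 1.0      # bright target on dark background
    img += 0.05 * np.random.randn(*img.shape)
    seed = (10, 10)                                       # single user-defined seed point
    beta = 20.0

    G = nx.Graph()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            # n-links: neighbouring pixels, weighted by intensity similarity
            for dy, dx in ((0, 1), (1, 0)):
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    G.add_edge((y, x), (yy, xx),
                               capacity=np.exp(-beta * (img[y, x] - img[yy, xx]) ** 2))
            # t-links: affinity to the seed intensity vs. to the dark background
            G.add_edge("SRC", (y, x), capacity=np.exp(-beta * (img[y, x] - img[seed]) ** 2))
            G.add_edge("SNK", (y, x), capacity=np.exp(-beta * img[y, x] ** 2))

    _, (fg, _) = nx.minimum_cut(G, "SRC", "SNK")
    mask = np.zeros_like(img, dtype=bool)
    for node in fg:
        if node != "SRC":
            mask[node] = True
    print("segmented pixels:", int(mask.sum()))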
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. This study evaluated the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point-and-shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness to dermatologic surgeons. For each camera type, the image quality, along with other practical benefits and limitations, was evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point-and-shoot and DSLR cameras provide sufficient resolution for a range of clinical circumstances while offering the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for defining skin contour. The selection of an optimal camera depends on the context in which it will be used.
Kamali, Tschackad; Považay, Boris; Kumar, Sunil; Silberberg, Yaron; Hermann, Boris; Werkmeister, René; Drexler, Wolfgang; Unterhuber, Angelika
2014-10-01
We demonstrate a multimodal optical coherence tomography (OCT) and online Fourier transform coherent anti-Stokes Raman scattering (FTCARS) platform using a single sub-12 femtosecond (fs) Ti:sapphire laser enabling simultaneous extraction of structural and chemical ("morphomolecular") information of biological samples. Spectral domain OCT prescreens the specimen, providing a fast ultrahigh (4 × 12 μm axial and transverse) resolution wide-field morphologic overview. Additional complementary intrinsic molecular information is obtained by zooming into regions of interest for fast label-free chemical mapping with online FTCARS spectroscopy. Background-free CARS is based on a Michelson interferometer in combination with a highly linear piezo stage, which allows for quick point-to-point extraction of CARS spectra in the fingerprint region in less than 125 ms with a resolution better than 4 cm⁻¹ without the need for averaging. OCT morphology and CARS spectral maps indicating phosphate and carbonate bond vibrations from human bone samples are extracted to demonstrate the performance of this hybrid imaging platform.
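The "online FTCARS" step amounts to Fourier transforming an interferogram recorded versus pump-probe delay into a vibrational spectrum. The sketch below does this on synthetic data with assumed scan parameters (2 fs delay steps over roughly 8 ps, which corresponds to a resolution on the order of the 4 cm⁻¹ quoted above); it is not the instrument's actual processing chain.

    import numpy as np

    c = 2.998e10                               # speed of light, cm/s
    dt = 2.0e-15                               # assumed 2 fs delay step
    t = np.arange(4096) * dt                   # ~8 ps scan -> ~4 cm^-1 resolution
    # synthetic interferogram: phosphate (~960 cm^-1) and carbonate (~1070 cm^-1) modes
    signal = sum(np.cos(2 * np.pi * c * w * t) * np.exp(-t / 2e-12)
                 for w in (960.0, 1070.0))

    spectrum = np.abs(np.fft.rfft(signal))
    wavenumber = np.fft.rfftfreq(len(t), d=dt) / c       # convert Hz to cm^-1
    band = (wavenumber > 800) & (wavenumber < 1800)       # fingerprint region
    peak = wavenumber[band][np.argmax(spectrum[band])]
    print("strongest band near %.0f cm^-1" % peak)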
Schlossberg, David J.; Bodner, Grant M.; Bongard, Michael W.; ...
2016-09-16
Here, a novel, cost-effective, multi-point Thomson scattering system has been designed, implemented, and operated on the Pegasus Toroidal Experiment. Leveraging advances in Nd:YAG lasers, high-efficiency volume phase holographic transmission gratings, and increased quantum-efficiency Generation 3 image-intensified charge coupled device (ICCD) cameras, the system provides Thomson spectra at eight spatial locations for a single grating/camera pair. The on-board digitization of the ICCD camera enables easy modular expansion, evidenced by the recent extension from 4 to 12 plasma/background spatial location pairs. Stray light is rejected using time-of-flight methods suited to gated ICCDs, and background light is blocked during detector readout by a fast shutter. This ~10³ reduction in background light enables further expansion to up to 24 spatial locations. The implementation now provides single-shot T_e(R) for n_e > 5 × 10¹⁸ m⁻³.
Using Deep Space Climate Observatory Measurements to Study the Earth as an Exoplanet
NASA Astrophysics Data System (ADS)
Jiang, Jonathan H.; Zhai, Albert J.; Herman, Jay; Zhai, Chengxing; Hu, Renyu; Su, Hui; Natraj, Vijay; Li, Jiazheng; Xu, Feng; Yung, Yuk L.
2018-07-01
Even though it was not designed as an exoplanetary research mission, the Deep Space Climate Observatory (DSCOVR) has been opportunistically used for a novel experiment in which Earth serves as a proxy exoplanet. More than 2 yr of DSCOVR Earth images were employed to produce time series of multiwavelength, single-point light sources in order to extract information on planetary rotation, cloud patterns, surface type, and orbit around the Sun. In what follows, we assume that these properties of the Earth are unknown and instead attempt to derive them from first principles. These conclusions are then compared with known data about our planet. We also used the DSCOVR data to simulate phase-angle changes, as well as the minimum data collection rate needed to determine the rotation period of an exoplanet. This innovative method of using the time evolution of a multiwavelength, reflected single-point light source can be deployed for retrieving a range of intrinsic properties of an exoplanet around a distant star.
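The rotation-period retrieval described above can be illustrated with a simple periodogram of a disk-integrated, single-point light curve. The sketch below uses a synthetic reflectance time series (not DSCOVR data) with an assumed cadence, modulation depth, and noise level.

    import numpy as np

    period_true = 23.93 * 3600.0                      # sidereal rotation period, s
    t = np.arange(0, 60 * 86400.0, 6800.0)            # ~2 months, one point every ~1.9 h
    # rotating continents/clouds modulate the reflected flux; add a slow trend and noise
    flux = (1.0 + 0.08 * np.sin(2 * np.pi * t / period_true)
                + 0.02 * np.sin(2 * np.pi * t / (365.25 * 86400.0))
                + 0.01 * np.random.randn(t.size))

    flux -= flux.mean()
    power = np.abs(np.fft.rfft(flux)) ** 2
    freq = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    best = freq[1:][np.argmax(power[1:])]             # skip the zero-frequency bin
    print("recovered rotation period: %.1f h" % (1.0 / best / 3600.0))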
Near-Infrared Spatially Resolved Spectroscopy for Tablet Quality Determination.
Igne, Benoît; Talwar, Sameer; Feng, Hanzhou; Drennen, James K; Anderson, Carl A
2015-12-01
Near-infrared (NIR) spectroscopy has become a well-established tool for the characterization of solid oral dosage form manufacturing processes and finished products. In this work, the utility of a traditional single-point NIR measurement was compared with that of a spatially resolved spectroscopic (SRS) measurement for the determination of tablet assay. Experimental designs were used to create samples that allowed for calibration models to be developed and tested on both instruments. Samples possessing a poor distribution of ingredients (highly heterogeneous) were prepared by under-blending constituents prior to compaction to compare the analytical capabilities of the two NIR methods. The results indicate that SRS can provide spatial information that is usually obtainable only through imaging experiments for the determination of local heterogeneity and detection of abnormal tablets that would not be detected with single-point spectroscopy, thus complementing traditional NIR measurement systems for in-line, real-time tablet analysis. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
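As context for the calibration models mentioned above, the sketch below fits a partial least squares (PLS) model to synthetic NIR spectra for tablet assay. PLS is a common choice for such calibrations but is an assumption here, as are the wavelength range, drug loads, and band shapes.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(1100, 2200, 300)                   # nm (assumed NIR range)
    api = rng.uniform(5, 15, size=120)                            # % w/w drug load per tablet
    band_api = np.exp(-((wavelengths - 1650) / 60.0) ** 2)        # fake API absorption band
    band_excip = np.exp(-((wavelengths - 1450) / 120.0) ** 2)     # fake excipient band
    X = (api[:, None] * band_api + (100 - api)[:, None] * band_excip
         + rng.normal(0, 0.2, size=(120, 300)))

    X_tr, X_te, y_tr, y_te = train_test_split(X, api, random_state=0)
    model = PLSRegression(n_components=3).fit(X_tr, y_tr)
    rmsep = np.sqrt(np.mean((model.predict(X_te).ravel() - y_te) ** 2))
    print("RMSEP: %.2f %% w/w" % rmsep)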
Nanometer-scale surface potential and resistance mapping of wide-bandgap Cu(In,Ga)Se2 thin films
NASA Astrophysics Data System (ADS)
Jiang, C.-S.; Contreras, M. A.; Mansfield, L. M.; Moutinho, H. R.; Egaas, B.; Ramanathan, K.; Al-Jassim, M. M.
2015-01-01
We report microscopic characterization studies of wide-bandgap Cu(In,Ga)Se2 photovoltaic thin films using the nano-electrical probes of scanning Kelvin probe force microscopy and scanning spreading resistance microscopy. With increasing bandgap, potential imaging shows significant increases both in large potential features due to extended defects or defect aggregations and in the potential fluctuation due to unresolvable point defects carrying one or a few charges. Resistance imaging shows increases in both overall resistance and resistance nonuniformity due to defects in the subsurface region. These defects are expected to affect the open-circuit voltage once the surface is converted into a junction upon device completion.
Plasmonic computing of spatial differentiation
NASA Astrophysics Data System (ADS)
Zhu, Tengfeng; Zhou, Yihan; Lou, Yijie; Ye, Hui; Qiu, Min; Ruan, Zhichao; Fan, Shanhui
2017-05-01
Optical analog computing offers high-throughput low-power-consumption operation for specialized computational tasks. Traditionally, optical analog computing in the spatial domain uses a bulky system of lenses and filters. Recent developments in metamaterials enable the miniaturization of such computing elements down to a subwavelength scale. However, the required metamaterial consists of a complex array of meta-atoms, and direct demonstration of image processing is challenging. Here, we show that the interference effects associated with surface plasmon excitations at a single metal-dielectric interface can perform spatial differentiation, and we experimentally demonstrate edge detection of an image without any Fourier lens. This work points to a simple yet powerful mechanism for optical analog computing at the nanoscale.
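Numerically, first-order spatial differentiation acts as an edge detector because a transfer function proportional to the spatial frequency k_x suppresses flat regions and passes abrupt changes. The following sketch applies such a transfer function in the Fourier domain to a toy image; it stands in for the plasmonic device conceptually and assumes nothing about its actual optical response.

    import numpy as np

    img = np.zeros((128, 128)); img[40:90, 30:100] = 1.0   # toy image with sharp edges

    kx = np.fft.fftfreq(img.shape[1])                       # spatial frequencies along x
    H = 1j * 2 * np.pi * kx                                 # transfer function of d/dx
    edges = np.fft.ifft(H * np.fft.fft(img, axis=1), axis=1).real

    profile = np.abs(edges).max(axis=0)                     # edge strength per column
    print("strongest responses at columns:", np.argsort(profile)[-2:])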
Evaluation of PET Imaging Resolution Using 350 μm Pixelated CZT as a VP-PET Insert Detector
NASA Astrophysics Data System (ADS)
Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2014-02-01
A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied in high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between the CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in virtual-pinhole PET geometry. The LSO detector is a 20 × 20 array with 1.6 mm pitch and 10 mm thickness. The CZT detector uses a 20 × 20 × 5 mm substrate with 350 μm pitch pixelated anodes and a coplanar cathode. A 250 μm diameter NEMA NU-4 Na-22 point source was imaged with this system. Experiments show that the image resolution of single-pixel photopeak events was 590 μm FWHM, while that of double-pixel photopeak events was 640 μm FWHM. The inclusion of double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation of the same PET system in the Geant4 Application for Emission Tomography (GATE), defining the LSO detectors as a scanner ring and the 350 μm pixelated CZT detectors as an insert ring. The GATE-simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of the MC-simulated data (which did not account for positron range and acolinearity effects) was 460 μm FWHM for single-pixel events. The image resolutions of experimental data, MC-simulated data, and theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. An interpolation algorithm for charge-sharing events was also investigated; the PET image reconstructed with interpolation shows improved resolution compared with that obtained without it.
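One plausible reading of the interpolation algorithm for charge-sharing (double-pixel) events is an energy-weighted centroid between the two anode pixels, sketched below. The pixel pitch is taken from the abstract, while the event format and weighting scheme are assumptions rather than the authors' implementation.

    import numpy as np

    PITCH_UM = 350.0                                        # anode pixel pitch from the abstract

    def interaction_position(hit_pixels, energies):
        """Energy-weighted centroid of the pixels that collected charge (micrometres)."""
        pix = np.asarray(hit_pixels, dtype=float)
        e = np.asarray(energies, dtype=float)
        return PITCH_UM * (pix * e[:, None]).sum(axis=0) / e.sum()

    # single-pixel photopeak event: the estimate is just the pixel centre
    print(interaction_position([(10, 12)], [511.0]))
    # double-pixel (charge-sharing) event: the estimate shifts toward the pixel
    # that collected the larger share of the charge
    print(interaction_position([(10, 12), (10, 13)], [360.0, 151.0]))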
Miskowiak, Kamilla W; Kessing, Lars V; Ott, Caroline V; Macoveanu, Julian; Harmer, Catherine J; Jørgensen, Anders; Revsbech, Rasmus; Jensen, Hans M; Paulson, Olaf B; Siebner, Hartwig R; Jørgensen, Martin B
2017-09-01
Negative neurocognitive bias is a core feature of major depressive disorder that is reversed by pharmacological and psychological treatments. This double-blind functional magnetic resonance imaging study investigated for the first time whether electroconvulsive therapy modulates negative neurocognitive bias in major depressive disorder. Patients with major depressive disorder were randomised to a single active (n=15) or sham (n=12) electroconvulsive therapy session. The following day they underwent whole-brain functional magnetic resonance imaging at 3T while viewing emotional faces and performed facial expression recognition and dot-probe tasks. A single electroconvulsive therapy session had no effect on amygdala response to emotional faces. Whole-brain analysis revealed no effects of electroconvulsive therapy versus sham therapy after family-wise error correction at the cluster level, using a cluster-forming threshold of Z>3.1 (p<0.001) to keep the family-wise error below 5%. Groups showed no differences in behavioural measures, mood, or medication. Exploratory cluster-corrected whole-brain analysis (Z>2.3; p<0.01) revealed electroconvulsive therapy-induced changes in parahippocampal and superior frontal responses to fearful versus happy faces, as well as in fear-specific functional connectivity between the amygdala and occipito-temporal regions. Across all patients, greater fear-specific amygdala-occipital coupling correlated with lower fear vigilance. Although no statistically significant shift in neural response to faces was observed after a single electroconvulsive therapy session, the trend-level changes point to an early shift in emotional processing that may contribute to the antidepressant effects of electroconvulsive therapy.
NASA Astrophysics Data System (ADS)
McReynolds, Naomi; Cooke, Fiona G. M.; Chen, Mingzhou; Powis, Simon J.; Dholakia, Kishan
2017-03-01
The ability to identify and characterise individual cells of the immune system under label-free conditions would be a significant advantage in biomedical and clinical studies where untouched and unmodified cells are required. We present a multi-modal system capable of simultaneously acquiring both single-point Raman spectra and digital holographic images of single cells. We use this combined approach to identify and discriminate between the immune cell populations CD4+ T cells, B cells, and monocytes. We investigate several approaches to interpreting the phase images, including signal intensity histograms and texture analysis. Both modalities are independently able to discriminate between cell subsets, and the dual modality may therefore be used as a means of validation. We demonstrate sensitivities in the range of 86.8% to 100% and specificities in the range of 85.4% to 100%. Additionally, each modality provides information not available from the other, yielding both a molecular and a morphological signature of each cell.
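For reference, the sensitivity and specificity figures quoted above follow from a one-vs-rest reading of a classifier's confusion matrix. The sketch below computes these per-class metrics on synthetic predicted and true cell-type labels; the classifier, features, and label quality are placeholders, not the study's pipeline.

    import numpy as np

    classes = ["CD4+ T cell", "B cell", "monocyte"]
    rng = np.random.default_rng(1)
    truth = rng.integers(0, 3, size=300)
    # placeholder predictions that agree with the truth ~90% of the time
    pred = np.where(rng.random(300) < 0.9, truth, rng.integers(0, 3, size=300))

    for i, name in enumerate(classes):
        tp = np.sum((pred == i) & (truth == i))
        fn = np.sum((pred != i) & (truth == i))
        tn = np.sum((pred != i) & (truth != i))
        fp = np.sum((pred == i) & (truth != i))
        print("%-12s sensitivity %.1f%%  specificity %.1f%%"
              % (name, 100 * tp / (tp + fn), 100 * tn / (tn + fp)))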
Synthetic aperture radar and digital processing: An introduction
NASA Technical Reports Server (NTRS)
Dicenzo, A.
1981-01-01
A tutorial on synthetic aperture radar (SAR) is presented with emphasis on digital data collection and processing. Background information on waveform frequency and phase notation, mixing, I/Q conversion, sampling, and cross-correlation operations is included for clarity. The fate of a SAR signal from transmission to processed image is traced in detail, using the model of a single bright point target against a dark background. Some of the principal problems connected with SAR processing are also discussed.
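The cross-correlation operation mentioned above is the heart of digital range compression: the received echo is correlated against a replica of the transmitted chirp, collapsing the point target's extended echo into a sharp peak. The sketch below demonstrates this on a synthetic single-point-target signal with assumed waveform parameters.

    import numpy as np

    fs = 100e6                                     # complex sampling rate, Hz (assumed)
    T, B = 10e-6, 30e6                             # chirp duration and bandwidth (assumed)
    t = np.arange(0, T, 1 / fs)
    chirp = np.exp(1j * np.pi * (B / T) * t ** 2)  # linear-FM reference waveform

    echo = np.zeros(4096, dtype=complex)
    delay = 1500                                   # point-target range expressed in samples
    echo[delay:delay + chirp.size] += 0.5 * chirp
    echo += 0.05 * (np.random.randn(4096) + 1j * np.random.randn(4096))

    # range compression: cross-correlate the received signal with the transmitted replica
    compressed = np.correlate(echo, chirp, mode="valid")
    print("point target detected at sample", int(np.argmax(np.abs(compressed))))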