DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée; McKay, Erin
Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software handles the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of the approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce any imaging protocol precisely and to provide reference dosimetry. For image generation, TestDose stores the user’s imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. The resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times.
Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. Conclusions: The proposed platform offers a generic framework to implement any scintigraphic imaging protocol and voxel- or organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities, such as positron emission tomography, could be supported in the future.
Garcia, Marie-Paule; Villoing, Daphnée; McKay, Erin; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel
2015-12-01
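The compartment-weighting scheme this abstract describes (simulate each compartment once, then weight by pharmacokinetic activity and aggregate) can be sketched as follows. The mono-exponential washout, organ names, and numeric values are illustrative assumptions, not TestDose's actual model:

```python
import numpy as np

# Hypothetical mono-exponential washout standing in for the
# pharmacokinetic model; half-lives and activities are illustrative.
def activity(a0_mbq, half_life_h, t_h):
    return a0_mbq * 0.5 ** (t_h / half_life_h)

def aggregate_projections(projections, activities):
    """Weight each compartment's unit-activity projection by its
    modeled activity at the acquisition time and sum the results,
    mirroring the "simulate once, weight, aggregate" scheme."""
    return sum(activities[name] * proj for name, proj in projections.items())

# Two toy compartments with constant unit-activity projections:
projections = {"liver": np.ones((4, 4)), "kidneys": 2.0 * np.ones((4, 4))}
activities = {"liver": activity(100.0, 6.0, 6.0),    # 100 MBq, one half-life
              "kidneys": activity(50.0, 3.0, 6.0)}   # 50 MBq, two half-lives
image = aggregate_projections(projections, activities)
```

Because each compartment is simulated only once, re-weighting for a different acquisition time only requires re-evaluating the pharmacokinetic model, not a new Monte Carlo run.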
NASA Astrophysics Data System (ADS)
Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.
2016-05-01
The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology that uses simulated imagery to augment the field-testing component of sensor performance evaluation, which is expensive, resource-intensive, time-consuming, and limited to the available target(s) and the atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging and Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost and time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been run on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and field-collected images. Evaluating the degree to which algorithm performance agrees between simulated and field-collected imagery is the first step in validating the simulated-imagery procedure.
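The validation step, checking whether a simple detector scores the same on simulated and collected imagery, can be sketched as a toy comparison. The detector, frames, and threshold below are hypothetical stand-ins, not NVESD's algorithms or DIRSIG output:

```python
import numpy as np

def detect(image, threshold):
    """Toy hot-spot detector standing in for the simple detection
    algorithms run on degraded imagery: flag a frame if any pixel
    exceeds the threshold."""
    return bool((image > threshold).any())

def detection_rate(frames, threshold):
    """Fraction of frames in which a target is declared."""
    return sum(detect(f, threshold) for f in frames) / len(frames)

# Deterministic toy frame sets: half contain a bright "target" pixel,
# and the "collected" set is slightly dimmer than the "simulated" one.
target = np.zeros((8, 8)); target[4, 4] = 10.0
clutter = np.ones((8, 8))
simulated = [target, clutter, target, clutter]
collected = [0.9 * target, clutter, 0.9 * target, clutter]
gap = abs(detection_rate(simulated, 5.0) - detection_rate(collected, 5.0))
```

A small `gap` indicates the simulated imagery drives the detector the same way the field imagery does, which is the agreement the methodology is trying to establish.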
Improving Arterial Spin Labeling by Using Deep Learning.
Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong
2018-05-01
Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
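The conventional baseline the CNN is compared against, averaging pairwise control-label subtraction images, and the reduced input handed to the network can be sketched as below. The array shapes are illustrative, and the actual network architecture is deliberately not reproduced:

```python
import numpy as np

def conventional_average(subtractions):
    """Baseline in the study: average all pairwise control-label
    subtraction images to form the perfusion image."""
    return np.mean(subtractions, axis=0)

def cnn_input(subtractions, n=2):
    """Channel-wise stack of only n subtractions: the reduced input
    from which the network learns to predict the full average.
    (The CNN itself is not reproduced in this sketch.)"""
    return np.stack(subtractions[:n], axis=0)

# Six synthetic 2x2 "subtraction images" with increasing signal:
subs = [np.full((2, 2), float(v)) for v in range(1, 7)]
perfusion = conventional_average(subs)   # ground-truth-style average
reduced = cnn_input(subs, n=2)           # what the CNN would consume
```

The study's claim is that a trained network mapping `reduced` to `perfusion` beats simply averaging the two available subtractions, since it can also suppress noise and motion artifacts.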
Lee, Jasper; Zhang, Jianguo; Park, Ryan; Dagliyan, Grant; Liu, Brent; Huang, H K
2012-07-01
A Molecular Imaging Data Grid (MIDG) was developed to address current informatics challenges in the archival, sharing, search, and distribution of preclinical imaging studies between animal imaging facilities and investigator sites. This manuscript presents a 2nd-generation MIDG that replaces the Globus Toolkit with a new system architecture implementing the IHE XDS-i integration profile. Implementation and evaluation were conducted using a 3-site interdisciplinary test-bed at the University of Southern California. The 2nd-generation MIDG architecture replaces the initial design's Globus Toolkit with dedicated web services and XML-based messaging for the management and delivery of multi-modality DICOM imaging datasets. The Cross-enterprise Document Sharing for Imaging (XDS-i) integration profile from enterprise radiology informatics was adopted into the MIDG design because preclinical imaging informatics systems need the same streamlined image registration, management, and distribution dataflow as enterprise PACS applications. Implementation of the MIDG is demonstrated at the University of Southern California Molecular Imaging Center (MIC) and two other sites with specified hardware, software, and network bandwidth. Evaluation of the MIDG involved data upload, download, and fault-tolerance testing scenarios using multi-modality animal imaging datasets collected at the USC Molecular Imaging Center. The upload, download, and fault-tolerance tests were performed multiple times using 12 collected animal study datasets. Upload and download times demonstrated reproducibility and improved real-world performance. Fault-tolerance tests showed that automated failover between Grid Node Servers has minimal impact on normal download times.
Building upon the 1st generation concepts and experiences, the 2nd generation MIDG system improves accessibility of disparate animal-model molecular imaging datasets to users outside a molecular imaging facility's LAN using a new architecture, dataflow, and dedicated DICOM-based management web services. Productivity and efficiency of preclinical research for translational sciences investigators has been further streamlined for multi-center study data registration, management, and distribution.
Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre
2012-01-01
Background Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). 
We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
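A simplified version of the surrogate-based association test can be sketched with Fourier amplitude-preserving, phase-randomized surrogates. Note the assumption: the paper matches dual-tree complex wavelet spectra, so the plain FFT used here is only a stand-in for that machinery:

```python
import numpy as np

def surrogate(image, rng):
    """Random-phase surrogate: keep the image's Fourier amplitude
    spectrum (hence its autocorrelation, by Wiener-Khinchin) and
    randomize phases. The paper uses dual-tree complex wavelet
    spectra instead; this FFT version is a simplified stand-in."""
    f = np.fft.fft2(image - image.mean())
    phase = np.exp(2j * np.pi * rng.random(image.shape))
    return np.fft.ifft2(np.abs(f) * phase).real

def association_test(x, y, n=99, seed=1):
    """Monte Carlo p-value for the correlation between maps x and y
    under the null of independent pattern-generating processes."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(x.ravel(), y.ravel())[0, 1]
    null = [np.corrcoef(surrogate(x, rng).ravel(), y.ravel())[0, 1]
            for _ in range(n)]
    return (1 + sum(abs(r) >= abs(r_obs) for r in null)) / (n + 1)

# Two strongly related synthetic maps should yield a small p-value.
rng0 = np.random.default_rng(0)
base = rng0.normal(size=(16, 16))
related = base + 0.1 * rng0.normal(size=(16, 16))
p_related = association_test(base, related)
```

Because each surrogate preserves the autocorrelation of `x`, the null distribution accounts for spatial dependence that would invalidate a textbook test of Pearson's r.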
NASA Astrophysics Data System (ADS)
Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary
1999-01-01
The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with 'actual' truth measurements of the entire image area that are not subject to measurement error, thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors, allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.
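Per-pixel truth from a SIG tool makes pixel-level scoring of an exploitation algorithm straightforward. A minimal sketch of such an evaluation, using hypothetical prediction and truth masks, might be:

```python
import numpy as np

def pixel_scores(pred_mask, truth_mask):
    """Precision and recall of a per-pixel detection mask against
    SIG-supplied per-pixel ground truth (boolean arrays)."""
    tp = np.sum(pred_mask & truth_mask)    # true positives
    fp = np.sum(pred_mask & ~truth_mask)   # false alarms
    fn = np.sum(~pred_mask & truth_mask)   # misses
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical 4x4 scene: a 2x2 target, and a detector that also
# flags two extra pixels to the right of it.
truth = np.zeros((4, 4), bool); truth[1:3, 1:3] = True
pred = np.zeros((4, 4), bool); pred[1:3, 1:4] = True
p, r = pixel_scores(pred, truth)
```

With measured imagery, `truth` would carry instrumentation error; with SIG imagery it is exact, which is the evaluation advantage the abstract highlights.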
An HWIL test facility of infrared imaging laser radar using direct signal injection
NASA Astrophysics Data System (ADS)
Wang, Qian; Lu, Wei; Wang, Chunhui; Wang, Qi
2005-01-01
Laser radar has been widely used in recent years, and hardware-in-the-loop (HWIL) testing of laser radar has become important because of its low cost and high fidelity compared with on-the-fly testing and all-digital simulation. Scene generation and projection are two key technologies of hardware-in-the-loop testing of laser radar, and they are complicated because the 3D images result from time delay. The scene generation process begins with the definition of the target geometry, reflectivity, and range. The real-time 3D scene generation computer is PC-based hardware, and the 3D target models were built in 3ds Max. The scene generation software, written in C and OpenGL, extracts the Z-buffer from the bit planes to main memory as a range image; these pixels contain each target position x, y, z and its respective intensity and range value. Work on expensive optical-injection scene projection technologies, such as LDP arrays, VCSEL arrays, and DMDs, and on the associated scene generation is ongoing, but optical scene projection is complicated and often unaffordable. In this paper a cheaper test facility is described that uses direct electronic injection to provide range images for laser radar testing. The electronic delay and pulse-shaping circuits inject the scenes directly into the seeker's signal processing unit.
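The Z-buffer read back from OpenGL stores normalized depth, not metric range, so a conversion step is implied when the depth buffer is used as a range image. A sketch of the standard inversion for a perspective projection follows; the clip-plane values are illustrative, not the facility's actual settings:

```python
import numpy as np

def zbuffer_to_range(zbuf, near, far):
    """Recover metric range from normalized OpenGL depth-buffer
    values (0..1) produced by a standard perspective projection.
    near/far are the clip-plane distances of that projection."""
    ndc = 2.0 * zbuf - 1.0  # window depth -> normalized device coords
    return 2.0 * near * far / (far + near - ndc * (far - near))

# Endpoint sanity check: 0 maps to the near plane, 1 to the far plane.
zbuf = np.array([0.0, 0.5, 1.0])
ranges = zbuffer_to_range(zbuf, 1.0, 1000.0)
```

The hyperbolic mapping also shows why depth precision is concentrated near the camera: the midpoint value 0.5 corresponds to a range of only about 2 units here, not 500.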
Generation and assessment of turntable SAR data for the support of ATR development
NASA Astrophysics Data System (ADS)
Cohen, Marvin N.; Showman, Gregory A.; Sangston, K. James; Sylvester, Vincent B.; Gostin, Lamar; Scheer, C. Ruby
1998-10-01
Inverse synthetic aperture radar (ISAR) imaging on a turntable-tower test range permits convenient generation of high-resolution two-dimensional images of radar targets under controlled conditions for testing SAR image processing and for supporting automatic target recognition (ATR) algorithm development. However, turntable ISAR images are often obtained under near-field geometries and hence may suffer geometric distortions not present in airborne SAR images. In this paper, turntable data collected at Georgia Tech's Electromagnetic Test Facility are used to begin to assess the utility of two-dimensional ISAR imaging algorithms in forming images to support ATR development. The imaging algorithms considered include a simple 2-D discrete Fourier transform (DFT), a 2-D DFT with geometric correction based on image-domain resampling, and a computationally intensive geometric matched-filter solution. Images formed with the various algorithms are used to develop ATR templates, which are then compared with an eye toward utilization in an ATR algorithm.
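The simplest of the compared algorithms, the plain 2-D DFT image former, can be sketched on a synthetic single-scatterer phase history. The data model is an idealized far-field assumption, not the Georgia Tech collection, and no near-field correction is applied:

```python
import numpy as np

def isar_image(phase_history):
    """Simple 2-D DFT imaging: transform the (frequency x aspect)
    turntable phase history into a (range x cross-range) magnitude
    image under far-field, small-angle assumptions."""
    return np.fft.fftshift(np.abs(np.fft.ifft2(phase_history)))

# Synthetic phase history of a single point scatterer: a 2-D complex
# sinusoid, which the DFT focuses into a single bright pixel.
N = 32
k, m = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
history = np.exp(-2j * np.pi * (5 * k + 9 * m) / N)
img = isar_image(history)
peak = np.unravel_index(np.argmax(img), img.shape)
```

The near-field distortions the paper discusses appear when the true phase history deviates from this ideal sinusoid model, smearing the focused peak; the corrected algorithms exist to undo exactly that.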
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter-type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
NASA Technical Reports Server (NTRS)
Smith, Nathanial T.; Durston, Donald A.; Heineck, James T.
2017-01-01
In support of NASA's Commercial Supersonics Technology (CST) project, a test was conducted in the 9- by 7-ft supersonic section of the NASA Ames Unitary Plan Wind Tunnel (UPWT). The tests were designed to study the interaction of shocks with a supersonic jet characteristic of those that may occur on a commercial supersonic aircraft. Multiple shock-generating geometries were tested to examine the interaction dynamics as they pertain to sonic boom mitigation. An integral part of the analysis of these interactions is the interpretation of the data generated by the retroreflective Background Oriented Schlieren (RBOS) imaging technique employed for this test. The regularization-based optical flow methodology used to generate these data is described. Sample results are compared to those using normalized cross-correlation. The reduced noise, additional feature detail, and fewer false artifacts provided by the optical flow technique produced clearer time-averaged images, allowing for better interpretation of the underlying flow phenomena. These images, coupled with pressure signatures in the near field, are used to provide an overview of the detailed interaction flowfields.
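The normalized cross-correlation baseline the optical-flow results are compared against can be sketched as an integer-pixel patch search between a reference and a distorted schlieren frame. Window sizes and the test images below are illustrative, not the RBOS data:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def displacement(ref, img, y, x, w=4, search=3):
    """Estimate the (dy, dx) shift of the w x w patch of `ref` at
    (y, x) by exhaustive NCC over a +/- search window. Integer-pixel
    only; real BOS correlators add sub-pixel peak interpolation."""
    patch = ref[y:y + w, x:x + w]
    best, best_score = (0, 0), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img[y + dy:y + dy + w, x + dx:x + dx + w]
            score = ncc(patch, cand)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# Synthetic check: a frame shifted by (1, 2) pixels should be recovered.
rng = np.random.default_rng(2)
ref = rng.random((20, 20))
img = np.roll(ref, (1, 2), axis=(0, 1))
shift = displacement(ref, img, 8, 8)
```

The block-wise search explains the comparison in the abstract: correlation yields one vector per window and blurs fine structure, whereas the regularized optical-flow solver estimates a dense, per-pixel displacement field.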
DSM Generation from ALOS/PRISM Images Using SAT-PP
NASA Astrophysics Data System (ADS)
Wolff, Kirsten; Gruen, Armin
2008-11-01
Accurate DSMs are among the most important products of ALOS/PRISM image data. To exploit the full resolution of PRISM for DSM generation, a highly developed image matcher is needed. As a member of the validation and calibration team for PRISM, we published earlier results of DSM generation using PRISM image triplets in combination with our software package SAT-PP. The overall accuracy across all object and image features for all tests lies between 1 and 5 pixels in matching, depending primarily on surface roughness, vegetation, image texture and image quality. Here we discuss some new results. We focus on four topics: the use of two different evaluation methods, the difference between a 5 m and a 10 m GSD for the final PRISM DSM, the influence of the level of initial information, and a comparison of the quality of different combinations of the three views forward, nadir and backward. All tests have been conducted with our test field Bern/Thun, Switzerland.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Pouliot, J
2015-06-15
Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student's t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions, within 1–6% for the 10 virtual phantoms and 9% for the physical phantom. For one case, though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates.
This ability would be useful for patient-specific DIR quality assurance.
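A per-voxel uncertainty estimate in the spirit of AUTODIRECT's Student's t approach can be sketched as follows, under the assumption that the DIR errors observed on the k test deformations (with known ground truth) are available for a voxel. The tool's actual statistical machinery is more involved than this sketch:

```python
import math

def error_bound(test_errors, confidence=0.95):
    """One-sided Student-t upper bound on the true per-voxel error,
    from the errors a clinical DIR made on k test deformations with
    known ground truth. One-sided 95% t-quantiles are hard-coded for
    the small k values relevant here (e.g. k=4 -> df=3 -> 2.353)."""
    k = len(test_errors)
    mean = sum(test_errors) / k
    var = sum((e - mean) ** 2 for e in test_errors) / (k - 1)
    t95 = {2: 2.920, 3: 2.353, 4: 2.132, 5: 2.015}[k - 1]
    return mean + t95 * math.sqrt(var / k)

# Hypothetical voxel with errors (in mm) from 4 test deformations:
bound = error_bound([1.0, 2.0, 3.0, 4.0])
```

Repeating this per voxel yields the kind of voxel-specific uncertainty map the abstract describes, which can then be compared against the dose gradient to flag clinically risky regions.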
TESTS OF LOW-FREQUENCY GEOMETRIC DISTORTIONS IN LANDSAT 4 IMAGES.
Batson, R.M.; Borgeson, W.T.
1985-01-01
Tests were performed to investigate the geometric characteristics of Landsat 4 images. The first set of tests was designed to determine the extent of image distortion caused by the physical process of writing the Landsat 4 images on film. The second was designed to characterize the geometric accuracies inherent in the digital images themselves. Test materials consisted of film images of test targets generated by the Laser Beam Recorders at Sioux Falls, the Optronics Photowrite film writer at Goddard Space Flight Center, and digital image files of a strip 600 lines deep across the full width of band 5 of the Washington, D.C., Thematic Mapper scene. The tests were made by least-squares adjustment of an array of measured image points to a corresponding array of control points.
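The least-squares adjustment of measured image points to control points can be sketched as a 2-D affine fit whose residual RMS summarizes the remaining geometric distortion. The point coordinates below are synthetic, and the study's actual adjustment model may include more terms:

```python
import numpy as np

def affine_adjustment(image_pts, control_pts):
    """Least-squares fit of a 2-D affine transform taking measured
    image points onto control points; returns the 3x2 coefficient
    matrix (rows: x, y, 1) and the residual RMS."""
    image_pts = np.asarray(image_pts, float)
    control_pts = np.asarray(control_pts, float)
    A = np.hstack([image_pts, np.ones((len(image_pts), 1))])  # [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, control_pts, rcond=None)
    residuals = control_pts - A @ coeffs
    rms = float(np.sqrt((residuals ** 2).mean()))
    return coeffs, rms

# Synthetic check: points related by an exact affine map give zero RMS.
img_pts = [[0, 0], [1, 0], [0, 1], [1, 1]]
ctl_pts = [[3 + 2 * x, 5 - y] for x, y in img_pts]
coeffs, rms = affine_adjustment(img_pts, ctl_pts)
```

With real measurements the RMS is nonzero, and its magnitude (and any spatial pattern in the residuals) is what characterizes the film-writer and digital-image distortions the tests were probing.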
Visualization Support for an Army Reconnaissance Mission
1994-02-01
transform an aerial photographic image into an orthophoto image. In this process, the horizontal coordinates and elevation of a point on the ground are... to the corresponding horizontal position on the orthophoto. The result is a new digital image without relief displacement. This orthophoto image will... process, the orthophotos were generated. The generation of one orthophoto for every other photo was sufficient to ensure complete coverage of the test
A micro-vibration generated method for testing the imaging quality on ground of space remote sensing
NASA Astrophysics Data System (ADS)
Gu, Yingying; Wang, Li; Wu, Qingwen
2018-03-01
In this paper, a novel method is proposed that can simulate satellite-platform micro-vibration and test, on the ground, the impact of satellite micro-vibration on the imaging quality of a space optical remote sensor. The method can generate the micro-vibration of a satellite platform in orbit in terms of vibrational degrees of freedom, spectrum, magnitude, and coupling path. Experiment results show that the relative error of acceleration control is within 7% at frequencies from 7 Hz to 40 Hz. Using this method, a system-level test of the impact of micro-vibration on the imaging quality of a space optical remote sensor can be realized. The method will have important applications in testing the micro-vibration tolerance margin of optical remote sensors, verifying their vibration isolation and suppression performance, and exploring the principles of micro-vibration impact on imaging quality.
Near-infrared fluorescence image quality test methods for standardized performance evaluation
NASA Astrophysics Data System (ADS)
Kanniyappan, Udayakumar; Wang, Bohan; Yang, Charles; Ghassemi, Pejhman; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua
2017-03-01
Near-infrared fluorescence (NIRF) imaging has gained much attention as a clinical method for enhancing visualization of cancers, perfusion and biological structures in surgical applications where a fluorescent dye is monitored by an imaging system. In order to address the emerging need for standardization of this innovative technology, it is necessary to develop and validate test methods suitable for objective, quantitative assessment of device performance. Towards this goal, we develop target-based test methods and investigate best practices for key NIRF imaging system performance characteristics, including spatial resolution, depth of field and sensitivity. Fluorescence properties were characterized by generating excitation-emission matrices of indocyanine green and quantum dots in biological solutions and matrix materials. A turbid, fluorophore-doped target was used, along with a resolution target for assessing image sharpness. Multi-well plates filled with either liquid or solid targets were generated to explore best practices for evaluating detection sensitivity. Overall, our results demonstrate the utility of objective, quantitative, target-based testing approaches as well as the need to consider a wide range of factors in establishing standardized approaches for NIRF imaging system performance.
EGG: Empirical Galaxy Generator
NASA Astrophysics Data System (ADS)
Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; Michałowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.
2018-04-01
The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).
Vision Algorithm for the Solar Aspect System of the HEROES Mission
NASA Technical Reports Server (NTRS)
Cramer, Alexander; Christe, Steven; Shih, Albert
2014-01-01
This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine the relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.
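The matched-filter stage of the fiducial-detection step might be sketched as follows. The frame, fiducial pattern, and template are hypothetical, and the other three pipeline stages (circle detection, identification, registration) are omitted:

```python
import numpy as np

def matched_filter(image, template):
    """Correlate the image with a zero-mean template (a naive 'valid'
    correlation); response peaks mark candidate fiducial locations."""
    t = template - template.mean()
    th, tw = t.shape
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + th, j:j + tw] * t).sum()
    return out

# Hypothetical frame: one bright fiducial dot at row 6, column 8,
# and a matching 3x3 template with a bright center.
frame = np.zeros((16, 16))
frame[6, 8] = 1.0
template = np.zeros((3, 3)); template[1, 1] = 1.0
response = matched_filter(frame, template)
top_left = np.unravel_index(np.argmax(response), response.shape)
```

Subtracting the template mean makes the filter ignore uniform background, so the peak (here at the window whose top-left corner is one pixel up and left of the dot) localizes the fiducial rather than bright flat regions.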
Vision Algorithm for the Solar Aspect System of the HEROES Mission
NASA Technical Reports Server (NTRS)
Cramer, Alexander; Christe, Steven; Shih, Albert
2014-01-01
This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an Average Intersection method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.
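The final registration step is a standard linear least-squares problem. A minimal sketch of fitting a transform from matched fiducial coordinates (not the HEROES flight code; the similarity-transform parameterization is an assumption):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (a, b, tx, ty) with
    x' = a*x - b*y + tx  and  y' = b*x + a*y + ty."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1  # x equations
    A[1::2, 0] = src[:, 1]; A[1::2, 1] = src[:, 0];  A[1::2, 3] = 1  # y equations
    rhs = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params  # a = s*cos(theta), b = s*sin(theta), tx, ty
```

With the rotation and translation recovered, pitch and yaw offsets follow directly from the translation component.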
Bedding disposal cabinet for containment of aerosols generated by animal cage cleaning procedures.
Baldwin, C L; Sabel, F L; Henke, C B
1976-01-01
Laboratory tests with aerosolized spores and animal room tests with uranine dye indicate the effectiveness of a prototype bedding disposal cabinet in reducing airborne contamination generated by cage cleaning procedures. PMID:826219
High Density Aerial Image Matching: State-of-the-Art and Future Prospects
NASA Astrophysics Data System (ADS)
Haala, N.; Cavegn, S.
2016-06-01
Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at evaluating photogrammetric 3D data capture in view of the current developments in dense multi-view stereo image matching. Originally, the test aimed at image-based DSM computation from conventional aerial image flights for different land-use and image block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights and the generation of filtered point clouds as an additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state-of-the-art in airborne image matching, with a special focus on high-quality geometric data capture in urban scenarios.
Hase, E; Sato, K; Yonekura, D; Minamikawa, T; Takahashi, M; Yasui, T
2016-11-01
This study aimed to evaluate the histological and mechanical features of tendon healing in a rabbit model with second-harmonic-generation (SHG) imaging and tensile testing. A total of eight male Japanese white rabbits were used for this study. The flexor digitorum tendons in their right leg were sharply transected, and then were repaired by intratendinous stitching. At four weeks post-operatively, the rabbits were killed and the flexor digitorum tendons in both right and left legs were excised and used as specimens for tendon healing (n = 8) and control (n = 8), respectively. Each specimen was examined by SHG imaging, followed by tensile testing, and the results of the two testing modalities were assessed for correlation. While the SHG light intensity of the healing tendon samples was significantly lower than that of the uninjured tendon samples, 2D Fourier transform SHG images showed a clear difference in collagen fibre structure between the uninjured and the healing samples, and among the healing samples. The mean intensity of the SHG image showed a moderate correlation (R² = 0.37) with Young's modulus obtained from the tensile testing. Our results indicate that SHG microscopy may be a potential indicator of tendon healing. Cite this article: E. Hase, K. Sato, D. Yonekura, T. Minamikawa, M. Takahashi, T. Yasui. Evaluation of the histological and mechanical features of tendon healing in a rabbit model with the use of second-harmonic-generation imaging and tensile testing. Bone Joint Res 2016;5:577-585. DOI: 10.1302/2046-3758.511.BJR-2016-0162.R1. © 2016 Yasui et al.
Measurement of meat color using a computer vision system.
Girolami, Antonio; Napolitano, Fabio; Faraone, Daniela; Braghieri, Ada
2013-01-01
The limits of the colorimeter and a technique of image analysis in evaluating the color of beef, pork, and chicken were investigated. The Minolta CR-400 colorimeter and a computer vision system (CVS) were employed to measure colorimetric characteristics. To evaluate the chromatic fidelity of the image of the sample displayed on the monitor, a similarity test was carried out using a trained panel. The panelists found the digital images of the samples visualized on the monitor very similar to the actual ones (P<0.001). During the first similarity test the panelists observed both the actual meat sample and the sample image on the monitor at the same time, in order to evaluate the similarity between them (test A). Moreover, the panelists were asked to evaluate the similarity between two colors, both generated by the software Adobe Photoshop CS3: one using the L, a, and b values read by the colorimeter, and the other using the values obtained by the CVS (test B); which of the two colors was more similar to the sample visualized on the monitor was also assessed (test C). As to the similarity (test B) between the CVS- and colorimeter-based colors, the panelists found significant differences between them (P<0.001). Test C showed that the color of the sample on the monitor was more similar to the CVS-generated color than to the colorimeter-generated color. The differences between the values of L, a, b, hue angle and chroma obtained with the CVS and the colorimeter were statistically significant (P<0.05-0.001). These results showed that the colorimeter did not generate coordinates corresponding to the true color of meat. Instead, the CVS method seemed to give valid measurements that reproduced a color very similar to the real one. Copyright © 2012 Elsevier Ltd. All rights reserved.
Image thumbnails that represent blur and noise.
Samadani, Ramin; Mauer, Timothy A; Berfanger, David M; Clark, James H
2010-02-01
The information about the blur and noise of an original image is lost when a standard image thumbnail is generated by filtering and subsampling. Image browsing becomes difficult since the standard thumbnails do not distinguish between high-quality and low-quality originals. In this paper, an efficient algorithm with a blur-generating component and a noise-generating component preserves the local blur and the noise of the originals. The local blur is rapidly estimated using a scale-space expansion of the standard thumbnail and subsequently used to apply a space-varying blur to the thumbnail. The noise is estimated and rendered by using multirate signal transformations that allow most of the processing to occur at the lower spatial sampling rate of the thumbnail. The new thumbnails provide a quick, natural way for users to identify images of good quality. A subjective evaluation shows the new thumbnails are more representative of their originals for blurry images. The noise-generating component improves the results for noisy images, but degrades the results for textured images. The blur-generating component of the new thumbnails may always be used to advantage. The decision to use the noise-generating component of the new thumbnails should be based on testing with the particular image mix expected for the application.
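The blur-generating idea can be sketched as blending a sharp thumbnail with a blurred copy, weighted by a local-blur map. This is a crude stand-in: the paper's scale-space blur estimate is replaced here by a simple gradient-energy map (an assumption for illustration only):

```python
import numpy as np

def gauss1d(sigma):
    # normalized 1D Gaussian kernel, truncated at 3 sigma
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gauss_blur(img, sigma):
    # separable Gaussian blur (zero-padded borders)
    k = gauss1d(sigma)
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, tmp)

def blur_aware_thumbnail(thumb, sigma=2.0):
    """Blend a sharp thumbnail with a blurred copy, weighted by a crude
    local-blur map: low gradient energy -> assume the original was blurry."""
    gy, gx = np.gradient(thumb.astype(float))
    energy = gauss_blur(np.sqrt(gx**2 + gy**2), sigma)
    w = 1.0 - energy / (energy.max() + 1e-12)   # 1 where flat, 0 where detailed
    return (1 - w) * thumb + w * gauss_blur(thumb, sigma)
```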
Development of an Improved Magneto-Optic/Eddy-Current Imager
DOT National Transportation Integrated Search
1997-04-01
Magneto-optic/eddy-current imaging technology has been developed and approved for inspection of cracks in aging aircraft. This relatively new nondestructive test method gives the inspector the ability to quickly generate real-time eddy-current images...
NASA Astrophysics Data System (ADS)
Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min
2016-01-01
Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of a quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation, and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.
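One common quantum-walk hash construction from the literature drives a coined walk on a cycle with the message bits and reads the final position distribution as the digest. This is a sketch under assumptions: the coin angles, cycle length, and digest extraction are illustrative choices, not necessarily the authors' exact scheme:

```python
import numpy as np

def qwalk_hash(bits, n=17, thetas=(np.pi / 4, np.pi / 3)):
    """Message-controlled discrete-time quantum walk on an n-cycle.
    Each message bit selects one of two coin rotations; the digest is
    extracted from the final position probability distribution."""
    amp = np.zeros((n, 2), dtype=complex)   # amplitude[position, coin]
    amp[0, 0] = 1.0
    for b in bits:
        t = thetas[b]
        C = np.array([[np.cos(t), np.sin(t)],
                      [np.sin(t), -np.cos(t)]])   # orthogonal coin operator
        amp = amp @ C.T                            # coin step at every position
        amp = np.stack([np.roll(amp[:, 0], 1),     # coin 0 shifts right
                        np.roll(amp[:, 1], -1)],   # coin 1 shifts left
                       axis=1)
    probs = (np.abs(amp) ** 2).sum(axis=1)
    digest = bytes(int(p * 1e8) % 256 for p in probs)
    return probs, digest
```

Because every step is unitary, the probabilities always sum to one, and flipping a single message bit changes the evolution from that step onward.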
Measurement of glucose concentration by image processing of thin film slides
NASA Astrophysics Data System (ADS)
Piramanayagam, Sankaranaryanan; Saber, Eli; Heavner, David
2012-02-01
Measurement of glucose concentration is important for diagnosis and treatment of diabetes mellitus and other medical conditions. This paper describes a novel image-processing-based approach for measuring glucose concentration. A fluid drop (patient sample) is placed on a thin film slide. Glucose, present in the sample, reacts with reagents on the slide to produce a color dye. The color intensity of the dye formed varies with the glucose concentration level. Current methods use spectrophotometry to determine the glucose level of the sample. Our proposed algorithm uses an image of the slide, captured at a specific wavelength, to automatically determine glucose concentration. The algorithm consists of two phases: training and testing. Training datasets consist of images at different concentration levels. The dye-occupied image region is first segmented using a Hough-based technique and then an intensity-based feature is calculated from the segmented region. Subsequently, a mathematical model that describes the relationship between the generated feature values and the given concentrations is obtained. During testing, the dye region of a test slide image is segmented, followed by feature extraction. These two initial steps are similar to those done in training. In the final step, however, the algorithm uses the model (feature vs. concentration) obtained from training and the feature generated from the test image to predict the unknown concentration. The performance of the image-based analysis was compared with that of a standard glucose analyzer.
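The training/testing split reduces to fitting a calibration curve from the extracted feature to known concentrations and then evaluating it on new features. A minimal sketch with hypothetical, illustrative calibration data (not values from the paper, and a quadratic model chosen only for illustration):

```python
import numpy as np

# Hypothetical calibration data: mean dye intensity of the segmented region
# at known glucose concentrations (illustrative values only).
feat = np.array([0.9, 0.7, 0.5, 0.3, 0.2])      # normalized mean intensity
conc = np.array([6., 54., 150., 294., 384.])    # mg/dL

coef = np.polyfit(feat, conc, 2)                # quadratic feature -> concentration model

def predict_concentration(feature):
    """Apply the trained calibration model to a feature from a test slide."""
    return float(np.polyval(coef, feature))
```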
Yang, Xiuping; Min, Lequan; Wang, Xue
2015-05-01
This paper sets up a chaos criterion theorem for a class of cubic polynomial discrete maps. Using this theorem, together with Zhou and Song's chaos criterion theorem for quadratic polynomial discrete maps and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite and a generalized FIPS 140-2 test suite were used to test the randomness of two sets of 1000 key streams, each consisting of 20,000 bits generated by the CPRNG. The results show that 99.9%/98.5% of the key streams passed the FIPS 140-2/generalized FIPS 140-2 tests. Numerical simulations show that different key streams have an average of 50.001% of bits in common. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear correlation coefficients between the plaintext, the ciphertext, and the ciphertexts decrypted via the 100 key streams with perturbed keys are less than 0.00428. This suggests that the texts decrypted via key streams generated from perturbed keys of the CPRNG are almost completely independent of the original image, and brute-force attacks would be needed to break the cryptographic system.
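The FIPS 140-2 monobit criterion is easy to state: of 20,000 generated bits, the number of ones must lie strictly between 9725 and 10275. A sketch using a logistic map as a simple chaotic stand-in for the paper's GS system (an assumption; the real CPRNG is eight-dimensional):

```python
import numpy as np

def cprng_bits(seed, nbits, r=4.0):
    """Bitstream from a thresholded logistic map -- a simple chaotic
    stand-in (hypothetical substitution) for the paper's GS system."""
    x = seed
    out = np.empty(nbits, dtype=np.uint8)
    for i in range(nbits):
        x = r * x * (1.0 - x)
        out[i] = 1 if x > 0.5 else 0
    return out

def fips_monobit(bits):
    """FIPS 140-2 monobit test: ones count in a 20,000-bit stream
    must lie strictly between 9725 and 10275."""
    ones = int(np.sum(bits))
    return 9725 < ones < 10275
```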
The importance of the keyword-generation method in keyword mnemonics.
Campos, Alfredo; Amor, Angeles; González, María Angeles
2004-01-01
Keyword mnemonics is, under certain conditions, an effective approach for learning foreign-language vocabulary. It appears to be effective for words with high image vividness but not for words with low image vividness. In this study, two experiments were performed to assess the efficacy of a new keyword-generation procedure (peer generation). In Experiment 1, a sample of 363 high-school students was randomly divided into four groups. The subjects were required to learn L1 equivalents of a list of 16 Latin words (8 with high image vividness, 8 with low image vividness), using a) the rote method, or the keyword method with b) keywords and images generated and supplied by the experimenter, c) keywords and images generated by themselves, or d) keywords and images previously generated by peers (i.e., subjects with similar sociodemographic characteristics). Recall was tested immediately and one week later. For high-vividness words, recall was significantly better in the keyword groups than in the rote-method group. For low-vividness words, learning method had no significant effect. Experiment 2 was basically identical, except that the word lists comprised 32 words (16 high-vividness, 16 low-vividness). In this experiment, the peer-generated-keyword group showed significantly better recall of high-vividness words than the rote-method group and the subject-generated-keyword group; again, however, learning method had no significant effect on recall of low-vividness words.
SPIDER: Next Generation Chip Scale Imaging Sensor Update
NASA Astrophysics Data System (ADS)
Duncan, A.; Kendrick, R.; Ogden, C.; Wuchenich, D.; Thurman, S.; Su, T.; Lai, W.; Chun, J.; Li, S.; Liu, G.; Yoo, S. J. B.
2016-09-01
The Lockheed Martin Advanced Technology Center (LM ATC) and the University of California at Davis (UC Davis) are developing an electro-optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that seeks to provide a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal-plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger-aperture imager in a constrained volume. Our SPIDER imager replaces the traditional optical telescope and digital focal plane detector array with a densely packed interferometer array based on emerging photonic integrated circuit (PIC) technologies that samples the object being imaged in the Fourier domain (i.e., spatial frequency domain), and then reconstructs an image. Our approach replaces the large optics and structures required by a conventional telescope with PICs that are accommodated by standard lithographic fabrication techniques (e.g., complementary metal-oxide-semiconductor (CMOS) fabrication). The standard EO payload integration and test process that involves precision alignment and test of optical components to form a diffraction limited telescope is, therefore, replaced by in-process integration and test as part of the PIC fabrication, which substantially reduces associated schedule and cost. This paper provides an overview of performance data on the second-generation PIC for SPIDER developed under the Defense Advanced Research Projects Agency (DARPA)'s SPIDER Zoom research funding. We also update the design description of the SPIDER Zoom imaging sensor and the second-generation PIC (high- and low-resolution versions).
Processing digital images and calculation of beam emittance (pepper-pot method for the Krion source)
NASA Astrophysics Data System (ADS)
Alexandrov, V. S.; Donets, E. E.; Nyukhalova, E. V.; Kaminsky, A. K.; Sedykh, S. N.; Tuzikov, A. V.; Philippov, A. V.
2016-12-01
Programs based on Wolfram Mathematica and Origin for pre-processing photographs of beam images on the mask are described. Angles of rotation around the axis and in the vertical plane are taken into account when generating the file with image coordinates. Results of the emittance calculation in test mode by the Pep_emit program, written in Visual Basic and using the generated file, are presented.
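Whatever the pre-processing pipeline, the quantity finally computed from the (x, x′) samples is the rms emittance, ε_rms = sqrt(⟨x²⟩⟨x′²⟩ − ⟨xx′⟩²). The Pep_emit internals are not given here, so this is only a sketch of the textbook definition:

```python
import numpy as np

def rms_emittance(x, xp):
    """Statistical (rms) emittance from position x and divergence xp samples."""
    x = np.asarray(x, float)
    xp = np.asarray(xp, float)
    x = x - x.mean()                     # use centered second moments
    xp = xp - xp.mean()
    d = np.mean(x**2) * np.mean(xp**2) - np.mean(x * xp)**2
    return float(np.sqrt(max(d, 0.0)))   # clamp tiny negative round-off
```

A perfectly correlated (laminar) beam has zero rms emittance; uncorrelated spreads multiply.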
The design method of CGH for testing the Φ404, F2 primary mirror
NASA Astrophysics Data System (ADS)
Xie, Nian; Duan, Xueting; Li, Hua
2014-09-01
In order to accurately test the surface quality of a large-diameter aspherical mirror, a kind of binary optical element called a computer-generated hologram (CGH) is widely used. The primary role of the CGH is to generate any desired wavefront to realize phase compensation. In this paper, the CGH design principle and design process are reviewed first. Then an optical testing system for the aspheric mirror that includes a CGH and an imaging element (IE) is presented, and a testing system that includes only a CGH is proposed as well. The CGH is designed for measurement of an aspheric mirror (diameter = 404 mm, F-number = 2). Interferometric simulation test results for the aspheric mirror show that the whole test system obtains the demanded high accuracy. When the CGH is combined with an imaging element in the aspheric compensator, the smallest feature size in the CGH should be decreased. The CGH can also be used to test freeform surfaces with high precision, which is of great significance to the development of freeform surfaces.
Image quality scaling of electrophotographic prints
NASA Astrophysics Data System (ADS)
Johnson, Garrett M.; Patil, Rohit A.; Montag, Ethan D.; Fairchild, Mark D.
2003-12-01
Two psychophysical experiments were performed scaling overall image quality of black-and-white electrophotographic (EP) images. Six different printers were used to generate the images. There were six different scenes included in the experiment, representing photographs, business graphics, and test-targets. The two experiments were split into a paired-comparison experiment examining overall image quality, and a triad experiment judging overall similarity and dissimilarity of the printed images. The paired-comparison experiment was analyzed using Thurstone's Law, to generate an interval scale of quality, and with dual scaling, to determine the independent dimensions used for categorical scaling. The triad experiment was analyzed using multidimensional scaling to generate a psychological stimulus space. The psychophysical results indicated that the image quality was judged mainly along one dimension and that the relationships among the images can be described with a single dimension in most cases. Regression of various physical measurements of the images to the paired comparison results showed that a small number of physical attributes of the images could be correlated with the psychophysical scale of image quality. However, global image difference metrics did not correlate well with image quality.
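Thurstone Case V scaling converts a paired-comparison win matrix into an interval quality scale by averaging inverse-normal-transformed choice proportions. A minimal sketch (the clipping of unanimous cells is one common convention, not necessarily the authors' treatment):

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins):
    """Interval scale from a win-count matrix: wins[i][j] = times i beat j."""
    wins = np.asarray(wins, float)
    denom = wins + wins.T
    np.fill_diagonal(denom, 1.0)            # avoid 0/0 on the diagonal
    p = wins / denom                        # choice proportions
    np.fill_diagonal(p, 0.5)
    p = np.clip(p, 0.01, 0.99)              # guard unanimous cells
    z = np.vectorize(NormalDist().inv_cdf)(p)
    s = z.mean(axis=1)                      # Case V scale value per item
    return s - s.min()                      # anchor the lowest item at 0
```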
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide abundant redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and the POS.
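At the core of the cross-correlation steps is the normalized cross-correlation (NCC) score. A sketch of single-pair template matching, omitting the multi-image geometric constraints that distinguish MIG3C:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / d) if d else 0.0

def best_match(template, search):
    """Exhaustively slide the template over the search window;
    return the offset and score of the best NCC match."""
    th, tw = template.shape
    best, score = None, -2.0
    for r in range(search.shape[0] - th + 1):
        for c in range(search.shape[1] - tw + 1):
            s = ncc(template, search[r:r + th, c:c + tw])
            if s > score:
                best, score = (r, c), s
    return best, score
```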
Evaluation of ZY-3 for Dsm and Ortho Image Generation
NASA Astrophysics Data System (ADS)
d'Angelo, P.
2013-04-01
DSM generation using stereo satellites is an important topic for many applications. China launched the three-line ZY-3 stereo mapping satellite last year. This paper evaluates the ZY-3 performance for DSM and orthophoto generation on two scenes east of Munich. The direct georeferencing performance is tested using survey points, and the 3D RMSE is 4.5 m for the scene evaluated in this paper. After image orientation with GCPs and tie points, a DSM is generated using the Semi-Global Matching algorithm. For two 5 × 5 km² test areas, a LIDAR reference DTM was available. After masking out forest areas, the overall RMSE between the ZY-3 DSM and the LIDAR reference is 2.0 m. Additionally, a qualitative comparison between ZY-3 and Cartosat-1 DSMs is performed.
Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk
2018-06-01
Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radioopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the true known and estimated data was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of the thresholding upon parameters of generated testing image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of the micro-CT quantification. The size of the error increased with decreasing resolution when the voxel size exceeded 1/10 of the typical object size, which simulated the effect of the smallest details that could still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.
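The calibration idea — scoring a voxel-counting pipeline against virtual objects of analytically known volume — can be illustrated with a voxelized ball, a toy stand-in for TeIGen's tubes and pores:

```python
import numpy as np

def ball_stack(n, r):
    """Voxelize a centered ball of radius r (unit-cube units) on an n^3 grid.
    Returns the binary stack, the analytic volume, and the voxel-count estimate."""
    ax = (np.arange(n) + 0.5) / n - 0.5            # voxel-center coordinates
    x, y, z = np.meshgrid(ax, ax, ax, indexing='ij')
    stack = x**2 + y**2 + z**2 <= r**2             # binary segmentation
    vol_true = 4.0 / 3.0 * np.pi * r**3            # known ground truth
    vol_est = stack.sum() / n**3                   # counting-based estimate
    return stack, vol_true, vol_est
```

Comparing `vol_est` against `vol_true` across resolutions reproduces, in miniature, the resolution-dependent bias the paper quantifies.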
ERIC Educational Resources Information Center
Krach, Soren; Hartje, Wolfgang
2006-01-01
The Wada test is at present the method of choice for preoperative assessment of patients who require surgery close to cortical language areas. It is, however, an invasive test with an attached morbidity risk. By now, an alternative to the Wada test is to combine a lexical word generation paradigm with non-invasive imaging techniques. However,…
Image encryption using random sequence generated from generalized information domain
NASA Astrophysics Data System (ADS)
Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu
2016-05-01
A novel image encryption method based on a random sequence generated from the generalized information domain and a permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image, while the random sequences are treated as keystreams. A new factor called the drift factor is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method approximate a one-time pad. Experimental results show that the random sequences pass the NIST statistical tests at a high rate, and extensive analysis demonstrates that the new encryption scheme has superior security.
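The permutation-diffusion architecture itself is compact: a keyed permutation (the P-box) shuffles the pixels, then a keystream is XORed in. A sketch using numpy's seeded generator as a stand-in for the generalized-information-domain sequence source (an assumption; the paper derives its sequences differently):

```python
import numpy as np

def keystreams(key, size):
    """Derive the P-box permutation and XOR keystream from the key."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(size)
    ks = rng.integers(0, 256, size, dtype=np.uint8)
    return perm, ks

def encrypt(img, key):
    perm, ks = keystreams(key, img.size)
    return (img.reshape(-1)[perm] ^ ks).reshape(img.shape)

def decrypt(cipher, key):
    perm, ks = keystreams(key, cipher.size)
    flat = cipher.reshape(-1) ^ ks       # undo diffusion
    out = np.empty_like(flat)
    out[perm] = flat                     # undo permutation
    return out.reshape(cipher.shape)
```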
Ghosh, Adarsh; Singh, Tulika; Singla, Veenu; Bagga, Rashmi; Khandelwal, Niranjan
2017-12-01
Apparent diffusion coefficient (ADC) maps are usually generated by built-in software provided by the MRI scanner vendors; however, various open-source postprocessing software packages are available for image manipulation and parametric map generation. The purpose of this study is to establish the reproducibility of absolute ADC values obtained using different postprocessing software programs. DW images with three b values were obtained with a 1.5-T MRI scanner, and the trace images were obtained. ADC maps were automatically generated by the in-line software provided by the vendor during image generation and were also separately generated on postprocessing software. These ADC maps were compared on the basis of ROIs using paired t test, Bland-Altman plot, mountain plot, and Passing-Bablok regression plot. There was a statistically significant difference in the mean ADC values obtained from the different postprocessing software programs when the same baseline trace DW images were used for the ADC map generation. For using ADC values as a quantitative cutoff for histologic characterization of tissues, standardization of the postprocessing algorithm is essential across processing software packages, especially in view of the implementation of vendor-neutral archiving.
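For a monoexponential model, ADC = ln(S₀/S_b)/b, and small implementation choices (noise floors, clipping before the log) shift the absolute values — one plausible source of the software-to-software differences. A minimal two-b-value sketch (real pipelines, including the ones compared here, typically fit three or more b values):

```python
import numpy as np

def adc_map(s0, sb, b):
    """Voxelwise ADC (mm^2/s) from a b=0 image s0 and a b=b image sb,
    with b in s/mm^2. The 1e-6 floor avoids log(0); that choice itself
    is a postprocessing decision that shifts low-signal voxels."""
    s0 = np.maximum(s0.astype(float), 1e-6)
    sb = np.maximum(sb.astype(float), 1e-6)
    return np.log(s0 / sb) / b
```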
Gislason-Lee, Amber J.; Keeble, Claire; Egleston, Daniel; Bexon, Josephine; Kengyelics, Stephen M.; Davies, Andrew G.
2017-01-01
Abstract. This study aimed to determine whether a reduction in radiation dose was found for percutaneous coronary interventional (PCI) patients using a cardiac interventional x-ray system with state-of-the-art image enhancement and x-ray optimization, compared to the current generation x-ray system, and to determine the corresponding impact on clinical image quality. Patient procedure dose area product (DAP) and fluoroscopy duration of 131 PCI patient cases from each x-ray system were compared using a Wilcoxon test on median values. Significant reductions in patient dose (p≪0.001) were found for the new system with no significant change in fluoroscopy duration (p=0.2); procedure DAP reduced by 64%, fluoroscopy DAP by 51%, and “cine” acquisition DAP by 76%. The image quality of 15 patient angiograms from each x-ray system (30 total) was scored by 75 clinical professionals on a continuous scale for the ability to determine the presence and severity of stenotic lesions; image quality scores were analyzed using a two-sample t-test. Image quality was reduced by 9% (p≪0.01) for the new x-ray system. This demonstrates a substantial reduction in patient dose, from acquisition more than fluoroscopy imaging, with slightly reduced image quality, for the new x-ray system compared to the current generation system. PMID:28491907
Generative Adversarial Networks for Noise Reduction in Low-Dose CT.
Wolterink, Jelmer M; Leiner, Tim; Viergever, Max A; Isgum, Ivana
2017-12-01
Noise is inherent to low-dose CT acquisition. We propose to train a convolutional neural network (CNN) jointly with an adversarial CNN to estimate routine-dose CT images from low-dose CT images and hence reduce noise. A generator CNN was trained to transform low-dose CT images into routine-dose CT images using voxelwise loss minimization. An adversarial discriminator CNN was simultaneously trained to distinguish the output of the generator from routine-dose CT images. The performance of this discriminator was used as an adversarial loss for the generator. Experiments were performed using CT images of an anthropomorphic phantom containing calcium inserts, as well as patient non-contrast-enhanced cardiac CT images. The phantom and patients were scanned at 20% and 100% routine clinical dose. Three training strategies were compared: the first used only voxelwise loss, the second combined voxelwise loss and adversarial loss, and the third used only adversarial loss. The results showed that training with only voxelwise loss resulted in the highest peak signal-to-noise ratio with respect to reference routine-dose images. However, CNNs trained with adversarial loss captured image statistics of routine-dose images better. Noise reduction improved quantification of low-density calcified inserts in phantom CT images and allowed coronary calcium scoring in low-dose patient CT images with high noise levels. Testing took less than 10 s per CT volume. CNN-based low-dose CT noise reduction in the image domain is feasible. Training with an adversarial network improves the CNN's ability to generate images with an appearance similar to that of reference routine-dose CT images.
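The second training strategy combines the two objectives as a weighted sum of a voxelwise term and an adversarial term. The sketch below is schematic (plain Python over flat voxel lists, with an assumed weight adv_weight), not the authors' implementation, which operates on image tensors inside a deep-learning framework.

```python
import math

def generator_loss(pred, target, disc_prob_routine, adv_weight=0.01):
    """Combined generator objective: voxelwise L2 plus adversarial term.

    `disc_prob_routine` is the discriminator's probability that the
    generator output is a routine-dose image; the generator is rewarded
    when that probability is high (non-saturating GAN loss).
    """
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    adv = -math.log(max(disc_prob_routine, 1e-12))
    return mse + adv_weight * adv

# A perfectly denoised patch that fully fools the discriminator
print(generator_loss([1.0, 2.0], [1.0, 2.0], disc_prob_routine=1.0))  # → 0.0
```

Setting adv_weight to zero recovers the first (voxelwise-only) strategy, and dropping the mse term recovers the third (adversarial-only) strategy.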
NASA Stennis Space Center Test Technology Branch Activities
NASA Technical Reports Server (NTRS)
Solano, Wanda M.
2000-01-01
This paper provides a short history of NASA Stennis Space Center's Test Technology Laboratory and briefly describes the variety of engine test technology activities and developmental project initiatives. Theoretical rocket exhaust plume modeling, acoustic monitoring and analysis, hand-held fire imaging, heat flux radiometry, thermal imaging and exhaust plume spectroscopy are all examples of current and past test activities that are briefly described. In addition, recent efforts and visions focused on accommodating second, third, and fourth generation flight vehicle engine test requirements are discussed.
Huo, Yuankai; Xu, Zhoubing; Bao, Shunxing; Bermudez, Camilo; Plassard, Andrew J.; Liu, Jiaqi; Yao, Yuang; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.
2018-01-01
Spleen volume estimation using automated image segmentation technique may be used to detect splenomegaly (abnormally enlarged spleen) on Magnetic Resonance Imaging (MRI) scans. In recent years, Deep Convolutional Neural Networks (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both size and shape of the spleen on MRI images may result in large false positive and false negative labeling when deploying DCNN based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1 weighted and T2 weighted) from patients with splenomegaly were used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 using SSNet on independently tested MRI volumes of patients with splenomegaly.
Threshold matrix for digital halftoning by genetic algorithm optimization
NASA Astrophysics Data System (ADS)
Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero
1998-10-01
Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.
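A minimal genetic-algorithm loop of the kind described (elitist selection, one-point crossover, mutation, and a fitness evaluating the dot pattern) might look as follows. The fitness function here is a toy stand-in that rewards large threshold differences between horizontally adjacent cells, a crude proxy for suppressing low-frequency (pink-noise) clumping; it is not any of the paper's actual fitness functions, and the hybrid backtracking search is omitted.

```python
import random

def fitness(matrix, n):
    """Toy blue-noise proxy: sum of absolute threshold differences
    between horizontally adjacent cells of an n x n matrix stored
    row-major as a flat list."""
    score = 0
    for r in range(n):
        for c in range(n - 1):
            score += abs(matrix[r * n + c] - matrix[r * n + c + 1])
    return score

def evolve(n=4, pop_size=20, generations=60, seed=1):
    rng = random.Random(seed)
    levels = n * n
    pop = [[rng.randrange(levels) for _ in range(levels)] for _ in range(pop_size)]
    best = max(pop, key=lambda m: fitness(m, n))
    for _ in range(generations):
        nxt = [best[:]]                     # elitism: carry the best matrix over
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)
            cut = rng.randrange(1, levels)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:          # mutation: reassign one threshold
                child[rng.randrange(levels)] = rng.randrange(levels)
            nxt.append(child)
        pop = nxt
        best = max(pop, key=lambda m: fitness(m, n))
    return best

best = evolve()
print(fitness(best, 4))
```

Swapping in a different fitness function yields a different dot generator, which mirrors how the paper obtains a family of generators with distinct statistical features from one optimization framework.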
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, S; Lo, P; Hoffman, J
Purpose: To evaluate the robustness of CAD or Quantitative Imaging methods, they should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully-automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules of the pipeline can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; this data is used to simulate different levels of dose reduction using a previously-published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely-available reconstruction software for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional recon methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture.
Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
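The batch layout described above (one job per dose/reconstruction combination, driven by configuration files) can be sketched as follows. The dose levels, reconstruction-method names, and the run_pipeline driver are placeholders, not the actual pipeline's interfaces.

```python
import itertools

# Hypothetical sweep: fractions of routine dose and reconstruction methods.
DOSE_LEVELS = [1.0, 0.5, 0.25, 0.1]
RECON_METHODS = ["wfbp_smooth", "wfbp_sharp", "iterative_cd"]

def build_jobs(raw_projection_file):
    """Expand the sweep into one command line per dose/reconstruction
    combination, suitable for submission to a batch queue. The driver
    name and flags are illustrative placeholders."""
    jobs = []
    for dose, recon in itertools.product(DOSE_LEVELS, RECON_METHODS):
        jobs.append([
            "run_pipeline",              # hypothetical driver script
            "--input", raw_projection_file,
            "--dose-fraction", str(dose),
            "--recon", recon,
        ])
    return jobs

jobs = build_jobs("scan_001.raw")
print(len(jobs))  # → 12
```

Because each job is independent, the expanded list can be dispatched in parallel across a cluster, which is what keeps the reported ~30 minutes per combination from multiplying into the total wall-clock time.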
Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar
2016-10-01
Identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. The proposed work presents a novel accurate method for generation of a unique identification code for identification of fundus images for tele-ophthalmology applications and storage in databases. Unlike existing methods of steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded in this approach and there is no loss of medical information. A strategic combination of the unique blood vessel pattern and patient ID is considered for generation of a unique identification code for the digital fundus images. The segmented blood vessel pattern near the optic disc is strategically combined with the patient ID for generation of a unique identification code for the image. The proposed method of medical image identification is tested on the publicly available DRIVE and MESSIDOR databases of fundus images and the results are encouraging. Experimental results indicate the uniqueness of the identification code and lossless recovery of patient identity from the unique identification code for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
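As an illustrative analogue (not the paper's algorithm), a code that embeds nothing in the image can be formed by concatenating a digest of the segmented vessel pattern with the patient ID; the ID is then losslessly recoverable, and integrity is verifiable by recomputing the digest. SHA-256, the digest truncation, and the code layout are all assumptions of this sketch.

```python
import hashlib

def make_identification_code(vessel_pattern: bytes, patient_id: str) -> str:
    """Bind a segmented blood-vessel pattern to a patient ID without
    modifying the image itself (nothing is embedded, so no medical
    information is lost)."""
    vessel_digest = hashlib.sha256(vessel_pattern).hexdigest()[:16]
    return vessel_digest + ":" + patient_id

def recover_patient_id(code: str) -> str:
    # Lossless recovery: the ID is carried verbatim in the code.
    return code.split(":", 1)[1]

def verify_integrity(code: str, vessel_pattern: bytes) -> bool:
    expected = hashlib.sha256(vessel_pattern).hexdigest()[:16]
    return code.split(":", 1)[0] == expected

pattern = bytes([0, 1, 1, 0, 1])        # stand-in for a binary vessel map
code = make_identification_code(pattern, "PID-0042")
print(recover_patient_id(code))         # → PID-0042
print(verify_integrity(code, pattern))  # → True
```

The uniqueness property in this sketch rests on the vessel pattern being subject-specific, which is the same premise the paper's vessel-based combination relies on.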
Paulus, Daniel H; Oehmigen, Mark; Grüneisen, Johannes; Umutlu, Lale; Quick, Harald H
2016-05-07
Modern radiation therapy (RT) treatment planning is based on multimodality imaging. With the recent availability of whole-body PET/MR hybrid imaging new opportunities arise to improve target volume delineation in RT treatment planning. This, however, requires dedicated RT equipment for reproducible patient positioning on the PET/MR system, which has to be compatible with MR and PET imaging. A prototype flat RT table overlay, radiofrequency (RF) coil holders for head imaging, and RF body bridges for body imaging were developed and tested towards PET/MR system integration. Attenuation correction (AC) of all individual RT components was performed by generating 3D CT-based template models. A custom-built program for μ-map generation assembles all AC templates depending on the presence and position of each RT component. All RT devices were evaluated in phantom experiments with regards to MR and PET imaging compatibility, attenuation correction, PET quantification, and position accuracy. The entire RT setup was then evaluated in a first PET/MR patient study on five patients at different body regions. All tested devices are PET/MR compatible and do not produce visible artifacts or disturb image quality. The RT components showed a repositioning accuracy of better than 2 mm. Photon attenuation of -11.8% in the top part of the phantom was observable, which was reduced to -1.7% with AC using the μ-map generator. Active lesions of 3 subjects were evaluated in terms of SUVmean and an underestimation of -10.0% and -2.4% was calculated without and with AC of the RF body bridges, respectively. The new dedicated RT equipment for hybrid PET/MR imaging enables acquisitions in all body regions. It is compatible with PET/MR imaging and all hardware components can be corrected in hardware AC by using the suggested μ-map generator. These developments provide the technical and methodological basis for integration of PET/MR hybrid imaging into RT planning.
Automated spot defect characterization in a field portable night vision goggle test set
NASA Astrophysics Data System (ADS)
Scopatz, Stephen; Ozten, Metehan; Aubry, Gilles; Arquetoux, Guillaume
2018-05-01
This paper discusses a new capability developed for and results from a field portable test set for Gen 2 and Gen 3 Image Intensifier (I2) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a Knife Edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects of I² tubes with the same test set hardware previously presented. The original and still common Spot Defect Test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect; this is tedious and subjective. The prior semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain while electronically storing the image and the resulting table. The advanced Automated Spot Defect Test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report easily usable for verification. This is inherently a more repeatable process that ensures consistent spot detection independent of the operator. Results across several NVGs will be presented.
GrayQb single-faced version 2 (SF2) open environment test report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plummer, J.; Immel, D.; Bobbitt, J.
This report details the design upgrades incorporated into the new version of the GrayQb™ SF2 device and the characterization testing of this upgraded device. Results from controlled characterization testing in the Savannah River National Laboratory (SRNL) R&D Engineering Imaging and Radiation Lab (IRL) and the Savannah River Site (SRS) Health Physics Instrument Calibration Laboratory (HPICL) are presented, as well as results from the open environment field testing performed in the E-Area Low Level Waste Storage Area. Resultant images presented in this report were generated using the SRNL-developed Radiation Analyzer (RAzer™) software program, which overlays the radiation contour images onto the visual image of the location being surveyed.
NASA Astrophysics Data System (ADS)
Bethmann, F.; Jepping, C.; Luhmann, T.
2013-04-01
This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and their imaging properties. The procedure does not focus on the creation of photo-realistic images under consideration of complex imaging and reflection models as they are used by common computer graphics programs. In contrast, the method is designed with main emphasis on geometrically correct synthetic images without radiometric impact. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. In the first instance, the paper describes the process of image simulation under consideration of colour value interpolation, MTF/PSF and so on. Subsequently the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as they have been developed at IAPG for deformation measurement in car safety testing.
Projection technologies for imaging sensor calibration, characterization, and HWIL testing at AEDC
NASA Astrophysics Data System (ADS)
Lowry, H. S.; Breeden, M. F.; Crider, D. H.; Steely, S. L.; Nicholson, R. A.; Labello, J. M.
2010-04-01
The characterization, calibration, and mission simulation testing of imaging sensors require continual involvement in the development and evaluation of radiometric projection technologies. Arnold Engineering Development Center (AEDC) uses these technologies to perform hardware-in-the-loop (HWIL) testing with high-fidelity complex scene projection technologies that involve sophisticated radiometric source calibration systems to validate sensor mission performance. Testing with the National Institute of Standards and Technology (NIST) Ballistic Missile Defense Organization (BMDO) transfer radiometer (BXR) and Missile Defense Agency (MDA) transfer radiometer (MDXR) offers improved radiometric and temporal fidelity in this cold-background environment. The development of hardware and test methodologies to accommodate wide field of view (WFOV), polarimetric, and multi/hyperspectral imaging systems is being pursued to support a variety of program needs such as space situational awareness (SSA). Test techniques for the acquisition of data needed for scene generation models (solar/lunar exclusion, radiation effects, etc.) are also needed and are being sought. The extension of HWIL testing to the 7V Chamber requires the upgrade of the current satellite emulation scene generation system. This paper provides an overview of pertinent technologies being investigated and implemented at AEDC.
Menze, Bjoern H.; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-André; Székely, Gabor; Ayache, Nicholas; Golland, Polina
2016-01-01
We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as “tumor core” or “fluid-filled structure”, but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the generative-discriminative model to be one of the top ranking methods in the BRATS evaluation. PMID:26599702
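The EM segmenter that this model generalizes alternates between computing posterior label probabilities (E-step) and re-estimating Gaussian parameters (M-step). A minimal one-dimensional, two-class version, without the tissue atlas or latent lesion atlas, can be sketched as:

```python
import math

def em_two_gaussians(data, iters=50):
    """Plain two-component EM in one dimension with a fixed, shared
    variance for brevity; the paper's model adds a probabilistic
    tissue atlas and a latent lesion atlas on top of this loop."""
    mu = [min(data), max(data)]  # crude initialization from the data range
    var = 1.0
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 per sample
        resp = []
        for x in data:
            p0 = math.exp(-(x - mu[0]) ** 2 / (2 * var))
            p1 = math.exp(-(x - mu[1]) ** 2 / (2 * var))
            resp.append(p1 / (p0 + p1))
        # M-step: responsibility-weighted means
        w1 = sum(resp)
        w0 = len(data) - w1
        mu = [sum((1 - r) * x for r, x in zip(resp, data)) / w0,
              sum(r * x for r, x in zip(resp, data)) / w1]
    return mu

data = [0.0, 0.1, 0.2, 4.9, 5.0, 5.1]  # two well-separated intensity clusters
print([round(m, 1) for m in em_two_gaussians(data)])  # → [0.1, 5.0]
```

In the full model, the responsibilities are additionally weighted by atlas priors per voxel, and the lesion class is governed by a latent atlas estimated jointly with the segmentation.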
Radar data processing and analysis
NASA Technical Reports Server (NTRS)
Ausherman, D.; Larson, R.; Liskow, C.
1976-01-01
Digitized four-channel radar images corresponding to particular areas from the Phoenix and Huntington test sites were generated in conjunction with prior experiments performed to collect X- and L-band synthetic aperture radar imagery of these two areas. The methods for generating this imagery are documented. A secondary objective was the investigation of digital processing techniques for extraction of information from the multiband radar image data. Following the digitization, the remaining resources permitted a preliminary machine analysis to be performed on portions of the radar image data. The results, although necessarily limited, are reported.
Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.
Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C
2004-01-01
Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between both methods of less than 1%.
Designing a Virtual Item Bank Based on the Techniques of Image Processing
ERIC Educational Resources Information Center
Liao, Wen-Wei; Ho, Rong-Guey
2011-01-01
One of the major weaknesses of the item exposure rates of figural items in Intelligence Quotient (IQ) tests lies in its inaccuracies. In this study, a new approach is proposed and a useful test tool known as the Virtual Item Bank (VIB) is introduced. The VIB combine Automatic Item Generation theory and image processing theory with the concepts of…
Rapid 3D bioprinting from medical images: an application to bone scaffolding
NASA Astrophysics Data System (ADS)
Lee, Daniel Z.; Peng, Matthew W.; Shinde, Rohit; Khalid, Arbab; Hong, Abigail; Pennacchi, Sara; Dawit, Abel; Sipzner, Daniel; Udupa, Jayaram K.; Rajapakse, Chamith S.
2018-03-01
Bioprinting of tissue has its applications throughout medicine. Recent advances in medical imaging allow the generation of 3-dimensional models that can then be 3D printed. However, the conventional method of converting medical images to 3D printable G-Code instructions has several limitations, namely significant processing time for large, high resolution images, and the loss of microstructural surface information from surface resolution and subsequent reslicing. We have overcome these issues by creating a Java program that skips the intermediate triangularization and reslicing steps and directly converts binary DICOM images into G-Code. In this study, we tested the two methods of G-Code generation on the application of synthetic bone graft scaffold generation. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03 mm using a high resolution peripheral quantitative computed tomography (HR-pQCT) scanner. These images, of the Digital Imaging and Communications in Medicine (DICOM) format, were then processed through two methods. In each method, slices and regions of print were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, these processed images are converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, these processed images are run through our Java program and directly converted to G-Code. File size, processing time, and print time were measured for each. We found that this new method produced a significant reduction in G-Code file size as well as processing time (92.23% reduction). This allows for more rapid 3D printing from medical images.
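The direct conversion described above (binary slice in, raster G-Code out, skipping STL triangulation and reslicing) can be illustrated with a run-length sketch: each horizontal run of foreground pixels becomes one extrusion move. The G-Code dialect, feed rate, and coordinate scaling below are assumptions, and the sketch is Python rather than the authors' Java.

```python
def slice_to_gcode(rows, pixel_mm=0.03, z_mm=0.0, feed=600):
    """Emit one G1 extrusion move per horizontal run of foreground
    pixels in a binary slice (list of lists of 0/1). No intermediate
    mesh is built, which is the point of the direct method."""
    lines = [f"G1 Z{z_mm:.2f} F{feed}"]
    for y, row in enumerate(rows):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1  # consume the run of foreground pixels
                lines.append(f"G0 X{start * pixel_mm:.3f} Y{y * pixel_mm:.3f}")
                lines.append(f"G1 X{x * pixel_mm:.3f} Y{y * pixel_mm:.3f} E1")
            else:
                x += 1
    return lines

slice_ = [[0, 1, 1, 0],
          [1, 1, 1, 1]]
for line in slice_to_gcode(slice_):
    print(line)
```

Because the output is generated pixel-row by pixel-row at the native image resolution, microstructural detail survives at the scale of the 0.03 mm voxels instead of being smoothed away by mesh decimation and reslicing.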
Applying a CAD-generated imaging marker to assess short-term breast cancer risk
NASA Astrophysics Data System (ADS)
Mirniaharikandehei, Seyedehnafiseh; Zarafshani, Ali; Heidari, Morteza; Wang, Yunzhi; Aghaei, Faranak; Zheng, Bin
2018-02-01
Although whether using computer-aided detection (CAD) helps improve radiologists' performance in reading and interpreting mammograms remains controversial due to higher false-positive detection rates, the objective of this study is to investigate and test a new hypothesis that CAD-generated false-positives, in particular the bilateral summation of false-positives, constitute a potential imaging marker associated with short-term breast cancer risk. An image dataset involving negative screening mammograms acquired from 1,044 women was retrospectively assembled. Each case involves 4 images of the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breasts. In the next subsequent mammography screening, 402 cases were positive for cancer and 642 remained negative. A CAD scheme was applied to process all "prior" negative mammograms. Several features were extracted from the CAD scheme, including detection seeds, the total number of false-positive regions, the average of detection scores, and the sum of detection scores in CC and MLO view images. The features computed from the two bilateral images of the left and right breasts from either the CC or MLO view were then combined. In order to predict the likelihood of each testing case being positive in the next subsequent screening, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method. Data analysis demonstrated a maximum prediction accuracy with an area under the ROC curve of AUC=0.65+/-0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of [2.95, 6.83]. The results also illustrated an increasing trend in the adjusted odds ratio and risk prediction scores (p<0.01). Thus, the study showed that CAD-generated false-positives might provide a new quantitative imaging marker to help assess short-term breast cancer risk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashiwagi, T., E-mail: kashiwagi@ims.tsukuba.ac.jp; Minami, H.; Kadowaki, K.
2014-02-24
A computed tomography (CT) imaging system using monochromatic sub-terahertz coherent electromagnetic waves generated from a device constructed from the intrinsic Josephson junctions in a single crystalline mesa structure of the high-Tc superconductor Bi2Sr2CaCu2O8+δ was developed and tested on three samples: standing metallic rods supported by styrofoam, a dried plant (heart pea) containing seeds, and a plastic doll inside an egg shell. The images obtained strongly suggest that this CT imaging system may be useful for a variety of practical applications.
Generating virtual training samples for sparse representation of face images and face recognition
NASA Astrophysics Data System (ADS)
Du, Yong; Wang, Yu
2016-03-01
There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
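The core operations described above (forming a virtual sample as the pixel-wise product of two images of the same subject, then classifying against the enlarged gallery) can be sketched as follows. This is a simplified stand-in: it uses flat intensity lists and plain nearest-neighbour distance rather than the paper's linear-combination representation over K selected training samples.

```python
import math

def virtual_sample(img_a, img_b):
    """Pixel-wise product of two images of the same subject, rescaled
    to the original intensity range. The product strengthens contours
    shared by both images and suppresses uncorrelated noise."""
    prod = [a * b for a, b in zip(img_a, img_b)]
    peak = max(prod) or 1
    scale = max(max(img_a), max(img_b)) / peak
    return [p * scale for p in prod]

def nearest_subject(test, gallery):
    """Classify by Euclidean distance against original plus virtual
    samples (a stand-in for the representation-based classifier)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(gallery, key=lambda item: dist(test, item[1]))[0]

a = [10, 200, 30, 180]   # two noisy images of subject "s1"
b = [12, 190, 28, 185]
gallery = [("s1", a), ("s1", virtual_sample(a, b)), ("s2", [200, 10, 180, 30])]
print(nearest_subject([11, 195, 29, 182], gallery))  # → s1
```

Doubling the per-subject sample count this way costs nothing at acquisition time, which is why it helps precisely in the limited-training-data regime the abstract describes.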
Simulated altitude exposure assessment by hyperspectral imaging
NASA Astrophysics Data System (ADS)
Calin, Mihaela Antonina; Macovei, Adrian; Miclos, Sorin; Parasca, Sorin Viorel; Savastru, Roxana; Hristea, Razvan
2017-05-01
Testing the human body's reaction to hypoxia (including the one generated by high altitude) is important in aeronautic medicine. This paper presents a method of monitoring blood oxygenation during experimental hypoxia using hyperspectral imaging (HSI) and a spectral unmixing model based on a modified Beer-Lambert law. A total of 20 healthy volunteers (males) aged 25 to 60 years were included in this study. A line-scan HSI system was used to acquire images of the faces of the subjects. The method generated oxyhemoglobin and deoxyhemoglobin distribution maps from the foreheads of the subjects at 5 and 10 min of hypoxia and after recovery in a high oxygen breathing mixture. The method also generated oxygen saturation maps that were validated using pulse oximetry. An interesting pattern of desaturation on the forehead was discovered during the study, showing one of the advantages of using HSI for skin oxygenation monitoring in hypoxic conditions. This could bring new insight into the physiological response to high altitude and may become a step forward in air crew testing.
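Per pixel, the two-chromophore unmixing behind the oxygenation maps reduces to solving a small linear system for oxyhemoglobin and deoxyhemoglobin concentrations from absorbances at two (or more) wavelengths under a modified Beer-Lambert model. The extinction coefficients below are placeholders, not measured values, and path-length and scattering terms are folded into unit constants.

```python
def unmix_two_chromophores(absorbance, eps_hbo2, eps_hb):
    """Solve A(w) = eps_HbO2(w) * c_HbO2 + eps_Hb(w) * c_Hb for two
    wavelengths w (a 2x2 linear system; modified Beer-Lambert with
    unit effective path length)."""
    a1, a2 = absorbance
    e11, e12 = eps_hbo2[0], eps_hb[0]   # wavelength 1
    e21, e22 = eps_hbo2[1], eps_hb[1]   # wavelength 2
    det = e11 * e22 - e12 * e21
    c_hbo2 = (a1 * e22 - e12 * a2) / det
    c_hb = (e11 * a2 - a1 * e21) / det
    return c_hbo2, c_hb

def oxygen_saturation(c_hbo2, c_hb):
    return c_hbo2 / (c_hbo2 + c_hb)

# Placeholder extinction coefficients at two wavelengths
eps_hbo2, eps_hb = (1.0, 3.0), (2.5, 0.8)
# Forward-simulate a pixel with c_HbO2 = 0.6, c_Hb = 0.4 ...
A = (1.0 * 0.6 + 2.5 * 0.4, 3.0 * 0.6 + 0.8 * 0.4)
# ... and recover the concentrations and saturation
c1, c2 = unmix_two_chromophores(A, eps_hbo2, eps_hb)
print(round(oxygen_saturation(c1, c2), 2))  # → 0.6
```

A hyperspectral cube simply repeats this solve (typically as an overdetermined least-squares fit over many wavelengths) at every pixel, which is what yields the spatial saturation maps validated against pulse oximetry.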
Foley, Mary Ann; Foy, Jeffrey; Schlemmer, Emily; Belser-Ehrlich, Janna
2010-11-01
Imagery encoding effects on source-monitoring errors were explored using the Deese-Roediger-McDermott paradigm in two experiments. While viewing thematically related lists embedded in mixed picture/word presentations, participants were asked to generate images of objects or words (Experiment 1) or to simply name the items (Experiment 2). An encoding task intended to induce spontaneous images served as a control for the explicit imagery instruction conditions (Experiment 1). On the picture/word source-monitoring tests, participants were much more likely to report "seeing" a picture of an item presented as a word than the converse particularly when images were induced spontaneously. However, this picture misattribution error was reversed after generating images of words (Experiment 1) and was eliminated after simply labelling the items (Experiment 2). Thus source misattributions were sensitive to the processes giving rise to imagery experiences (spontaneous vs deliberate), the kinds of images generated (object vs word images), and the ways in which materials were presented (as pictures vs words).
Birch, Gabriel C.; Woo, Bryana L.; Sanchez, Andres L.; ...
2017-08-24
The evaluation of optical system performance in fog conditions typically requires field testing. This can be challenging due to the unpredictable nature of fog generation and the temporal and spatial nonuniformity of the phenomenon itself. We describe the Sandia National Laboratories fog chamber, a new test facility that enables the repeatable generation of fog within a 55 m×3 m×3 m (L×W×H) environment, and demonstrate the fog chamber through a series of optical tests. These tests are performed to evaluate system image quality, determine meteorological optical range (MOR), and measure the number of particles in the atmosphere. Relationships between typical optical quality metrics, MOR values, and total number of fog particles are described using the data obtained from the fog chamber and repeated over a series of three tests.
Automatic item generation implemented for measuring artistic judgment aptitude.
Bezruczko, Nikolaus
2014-01-01
Automatic item generation (AIG) is a broad class of methods that are being developed to address psychometric issues arising from internet and computer-based testing. In general, issues emphasize efficiency, validity, and diagnostic usefulness of large scale mental testing. Rapid prominence of AIG methods and their implicit perspective on mental testing is bringing painful scrutiny to many sacred psychometric assumptions. This report reviews basic AIG ideas, then presents conceptual foundations, image model development, and operational application to artistic judgment aptitude testing.
Chroma-preserved luma controlling technique using YCbCr color space
NASA Astrophysics Data System (ADS)
Lee, Sooyeon; Kwak, Youngshin; Kim, Youn Jin
2013-02-01
YCbCr color space, composed of luma and chrominance components, is preferred for its ease of image processing. However, the non-orthogonality between YCbCr components induces unwanted perceived chroma changes when luma values are adjusted. In this study, a new method was designed to compensate for the unwanted chroma change generated by a luma change. For six different YCC_hue angles, data points named `Original data' were generated with uniformly distributed luma, Cb, and Cr values. Weight values were then applied to the luma values of the `Original data' set to produce a `Test data' set, followed by calculation of a `new YCC_chroma' that minimizes the CIECAM02 ΔC between the original and test data. Finally, a mathematical model was developed to predict the amount of YCC_chroma needed to compensate for CIECAM02 chroma changes. This model was implemented in a luma-controlling algorithm that maintains constant perceived chroma. Performance was tested numerically using data points and images. When the CIECAM02 ΔC between `Original data' and `Test data' is compared before and after compensation, the result improves by 51.69%; when the new model is applied to test images, there is a 32.03% improvement.
NASA Astrophysics Data System (ADS)
Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Patel, Bhavika; Heidari, Morteza; Liu, Hong; Zheng, Bin
2018-05-01
This study aims to investigate the feasibility of identifying a new quantitative imaging marker, based on false-positives generated by a computer-aided detection (CAD) scheme, to help predict short-term breast cancer risk. An image dataset including four-view mammograms acquired from 1044 women was retrospectively assembled. All mammograms were originally interpreted as negative by radiologists. In the subsequent mammography screening, 402 women were diagnosed with breast cancer and 642 remained negative. An existing CAD scheme was applied ‘as is’ to process each image. From the CAD-generated results, four detection features were computed from each image: the total number of (1) initial detection seeds and (2) final detected false-positive regions, and the (3) average and (4) sum of detection scores. Then, by combining the features computed from the two bilateral images of the left and right breasts from either the craniocaudal or mediolateral oblique view, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method to predict the likelihood of each testing case being positive in the subsequent screening. The new prediction model yielded a maximum prediction accuracy with an area under the ROC curve of AUC = 0.65 ± 0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of (2.95, 6.83). The results also showed an increasing trend in the adjusted odds ratio and risk prediction scores (p < 0.01). Thus, this study demonstrated that CAD-generated false-positives might include valuable information, which needs to be further explored for identifying and/or developing more effective imaging markers for predicting short-term breast cancer risk.
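The prediction pipeline described above (four per-case detection features, logistic regression, leave-one-case-out cross-validation, ROC AUC) can be sketched as follows; the feature values, effect size, and case count are synthetic stand-ins, not data from the study:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, iters=500):
    # Plain gradient-descent logistic regression with an intercept term.
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([np.ones((len(X), 1)), X])
    for _ in range(iters):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def loocv_scores(X, y):
    # Leave-one-case-out: train on all cases but one, score the held-out case.
    scores = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w = fit_logistic(X[mask], y[mask])
        scores[i] = sigmoid(np.r_[1.0, X[i]] @ w)
    return scores

rng = np.random.default_rng(0)
n = 40
y = np.repeat([0.0, 1.0], n // 2)
# Four hypothetical per-case CAD features; positive cases shifted upward.
X = rng.normal(0, 1, (n, 4)) + y[:, None] * 0.8
s = loocv_scores(X, y)

# Rank-based (Mann-Whitney) estimate of the area under the ROC curve.
ranks = s.argsort().argsort() + 1
auc = (ranks[y == 1].sum() - (n // 2) * (n // 2 + 1) / 2) / ((n // 2) ** 2)
print(round(auc, 2))
```

The held-out scores, not the training scores, feed the AUC estimate, mirroring the leave-one-case-out design reported in the abstract.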
Park, S B; Kim, H; Yao, M; Ellis, R; Machtay, M; Sohn, J W
2012-06-01
To quantify the systematic error of a Deformable Image Registration (DIR) system and establish a Quality Assurance (QA) procedure. To address the shortfall of the landmark approach, which is only available at significant visible feature points, we adopted a Deformation Vector Map (DVM) comparison approach. We used two CT image sets (R and T) taken for the same patient at different times and generated a DVM, which includes the DIR systematic error. The DVM was calculated using a fine-tuned B-Spline DIR with an L-BFGS optimizer. Using this DVM, we generated an R' image set to eliminate the systematic error in the DVM. We thus have a truth data set (the R' and T image sets) and the truth DVM. To test a DIR system, we supply the R' and T image sets to that system and compare the resulting test DVM to the truth DVM; if there is no systematic error, they should be identical. We built a Deformation Error Histogram (DEH) for quantitative analysis. The test registration was performed with an in-house B-Spline DIR system using a stochastic gradient descent optimizer. Our example data set was generated from a head-and-neck patient case. We also tested CT-to-CBCT deformable registration. We found that skin regions interfacing with air had relatively larger errors, as did mobile joints such as the shoulders. Average errors for ROIs were as follows: CTV, 0.4 mm; brain stem, 1.4 mm; shoulders, 1.6 mm; normal tissues, 0.7 mm. We succeeded in building the DEH approach to quantify DVM uncertainty. Our data sets are available on our web page for testing other systems. Using the DEH, users can decide how much systematic error they will accept. The DEH and our data can serve as tools for an AAPM task group composing a DIR system QA guideline. This project is partially supported by the Agency for Healthcare Research and Quality (AHRQ) grant 1R18HS017424-01A2. © 2012 American Association of Physicists in Medicine.
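The DEH comparison above (per-voxel Euclidean error between a test DVM and the truth DVM, binned into a histogram) can be sketched numerically; the grid size, displacement fields, and error level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (32, 32, 16)  # hypothetical voxel grid
# Truth DVM: smooth synthetic displacement field (mm), 3 components per voxel.
truth = np.stack(np.meshgrid(*[np.linspace(0, 1, s) for s in shape],
                             indexing="ij")) * 2.0
# Test DVM: truth plus a simulated registration error.
test = truth + rng.normal(0, 0.5, truth.shape)

# Per-voxel Euclidean error between the two vector fields.
err = np.linalg.norm(test - truth, axis=0)

# Deformation Error Histogram: fraction of voxels per 0.25 mm error bin.
counts, edges = np.histogram(err, bins=np.arange(0, 4.25, 0.25))
deh = counts / err.size
mean_err = err.mean()
print(f"mean error {mean_err:.2f} mm, {deh[:4].sum():.0%} of voxels under 1 mm")
```

A user could then read an acceptance threshold directly off the cumulative histogram, as the abstract suggests.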
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-07-01
This paper proposes a joint image encryption and compression scheme based on a new hyperchaotic system and the curvelet transform. A new five-dimensional hyperchaotic system based on the Rabinovich system is presented. By means of the proposed hyperchaotic system, a new pseudorandom key stream generator is constructed. The algorithm adopts a diffusion and confusion structure to perform encryption, based on the key stream generator and the proposed hyperchaotic system. The key sequence used for image encryption is related to the plaintext. By means of the second-generation curvelet transform, run-length coding, and Huffman coding, the image data are compressed. Compression and encryption are performed jointly in a single process. The security test results indicate the proposed methods have high security and a good compression effect.
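A minimal sketch of the confusion/diffusion structure described above, with two loud simplifications: a one-dimensional logistic map stands in for the five-dimensional hyperchaotic Rabinovich system, and the plaintext-dependence of the key sequence is omitted:

```python
import numpy as np

def keystream(x0, r, n):
    # Logistic-map keystream as a stand-in for the paper's 5-D
    # hyperchaotic system (illustrative only).
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(img, key):
    x0 = 0.3 + (key % 1000) / 1e4        # key-dependent initial condition
    ks = keystream(x0, 3.99, img.size)
    # Confusion: permute pixel order with a key-seeded shuffle.
    perm = np.random.default_rng(key).permutation(img.size)
    shuffled = img.flatten()[perm]
    # Diffusion: XOR each pixel with the keystream.
    return (shuffled ^ ks).reshape(img.shape), perm

def decrypt(cipher, key, perm):
    x0 = 0.3 + (key % 1000) / 1e4
    ks = keystream(x0, 3.99, cipher.size)
    flat = cipher.flatten() ^ ks
    out = np.empty_like(flat)
    out[perm] = flat                     # invert the key-seeded shuffle
    return out.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher, perm = encrypt(img, key=1234)
restored = decrypt(cipher, 1234, perm)
```

Round-tripping recovers the image exactly because both confusion and diffusion steps are invertible given the key.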
NASA Astrophysics Data System (ADS)
Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.
2011-06-01
The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) have formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR scene generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil), which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.
High speed imager test station
Yates, George J.; Albright, Kevin L.; Turko, Bojan T.
1995-01-01
A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board that places the imager in operable proximity to level shifters, which receive the clock pulses and output pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by the leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment.
High speed imager test station
Yates, G.J.; Albright, K.L.; Turko, B.T.
1995-11-14
A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board that places the imager in operable proximity to level shifters, which receive the clock pulses and output pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by the leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment. 12 figs.
A novel image watermarking method based on singular value decomposition and digital holography
NASA Astrophysics Data System (ADS)
Cai, Zhishan
2016-10-01
According to information optics theory, a novel watermarking method based on Fourier-transform digital holography and singular value decomposition (SVD) is proposed in this paper. First, a watermark image is converted to a digital hologram using the Fourier transform. After that, the original image is divided into many non-overlapping blocks. All the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are then embedded into the singular value components of each block using an addition principle. Finally, inverse SVD transformation is carried out on the blocks and hologram to generate the watermarked image. During extraction, the watermark information embedded in each block is recovered first; an averaging operation is then carried out on the extracted information to generate the final watermark. Finally, the algorithm is simulated. Furthermore, to test the watermarked image's resistance against attacks, various attack tests are carried out. The results show that the proposed algorithm has very good robustness against noise interference, image cropping, compression, brightness stretching, etc. In particular, when the image is rotated by a large angle, the watermark information can still be extracted correctly.
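The addition-principle embedding above can be sketched for a single block (non-blind extraction; the 8×8 block size and strength alpha are chosen arbitrarily, and the paper's hologram and block-averaging steps are omitted):

```python
import numpy as np

def embed_block(block, wm_s, alpha=0.05):
    # Embed watermark singular values into a block by the addition
    # principle: S' = S + alpha * S_wm, then rebuild the block.
    u, s, vt = np.linalg.svd(block)
    k = min(len(s), len(wm_s))
    s2 = s.copy()
    s2[:k] += alpha * wm_s[:k]
    return u @ np.diag(s2) @ vt

def extract_block(marked, original, alpha=0.05):
    # Recover the embedded singular values (non-blind: needs the original).
    s_m = np.linalg.svd(marked, compute_uv=False)
    s_o = np.linalg.svd(original, compute_uv=False)
    return (s_m - s_o) / alpha

rng = np.random.default_rng(7)
host = rng.uniform(0, 255, (8, 8))
# Singular values of a hypothetical watermark (hologram) block.
wm_s = np.linalg.svd(rng.uniform(0, 255, (8, 8)), compute_uv=False)
marked = embed_block(host, wm_s)
recovered = extract_block(marked, host)
```

Since both singular-value sequences are non-negative and descending, the modified spectrum stays a valid SVD spectrum, so extraction is exact up to floating-point error.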
Solar thematic maps for space weather operations
Rigler, E. Joshua; Hill, Steven M.; Reinard, Alysha A.; Steenburgh, Robert A.
2012-01-01
Thematic maps are arrays of labels, or "themes", associated with discrete locations in space and time. Borrowing heavily from the terrestrial remote sensing discipline, a numerical technique based on Bayes' theorem captures operational expertise in the form of trained theme statistics, then uses these to automatically assign labels to solar image pixels. Ultimately, regular thematic maps of the solar corona will be generated from high-cadence, high-resolution images from SUVI, the solar ultraviolet imager slated to fly on NOAA's next-generation GOES-R series of satellites starting ~2016. These thematic maps will not only provide quicker, more consistent synoptic views of the sun for space weather forecasters, but will also generate the digital thematic pixel masks (e.g., coronal hole, active region, flare, etc.) necessary for a new generation of operational solar data products. This paper presents the mathematical underpinnings of our thematic mapper, as well as some practical algorithmic considerations. Then, using images from the Solar Dynamics Observatory (SDO) Atmospheric Imaging Assembly (AIA) as test data, it presents results from validation experiments designed to ascertain the robustness of the technique with respect to differing expert opinions and changing solar conditions.
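The Bayes labeling step above reduces to maximum-posterior classification against trained theme statistics. A minimal one-channel sketch, with entirely hypothetical theme means, spreads, and priors:

```python
import numpy as np

# Trained theme statistics (hypothetical): per-theme mean/std of pixel
# intensity in one EUV channel, plus prior probabilities.
themes = {
    "coronal_hole":  {"mu": 20.0,  "sd": 8.0,  "prior": 0.2},
    "quiet_sun":     {"mu": 80.0,  "sd": 20.0, "prior": 0.6},
    "active_region": {"mu": 200.0, "sd": 40.0, "prior": 0.2},
}

def classify(pixels):
    # Bayes' rule: posterior ∝ Gaussian likelihood × prior; label each
    # pixel with its maximum-posterior theme (log domain for stability).
    names = list(themes)
    logpost = np.stack([
        -0.5 * ((pixels - t["mu"]) / t["sd"]) ** 2
        - np.log(t["sd"]) + np.log(t["prior"])
        for t in themes.values()
    ])
    return [names[i] for i in logpost.argmax(axis=0)]

labels = classify(np.array([15.0, 90.0, 250.0]))
print(labels)
```

A real mapper would use multi-channel statistics estimated from expert-labeled training pixels; only the argmax-of-posterior structure is shown here.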
A pseudoinverse deformation vector field generator and its applications
Yan, C.; Zhong, H.; Murphy, M.; Weiss, E.; Siebers, J. V.
2010-01-01
Purpose: To present, implement, and test a self-consistent pseudoinverse displacement vector field (PIDVF) generator, which preserves the location of information mapped back-and-forth between image sets. Methods: The algorithm is an iterative scheme based on nearest neighbor interpolation and a subsequent iterative search. Performance of the algorithm is benchmarked using a lung 4DCT data set with six CT images from different breathing phases and eight CT images of a single prostate patient acquired on different days. A diffeomorphic deformable image registration is used to validate our PIDVFs. Additionally, the PIDVF is used to measure the self-consistency of two nondiffeomorphic algorithms which do not use a self-consistency constraint: the ITK Demons algorithm for the lung patient images and an in-house B-Spline algorithm for the prostate patient images. Both Demons and B-Spline have been QAed through contour comparison. Self-consistency is determined by using a DIR to generate a displacement vector field (DVF) between reference image R and study image S (DVF_R–S). The same DIR is used to generate DVF_S–R. Additionally, our PIDVF generator is used to create PIDVF_S–R. Back-and-forth mapping of a set of points (used as surrogates of contours) using DVF_R–S and DVF_S–R is compared to back-and-forth mapping performed with DVF_R–S and PIDVF_S–R. The Euclidean distances between the original unmapped points and the mapped points are used as a self-consistency measure. Results: Test results demonstrate that the consistency error observed in back-and-forth mappings can be reduced two to nine times in point mapping and 1.5 to three times in dose mapping when the PIDVF is used in place of the B-Spline algorithm. These self-consistency improvements are not affected by exchanging R and S. It is also demonstrated that differences between DVF_S–R and PIDVF_S–R can be used as a criterion to check the quality of the DVF.
Conclusions: Use of a DVF and its PIDVF will improve the self-consistency of point, contour, and dose mappings in image guided adaptive therapy. PMID:20384247
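The self-consistency measure above (map points forward with a DVF, back with its pseudoinverse, and take the Euclidean residual) can be sketched in one dimension; the fixed-point iteration below is a simplified analogue of the paper's nearest-neighbor-plus-iterative-search generator, and the displacement field is invented:

```python
import numpy as np

def dvf(p):
    # Hypothetical smooth forward displacement field R -> S (mm).
    return 3.0 * np.sin(0.05 * p)

def pseudo_inverse_dvf(q, iters=25):
    # Fixed-point iteration for the inverse displacement at point q:
    # find d with q + d = p such that p + dvf(p) = q, i.e. d = -dvf(q + d).
    d = np.zeros_like(q)
    for _ in range(iters):
        d = -dvf(q + d)
    return d

# Self-consistency: map points R -> S with the DVF, back with its
# pseudoinverse, and measure the Euclidean residual.
pts = np.linspace(0.0, 40.0, 9)
fwd = pts + dvf(pts)
back = fwd + pseudo_inverse_dvf(fwd)
residual = np.abs(back - pts).max()
```

The iteration converges here because the field's gradient magnitude (0.15) is below one, making the update a contraction; for a nondiffeomorphic DVF the residual would expose the inconsistency, which is exactly what the paper measures.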
Improved 2D/3D registration robustness using local spatial information
NASA Astrophysics Data System (ADS)
De Momi, Elena; Eckman, Kort; Jaramaz, Branislav; DiGioia, Anthony, III
2006-03-01
Xalign is a tool designed to measure implant orientation after joint arthroplasty by co-registering a projection of an implant model and a digitally reconstructed radiograph of the patient's anatomy with a postoperative x-ray. A mutual information based registration method is used to automate alignment. When using basic mutual information, the presence of local maxima can result in misregistration. To increase the robustness of registration, our research is aimed at improving the similarity function by modifying the information measure and incorporating local spatial information. A test dataset with known ground-truth parameters was created to evaluate the performance of this measure. A synthetic radiograph was generated first from a preoperative pelvic CT scan to act as the gold standard. The voxel weights used to generate the image were then modified and new images were generated with the CT rigidly transformed. The roll, pitch, and yaw angles span a range of -10 to +10 degrees, while x, y, and z translations range from -10 mm to +10 mm. These images were compared with the reference image. The proposed cost function correctly identified the correct pose in all tests and did not exhibit any local maxima that would slow or prevent locating the global maximum.
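The basic mutual information measure that the paper starts from can be estimated from a joint intensity histogram; the sketch below shows plain MI only, without the local-spatial-information extension the authors propose:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Estimate MI (in nats) from the joint intensity histogram of two images.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
img = rng.uniform(0, 1, (64, 64))
shifted = np.roll(img, 5, axis=1)          # misregistered copy
mi_aligned = mutual_information(img, img)
mi_shifted = mutual_information(img, shifted)
```

For this texture-like test image, MI peaks at alignment and collapses under the 5-pixel shift, which is the behavior a registration optimizer climbs; the local maxima the abstract warns about appear with real anatomy, not with this synthetic field.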
Visible-Infrared Hyperspectral Image Projector
NASA Technical Reports Server (NTRS)
Bolcar, Matthew
2013-01-01
The VisIR HIP generates spatially-spectrally complex scenes. The generated scenes simulate real-world targets viewed by various remote sensing instruments. The VisIR HIP consists of two subsystems: a spectral engine and a spatial engine. The spectral engine generates spectrally complex uniform illumination that spans the wavelength range between 380 nm and 1,600 nm. The spatial engine generates two-dimensional gray-scale scenes. When combined, the two engines are capable of producing two-dimensional scenes with a unique spectrum at each pixel. The VisIR HIP can be used to calibrate any spectrally sensitive remote-sensing instrument. Tests were conducted on the Wide-field Imaging Interferometer Testbed at NASA's Goddard Space Flight Center. The device is a variation of the calibrated hyperspectral image projector developed by the National Institute of Standards and Technology in Gaithersburg, MD. It uses Gooch & Housego Visible and Infrared OL490 Agile Light Sources to generate arbitrary spectra. The two light sources are coupled to a digital light processing (DLP™) digital mirror device (DMD) that serves as the spatial engine. Scenes are displayed on the DMD synchronously with the desired spectrum. Scene/spectrum combinations are displayed in rapid succession, over time intervals that are short compared to the integration time of the system under test.
Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos
NASA Astrophysics Data System (ADS)
Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.
2018-04-01
Efficiently producing planetary mapping products from orbital remote sensing images remains a challenging task. Photogrammetric processing of planetary stereo images faces many disadvantages, such as the lack of ground control information and informative features; among these, image matching is the most difficult job in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM and orthophoto scheme is adopted in the DTM generation process, which helps reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.
Menze, Bjoern H; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-Andre; Szekely, Gabor; Ayache, Nicholas; Golland, Polina
2016-04-01
We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach on two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find that the generative model designed for tumor lesions generalizes well to stroke images, and that the extended discriminative model is one of the top-ranking methods in the BRATS evaluation.
2016-08-31
With the lights out, team members perform an optics test on the Advanced Baseline Imager, the primary optical instrument on the Geostationary Operational Environmental Satellite (GOES-R), inside the Astrotech payload processing facility in Titusville, Florida, near NASA's Kennedy Space Center. Carbon dioxide is sprayed on the imager to clean it and test its sensitivity. GOES-R will be the first satellite in a series of next-generation NOAA GOES satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-08-31
Team members prepare for an optics test on the Advanced Baseline Imager, the primary optical instrument on the Geostationary Operational Environmental Satellite (GOES-R), inside the Astrotech payload processing facility in Titusville, Florida, near NASA's Kennedy Space Center. Carbon dioxide will be sprayed on the imager to clean it and test its sensitivity. GOES-R will be the first satellite in a series of next-generation NOAA GOES satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors, such as dye choice and labeling strategy, microscope quality, user-defined parameters such as frame rate and frame number, and the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye, and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software development. PMID:24688813
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kohli, K; Liu, F; Krishnan, K
Purpose: Multi-frequency EIT has been reported to be a potential tool for distinguishing a tissue anomaly from background. In this study, we investigate the feasibility of acquiring functional information by comparing multi-frequency EIT images in reference to the structural information from a CT image through fusion. Methods: EIT data were acquired from a slice of winter melon using sixteen electrodes around the phantom, injecting a current of 0.4 mA at 100, 66, 24.8, and 9.9 kHz. Differential EIT images were generated by considering different combinations of frequency pairs, one serving as reference data and the other as test data. The experiment was repeated after creating an anomaly in the form of an off-centered cavity of diameter 4.5 cm inside the melon. All EIT images were reconstructed using the Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software (EIDORS) package in 2-D differential imaging mode using a one-step Gauss-Newton minimization solver. A CT image of the melon was obtained using a Philips CT scanner. A segmented binary mask image was generated based on the reference electrode position and the CT image to define the regions of interest. The region selected by the user was fused with the CT image through logical indexing. Results: Differential images based on the reference and test signal frequencies were reconstructed from EIT data. Results illustrated distinct structural inhomogeneity in the seeded region compared to the fruit flesh. The seeded region was seen as a higher-impedance region if the test frequency was lower than the base frequency in the differential EIT reconstruction. When the test frequency was higher than the base frequency, the signal experienced less electrical impedance in the seeded region during the EIT data acquisition.
Conclusion: Frequency-based differential EIT imaging can be explored to provide additional functional information along with structural information from CT for identifying different tissues.
NASA Technical Reports Server (NTRS)
Cramer, Alexander Krishnan
2014-01-01
This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun on a high altitude balloon. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small cross-shaped fiducial markers. Images of this plate taken with an off-the-shelf camera were processed to determine the relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, and identification with an ad-hoc method based on the spacing between fiducials. Performance is verified on real test data where possible; otherwise, artificially generated data are used. Pointing knowledge is ultimately verified to meet the 20 arcsecond requirement.
NASA Astrophysics Data System (ADS)
Ota, Junko; Umehara, Kensuke; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki
2017-02-01
As the capability of high-resolution displays grows, high-resolution images are often required in Computed Tomography (CT). However, acquiring high-resolution images takes a higher radiation dose and a longer scanning time. In this study, we applied the Sparse-coding-based Super-Resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared an over-complete dictionary that learned the mapping between low- and high-resolution patches and sought a sparse representation of each patch of the low-resolution input. These coefficients were used to generate the high-resolution output. For evaluation, 44 CT cases were used as the test dataset. We up-sampled images by a factor of 2 or 4 and compared the image quality of the ScSR scheme with bilinear and bicubic interpolations, the traditional interpolation schemes. We also compared the image quality obtained with three learning datasets: a total of 45 CT images, 91 non-medical images, and 93 chest radiographs were used for dictionary preparation, respectively. Image quality was evaluated by measuring peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The differences in PSNR and SSIM between the ScSR method and the interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated sharp high-resolution images, whereas the conventional interpolation methods generated over-smoothed images. Comparing the three training datasets, there was no significant difference among the CT, CXR, and non-medical datasets. These results suggest that ScSR provides a robust approach for up-sampling CT images and yields substantially higher image quality in the enlarged images.
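The PSNR figure of merit used in the evaluation above can be computed directly; the image and noise level below are synthetic:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB between a reference and a test image.
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (32, 32)).astype(float)
# Simulated degraded image: reference plus Gaussian noise, clipped to 8 bits.
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
val = psnr(ref, noisy)
print(round(val, 1))
```

Higher PSNR means the up-sampled image is closer to the full-resolution ground truth; the study pairs it with SSIM, which also weighs structural agreement.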
A Novel Defect Inspection Method for Semiconductor Wafer Based on Magneto-Optic Imaging
NASA Astrophysics Data System (ADS)
Pan, Z.; Chen, L.; Li, W.; Zhang, G.; Wu, P.
2013-03-01
Defects in semiconductor wafers may be generated during the manufacturing processes. A novel defect inspection method for semiconductor wafers is presented in this paper. The method is based on magneto-optic imaging, which involves inducing an eddy current into the wafer under test and detecting the magnetic flux associated with the eddy current distribution in the wafer by exploiting the Faraday rotation effect. The generated magneto-optic image may contain noise that degrades the overall image quality; therefore, in order to remove the unwanted noise present in the magneto-optic image, an image enhancement approach using multi-scale wavelets is presented, and an image segmentation approach based on the integration of a watershed algorithm and a clustering strategy is given. The experimental results show that many types of wafer defects, such as holes and scratches, can be detected by the proposed method.
NASA Astrophysics Data System (ADS)
Mondini, Alessandro C.; Chang, Kang-Tsung; Chiang, Shou-Hao; Schlögel, Romy; Notarnicola, Claudia; Saito, Hitoshi
2017-12-01
We propose a framework to systematically generate event landslide inventory maps from satellite images in southern Taiwan, where landslides are frequent and abundant. The spectral information is used to assess the pixel land cover class membership probability through a Maximum Likelihood classifier trained with randomly generated synthetic land cover spectral fingerprints, which are obtained from an independent training image dataset. Pixels are classified as landslides when the calculated landslide class membership probability, weighted by a susceptibility model, is higher than the membership probabilities of the other classes. We generated synthetic fingerprints from two FORMOSAT-2 images acquired in 2009 and tested the procedure on two other images, one from 2005 and the other from 2009. We also obtained two landslide maps through manual interpretation. The agreement between the two sets of inventories is given by Cohen's kappa coefficients of 0.62 and 0.64, respectively. This procedure can now classify a new FORMOSAT-2 image automatically, facilitating the production of landslide inventory maps.
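The classification rule described, Gaussian maximum likelihood weighted by a susceptibility prior, can be sketched for a single spectral band; this illustrative version (one feature, hypothetical class parameters) is not the authors' multi-band implementation:

```python
import numpy as np

def ml_classify(pixels, means, variances, priors):
    """Assign each pixel the class maximizing Gaussian log-likelihood plus
    the log of a per-class prior (e.g. a landslide susceptibility weight)."""
    x = np.asarray(pixels, dtype=float)[:, None]        # shape (n, 1)
    mu = np.asarray(means, dtype=float)[None, :]        # shape (1, k)
    var = np.asarray(variances, dtype=float)[None, :]
    log_lik = -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)
    log_post = log_lik + np.log(np.asarray(priors, dtype=float))
    return np.argmax(log_post, axis=1)
```

In the paper's setting the prior for the landslide class would come from the susceptibility model rather than a fixed constant.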
Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM
NASA Technical Reports Server (NTRS)
Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip
2017-01-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long-term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24-hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.
VESGEN Software for Mapping and Quantification of Vascular Regulators
NASA Technical Reports Server (NTRS)
Parsons-Wingerter, Patricia A.; Vickerman, Mary B.; Keith, Patricia A.
2012-01-01
VESsel GENeration (VESGEN) Analysis is automated software that maps and quantifies the effects of vascular regulators on vascular morphology by analyzing important vessel parameters. Quantification parameters include vessel diameter, length, branch points, density, and fractal dimension. For vascular trees, measurements are reported as dependent functions of vessel branching generation. VESGEN maps and quantifies vascular morphological events according to fractal-based vascular branching generation. It also relies on careful imaging of branching and networked vascular form. It was developed as a plug-in for ImageJ (National Institutes of Health, USA). VESGEN uses the image-processing concepts of 8-neighbor pixel connectivity, skeletonization, and distance maps to analyze 2D, black-and-white (binary) images of vascular trees, networks, and tree-network composites. VESGEN typically maps 5 to 12 (or more) generations of vascular branching, starting from a single parent vessel. These generations are tracked and measured for critical vascular parameters that include vessel diameter, length, density and number, and tortuosity per branching generation. The effects of vascular therapeutics and regulators on vascular morphology and branching, tested in human clinical or laboratory animal experimental studies, are quantified by comparing vascular parameters with control groups. VESGEN provides a user interface that both guides and gives control over the user's vascular analysis process. An option is provided to select the morphological tissue type of vascular trees, networks, or tree-network composites, which determines the general collection of algorithms, intermediate images, and output images and measurements that will be produced.
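As a toy illustration of the 8-neighbor connectivity analysis VESGEN builds on, the sketch below counts branch-point candidates in a binary skeleton as pixels with three or more set 8-neighbors; this naive rule is a simplified stand-in, not VESGEN's algorithm:

```python
import numpy as np

def branch_points(skel):
    """Count skeleton pixels with >= 3 set 8-neighbors (branch candidates)."""
    s = np.pad(np.asarray(skel, dtype=int), 1)  # zero border for safe shifts
    neighbors = sum(np.roll(np.roll(s, di, 0), dj, 1)
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
    return int(np.sum((s == 1) & (neighbors >= 3)))
```

On a plus-shaped skeleton this rule flags the crossing pixel and its four on-vessel neighbors, which is why real skeleton analyses add pruning steps on top of the raw neighbor count.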
Hierarchical storage of large volumes of multidetector CT data using distributed servers
NASA Astrophysics Data System (ADS)
Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David
2006-03-01
Multidetector scanners and hybrid multimodality scanners can generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images and 3D rendered images, as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing, without the need for long-term storage in a PACS archive. With the relatively low cost of storage devices it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple Computer with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software called OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called "Bonjour". This architecture offers seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to centralized PACS architectures while avoiding complex and time-consuming data transfer and storage.
Computer Generated Hologram System for Wavefront Measurement System Calibration
NASA Technical Reports Server (NTRS)
Olczak, Gene
2011-01-01
Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.
A Workflow to Improve the Alignment of Prostate Imaging with Whole-mount Histopathology.
Yamamoto, Hidekazu; Nir, Dror; Vyas, Lona; Chang, Richard T; Popert, Rick; Cahill, Declan; Challacombe, Ben; Dasgupta, Prokar; Chandra, Ashish
2014-08-01
Evaluation of prostate imaging tests against whole-mount histology specimens requires accurate alignment between radiologic and histologic data sets. Misalignment results in false-positive and -negative zones as assessed by imaging. We describe a workflow for three-dimensional alignment of prostate imaging data against whole-mount prostatectomy reference specimens and assess its performance against a standard workflow. Ethical approval was granted. Patients underwent motorized transrectal ultrasound (Prostate Histoscanning) to generate a three-dimensional image of the prostate before radical prostatectomy. The test workflow incorporated steps for axial alignment between imaging and histology, size adjustments following formalin fixation, and use of custom-made parallel cutters and digital caliper instruments. The control workflow comprised freehand cutting and assumed homogeneous block thicknesses at the same relative angles between pathology and imaging sections. Thirty radical prostatectomy specimens were histologically and radiologically processed, either by an alignment-optimized workflow (n = 20) or a control workflow (n = 10). The optimized workflow generated tissue blocks of heterogeneous thicknesses but with no significant drifting in the cutting plane. The control workflow resulted in significantly nonparallel blocks, accurately matching only one out of four histology blocks to their respective imaging data. The image-to-histology alignment accuracy was 20% greater in the optimized workflow (P < .0001), with higher sensitivity (85% vs. 69%) and specificity (94% vs. 73%) for margin prediction in a 5 × 5-mm grid analysis. A significantly better alignment was observed in the optimized workflow. Evaluation of prostate imaging biomarkers using whole-mount histology references should include a test-to-reference spatial alignment workflow. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
2011-01-01
Background Hypertension may increase tortuosity or twistedness of arteries. We applied a centerline extraction algorithm and tortuosity metric to magnetic resonance angiography (MRA) brain images to quantitatively measure the tortuosity of arterial vessel centerlines. The most commonly used arterial tortuosity measure is the distance factor metric (DFM). This study tested a DFM-based measurement's ability to detect increases in arterial tortuosity of hypertensives using existing images. Existing images presented challenges such as different resolutions, which may affect the tortuosity measurement, different depths of the area imaged, and different imaging artifacts that require filtering. Methods The stability and accuracy of alternative centerline algorithms were validated in numerically generated models and test brain MRA data. Existing images were gathered from previous studies and clinical medical systems by manually reading electronic medical records to identify hypertensives and negatives. Images of different resolutions were interpolated to similar resolutions. Arterial tortuosity in MRA images was measured from a DFM curve and tested on numerically generated models as well as MRA images from two hypertensive and three negative control populations. Comparisons were made between different resolutions, different filters, hypertensives versus negatives, and different negative controls. Results In tests using numerical models of a simple helix, the measured tortuosity increased as expected with more tightly coiled helices. Interpolation reduced resolution-dependent differences in measured tortuosity. The Korean hypertensive population had significantly higher arterial tortuosity than its corresponding negative control population across multiple arteries. In addition, one negative control population of different ethnicity had significantly less arterial tortuosity than the other two.
Conclusions Tortuosity can be compared between images of different resolutions by interpolating from lower to higher resolutions. Use of a universal negative control was not possible in this study. The method described here detected elevated arterial tortuosity in a hypertensive population compared to the negative control population and can be used to study this relation in other populations. PMID:22166145
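The distance factor metric used in this study is the ratio of centerline path length to the straight-line distance between its endpoints; a minimal numpy sketch, with a helix test model analogous to the paper's validation (helix radius, pitch, and sampling are illustrative):

```python
import numpy as np

def distance_factor_metric(points):
    """DFM tortuosity of a polyline centerline: path length / chord length.
    A perfectly straight vessel gives 1.0; coiling raises the value."""
    p = np.asarray(points, dtype=float)
    path = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    chord = np.linalg.norm(p[-1] - p[0])
    return path / chord

# Helix models: doubling the coiling rate should increase measured tortuosity
t = np.linspace(0.0, 4.0 * np.pi, 500)
loose = np.column_stack([np.cos(t), np.sin(t), t])
tight = np.column_stack([np.cos(2 * t), np.sin(2 * t), t])
```

As in the study's numerical tests, the more tightly coiled helix yields the larger DFM value.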
Diedrich, Karl T; Roberts, John A; Schmidt, Richard H; Kang, Chang-Ki; Cho, Zang-Hee; Parker, Dennis L
2011-10-18
Integration of Irma tactical scene generator into directed-energy weapon system simulation
NASA Astrophysics Data System (ADS)
Owens, Monte A.; Cole, Madison B., III; Laine, Mark R.
2003-08-01
Integrated high-fidelity physics-based simulations that include engagement models, image generation, electro-optical hardware models and control system algorithms have previously been developed by Boeing-SVS for various tracking and pointing systems. These simulations, however, had always used images with featureless or random backgrounds and simple target geometries. With the requirement to engage tactical ground targets in the presence of cluttered backgrounds, a new type of scene generation tool was required to fully evaluate system performance in this challenging environment. To answer this need, Irma was integrated into the existing suite of Boeing-SVS simulation tools, allowing scene generation capabilities with unprecedented realism. Irma is a US Air Force research tool used for high-resolution rendering and prediction of target and background signatures. The MATLAB/Simulink-based simulation achieves closed-loop tracking by running track algorithms on the Irma-generated images, processing the track errors through optical control algorithms, and moving simulated electro-optical elements. The geometry of these elements determines the sensor orientation with respect to the Irma database containing the three-dimensional background and target models. This orientation is dynamically passed to Irma through a Simulink S-function to generate the next image. This integrated simulation provides a test-bed for development and evaluation of tracking and control algorithms against representative images including complex background environments and realistic targets calibrated using field measurements.
Teich, Sorin; Al-Rawi, Wisam; Heima, Masahiro; Faddoul, Fady F; Goldzweig, Gil; Gutmacher, Zvi; Aizenbud, Dror
2016-10-01
To evaluate the image quality generated by eight commercially available intraoral sensors, eighteen clinicians ranked the quality of a bitewing acquired from one subject using the eight different sensors. Analytical methods used to evaluate clinical image quality included the Visual Grading Characteristics method, which helps to quantify subjective opinions to make them suitable for analysis. The Dexis sensor was ranked significantly better than the Sirona and Carestream-Kodak sensors, and the image captured using the Carestream-Kodak sensor was ranked significantly worse than those captured using the Dexis, Schick, and Cyber Medical Imaging sensors. The Image Works sensor image was rated the lowest by all clinicians. Other comparisons yielded non-significant results. None of the sensors was considered to generate images of significantly better quality than all of the other sensors tested. Further research should be directed towards determining the clinical significance of the differences in image quality reported in this study. © 2016 FDI World Dental Federation.
Generating region proposals for histopathological whole slide image retrieval.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu; Shi, Jun
2018-06-01
Content-based image retrieval is an effective method for histopathological image analysis. However, given a database of huge whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is significant and difficult. Moreover, histopathological images can only be annotated by pathologists, resulting in a lack of labeling information. Therefore, it is an important and challenging task to generate ROIs from WSIs and retrieve images with few labels. This paper presents a novel unsupervised region-proposing method for histopathological WSIs based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region mergence and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that for one WSI, our region-proposing method can generate 7.3 thousand contoured regions which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reach a precision of 91% with only 10% of images labeled. The unsupervised region-proposing method can generate regions as predictions of lesions in histopathological WSIs. The region proposals can also serve as training samples to train machine-learning models for image retrieval. The proposed hashing method can achieve fast and precise image retrieval with a small number of labels. Furthermore, the proposed methods can potentially be applied in online computer-aided-diagnosis systems.
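Once images are reduced to binary hashing codes, retrieval is a Hamming-distance ranking; the sketch below shows that final step only (the codes themselves would come from the paper's LDA-plus-Supervised-Hashing pipeline, which is not reproduced here):

```python
import numpy as np

def hamming_retrieve(query_code, db_codes, top_k=3):
    """Return indices of the top_k database codes closest to the query
    in Hamming distance (ties broken by database order)."""
    q = np.asarray(query_code, dtype=np.uint8)
    db = np.asarray(db_codes, dtype=np.uint8)
    dist = np.count_nonzero(db != q, axis=1)   # per-row Hamming distance
    return np.argsort(dist, kind="stable")[:top_k]
```

Because the distance is a bitwise comparison, ranking 136 thousand codes stays fast even on modest hardware, which is consistent with the sub-second retrieval times the paper reports.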
Copyright © 2018 Elsevier B.V. All rights reserved.
Random ambience using high fidelity images
NASA Astrophysics Data System (ADS)
Abu, Nur Azman; Sahib, Shahrin
2011-06-01
Most secure communication nowadays mandates true random keys as an input. These operations are mostly designed and taken care of by the developers of the cryptosystem. Due to the nature of confidential crypto development today, pseudorandom keys are typically designed and still preferred by the developers of the cryptosystem. However, pseudorandom keys are predictable, periodic, and repeatable, and hence carry minimal entropy. True random keys are believed to be generated only via hardware random number generators, and careful statistical analysis is still required to have any confidence that the process and apparatus generate numbers sufficiently random for cryptographic use. In the underlying research, each moment in life is considered unique in itself: the random key is unique for the given moment, generated by the user whenever he or she needs random keys in practical secure communication. An ambience captured as a high-fidelity digital image is tested for its randomness according to the NIST Statistical Test Suite. A recommendation on generating simple random cryptographic keys live at 4 megabits per second is reported.
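The simplest member of the NIST Statistical Test Suite mentioned above is the frequency (monobit) test; a minimal sketch of it (the generic NIST SP 800-22 formula, not the authors' test harness):

```python
import numpy as np
from math import erfc, sqrt

def monobit_test(bits):
    """NIST frequency (monobit) test: p-value for the hypothesis that the
    bit stream is balanced; p >= 0.01 is the conventional pass threshold."""
    b = np.asarray(bits, dtype=int)
    s = np.sum(2 * b - 1)                  # map {0,1} -> {-1,+1}, then sum
    return erfc(abs(s) / sqrt(2.0 * b.size))
```

Bits extracted from the image ambience would be fed in place of toy sequences; passing this single test is necessary but far from sufficient for cryptographic use.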
Dynamical Modeling of NGC 6397: Simulated HST Imaging
NASA Astrophysics Data System (ADS)
Dull, J. D.; Cohn, H. N.; Lugger, P. M.; Slavin, S. D.; Murphy, B. W.
1994-12-01
The proximity of NGC 6397 (2.2 kpc) provides an ideal opportunity to test current dynamical models for globular clusters with the HST Wide-Field/Planetary Camera (WFPC2). We have used a Monte Carlo algorithm to generate ensembles of simulated Planetary Camera (PC) U-band images of NGC 6397 from evolving, multi-mass Fokker-Planck models. These images, which are based on the post-repair HST-PC point-spread function, are used to develop and test analysis methods for recovering structural information from actual HST imaging. We have considered a range of exposure times up to 2.4 × 10^4 s, based on our proposed HST Cycle 5 observations. Our Fokker-Planck models include energy input from dynamically formed binaries. We have adopted a 20-group mass spectrum extending from 0.16 to 1.4 M_sun. We use theoretical luminosity functions for red giants and main sequence stars. Horizontal branch stars, blue stragglers, white dwarfs, and cataclysmic variables are also included. Simulated images are generated for cluster models at both maximal core collapse and at a post-collapse bounce. We are carrying out stellar photometry on these images using "DAOPHOT-assisted aperture photometry" software that we have developed. We are testing several techniques for analyzing the resulting star counts to determine the underlying cluster structure, including parametric model fits and nonparametric density estimation methods. Our simulated images also allow us to investigate the accuracy and completeness of methods for carrying out stellar photometry in HST Planetary Camera images of dense cluster cores.
Initial Navigation Alignment of Optical Instruments on GOES-R
NASA Technical Reports Server (NTRS)
Isaacson, Peter J.; DeLuccia, Frank J.; Reth, Alan D.; Igli, David A.; Carter, Delano R.
2016-01-01
Post-launch alignment errors for the Advanced Baseline Imager (ABI) and Geospatial Lightning Mapper (GLM) on GOES-R may be too large for the image navigation and registration (INR) processing algorithms to function without an initial adjustment to calibration parameters. We present an approach that leverages a combination of user-selected image-to-image tie points and image correlation algorithms to estimate this initial launch-induced offset and calculate adjustments to the Line of Sight Motion Compensation (LMC) parameters. We also present an approach to generate synthetic test images, to which shifts and rotations of known magnitude are applied. Results of applying the initial alignment tools to a subset of these synthetic test images are presented. The results for both ABI and GLM are within the specifications established for these tools, and indicate that application of these tools during the post-launch test (PLT) phase of GOES-R operations will enable the automated INR algorithms for both instruments to function as intended.
Garment Counting in a Textile Warehouse by Means of a Laser Imaging System
Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban
2013-01-01
Textile logistic warehouses are highly automated, mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low-cost, small-size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks, generated using an array of phototransistor sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested in two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760
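In the simplest reading of such a system, counting garments reduces to counting occlusion runs in a 1-D profile from the phototransistor array; this hypothetical sketch is not the deployed vision pipeline (the threshold and profile format are invented for illustration):

```python
import numpy as np

def count_hangers(profile, threshold=0.5):
    """Count hanger hooks in a 1-D occlusion profile: each contiguous run
    of samples above threshold is taken as one hook."""
    occluded = np.asarray(profile, dtype=float) > threshold
    rising = occluded[1:] & ~occluded[:-1]   # run starts after the first sample
    return int(rising.sum()) + int(occluded[0])
```

A real deployment would add debouncing and hook-width checks; run counting is only the core idea.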
A novel method for repeatedly generating speckle patterns used in digital image correlation
NASA Astrophysics Data System (ADS)
Zhang, Juan; Sweedy, Ahmed; Gitzhofer, François; Baroud, Gamal
2018-01-01
Speckle patterns play a key role in Digital Image Correlation (DIC) measurement, and generating an optimal speckle pattern has been a goal for decades. The usual method of generating a speckle pattern is by manually spraying paint on the specimen. However, this makes it difficult to reproduce the optimal pattern for maintaining identical testing conditions and achieving consistent DIC results. This study proposed and evaluated a novel method using an atomization system to repeatedly generate speckle patterns. To verify the repeatability of the speckle patterns generated by this system, simulation and experimental studies were systematically performed. The results from both studies showed that the speckle patterns and, accordingly, the DIC measurements became highly accurate and repeatable using the proposed atomization system.
Localizing tuberculosis in chest radiographs with deep learning
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Jaeger, Stefan; Antani, Sameer; Long, L. Rodney; Karargyris, Alexandros; Siegelman, Jenifer; Folio, Les R.; Thoma, George R.
2018-03-01
Chest radiography (CXR) has been used as an effective tool for screening tuberculosis (TB). Because of the lack of radiological expertise in resource-constrained regions, automatic analysis of CXR is appealing as a "first reader". In addition to screening the CXR for disease, it is critical to highlight locations of the disease in abnormal CXRs. In this paper, we focus on the task of locating TB in CXRs which is more challenging due to the intrinsic difficulty of locating the abnormality. The method is based on applying a convolutional neural network (CNN) to classify the superpixels generated from the lung area. Specifically, it consists of four major components: lung ROI extraction, superpixel segmentation, multi-scale patch generation/labeling, and patch classification. The TB regions are located by identifying those superpixels whose corresponding patches are classified as abnormal by the CNN. The method is tested on a publicly available TB CXR dataset which contains 336 TB images showing various manifestations of TB. The TB regions in the images were marked by radiologists. To evaluate the method, the images are split into training, validation, and test sets with all the manifestations being represented in each set. The performance is evaluated at both the patch level and image level. The classification accuracy on the patch test set is 72.8% and the average Dice index for the test images is 0.67. The factors that may contribute to misclassification are discussed and directions for future work are addressed.
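The reported average Dice index of 0.67 measures overlap between predicted and radiologist-marked TB regions; a minimal sketch of the Dice computation on binary masks (the generic formula, not the paper's evaluation script):

```python
import numpy as np

def dice_index(pred, truth):
    """Dice overlap of two binary masks: 2 * intersection / (|P| + |T|),
    in [0, 1]; by convention two empty masks score 1.0."""
    p = np.asarray(pred, dtype=bool)
    t = np.asarray(truth, dtype=bool)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
```

Here the predicted mask would be the union of superpixels the CNN labels abnormal, and the reference mask the radiologists' markings.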
Reliability of functional MR imaging with word-generation tasks for mapping Broca's area.
Brannen, J H; Badie, B; Moritz, C H; Quigley, M; Meyerand, M E; Haughton, V M
2001-10-01
Functional MR (fMR) imaging of word generation has been used to map Broca's area in some patients selected for craniotomy. The purpose of this study was to measure the reliability, precision, and accuracy of word-generation tasks to identify Broca's area. The Brodmann areas activated during performance of word-generation tasks were tabulated in 34 consecutive patients referred for fMR imaging mapping of language areas. In patients performing two iterations of the letter word-generation tasks, test-retest reliability was quantified by using the concurrence ratio (CR), or the number of voxels activated by each iteration in proportion to the average number of voxels activated from both iterations of the task. Among patients who also underwent category or antonym word generation or both, the similarity of the activation from each task was assessed with the CR. In patients who underwent electrocortical stimulation (ECS) mapping of speech function during craniotomy while awake, the sites with speech function were compared with the locations of activation found during fMR imaging of word generation. In 31 of 34 patients, activation was identified in the inferior frontal gyri or middle frontal gyri or both in Brodmann areas 9, 44, 45, or 46, unilaterally or bilaterally, with one or more of the tasks. Activation was noted in the same gyri when the patient performed a second iteration of the letter word-generation task or second task. The CR for pixel precision in a single section averaged 49%. In patients who underwent craniotomy while awake, speech areas located with ECS coincided with areas of the brain activated during a word-generation task. fMR imaging with word-generation tasks produces technically satisfactory maps of Broca's area, which localize the area accurately and reliably.
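The concurrence ratio (CR) defined above, overlap between two activation maps relative to the average number of activated voxels, can be sketched directly; this reading of the definition is an interpretation of the abstract, not the authors' code:

```python
def concurrence_ratio(run1, run2):
    """CR between two runs, given as collections of activated voxel ids:
    voxels active in both runs / mean voxels active per run."""
    a, b = set(run1), set(run2)
    mean_active = (len(a) + len(b)) / 2.0
    return len(a & b) / mean_active if mean_active else 0.0
```

Perfectly reproducible activation gives CR = 1.0; the study's mean of roughly 49% indicates moderate voxel-level test-retest overlap.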
A stochastically fully connected conditional random field framework for super resolution OCT
NASA Astrophysics Data System (ADS)
Boroomand, A.; Tan, B.; Wong, A.; Bizheva, K.
2017-02-01
A number of factors can degrade the resolution and contrast of OCT images, such as: (1) changes of the OCT point-spread function (PSF) resulting from wavelength-dependent scattering and absorption of light along the imaging depth; (2) speckle noise; and (3) motion artifacts. We propose a new Super Resolution OCT (SR OCT) imaging framework that takes advantage of a Stochastically Fully Connected Conditional Random Field (SF-CRF) model to generate a super-resolved OCT image of higher quality from a set of Low-Resolution OCT (LR OCT) images. The proposed SF-CRF SR OCT imaging is able to simultaneously compensate for all of the factors mentioned above that degrade the OCT image quality, using a unified computational framework. The proposed SF-CRF SR OCT imaging framework was tested on a set of simulated LR human retinal OCT images generated from a high-resolution, high-contrast retinal image, and on a set of in-vivo, high-resolution, high-contrast rat retinal OCT images. The reconstructed SR OCT images show considerably higher spatial resolution, less speckle noise, and higher contrast compared to other tested methods. Visual assessment of the results demonstrated the usefulness of the proposed approach in better preservation of fine details and structures of the imaged sample, retaining biological tissue boundaries while reducing speckle noise using a unified computational framework. Quantitative evaluation using both the Contrast-to-Noise Ratio (CNR) and the Edge Preservation (EP) parameter also showed superior performance of the proposed SF-CRF SR OCT approach compared to other image processing approaches.
Experimental image alignment system
NASA Technical Reports Server (NTRS)
Moyer, A. L.; Kowel, S. T.; Kornreich, P. G.
1980-01-01
A microcomputer-based instrument for image alignment with respect to a reference image is described which uses the DEFT sensor (Direct Electronic Fourier Transform) for image sensing and preprocessing. The instrument alignment algorithm which uses the two-dimensional Fourier transform as input is also described. It generates signals used to steer the stage carrying the test image into the correct orientation. This algorithm has computational advantages over algorithms which use image intensity data as input and is suitable for a microcomputer-based instrument since the two-dimensional Fourier transform is provided by the DEFT sensor.
Kalpathy-Cramer, Jayashree; de Herrera, Alba García Seco; Demner-Fushman, Dina; Antani, Sameer; Bedrick, Steven; Müller, Henning
2014-01-01
Medical image retrieval and classification have been extremely active research topics over the past 15 years. With the ImageCLEF benchmark in medical image retrieval and classification a standard test bed was created that allows researchers to compare their approaches and ideas on increasingly large and varied data sets including generated ground truth. This article describes the lessons learned in ten evaluation campaigns. A detailed analysis of the data also highlights the value of the resources created. PMID:24746250
Application of Deep Learning to Detect Precursors of Tropical Cyclone
NASA Astrophysics Data System (ADS)
Matsuoka, D.; Nakano, M.; Sugiyama, D.; Uchida, S.
2017-12-01
Tropical cyclones (TCs) cause significant damage to human society. Predicting TC generation as early as possible is an important issue from both academic and societal perspectives. In the present work, we investigate the possibility of predicting TCs seven days in advance using deep neural networks. The training data are produced from a 30-year cloud-resolving global atmospheric simulation (NICAM) with 14 km horizontal resolution (Kodama et al., 2015). We applied a TC tracking algorithm (Sugi et al., 2002; Nakano et al., 2015) to the NICAM simulation data in order to generate labeled cloud images (horizontal sizes are 800-1,000 km). We generated approximately one million images of "TCs (including their precursors)" and "non-TCs (low-pressure clouds)". We built ten image classifiers based on a 2-dimensional convolutional neural network, each comprising four convolutional layers, three pooling layers and two fully connected layers. The final predictions are obtained as the ensemble mean of their outputs. The classifiers were applied to untrained global simulation data (four million test images). As a result, we succeeded in predicting the precursors of TCs seven and five days before their formation with a recall of 88.6% and 89.6%, respectively (precision is 11.4%).
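The ensemble averaging and the recall/precision figures above can be sketched as follows. The confusion-matrix counts are hypothetical, chosen only to reproduce the high-recall, low-precision regime the abstract reports; they are not the paper's actual counts.

```python
def ensemble_predict(probabilities):
    """Average the per-classifier TC probabilities (the work averages
    ten CNN outputs) and threshold at 0.5 for the final label."""
    p = sum(probabilities) / len(probabilities)
    return p, p >= 0.5

def recall_precision(tp, fn, fp):
    """Recall = TP/(TP+FN); Precision = TP/(TP+FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical counts illustrating the reported regime:
r, p = recall_precision(tp=886, fn=114, fp=6886)
print(round(r, 3), round(p, 3))   # high recall, low precision
```

Low precision with high recall is typical for rare-event screening: the detector flags many false alarms but misses few genuine precursors.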
X-Ray Phantom Development For Observer Performance Studies
NASA Astrophysics Data System (ADS)
Kelsey, C. A.; Moseley, R. D.; Mettler, F. A.; Parker, T. W.
1981-07-01
The requirements for radiographic imaging phantoms for observer performance testing include realistic tasks which mimic at least some portion of the diagnostic examination presented in a setting which approximates clinically derived images. This study describes efforts to simulate chest and vascular diseases for evaluation of conventional and digital radiographic systems. Images of lung nodules, pulmonary infiltrates, as well as hilar and mediastinal masses are generated with a conventional chest phantom to make up chest disease test series. Vascular images are simulated by hollow tubes embedded in tissue density plastic with widening and narrowing added to mimic aneurysms and stenoses. Both sets of phantoms produce images which allow simultaneous determination of true positive and false positive rates as well as complete ROC curves.
The FBI compression standard for digitized fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
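The uniform scalar quantization step at the heart of the wavelet/scalar quantization method can be sketched as below. The dead-zone width, step size, and coefficient values are illustrative assumptions, not the parameters of the FBI specification.

```python
def quantize(coeffs, step, dead_zone=1.2):
    """Uniform scalar quantizer with a widened dead zone around zero,
    in the spirit of wavelet/scalar quantization.  Parameter values
    here are illustrative, not taken from the FBI standard."""
    out = []
    for c in coeffs:
        if abs(c) < dead_zone * step / 2:
            out.append(0)                       # small coefficients zeroed
        else:
            q = int(abs(c) / step)              # bin index by truncation
            out.append(q if c > 0 else -q)
    return out

def dequantize(indices, step):
    """Reconstruct approximate coefficients from bin indices."""
    return [q * step for q in indices]

coeffs = [0.3, -0.2, 5.7, -8.1, 0.1, 12.4]      # made-up subband values
idx = quantize(coeffs, step=2.0)
print(idx, dequantize(idx, 2.0))
```

Zeroing the many near-zero subband coefficients is what makes the subsequent entropy coding achieve the roughly 15:1 ratios quoted above.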
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
Multiple Image Arrangement for Subjective Quality Assessment
NASA Astrophysics Data System (ADS)
Wang, Yan; Zhai, Guangtao
2017-12-01
Subjective quality assessment serves as the foundation for almost all visual quality related research. The size of image quality databases has expanded from dozens to thousands in the last decades. Since each subjective rating therein has to be averaged over quite a few participants, the ever-increasing overall size of those databases calls for an evolution of existing subjective test methods. Traditional single/double stimulus based approaches are being replaced by multiple image tests, where several distorted versions of the original image are displayed and rated at once. This naturally brings up the question of how to arrange those multiple images on screen during the test. In this paper, we answer this question by performing a subjective viewing test with an eye tracker for different types of arrangements. Our research indicates that the isometric arrangement imposes less duress on participants and has a more uniform distribution of eye fixations and movements, and is therefore expected to generate more reliable subjective ratings.
NASA Astrophysics Data System (ADS)
Miller, Victor; Jens, Elizabeth T.; Mechentel, Flora S.; Cantwell, Brian J.; Stanford Propulsion; Space Exploration Group Team
2014-11-01
In this work, we present observations of the overall features and dynamics of flow and combustion in a slab-type hybrid rocket combustor. Tests were conducted in the recently upgraded Stanford Combustion Visualization Facility, a hybrid rocket combustor test platform capable of generating constant mass-flux flows of oxygen. High-speed (3 kHz) schlieren and OH chemiluminescence imaging were used to visualize the flow. We present imaging results for the combustion of two different fuel grains, a classic, low regression rate polymethyl methacrylate (PMMA), and a high regression rate paraffin, and all tests were conducted in gaseous oxygen. Each fuel grain was tested at multiple free-stream pressures at constant oxidizer mass flux (40 kg/m2s). The resulting image sequences suggest that aspects of the dynamics and scaling of the system depend strongly on both pressure and type of fuel.
Generative adversarial networks for brain lesion detection
NASA Astrophysics Data System (ADS)
Alex, Varghese; Safwan, K. P. Mohammed; Chennamsetty, Sai Saketh; Krishnamurthi, Ganapathy
2017-02-01
Manual segmentation of brain lesions from Magnetic Resonance Images (MRI) is cumbersome and introduces errors due to inter-rater variability. This paper introduces a semi-supervised technique for detection of brain lesions from MRI using Generative Adversarial Networks (GANs). A GAN comprises a Generator network and a Discriminator network which are trained simultaneously, with the objective of one bettering the other. The networks were trained using non-lesion patches (n=13,000) from 4 different MR sequences. The network was trained on the BraTS dataset and patches were extracted from regions excluding the tumor region. The Generator network generates data by modeling the underlying probability distribution of the training data, P(Data). The Discriminator learns the posterior probability P(Label | Data) by classifying training data and generated data as "Real" or "Fake", respectively. The Generator, upon learning the joint distribution, produces images/patches such that the performance of the Discriminator on them is random, i.e., P(Label | Data = Generated Data) = 0.5. During testing, the Discriminator assigns posterior probability values close to 0.5 to patches from non-lesion regions, while patches centered on lesions arise from a different distribution (P(Lesion)) and hence are assigned lower posterior probability values by the Discriminator. On the test set (n=14), the proposed technique achieves a whole-tumor dice score of 0.69, sensitivity of 91% and specificity of 59%. Additionally, the generator network was capable of generating non-lesion patches from various MR sequences.
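The dice score reported above is the standard overlap measure between a predicted and a ground-truth mask; a minimal sketch on flat binary masks (the example masks are hypothetical):

```python
def dice(pred, truth):
    """Dice coefficient between predicted and ground-truth lesion
    masks given as flat 0/1 lists: 2|A intersect B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * inter / size if size else 1.0

pred = [1, 1, 1, 0, 0, 1, 0, 0]    # made-up predicted mask
truth = [1, 1, 0, 0, 1, 1, 0, 0]   # made-up ground-truth mask
print(dice(pred, truth))
```

A dice score of 0.69 for whole tumor, as reported, means roughly two thirds overlap between detection and ground truth by this measure.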
Fundamental performance differences of CMOS and CCD imagers: part V
NASA Astrophysics Data System (ADS)
Janesick, James R.; Elliott, Tom; Andrews, James; Tower, John; Pinter, Jeff
2013-02-01
Previous papers delivered over the last decade have documented developmental progress made on large pixel scientific CMOS imagers that match or surpass CCD performance. New data and discussions presented in this paper include: 1) a new buried channel CCD fabricated on a CMOS process line, 2) new data products generated by high performance custom scientific CMOS 4T/5T/6T PPD pixel imagers, 3) ultimate CTE and speed limits for large pixel CMOS imagers, 4) fabrication and test results of a flight 4k x 4k CMOS imager for NRL's SoloHi Solar Orbiter Mission, 5) a progress report on ultra large stitched Mk x Nk CMOS imagers, 6) data generated by on-chip sub-electron CDS signal chain circuitry used in our imagers, 7) CMOS and CMOS/CCD proton and electron radiation damage data for dose levels up to 10 Mrd, 8) discussions and data for a new class of PMOS pixel CMOS imagers and 9) future CMOS development work planned.
Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo
2017-09-01
To develop an innovative finite element (FE) model of lung parenchyma which simulates pulmonary emphysema on CT imaging. The model aims to generate a set of digital phantoms of low-attenuation area (LAA) images with different grades of emphysema severity. Four individual parameter configurations simulating different grades of emphysema severity were utilized to generate 40 FE models using ten randomizations for each setting. We compared two measures of emphysema severity (relative area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes) between the simulated LAA images and those computed directly on the model outputs (considered as reference). The LAA images obtained from our model output can simulate CT-LAA images in subjects with different grades of emphysema severity. Both RA and D computed on simulated LAA images were underestimated compared to those calculated on the model outputs, suggesting that measurements in CT imaging may not be accurate in the assessment of real emphysema extent. Our model is able to mimic the cluster-size distribution of LAA on CT imaging of subjects with pulmonary emphysema. The model could be useful to generate standard test images and to design physical phantoms of LAA images for assessing the accuracy of indexes for the radiologic quantitation of emphysema.
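The two severity indexes compared above can be sketched in a few lines. The -950 HU threshold for RA and the power-law form Y(s) ~ s^-D for the cumulative cluster-size distribution are common conventions in the emphysema literature and are assumptions here; the synthetic cluster sizes are made up.

```python
import math

def relative_area(hu_values, threshold=-950):
    """RA%: fraction of lung pixels below the attenuation threshold
    (-950 HU is a common choice; an assumption here)."""
    low = sum(1 for v in hu_values if v < threshold)
    return 100.0 * low / len(hu_values)

def cluster_exponent(sizes):
    """Exponent D of the cumulative LAA cluster-size distribution
    Y(s) ~ s**-D, via a least-squares fit in log-log space."""
    xs = sorted(set(sizes))
    pts = [(math.log(s), math.log(sum(1 for z in sizes if z >= s)))
           for s in xs]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -slope

# Synthetic cluster sizes roughly following a power law:
sizes = [1] * 64 + [2] * 16 + [4] * 4 + [8] * 1
print(round(cluster_exponent(sizes), 2))
```

Comparing RA and D measured on images against the values known from the model output is exactly the kind of accuracy check the phantoms enable.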
Kinetic Modeling of Accelerated Stability Testing Enabled by Second Harmonic Generation Microscopy.
Song, Zhengtian; Sarkar, Sreya; Vogt, Andrew D; Danzer, Gerald D; Smith, Casey J; Gualtieri, Ellen J; Simpson, Garth J
2018-04-03
The low limits of detection afforded by second harmonic generation (SHG) microscopy coupled with image analysis algorithms enabled quantitative modeling of the temperature-dependent crystallization of active pharmaceutical ingredients (APIs) within amorphous solid dispersions (ASDs). ASDs, in which an API is maintained in an amorphous state within a polymer matrix, are finding increasing use to address solubility limitations of small-molecule APIs. Extensive stability testing is typically performed for ASD characterization, the time frame for which is often dictated by the earliest detectable onset of crystal formation. Here a study of accelerated stability testing on ritonavir, a human immunodeficiency virus (HIV) protease inhibitor, has been conducted. Under the condition for accelerated stability testing at 50 °C/75%RH and 40 °C/75%RH, ritonavir crystallization kinetics from amorphous solid dispersions were monitored by SHG microscopy. SHG microscopy coupled by image analysis yielded limits of detection for ritonavir crystals as low as 10 ppm, which is about 2 orders of magnitude lower than other methods currently available for crystallinity detection in ASDs. The four decade dynamic range of SHG microscopy enabled quantitative modeling with an established (JMAK) kinetic model. From the SHG images, nucleation and crystal growth rates were independently determined.
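The JMAK (Johnson-Mehl-Avrami-Kolmogorov) model named above describes the crystallized fraction as X(t) = 1 - exp(-k t^n). A minimal sketch follows; the rate constant and Avrami exponent are illustrative assumptions, not values fitted to the ritonavir data.

```python
import math

def jmak_fraction(t, k, n):
    """Crystallized fraction under the JMAK model:
    X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - math.exp(-k * t ** n)

# Illustrative parameters (assumed, not from the paper):
k, n = 1e-4, 2.0
for hours in (10, 50, 100):
    print(hours, round(jmak_fraction(hours, k, n), 3))
```

With a 10 ppm detection limit, SHG microscopy can observe the early, nearly flat part of this sigmoidal curve, which is what makes quantitative fitting of k and n feasible long before other techniques register any crystallinity.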
Multispectral Remote Sensing of the Earth and Environment Using KHawk Unmanned Aircraft Systems
NASA Astrophysics Data System (ADS)
Gowravaram, Saket
This thesis focuses on the development and testing of the KHawk multispectral remote sensing system for environmental and agricultural applications. KHawk Unmanned Aircraft System (UAS), a small and low-cost remote sensing platform, is used as the test bed for aerial video acquisition. An efficient image geotagging and photogrammetric procedure for aerial map generation is described, followed by a comprehensive error analysis on the generated maps. The developed procedure is also used for generation of multispectral aerial maps including red, near infrared (NIR) and colored infrared (CIR) maps. A robust Normalized Difference Vegetation Index (NDVI) calibration procedure is proposed and validated by ground tests and a KHawk flight test. Finally, the generated aerial maps and their corresponding Digital Elevation Models (DEMs) are used for typical application scenarios including prescribed fire monitoring, initial fire line estimation, and tree health monitoring.
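The NDVI computed from the red and NIR maps has the standard per-pixel form, sketched below; the reflectance values are hypothetical.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index per pixel:
    NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1]."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# Healthy vegetation reflects strongly in NIR and weakly in red,
# so dense canopy scores high (reflectances below are made up):
print(round(ndvi(0.50, 0.08), 2))   # dense canopy
print(round(ndvi(0.30, 0.25), 2))   # bare soil / stressed vegetation
```

Calibration, as proposed in the thesis, matters because raw camera digital numbers must be converted to reflectance before this ratio is meaningful across flights.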
Real-Time Digital Bright Field Technology for Rapid Antibiotic Susceptibility Testing.
Canali, Chiara; Spillum, Erik; Valvik, Martin; Agersnap, Niels; Olesen, Tom
2018-01-01
Optical scanning through bacterial samples and image-based analysis may provide a robust method for bacterial identification, fast estimation of growth rates and their modulation due to the presence of antimicrobial agents. Here, we describe an automated digital, time-lapse, bright field imaging system (oCelloScope, BioSense Solutions ApS, Farum, Denmark) for rapid and higher throughput antibiotic susceptibility testing (AST) of up to 96 bacteria-antibiotic combinations at a time. The imaging system consists of a digital camera, an illumination unit and a lens where the optical axis is tilted 6.25° relative to the horizontal plane of the stage. Such tilting grants more freedom of operation at both high and low concentrations of microorganisms. When considering a bacterial suspension in a microwell, the oCelloScope acquires a sequence of 6.25°-tilted images to form an image Z-stack. The stack contains the best-focus image, as well as the adjacent out-of-focus images (which contain progressively more out-of-focus bacteria, the further the distance from the best-focus position). The acquisition process is repeated over time, so that the time-lapse sequence of best-focus images is used to generate a video. The setting of the experiment, image analysis and generation of time-lapse videos can be performed through a dedicated software (UniExplorer, BioSense Solutions ApS). The acquired images can be processed for online and offline quantification of several morphological parameters, microbial growth, and inhibition over time.
NASA Technical Reports Server (NTRS)
Norris, Jeffrey S.; Powell, Mark W.; Fox, Jason M.; Crockett, Thomas M.; Joswig, Joseph C.
2009-01-01
Cliffbot Maestro permits teleoperation of remote rovers for field testing in extreme environments. The application user interface provides two sets of tools for operations: stereo image browsing and command generation.
NASA Astrophysics Data System (ADS)
Wojcieszak, D.; Przybył, J.; Lewicki, A.; Ludwiczak, A.; Przybylak, A.; Boniecki, P.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Witaszek, K.
2015-07-01
The aim of this research was to investigate the possibility of using computer image analysis and artificial neural networks to assess the amount of dry matter in the tested compost samples. The research led to the conclusion that neural image analysis may be a useful tool in determining the quantity of dry matter in compost. The generated neural model may be the beginning of research into the use of neural image analysis to assess the content of dry matter and other constituents of compost. The presented model RBF 19:19-2-1:1, characterized by a test error of 0.092189, may be the most efficient.
Cai, Bin; Dolly, Steven; Kamal, Gregory; Yaddanapudi, Sridhar; Sun, Baozhou; Goddu, S Murty; Mutic, Sasa; Li, Hua
2018-04-28
To investigate the feasibility of using the kV flat panel detector on a linac for consistency evaluations of kV X-ray generator performance. An in-house designed aluminum (Al) array phantom with six 9×9 cm² square regions of various thicknesses was proposed and used in this study. Through XML script-driven image acquisition, kV images with various acquisition settings were obtained using the kV flat panel detector. Utilizing pre-established baseline curves, the consistency of X-ray tube output characteristics including tube voltage accuracy, exposure accuracy and exposure linearity was assessed through image quality assessment metrics including ROI mean intensity, ROI standard deviation (SD) and noise power spectra (NPS). The robustness of this method was tested on two linacs over a three-month period. With the proposed method, tube voltage accuracy can be verified through a consistency check with a 2% tolerance and 2 kVp intervals for forty different kVp settings. The exposure accuracy can be tested with a 4% consistency tolerance for three mAs settings over forty kVp settings. The exposure linearity tested with three mAs settings achieved a coefficient of variation (CV) of 0.1. We proposed a novel approach that uses the kV flat panel detector available on the linac for X-ray generator tests. This approach eliminates the inefficiencies and variability associated with using third party QA detectors while enabling an automated process. This article is protected by copyright. All rights reserved.
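The exposure-linearity figure quoted above is a coefficient of variation of the mAs-normalized detector output. A minimal sketch, assuming linearity is assessed on ROI mean intensity per mAs (the readings below are hypothetical, not measured data):

```python
import statistics

def exposure_linearity_cv(mas_settings, mean_intensities):
    """Coefficient of variation of the normalized output (ROI mean
    intensity divided by mAs) across exposure settings.  A perfectly
    linear generator gives identical ratios, hence CV = 0."""
    ratios = [i / m for i, m in zip(mean_intensities, mas_settings)]
    return statistics.pstdev(ratios) / statistics.fmean(ratios)

# Hypothetical readings at three mAs settings:
mas = [1.0, 2.0, 4.0]
intensity = [100.0, 210.0, 390.0]
print(round(exposure_linearity_cv(mas, intensity), 3))
```

A CV at or below the reported 0.1 would indicate the generator output scales acceptably with mAs.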
Zhang, Zhiqing; Kuzmin, Nikolay V; Groot, Marie Louise; de Munck, Jan C
2017-06-01
The morphologies contained in 3D third harmonic generation (THG) images of human brain tissue can report on the pathological state of the tissue. However, the complexity of THG brain images makes the usage of modern image processing tools, especially those of image filtering, segmentation and validation, to extract this information challenging. We developed a salient edge-enhancing model of anisotropic diffusion for image filtering, based on higher order statistics. We split the intrinsic 3-phase segmentation problem into two 2-phase segmentation problems, each of which we solved with a dedicated model, active contour weighted by prior extreme. We applied the proposed algorithms to THG images of structurally normal ex-vivo human brain tissue, revealing key tissue components (brain cells, microvessels and neuropil) and enabling statistical characterization of these components. Comprehensive comparison to manually delineated ground truth validated the proposed algorithms. Quantitative comparison to second harmonic generation/auto-fluorescence images, acquired simultaneously from the same tissue area, confirmed the correctness of the main THG features detected. The software and test datasets are available from the authors (z.zhang@vu.nl). Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved.
Design of an MR image processing module on an FPGA chip
NASA Astrophysics Data System (ADS)
Li, Limin; Wyrwicz, Alice M.
2015-06-01
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
Design of an MR image processing module on an FPGA chip
Li, Limin; Wyrwicz, Alice M.
2015-01-01
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. PMID:25909646
Permutation coding technique for image recognition systems.
Kussul, Ernst M; Baidyk, Tatiana N; Wunsch, Donald C; Makeyev, Oleksandr; Martín, Anabel
2006-11-01
A feature extractor and neural classifier for image recognition systems are proposed. The proposed feature extractor is based on the concept of random local descriptors (RLDs). It is followed by an encoder based on the permutation coding technique, which makes it possible to take into account not only the detected features but also the position of each feature in the image, and to make the recognition process invariant to small displacements. The combination of RLDs and permutation coding permits us to obtain a sufficiently general description of the image to be recognized. The code generated by the encoder is used as input data for the neural classifier. Different types of images were used to test the proposed image recognition system. It was tested on the handwritten digit recognition problem, the face recognition problem, and the microobject shape recognition problem. The results of testing are very promising. The error rate for the Modified National Institute of Standards and Technology (MNIST) database is 0.44% and for the Olivetti Research Laboratory (ORL) database it is 0.1%.
Optical Flow for Flight and Wind Tunnel Background Oriented Schlieren Imaging
NASA Technical Reports Server (NTRS)
Smith, Nathanial T.; Heineck, James T.; Schairer, Edward T.
2017-01-01
Background oriented Schlieren images have historically been generated by calculating the observed pixel displacement between a wind-on and wind-off image pair using normalized cross-correlation. This work uses optical flow to solve for the displacement fields which generate the Schlieren images. A well-established method in the computer vision community, optical flow is the apparent motion in an image sequence due to brightness changes. The regularization method of Horn and Schunck is used to create Schlieren images from two data sets: a supersonic jet plume shock interaction from the NASA Ames Unitary Plan Wind Tunnel, and a transonic flight test of a T-38 aircraft using a naturally occurring background, performed in conjunction with NASA Ames and Armstrong Research Centers. Results are presented and contrasted with those using normalized cross-correlation. The optical flow Schlieren images are found to provide significantly more detail. We apply the method to historical data sets to demonstrate the broad applicability and limitations of the technique.
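The Horn-Schunck regularization named above can be sketched in miniature: iteratively update a smooth displacement field so that it satisfies the brightness-constancy constraint. This is a bare-bones sketch on tiny synthetic frames; production BOS processing uses pyramid schemes, better derivative stencils, and tuned regularization weights.

```python
def horn_schunck(img1, img2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow on two grayscale frames
    given as lists of lists.  Returns (u, v) displacement fields."""
    h, w = len(img1), len(img1[0])
    u = [[0.0] * w for _ in range(h)]
    v = [[0.0] * w for _ in range(h)]

    def grad(y, x):
        # Central differences with edge clamping, plus temporal diff.
        ix = (img1[y][min(x + 1, w - 1)] - img1[y][max(x - 1, 0)]) / 2.0
        iy = (img1[min(y + 1, h - 1)][x] - img1[max(y - 1, 0)][x]) / 2.0
        it = float(img2[y][x] - img1[y][x])
        return ix, iy, it

    def mean4(f, y, x):
        # 4-neighbour average (the smoothness term), edges clamped.
        return (f[max(y - 1, 0)][x] + f[min(y + 1, h - 1)][x]
                + f[y][max(x - 1, 0)] + f[y][min(x + 1, w - 1)]) / 4.0

    for _ in range(n_iter):
        nu = [[0.0] * w for _ in range(h)]
        nv = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                ix, iy, it = grad(y, x)
                ub, vb = mean4(u, y, x), mean4(v, y, x)
                c = (ix * ub + iy * vb + it) / (alpha ** 2 + ix ** 2 + iy ** 2)
                nu[y][x] = ub - ix * c
                nv[y][x] = vb - iy * c
        u, v = nu, nv
    return u, v

# A bright blob shifted one pixel to the right between frames:
f1 = [[0, 0, 0, 0, 0],
      [0, 9, 9, 0, 0],
      [0, 9, 9, 0, 0],
      [0, 0, 0, 0, 0]]
f2 = [[0, 0, 0, 0, 0],
      [0, 0, 9, 9, 0],
      [0, 0, 9, 9, 0],
      [0, 0, 0, 0, 0]]
u, v = horn_schunck(f1, f2, alpha=5.0, n_iter=200)
```

The global smoothness term is what lets the flow field fill in detail between strong gradients, which is consistent with the extra detail reported relative to window-based cross-correlation.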
An earth imaging camera simulation using wide-scale construction of reflectance surfaces
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk
2013-10-01
Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.
Likelihood Ratio Test Polarimetric SAR Ship Detection Application
2005-12-01
menu. Under the Matlab menu, the user can export an area of an image to the MatlabTM MAT file format, as well as call RGB image and Pauli… must specify various parameters such as the area of the image to analyze. Export Image Area to MatlabTM (PolGASP & COASP): generates a MatlabTM file… © Her Majesty the Queen, as represented by the Minister of National Defence, 2005 (© Sa majesté la reine, représentée par le ministre de la Défense nationale, 2005). Abstract: This
Simulation of bright-field microscopy images depicting pap-smear specimen
Malm, Patrik; Brun, Anders; Bengtsson, Ewert
2015-01-01
As digital imaging is becoming a fundamental part of medical and biomedical research, the demand for computer-based evaluation using advanced image analysis is becoming an integral part of many research projects. A common problem when developing new image analysis algorithms is the need of large datasets with ground truth on which the algorithms can be tested and optimized. Generating such datasets is often tedious and introduces subjectivity and interindividual and intraindividual variations. An alternative to manually created ground-truth data is to generate synthetic images where the ground truth is known. The challenge then is to make the images sufficiently similar to the real ones to be useful in algorithm development. One of the first and most widely studied medical image analysis tasks is to automate screening for cervical cancer through Pap-smear analysis. As part of an effort to develop a new generation cervical cancer screening system, we have developed a framework for the creation of realistic synthetic bright-field microscopy images that can be used for algorithm development and benchmarking. The resulting framework has been assessed through a visual evaluation by experts with extensive experience of Pap-smear images. The results show that images produced using our described methods are realistic enough to be mistaken for real microscopy images. The developed simulation framework is very flexible and can be modified to mimic many other types of bright-field microscopy images. © 2015 The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC PMID:25573002
Halo-free Phase Contrast Microscopy
NASA Astrophysics Data System (ADS)
Nguyen, Tan H.; Kandel, Mikhail; Shakir, Haadi M.; Best-Popescu, Catherine; Arikkath, Jyothi; Do, Minh N.; Popescu, Gabriel
2017-03-01
We present a new approach for retrieving halo-free phase contrast microscopy (hfPC) images by upgrading the conventional PC microscope with an external interferometric module, which generates sufficient data for reversing the halo artifact. Acquiring four independent intensity images, our approach first measures haloed phase maps of the sample. We solve for the halo-free sample transmission function by using a physical model of the image formation under partial spatial coherence. Using this halo-free sample transmission, we can numerically generate artifact-free PC images. Furthermore, this transmission can be used to obtain quantitative information about the sample, e.g., thickness (given known refractive indices) and the dry mass of live cells during their cycles. We tested our hfPC method on various control samples (e.g., beads and pillars) and validated its potential for biological investigation by imaging live HeLa cells, red blood cells, and neurons.
Superresolved digital in-line holographic microscopy for high-resolution lensless biological imaging
NASA Astrophysics Data System (ADS)
Micó, Vicente; Zalevsky, Zeev
2010-07-01
Digital in-line holographic microscopy (DIHM) is a modern approach capable of achieving micron-range lateral and depth resolutions in three-dimensional imaging. DIHM in combination with numerical image reconstruction uses an extremely simplified setup while retaining the advantages provided by holography, with enhanced capabilities derived from algorithmic digital processing. We introduce superresolved DIHM based on time and angular multiplexing of the sample's spatial frequency information, yielding the generation of a synthetic aperture (SA). The SA expands the cutoff frequency of the imaging system, allowing submicron resolutions in both transversal and axial directions. The proposed approach can be applied when imaging essentially transparent (low-concentration dilutions) and static (slow dynamics) samples. Validation of the method for both a synthetic object (U.S. Air Force resolution test) to quantify the resolution improvement and a biological specimen (sperm cell biosample) is reported, showing the generation of high synthetic numerical aperture values working without lenses.
Infrared thermal imaging figures of merit
NASA Technical Reports Server (NTRS)
Kaplan, Herbert
1989-01-01
Commercially available types of infrared thermal imaging instruments, both viewers (qualitative) and imagers (quantitative), are discussed. The various scanning methods by which thermal images (thermograms) are generated will be reviewed. The performance parameters (figures of merit) that define the quality of performance of infrared radiation thermometers will be introduced. A discussion of how these parameters are extended and adapted to define the performance of thermal imaging instruments will be provided. Finally, the significance of each of the key performance parameters of thermal imaging instruments will be reviewed, and procedures currently used for testing to verify performance will be outlined.
Quantitative evaluation of 3D images produced from computer-generated holograms
NASA Astrophysics Data System (ADS)
Sheerin, David T.; Mason, Ian R.; Cameron, Colin D.; Payne, Douglas A.; Slinger, Christopher W.
1999-08-01
Advances in computing and optical modulation techniques now make it possible to anticipate the generation of near real-time, reconfigurable, high quality, three-dimensional images using holographic methods. Computer generated holography (CGH) is the only technique which holds promise of producing synthetic images having the full range of visual depth cues. These realistic images will be viewable by several users simultaneously, without the need for headtracking or special glasses. Such a data visualization tool will be key to speeding up the manufacture of new commercial and military equipment by negating the need for the production of physical 3D models in the design phase. DERA Malvern has been involved in designing and testing fixed CGH in order to understand the connection between the complexity of the CGH, the algorithms used to design them, the processes employed in their implementation and the quality of the images produced. This poster describes results from CGH containing up to 10^8 pixels. The methods used to evaluate the reconstructed images are discussed and quantitative measures of image fidelity made. An understanding of the effect of the various system parameters upon final image quality enables a study of the possible system trade-offs to be carried out. Such an understanding of CGH production and resulting image quality is key to effective implementation of a reconfigurable CGH system currently under development at DERA.
Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems
NASA Astrophysics Data System (ADS)
Williams, John W.; Potter, Gary E.
2002-11-01
QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretation Rating Scales). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.
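For context on the NIIRS predictions mentioned above, a commonly published form of the General Image Quality Equation (version 4) can be sketched as follows. The coefficients are the widely cited GIQE 4 values, not necessarily those used inside STAR, and the sample sensor parameters are invented.

```python
import math

def giqe4_niirs(gsd_in, rer, h, g, snr):
    """Commonly published GIQE version 4 estimate of NIIRS (illustrative
    sketch, not the STAR implementation). gsd_in: ground sample distance
    in inches; rer: relative edge response; h: edge overshoot; g: noise
    gain; snr: signal-to-noise ratio."""
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * (g / snr))

# A notional sensor: 12-inch GSD, good edge response, modest noise.
print(round(giqe4_niirs(gsd_in=12.0, rer=0.95, h=1.0, g=1.0, snr=50.0), 2))
```

The equation makes the trade-offs explicit: halving the GSD raises NIIRS by about one rating level, while edge response and noise enter as smaller corrections.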
TU-AB-BRA-02: An Efficient Atlas-Based Synthetic CT Generation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, X
2016-06-15
Purpose: A major obstacle for MR-only radiotherapy is the need to generate an accurate synthetic CT (sCT) from MR image(s) of a patient for the purposes of dose calculation and DRR generation. We propose here an accurate and efficient atlas-based sCT generation method, which has a computation speed largely independent of the number of atlases used. Methods: Atlas-based sCT generation requires a set of atlases with co-registered CT and MR images. Unlike existing methods that align each atlas to the new patient independently, we first create an average atlas and pre-align every atlas to the average atlas space. When a new patient arrives, we compute only one deformable image registration to align the patient MR image to the average atlas, which indirectly aligns the patient to all pre-aligned atlases. A patch-based non-local weighted fusion is performed in the average atlas space to generate the sCT for the patient, which is then warped back to the original patient space. We further adapt a PatchMatch algorithm that can quickly find top matches between patches of the patient image and all atlas images, which makes the patch fusion step also independent of the number of atlases used. Results: Nineteen brain tumour patients with both CT and T1-weighted MR images are used as testing data and a leave-one-out validation is performed. Each sCT generated is compared against the original CT image of the same patient on a voxel-by-voxel basis. The proposed method produces a mean absolute error (MAE) of 98.6±26.9 HU overall. The accuracy is comparable with a conventional implementation scheme, but the computation time is reduced from over an hour to four minutes. Conclusion: An average atlas space patch fusion approach can produce highly accurate sCT estimations very efficiently. Further validation on dose computation accuracy and using a larger patient cohort is warranted. The author is a full time employee of Elekta, Inc.
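The leave-one-out evaluation described above reduces to a voxel-wise mean absolute error in Hounsfield units; a minimal sketch, with toy values rather than patient data:

```python
def mean_absolute_error_hu(sct_voxels, ct_voxels):
    """Voxel-by-voxel mean absolute error (HU) between a synthetic CT and
    the reference CT of the same patient, as reported in the abstract."""
    diffs = [abs(s - c) for s, c in zip(sct_voxels, ct_voxels)]
    return sum(diffs) / len(diffs)

# Toy flattened volumes: the sCT is off by 100 HU on half the voxels.
ct = [0.0, 40.0, -1000.0, 300.0]
sct = [100.0, 40.0, -900.0, 300.0]
print(mean_absolute_error_hu(sct, ct))  # 50.0
```

In practice the average is restricted to a body mask so that background air does not dilute the error.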
Epp: A C++ EGSnrc user code for x-ray imaging and scattering simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lippuner, Jonas; Elbakri, Idris A.; Cui Congwu
2011-03-15
Purpose: Easy particle propagation (Epp) is a user code for the EGSnrc code package based on the C++ class library egspp. A main feature of egspp (and Epp) is the ability to use analytical objects to construct simulation geometries. The authors developed Epp to facilitate the simulation of x-ray imaging geometries, especially in the case of scatter studies. While direct use of egspp requires knowledge of C++, Epp requires no programming experience. Methods: Epp's features include calculation of dose deposited in a voxelized phantom and photon propagation to a user-defined imaging plane. Projection images of primary, single Rayleigh scattered, single Compton scattered, and multiple scattered photons may be generated. Epp input files can be nested, allowing for the construction of complex simulation geometries from more basic components. To demonstrate the imaging features of Epp, the authors simulate 38 keV x rays from a point source propagating through a water cylinder 12 cm in diameter, using both analytical and voxelized representations of the cylinder. The simulation generates projection images of primary and scattered photons at a user-defined imaging plane. The authors also simulate dose scoring in the voxelized version of the phantom in both Epp and DOSXYZnrc and examine the accuracy of Epp using the Kawrakow-Fippel test. Results: The results of the imaging simulations with Epp using voxelized and analytical descriptions of the water cylinder agree within 1%. The results of the Kawrakow-Fippel test suggest good agreement between Epp and DOSXYZnrc. Conclusions: Epp provides the user with useful features, including the ability to build complex geometries from simpler ones and the ability to generate images of scattered and primary photons. There is no inherent computational time saving arising from Epp, except for those arising from egspp's ability to use analytical representations of simulation geometries. Epp agrees with DOSXYZnrc in dose calculation, since they are both based on the well-validated standard EGSnrc radiation transport physics model.
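The "primary" image component that Epp separates from scatter is governed by simple exponential (Beer-Lambert) attenuation. The linear attenuation coefficient below is an assumed round figure for water near 38 keV, not a value from the paper.

```python
import math

# Assumed illustrative value for water near 38 keV, roughly 0.28 cm^-1;
# not taken from the abstract.
MU_WATER_38KEV = 0.28  # cm^-1

def primary_transmission(mu, path_cm):
    """Beer-Lambert fraction of unscattered (primary) photons surviving a
    homogeneous path -- the component imaged separately from scatter."""
    return math.exp(-mu * path_cm)

# Central ray through the 12 cm water cylinder from the abstract:
print(f"{primary_transmission(MU_WATER_38KEV, 12.0):.3f}")
```

Everything the detector records beyond this exponential term is scatter, which is why scatter studies need Monte Carlo codes like Epp rather than a closed-form expression.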
Vehicle classification in WAMI imagery using deep network
NASA Astrophysics Data System (ADS)
Yi, Meng; Yang, Fan; Blasch, Erik; Sheaff, Carolyn; Liu, Kui; Chen, Genshe; Ling, Haibin
2016-05-01
Humans have always had a keen interest in understanding activities and the surrounding environment for mobility, communication, and survival. Thanks to recent progress in photography and breakthroughs in aviation, we are now able to capture tens of megapixels of ground imagery, namely Wide Area Motion Imagery (WAMI), at multiple frames per second from unmanned aerial vehicles (UAVs). WAMI serves as a great source for many applications, including security, urban planning and route planning. These applications require fast and accurate image understanding which is time consuming for humans, due to the large data volume and city-scale area coverage. Therefore, automatic processing and understanding of WAMI imagery has been gaining attention in both industry and the research community. This paper focuses on an essential step in WAMI imagery analysis, namely vehicle classification. That is, deciding whether a certain image patch contains a vehicle or not. We collect a set of positive and negative sample image patches, for training and testing the detector. Positive samples are 64 × 64 image patches centered on annotated vehicles. We generate two sets of negative images. The first set is generated from positive images with some location shift. The second set of negative patches is generated from randomly sampled patches. We also discard those patches if a vehicle accidentally locates at the center. Both positive and negative samples are randomly divided into 9000 training images and 3000 testing images. We propose to train a deep convolution network for classifying these patches. The classifier is based on a pre-trained AlexNet Model in the Caffe library, with an adapted loss function for vehicle classification. The performance of our classifier is compared to several traditional image classifier methods using Support Vector Machine (SVM) and Histogram of Oriented Gradient (HOG) features. 
While the SVM+HOG method achieves an accuracy of 91.2%, the accuracy of our deep network-based classifier reaches 97.9%.
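The two negative-sample sets described above can be sketched as follows. The shift magnitudes and the center-rejection tolerance are assumptions, since the abstract does not give them.

```python
import random

PATCH = 64  # patch size used in the abstract

def shifted_negative(cx, cy, min_shift=16, max_shift=48, rng=random):
    """First negative set: offset the patch centre away from an annotated
    vehicle so the vehicle is no longer centred (shift bounds assumed)."""
    dx = rng.choice([-1, 1]) * rng.randint(min_shift, max_shift)
    dy = rng.choice([-1, 1]) * rng.randint(min_shift, max_shift)
    return cx + dx, cy + dy

def is_centered_on_vehicle(cx, cy, vehicles, tol=8):
    """Filter for the second, randomly sampled negative set: discard a
    patch whose centre accidentally lands on an annotated vehicle."""
    return any(abs(cx - vx) <= tol and abs(cy - vy) <= tol
               for vx, vy in vehicles)

vehicles = [(120, 80), (300, 260)]
print(is_centered_on_vehicle(121, 79, vehicles))   # True -> discard patch
print(is_centered_on_vehicle(200, 200, vehicles))  # False -> keep patch
```

Shifted negatives are the harder examples: they contain vehicle pixels but off-centre, forcing the classifier to learn localization as well as appearance.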
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, B; Hu, E; Yu, C
2015-06-15
Purpose: Tomo-Cinegraphy (TC) is a method to generate a series of temporal tomographic images from projection images of the on-board imager (OBI) while the gantry is moving. The aim is to test whether this technique is useful for determining a lung tumor position during treatments. Methods: Tomographic image via background subtraction (TIBS) uses a priori anatomical information from a previous CT scan to isolate an SOI from a planar kV image by factoring out the attenuations by tissues outside the SOI (background). This idea was extended to TC, which makes it possible to generate tomographic images of the same geometry from projections at different gantry angles and different breathing phases. Projection images of a lung patient from a CBCT acquisition were used to generate TC images. A region of interest (ROI) was selected around a tumor, adding 2 cm margins. The center of mass (COM) of the ROI was traced to determine the tumor position for every projection image. Results: The tumor is visible in the TC images, while it is not in the OBI projections. The coordinates of the COMs represent the temporal tumor positions; in contrast, it is not possible to trace the tumor motion using the projection images alone. A source of time delay is the time to acquire projection images, which is always less than a second. Conclusion: TC allows tracking of tumor positions without fiducial markers in real time for some lung patients, if the projection images are acquired during treatments. Partially supported by NIH R01CA133539.
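Tracing the tumor through the TC frames amounts to an intensity-weighted center of mass over the ROI in each frame; a minimal sketch with a toy frame standing in for a reconstructed image:

```python
import numpy as np

def roi_center_of_mass(image, x0, y0, x1, y1):
    """Intensity-weighted centre of mass of a rectangular ROI; tracing
    this across TC frames gives the temporal tumor positions."""
    roi = np.asarray(image, dtype=float)[y0:y1, x0:x1]
    total = roi.sum()
    ys, xs = np.mgrid[y0:y1, x0:x1]
    return float((xs * roi).sum() / total), float((ys * roi).sum() / total)

# Toy frame: a single bright pixel stands in for the tumor.
frame = np.zeros((10, 10))
frame[6, 3] = 1.0
print(roi_center_of_mass(frame, 0, 0, 10, 10))  # (3.0, 6.0)
```

The COM is robust to modest noise because every ROI pixel contributes in proportion to its intensity, which is why it works on the low-contrast TC frames.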
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having a robust detection and recognition algorithm is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the image database in the training and testing steps of the algorithm. This directly affects recognition performance, since the training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways of designing an ATA / ATR database. The first and easier way is to use a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type and the territory. Designing an image database using a scene generator is inexpensive, and it allows creating many different scenarios quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and more difficult way is designing it using real-world images. Designing an image database with real-world images is far more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared to each other with regard to their pros and cons.
Simulation of brain tumors in MR images for evaluation of segmentation efficacy.
Prastawa, Marcel; Bullitt, Elizabeth; Gerig, Guido
2009-04-01
Obtaining validation data and comparison metrics for segmentation of magnetic resonance images (MRI) are difficult tasks due to the lack of reliable ground truth. This problem is even more evident for images presenting pathology, which can both alter tissue appearance through infiltration and cause geometric distortions. Systems for generating synthetic images with user-defined degradation by noise and intensity inhomogeneity offer the possibility for testing and comparison of segmentation methods. Such systems do not yet offer simulation of sufficiently realistic looking pathology. This paper presents a system that combines physical and statistical modeling to generate synthetic multi-modal 3D brain MRI with tumor and edema, along with the underlying anatomical ground truth. Main emphasis is placed on simulation of the major effects known for tumor MRI, such as contrast enhancement, local distortion of healthy tissue, infiltrating edema adjacent to tumors, destruction and deformation of fiber tracts, and multi-modal MRI contrast of healthy tissue and pathology. The new method synthesizes pathology in multi-modal MRI and diffusion tensor imaging (DTI) by simulating mass effect, warping and destruction of white matter fibers, and infiltration of brain tissues by tumor cells. We generate synthetic contrast enhanced MR images by simulating the accumulation of contrast agent within the brain. The appearance of the brain tissue and tumor in MRI is simulated by synthesizing texture images from real MR images. The proposed method is able to generate synthetic ground truth and synthesized MR images with tumor and edema that exhibit comparable segmentation challenges to real tumor MRI. Such image data sets will find use in segmentation reliability studies, comparison and validation of different segmentation methods, training and teaching, or even in evaluating standards for tumor size like the RECIST criteria (response evaluation criteria in solid tumors).
Positional Quality Assessment of Orthophotos Obtained from Sensors Onboard Multi-Rotor UAV Platforms
Mesas-Carrascosa, Francisco Javier; Rumbao, Inmaculada Clavero; Berrocal, Juan Alberto Barrera; Porras, Alfonso García-Ferrer
2014-01-01
In this study we explored the positional quality of orthophotos obtained by an unmanned aerial vehicle (UAV). A multi-rotor UAV was used to obtain images using a vertically mounted digital camera. The flight was processed taking into account the photogrammetry workflow: perform the aerial triangulation, generate a digital surface model, orthorectify individual images and finally obtain a mosaic image or final orthophoto. The UAV orthophotos were assessed with various spatial quality tests used by national mapping agencies (NMAs). Results showed that the orthophotos satisfactorily passed the spatial quality tests and are therefore a useful tool for NMAs in their production flowchart. PMID:25587877
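Spatial quality tests of the kind applied by national mapping agencies typically reduce to statistics over check-point residuals. The sketch below follows the NSSDA convention (the 1.7308 factor assumes roughly equal RMSE in x and y); the residual values are invented.

```python
import math

def rmse(errors):
    """Root mean square of a list of check-point residuals (m)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def nssda_horizontal_accuracy(dx, dy):
    """Horizontal positional accuracy at 95% confidence following the
    NSSDA convention; the 1.7308 factor assumes RMSE_x ~ RMSE_y."""
    rmse_r = math.sqrt(rmse(dx) ** 2 + rmse(dy) ** 2)
    return 1.7308 * rmse_r

# Invented residuals (m) between orthophoto and surveyed coordinates.
dx = [0.05, -0.03, 0.04, -0.06]
dy = [0.02, 0.05, -0.04, 0.03]
print(round(nssda_horizontal_accuracy(dx, dy), 3))
```

The orthophoto passes a given map-scale specification when this 95%-confidence figure falls below the tolerance the NMA sets for that scale.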
2016-06-01
Fragmentary record: research on theories of the mammalian visual system and on exploiting descriptive text that may accompany a still image for improved inference; the focus of the Brown team was on single images. Subject terms: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics.
Compact time- and space-integrating SAR processor: performance analysis
NASA Astrophysics Data System (ADS)
Haney, Michael W.; Levy, James J.; Michael, Robert R., Jr.; Christensen, Marc P.
1995-06-01
Progress made during the previous 12 months toward the fabrication and test of a flight demonstration prototype of the acousto-optic time- and space-integrating real-time SAR image formation processor is reported. Compact, rugged, and low-power analog optical signal processing techniques are used for the most computationally taxing portions of the SAR imaging problem to overcome the size and power consumption limitations of electronic approaches. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results reported for this year include tests of a laboratory version of the RAPID SAR concept on phase history data generated from real SAR high-resolution imagery; a description of the new compact 2D acousto-optic scanner, with a 2D space-bandwidth product approaching 10^6 spots, specified and procured from NEOS Technologies during the last year; and a design and layout of the optical module portion of the flight-worthy prototype.
A software platform for phase contrast x-ray breast imaging research.
Bliznakova, K; Russo, P; Mettivier, G; Requardt, H; Popov, P; Bravin, A; Buliev, I
2015-06-01
To present and validate a computer-based simulation platform dedicated to phase contrast x-ray breast imaging research. The software platform, developed at the Technical University of Varna on the basis of a previously validated x-ray imaging software simulator, comprises modules for object creation and for x-ray image formation. These modules were updated to take into account the refractive index for phase contrast imaging, as well as to implement the Fresnel-Kirchhoff diffraction theory for the propagating x-ray waves. Projection images are generated in an in-line acquisition geometry. To test and validate the platform, several phantoms differing in their complexity were constructed and imaged at 25 keV and 60 keV at the beamline ID17 of the European Synchrotron Radiation Facility. The software platform was used to design computational phantoms that mimic those used in the experimental study and to generate x-ray images in absorption and phase contrast modes. The visual and quantitative results of the validation process showed an overall good correlation between simulated and experimental images and demonstrated the potential of this platform for research in phase contrast x-ray imaging of the breast. The application of the platform is demonstrated in a feasibility study of phase contrast images of complex inhomogeneous and anthropomorphic breast phantoms, compared to x-ray images generated in absorption mode. The improved visibility of mammographic structures suggests further investigation and optimisation of phase contrast x-ray breast imaging, especially when abnormalities are present. The software platform can be exploited also for educational purposes. Copyright © 2015 Elsevier Ltd. All rights reserved.
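Wave propagation of the kind the simulator implements via Fresnel-Kirchhoff theory can be sketched with the closely related angular-spectrum method; all parameter values below are illustrative assumptions, not the platform's actual settings.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Free-space propagation of a complex field over distance z using the
    angular-spectrum method (a standard stand-in for the Fresnel-Kirchhoff
    integral; details are illustrative)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# 25 keV photons: lambda[nm] ~ 1.2398 / E[keV] (assumed conversion).
wavelength = 1.2398 / 25.0 * 1e-9  # m
field = np.ones((64, 64), dtype=complex)  # uniform plane wave
out = angular_spectrum_propagate(field, wavelength, 1e-6, 0.1)
print(np.allclose(np.abs(out), 1.0))  # a plane wave keeps unit amplitude
```

Phase contrast arises when the field carries spatially varying phase from the object's refractive index: after propagation the interference of the angular components turns that phase into measurable intensity edges.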
Spherical Images for Cultural Heritage: Survey and Documentation with the Nikon KM360
NASA Astrophysics Data System (ADS)
Gottardi, C.; Guerra, F.
2018-05-01
The work presented here focuses on the analysis of the potential of spherical images acquired with specific cameras for documentation and three-dimensional reconstruction of Cultural Heritage. Nowadays, thanks to the introduction of cameras able to generate panoramic images automatically, without the requirement of a stitching software to join together different photos, spherical images allow the documentation of spaces in an extremely fast and efficient way. In this particular case, the Nikon Key Mission 360 spherical camera was tested on the Tolentini cloister, formerly part of the convent of the adjacent church and now the location of the Iuav University of Venice. The aim of the research is based on testing the acquisition of spherical images with the KM360 and comparing the obtained photogrammetric models with data acquired from a laser scanning survey, in order to test the metric accuracy and the level of detail achievable with this particular camera. This work is part of a wider research project that the Photogrammetry Laboratory of the Iuav University of Venice has been dealing with in the last few months; the final aim of this research project will be not only the comparison between 3D models obtained from spherical images and laser scanning survey techniques, but also the examination of their reliability and accuracy with respect to the previous methods of generating spherical panoramas. At the end of the research work, we would like to obtain an operational procedure for spherical cameras applied to metric survey and documentation of Cultural Heritage.
Bubble masks for time-encoded imaging of fast neutrons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brubaker, Erik; Brennan, James S.; Marleau, Peter
2013-09-01
Time-encoded imaging is an approach to directional radiation detection that is being developed at SNL with a focus on fast neutron directional detection. In this technique, a time modulation of a detected neutron signal is induced, typically by a moving mask that attenuates neutrons with a time structure that depends on the source position. An important challenge in time-encoded imaging is to develop high-resolution two-dimensional imaging capabilities; building a mechanically moving high-resolution mask presents challenges both theoretical and technical. We have investigated an alternative to mechanical masks that replaces the solid mask with a liquid such as mineral oil. Instead of fixed blocks of solid material that move in pre-defined patterns, the oil is contained in tubing structures, and carefully introduced air gaps (bubbles) propagate through the tubing, generating moving patterns of oil mask elements and air apertures. Compared to current moving-mask techniques, the bubble mask is simple, since mechanical motion is replaced by gravity-driven bubble propagation; it is flexible, since arbitrary bubble patterns can be generated by a software-controlled valve actuator; and it is potentially high performance, since the tubing and bubble size can be tuned for high-resolution imaging requirements. We have built and tested various single-tube mask elements, and will present results on bubble introduction and propagation as a function of tubing size and cross-sectional shape; real-time bubble position tracking; neutron source imaging tests; and reconstruction techniques demonstrated on simple test data as well as a simulated full detector system.
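The reconstruction idea, that the time structure of counts encodes the source position, can be sketched in one dimension with a cyclically shifting mask; the mask pattern and geometry are invented for illustration.

```python
def expected_signal(mask, source_pos):
    """Detector count template vs time for a cyclically moving 1-D mask:
    at time t, a source at position p is seen through mask element
    (p + t) mod N. 1 = open (air), 0 = blocked (oil)."""
    n = len(mask)
    return [mask[(source_pos + t) % n] for t in range(n)]

def reconstruct(signal, mask):
    """Correlate the measured time signal against the template for every
    candidate position; the best match is the source estimate."""
    n = len(mask)
    scores = [sum(s * e for s, e in zip(signal, expected_signal(mask, p)))
              for p in range(n)]
    return scores.index(max(scores))

mask = [1, 0, 1, 1, 0, 0, 1, 0]  # invented aperture pattern
print(reconstruct(expected_signal(mask, 5), mask))  # recovers position 5
```

The mask pattern matters: its cyclic autocorrelation must peak uniquely at zero shift, otherwise two source positions produce indistinguishable time signals.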
Influence of bedrock topography on the runoff generation under use of ERT data
NASA Astrophysics Data System (ADS)
Kiese, Nina; Loritz, Ralf; Allroggen, Niklas; Zehe, Erwin
2017-04-01
Subsurface topography has been identified to play a major role in runoff generation in different hydrological landscapes. Sinks and ridges in the bedrock can control how water is stored and transported to the stream. Detecting the subsurface structure is difficult and laborious and is frequently done by auger measurements. Recently, geophysical imaging of the subsurface by Electrical Resistivity Tomography (ERT) has gained much interest in the field of hydrology, as it is a non-invasive method to collect information on subsurface characteristics and particularly bedrock topography. As it is impossible to characterize the subsurface of an entire hydrological landscape using ERT, it is of key interest to identify the bedrock characteristics which dominate runoff generation, in order to adapt and optimize the sampling design to the question of interest. For this study, we used 2D ERT images and auger measurements, collected at different sites in the Attert basin in Luxembourg, to characterize bedrock topography using geostatistics and shed light on those aspects which dominate runoff generation. Based on ERT images, we generated stochastic bedrock topographies and implemented them in a physically-based 2D hillslope model. With this approach, we were able to test the influence of different subsurface structures on runoff generation. Our results highlight that ERT images can be useful for hydrological modelling. In particular, the connection from the hillslope to the stream could be identified as an important subsurface feature for runoff generation, whereas the microtopography of the bedrock seemed to be less relevant.
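Stochastic bedrock realizations of the sort fed into the hillslope model can be sketched as correlated random depth profiles. This moving-average construction is a simple stand-in for the geostatistical simulation conditioned on ERT transects; all parameters are invented.

```python
import numpy as np

def stochastic_bedrock(n, corr_len, depth_mean, depth_std, seed=0):
    """1-D stochastic bedrock-depth profile: white noise smoothed with a
    moving average of ~corr_len cells, then rescaled to the target mean
    and standard deviation (a crude stand-in for geostatistical
    simulation; corr_len mimics the spatial correlation length)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=n + corr_len)
    kernel = np.ones(corr_len) / corr_len
    smooth = np.convolve(noise, kernel, mode="valid")[:n]
    smooth = (smooth - smooth.mean()) / smooth.std()
    return depth_mean + depth_std * smooth

# One realization: 200 cells, ~20-cell correlation, 2.0 +/- 0.5 m depth.
profile = stochastic_bedrock(200, corr_len=20, depth_mean=2.0, depth_std=0.5)
print(len(profile), round(float(profile.mean()), 2))
```

Running the hillslope model over an ensemble of such realizations, rather than a single interpolated surface, is what lets the study separate the influence of large-scale connectivity from bedrock microtopography.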
Design of a reading test for low-vision image warping
NASA Astrophysics Data System (ADS)
Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. Shane
1993-08-01
NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision -- maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer- generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.
Design of a reading test for low vision image warping
NASA Technical Reports Server (NTRS)
Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. S.
1993-01-01
NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision - maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer-generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We will describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.
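The remapping operation the two abstracts above describe is, at its core, an inverse coordinate warp applied to every video frame. A minimal nearest-neighbour sketch follows; the specific warp is an invented cartoon of the low-vision remapping idea, not the Programmable Remapper's actual geometry.

```python
import numpy as np

def remap_nearest(image, map_x, map_y):
    """Inverse-mapping warp: output pixel (r, c) samples the input image at
    (map_y[r, c], map_x[r, c]) with nearest-neighbour lookup, as a
    video-rate remapper would do per frame (illustrative only)."""
    h, w = image.shape
    ys = np.clip(np.rint(map_y).astype(int), 0, h - 1)
    xs = np.clip(np.rint(map_x).astype(int), 0, w - 1)
    return image[ys, xs]

# Invented example warp: magnify the centre x2, pushing content outward
# around a simulated central scotoma.
h = w = 64
r, c = np.mgrid[0:h, 0:w].astype(float)
cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
map_y = cy + (r - cy) / 2.0  # each output pixel samples halfway to centre
map_x = cx + (c - cx) / 2.0
img = np.arange(h * w, dtype=float).reshape(h, w)
out = remap_nearest(img, map_x, map_y)
print(out.shape == img.shape)  # warped frame has the same dimensions
```

Because the lookup tables `map_x`/`map_y` are precomputed, the per-frame cost is a single gather, which is what makes full-video-rate warping feasible in hardware.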
Learning without labeling: domain adaptation for ultrasound transducer localization.
Heimann, Tobias; Mountney, Peter; John, Matthias; Ionasec, Razvan
2013-01-01
The fusion of image data from trans-esophageal echography (TEE) and X-ray fluoroscopy is attracting increasing interest in minimally-invasive treatment of structural heart disease. In order to calculate the needed transform between both imaging systems, we employ a discriminative learning based approach to localize the TEE transducer in X-ray images. Instead of time-consuming manual labeling, we generate the required training data automatically from a single volumetric image of the transducer. In order to adapt this system to real X-ray data, we use unlabeled fluoroscopy images to estimate differences in feature space density and correct covariate shift by instance weighting. An evaluation on more than 1900 images reveals that our approach reduces detection failures by 95% compared to cross validation on the test set and improves the localization error from 1.5 to 0.8 mm. Due to the automatic generation of training data, the proposed system is highly flexible and can be adapted to any medical device with minimal efforts.
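The covariate-shift correction described above can be sketched with a crude histogram density-ratio estimate in one feature dimension. The Gaussian feature distributions are invented stand-ins for synthetic-versus-real feature statistics, and this estimator is a simplification of whatever the paper actually uses.

```python
import numpy as np

def histogram_instance_weights(source_feats, target_feats, bins=10):
    """Estimate w(x) = p_target(x) / p_source(x) from 1-D feature samples
    and return one weight per source (synthetic training) instance; a
    crude histogram density ratio stands in for the paper's estimator."""
    lo = min(source_feats.min(), target_feats.min())
    hi = max(source_feats.max(), target_feats.max())
    edges = np.linspace(lo, hi, bins + 1)
    ps, _ = np.histogram(source_feats, bins=edges, density=True)
    pt, _ = np.histogram(target_feats, bins=edges, density=True)
    idx = np.clip(np.digitize(source_feats, edges) - 1, 0, bins - 1)
    eps = 1e-12
    return (pt[idx] + eps) / (ps[idx] + eps)

rng = np.random.default_rng(0)
synthetic = rng.normal(0.0, 1.0, 1000)  # features from rendered images
real = rng.normal(0.5, 1.0, 1000)       # features from unlabeled fluoroscopy
w = histogram_instance_weights(synthetic, real)
# Training instances that look more like the real data get up-weighted.
print(np.corrcoef(synthetic, w)[0, 1] > 0)
```

Training the detector with these per-instance weights biases it toward the region of feature space the real fluoroscopy images occupy, without requiring any labels on the real data.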
Virtual reality exposure using three-dimensional images for the treatment of social phobia.
Gebara, Cristiane M; Barros-Neto, Tito P de; Gertsenchtein, Leticia; Lotufo-Neto, Francisco
2016-03-01
To test a potential treatment for social phobia, which provides exposure to phobia-inducing situations via computer-generated, three-dimensional images, using an open clinical trial design. Twenty-one patients with a DSM-IV diagnosis of social phobia took part in the trial. Treatment consisted of up to 12 sessions of exposure to relevant images, each session lasting 50 minutes. Improvements in social anxiety were seen in all scales and instruments used, including at follow-up 6 months after the end of treatment. The average number of sessions was seven, as the participants habituated rapidly to the process. Only one participant dropped out. This study provides evidence that exposure to computer-generated three-dimensional images is relatively inexpensive, leads to greater treatment adherence, and can reduce social anxiety. Further studies are needed to corroborate these findings.
Ghose, Soumya; Greer, Peter B; Sun, Jidi; Pichler, Peter; Rivest-Henault, David; Mitra, Jhimli; Richardson, Haylea; Wratten, Chris; Martin, Jarad; Arm, Jameen; Best, Leah; Dowling, Jason A
2017-10-27
In MR-only radiation therapy planning, generation of the tissue-specific HU map directly from the MRI would eliminate the need for CT image acquisition and may improve radiation therapy planning. The aim of this work is to generate and validate substitute CT (sCT) scans generated from standard T2-weighted MR pelvic scans in prostate radiation therapy dose planning. A Siemens Skyra 3T MRI scanner with laser bridge, flat couch and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole pelvis MRI (1.6 mm 3D isotropic T2w SPACE sequence) was acquired. Patients received a routine planning CT scan. Co-registered whole pelvis CT and T2w MRI pairs were used as training images. Advanced tissue-specific non-linear regression models to predict HU for fat, muscle, bladder and air were created from the co-registered CT-MRI image pairs. On a test case T2w MRI, the bones and bladder were automatically segmented using a novel statistical shape and appearance model, while other soft tissues were separated using an Expectation-Maximization based clustering model. The CT bone in the training database that was most 'similar' to the segmented bone was then transformed with deformable registration to create the sCT component of the test case T2w MRI bone tissue. Predictions for the bone, air and soft tissue from the separate regression models were successively combined to generate a whole pelvis sCT. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same IMRT dose plan was 0.3% ± 0.9% (mean ± standard deviation) for the 39 patients. The 3D Gamma pass rate was 99.8 ± 0.00 (2 mm/2%). The novel hybrid model is computationally efficient, generating an sCT in 20 min from standard T2w images for prostate cancer radiation therapy dose planning and DRR generation.
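The tissue-specific regression step in the abstract above can be illustrated schematically: fit one intensity-to-HU model per tissue class, then combine the per-tissue predictions into a single substitute-CT map. The abstract describes advanced non-linear models; the linear fit and all data below are toy assumptions for illustration only:

```python
import numpy as np

def fit_tissue_models(mr, ct, labels, n_tissues):
    """Fit one linear MR-intensity -> HU model per tissue class (toy stand-in)."""
    models = []
    for t in range(n_tissues):
        m = labels == t
        A = np.column_stack([mr[m], np.ones(m.sum())])
        coef, *_ = np.linalg.lstsq(A, ct[m], rcond=None)
        models.append(coef)  # (slope, intercept) for tissue t
    return models

def predict_sct(mr, labels, models):
    """Combine per-tissue predictions into one substitute-CT (sCT) map."""
    sct = np.zeros_like(mr, dtype=float)
    for t, (a, b) in enumerate(models):
        m = labels == t
        sct[m] = a * mr[m] + b
    return sct
```

The tissue labels would come from the segmentation/clustering steps described in the abstract; here they are assumed given.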
Data processing device test apparatus and method therefor
Wilcox, Richard Jacob; Mulig, Jason D.; Eppes, David; Bruce, Michael R.; Bruce, Victoria J.; Ring, Rosalinda M.; Cole, Jr., Edward I.; Tangyunyong, Paiboon; Hawkins, Charles F.; Louie, Arnold Y.
2003-04-08
A method and apparatus for testing data processing devices are described. The test mechanism isolates critical paths by correlating a scanning microscope image with a selected speed path failure. A trigger signal having a preselected value is generated at the start of each pattern vector. The sweep of the scanning microscope is controlled by a computer, which also receives and processes the image signals returned from the microscope. The value of the trigger signal is correlated with the set of pattern lines being driven on the device under test (DUT). The trigger is either asserted or negated depending on whether a pattern line failure is detected and on the particular line that failed. In response to the detection of the particular speed path failure being characterized, and to the trigger signal, the control computer overlays a mask on the image of the DUT. The overlaid image provides a visual correlation of the failure with the structural elements of the DUT at the resolution of the microscope itself.
Design of two-DMD based zoom MW and LW dual-band IRSP using pixel fusion
NASA Astrophysics Data System (ADS)
Pan, Yue; Xu, Xiping; Qiao, Yang
2018-06-01
To test the anti-jamming ability of mid-wave infrared (MWIR) and long-wave infrared (LWIR) dual-band imaging systems, a zoom mid-wave (MW) and long-wave (LW) dual-band infrared scene projector (IRSP) based on two digital micro-mirror devices (DMDs) was designed using a pixel-fusion projection method. Two illumination systems, each illuminating one DMD directly with a Köhler telecentric beam, were combined with the projection system in a spatially separated layout. The distances of the projection entrance pupil and the illumination exit pupil were also analyzed separately. MWIR and LWIR virtual scenes were generated by the two DMDs and fused by a dichroic beam combiner (DBC), producing two radiation distributions in the projected image. The optical performance of each component was evaluated by ray tracing simulations. Apparent temperature and image contrast were demonstrated by imaging experiments. The test and simulation results show that the aberrations of the optical system are well corrected and that the quality of the projected image meets the test requirements.
Development of an imaging system for single droplet characterization using a droplet generator.
Minov, S Vulgarakis; Cointault, F; Vangeyte, J; Pieters, J G; Hijazi, B; Nuyttens, D
2012-01-01
The spray droplets generated by agricultural nozzles play an important role in the application accuracy and efficiency of plant protection products. The limitations of non-imaging techniques and recent improvements in digital image acquisition and processing have increased interest in using high speed imaging techniques for pesticide spray characterisation. The goal of this study was to develop an imaging technique to evaluate the characteristics of a single spray droplet using a piezoelectric single droplet generator and a high speed imaging technique. Tests were done with different camera settings, lenses, diffusers and light sources. The experiments showed the necessity of a good image acquisition and processing system. Image analysis results contributed to selecting the optimal set-up for measuring droplet size and velocity, which consisted of a high speed camera with a 6 μs exposure time, a microscope lens at a working distance of 43 cm resulting in a field of view of 1.0 cm x 0.8 cm, and a Xenon light source without diffuser used as a backlight. For measuring macro-spray characteristics such as the droplet trajectory, the spray angle and the spray shape, a Macro Video Zoom lens at a working distance of 14.3 cm with a larger field of view of 7.5 cm x 9.5 cm, in combination with a halogen spotlight with a diffuser and the high speed camera, can be used.
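The droplet size and velocity measurement from backlit high-speed frames, as described above, reduces to a threshold-and-centroid computation on consecutive frames. The sketch below is a minimal illustration; the threshold, pixel size and frame interval are made-up assumptions, not the study's calibration:

```python
import numpy as np

def centroid_and_diameter(frame, thresh, pixel_size):
    """Dark droplet on a bright backlight: threshold, centroid (px), equivalent diameter (m)."""
    mask = frame < thresh
    ys, xs = np.nonzero(mask)
    centroid = np.array([ys.mean(), xs.mean()])
    # diameter of the circle with the same projected area
    diameter = 2.0 * np.sqrt(mask.sum() / np.pi) * pixel_size
    return centroid, diameter

def droplet_velocity(frame_a, frame_b, thresh, pixel_size, dt):
    """Centroid displacement between two consecutive frames, converted to m/s."""
    ca, _ = centroid_and_diameter(frame_a, thresh, pixel_size)
    cb, _ = centroid_and_diameter(frame_b, thresh, pixel_size)
    return np.linalg.norm(cb - ca) * pixel_size / dt
```

A real pipeline would add background correction and sub-pixel edge detection, but the geometry of the measurement is as above.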
Building structural similarity database for metric learning
NASA Astrophysics Data System (ADS)
Jin, Guoxin; Pappas, Thrasyvoulos N.
2015-03-01
We propose a new approach for constructing databases for training and testing similarity metrics for structurally lossless image compression. Our focus is on structural texture similarity (STSIM) metrics and the matched-texture compression (MTC) approach. We first discuss the metric requirements for structurally lossless compression, which differ from those of other applications such as image retrieval, classification, and understanding. We identify "interchangeability" as the key requirement for metric performance, and partition the domain of "identical" textures into three regions of "highest," "high," and "good" similarity. We design two subjective tests for data collection: the first relies on ViSiProG to build a database of "identical" clusters, and the second builds a database of image pairs with "highest," "high," "good," and "bad" similarity labels. The data for the subjective tests are generated during the MTC encoding process and consist of pairs of candidate and target image blocks. The context of the surrounding image is critical for training the metrics to detect lighting discontinuities, spatial misalignments, and other border artifacts that have a noticeable effect on perceptual quality. The identical texture clusters are then used for training and testing two STSIM metrics. The labelled image pair database will be used in future research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, J; Shi, F; Hrycushko, B
2015-06-15
Purpose: For tandem and ovoid (T&O) HDR brachytherapy in our clinic, the planning physicist is required to manually capture ∼10 images during planning, perform a secondary dose calculation and generate a report, combine them into a single PDF document, and upload it to a record-and-verify system to prove to an independent plan checker that the case was planned correctly. Not only does this slow down the already time-consuming clinical workflow, the PDF document also limits the number of parameters that can be checked. To solve these problems, we have developed a web-based automatic quality assurance (QA) program. Methods: We set up a QA server accessible through a web interface. A T&O plan and CT images are exported as DICOM-RT files and uploaded to the server. The software checks 13 geometric features, e.g. whether the dwell positions are reasonable, and 10 dosimetric features, e.g. secondary dose calculations via the TG-43 formalism and D2cc to critical structures. A PDF report is automatically generated with errors and potential issues highlighted. It also contains images showing important geometric and dosimetric aspects to prove the plan was created following standard guidelines. Results: The program has been clinically implemented in our clinic. In each of the 58 T&O plans we tested, a 14-page QA report was automatically generated. It took ∼45 sec to export the plan and CT images and ∼30 sec to perform the QA tests and generate the report. In contrast, our manual QA document preparation took on average ∼7 minutes under optimal conditions and up to 20 minutes when mistakes were made during document assembly. Conclusion: We have tested the efficiency and effectiveness of an automated process for treatment plan QA of HDR T&O cases. This software was shown to improve the workflow compared to our conventional manual approach.
Wintermark, M; Zeineh, M; Zaharchuk, G; Srivastava, A; Fischbein, N
2016-07-01
A neuroradiologist's activity includes many tasks beyond interpreting relative value unit (RVU)-generating imaging studies. Our aim was to test a simple method to record and quantify the non-RVU-generating clinical activity represented by consults and clinical conferences, including tumor boards. Four full-time neuroradiologists, each working an average of 50% clinical and 50% academic activity, systematically recorded all the non-RVU-generating consults and conferences in which they were involved during 3 months by using a simple, Web-based application accessible from smartphones, tablets, or computers. The number and type of imaging studies they interpreted during the same period and the associated RVUs were extracted from our billing system. During the 3 months, the 4 neuroradiologists interpreted 4241 RVU-generating imaging studies, representing 8152 work RVUs. During the same period, they recorded 792 non-RVU-generating study reviews as part of consults and conferences (not including reading room consults), representing 19% of the interpreted RVU-generating imaging studies. We propose a simple Web-based smartphone app to record and quantify non-RVU-generating activities including consults, clinical conferences, and tumor boards. The quantification of non-RVU-generating activities is paramount in this time of a paradigm shift from volume to value. It also represents an important tool for determining staffing levels, which cannot be based on RVUs alone, considering the amount of time radiologists spend on non-RVU-generating activities. It may also influence payment models from medical centers to radiology departments or practices. © 2016 by American Journal of Neuroradiology.
Mobile Phones Democratize and Cultivate Next-Generation Imaging, Diagnostics and Measurement Tools
Ozcan, Aydogan
2014-01-01
In this article, I discuss some of the emerging applications and the future opportunities and challenges created by the use of mobile phones and their embedded components for the development of next-generation imaging, sensing, diagnostics and measurement tools. The massive volume of mobile phone users, which has now reached ~7 billion, drives the rapid improvements of the hardware, software and high-end imaging and sensing technologies embedded in our phones, transforming the mobile phone into a cost-effective and yet extremely powerful platform to run e.g., biomedical tests and perform scientific measurements that would normally require advanced laboratory instruments. This rapidly evolving and continuing trend will help us transform how medicine, engineering and sciences are practiced and taught globally. PMID:24647550
A Comparative Study of Random Patterns for Digital Image Correlation
NASA Astrophysics Data System (ADS)
Stoilov, G.; Kavardzhikov, V.; Pashkouleva, D.
2012-06-01
Digital Image Correlation (DIC) is a computer-based image analysis technique utilizing random patterns, which finds applications in experimental mechanics of solids and structures. In this paper a comparative study of three simulated random patterns is presented. One of them is generated according to a new algorithm introduced by the authors. A criterion for quantitative evaluation of random patterns based on their autocorrelation functions is introduced. The patterns' deformations are simulated numerically and realized experimentally. The displacements are measured using the DIC method. Tensile tests are performed after printing the generated random patterns on the surfaces of standard iron sheet specimens. It is found that the newly designed random pattern retains relatively good quality up to 20% deformation.
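The autocorrelation-based evaluation of random patterns mentioned above can be illustrated with the Wiener-Khinchin theorem: compute the pattern's circular autocorrelation via the FFT and score how concentrated its central peak is (a sharp, narrow peak is desirable for DIC matching). The peak-to-mean score below is a simple illustrative choice, not the authors' actual criterion:

```python
import numpy as np

def autocorr_sharpness(pattern):
    """Peak-to-mean ratio of the normalized autocorrelation (higher = sharper peak)."""
    p = pattern - pattern.mean()
    F = np.fft.fft2(p)
    ac = np.fft.ifft2(F * np.conj(F)).real  # circular autocorrelation (Wiener-Khinchin)
    ac /= ac.max()                          # zero-lag peak normalized to 1
    return 1.0 / np.abs(ac).mean()
```

A fine speckle pattern scores much higher than a smooth intensity ramp, because its autocorrelation approximates a delta function.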
Predicting cotton yield of small field plots in a cotton breeding program using UAV imagery data
NASA Astrophysics Data System (ADS)
Maja, Joe Mari J.; Campbell, Todd; Camargo Neto, Joao; Astillo, Philip
2016-05-01
One of the major criteria used for advancing experimental lines in a breeding program is yield performance. Obtaining yield performance data requires machine picking each plot with a cotton picker modified to weigh individual plots, and harvesting thousands of small field plots requires a great deal of time and resources. The efficiency of cotton breeding could be increased significantly, and its cost decreased, with accurate methods to predict yield performance. This work investigates the feasibility of an image processing technique using a commercial off-the-shelf (COTS) camera mounted on a small Unmanned Aerial Vehicle (sUAV) to collect standard RGB images for predicting cotton yield on small plots. An orthomosaic image was generated from multiple images and used to process multiple segmented plots. A Gaussian blur was used to eliminate the high frequency component of the images, which corresponds to the cotton pixels, and an image subtraction technique was used to generate high frequency pixel images. The cotton pixels were then separated using k-means clustering with 5 classes. The percentage cotton area was computed as the generated high frequency (cotton) pixels divided by the total area of the plot. Preliminary results (five flights, 3 altitudes) showed that cotton cover on multiple pre-selected 227 sq. m plots averaged 8%, which translates to approximately 22.3 kg of cotton. The yield prediction equation generated from the test site was then used on a separate validation site and produced a prediction error of less than 10%. In summary, the results indicate that a COTS camera with an appropriate image processing technique can produce results comparable to expensive sensors.
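The cotton-cover pipeline described above (Gaussian blur, image subtraction to isolate high-frequency cotton pixels, k-means clustering, then an area fraction) can be sketched in NumPy. The blur scale, cluster count handling and synthetic image are illustrative assumptions; the tiny percentile-initialized k-means stands in for a library implementation:

```python
import numpy as np

def gaussian_blur(img, sigma=3.0):
    """Separable Gaussian low-pass filter (keeps the low-frequency background)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    tmp = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, tmp, k, mode="same")

def kmeans_1d(values, k=5, iters=25):
    """Tiny deterministic 1-D k-means, percentile-initialized."""
    centers = np.percentile(values, np.linspace(0, 100, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def cotton_cover_fraction(gray_plot):
    """Fraction of plot pixels in the brightest high-frequency cluster ('cotton')."""
    high_freq = gray_plot - gaussian_blur(gray_plot)  # image-subtraction step
    labels, centers = kmeans_1d(high_freq.ravel())
    cotton_cluster = int(np.argmax(centers))
    return float(np.mean(labels == cotton_cluster))
```

The resulting fraction is what would feed the plot-level yield prediction equation.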
R&D Progress of HTS Magnet Project for Ultrahigh-field MRI
NASA Astrophysics Data System (ADS)
Tosaka, Taizo; Miyazaki, Hiroshi; Iwai, Sadanori; Otani, Yasumi; Takahashi, Masahiko; Tasaki, Kenji; Nomura, Shunji; Kurusu, Tsutomu; Ueda, Hiroshi; Noguchi, So; Ishiyama, Atsushi; Urayama, Shinichi; Fukuyama, Hidenao
An R&D project on high-temperature superconducting (HTS) magnets using rare-earth Ba2Cu3O7 (REBCO) wires was started in 2013. The project objective is to investigate the feasibility of adapting REBCO magnets to ultrahigh field (UHF) magnetic resonance imaging (MRI) systems. REBCO wires are promising components for UHF-MRI magnets because of their superior superconducting and mechanical properties, which make them smaller and lighter than conventional ones. Moreover, REBCO magnets can be cooled by the conduction-cooling method, making liquid helium unnecessary. In the past two years, some test coils and model magnets have been fabricated and tested. This year is the final year of the project. The goals of the project are: (1) to generate a 9.4 T magnetic field with a small test coil, (2) to generate a homogeneous magnetic field in a 200 mm diameter spherical volume with a 1.5 T model magnet, and (3) to perform imaging with the 1.5 T model magnet. In this paper, the progress of this R&D is described. The knowledge gained through these R&D results will be reflected in the design of 9.4 T MRI magnets for brain and whole body imaging.
Multifacet structure of observed reconstructed integral images.
Martínez-Corral, Manuel; Javidi, Bahram; Martínez-Cuenca, Raúl; Saavedra, Genaro
2005-04-01
Three-dimensional images generated by an integral imaging system suffer from degradations in the form of a grid of multiple facets. This multifacet structure breaks the continuity of the observed image and therefore reduces its visual quality. We analyze this effect and present guidelines for the design of lenslet imaging parameters that optimize viewing conditions with respect to the multifacet degradation. We consider the optimization of the system in terms of field of view, observer position and pupil function, lenslet parameters, and type of reconstruction. Numerical tests are presented to verify the theoretical analysis.
Classical Statistics and Statistical Learning in Imaging Neuroscience
Bzdok, Danilo
2017-01-01
Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using the t-test and ANOVA. In recent years, statistical learning methods have enjoyed increasing popularity, especially for applications to rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It retraces how classical statistics and statistical learning originated in different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896
2014-01-01
KEYWORDS: ultrafast imaging, strained nanomaterials, spectroscopy. Fragmentary abstract: lattice strain in devices with indirect-bandgap materials such as silicon; spreading of the photogenerated charge cloud as a result of carrier diffusion, with normalized carrier profiles generated by integrating the images along the direction normal to the wire. Figure 2 caption: charge carrier diffusion in a Si NW locally strained by a bending deformation; (A) SEM image of a bent Si nanowire ∼100
Mueller, Jenna; Asma, Betsy; Asiedu, Mercy; Krieger, Marlee S.; Chitalia, Rhea; Dahl, Denali; Taylor, Peyton; Schmitt, John W.; Ramanujam, Nimmi
2018-01-01
Introduction We have previously developed a portable Pocket Colposcope for cervical cancer screening in resource-limited settings. In this manuscript we report two different strategies (cross-polarization and an integrated reflector) to improve image contrast levels achieved with the Pocket Colposcope and evaluate the merits of each strategy compared to a standard-of-care digital colposcope. The desired outcomes included reduced specular reflection (glare), increased illumination beam pattern uniformity, and reduced electrical power budget. In addition, anti-fogging and waterproofing features were incorporated to prevent the Pocket Colposcope from fogging in the vaginal canal and to enable rapid disinfection by submersion in chemical agents. Methods Cross-polarization (Generation 3 Pocket Colposcope) and a new reflector design (Generation 4 Pocket Colposcope) were used to reduce glare and improve contrast. The reflector design (including the angle and height of the reflector sidewalls) was optimized through ray-tracing simulations. Both systems were characterized with a series of bench tests to assess specular reflection, beam pattern uniformity, and image contrast. A pilot clinical study was conducted to compare the Generation 3 and 4 Pocket Colposcopes to a standard-of-care colposcope (Leisegang Optik 2). Specifically, paired images of cervices were collected from the standard-of-care colposcope and either the Generation 3 (n = 24 patients) or the Generation 4 (n = 32 patients) Pocket Colposcopes. The paired images were blinded by device, randomized, and sent to an expert physician who provided a diagnosis for each image. Corresponding pathology was obtained for all image pairs. The primary outcome measures were the level of agreement (%) and κ (kappa) statistic between the standard-of-care colposcope and each Pocket Colposcope (Generation 3 and Generation 4). 
Results Both generations of Pocket Colposcope had significantly higher image contrast when compared to the standard-of-care colposcope. The addition of anti-fog and waterproofing features to the Generation 3 and 4 Pocket Colposcope did not impact image quality based on qualitative and quantitative metrics. The level of agreement between the Generation 3 Pocket Colposcope and the standard-of-care colposcope was 75.0% (kappa = 0.4000, p = 0.0028, n = 24). This closely matched the level of agreement between the Generation 4 Pocket Colposcope and the standard-of-care colposcope which was also 75.0% (kappa = 0.4941, p = 0.0024, n = 32). Conclusion Our results indicate that the Generation 3 and 4 Pocket Colposcopes perform comparably to the standard-of-care colposcope, with the added benefit of being low-cost and waterproof, which is ideal for use in resource-limited settings. Additionally, the reflector significantly reduces the electrical requirements of the Generation 4 Pocket Colposcope enhancing portability without altering performance compared to the Generation 3 system. PMID:29425225
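The agreement statistics reported above (percent agreement plus a kappa statistic between paired colposcope diagnoses) correspond to the standard Cohen's kappa formula, shown here on toy labels rather than the study's data:

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)  # observed agreement
    # chance agreement from the raters' marginal category frequencies
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)
    return (po - pe) / (1 - pe)
```

A kappa near 0.4-0.5, as in the study, indicates moderate agreement beyond what the marginal diagnosis rates alone would produce.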
Design of an MR image processing module on an FPGA chip.
Li, Limin; Wyrwicz, Alice M
2015-06-01
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing, and demonstrate that graphical coding can greatly simplify the design work. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. First, no off-chip hardware resources are required, which increases the portability of the core. Second, the direct matrix transposition usually required to execute a 2D FFT is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. Tests on static data show that the processing module is capable of reconstructing 128×128 images at a speed of 400 frames/second. Tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. Copyright © 2015 Elsevier Inc. All rights reserved.
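The transposition-free design above rests on the row-column decomposition of the 2-D DFT: 1-D FFTs along one axis followed by 1-D FFTs along the other give the full 2-D transform, so only the read/write addressing between the two passes must change, which is what an address generation unit provides in hardware. A NumPy sketch of the decomposition itself (the hardware details are not modeled here):

```python
import numpy as np

img = np.arange(16.0).reshape(4, 4)
rows = np.fft.fft(img, axis=1)    # pass 1: 1-D FFT of every row
full = np.fft.fft(rows, axis=0)   # pass 2: 1-D FFT of every column (no transpose)
reference = np.fft.fft2(img)      # direct 2-D FFT for comparison
```

Because the second pass only changes which axis is indexed, an FPGA can realize it by permuting memory addresses instead of physically transposing the matrix.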
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiuping, E-mail: yangxiuping-1990@163.com; Min, Lequan, E-mail: minlequan@sina.com; Wang, Xue, E-mail: wangxue-20130818@163.com
This paper establishes a chaos criterion theorem for a class of cubic polynomial discrete maps. Using this theorem, Zhou-Song's chaos criterion theorem on quadratic polynomial discrete maps, and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite/Generalized FIPS 140-2 test suite is used to test the randomness of two sets of 1000 key streams, each stream consisting of 20 000 bits generated by the CPRNG. The results show that 99.9%/98.5% of the key streams passed the FIPS 140-2/Generalized FIPS 140-2 tests, respectively. Numerical simulations show that different keystreams agree on an average of 50.001% of their bits. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear coefficients between the plaintext and the ciphertext, and between the plaintext and the ciphertexts decrypted via the 100 key streams with perturbed keys, are less than 0.00428. This suggests that texts decrypted via keystreams generated with perturbed keys are almost completely independent of the original image text, and brute-force attacks would be needed to break the cryptographic system.
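The core idea of a chaos-based PRNG, thresholding the orbit of a chaotic map into a bit stream, can be shown schematically. The sketch uses the classic logistic map rather than the paper's cubic polynomial GS system, so it illustrates only the principle, not the published generator:

```python
def chaotic_bits(x0, n):
    """Threshold the orbit of the logistic map x -> 4x(1-x) into pseudorandom bits."""
    bits, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits
```

Sensitive dependence on the seed is what makes streams from perturbed keys nearly independent, which is exactly what the abstract's average 50.001% bit agreement between different keystreams reflects.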
Cascaded image analysis for dynamic crack detection in material testing
NASA Astrophysics Data System (ADS)
Hampel, U.; Maas, H.-G.
Concrete specimens in civil engineering material testing often show fissures or hairline cracks. These cracks develop dynamically. Starting at a width of a few microns, they usually cannot be detected visually or in an image of a camera imaging the whole specimen. Conventional image analysis techniques will detect fissures only if their width is on the order of one pixel. To detect and measure fissures with a width of a fraction of a pixel at an early stage of their development, a cascaded image analysis approach has been developed, implemented and tested. The basic idea of the approach is to detect discontinuities in dense surface deformation vector fields. These deformation vector fields between consecutive stereo image pairs, generated by cross correlation or least squares matching, show a precision on the order of 1/50 pixel. Hairline cracks can be detected and measured by applying edge detection techniques such as a Sobel operator to the results of the image matching process. Cracks show up as linear discontinuities in the deformation vector field and can be vectorized by edge chaining. In practical tests of the method, cracks with a width of 1/20 pixel could be detected, and their width could be determined with a precision of 1/50 pixel.
Mirzaei, Alireza; Jalilian, Amir R; Akhlaghi, Mehdi; Beiki, Davood
2016-01-01
Gallium-68 citrate has been successfully applied in PET imaging of infection and inflammation in some centers; however, further evaluation of the tracer in inflammation models is of great importance. 68Ga-citrate, prepared from [68Ga]GaCl3 (eluted from an SnO2-based 68Ge/68Ga generator) and sodium citrate under optimized conditions and subjected to quality control tests, was injected into normal and turpentine-oil-induced inflammation rat models for PET/CT imaging studies up to 290 min. 68Ga-citrate was prepared with acceptable radiochemical purity (>99% ITLC, >99% HPLC), specific activity (28-30 GBq/mM), and chemical purity (Sn, Fe <0.3 ppm; Zn <0.2 ppm) in 15 min at 50°C. PET/CT imaging of the tracer demonstrated detection of the inflamed site in animal models within 60-80 min. This study demonstrated possible early detection of inflammation foci in vivo using 68Ga-citrate prepared with commercially available 68Ge/68Ga generators for PET imaging. Copyright© Bentham Science Publishers.
Standard imaging techniques cannot accurately locate sites of prostate cancer metastasis. The use of 18F-DCFPyL, a second-generation PET agent, aims to improve doctors' ability to assess high-risk primary tumors, detect sites of recurrent prostate cancer and target therapies to specific sites of recurrence.
Lee, Seung-Hwan; Wynn, Jonathan K; Green, Michael F; Kim, Hyun; Lee, Kang-Joon; Nam, Min; Park, Joong-Kyu; Chung, Young-Cho
2006-04-01
Electrophysiological studies have demonstrated gamma and beta frequency oscillations in response to auditory stimuli. The purpose of this study was to test whether auditory hallucinations (AH) in schizophrenia patients reflect abnormalities in gamma and beta frequency oscillations, and to investigate the source generators of these abnormalities, using quantitative electroencephalography (qEEG) and low-resolution electromagnetic tomography (LORETA) source imaging. Twenty-five schizophrenia patients with treatment-refractory AH lasting at least 2 years, and 23 schizophrenia patients without AH (N-AH) during the past 2 years, were recruited for the study. Spectral analysis of the qEEG and source imaging of the frequency bands were performed on artifact-free 30-s epochs recorded at rest. AH patients showed significantly increased beta 1 and beta 2 frequency amplitude compared with N-AH patients. Gamma and beta (2 and 3) frequencies were significantly correlated in AH but not in N-AH patients. Source imaging revealed significantly increased beta (1 and 2) activity in the left inferior parietal lobule and the left medial frontal gyrus in AH versus N-AH patients. These results imply that AH reflects increased beta frequency oscillations with neural generators localized in speech-related areas.
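The band-amplitude comparison underlying the study above can be illustrated with a simple FFT-based band measure. The sampling rate, band edges and synthetic "EEG" below are illustrative assumptions, not the study's acquisition parameters:

```python
import numpy as np

def band_amplitude(signal, fs, f_lo, f_hi):
    """Mean spectral amplitude of a 1-D signal within [f_lo, f_hi] Hz."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    sel = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[sel].mean()

fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 20 * t)            # synthetic 20 Hz (beta-band) oscillation
beta = band_amplitude(eeg, fs, 13, 30)      # beta band dominates for this signal
gamma = band_amplitude(eeg, fs, 30, 45)
```

Group comparisons such as AH versus N-AH then reduce to statistics over such per-epoch band amplitudes; source localization (LORETA) is a separate, more involved step.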
Results of a low power ice protection system test and a new method of imaging data analysis
NASA Technical Reports Server (NTRS)
Shin, Jaiwon; Bond, Thomas H.; Mesander, Geert A.
1992-01-01
Tests were conducted on a BF Goodrich De-Icing System's Pneumatic Impulse Ice Protection (PIIP) system in the NASA Lewis Icing Research Tunnel (IRT). Characterization studies of shed ice particle size were done by changing the input pressure and cycling time of the PIIP de-icer. The shed ice particle size was quantified using a newly developed image software package. The tests were conducted on a 1.83 m (6 ft) span, 0.53 m (21 in) chord NACA 0012 airfoil operated at a 4 degree angle of attack. The IRT test conditions were -6.7 C (20 F) glaze ice and -20 C (-4 F) rime ice. The ice shedding events were recorded with a high speed video system. A detailed description of the image processing package and the results generated from this analytical tool are presented.
Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos
2016-09-07
This paper presents an image reconstruction method for monitoring the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings from fiber optic distributed temperature sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, constructed by finite element simulation with a multi-physical model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times compared to the DTS spatial resolution. In addition, hotspots of only 5 cm were also detected satisfactorily. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults at early stages, thereby avoiding catastrophic damage to the structure.
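The abstract does not specify which sparse reconstruction algorithm combines the dictionary atoms; as an illustrative sketch only, orthogonal matching pursuit (OMP) over a toy hotspot dictionary shows the general idea. The dictionary `A`, the readings `y`, and the sparsity level are all hypothetical placeholders, not values from the study:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k dictionary atoms
    (columns of A, e.g. simulated hotspot responses) whose combination
    best explains the coarse DTS temperature readings y."""
    residual, support = y.astype(float), []
    for _ in range(k):
        # atom most correlated with what is still unexplained
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit over the selected atoms
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x
    coeffs = np.zeros(A.shape[1])
    coeffs[support] = x
    return coeffs

# toy dictionary: 3 'hotspot' profiles sampled at 6 sensor positions
A = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0],
              [0, 1, 0], [0, 0, 1], [0, 0, 1]], float)
y = 2.0 * A[:, 1] + 0.5 * A[:, 2]      # two active hotspots
print(omp(A, y, 2))
```

In a realistic setting the atoms would be the finite-element hotspot simulations sampled at the DTS resolution, and the recovered coefficients would localize and scale the hotspots in the reconstructed thermal image.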
Computer generated maps from digital satellite data - A case study in Florida
NASA Technical Reports Server (NTRS)
Arvanitis, L. G.; Reich, R. M.; Newburne, R.
1981-01-01
Ground cover maps are important tools to a wide array of users. Over the past three decades, much progress has been made in supplementing planimetric and topographic maps with ground cover details obtained from aerial photographs. The present investigation evaluates the feasibility of using computer maps of ground cover from satellite input tapes. Attention is given to the selection of test sites, a satellite data processing system, a multispectral image analyzer, general purpose computer-generated maps, the preliminary evaluation of computer maps, a test for areal correspondence, the preparation of overlays and acreage estimation of land cover types on the Landsat computer maps. There is every indication to suggest that digital multispectral image processing systems based on Landsat input data will play an increasingly important role in pattern recognition and mapping land cover in the years to come.
Adaptive template generation for amyloid PET using a deep learning approach.
Kang, Seung Kwan; Seo, Seongho; Shin, Seong A; Byun, Min Soo; Lee, Dong Young; Kim, Yu Kyeong; Lee, Dong Soo; Lee, Jae Sung
2018-05-11
Accurate spatial normalization (SN) of amyloid positron emission tomography (PET) images for Alzheimer's disease assessment without coregistered anatomical magnetic resonance imaging (MRI) of the same individual is technically challenging. In this study, we applied deep neural networks to generate individually adaptive PET templates for robust and accurate SN of amyloid PET without using matched 3D MR images. Using 681 pairs of simultaneously acquired 11C-PIB PET and T1-weighted 3D MRI scans of AD, MCI, and cognitively normal subjects, we trained and tested two deep neural networks [a convolutional auto-encoder (CAE) and a generative adversarial network (GAN)] that produce individually adaptive PET templates. More specifically, the networks were trained using 685,100 pieces of augmented data generated by rotating 527 randomly selected datasets, and validated using 154 datasets. The input to the supervised neural networks was the 3D PET volume in native space, and the label was the spatially normalized 3D PET image obtained using the transformation parameters from MRI-based SN. The proposed deep learning approach significantly enhanced the quantitative accuracy of MRI-less amyloid PET assessment by reducing the SN error observed when an average amyloid PET template is used. Given an input image, the trained deep neural networks rapidly provide individually adaptive 3D PET templates without any discontinuity between the slices (in 0.02 s). As the proposed method does not require 3D MRI for the SN of PET images, it has great potential for use in routine analysis of amyloid PET images in clinical practice and research. © 2018 Wiley Periodicals, Inc.
Saleh, Khaled; Hossny, Mohammed; Nahavandi, Saeid
2018-06-12
Traffic collisions between kangaroos and motorists are on the rise on Australian roads. According to a recent report, more than 20,000 kangaroo-vehicle collisions were estimated to have occurred in 2015 alone in Australia. In this work, we propose a vehicle-based framework for kangaroo detection in urban and highway traffic environments that could be used in collision warning systems. Our proposed framework is based on region-based convolutional neural networks (RCNN). Given the scarcity of labeled data of kangaroos in traffic environments, we utilized our state-of-the-art data generation pipeline to generate 17,000 synthetic depth images of traffic scenes with annotated kangaroo instances. We trained our proposed RCNN-based framework on a subset of the generated synthetic depth image dataset. The proposed framework achieved an average precision (AP) of 92% across all the synthetic depth test sets. We compared our proposed framework against other baseline approaches and outperformed them by more than 37% in AP across all the test sets. Additionally, we evaluated the generalization performance of the proposed framework on real live data and achieved resilient detection accuracy without any further fine-tuning of the RCNN-based framework.
Performance evaluation of infrared imaging system in field test
NASA Astrophysics Data System (ADS)
Wang, Chensheng; Guo, Xiaodong; Ren, Tingting; Zhang, Zhi-jie
2014-11-01
Infrared imaging systems are widely used in both military and civilian fields. Since infrared imagers come in various types with different parameters, system manufacturers and customers alike need a standard tool or platform for evaluating the performance of IR imaging systems. Since the first-generation IR imagers were developed, the standard assessment method has been the MRTD and related improved methods, which are not well suited to current linear-scanning imagers or 2D staring imagers based on FPA detectors. To address this problem, this paper describes an evaluation method based on the triangular orientation discrimination (TOD) metric, regarded as an effective and emerging way to evaluate the overall performance of EO systems. To enable evaluation in field tests, an experimental instrument was developed, and, given the importance of operational conditions, the field test was carried out in a practical atmospheric environment. The tested imagers include a panoramic imaging system and staring imaging systems with different optics and detector parameters (both cooled and uncooled). After describing the instrument and experiment setup, the paper presents the experimental results, and the target range performance is analyzed and discussed. The data analysis compares the range predictions obtained from the TOD method, the MRTD method, and practical experiments. The experimental results prove the effectiveness of this evaluation tool, which can serve as a platform providing a uniform performance prediction reference.
Different Plasticity Patterns of Language Function in Children With Perinatal and Childhood Stroke
Tomberg, Tiiu; Kepler, Joosep; Laugesaar, Rael; Kaldoja, Mari-Liis; Kepler, Kalle; Kolk, Anneli
2014-01-01
Plasticity of language function after brain damage can depend on maturation of the brain. Children with left-hemisphere perinatal (n = 7) or childhood stroke (n = 5) and 12 controls were investigated using functional magnetic resonance imaging. The verb generation and the sentence comprehension tasks were employed to activate the expressive and receptive language areas, respectively. Weighted laterality indices were calculated and correlated with results assessed by neuropsychological test battery. Compared to controls, children with childhood stroke showed significantly lower mean scores for the expressive (P < .05) and receptive (P = .05) language tests. On functional magnetic resonance imaging they showed left-side cortical activation, as did controls. Perinatal stroke patients showed atypical right-side or bilateral language lateralization during both tasks. Negative correlation for stroke patients was found between scores for expressive language tests and laterality index during the verb generation task. (Re)organization of language function differs in children with perinatal and childhood stroke and correlates with neurocognitive performance. PMID:23748202
Module for multiphoton high-resolution hyperspectral imaging and spectroscopy
NASA Astrophysics Data System (ADS)
Zeytunyan, Aram; Baldacchini, Tommaso; Zadoyan, Ruben
2018-02-01
We developed a module for dual-output, dual-wavelength lasers that facilitates multiphoton imaging and spectroscopy experiments and enables hyperspectral imaging with spectral resolution up to 5 cm⁻¹. High spectral resolution is achieved by employing spectral focusing. Specifically, two sets of grating pairs are used to control the chirp in each laser beam. In contrast with the approach that uses fixed-length glass rods, grating pairs allow matching the spectral resolution to the linewidths of the Raman lines of interest. To demonstrate the performance of the module, we report the results of spectral-focusing CARS and SRS microscopy experiments for various test samples and Raman shifts. The developed module can be used for a variety of multimodal imaging and spectroscopy applications, such as single- and multi-color two-photon fluorescence, second harmonic generation, third harmonic generation, pump-probe, transient absorption, and others.
Smartphone-based analysis of biochemical tests for health monitoring support at home.
Velikova, Marina; Smeets, Ruben L; van Scheltinga, Josien Terwisscha; Lucas, Peter J F; Spaanderman, Marc
2014-09-01
In the context of home-based healthcare monitoring systems, it is desirable that the results obtained from biochemical tests - tests of various body fluids such as blood and urine - are objective and automatically generated, to reduce the number of man-made errors. The authors present the StripTest reader - an innovative smartphone-based interpreter of paper-strip biochemical tests that uses image processing to read strip colour. The working principles of the reader include image acquisition of the colour strip pads using the camera phone, analysing the images within the phone, and comparing them with reference colours provided by the manufacturer to obtain the test result. The detection of kidney damage was used as a scenario to illustrate the application of, and to test, the StripTest reader. An extensive evaluation using laboratory and human urine samples demonstrates the reader's accuracy and precision of detection, indicating the successful development of a cheap, mobile and smart reader for home monitoring of kidney functioning, which can facilitate the early detection of health problems and timely treatment intervention.
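The colour-comparison step described above (matching pad colours against the manufacturer's reference colours) can be sketched as a nearest-colour lookup. The reference RGB values and grade labels below are hypothetical placeholders, not the manufacturer's actual chart, and a real reader would first correct for illumination and work in a calibrated colour space:

```python
import numpy as np

# Hypothetical reference colours for a urine-strip protein pad
# (real values come from the strip manufacturer's colour chart).
REFERENCE = {
    "negative": (240, 230, 140),
    "trace":    (210, 220, 130),
    "1+":       (170, 200, 120),
    "2+":       (120, 180, 130),
}

def read_pad(measured_rgb):
    """Match a measured pad colour to the nearest reference colour
    by Euclidean distance in RGB space."""
    measured = np.asarray(measured_rgb, dtype=float)
    best = min(REFERENCE.items(),
               key=lambda kv: np.linalg.norm(measured - np.asarray(kv[1], float)))
    return best[0]

print(read_pad((235, 228, 145)))  # closest to "negative"
```

The measured pad colour would be averaged over the pad region segmented from the camera image before matching.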
View generation for 3D-TV using image reconstruction from irregularly spaced samples
NASA Astrophysics Data System (ADS)
Vázquez, Carlos
2007-02-01
Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for the generation of new views for stereoscopic and multi-view displays from a small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use a forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. The fact that no approximation is made on the position of the samples implies that geometrical distortions in the final images due to approximations in sample positions are minimized. We also paid attention to the occlusion problem. Our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generating the views needed for viewing on SynthaGram™ auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
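A minimal one-dimensional sketch of the forward-mapping disparity compensation described above. The paper re-samples the resulting irregular grid with bi-cubic splines and keeps real-valued sample positions; this toy version instead rounds target positions, resolves collisions with a disparity z-buffer, and leaves holes (newly exposed areas) as NaN for later depth-aware inpainting:

```python
import numpy as np

def forward_warp_row(row, disparity, scale=1.0):
    """Forward-map one image row to a virtual view.

    Each pixel x is shifted by scale * disparity[x]. Collisions keep
    the sample with the larger disparity (closer to the camera);
    untouched target pixels remain NaN as occlusion holes.
    """
    out = np.full_like(row, np.nan, dtype=float)
    depth = np.full(row.shape, -np.inf)           # z-buffer of disparities
    for x, (v, d) in enumerate(zip(row, disparity)):
        xt = int(round(x + scale * d))            # crude rounding, not spline re-sampling
        if 0 <= xt < len(out) and d > depth[xt]:
            out[xt], depth[xt] = v, d
    return out

row = np.array([10., 20., 30., 40., 50.])
disp = np.array([0., 0., 2., 0., 0.])             # middle pixel is closer
print(forward_warp_row(row, disp))
```

Running this shifts the foreground pixel two positions to the right, where it wins the z-buffer test against the background sample, and leaves a NaN hole at its original location.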
Comparison of algorithms for automatic border detection of melanoma in dermoscopy images
NASA Astrophysics Data System (ADS)
Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert
2016-09-01
Melanoma is one of the most rapidly increasing cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space, in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an output image mask. Finally, the automatically generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. The average error for test images was 0.10 using the new algorithm and 0.99 using the SRM method. In comparing the average error values produced by the two algorithms, it is evident that the average XOR error for our technique is lower than that of the SRM method, implying that the new algorithm detects melanoma borders more accurately than the SRM algorithm.
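The XOR error used above to compare automatic and manual masks can be written in a few lines. Normalizing the count of disagreeing pixels by the manual (ground-truth) mask area is one common convention and is assumed here, since the abstract does not state the normalization:

```python
import numpy as np

def xor_error(auto_mask, manual_mask):
    """XOR border-detection error: fraction of pixels where the
    automatic and manual masks disagree, normalized by the area of
    the manual (ground-truth) mask."""
    auto = np.asarray(auto_mask, dtype=bool)
    manual = np.asarray(manual_mask, dtype=bool)
    return np.logical_xor(auto, manual).sum() / manual.sum()

manual = np.zeros((8, 8), bool); manual[2:6, 2:6] = True   # 16-pixel lesion
auto = np.zeros((8, 8), bool);   auto[2:6, 2:7] = True     # over-segmented by one column
print(xor_error(auto, manual))  # 4 / 16 = 0.25
```

An error of 0 means perfect agreement; values near 1 (as reported for the SRM method) mean the disagreement area is comparable to the lesion itself.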
Reference software implementation for GIFTS ground data processing
NASA Astrophysics Data System (ADS)
Garcia, R. K.; Howell, H. B.; Knuteson, R. O.; Martin, G. D.; Olson, E. R.; Smuga-Otto, M. J.
2006-08-01
Future satellite weather instruments such as high spectral resolution imaging interferometers pose a challenge to the atmospheric science and software development communities due to the immense data volumes they will generate. An open-source, scalable reference software implementation demonstrating the calibration of radiance products from an imaging interferometer, the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), is presented. This paper covers essential design principles laid out in summary system diagrams, lessons learned during implementation and preliminary test results from the GIFTS Information Processing System (GIPS) prototype.
Next-generation pushbroom filter radiometers for remote sensing
NASA Astrophysics Data System (ADS)
Tarde, Richard W.; Dittman, Michael G.; Kvaran, Geir E.
2012-09-01
Individual focal plane size, yield, and quality continue to improve, as does the technology required to combine focal planes into large tiled formats. As a result, next-generation pushbroom imagers are replacing traditional scanning technologies in remote sensing applications. The pushbroom architecture has inherently better radiometric sensitivity and significantly reduced payload mass, power, and volume compared with previous-generation scanning technologies. However, the architecture creates challenges in achieving the required radiometric accuracy. Achieving good radiometric accuracy, including image spectral and spatial uniformity, requires creative optical design, high-quality focal planes and filters, careful consideration of on-board calibration sources, and state-of-the-art ground test facilities. Ball Aerospace built the Landsat Data Continuity Mission (LDCM) next-generation Operational Land Imager (OLI) payload. Scheduled to launch in 2013, OLI provides imagery consistent with the historical Landsat spectral, spatial, radiometric, and geometric data record and completes the generational technology upgrade from the Enhanced Thematic Mapper (ETM+) whiskbroom technology to the modern pushbroom technology afforded by advanced focal planes. We explain how Ball's capabilities enabled production of the innovative next-generation OLI pushbroom filter radiometer, which meets challenging radiometric accuracy and calibration requirements. OLI will extend the multi-decadal land surface observation dataset dating back to the 1972 launch of ERTS-1 (Landsat 1).
Obstacle Detection Algorithms for Rotorcraft Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)
2001-01-01
In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. After reviewing the line detection literature, an algorithm for sub-pixel line detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both at the pixel and at the wire level. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter.
Comparison of virtual unenhanced CT images of the abdomen under different iodine flow rates.
Li, Yongrui; Li, Ye; Jackson, Alan; Li, Xiaodong; Huang, Ning; Guo, Chunjie; Zhang, Huimao
2017-01-01
To assess the effect of varying iodine flow rate (IFR) and iodine concentration on the quality of virtual unenhanced (VUE) images of the abdomen obtained with dual-energy CT, 94 subjects underwent unenhanced and triphasic contrast-enhanced CT scans of the abdomen (arterial, portal venous, and delayed phases) using dual-energy CT. Patients were randomized into 4 groups with different IFRs or iodine concentrations. VUE images were generated at 70 keV. The CT values, image noise, SNR and CNR of the aorta, portal vein, liver, liver lesion, pancreatic parenchyma, spleen, erector spinae, and retroperitoneal fat were recorded. Dose-length product and effective dose for an examination with and without the plain (unenhanced) phase were calculated to assess the potential dose savings. Two radiologists independently assessed subjective image quality using a five-point scale. The Kolmogorov-Smirnov test was used first to test for normal distribution. Where data conformed to a normal distribution, analysis of variance was used to compare mean HU values, image noise, SNRs and CNRs for the 4 image sets; where the distribution was not normal, a nonparametric test (the Kruskal-Wallis test followed by stepwise step-down comparisons) was used. The significance level for all tests was 0.01 (two-sided), to allow for type I errors due to multiple testing. The CT numbers (HU) of VUE images showed no significant differences between the 4 groups (p > 0.05) or between different phases within the same group (p > 0.05). VUE images had equal or higher SNR and CNR than true unenhanced images. VUE images received equal or lower subjective image quality scores than unenhanced images but were of acceptable quality for diagnostic use. The calculated dose-length product and estimated dose showed that the use of VUE images in place of unenhanced images would be associated with a dose saving of 25%. VUE images can replace conventional unenhanced images.
VUE images are not affected by varying iodine flow rates and iodine concentrations, and diagnostic examinations could be acquired with a potential dose saving of 25%.
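The test-selection logic described above (normality check first, then ANOVA or the Kruskal-Wallis test) can be sketched with SciPy. The data below are synthetic, and using a Kolmogorov-Smirnov test on z-scored samples is a simplification of a formal normality test, not the study's exact procedure:

```python
import numpy as np
from scipy import stats

def compare_groups(groups, alpha=0.01):
    """Pick ANOVA when every group passes a rough normality check,
    otherwise fall back to the Kruskal-Wallis test (mirroring the
    study's two-sided significance level of 0.01)."""
    normal = all(stats.kstest(stats.zscore(g), "norm").pvalue > alpha
                 for g in groups)
    if normal:
        name, result = "anova", stats.f_oneway(*groups)
    else:
        name, result = "kruskal", stats.kruskal(*groups)
    return name, result.pvalue

rng = np.random.default_rng(0)
# e.g. synthetic liver HU values for the 4 IFR / concentration groups
groups = [rng.normal(60, 5, 30) for _ in range(4)]
name, p = compare_groups(groups)
print(name, p)
```

Post-hoc stepwise comparisons (as in the study) would follow only when the omnibus test is significant at the chosen alpha.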
Vadnjal, Ana Laura; Etchepareborda, Pablo; Federico, Alejandro; Kaufmann, Guillermo H
2013-03-20
We present a method to determine micro- and nano-scale in-plane displacements based on the phase singularities generated by applying directional wavelet transforms to speckle pattern images. The spatial distribution of the phase singularities obtained by the wavelet transform forms a network characterized by two quasi-orthogonal directions. The displacement value is determined by identifying the intersection points of the network before and after the displacement produced by the tested object. The performance of this method is evaluated using simulated speckle patterns and experimental data. The proposed approach is compared with the optical vortex metrology and digital image correlation methods in terms of performance and noise robustness, and the advantages and limitations associated with each method are also discussed.
NASA Astrophysics Data System (ADS)
Montazeri, Sina; Gisinger, Christoph; Eineder, Michael; Zhu, Xiao xiang
2018-05-01
Geodetic stereo Synthetic Aperture Radar (SAR) is capable of absolute three-dimensional localization of natural Persistent Scatterers (PSs), which allows for Ground Control Point (GCP) generation using only SAR data. The prerequisite for the method to achieve high-precision results is the correct detection of common scatterers in SAR images acquired from different viewing geometries. In this contribution, we describe three strategies for the automatic detection of identical targets in SAR images of urban areas taken from different orbit tracks. Moreover, a complete workflow for the automatic generation of a large number of GCPs using SAR data is presented, and its applicability is shown by exploiting TerraSAR-X (TS-X) high-resolution spotlight images over the city of Oulu, Finland and a test site in Berlin, Germany.
Adams, Robert; Zboray, Robert; Prasser, Horst-Michael
2016-01-01
Very few experimental imaging studies using a compact neutron generator have been published, and to the knowledge of the authors none have included tomography results using multiple projection angles. Radiography results with a neutron generator, scintillator screen, and camera can be seen in Bogolubov et al. (2005), Cremer et al. (2012), and Li et al. (2014). Comparable results with a position-sensitive photomultiplier tube can be seen in Popov et al. (2011). One study using an array of individual fast neutron detectors in the context of cargo scanning for security purposes is detailed in Eberhardt et al. (2005). In that case, however, the emphasis was on very large objects with a resolution on the order of 1 cm, whereas this study focuses on less massive objects and a finer spatial resolution. In Andersson et al. (2014), three fast neutron counters and a D-T generator were used to perform attenuation measurements of test phantoms. Based on the axisymmetry of the test phantoms, the single-projection information was used to calculate radial attenuation distributions of the object, which were compared with the known geometry. In this paper a fast-neutron tomography system based on an array of individual detectors and a purpose-designed compact D-D neutron generator is presented. Each of the 88 detectors consists of a plastic scintillator read out by two silicon photomultipliers and a dedicated pulse-processing board. Data acquisition for all channels was handled by four single-board microcontrollers. Details of the individual detector design and testing are elaborated upon. Using the complete array, several fast-neutron images of test phantoms were reconstructed, one of which was compared with results using a Co-60 gamma source. The system was shown to be capable of 2 mm resolution, with exposure times on the order of several hours per reconstructed tomogram.
Details about these measurements and the analysis of the reconstructed images are given, along with a discussion of the capabilities of the system and its outlook. Copyright © 2015 Elsevier Ltd. All rights reserved.
Multi-isotope SPECT imaging of the 225Ac decay chain: feasibility studies
NASA Astrophysics Data System (ADS)
Robertson, A. K. H.; Ramogida, C. F.; Rodríguez-Rodríguez, C.; Blinder, Stephan; Kunz, Peter; Sossi, Vesna; Schaffer, Paul
2017-06-01
Effective use of the ²²⁵Ac decay chain in targeted internal radioimmunotherapy requires the retention of both ²²⁵Ac and progeny isotopes at the target site. Imaging-based pharmacokinetic tests of these pharmaceuticals must therefore separately yet simultaneously image multiple isotopes that may not be colocalized despite being part of the same decay chain. This work presents feasibility studies demonstrating the ability of a microSPECT/CT scanner equipped with a high-energy collimator to simultaneously image two components of the ²²⁵Ac decay chain: ²²¹Fr (218 keV) and ²¹³Bi (440 keV). Image quality phantoms were used to assess the performance of two collimators for simultaneous ²²¹Fr and ²¹³Bi imaging in terms of contrast and noise. A hotrod resolution phantom containing clusters of thin rods with diameters ranging between 0.85 and 1.70 mm was used to assess resolution. To demonstrate the ability to simultaneously image dynamic ²²¹Fr and ²¹³Bi activity distributions, a phantom containing a ²²⁵Ac-based ²¹³Bi generator was imaged. These tests were performed with two collimators, a high-energy ultra-high resolution (HEUHR) collimator and an ultra-high sensitivity (UHS) collimator. Values consistent with activity concentrations determined independently via gamma spectroscopy were observed in high-activity regions of the images. In hotrod phantom images, the HEUHR collimator resolved all rods in both ²²¹Fr and ²¹³Bi images. With the UHS collimator, no rods were resolvable in ²¹³Bi images and only rods ≥1.3 mm were resolved in ²²¹Fr images. After eluting the ²¹³Bi generator, images accurately visualized the reestablishment of transient equilibrium of the ²²⁵Ac decay chain. The feasibility of evaluating the pharmacokinetics of the ²²⁵Ac decay chain in vivo has been demonstrated. The presented method requires the use of a high-performance high-energy collimator.
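The reestablishment of transient equilibrium after eluting the ²¹³Bi generator follows the standard Bateman equations. A sketch for the parent-daughter pair is below, using approximate half-lives (²²⁵Ac ≈ 9.9 d, ²¹³Bi ≈ 45.6 min) and ignoring the short-lived intermediates ²²¹Fr and ²¹⁷At, which is a simplification rather than the study's analysis:

```python
import numpy as np

# Approximate half-lives: Ac-225 ~ 9.92 d, Bi-213 ~ 45.6 min
L1 = np.log(2) / (9.92 * 24 * 60)   # Ac-225 decay constant, per minute
L2 = np.log(2) / 45.6               # Bi-213 decay constant, per minute

def daughter_activity(t, A1_0=1.0):
    """Bateman solution for Bi-213 activity regrowing after elution,
    i.e. with A2(0) = 0 and parent activity A1(0) = A1_0."""
    return A1_0 * L2 / (L2 - L1) * (np.exp(-L1 * t) - np.exp(-L2 * t))

# After several Bi-213 half-lives the chain approaches transient equilibrium
for t in (0.0, 45.6, 5 * 45.6):
    ratio = daughter_activity(t) / np.exp(-L1 * t)
    print(f"t = {t:6.1f} min  A(Bi-213)/A(Ac-225) = {ratio:.3f}")
```

The activity ratio climbs from zero toward roughly unity within about five ²¹³Bi half-lives, which is the regrowth behavior the dynamic phantom images visualize.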
High-quality JPEG compression history detection for fake uncompressed images
NASA Astrophysics Data System (ADS)
Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan
2017-05-01
Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. An unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have previously been compressed. To detect potential JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since compression can alter the structure information, the tetrolet covering indexes may change if a compression has been performed on the test image. Such changes can provide valuable clues about the image's compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block by block to investigate whether the tetrolet covering index of each 4×4 block differs between them. The percentages of changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, whose local minimum may indicate the potential compression. Our experimental results demonstrate the advantage of our method in detecting JPEG compressions of high quality, even at the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
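The final step of the p-curve analysis described above reduces to locating a local minimum over the recompression quality factors. The sketch below uses a synthetic p-curve with illustrative numbers; the tetrolet-covering comparison that would produce a real p-curve is omitted:

```python
import numpy as np

def detect_prior_quality(p_curve, qualities):
    """Locate the local minimum of the p-curve (percentage of changed
    tetrolet covering indexes vs. recompression quality factor).

    A dip at quality q suggests the 'uncompressed' file was previously
    JPEG-compressed at q; a monotone curve (no interior minimum)
    suggests a truly uncompressed image, so None is returned."""
    p = np.asarray(p_curve, float)
    interior = (p[1:-1] < p[:-2]) & (p[1:-1] < p[2:])   # strict local minima
    idx = np.flatnonzero(interior)
    return None if idx.size == 0 else qualities[idx[0] + 1]

qualities = list(range(90, 101))
# synthetic p-curve with a dip at quality factor 98
p_curve = [40, 38, 36, 34, 32, 30, 28, 22, 9, 24, 26]
print(detect_prior_quality(p_curve, qualities))  # 98
```

A real implementation would smooth the p-curve and threshold the dip depth before declaring a prior compression.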
Supervised guiding long-short term memory for image caption generation based on object classes
NASA Astrophysics Data System (ADS)
Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan
2018-03-01
The present models of image caption generation suffer from attenuation of image visual semantic information and from errors in the guidance information. To solve these problems, we propose a supervised guiding Long Short Term Memory model based on object classes, named S-gLSTM for short. It uses the object detection results from R-FCN as supervisory information with high confidence, and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image visual semantic information based on the guidance word set. The extracted information is fed into the S-gLSTM at each iteration as guidance information to guide the caption generation. To acquire text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through back-propagation of the guiding loss. Complementing the guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. Besides, the supervised guidance information in our model reduces the impact of mismatched words on caption generation. We test our model on the MSCOCO2014 dataset and obtain better performance than the state-of-the-art models.
Cancer diagnostics using neural network sorting of processed images
NASA Astrophysics Data System (ADS)
Wyman, Charles L.; Schreeder, Marshall; Grundy, Walt; Kinser, Jason M.
1996-03-01
A combination of image processing and neural network sorting was used to demonstrate the feasibility of automated cervical smear screening. Nuclei were isolated to generate a series of data points relating to the density and size of individual nuclei. This was followed by segmentation to isolate entire cells for subsequent generation of data points bounding the size of the cytoplasm. Data points were taken on as many as ten cells per image frame and included correlation against a series of filters providing size and density readings on nuclei. Additional point data were taken on nuclei images to refine size information and on whole cells to bound the size of the cytoplasm; twenty data points per assessed cell were generated. These data point sets, designated neural tensors, comprise the inputs for training and use of a unique neural network that sorts the images and identifies those indicating evidence of disease. The neural network, named the Fast Analog Associative Memory, accumulates data and establishes lookup tables for comparison against images to be assessed. Six networks were trained to differentiate normal cells from those evidencing various levels of abnormality that may lead to cancer. A blind test was conducted on 77 images to evaluate system performance. The image set included 31 positives (diseased) and 46 negatives (normal). Our system correctly identified all 31 positives and 41 of the negatives, with 5 false positives. We believe this technology can lead to more efficient automated screening of cervical smears.
Simulation of transmission electron microscope images of biological specimens.
Rullgård, H; Ofverstedt, L-G; Masich, S; Daneholt, B; Oktem, O
2011-09-01
We present a new approach to simulate electron cryo-microscope images of biological specimens. The simulation framework consists of two parts: a phantom generator that produces a model of a specimen suitable for simulation, and a transmission electron microscope simulator. The phantom generator calculates the scattering potential of an atomic structure in aqueous buffer and allows the user to define the distribution of molecules in the simulated image. The simulator includes a well-defined electron-specimen interaction model based on the scalar Schrödinger equation, the contrast transfer function for the optics, and a noise model that includes shot noise as well as detector noise, including detector blurring. To enable optimal performance, the simulation framework also includes a calibration protocol for setting simulation parameters. To test the accuracy of the new framework, we compare simulated images to experimental images recorded of Tobacco Mosaic Virus (TMV) in vitreous ice. The simulated and experimental images show good agreement with respect to contrast variations depending on dose and defocus. Furthermore, random fluctuations present in experimental and simulated images exhibit similar statistical properties. The simulator has been designed to provide a platform for the development of new instrumentation and image processing procedures in single-particle electron microscopy, two-dimensional crystallography and electron tomography, with well-documented protocols and an open source code into which new improvements and extensions are easily incorporated. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
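The contrast transfer function in such optics models has a standard weak-phase-object form, which can be sketched as follows. This is a generic textbook CTF, not the simulator's actual code; the 300 kV electron wavelength and spherical-aberration defaults are illustrative assumptions, and sign conventions vary between texts:

```python
import math

def ctf(k, defocus_m, cs_m=2.0e-3, wavelength_m=1.97e-12):
    """Weak-phase-object contrast transfer function at spatial frequency
    k (1/m). Aberration phase: chi = pi*lambda*df*k^2
    - (pi/2)*Cs*lambda^3*k^4; positive defocus means underfocus here.
    Envelope functions and amplitude contrast are omitted."""
    chi = (math.pi * wavelength_m * defocus_m * k ** 2
           - 0.5 * math.pi * cs_m * wavelength_m ** 3 * k ** 4)
    return -math.sin(chi)
```

At zero spatial frequency a pure phase object produces no contrast, which is why low-frequency information is carried mainly by defocus in practice.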
Yang, C; Paulson, E; Li, X
2012-06-01
To develop and evaluate a tool that can improve the accuracy of contour transfer between different image modalities under challenging conditions of low image contrast and large image deformation, compared with several commonly used methods, for radiation treatment planning. The software tool includes the following steps and functionalities: (1) accepting input images of different modalities; (2) converting existing contours on reference images (e.g., MRI) into delineated volumes and adjusting the intensity within the volumes to match the target-image (e.g., CT) intensity distribution for an enhanced similarity metric; (3) registering reference and target images using appropriate deformable registration algorithms (e.g., B-spline, demons) and generating deformed contours; (4) mapping the deformed volumes onto the target images and calculating the mean, variance, and center of mass as initialization parameters for subsequent fuzzy connectedness (FC) image segmentation on the target images; (5) generating an affinity map from the FC segmentation; (6) obtaining final contours by modifying the deformed contours using the affinity map with a gradient distance weighting algorithm. The tool was tested with the CT and MR images of four pancreatic cancer patients acquired at the same respiration phase to minimize motion distortion. Dice's coefficient was calculated against direct delineation on the target image. Contours generated by various methods, including rigid transfer, auto-segmentation, deformable-only transfer and the proposed method, were compared. Fuzzy connectedness image segmentation needs careful parameter initialization and user involvement. Automatic contour transfer by multi-modality deformable registration yields up to a 10% accuracy improvement over rigid transfer. The two additional proposed steps of adjusting the intensity distribution and modifying the deformed contour with the affinity map further improve the transfer accuracy to 14% on average.
Deformable image registration aided by contrast adjustment and fuzzy connectedness segmentation improves the contour transfer accuracy between multi-modality images, particularly with large deformation and low image contrast. © 2012 American Association of Physicists in Medicine.
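Dice's coefficient used for the evaluation above is a standard overlap measure between a transferred contour volume and a directly delineated reference; a minimal sketch over voxel index sets:

```python
def dice_coefficient(a, b):
    """Dice's coefficient between two voxel sets, e.g. a transferred
    contour volume and the directly delineated reference volume:
    2*|A & B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty contours agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))
```
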
Precision targeting with a tracking adaptive optics scanning laser ophthalmoscope
NASA Astrophysics Data System (ADS)
Hammer, Daniel X.; Ferguson, R. Daniel; Bigelow, Chad E.; Iftimia, Nicusor V.; Ustun, Teoman E.; Noojin, Gary D.; Stolarski, David J.; Hodnett, Harvey M.; Imholte, Michelle L.; Kumru, Semih S.; McCall, Michelle N.; Toth, Cynthia A.; Rockwell, Benjamin A.
2006-02-01
Precise targeting of retinal structures including retinal pigment epithelial cells, feeder vessels, ganglion cells, photoreceptors, and other cells important for light transduction may enable earlier disease intervention with laser therapies and advanced methods for vision studies. A novel imaging system based upon scanning laser ophthalmoscopy (SLO) with adaptive optics (AO) and active image stabilization was designed, developed, and tested in humans and animals. An additional port allows delivery of aberration-corrected therapeutic/stimulus laser sources. The system design includes simultaneous presentation of non-AO, wide-field (~40 deg) and AO, high-magnification (1-2 deg) retinal scans, easily positioned anywhere on the retina in a drag-and-drop manner. The AO optical design achieves an error of <0.45 waves (at 800 nm) over +/-6 deg on the retina. A MEMS-based deformable mirror (Boston Micromachines Inc.) is used for wave-front correction. The third-generation retinal tracking system achieves a bandwidth of greater than 1 kHz, allowing acquisition of stabilized AO images with an accuracy of ~10 μm. Normal adult human volunteers and animals with previously placed lesions (cynomolgus monkeys) were tested to optimize the tracking instrumentation and to characterize AO imaging performance. Ultrafast laser pulses were delivered to monkeys to characterize the ability to precisely place lesions and stimulus beams. Other advanced features such as real-time image averaging, automatic high-resolution mosaic generation, and automatic blink detection and tracking re-lock were also tested. The system has the potential to become an important tool for clinicians and researchers in the early detection and treatment of retinal diseases.
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Davis, Jason; Farrington, Seth; Walker, James
2007-01-01
Low-density polyurethane foam has been an important insulation material for space launch vehicles for several decades. The potential for damage from foam breaking away from the NASA External Tank was not realized until foam impacts on the Columbia Orbiter vehicle caused damage to its leading-edge thermal protection system (TPS). Development of improved inspection techniques for the foam TPS is necessary to prevent similar occurrences in the future. Foamed panels with drilled holes for volumetric flaws and Teflon inserts to simulate debonded conditions have been used to evaluate and calibrate nondestructive testing (NDT) methods. Unfortunately, the symmetric edges and dissimilar materials used in the preparation of these simulated flaws provide an artificially large signal, while very little signal is generated by the actual defects themselves. In other words, the artificial defects in the foam test panels do not generate the same signals as natural defects in the ET foam TPS. A project to create more realistic voids, similar to those that actually occur during manufacturing operations, was begun in order to improve the detection of critical voids during inspections. This presentation describes approaches taken to create more natural voids in foam TPS in order to provide a more realistic evaluation of what the NDT methods can detect. These flaw creation techniques were developed with both sprayed foam and poured foam used for insulation on the External Tank. Test panels with simulated defects have been used to evaluate NDT methods for the inspection of the External Tank. A comparison of images of natural flaws and machined flaws generated by backscatter x-ray radiography, x-ray laminography, terahertz imaging and millimeter-wave imaging shows significant differences in identifying defect regions.
NASA Astrophysics Data System (ADS)
Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.
2007-04-01
AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC-based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a scene generation system capable of frame rates of at least 120 Hz while frame-locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the scene generation system was developed using OpenSceneGraph.
Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D
2010-08-01
Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models. Copyright 2010 Elsevier B.V. All rights reserved.
Knoll, Florian; Hammernik, Kerstin; Kobler, Erich; Pock, Thomas; Recht, Michael P; Sodickson, Daniel K
2018-05-17
Although deep learning has shown great promise for MR image reconstruction, an open question regarding the success of this approach is the robustness in the case of deviations between training and test data. The goal of this study is to assess the influence of image contrast, SNR, and image content on the generalization of learned image reconstruction, and to demonstrate the potential for transfer learning. Reconstructions were trained from undersampled data using data sets with varying SNR, sampling pattern, image contrast, and synthetic data generated from a public image database. The performance of the trained reconstructions was evaluated on 10 in vivo patient knee MRI acquisitions from 2 different pulse sequences that were not used during training. Transfer learning was evaluated by fine-tuning baseline trainings from synthetic data with a small subset of in vivo MR training data. Deviations in SNR between training and testing led to substantial decreases in reconstruction image quality, whereas image contrast was less relevant. Trainings from heterogeneous training data generalized well toward the test data with a range of acquisition parameters. Trainings from synthetic, non-MR image data showed residual aliasing artifacts, which could be removed by transfer learning-inspired fine-tuning. This study presents insights into the generalization ability of learned image reconstruction with respect to deviations in the acquisition settings between training and testing. It also provides an outlook for the potential of transfer learning to fine-tune trainings to a particular target application using only a small number of training cases. © 2018 International Society for Magnetic Resonance in Medicine.
The Gemini Planet Imager: integration and status
NASA Astrophysics Data System (ADS)
Macintosh, Bruce A.; Anthony, Andre; Atwood, Jennifer; Barriga, Nicolas; Bauman, Brian; Caputa, Kris; Chilcote, Jeffery; Dillon, Daren; Doyon, René; Dunn, Jennifer; Gavel, Donald T.; Galvez, Ramon; Goodsell, Stephen J.; Graham, James R.; Hartung, Markus; Isaacs, Joshua; Kerley, Dan; Konopacky, Quinn; Labrie, Kathleen; Larkin, James E.; Maire, Jerome; Marois, Christian; Millar-Blanchaer, Max; Nunez, Arturo; Oppenheimer, Ben R.; Palmer, David W.; Pazder, John; Perrin, Marshall; Poyneer, Lisa A.; Quirez, Carlos; Rantakyro, Frederik; Reshtov, Vlad; Saddlemyer, Leslie; Sadakuni, Naru; Savransky, Dmitry; Sivaramakrishnan, Anand; Smith, Malcolm; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Weiss, Jason; Wiktorowicz, Sloane
2012-09-01
The Gemini Planet Imager is a next-generation instrument for the direct detection and characterization of young warm exoplanets, designed to be an order of magnitude more sensitive than existing facilities. It combines a 1700-actuator adaptive optics system, an apodized-pupil Lyot coronagraph, a precision interferometric infrared wavefront sensor, and an integral field spectrograph. All hardware and software subsystems are now complete and undergoing integration and test at UC Santa Cruz. We will present test results on each subsystem and the results of end-to-end testing. In laboratory testing, GPI has achieved a raw contrast (without post-processing) of 10⁻⁶ (5σ) at 0.4", and with multiwavelength speckle suppression, 2×10⁻⁷ at the same separation.
A prototype tap test imaging system: Initial field test results
NASA Astrophysics Data System (ADS)
Peters, J. J.; Barnard, D. J.; Hudelson, N. A.; Simpson, T. S.; Hsu, D. K.
2000-05-01
This paper describes a simple, field-worthy tap test imaging system that gives quantitative information about the size, shape, and severity of defects and damages. The system consists of an accelerometer, electronic circuits for conditioning the signal and measuring the impact duration, a laptop PC and data acquisition and processing software. The images are generated manually by tapping on a grid printed on a plastic sheet laid over the part's surface. A mechanized scanner is currently under development. The prototype has produced images for a variety of aircraft composite and metal honeycomb structures containing flaws, damages, and repairs. Images of the local contact stiffness, deduced from the impact duration using a spring model, revealed quantitatively the stiffness reduction due to flaws and damages, as well as the stiffness enhancement due to substructures. The system has been field tested on commercial and military aircraft as well as rotor blades and engine decks on helicopters. Field test results will be shown and the operation of the system will be demonstrated. This material is based upon work supported by the Federal Aviation Administration under Contract #DTFA03-98-D-00008, Delivery Order No. IA016 and performed at Iowa State University's Center for NDE as part of the Center for Aviation Systems Reliability program.
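The spring-model conversion from impact duration to local contact stiffness mentioned above follows from the standard mass-spring half-period relation tau = pi*sqrt(m/k). The system's exact calibration is not published in the abstract, so this is a sketch of the physics, not the fielded algorithm:

```python
import math

def contact_stiffness(tap_mass_kg, impact_duration_s):
    """Local contact stiffness from the measured impact duration using a
    simple mass-spring model: the tap is half a period of free
    oscillation, tau = pi*sqrt(m/k), so k = m*(pi/tau)**2. Stiffer
    regions (e.g. over substructure) give shorter taps, softer damaged
    regions give longer ones."""
    return tap_mass_kg * (math.pi / impact_duration_s) ** 2
```
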
Extraction and representation of common feature from uncertain facial expressions with cloud model.
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
2017-12-01
Human facial expressions are a key ingredient in conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established using cloud generators. With the forward cloud generator, arbitrarily many facial expression images can be regenerated to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, conclusions and remarks are given.
Wang, Jinke; Cheng, Yuanzhi; Guo, Changyong; Wang, Yadong; Tamura, Shinichi
2016-05-01
We propose a fully automatic 3D segmentation framework to segment the liver in challenging cases involving low contrast with adjacent organs and the presence of pathologies in abdominal CT images. First, all atlases in the selected training datasets are weighted by calculating the similarities between the atlases and the test image, to dynamically generate a subject-specific probabilistic atlas for the test image. The most likely liver region of the test image is then determined based on the generated atlas. A rough segmentation is obtained by a maximum a posteriori classification of the probability map, and the final liver segmentation is produced by a shape-intensity prior level set within the most likely liver region. Our method is evaluated and demonstrated on 25 test CT datasets from our partner site, and its results are compared with two state-of-the-art liver segmentation methods. Moreover, our performance results on 10 MICCAI test datasets were submitted to the organizers for comparison with the other automatic algorithms. Using the 25 test CT datasets, the average symmetric surface distance is [Formula: see text] mm (range 0.62-2.12 mm), the root mean square symmetric surface distance error is [Formula: see text] mm (range 0.97-3.01 mm), and the maximum symmetric surface distance error is [Formula: see text] mm (range 12.73-26.67 mm) by our method. On the 10 MICCAI test datasets, our method ranks 10th among all 47 automatic algorithms on the site as of July 2015. Quantitative results, as well as qualitative comparisons of segmentations, indicate that our method is a promising tool for improving the efficiency of liver segmentation. The applicability of the proposed method to some challenging clinical problems and to segmentation of the liver is demonstrated with good results in both quantitative and qualitative experiments.
This study suggests that the proposed framework can be good enough to replace the time-consuming and tedious slice-by-slice manual segmentation approach.
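The average symmetric surface distance reported above is a standard surface-based error metric; a brute-force sketch over surface point sets (production implementations use distance transforms or k-d trees rather than pairwise search):

```python
def average_symmetric_surface_distance(surf_a, surf_b):
    """Average symmetric surface distance between two surfaces given as
    point sequences: the mean, over both directions, of each point's
    Euclidean distance to the nearest point of the other surface."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

    def one_way(src, dst):
        return [min(dist(p, q) for q in dst) for p in src]

    d = one_way(surf_a, surf_b) + one_way(surf_b, surf_a)
    return sum(d) / len(d)
```

The root mean square and maximum variants quoted in the abstract differ only in how the same nearest-neighbor distances are aggregated.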
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, R.M.; Zander, M.E.; Brown, S.K.
1992-09-01
This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.
A Procedure for High Resolution Satellite Imagery Quality Assessment
Crespi, Mattia; De Vendictis, Laura
2009-01-01
Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period, in order to verify whether their quality fits the desired features and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality at the final-user level as well. Image quality is defined by parameters such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites. PMID:22412312
Umehara, Kensuke; Ota, Junko; Ishida, Takayuki
2017-10-18
In this study, the super-resolution convolutional neural network (SRCNN) scheme, an emerging deep-learning-based super-resolution method for enhancing image resolution in chest CT images, was applied and evaluated using a post-processing approach. For evaluation, 89 chest CT cases were sampled from The Cancer Imaging Archive. The 89 CT cases were divided randomly into 45 training cases and 44 external test cases. The SRCNN was trained using the training dataset. With the trained SRCNN, a high-resolution image was reconstructed from a low-resolution image, which was down-sampled from an original test image. For quantitative evaluation, two image quality metrics were measured and compared to those of the conventional linear interpolation methods. The image restoration quality of the SRCNN scheme was significantly higher than that of the linear interpolation methods (p < 0.001 or p < 0.05). The high-resolution image reconstructed by the SRCNN scheme was highly restored and comparable to the original reference image, in particular for a ×2 magnification. These results indicate that the SRCNN scheme significantly outperforms the linear interpolation methods for enhancing image resolution in chest CT images. The results also suggest that SRCNN may become a potential solution for generating high-resolution CT images from standard CT images.
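The abstract does not name the two image quality metrics; peak signal-to-noise ratio (PSNR) is a common choice for comparing a reconstruction against its reference in such studies, and a minimal version is:

```python
import math

def psnr(reference, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image and a
    reconstruction, both given as flat sequences of pixel values.
    Higher is better; identical images give infinity."""
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_value ** 2 / mse)
```
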
Nondestructive Evaluation of Carbon Fiber Bicycle Frames Using Infrared Thermography
Ibarra-Castanedo, Clemente; Klein, Matthieu; Maldague, Xavier; Sanchez-Beato, Alvaro
2017-01-01
Bicycle frames made of carbon fibre are extremely popular for high-performance cycling due to their stiffness-to-weight ratio, which enables greater power transfer. However, products manufactured from carbon fibre are sensitive to impact damage. Therefore, intelligent nondestructive evaluation is a required step to prevent failures and ensure safe use of the bicycle. This work proposes an inspection method based on active thermography, a proven technique successfully applied to other materials. Different configurations for the inspection are tested, including power and heating time. Moreover, experiments are performed on a real bicycle frame with generated impact damage of different energies. Tests show excellent results, detecting the generated damage during the inspection. When the results are combined with advanced image post-processing methods, the SNR is greatly increased, and the size and localization of the defects are clearly visible in the images. PMID:29156650
Sawicki, Piotr
2018-01-01
The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011. PMID:29509679
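The RMS difference used above to compare photogrammetric values against direct geodetic measurements is a simple aggregate; a minimal sketch (function name ours):

```python
def rms_difference(measured, reference):
    """Root-mean-square of the differences between image-derived values
    (e.g. track gauge or cant per cross-section, in mm) and the
    corresponding direct geodetic measurements."""
    diffs = [m - r for m, r in zip(measured, reference)]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5
```

Per the abstract, an RMS below 0.45 mm against the geodetic reference satisfies the 1 mm accuracy requirement of the cited Polish and European norms.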
Blind identification of image manipulation type using mixed statistical moments
NASA Astrophysics Data System (ADS)
Jeong, Bo Gyu; Moon, Yong Ho; Eom, Il Kyu
2015-01-01
We present a blind identification of image manipulation types such as blurring, scaling, sharpening, and histogram equalization. Motivated by the fact that image manipulations can change the frequency characteristics of an image, we introduce three types of feature vectors composed of statistical moments. The proposed statistical moments are generated from separated wavelet histograms, the characteristic functions of the wavelet variance, and the characteristic functions of the spatial image. Our method can solve the n-class classification problem. Through experimental simulations, we demonstrate that our proposed method can achieve high performance in manipulation type detection. The average rate of the correctly identified manipulation types is as high as 99.22%, using 10,800 test images and six manipulation types including the authentic image.
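Statistical moments of a (sub-band) histogram, of the kind used in the feature vectors above, can be sketched as follows; the binning and normalization details are assumptions, since the paper combines moments from several sources (wavelet histograms and characteristic functions):

```python
def histogram_moments(hist, n_moments=3):
    """First n statistical moments of a histogram, where hist[i] is the
    count in bin i: moment_n = sum(i**n * p(i)) over the normalized
    bin probabilities p(i)."""
    total = float(sum(hist))
    probs = [h / total for h in hist]
    return [sum((i ** n) * p for i, p in enumerate(probs))
            for n in range(1, n_moments + 1)]
```
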
NASA Astrophysics Data System (ADS)
Liu, Xiyao; Lou, Jieting; Wang, Yifan; Du, Jingyu; Zou, Beiji; Chen, Yan
2018-03-01
Authentication and copyright identification are two critical security issues for medical images. Although zero-watermarking schemes can provide durable, reliable and distortion-free protection for medical images, the existing zero-watermarking schemes for medical images still face two problems. On the one hand, they rarely consider distinguishability for medical images, which is critical because different medical images are sometimes similar to each other. On the other hand, their robustness against geometric attacks, such as cropping, rotation and flipping, is insufficient. In this study, a novel discriminative and robust zero-watermarking (DRZW) scheme is proposed to address these two problems. In DRZW, content-based features of medical images are first extracted based on the completed local binary pattern (CLBP) operator to ensure distinguishability and robustness, especially against geometric attacks. Then, master shares and ownership shares are generated from the content-based features and the watermark according to (2,2) visual cryptography. Finally, the ownership shares are stored for authentication and copyright identification. For queried medical images, their content-based features are extracted and master shares are generated. Their watermarks for authentication and copyright identification are recovered by stacking the generated master shares and the stored ownership shares. 200 different medical images of 5 types were collected as the testing data, and our experimental results demonstrate that DRZW ensures both the accuracy and reliability of authentication and copyright identification. When fixing the false positive rate at 1.00%, the average false negative rate using DRZW is only 1.75% under 20 common attacks with different parameters.
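The share-generation and stacking workflow described above can be illustrated with a simplified XOR scheme. True (2,2) visual cryptography uses subpixel expansion patterns, so this is only an analogy for the recover-by-stacking property, not the DRZW construction itself:

```python
def generate_shares(feature_bits, watermark_bits):
    """Zero-watermarking share generation sketch: the master share comes
    from the image's content-based feature bits, and the ownership share
    is chosen so that 'stacking' (here, XOR) the two shares recovers the
    watermark. The image itself is never modified."""
    master = list(feature_bits)
    ownership = [m ^ w for m, w in zip(master, watermark_bits)]
    return master, ownership

def recover_watermark(master, ownership):
    """Stack the stored ownership share with the master share generated
    from the queried image to recover the watermark."""
    return [m ^ o for m, o in zip(master, ownership)]
```

If an attack perturbs some feature bits of the queried image, the same bits of the recovered watermark flip, which is why robust content-based features (such as CLBP) matter.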
1992-03-15
Keywords: pipes, computer modelling, nondestructive testing, tomography, planar converter, cesium reservoir. [Abstract not recoverable; the surviving table-of-contents fragments cover computed tomography, X-ray radiography, and LEOS-generated output data for a Mo-Re converter.]
The Draw a Scientist Test: A Different Population and a Somewhat Different Story
ERIC Educational Resources Information Center
Thomas, Mark D.; Henley, Tracy B.; Snell, Catherine M.
2006-01-01
This study examined Draw-a-Scientist-Test (DAST) images solicited from 212 undergraduate students for the presence of traditional gender stereotypes. Participants were 100 males and 112 females enrolled in psychology or computer science courses with a mean age of 21.02 years. A standard multiple regression generated a model that accounts for the…
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Zheng, Bin; Huang, Xia; Qian, Wei
2017-03-01
Deep learning is a promising method in the medical image analysis area, but how to efficiently prepare input images for deep learning algorithms remains a challenge. In this paper, we introduce a novel artificial multichannel region of interest (ROI) generation procedure for convolutional neural networks (CNN). From the LIDC database, we collected 54880 benign nodule samples and 59848 malignant nodule samples based on the radiologists' annotations. The proposed CNN consists of three pairs of convolutional layers and two fully connected layers. For each original ROI, two new ROIs were generated: one contains the segmented nodule, which highlights the nodule shape, and the other contains the gradient of the original ROI, which highlights the textures. By combining the three channel images into a pseudo-color ROI, the CNN was trained and tested on the new multichannel ROIs (multichannel ROI II). For comparison, we generated another type of multichannel image by replacing the gradient image channel with an ROI containing a whitened background region (multichannel ROI I). With the 5-fold cross validation evaluation method, the CNN using multichannel ROI II achieved an ROI-based area under the curve (AUC) of 0.8823+/-0.0177, compared to the AUC of 0.8484+/-0.0204 generated by the original ROI. By averaging the ROI scores from one nodule, the lesion-based AUC using multichannel ROI was 0.8793+/-0.0210. Comparing the convolved feature maps from the CNN using different types of ROIs shows that multichannel ROI II contains more accurate nodule shapes and surrounding textures.
Compact time- and space-integrating SAR processor: design and development status
NASA Astrophysics Data System (ADS)
Haney, Michael W.; Levy, James J.; Christensen, Marc P.; Michael, Robert R., Jr.; Mock, Michael M.
1994-06-01
Progress toward a flight demonstration of the acousto-optic time- and space-integrating real-time SAR image formation processor program is reported. The concept overcomes the size and power consumption limitations of electronic approaches by using compact, rugged, and low-power analog optical signal processing techniques for the most computationally taxing portions of the SAR imaging problem. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results reported include tests of a laboratory version of the concept, a description of the compact optical design that will be implemented, and an overview of the electronic interface and controller modules of the flight-test system.
Continuous stacking computational approach based automated microscope slide scanner
NASA Astrophysics Data System (ADS)
Murali, Swetha; Adhikari, Jayesh Vasudeva; Jagannadh, Veerendra Kalyan; Gorthi, Sai Siva
2018-02-01
Cost-effective and automated acquisition of whole slide images is a bottleneck for wide-scale deployment of digital pathology. In this article, a computation augmented approach for the development of an automated microscope slide scanner is presented. The realization of a prototype device built using inexpensive off-the-shelf optical components and motors is detailed. The applicability of the developed prototype to clinical diagnostic testing is demonstrated by generating good quality digital images of malaria-infected blood smears. Further, the acquired slide images have been processed to identify and count the number of malaria-infected red blood cells and thereby perform quantitative parasitemia level estimation. The presented prototype would enable cost-effective deployment of slide-based cyto-diagnostic testing in endemic areas.
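The article does not detail the stacking algorithm; one common computational approach to fusing a slide z-stack selects, per pixel, the slice with the highest local sharpness (here measured by Laplacian magnitude). A hedged sketch under that assumption:

```python
import numpy as np

def laplacian(img):
    """4-neighbour discrete Laplacian (edges handled by reflection padding)."""
    p = np.pad(img, 1, mode="reflect")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def focus_stack(slices):
    """Fuse a z-stack: per pixel, keep the slice with maximal |Laplacian|."""
    stack = np.stack(slices)                        # (Z, H, W)
    sharp = np.abs(np.stack([laplacian(s) for s in slices]))
    best = np.argmax(sharp, axis=0)                 # index of sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Toy demo: a textured (in-focus) slice vs. a flat (defocused) one.
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
flat = np.full((8, 8), 0.5)
fused = focus_stack([flat, checker])
```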
Osteoarthritis Severity Determination using Self Organizing Map Based Gabor Kernel
NASA Astrophysics Data System (ADS)
Anifah, L.; Purnomo, M. H.; Mengko, T. L. R.; Purnama, I. K. E.
2018-02-01
The number of osteoarthritis patients in Indonesia is enormous, so early action is needed for this disease to be managed. The aim of this paper is to determine osteoarthritis severity from X-ray image templates based on the Gabor kernel. This research is divided into three stages: the first stage is image processing using the Gabor kernel, the second stage is the learning stage, and the third stage is the testing phase. The image processing stage normalizes the image dimensions to a 50 × 200 template. The learning stage is performed with an initial learning rate of 0.5 and a total of 1000 iterations. The testing stage is performed using the weights generated at the learning stage. The testing phase has been completed and results obtained: KL-Grade 0 has an accuracy of 36.21%, the accuracy for KL-Grade 1 is 40.52%, while the accuracies for KL-Grade 2 and KL-Grade 3 are 15.52% and 25.86%, respectively. This research is expected to serve as a decision support system for medical practitioners in determining the KL-Grade of knee osteoarthritis X-ray images.
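A real-valued Gabor kernel of the kind used in the filtering stage can be generated directly; the parameter names and default values below are illustrative, not the paper's:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0, gamma=0.5):
    """Real 2-D Gabor kernel: Gaussian envelope times a cosine carrier.

    theta rotates the carrier, lam is its wavelength, gamma the aspect ratio.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

# A small 4-orientation filter bank, as typically used for texture features.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
k = gabor_kernel()
```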
A Genetic Algorithm for the Generation of Packetization Masks for Robust Image Communication
Zapata-Quiñones, Katherine; Duran-Faundez, Cristian; Gutiérrez, Gilberto; Lecuire, Vincent; Arredondo-Flores, Christopher; Jara-Lipán, Hugo
2017-01-01
Image interleaving has proven to be an effective solution for providing robustness in image communication systems when resource limitations make reliable protocols unsuitable (e.g., in wireless camera sensor networks); however, the search for optimal interleaving patterns is scarcely tackled in the literature. In 2008, Rombaut et al. presented an interesting approach introducing a packetization mask generator based on Simulated Annealing (SA), including a cost function which allows assessing the suitability of a packetization pattern without extensive simulations. In this work, we present a complementary study of the non-trivial problem of generating optimal packetization patterns. We propose a genetic algorithm, as an alternative to the cited work, adopting the mentioned cost function, and compare it to the SA approach and a torus automorphism interleaver. In addition, we validate the cost function and provide results assessing its implication for the quality of reconstructed images. Several scenarios based on visual sensor network applications were tested in a computer application. Results in terms of the selected cost function and the image quality metric PSNR show that our algorithm gives results similar to the other approaches. Finally, we discuss the obtained results and comment on open research challenges. PMID:28452934
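The torus automorphism interleaver used as a comparison baseline can be sketched with the Arnold cat map, one common torus automorphism; the paper's exact map and parameters are not given here, so this choice is an assumption:

```python
import numpy as np

def torus_interleave(n, iterations=1):
    """Permute an n x n pixel grid with the Arnold cat map
    (x, y) -> (x + y, x + 2y) mod n, a torus automorphism often used
    as a deterministic interleaver. Returns a flat permutation of the
    n*n pixel indices (the map is a bijection, so no pixel is lost).
    """
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    for _ in range(iterations):
        x, y = (x + y) % n, (x + 2 * y) % n
    return (x * n + y).ravel()

# Scatter an 8x8 block's pixels across packets via three map iterations.
perm = torus_interleave(8, iterations=3)
```

Because the map matrix has determinant 1, it is invertible modulo n, so de-interleaving at the receiver is the inverse map.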
Si, Dong; He, Jing
2014-01-01
Electron cryo-microscopy (Cryo-EM) technique produces 3-dimensional (3D) density images of proteins. When resolution of the images is not high enough to resolve the molecular details, it is challenging for image processing methods to enhance the molecular features. β-barrel is a particular structure feature that is formed by multiple β-strands in a barrel shape. There is no existing method to derive β-strands from the 3D image of a β-barrel at medium resolutions. We propose a new method, StrandRoller, to generate a small set of possible β-traces from the density images at medium resolutions of 5-10Å. StrandRoller has been tested using eleven β-barrel images simulated to 10Å resolution and one image isolated from the experimentally derived cryo-EM density image at 6.7Å resolution. StrandRoller was able to detect 81.84% of the β-strands with an overall 1.5Å 2-way distance between the detected and the observed β-traces, if the best of fifteen detections is considered. Our results suggest that it is possible to derive a small set of possible β-traces from the β-barrel cryo-EM image at medium resolutions even when no separation of the β-strands is visible in the images.
Google glass based immunochromatographic diagnostic test analysis
NASA Astrophysics Data System (ADS)
Feng, Steve; Caire, Romain; Cortazar, Bingen; Turan, Mehmet; Wong, Andrew; Ozcan, Aydogan
2015-03-01
Integration of optical imagers and sensors into recently emerging wearable computational devices allows for simpler and more intuitive methods of integrating biomedical imaging and medical diagnostics tasks into existing infrastructures. Here we demonstrate the ability of one such device, the Google Glass, to perform qualitative and quantitative analysis of immunochromatographic rapid diagnostic tests (RDTs) using a voice-commandable, hands-free, software-only interface, as an alternative to larger and bulkier desktop or handheld units. Using the built-in camera of Glass to image one or more RDTs (labeled with Quick Response (QR) codes), our Glass software application uploads the captured image and related information (e.g., user name, GPS, etc.) to our servers for remote analysis and storage. After digital analysis of the RDT images, the results are transmitted back to the originating Glass device and made available through a website in geospatial and tabular representations. We tested this system on qualitative human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) RDTs. For qualitative HIV tests, we demonstrate successful detection and labeling (i.e., yes/no decisions) for up to 6-fold dilution of HIV samples. For quantitative measurements, we activated and imaged PSA concentrations ranging from 0 to 200 ng/mL and generated calibration curves relating the RDT line intensity values to PSA concentration. By providing automated digitization of both qualitative and quantitative test results, this wearable colorimetric diagnostic test reader platform on Google Glass can reduce operator errors caused by poor training, provide real-time spatiotemporal mapping of test results, and assist with remote monitoring of various biomedical conditions.
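The calibration step relating RDT line intensity to PSA concentration can be sketched with a simple least-squares fit; the data points and the linear response model below are hypothetical, since the paper does not specify its fitting model:

```python
import numpy as np

# Hypothetical calibration points: known PSA concentrations (ng/mL)
# and measured test-line intensities (arbitrary units).
conc = np.array([0.0, 10.0, 50.0, 100.0, 200.0])
intensity = np.array([2.0, 9.5, 41.0, 78.0, 161.0])

# Fit intensity = a * conc + b by least squares.
a, b = np.polyfit(conc, intensity, 1)

def estimate_concentration(measured_intensity):
    """Invert the calibration line to read PSA from a new RDT image."""
    return (measured_intensity - b) / a
```

A new test image's line intensity is then converted to a concentration by inverting the fitted line.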
Investigation of the Helmholtz-Kohlrausch effect using wide-gamut display
NASA Astrophysics Data System (ADS)
Oh, Semin; Kwak, Youngshin
2015-01-01
The aim of this study is to investigate whether the Helmholtz-Kohlrausch effect exists among images having various luminance and chroma levels. First, five images were selected. Each image was then adjusted to have 4 different average CIECAM02 C and 5 different average CIECAM02 J levels, generating 20 test images from each image for the psychophysical experiment. The psychophysical experiment was conducted in a dark room using an LCD display. To evaluate the overall perceived brightness of the images, a magnitude estimation method was used. Fifteen participants evaluated the brightness of each image by comparing it with the reference image. Participants tended to rate brightness higher as the average CIECAM02 J and CIECAM02 C of the image increased, confirming the Helmholtz-Kohlrausch effect in images.
A back-illuminated megapixel CMOS image sensor
NASA Technical Reports Server (NTRS)
Pain, Bedabrata; Cunningham, Thomas; Nikzad, Shouleh; Hoenk, Michael; Jones, Todd; Wrigley, Chris; Hancock, Bruce
2005-01-01
In this paper, we present the test and characterization results for a back-illuminated megapixel CMOS imager. The imager pixel consists of a standard junction photodiode coupled to a three-transistor-per-pixel switched source-follower readout [1]. The imager also includes integrated timing, control, and bias-generation circuits, and provides analog output. The analog column-scan circuits were implemented in such a way that the imager could be configured to run in off-chip correlated double-sampling (CDS) mode. The imager was originally designed for normal front-illuminated operation, and was fabricated in a commercially available 0.5 μm triple-metal CMOS-imager-compatible process. For backside illumination, the imager was thinned by etching away the substrate in a post-fabrication processing step.
Chahl, J S
2014-01-20
This paper describes an application for arrays of narrow-field-of-view sensors with parallel optical axes. These devices exhibit some complementary characteristics with respect to conventional perspective projection or angular projection imaging devices. Conventional imaging devices measure rotational egomotion directly by measuring the angular velocity of the projected image. Translational egomotion cannot be measured directly by these devices because the induced image motion depends on the unknown range of the viewed object. On the other hand, a known translational motion generates image velocities which can be used to recover the ranges of objects and hence the three-dimensional (3D) structure of the environment. A new method is presented for computing egomotion and range using the properties of linear arrays of independent narrow-field-of-view optical sensors. An approximate parallel projection can be used to measure translational egomotion in terms of the velocity of the image. On the other hand, a known rotational motion of the paraxial sensor array generates image velocities, which can be used to recover the 3D structure of the environment. Results of tests of an experimental array confirm these properties.
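Under the parallel-projection model described, a point at range Z viewed by a sensor array rotating at a known rate ω produces an image velocity proportional to Z, so range recovery reduces to one division. This is a sketch under those stated assumptions only; the function and parameter names are illustrative:

```python
def range_from_rotation(image_velocity, omega, magnification=1.0):
    """Recover object range from image velocity under a known rotation.

    Assumes the parallel-projection model in the text: a point at range Z
    observed by a paraxial sensor array rotating at omega (rad/s) produces
    an image velocity v = magnification * omega * Z, so Z = v / (m * omega).
    """
    return image_velocity / (magnification * omega)

# A point whose image moves at 1.0 m/s while the array rotates at 0.2 rad/s
# (unit magnification) lies 5 m away under this model.
z = range_from_rotation(image_velocity=1.0, omega=0.2)
```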
Development and analysis of a finite element model to simulate pulmonary emphysema in CT imaging.
Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo
2015-01-01
In CT imaging, pulmonary emphysema appears as lung regions with Low-Attenuation Areas (LAA). In this study we propose a finite element (FE) model of lung parenchyma, based on a 2-D grid of beam elements, which simulates smoking-related pulmonary emphysema in CT imaging. Simulated LAA images were generated through space sampling of the model output. We employed two measurements of emphysema extent: Relative Area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes. The model has been used to compare RA and D computed on the simulated LAA images with those computed on the model's output. Different mesh element sizes and various model parameters, simulating different physiological/pathological conditions, have been considered and analyzed. A proper mesh element size has been determined as the best trade-off between reliable results and reasonable computational cost. Both RA and D computed on simulated LAA images were underestimated with respect to those calculated on the model's output. Such underestimations were larger for RA (≈ −44% to −26%) than for D (≈ −16% to −2%). Our FE model could be useful for generating standard test images and for designing realistic physical phantoms of LAA images to assess the accuracy of descriptors for quantifying emphysema in CT imaging.
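The two emphysema indices can be sketched as follows: RA is the fraction of pixels below an attenuation threshold, and D is (minus) the slope of the log-log cumulative distribution of LAA cluster sizes. The −950 HU threshold and 4-connectivity below are common conventions, assumed here rather than taken from the paper:

```python
import numpy as np
from collections import deque

def laa_mask(hu_image, threshold=-950):
    """Low-Attenuation Areas: pixels below the HU threshold."""
    return hu_image < threshold

def relative_area(hu_image, threshold=-950):
    """RA: fraction of pixels classified as LAA."""
    m = laa_mask(hu_image, threshold)
    return m.sum() / m.size

def cluster_sizes(mask):
    """Sizes of 4-connected LAA clusters (tiny BFS labelling)."""
    seen = np.zeros(mask.shape, dtype=bool)
    sizes = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                q, size = deque([(i, j)]), 0
                seen[i, j] = True
                while q:
                    a, b = q.popleft()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < h and 0 <= nb < w and mask[na, nb] and not seen[na, nb]:
                            seen[na, nb] = True
                            q.append((na, nb))
                sizes.append(size)
    return sizes

def exponent_d(sizes):
    """D: minus the slope of the log-log cumulative cluster-size distribution."""
    s = np.sort(np.asarray(sizes, dtype=float))
    frac = 1.0 - np.arange(len(s)) / len(s)        # P(size >= s)
    slope, _ = np.polyfit(np.log(s), np.log(frac), 1)
    return -slope

# Toy image: healthy parenchyma at -800 HU with two LAA clusters.
hu = np.full((10, 10), -800.0)
hu[:2, :2] = -1000.0          # a 4-pixel LAA cluster
hu[5, 5] = -1000.0            # an isolated LAA pixel
d = exponent_d(cluster_sizes(laa_mask(hu)))
```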
A JPEG backward-compatible HDR image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2012-10-01
High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards for quality evaluation, file formats, and compression, as well as the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.
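As one concrete example of producing a viewable LDR base layer from HDR content, the simple global Reinhard operator maps luminance L to L/(1+L) after scaling to a target key. This is one of many tone-mapping operators, not necessarily among those the paper evaluated:

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Global Reinhard operator: scale to a target key, then L/(1+L).

    hdr: float array of linear luminance, arbitrary dynamic range.
    Returns values in [0, 1), suitable for 8-bit JPEG encoding.
    """
    log_avg = np.exp(np.mean(np.log(hdr + eps)))   # log-average luminance
    scaled = key * hdr / log_avg                   # map scene key to target
    return scaled / (1.0 + scaled)

# Synthetic HDR luminance spanning several orders of magnitude.
rng = np.random.default_rng(1)
hdr = np.exp(rng.normal(0.0, 3.0, size=(16, 16)))
ldr = reinhard_tonemap(hdr)
ldr8 = np.round(ldr * 255).astype(np.uint8)        # backward-compatible base layer
```

In a backward-compatible scheme, the tone-mapped 8-bit layer is stored as an ordinary JPEG, with residual data to reconstruct the HDR image carried separately.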
WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.
Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der
2017-01-01
The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which Passive Infrared triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
Dsm Based Orientation of Large Stereo Satellite Image Blocks
NASA Astrophysics Data System (ADS)
d'Angelo, P.; Reinartz, P.
2012-07-01
High resolution stereo satellite imagery is well suited to the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower resolution reference datasets (Landsat ETM+ Geocover and SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle block adjustment based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth; checks indicate a lateral error of 10 meters.
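At its core, the affine RPC correction amounts to estimating a 2-D affine transform, by least squares, between RPC-projected GCP coordinates and their observed image positions. A minimal sketch of that estimation step (the paper's actual correction is driven by DSM alignment and bundle adjustment, not shown here):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of image coordinates, e.g. RPC-projected GCPs
    and their observed positions. Returns a 2x3 matrix A such that
    dst ~= A @ [x, y, 1].
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])    # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)     # (3, 2) solution
    return A.T                                      # (2, 3)

def apply_affine(A, pts):
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T

# Synthetic check: recover a known small affine bias from 20 GCPs.
rng = np.random.default_rng(2)
src = rng.uniform(0, 1000, size=(20, 2))
A_true = np.array([[1.0001, 0.0002, 3.5], [-0.0001, 0.9999, -2.0]])
dst = apply_affine(A_true, src)
A_est = fit_affine(src, dst)
```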
Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos
2016-01-01
This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings from fiber optic distributed temperature sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, constructed by finite element simulation with a multi-physical model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure. PMID:27618040
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Geyuan
My research projects focus on the application of photonics, optics and microfabrication technology in energy-related fields. Photonic crystal fabrication research has the potential to help us generate and use light more efficiently. In order to fabricate active 3D woodpile photonic structure devices, a woodpile template is needed to enable the crystal growth process. We developed a silica woodpile template fabrication process based on a two-polymer transfer molding technique. A silica woodpile template is demonstrated to work at temperatures up to 900 °C. It provides a more economical way to explore making better 3D active woodpile photonic devices like 3D photonic light emitting diodes (LED). Optical research on solar cell testing has the potential to make our energy generation more efficient and greener. PL imaging and LBIC mapping are used to measure CdTe solar cells with different back contacts. A strong correlation between PL image defects and LBIC map defects is observed. This opens up a potential application for PL imaging in fast solar cell inspection. A 2D laser IV scan shows its usefulness for 2D parameter mapping. We show its ability to generate important information about local solar cell performance around PL image defects.
Redundancy of stereoscopic images: Experimental evaluation
NASA Astrophysics Data System (ADS)
Yaroslavsky, L. P.; Campos, J.; Espínola, M.; Ideses, I.
2005-12-01
With recent advancements in visualization devices, we are seeing a growing market for stereoscopic content. In order to convey 3D content by means of stereoscopic displays, one needs to transmit and display at least two points of view of the video content. This has profound implications for the resources required to transmit the content, as well as demands on the complexity of the visualization system. It is known that stereoscopic images are redundant, which may prove useful for compression and may have a positive effect on the construction of the visualization device. In this paper we describe an experimental evaluation of data redundancy in color stereoscopic images. In experiments with computer-generated and real-life test stereo images, several observers visually tested the stereopsis threshold and accuracy of parallax measurement in anaglyphs and stereograms as functions of the degree of blur of one of the two stereo images. In addition, we tested the color saturation threshold in one of the two stereo images for which full-color 3D perception with no visible color degradation was maintained. The experiments support a theoretical estimate that, to the data required to reproduce one of the two stereoscopic images, one has to add only a few percent of that amount of data in order to achieve stereoscopic perception.
Performance test and image correction of CMOS image sensor in radiation environment
NASA Astrophysics Data System (ADS)
Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang
2016-09-01
CMOS image sensors rival CCDs in domains that include strong radiation resistance and simple drive signals, so they are widely applied in high-energy radiation environments such as space optical imaging and video monitoring of nuclear power equipment. However, the silicon of CMOS image sensors suffers ionizing-dose effects under high-energy rays, degrading sensor indicators such as signal-to-noise ratio (SNR), non-uniformity (NU) and bad points (BP). The radiation environment for the test experiments was generated by a 60Co γ-ray source. A camera module based on the CMV2000 image sensor from CMOSIS Inc. was chosen as the research object. The dose rate used for the experiments was 20 krad/h. In the experiments, the output signals of the sensor pixels were measured at different total doses. Data analysis showed that with accumulating irradiation dose, the SNR of the image sensor decreased, the NU increased, and the number of BPs increased. Correcting these indicators was necessary, as they are the main factors affecting image quality. An image processing algorithm combining a local threshold method with NU correction based on non-local means (NLM) was applied to the experimental data. The results showed that the correction can effectively suppress BPs, improve the SNR, and reduce the NU.
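The local-threshold stage of the bad-point correction can be sketched as flagging pixels that deviate strongly from their neighbourhood median and replacing them with that median. This is a minimal illustration; the paper's full pipeline also includes the NLM-based non-uniformity correction, omitted here:

```python
import numpy as np

def correct_bad_pixels(img, k=6.0):
    """Flag pixels deviating from their 3x3 neighbourhood median by more
    than k times a robust local scale (median absolute deviation), and
    replace them with that median."""
    img = img.astype(float)
    p = np.pad(img, 1, mode="reflect")
    # Gather the 8 neighbours of every pixel as a (8, H, W) stack.
    neigh = np.stack([p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)])
    med = np.median(neigh, axis=0)
    mad = np.median(np.abs(neigh - med), axis=0) + 1e-6
    bad = np.abs(img - med) > k * mad
    out = img.copy()
    out[bad] = med[bad]
    return out, bad

# Toy frame: a uniform scene with one radiation-induced hot pixel.
frame = np.full((10, 10), 100.0)
frame[4, 4] = 255.0
out, bad = correct_bad_pixels(frame)
```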
NASA Astrophysics Data System (ADS)
Korendyke, Clarence M.; Vourlidas, Angelos; Plunkett, Simon P.; Howard, Russell A.; Wang, Dennis; Marshall, Cheryl J.; Waczynski, Augustyn; Janesick, James J.; Elliott, Thomas; Tun, Samuel; Tower, John; Grygon, Mark; Keller, David; Clifford, Gregory E.
2013-10-01
The Naval Research Laboratory is developing next generation CMOS imaging arrays for the Solar Orbiter and Solar Probe Plus missions. The device development is nearly complete, with flight device delivery scheduled for summer of 2013. The 4K × 4K mosaic array with 10 μm pixels is well suited to the panoramic imaging required for the Solar Orbiter mission. The devices are radiation tolerant (to 100 krad) and exhibit minimal performance degradation with respect to radiation. The device design and performance are described.
Virtual phantom magnetic resonance imaging (ViP MRI) on a clinical MRI platform.
Saint-Jalmes, Hervé; Bordelois, Alejandro; Gambarota, Giulio
2018-01-01
The purpose of this study was to implement Virtual Phantom Magnetic Resonance Imaging (ViP MRI), a technique that allows for generating reference signals in MR images using radiofrequency (RF) signals, on a clinical MR system, and to test newly designed virtual phantoms. MRI experiments were conducted on a 1.5 T MRI scanner. Electromagnetic modelling of the ViP system was done using the principle of reciprocity. The ViP RF signals were generated using a compact waveform generator (dimensions of 26 cm × 18 cm × 16 cm) connected to a homebuilt 25 mm-diameter RF coil. The ViP RF signals were transmitted to the MRI scanner bore simultaneously with the acquisition of the signal from the object of interest. Different types of MRI data acquisition (2D and 3D gradient-echo) as well as different phantoms, including the Shepp-Logan phantom, were tested. Furthermore, a uniquely designed virtual phantom in the shape of a grid was generated; this newly proposed phantom allows for the investigation of the vendor distortion correction field. High quality MR images of virtual phantoms were obtained. An excellent agreement was found between the experimental data and the inverse cube law, which was the expected functional dependence obtained from the electromagnetic modelling of the ViP system. Short-term time stability measurements yielded a coefficient of variation in the signal intensity over time equal to 0.23% and 0.13% for the virtual and physical phantom, respectively. MR images of the virtual grid-shaped phantom were reconstructed with the vendor distortion correction; this allowed a direct visualization of the vendor distortion correction field. Furthermore, as expected from the electromagnetic modelling of the ViP system, a very compact coil (diameter ~ cm) and very small currents (intensity ~ mA) were sufficient to generate a signal comparable to that of physical phantoms in MRI experiments. The ViP MRI technique was successfully implemented on a clinical MR system.
One of the major advantages of ViP MRI over previous approaches is that the generation and transmission of RF signals can be achieved with a self-contained apparatus. As such, the ViP MRI technique is transposable to different platforms (preclinical and clinical) of different vendors. It is also shown here that ViP MRI could be used to generate signals whose characteristics cannot be reproduced by physical objects. This could be exploited to assess MRI system properties, such as the vendor distortion correction field. © 2017 American Association of Physicists in Medicine.
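The inverse cube law agreement can be checked by fitting a power-law exponent to signal-versus-distance measurements on log-log axes. The data below are synthetic and for illustration only:

```python
import numpy as np

# Hypothetical coil distances (cm) and measured ViP signal amplitudes (a.u.).
r = np.array([10.0, 15.0, 20.0, 30.0, 40.0])
signal = 8.0e4 / r**3      # synthetic data obeying the inverse cube law

# Fit signal = c * r**p on log-log axes; reciprocity predicts p close to -3.
p, log_c = np.polyfit(np.log(r), np.log(signal), 1)
```

With real measurements, a fitted exponent near −3 supports the reciprocity-based model of the ViP coil coupling.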
Content Based Image Matching for Planetary Science
NASA Astrophysics Data System (ADS)
Deans, M. C.; Meyer, C.
2006-12-01
Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on the visual textures in images. For every image in a database, a series of filters computes the image response to localized frequencies and orientations. Filter responses are turned into a low-dimensional descriptor vector, generating a 37-dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours. Image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second database consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third database consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand-labeling results. User tests verify an approximately 20% false-positive rate for the top 14 results for MOC-NA and MER MI data, meaning that typically 10 to 12 of the 14 results match the query image sufficiently. This represents a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%.
Qualitatively, correct matches can also be confirmed by verifying MI images taken in the same z-stack, or MOC image tiles taken from the same image strip. False negatives are difficult to quantify as it would mean finding matches in the database of thousands of images that the algorithm did not detect.
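The fingerprint-and-match pipeline can be sketched as pooling oriented/frequency filter responses into a short normalized descriptor and ranking database entries by Euclidean distance. The 12-bin spectral descriptor below is an illustrative stand-in for the paper's 37-dimensional fingerprint:

```python
import numpy as np

def texture_fingerprint(img, n_orient=4, n_freq=3):
    """Pool oriented-frequency filter responses into a short descriptor.

    Illustrative stand-in for the paper's fingerprint: bin the image
    spectrum by orientation and radial frequency, keeping the mean
    log-energy per bin, then L2-normalize.
    """
    f = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    cy, cx = h / 2, w / 2
    radius = np.hypot(y - cy, x - cx)
    angle = np.arctan2(y - cy, x - cx) % np.pi
    desc = []
    for i in range(n_freq):
        for j in range(n_orient):
            band = ((radius >= i * h / (2 * n_freq)) &
                    (radius < (i + 1) * h / (2 * n_freq)) &
                    (angle >= j * np.pi / n_orient) &
                    (angle < (j + 1) * np.pi / n_orient))
            desc.append(np.log1p(f[band].mean()) if band.any() else 0.0)
    d = np.array(desc)
    return d / (np.linalg.norm(d) + 1e-12)

def best_match(query, database):
    """Index of the database fingerprint nearest to the query."""
    dists = [np.linalg.norm(query - d) for d in database]
    return int(np.argmin(dists))

# Toy query: shifted vertical stripes should match the vertical-stripe entry.
yy, xx = np.mgrid[:32, :32]
vert = np.sin(2 * np.pi * xx / 8.0)
horiz = np.sin(2 * np.pi * yy / 8.0)
db = [texture_fingerprint(vert), texture_fingerprint(horiz)]
query = texture_fingerprint(np.sin(2 * np.pi * (xx + 3) / 8.0))
idx = best_match(query, db)
```

Because only the compact fingerprints are compared at query time, thousands of images can be ranked in seconds.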
MUSIC electromagnetic imaging with enhanced resolution for small inclusions
NASA Astrophysics Data System (ADS)
Chen, Xudong; Zhong, Yu
2009-01-01
This paper investigates the influence of the test dipole on the resolution of the multiple signal classification (MUSIC) imaging method applied to the electromagnetic inverse scattering problem of determining the locations of a collection of small objects embedded in a known background medium. Based on the analysis of the induced electric dipoles in eigenstates, an algorithm is proposed to determine the test dipole that generates a pseudo-spectrum with enhanced resolution. The amplitudes in three directions of the optimal test dipole are not necessarily in phase, i.e., the optimal test dipole may not correspond to a physical direction in the real three-dimensional space. In addition, the proposed test-dipole-searching algorithm is able to deal with some special scenarios, due to the shapes and materials of objects, to which the standard MUSIC does not apply.
A new MUSIC electromagnetic imaging method with enhanced resolution for small inclusions
NASA Astrophysics Data System (ADS)
Zhong, Yu; Chen, Xudong
2008-11-01
This paper investigates the influence of the test dipole on the resolution of the multiple signal classification (MUSIC) imaging method applied to the electromagnetic inverse scattering problem of determining the locations of a collection of small objects embedded in a known background medium. Based on the analysis of the induced electric dipoles in eigenstates, an algorithm is proposed to determine the test dipole that generates a pseudo-spectrum with enhanced resolution. The amplitudes in three directions of the optimal test dipole are not necessarily in phase, i.e., the optimal test dipole may not correspond to a physical direction in the real three-dimensional space. In addition, the proposed test-dipole-searching algorithm is able to deal with some special scenarios, due to the shapes and materials of objects, to which the standard MUSIC does not apply.
Utilization of a multimedia PACS workstation for surgical planning of epilepsy
NASA Astrophysics Data System (ADS)
Soo Hoo, Kent; Wong, Stephen T.; Hawkins, Randall A.; Knowlton, Robert C.; Laxer, Kenneth D.; Rowley, Howard A.
1997-05-01
Surgical treatment of temporal lobe epilepsy requires the localization of the epileptogenic zone for surgical resection. Currently, clinicians utilize electroencephalography, various neuroimaging modalities, and psychological tests together to determine the location of this zone. We investigate how a multimedia neuroimaging workstation built on top of the UCSF Picture Archiving and Communication System can be used to aid surgical planning of epilepsy and related brain diseases. This usage demonstrates the ability of the workstation to retrieve image and textual data from PACS and other image sources, register multimodality images, visualize and render 3D data sets, analyze images, generate new image and text data from the analysis, and organize all data in a relational database management system.
NASA Technical Reports Server (NTRS)
Fisher, Kevin; Chang, Chein-I
2009-01-01
Progressive band selection (PBS) reduces spectral redundancy without significant loss of information, thereby reducing hyperspectral image data volume and processing time. Used onboard a spacecraft, it can also reduce image downlink time. PBS prioritizes an image's spectral bands according to priority scores that measure their significance to a specific application. Then it uses one of three methods to select an appropriate number of the most useful bands. Key challenges for PBS include selecting an appropriate criterion to generate band priority scores, and determining how many bands should be retained in the reduced image. The image's Virtual Dimensionality (VD), once computed, is a reasonable estimate of the latter. We describe the major design details of PBS and test it in a land classification experiment.
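The core of the scheme (score each band, rank, keep the top few) can be sketched as follows. The paper derives application-specific priority scores and estimates the number of retained bands from the image's VD; here band variance is a hypothetical stand-in score and the band count is passed in directly.

```python
import numpy as np

def progressive_band_selection(cube, n_keep):
    """Rank spectral bands by a priority score and keep the top n_keep.

    `cube` has shape (rows, cols, bands). Band variance is used as a
    stand-in priority score; the published method uses application-specific
    criteria and sets n_keep from the virtual dimensionality.
    """
    bands = cube.reshape(-1, cube.shape[2])   # pixels x bands
    scores = bands.var(axis=0)                # priority score per band
    order = np.argsort(scores)[::-1]          # highest priority first
    keep = np.sort(order[:n_keep])            # retain original spectral order
    return cube[:, :, keep], keep
```

Because scoring is per band, the ranking can be computed onboard in one pass over the cube before downlink.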
A data grid for imaging-based clinical trials
NASA Astrophysics Data System (ADS)
Zhou, Zheng; Chao, Sander S.; Lee, Jasper; Liu, Brent; Documet, Jorge; Huang, H. K.
2007-03-01
Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become an important tool in clinical trials because images provide a unique and fast diagnosis with visual observation and quantitative assessment. A typical imaging-based clinical trial consists of: 1) a well-defined, rigorous clinical trial protocol; 2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and distributing data and analysis results; and 3) many field sites that generate and send image studies to the radiology core. As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a server robust enough to administrate and quickly distribute information to participating radiologists/clinicians worldwide. The Data Grid can satisfy the aforementioned requirements of imaging-based clinical trials. In this paper, we present a Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate performance in storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of the Data Grid are presented.
Measuring droplet size distributions from overlapping interferometric particle images.
Bocanegra Evans, Humberto; Dam, Nico; van der Voort, Dennis; Bertens, Guus; van de Water, Willem
2015-02-01
Interferometric particle imaging provides a simple way to measure the probability density function (PDF) of droplet sizes from out-focus images. The optical setup is straightforward, but the interpretation of the data is a problem when particle images overlap. We propose a new way to analyze the images. The emphasis is not on a precise identification of droplets, but on obtaining a good estimate of the PDF of droplet sizes in the case of overlapping particle images. The algorithm is tested using synthetic and experimental data. We next use these methods to measure the PDF of droplet sizes produced by spinning disk aerosol generators. The mean primary droplet diameter agrees with predictions from the literature, but we find a broad distribution of satellite droplet sizes.
ERIC Educational Resources Information Center
Besken, Miri
2016-01-01
The perceptual fluency hypothesis claims that items that are easy to perceive at encoding induce an illusion that they will be easier to remember, despite the finding that perception does not generally affect recall. The current set of studies tested the predictions of the perceptual fluency hypothesis with a picture generation manipulation.…
Using Cross Correlation for Evaluating Shape Models of Asteroids
NASA Astrophysics Data System (ADS)
Palmer, Eric; Weirich, John; Barnouin, Olivier; Campbell, Tanner; Lambert, Diane
2017-10-01
The Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer (OSIRIS-REx) sample return mission to Bennu will be using optical navigation during its proximity operations. Optical navigation is heavily dependent upon having an accurate shape model to calculate the spacecraft's position and pointing. In support of this, we have conducted extensive testing of the accuracy and precision of shape models. OSIRIS-REx will be using the shape models generated by stereophotoclinometry (SPC; Gaskell, 2008). The most typical technique to evaluate models is to subtract two shape models and produce the differences in the height of each node between the two models. During flight, absolute accuracy cannot be determined; however, our testing allowed us to characterize both systematic and non-systematic errors. We have demonstrated that SPC provides an accurate and reproducible shape model (Weirich, et al., 2017), but also that shape model subtraction only tells part of the story. Our advanced shape model evaluation uses normalized cross-correlation to show a different aspect of the quality of the shape model. In this method, we generate synthetic images using the shape model and calculate their cross-correlation with images of the truth asteroid. This technique tests not only the shape model's representation of the topographic features (size, shape, depth and relative position), but also its estimate of the surface's albedo. This albedo can be used to determine both the Bond and geometric albedo of the surface (Palmer, et al., 2014). A high correlation score between the model's synthetic images and the truth images shows that the local topography and albedo have been well represented over the length scale of the image. A global evaluation, such as of global shape and size, is best performed by shape model subtraction.
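The evaluation metric itself is ordinary normalized cross-correlation between a rendered synthetic image and the corresponding truth image. A minimal zero-lag version (the rendering step is not shown) is:

```python
import numpy as np

def ncc(a, b):
    """Zero-lag normalized cross-correlation of two equal-size images.

    A score near 1 means the synthetic image reproduces the truth image's
    intensity pattern up to gain and offset, which is what the shape-model
    evaluation scores.
    """
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because the score is invariant to gain and offset, it isolates how well topography and albedo are modeled from overall exposure differences between the rendered and real images.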
Quantitative analysis of tympanic membrane perforation: a simple and reliable method.
Ibekwe, T S; Adeosun, A A; Nwaorgu, O G
2009-01-01
Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: P/T x 100 per cent = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area, obtained independently from assessments by two trained otologists of comparable years of experience using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for the two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
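The reported equation is a direct pixel-area ratio. As a one-line sketch (the function name is ours; the formula is the paper's):

```python
def percentage_perforation(perf_area_px, total_area_px):
    """Percentage perforation = P / T x 100, with P the perforation area and
    T the area of the entire tympanic membrane (both in pixels², e.g. as
    measured with ImageJ)."""
    if total_area_px <= 0:
        raise ValueError("total membrane area must be positive")
    return 100.0 * perf_area_px / total_area_px
```

Because both areas come from the same calibrated image, the ratio is independent of magnification and pixel scale.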
Tai, Meng Wei; Chong, Zhen Feng; Asif, Muhammad Khan; Rahmat, Rabiah A; Nambiar, Phrabhakaran
2016-09-01
The aim of this study was to compare the suitability and precision of xerographic and computer-assisted methods for bite mark investigations. Eleven subjects were asked to bite on their forearm and the bite marks were photographically recorded. Alginate impressions of the subjects' dentition were taken and their casts were made using dental stone. The overlays generated by the xerographic method were obtained by photocopying the subjects' casts and then transferring the incisal edge outlines onto a transparent sheet. The bite mark images were imported into Adobe Photoshop® software and printed to life-size. The bite mark analyses using xerographically generated overlays were done by manually comparing an overlay to the corresponding printed bite mark images. In the computer-assisted method, the subjects' casts were scanned into Adobe Photoshop®. The bite mark analyses using computer-assisted overlay generation were done by digitally matching an overlay to the corresponding bite mark images using Adobe Photoshop®. Another comparison method was superimposing the cast images on the corresponding bite mark images employing Adobe Photoshop® CS6 and GIF-Animator©. A score with a range of 0-3 was given during analysis to each precision-determining criterion, with the score increasing with better matching. The Kruskal-Wallis H test showed a significant difference between the three sets of data (H=18.761, p<0.05). In conclusion, bite mark analysis using the computer-assisted animated-superimposition method was the most accurate, followed by the computer-assisted overlay generation and lastly the xerographic method. The superior precision contributed by the digital method is discernible despite the human skin being a poor recording medium of bite marks. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Space debris detection in optical image sequences.
Xi, Jiangbo; Wen, Desheng; Ersoy, Okan K; Yi, Hongwei; Yao, Dalei; Song, Zongxi; Xi, Shaobo
2016-10-01
We present a high-accuracy, low false-alarm rate, and low computational-cost methodology for removing stars and noise and detecting space debris with low signal-to-noise ratio (SNR) in optical image sequences. First, time-index filtering and bright star intensity enhancement are implemented to remove stars and noise effectively. Then, a multistage quasi-hypothesis-testing method is proposed to detect the pieces of space debris with continuous and discontinuous trajectories. For this purpose, a time-index image is defined and generated. Experimental results show that the proposed method can detect space debris effectively without any false alarms. When the SNR is higher than or equal to 1.5, the detection probability can reach 100%, and when the SNR is as low as 1.3, 1.2, and 1, it can still achieve 99%, 97%, and 85% detection probabilities, respectively. Additionally, two large sets of image sequences are tested to show that the proposed method performs stably and effectively.
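The star-removal step exploits the fact that stars are (nearly) static across the sequence while debris moves. The paper's specific time-index filtering and time-index image are not reproduced here; the sketch below uses a generic per-pixel temporal median subtraction as a stand-in to illustrate the principle:

```python
import numpy as np

def remove_static_stars(frames):
    """Suppress static stars and background by subtracting the per-pixel
    temporal median of the sequence.

    Stars occupy the same pixels in every frame, so they cancel; moving
    debris survives as a positive residual. This is a generic stand-in for
    the paper's time-index filtering step, not its exact algorithm.
    """
    frames = np.asarray(frames, dtype=float)
    return frames - np.median(frames, axis=0)
```

The residual frames can then be thresholded and linked across time to recover continuous or broken debris trajectories.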
Delamination Detection Using Guided Wave Phased Arrays
NASA Technical Reports Server (NTRS)
Tian, Zhenhua; Yu, Lingyu; Leckey, Cara
2016-01-01
This paper presents a method for detecting multiple delaminations in composite laminates using non-contact phased arrays. The phased arrays are implemented with a non-contact scanning laser Doppler vibrometer (SLDV). The array imaging algorithm is performed in the frequency domain where both the guided wave dispersion effect and direction dependent wave properties are considered. By using the non-contact SLDV array with a frequency domain imaging algorithm, an intensity image of the composite plate can be generated for delamination detection. For the proof of concept, a laboratory test is performed using a non-contact phased array to detect two delaminations (created through quasi-static impact test) at different locations in a composite plate. Using the non-contact phased array and frequency domain imaging, the two impact-induced delaminations are successfully detected. This study shows that the non-contact phased array method is a potentially effective method for rapid delamination inspection in large composite structures.
Physical basis of tap test as a quantitative imaging tool for composite structures on aircraft
NASA Astrophysics Data System (ADS)
Hsu, David K.; Barnard, Daniel J.; Peters, John J.; Dayal, Vinay
2000-05-01
The tap test is a simple but effective way of finding flaws in composite and honeycomb sandwich structures; it has been practiced in aircraft inspection for decades. The mechanics of the tap test were extensively researched by P. Cawley et al., and several versions of the instrumented tap test have emerged in recent years. This paper describes a quantitative study of the impact duration as a function of the mass, radius, velocity, and material property of the impactor. The impact response is compared to the predictions of Hertzian-type contact theory and a simple spring model. The electronically measured impact duration, τ, is used for generating images of the tapped region. Using the spring model, the images are converted into images of a spring constant, k, which is a measure of the local contact stiffness. The images of k, largely independent of tapper mass and impact velocity, reveal the size, shape and severity (cf. percent stiffness reduction) of defects and damage, as well as the presence of substructures and the associated stiffness increase. The studies are carried out on a variety of real aircraft components and the results serve to guide the development of a fieldable tap test imaging system for aircraft inspection.—This material is based upon work supported by the Federal Aviation Administration under Contract #DTFA03-98-D-00008, Delivery Order No. IA016 and performed at Iowa State University's Center for NDE as part of the Center for Aviation Systems Reliability program.
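In the simple spring model referred to above, the tapper in contact with the surface behaves as a mass on a spring, so the measured impact duration τ maps directly to a local contact stiffness k. Assuming contact lasts half an oscillation period (a standard reading of the spring model, not a formula quoted from the paper):

```python
import math

def contact_stiffness(mass_kg, tau_s):
    """Local contact stiffness from the measured impact duration.

    Mass-on-spring contact lasts half an oscillation period:
    tau = pi * sqrt(m / k), hence k = m * (pi / tau)**2.
    """
    return mass_kg * (math.pi / tau_s) ** 2
```

For example, a 20 g tapper with τ = 500 μs gives k ≈ 7.9 × 10⁵ N/m; a softer (damaged) region lengthens τ and lowers k, which is what the k-images visualize.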
García Arroyo, Jose Luis; García Zapirain, Begoña
2014-01-01
By means of this study, a detection algorithm for the "pigment network" in dermoscopic images is presented, one of the most relevant indicators in the diagnosis of melanoma. The design of the algorithm consists of two blocks. In the first one, a machine learning process is carried out, allowing the generation of a set of rules which, when applied over the image, permit the construction of a mask with the pixels candidates to be part of the pigment network. In the second block, an analysis of the structures over this mask is carried out, searching for those corresponding to the pigment network and making the diagnosis, whether it has pigment network or not, and also generating the mask corresponding to this pattern, if any. The method was tested against a database of 220 images, obtaining 86% sensitivity and 81.67% specificity, which proves the reliability of the algorithm. © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
Chocolate smells pink and stripy: Exploring olfactory-visual synesthesia
Russell, Alex; Stevenson, Richard J.; Rich, Anina N.
2015-01-01
Odors are often difficult to identify, and can be perceived either via the nose or mouth (“flavor”; not usually perceived as a “smell”). These features provide a unique opportunity to contrast conceptual and perceptual accounts of synesthesia. We presented six olfactory-visual synesthetes with a range of odorants. They tried to identify each smell, evaluate its attributes and illustrate their elicited visual experience. Judges rated the similarity of each synesthete's illustrations over time (test-retest reliability). Synesthetic images were most similar for the same odor named consistently, but even inconsistently named same odors generated more similar images than different odors. This was driven by hedonic similarity. Odors presented as flavors only resulted in similar images when consistently named. Thus, the primary factor in generating a reliable synesthetic image is the name, with some influence of odor hedonics. Hedonics are a basic form of semantic knowledge, making this consistent with a conceptual basis for synesthetic links. PMID:25895152
Applications of phase-contrast x-ray imaging to medicine using an x-ray interferometer
NASA Astrophysics Data System (ADS)
Momose, Atsushi; Yoneyama, Akio; Takeda, Tohoru; Itai, Yuji; Tu, Jinhong; Hirano, Keiichi
1999-10-01
We are investigating possible medical applications of phase-contrast X-ray imaging using an X-ray interferometer. This paper introduces the strategy of the research project and its present status. The main subject is to broaden the observation area to enable in vivo observation. For this purpose, large X-ray interferometers were developed, and 2.5 cm × 1.5 cm interference patterns were generated using synchrotron X-rays. An improvement of the spatial resolution is also included in the project, and an X-ray interferometer designed for high-resolution phase-contrast X-ray imaging was fabricated and tested. In parallel with the instrumental developments, various soft tissues are observed by phase-contrast X-ray CT to find correspondence between the generated contrast and our histological knowledge. The observations done so far suggest that cancerous tissues are differentiated from normal tissues and that blood can produce phase contrast. Furthermore, this project includes exploring materials that modulate phase contrast for selective imaging.
NASA Astrophysics Data System (ADS)
Trinks, I.; Wallner, M.; Kucera, M.; Verhoeven, G.; Torrejón Valdelomar, J.; Löcker, K.; Nau, E.; Sevara, C.; Aldrian, L.; Neubauer, E.; Klein, M.
2017-02-01
The excavated architecture of the exceptional prehistoric site of Akrotiri on the Greek island of Thera/Santorini is endangered by gradual decay, damage due to accidents, and seismic shocks, being located on an active volcano in an earthquake-prone area. Therefore, in 2013 and 2014 a digital documentation project was conducted with the support of the National Geographic Society in order to generate a detailed digital model of Akrotiri's architecture using terrestrial laser scanning and image-based modeling. Additionally, non-invasive geophysical prospection was tested in order to investigate its potential to explore and map as yet buried archaeological remains. This article describes the project and the generated results.
Random bits, true and unbiased, from atmospheric turbulence
Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo
2014-01-01
Random numbers represent a fundamental ingredient for secure communications and numerical simulation, as well as for games and for Information Science in general. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. The optical propagation in strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extracting algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499
NASA Astrophysics Data System (ADS)
Manickam, Kavitha; Machireddy, Ramasubba Reddy; Raghavan, Bagyam
2016-04-01
It has been observed that many pathological processes increase the elastic modulus of soft tissue compared to normal. In order to image tissue stiffness using ultrasound, a mechanical compression is applied to the tissues of interest and the local tissue deformation is measured. Based on the mechanical excitation, ultrasound stiffness imaging methods are classified as compression or strain imaging, which is based on external compression, and Acoustic Radiation Force Impulse (ARFI) imaging, which is based on the force generated by focused ultrasound. When ultrasound is focused on tissue, a shear wave is generated in the lateral direction and the shear wave velocity is proportional to the stiffness of the tissue. The work presented in this paper investigates strain elastography and ARFI imaging in clinical cancer diagnostics using real patient data. Ultrasound B-mode imaging, strain imaging, ARFI displacement and ARFI shear wave velocity imaging were conducted on 50 patients (31 benign and 23 malignant categories) using a Siemens S2000 machine. True modulus contrast values were calculated from the measured shear wave velocities. For ultrasound B-mode, ARFI displacement imaging and strain imaging, the observed image contrast and Contrast to Noise Ratio were calculated for benign and malignant cancers. Observed contrast values were compared against the true modulus contrast values calculated from shear wave velocity imaging. In addition, Student's unpaired t-test was conducted for all four techniques and box plots are presented. Results show that strain imaging is better for malignant cancers, whereas ARFI imaging is superior to strain imaging and B-mode for representing benign lesions.
Ion source and beam guiding studies for an API neutron generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sy, A.; Ji, Q.; Persaud, A.
2013-04-19
Recently developed neutron imaging methods require high neutron yields for fast imaging times and small beam widths for good imaging resolution. For ion sources with low current density to be viable for these types of imaging methods, large extraction apertures and beam focusing must be used. We present recent work on the optimization of a Penning-type ion source for neutron generator applications. Two multi-cusp magnet configurations have been tested and are shown to increase the extracted ion current density over operation without multi-cusp magnetic fields. The use of multi-cusp magnetic confinement and gold electrode surfaces has resulted in increased ion current density, up to 2.2 mA/cm². Passive beam focusing using tapered dielectric capillaries has been explored due to its potential for beam compression without the cost and complexity issues associated with active focusing elements. Initial results from first experiments indicate the possibility of beam compression. Further work is required to evaluate the viability of such focusing methods for associated particle imaging (API) systems.
Identification of novel loci for the generation of reporter mice
Rebecchi, Monica; Levandis, Giovanna
2017-01-01
Abstract Deciphering the etiology of complex pathologies at the molecular level requires longitudinal studies encompassing multiple biochemical pathways (apoptosis, proliferation, inflammation, oxidative stress). In vivo imaging of current reporter animals has enabled the spatio-temporal analysis of specific molecular events; however, the lack of a multiplicity of loci for the generalized and regulated expression of integrated transgenes hampers the creation of systems for the simultaneous analysis of more than one biochemical pathway at a time. We here developed and tested an in vivo-based methodology for the identification of multiple insertional loci suitable for the generation of reliable reporter mice. The validity of the methodology was tested with the generation of novel mice useful for reporting on inflammation and oxidative stress. PMID:27899606
Image analysis to evaluate the browning degree of banana (Musa spp.) peel.
Cho, Jeong-Seok; Lee, Hyeon-Jeong; Park, Jung-Hoon; Sung, Jun-Hyung; Choi, Ji-Young; Moon, Kwang-Deog
2016-03-01
Image analysis was applied to examine banana peel browning. The banana samples were divided into 3 treatment groups: no treatment and normal packaging (Cont); CO2 gas exchange packaging (CO); and normal packaging with an ethylene generator (ET). We confirmed that the browning of banana peels developed more quickly in the CO group than in the other groups, based on the sensory test and enzyme assay. The G (green) and CIE L(∗), a(∗), and b(∗) values obtained from the image analysis sharply increased or decreased in the CO group, and these colour values showed high correlation coefficients (>0.9) with the sensory test results. CIE L(∗)a(∗)b(∗) values measured with a colorimeter also showed high correlation coefficients, but comparatively lower than those of image analysis. Based on this analysis, browning of the banana occurred more quickly with CO2 gas exchange packaging, and image analysis can be used to evaluate the browning of banana peels. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bosnjak, J; Ciraj-Bjelac, O; Strbac, B
2008-01-01
Application of a quality control (QC) programme is very important when optimisation of image quality and reduction of patient exposure are desired. QC surveys of diagnostic imaging equipment in the Republic of Srpska (an entity of Bosnia and Herzegovina) have been systematically performed since 2001. The presented results are mostly related to the QC test results of X-ray tubes and generators for diagnostic radiology units in 92 radiology departments. In addition, the results include workplace monitoring and the usage of personal protective devices for staff and patients. The presented results showed improvements in the implementation of the QC programme within the period 2001-2005. Also, more attention is now given to appropriate maintenance of imaging equipment, which was one of the main problems in the past. Implementation of a QC programme is a continuous and complex process. To achieve good performance of imaging equipment, additional tests are to be introduced, along with image quality assessment and patient dosimetry. Training is very important in order to achieve these goals.
Wideband radar for airborne minefield detection
NASA Astrophysics Data System (ADS)
Clark, William W.; Burns, Brian; Dorff, Gary; Plasky, Brian; Moussally, George; Soumekh, Mehrdad
2006-05-01
Ground Penetrating Radar (GPR) has been applied for several years to the problem of detecting both antipersonnel and anti-tank landmines. RDECOM CERDEC NVESD is developing an airborne wideband GPR sensor for the detection of minefields including surface and buried mines. In this paper, we describe the as-built system, data and image processing techniques to generate imagery, and current issues with this type of radar. Further, we will display images from a recent field test.
Wave Propagation and Inversion in Shallow Water and Poro-elastic Sediment
1997-09-30
water and high freq. acoustics. LONG-TERM GOALS: To create codes that accurately model wave propagation and scattering in shallow water, and to quantify... is undergoing testing for the acoustic stratified Green’s function. We have adapted code generated by J. Schuster in Geophysics for the FDTD model... inversions and modelling, and have repercussions in environmental imaging [5], acoustic imaging [1,4,5,6,7] and early breast cancer diagnosis.
Tan, S; Hu, A; Wilson, T; Ladak, H; Haase, P; Fung, K
2012-04-01
(1) To investigate the efficacy of a computer-generated three-dimensional laryngeal model for laryngeal anatomy teaching; (2) to explore the relationship between students' spatial ability and acquisition of anatomical knowledge; and (3) to assess participants' opinion of the computerised model. Forty junior doctors were randomised to undertake laryngeal anatomy study supplemented by either a three-dimensional computer model or two-dimensional images. Outcome measurements comprised a laryngeal anatomy test, the modified Vandenberg and Kuse mental rotation test, and an opinion survey. Mean scores ± standard deviations for the anatomy test were 15.7 ± 2.0 for the 'three dimensions' group and 15.5 ± 2.3 for the 'standard' group (p = 0.7222). Pearson's correlation between the rotation test scores and the scores for the spatial ability questions in the anatomy test was 0.4791 (p = 0.086, n = 29). Opinion survey answers revealed significant differences in respondents' perceptions of the clarity and 'user friendliness' of, and their preferences for, the three-dimensional model as regards anatomical study. The three-dimensional computer model was equivalent to standard two-dimensional images, for the purpose of laryngeal anatomy teaching. There was no association between students' spatial ability and functional anatomy learning. However, students preferred to use the three-dimensional model.
Discrimination of malignant lymphomas and leukemia using Radon transform based-higher order spectra
NASA Astrophysics Data System (ADS)
Luo, Yi; Celenk, Mehmet; Bejai, Prashanth
2006-03-01
A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes morphological watersheds to obtain the boundaries of cells from cell images and isolate them from the surrounding background. The areas of the cells are extracted from the cell images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are utilized as image processing tools to generate class feature vectors for the different cell types and to extract the testing cells' feature vectors. The testing cells' feature vectors are then compared with the known class feature vectors for a possible match by computing the Euclidean distances. The cell in question is classified as belonging to one of the existing cell classes in the least-Euclidean-distance sense.
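The final classification step (matching a test cell's feature vector to the stored class vectors by least Euclidean distance) is a nearest-class-vector rule. The feature extraction itself (Radon transform, HOS) is not shown, and the class data below are hypothetical:

```python
import numpy as np

def classify(feature, class_vectors, labels):
    """Assign a test feature vector to the class whose stored feature
    vector is nearest in the Euclidean sense (the algorithm's final step)."""
    d = np.linalg.norm(class_vectors - feature, axis=1)
    return labels[int(np.argmin(d))]
```

In the described pipeline, `class_vectors` would hold one Radon/HOS feature vector per known cell class, built from labeled training cells.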
Iterative electromagnetic Born inversion applied to earth conductivity imaging
NASA Astrophysics Data System (ADS)
Alumbaugh, D. L.
1993-08-01
This thesis investigates the use of a fast imaging technique to deduce the spatial conductivity distribution in the earth from low frequency (less than 1 MHz), cross well electromagnetic (EM) measurements. The theory embodied in this work is the extension of previous strategies and is based on the Born series approximation to solve both the forward and inverse problem. Nonlinear integral equations are employed to derive the series expansion which accounts for the scattered magnetic fields that are generated by inhomogeneities embedded in either a homogeneous or a layered earth. A sinusoidally oscillating, vertically oriented magnetic dipole is employed as a source, and it is assumed that the scattering bodies are azimuthally symmetric about the source dipole axis. The use of this model geometry reduces the 3-D vector problem to a more manageable 2-D scalar form. The validity of the cross well EM method is tested by applying the imaging scheme to two sets of field data. Images of the data collected at the Devine, Texas test site show excellent correlation with the well logs. Unfortunately there is a drift error present in the data that limits the accuracy of the results. A more complete set of data collected at the Richmond field station in Richmond, California demonstrates that cross well EM can be successfully employed to monitor the position of an injected mass of salt water. Both the data and the resulting images clearly indicate that the plume migrates toward the north-northwest. The plausibility of these conclusions is verified by applying the imaging code to synthetic data generated by a 3-D sheet model.
Dengg, S; Kneissl, S
2013-01-01
Ferromagnetic material in microchips used for animal identification causes local signal increase, signal void or distortion (susceptibility artifacts) on MR images. To measure the impact of microchip geometry on artifact size, an MRI phantom study was performed. Microchips of the brands Datamars®, Euro-I.D.® and Planet-ID® (n = 15) were placed consecutively in a phantom and examined according to ASTM Standard Test Method F2119-07 using spin echo (TR 500 ms, TE 20 ms) and gradient echo (TR 300 ms, TE 15 ms, flip angle 30°) sequences with otherwise constant imaging parameters (slice thickness 3 mm, field of view 250 x 250 mm, acquisition matrix 256 x 256 pixels, bandwidth 32 kHz) at 1.5 Tesla. Images were acquired with the microchip positioned in the x- and z-direction, in each case with the phase-encoding direction in the y- and z-direction. Artifact size was determined by a) measurement according to test method F2119-07 using a homogeneous point operation, b) signal intensity measurement according to Matsuura et al., and c) pixel counts within the artifact according to Port and Pomper. There was a significant difference in artifact size between the three microchips tested (Wilcoxon p = 0.032). A two- to three-fold increase in microchip volume generated an up to 76% larger artifact, depending on the sequence type, phase-encoding direction and chip position relative to B0. The smaller the microchip, the smaller the susceptibility artifact. Spin echo (SE) sequences generated smaller artifacts than gradient echo (GE) sequences. Switching the phase-encoding direction had less influence on artifact size in GE than in SE sequences, although the artifact shape and direction in SE sequences can be changed by altering the phase-encoding direction. Artifact size caused by the microchip plays a major clinical role in the evaluation of MRI of the head, shoulder and neck regions.
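The pixel-count style of artifact measurement can be illustrated on a toy array. The 30% deviation criterion below is the one commonly quoted for ASTM F2119 and is used here as an assumption, and the images are synthetic.

```python
import numpy as np

def artifact_pixels(img, ref, threshold=0.30):
    """Count pixels deviating from an implant-free reference image by
    more than `threshold` (the ~30% criterion usually quoted for
    ASTM F2119 -- an assumption here, not taken from the paper)."""
    diff = np.abs(img.astype(float) - ref.astype(float))
    return int((diff > threshold * ref).sum())

ref = np.full((4, 4), 100.0)       # synthetic artifact-free reference
img = ref.copy()
img[0, :2] = 150.0                 # two pixels of signal increase (+50%)
img[1, 0] = 60.0                   # one pixel of signal void (-40%)
print(artifact_pixels(img, ref))
```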
Haptic augmented skin surface generation toward telepalpation from a mobile skin image.
Kim, K
2018-05-01
Little is known about methods for integrating palpation into existing mobile tele-skin-imaging, which delivers only low-quality tactile information (roughness) for telepalpation, and no study has yet reported telehaptic palpation using mobile phone images for teledermatology or skincare teleconsultations. This study therefore introduces a new algorithm that accurately reconstructs a haptically augmented skin surface for telehaptic palpation using a low-cost clip-on microscope attached to a mobile phone. Multiple algorithms, including gradient-based image enhancement, roughness-adaptive tactile mask generation, roughness-enhanced 3D tactile map building, and visual and haptic rendering with a three-degrees-of-freedom (DOF) haptic device, were developed and integrated into one system. Evaluation experiments tested the performance of 3D roughness reconstruction with and without the tactile mask. The results confirm that haptic roughness reconstructed with the tactile mask is superior to that reconstructed without it. Additional experiments demonstrate that the proposed algorithm is robust against varying lighting conditions and blurring. Finally, a user study was designed to assess the effect of the haptic modality on the existing visual-only interface; the results attest that haptic skin palpation can significantly improve skin-exam performance. Mobile image-based telehaptic palpation technology was proposed and an initial version developed. The technology was tested with several skin images, and the experimental results showed the superiority of the proposed scheme in terms of haptic augmentation of real skin images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
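A roughness-adaptive tactile mask in this spirit might mark pixels whose local intensity variation exceeds a threshold; the windowed standard deviation used below is a hypothetical roughness proxy, not the paper's actual algorithm.

```python
import numpy as np

def tactile_mask(img, win=3, thresh=10.0):
    """Mark pixels where a crude roughness proxy (local standard
    deviation over a win x win window) exceeds `thresh`. A guessed
    stand-in for the paper's roughness-adaptive mask generation."""
    h, w = img.shape
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            mask[y, x] = p[y:y + win, x:x + win].std() > thresh
    return mask

smooth = np.zeros((6, 6))
rough = smooth.copy()
rough[2, 2] = 100.0                # a single spike of surface texture
print(tactile_mask(smooth).any(), tactile_mask(rough).any())
```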
A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System
Wu, Xiangjun; Li, Yang; Kurths, Jürgen
2015-01-01
Chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward: the plain-image is first divided into four sub-images, and then the positions of the pixels in the whole image are shuffled. A 280-bit external secret key is employed to generate the initial conditions and parameters of the two chaotic systems. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are carried out to test the security of the new image encryption algorithm, and the cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison with other image encryption schemes, the new algorithm has higher security and is fast enough for practical image encryption. Moreover, an extensive tolerance analysis of common image processing operations, such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. The corresponding results reveal that the proposed method has good robustness against these image processing operations and geometric attacks. PMID:25826602
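The permutation-diffusion structure itself is easy to sketch. The toy below drives both stages from a single logistic map rather than the paper's CML plus fractional-order system, and works on a flat pixel array; it only illustrates the architecture, not the proposed cipher.

```python
import numpy as np

def logistic_seq(x0, n, r=3.99):
    """Chaotic keystream from the logistic map (a stand-in for the
    paper's CML / fractional-order chaotic system)."""
    xs = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1.0 - x0)
        xs[i] = x0
    return xs

def encrypt(pixels, key=0.3141592653):
    chaos = logistic_seq(key, pixels.size)
    perm = np.argsort(chaos)                 # permutation stage
    stream = (chaos * 256).astype(np.uint8)  # diffusion keystream
    return pixels[perm] ^ stream

def decrypt(cipher, key=0.3141592653):
    chaos = logistic_seq(key, cipher.size)
    perm = np.argsort(chaos)
    stream = (chaos * 256).astype(np.uint8)
    pixels = np.empty_like(cipher)
    pixels[perm] = cipher ^ stream           # undo diffusion, then permutation
    return pixels

img = np.arange(16, dtype=np.uint8)          # toy "image" as a flat array
cipher = encrypt(img)
print(np.array_equal(decrypt(cipher), img))
```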
Research on sparse feature matching of improved RANSAC algorithm
NASA Astrophysics Data System (ADS)
Kong, Xiangsi; Zhao, Xian
2018-04-01
In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve matching precision and speed. First, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. Finally, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three respects: instead of the homography matrix, the fundamental matrix generated by the 8-point algorithm is used as the model; samples are selected by a random block selection method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces computation and improves matching speed.
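The model-fitting step inside such a RANSAC loop can be sketched with the classic normalized 8-point algorithm; the synthetic cameras and points below are illustrative, and the RANSAC/SPRT wrapper itself is omitted.

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: centroid to origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(pts1, pts2):
    """Fundamental matrix from >= 8 correspondences (normalized 8-point),
    the model fitted inside the modified RANSAC instead of a homography."""
    x1, T1 = normalize(pts1)
    x2, T2 = normalize(pts2)
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                  # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1                         # undo normalization

# Synthetic correspondences from two hypothetical cameras.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, (12, 2)), rng.uniform(4, 6, 12)])
Xh = np.column_stack([X, np.ones(12)])
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([R, np.array([[1.0], [0.2], [0.0]])])

def project(P):
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:]

pts1, pts2 = project(P1), project(P2)
F = eight_point(pts1, pts2)
h1 = np.column_stack([pts1, np.ones(12)])
h2 = np.column_stack([pts2, np.ones(12)])
residual = np.abs(np.sum(h2 * (F @ h1.T).T, axis=1)).max()
print(residual < 1e-8)
```

In noiseless synthetic data the epipolar constraint x2ᵀFx1 = 0 should hold to machine precision for every pair; inside a real RANSAC loop this residual would score inliers.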
Viewpoint and pose in body-form adaptation.
Sekunova, Alla; Black, Michael; Parkinson, Laura; Barton, Jason J S
2013-01-01
Faces and bodies are complex structures, perception of which can play important roles in person identification and inference of emotional state. Face representations have been explored using behavioural adaptation: in particular, studies have shown that face aftereffects show relatively broad tuning for viewpoint, consistent with origin in a high-level structural descriptor far removed from the retinal image. Our goals were to determine first, if body aftereffects also showed a degree of viewpoint invariance, and second if they also showed pose invariance, given that changes in pose create even more dramatic changes in the 2-D retinal image. We used a 3-D model of the human body to generate headless body images, whose parameters could be varied to generate different body forms, viewpoints, and poses. In the first experiment, subjects adapted to varying viewpoints of either slim or heavy bodies in a neutral stance, followed by test stimuli that were all front-facing. In the second experiment, we used the same front-facing bodies in neutral stance as test stimuli, but compared adaptation from bodies in the same neutral stance to adaptation with the same bodies in different poses. We found that body aftereffects were obtained over substantial viewpoint changes, with no significant decline in aftereffect magnitude with increasing viewpoint difference between adapting and test images. Aftereffects also showed transfer across one change in pose but not across another. We conclude that body representations may have more viewpoint invariance than faces, and demonstrate at least some transfer across pose, consistent with a high-level structural description.
NASA Astrophysics Data System (ADS)
Jiménez del Toro, Oscar; Atzori, Manfredo; Otálora, Sebastian; Andersson, Mats; Eurén, Kristian; Hedlund, Martin; Rönnquist, Peter; Müller, Henning
2017-03-01
The Gleason grading system was developed for assessing prostate histopathology slides and is correlated with the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer-aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the large amounts of pathology data generated daily. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures, training them with patches extracted from automatically generated regions-of-interest rather than from manually segmented ones. Relevant parameters for training the deep learning model, such as the size and number of patches and whether or not data augmentation is used, are compared between the tested architectures. 235 prostate tissue WSIs with their pathology reports from the publicly available TCGA data set were used. An accuracy of 78% was obtained on a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets, with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques.
Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks on big data sets and for guiding the visual inspection of these images.
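Patch extraction from automatically generated regions-of-interest, the input-preparation step described above, can be sketched on plain arrays; the patch size, stride and tissue threshold are illustrative choices, and a real pipeline would read patches from a WSI file format instead.

```python
import numpy as np

def extract_patches(wsi, roi_mask, patch=64, stride=64, min_tissue=0.5):
    """Cut fixed-size patches from a slide array, keeping only those
    whose overlap with the automatically generated ROI mask exceeds
    `min_tissue`. Array-based stand-in for reading an actual WSI file."""
    patches = []
    h, w = roi_mask.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            if roi_mask[y:y + patch, x:x + patch].mean() >= min_tissue:
                patches.append(wsi[y:y + patch, x:x + patch])
    return np.array(patches)

wsi = np.zeros((128, 128, 3), dtype=np.uint8)   # synthetic RGB slide
mask = np.zeros((128, 128))
mask[:64, :64] = 1                              # one quadrant marked as ROI
print(extract_patches(wsi, mask).shape)
```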
A field-emission based vacuum device for the generation of THz waves
NASA Astrophysics Data System (ADS)
Lin, Ming-Chieh
2005-03-01
Terahertz waves have been used to characterize the electronic, vibrational and compositional properties of solid, liquid and gas phase materials during the past decade. More and more applications in imaging science and technology call for well-developed THz wave sources. Amplification and generation of high-frequency electromagnetic waves are a common interest of field-emission-based devices. In the present work, we propose a vacuum electronic device based on the field emission mechanism for the generation of THz waves. To verify our designs, cold tests and hot tests have been studied via the simulation tools SUPERFISH and MAGIC. In the hot tests, two types of electron emission mechanisms are considered: field emission and explosive emission. The preliminary design of the device is carried out and tested by numerical simulations. The simulation results show that an electronic efficiency of up to 4% can be achieved without employing any magnetic circuits.
How Visuo-Spatial Mental Imagery Develops: Image Generation and Maintenance
Wimmer, Marina C.; Maras, Katie L.; Robinson, Elizabeth J; Doherty, Martin J; Pugeault, Nicolas
2015-01-01
Two experiments examined the nature of visuo-spatial mental imagery generation and maintenance in 4-, 6-, 8-, 10-year old children and adults (N = 211). The key questions were how image generation and maintenance develop (Experiment 1) and how accurately children and adults coordinate mental and visually perceived images (Experiment 2). Experiment 1 indicated that basic image generation and maintenance abilities are present at 4 years of age but the precision with which images are generated and maintained improves particularly between 4 and 8 years. In addition to increased precision, Experiment 2 demonstrated that generated and maintained mental images become increasingly similar to visually perceived objects. Altogether, findings suggest that for simple tasks demanding image generation and maintenance, children attain adult-like precision younger than previously reported. This research also sheds new light on the ability to coordinate mental images with visual images in children and adults. PMID:26562296
High resolution macroscopy (HRMac) of the eye using nonlinear optical imaging
NASA Astrophysics Data System (ADS)
Winkler, Moritz; Jester, Bryan E.; Nien-Shy, Chyong; Chai, Dongyul; Brown, Donald J.; Jester, James V.
2010-02-01
Non-linear optical (NLO) imaging using femtosecond lasers provides a non-invasive means of imaging the structural organization of the eye through the generation of second harmonic signals (SHG). While NLO imaging is able to detect collagen, the small field of view (FoV) limits the ability to study how collagen is structurally organized throughout the larger tissue. To address this issue we have used computed tomography on optically and mechanically sectioned tissue to greatly expand the FoV and provide high resolution macroscopic (HRMac) images that cover the entire tissue (cornea and optic nerve head). Whole, fixed corneas (13 mm diameter) or optic nerves (3 mm diameter) were excised and either 1) embedded in agar and sectioned using a vibratome (200-300 um), or 2) embedded in LR White plastic resin and serially sectioned (2 um). Vibratome and plastic sections were then imaged using a Zeiss LSM 510 Meta and a Chameleon femtosecond laser to generate NLO signals and assemble large macroscopic 3-dimensional tomographs varying in size from 9 to 90 megapixels per plane, with a resolution of 0.88 um lateral and 2.0 um axial. 3-D reconstructions allowed for regional measurements within the cornea and optic nerve to quantify collagen content, orientation and organization over the entire tissue. We conclude that NLO-based tomography to generate HRMac images provides a powerful new tool to assess collagen structural organization. Biomechanical testing combined with NLO tomography may provide new insights into the relationship between the extracellular matrix and tissue mechanics.
NASA Technical Reports Server (NTRS)
Partridge, James D.
2002-01-01
'NASA is preparing to launch the Next Generation Space Telescope (NGST). This telescope will be larger than the Hubble Space Telescope, be launched on an Atlas rocket rather than the Space Shuttle, have a segmented primary mirror, and be placed in a higher orbit. All these differences pose significant challenges.' This effort addresses the challenge of implementing an algorithm, designed by Philip Olivier and members of SOMTC (Space Optics Manufacturing Technology Center), for aligning the segments of the primary mirror during initial deployment. The implementation was to be performed on the SIBOA (Systematic Image Based Optical Alignment) test bed. Unfortunately, hardware/software issues concerning SIBOA and an extended algorithm development period prevented testing before the end of the study period. Properties of the digital camera were studied and understood, resulting in the current ability to select optimal settings with respect to saturation. The study succeeded in manually capturing several images of two stacked segments at various relative phases. These images can be used to calibrate the algorithm for future implementation. The system is now ready for testing.
NASA Technical Reports Server (NTRS)
Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Stiles, J. A.; Moore, R. K.; Holtzman, J. C.
1981-01-01
Image simulation techniques were employed to generate synthetic aperture radar images of a 17.7 km x 19.3 km test site located east of Lawrence, Kansas. The simulations were performed for a spaceborne SAR at an orbital altitude of 600 km, with the following sensor parameters: frequency = 4.75 GHz, polarization = HH, and angle of incidence range = 7 deg to 22 deg from nadir. Three sets of images were produced, corresponding to three different spatial resolutions: 20 m x 20 m with 12 looks, 100 m x 100 m with 23 looks, and 1 km x 1 km with 1000 looks. Each set consisted of images for four different soil moisture distributions across the test site. Results indicate that, for the agricultural portion of the test site, the soil moisture in about 90% of the pixels can be predicted with an accuracy of ± 20% of field capacity. Among the three spatial resolutions, the 1 km x 1 km resolution gave the best results in most cases; however, for very dry soil conditions, the 100 m x 100 m resolution was slightly superior.
Andrade, Edson de Oliveira; Andrade, Elizabeth Nogueira de; Gallo, José Hiran
2011-01-01
To present the experience of a health plan operator (Unimed-Manaus) in Manaus, Amazonas, Brazil, with the accreditation of imaging services and the demand induced by the supply of new services (Roemer's Law). This retrospective work studies a time series covering the period from January 1998 to June 2004, during which computed tomography and magnetic resonance imaging services were added to the services offered by the health plan operator. Statistical analysis consisted of a descriptive and an inferential part, the latter using parametric mean tests (Student's t-test and ANOVA) and the Pearson correlation test; a 5% alpha and a 95% confidence interval were adopted. At Unimed-Manaus, the supply of new imaging services was, by itself, capable of generating increased service demand, thus characterizing the phenomenon described by Roemer. The results underscore the need to be aware that the supply of new health services can bring about their increased use without a real demand.
ROSE: the road simulation environment
NASA Astrophysics Data System (ADS)
Liatsis, Panos; Mitronikas, Panogiotis
1997-05-01
Evaluation of advanced sensing systems for autonomous vehicle navigation (AVN) is currently carried out off-line with prerecorded image sequences taken by physically attaching the sensors to the ego-vehicle. The data collection process is cumbersome and costly, as well as highly restricted to specific road environments and weather conditions. This work proposes the use of scientific animation for modeling and representing real-world traffic scenes, and aims to produce an efficient, reliable and cost-effective concept evaluation suite for AVN sensing algorithms. ROSE is organized in a modular fashion, consisting of the route generator, the journey generator, the sequence description generator and the renderer. The application was developed in MATLAB, and POV-Ray was selected as the rendering module. User-friendly graphical user interfaces allow easy selection of animation parameters and monitoring of the generation process. The system, in its current form, allows the generation of various traffic scenarios, providing an adequate number of static/dynamic objects, road types and environmental conditions. Initial tests of the robustness of various image processing algorithms to varying lighting and weather conditions have already been carried out.
Testbed Experiment for SPIDER: A Photonic Integrated Circuit-based Interferometric imaging system
NASA Astrophysics Data System (ADS)
Badham, K.; Duncan, A.; Kendrick, R. L.; Wuchenich, D.; Ogden, C.; Chriqui, G.; Thurman, S. T.; Su, T.; Lai, W.; Chun, J.; Li, S.; Liu, G.; Yoo, S. J. B.
The Lockheed Martin Advanced Technology Center (LM ATC) and the University of California at Davis (UC Davis) are developing an electro-optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that seeks to provide a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal-plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger-aperture imager in a constrained volume. Our SPIDER imager replaces the traditional optical telescope and digital focal plane detector array with a densely packed interferometer array based on emerging photonic integrated circuit (PIC) technologies that samples the object being imaged in the Fourier domain (i.e., spatial frequency domain), and then reconstructs an image. Our approach replaces the large optics and structures required by a conventional telescope with PICs that are accommodated by standard lithographic fabrication techniques (e.g., complementary metal-oxide-semiconductor (CMOS) fabrication). The standard EO payload integration and test process that involves precision alignment and test of optical components to form a diffraction limited telescope is, therefore, replaced by in-process integration and test as part of the PIC fabrication, which substantially reduces associated schedule and cost. In this paper we describe the photonic integrated circuit design and the testbed used to create the first images of extended scenes. We summarize the image reconstruction steps and present the final images. We also describe our next generation PIC design for a larger (16x area, 4x field of view) image.
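The Fourier-domain sampling idea can be illustrated with a toy model: treat each interferometer baseline as measuring one spatial-frequency component (visibility) of the scene, and reconstruct by inverse FFT of whatever the u-v mask retains. This ignores the PIC optics entirely and is only a sketch of the sampling principle.

```python
import numpy as np

def reconstruct(scene, uv_mask):
    """Measure 'visibilities' (2-D Fourier components) of the scene at
    the sampled spatial frequencies in `uv_mask`, then reconstruct by
    inverse FFT with unsampled frequencies set to zero."""
    vis = np.fft.fft2(scene)
    return np.real(np.fft.ifft2(np.where(uv_mask, vis, 0)))

scene = np.zeros((32, 32))
scene[12:20, 12:20] = 1.0                      # a simple extended source
full = np.ones((32, 32), dtype=bool)           # fully sampled u-v plane
half = np.zeros((32, 32), dtype=bool)
half[:16, :] = True                            # only half the frequencies
print(np.allclose(reconstruct(scene, full), scene))
err = np.abs(reconstruct(scene, half) - scene).max()
print(err > 0.1)
```

Full u-v coverage recovers the scene exactly, while dropping frequencies degrades the reconstruction, which is why the density of the baseline packing matters for image quality.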
NASA Astrophysics Data System (ADS)
Amalia, A.; Rachmawati, D.; Lestari, I. A.; Mourisa, C.
2018-03-01
Colposcopy has been used primarily to diagnose pre-cancerous and cancerous lesions because the procedure gives a magnified view of the tissues of the vagina and the cervix. However, the poor quality of colposcopy images sometimes makes it challenging for physicians to recognize and analyze them. Implementations of image processing for identifying cervical cancer generally rely on complex classification or clustering methods. In this study, we set out to show that cervical cancer can be identified by applying only edge detection to the colposcopy image. We implement and compare two edge detection operators, the isotropic and the Canny operator. The research methodology comprises image processing, training, and testing stages. In the image processing stage, the colposcopy image is transformed by an nth-root power transformation to improve detection, followed by the edge detection step. Training is the process of labelling every dataset image with a cervical cancer stage; this process involved a pathologist as the expert reference for diagnosing the colposcopy images. Testing decides the cancer stage by comparing the similarity of a colposcopy image in the testing stage with the images from the training process. We used 30 images as the dataset. Both operators achieve the same accuracy of 80%. The average running time is 0.3619206 ms for the Canny operator and 1.49136262 ms for the isotropic operator. The results show that the Canny operator is preferable because it generates more precise edges in less time.
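The enhancement step can be sketched directly: an nth-root power transform brightens dim tissue so that weak boundaries survive thresholding in the subsequent edge detector. The finite-difference edge map below is a minimal stand-in for the Canny and isotropic operators compared in the paper.

```python
import numpy as np

def nth_root_transform(img, n=2):
    """Power-law transform s = 255 * (r/255)**(1/n), stretching
    dark intensities before edge detection."""
    r = img.astype(float) / 255.0
    return (255.0 * r ** (1.0 / n)).astype(np.uint8)

def gradient_edges(img, thresh=40.0):
    """Minimal edge map from finite-difference gradient magnitude
    (a stand-in for the full Canny / isotropic operators)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 64                        # a dim vertical step edge
enhanced = nth_root_transform(img)     # the step is stretched to ~0 -> 127
print(gradient_edges(img).any(), gradient_edges(enhanced).any())
```

On this toy image the dim edge falls below the gradient threshold before enhancement and above it afterwards, which is the motivation for applying the transform first.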
Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee
2017-07-01
Landsat optical images have sufficient spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor frequently degrade image quality, which limits the availability of usable images for time series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and multilinear regression analysis were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that input images were available within one month before and after the target date. The STARFM method gives good results when the input image date is close to the target date, and careful regional and seasonal consideration is required in selecting input images. During the summer season, clouds make it very difficult to obtain images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
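The weighted average method can be read as an inverse-time-distance blend of the nearest usable scenes before and after the target date; this is a plausible sketch of the idea with synthetic values, not the study's exact formulation.

```python
import numpy as np

def weighted_average(ndvi_before, ndvi_after, days_before, days_after):
    """Blend the nearest usable scenes on either side of the target date,
    weighting each by the inverse of its distance in time. A plausible
    sketch of the weighted average method, not the study's exact code."""
    w1, w2 = 1.0 / days_before, 1.0 / days_after
    return (w1 * ndvi_before + w2 * ndvi_after) / (w1 + w2)

before = np.full((2, 2), 0.40)   # hypothetical scene 10 days before target
after = np.full((2, 2), 0.70)    # hypothetical scene 20 days after target
sim = weighted_average(before, after, 10.0, 20.0)
print(sim[0, 0])
```

The closer scene gets twice the weight here, so the simulated NDVI lands nearer its value, matching the intuition that scenes near the target date are more informative.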
Bénard, Antoine; Palle, Sabine; Doucet, Luc Serge; Ionov, Dmitri A
2011-12-01
We report the first application of multiphoton microscopy (MPM) to generate three-dimensional (3D) images of natural minerals (micron-sized sulfides) in thick (∼120 μm) rock sections. First, reflection mode (RM) using confocal laser scanning microscopy (CLSM), combined with differential interference contrast (DIC), was tested on polished sections. Second, two-photon fluorescence (TPF) and second harmonic signal (SHG) images were generated using a femtosecond-laser on the same rock section without impregnation by a fluorescent dye. CSLM results show that the silicate matrix is revealed with DIC and RM, while sulfides can be imaged in 3D at low resolution by RM. Sulfides yield strong autofluorescence from 392 to 715 nm with TPF, while SHG is only produced by the embedding medium. Simultaneous recording of TPF and SHG images enables efficient discrimination between different components of silicate rocks. Image stacks obtained with MPM enable complete reconstruction of the 3D structure of a rock slice and of sulfide morphology at submicron resolution, which has not been previously reported for 3D imaging of minerals. Our work suggests that MPM is a highly efficient tool for 3D studies of microstructures and morphologies of minerals in silicate rocks, which may find other applications in geosciences.
Yang, Hui; Trouillon, Raphaël; Huszka, Gergely; Gijs, Martin A M
2016-08-10
Dielectric microspheres with appropriate refractive index can image objects with super-resolution, that is, with a precision well beyond the classical diffraction limit. A microsphere is also known to generate upon illumination a photonic nanojet, which is a scattered beam of light with a high-intensity main lobe and very narrow waist. Here, we report a systematic study of the imaging of water-immersed nanostructures by barium titanate glass microspheres of different size. A numerical study of the light propagation through a microsphere points out the light focusing capability of microspheres of different size and the waist of their photonic nanojet. The former correlates to the magnification factor of the virtual images obtained from linear test nanostructures, the biggest magnification being obtained with microspheres of ∼6-7 μm in size. Analyzing the light intensity distribution of microscopy images allows determining analytically the point spread function of the optical system and thereby quantifies its resolution. We find that the super-resolution imaging of a microsphere is dependent on the waist of its photonic nanojet, the best resolution being obtained with a 6 μm Ø microsphere, which generates the nanojet with the minimum waist. This comparison allows elucidating the super-resolution imaging mechanism.
A Kinect™ camera based navigation system for percutaneous abdominal puncture
NASA Astrophysics Data System (ADS)
Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao
2016-08-01
Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors, and image-guided puncture can help interventional radiologists improve targeting accuracy. With the recent release of the second generation of the Kinect™, we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture and to compare its needle insertion guidance performance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed to generate a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE with respect to operator's skill or trajectory were observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions; the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen artificial liver tumors were punctured under guidance of the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, with lateral and longitudinal components of 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively.
This study demonstrates that the navigation accuracy of the proposed system is acceptable, and that the second generation Kinect™-based navigation is superior to the first-generation Kinect™, and has potential of clinical application in percutaneous abdominal puncture.
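The ICP surface matching at the heart of the registration can be sketched for plain point clouds: alternate nearest-neighbour correspondence with a closed-form (Kabsch) rigid-motion solve. The synthetic cloud and transform below are illustrative; the system itself matches CT and Kinect™ depth surfaces.

```python
import numpy as np

def best_rigid_transform(A, B):
    """Closed-form least-squares rotation and translation mapping
    point set A onto B (Kabsch algorithm)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: nearest-neighbour matching plus a
    rigid-motion solve, repeated. A sketch of the idea, not the
    paper's surface-based implementation."""
    cur = src.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

rng = np.random.default_rng(1)
dst = rng.uniform(-1, 1, (60, 3))      # synthetic "CT" surface points
th = 0.1                               # known small test rotation about z
Rz = np.array([[np.cos(th), -np.sin(th), 0],
               [np.sin(th),  np.cos(th), 0],
               [0, 0, 1.0]])
t = np.array([0.05, -0.08, 0.02])
src = (dst - t) @ Rz                   # misaligned "depth camera" points
R_est, t_est = icp(src, dst)
print(np.allclose(src @ R_est.T + t_est, dst, atol=1e-6))
```

The proposed 2D shape-based correspondence search plays the role of the initial guess: ICP of this kind only converges when the starting misalignment is small, as it is in this synthetic example.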
3D Modelling with the Samsung Gear 360
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2017-02-01
The Samsung Gear 360 is a consumer-grade spherical camera able to capture photos and videos. The aim of this work is to test the metric accuracy and the level of detail achievable with the Samsung Gear 360 coupled with digital modelling techniques based on photogrammetry/computer vision algorithms. Results demonstrate that direct use of the equirectangular projection generated inside the mobile phone or with Gear 360 Action Director (the desktop software for post-processing) has relatively low metric accuracy. As these results were in contrast with the accuracy achieved by using the original fisheye images (front- and rear-facing) in photogrammetric reconstructions, an alternative solution for generating the equirectangular projections was developed. A calibration aimed at recovering the intrinsic parameters of the camera's two lenses, as well as their relative orientation, allowed new equirectangular projections to be generated, from which a significant improvement in geometric accuracy was achieved.
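The key geometric step in building a new equirectangular projection is mapping each output pixel's viewing direction into a fisheye image. The sketch below assumes an equidistant fisheye model (r = f·θ) with made-up focal length and principal point; the real values would come from the calibration of the two lenses.

```python
import numpy as np

def equirect_dir(i, j, h, w):
    """Unit viewing direction for equirectangular output pixel (i, j)."""
    lon = 2.0 * np.pi * (j + 0.5) / w - np.pi
    lat = np.pi / 2.0 - np.pi * (i + 0.5) / h
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def dir_to_fisheye(v, f=700.0, cx=960.0, cy=960.0):
    """Project a viewing direction into an equidistant fisheye image
    (r = f * theta). Focal length and principal point are hypothetical;
    a calibration would supply them per lens."""
    theta = np.arccos(np.clip(v[2], -1.0, 1.0))  # angle from optical axis (+z)
    phi = np.arctan2(v[1], v[0])
    return cx + f * theta * np.cos(phi), cy + f * theta * np.sin(phi)

u, v = dir_to_fisheye(np.array([0.0, 0.0, 1.0]))
print(u, v)        # the on-axis direction lands on the principal point
```

Building the full projection then amounts to looping over output pixels, choosing the front or rear lens by the sign of the direction's z-component, and bilinearly sampling the corresponding fisheye image.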
Selective interference with image retention and generation: evidence for the workspace model.
van der Meulen, Marian; Logie, Robert H; Della Sala, Sergio
2009-08-01
We address three types of model of the relationship between working memory (WM) and long-term memory (LTM): (a) the gateway model, in which WM acts as a gateway between perceptual input and LTM; (b) the unitary model, in which WM is seen as the currently activated areas of LTM; and (c) the workspace model, in which perceptual input activates LTM, and WM acts as a separate workspace for processing and temporary retention of these activated traces. Predictions of these models were tested, focusing on visuospatial working memory and using dual-task methodology to combine two main tasks (visual short-term retention and image generation) with two interference tasks (irrelevant pictures and spatial tapping). The pictures selectively disrupted performance on the generation task, whereas the tapping selectively interfered with the retention task. Results are consistent with the predictions of the workspace model.
1. Credit BG. View looking southeast down onto roof and ...
1. Credit BG. View looking southeast down onto roof and the north and west facades of Steam Generator Plant, Building 4280/E-81. Vents on roof were from gas-fired steam generators. Pipes emerging from north facade are for steam. Elevated narrow tray is for electrical cables. To lower left of image (immediate north of 4280/E-81) is concrete-lined pond originally built to neutralize rocket engine exhaust compounds; it was only used as a cooling pond. To the lower right of this image are concrete pads which held two 7,500 gallon feedwater tanks for the boilers in 4280/E-81; these tanks were transferred to another federal space science organization and removed from the JPL compound in 1994. Beyond 4280/E-81 to the upper left is a reclamation pond. ... - Jet Propulsion Laboratory Edwards Facility, Test Stand D, Steam Generator Plant, Edwards Air Force Base, Boron, Kern County, CA
Viangteeravat, Teeradache; Anyanwu, Matthew N; Ra Nagisetty, Venkateswara; Kuscu, Emin
2011-07-15
Massive datasets comprising high-resolution images, generated in neuro-imaging studies and in clinical imaging research, increasingly challenge our ability to analyze, share, and filter such images in clinical and basic translational research. Pivot collection exploratory analysis gives each user the ability to interact with massive amounts of visual data, with the sorting flexibility and speed needed to fluidly access, explore, or analyze large sets of high-resolution images and their associated metadata, such as neuro-imaging databases from the Allen Brain Atlas. It is used for clustering, filtering, sharing, and classifying the visual data into various deep-zoom levels and metadata categories to detect hidden patterns within the data set. We deployed prototype Pivot collections using Linux CentOS running the Apache web server, and also tested the prototype collections on other operating systems such as Windows and UNIX. The approach yields very good results when compared with other approaches used for the generation and clustering of massive image collections, such as the coronal and horizontal sections of the mouse brain from the Allen Brain Atlas. Pivot visual analytics was used to analyze a prototype dataset of Dab2 co-expressed genes from the Allen Brain Atlas. The metadata, along with the high-resolution images, were automatically extracted using the Allen Brain Atlas API and then used to identify hidden information based on the various categories and conditions applied through options generated from the automated collection. A metadata category such as chromosome, as well as data for individual cases such as sex, age, and plane attributes of a particular gene, is used to filter and sort, and to determine whether other genes have characteristics similar to Dab2.
Online access to the mouse brain Pivot collection is available at http://edtech-dev.uthsc.edu/CTSI/teeDev1/unittest/PaPa/collection.html (user name: tviangte; password: demome). Our proposed algorithm has automated the creation of large image Pivot collections; this will enable investigators of clinical research projects to easily and quickly analyze image collections through a perspective that is useful for making critical decisions about the image patterns discovered.
Segmentation of left atrial intracardiac ultrasound images for image guided cardiac ablation therapy
NASA Astrophysics Data System (ADS)
Rettmann, M. E.; Stephens, T.; Holmes, D. R.; Linte, C.; Packer, D. L.; Robb, R. A.
2013-03-01
Intracardiac echocardiography (ICE), a technique in which structures of the heart are imaged using a catheter navigated inside the cardiac chambers, is an important imaging technique for guidance in cardiac ablation therapy. Automatic segmentation of these images is valuable for guidance and targeting of treatment sites. In this paper, we describe an approach to segment ICE images by generating an empirical model of blood pool and tissue intensities. Normal, Weibull, Gamma, and Generalized Extreme Value (GEV) distributions are fit to histograms of tissue and blood pool pixels from a series of ICE scans. A total of 40 images from 4 separate studies were evaluated. The model was trained and tested using two approaches. In the first approach, the model was trained on all images from 3 studies and subsequently tested on the 40 images from the 4th study. This procedure was repeated 4 times using a leave-one-out strategy. This is termed the between-subjects approach. In the second approach, the model was trained on 10 randomly selected images from a single study and tested on the remaining 30 images in that study. This is termed the within-subjects approach. For both approaches, the model was used to automatically segment ICE images into blood and tissue regions. Each pixel was classified using the Generalized Likelihood Ratio Test across neighborhood sizes ranging from 1 to 49. Automatic segmentation results were compared against manual segmentations for all images. In the between-subjects approach, the GEV distribution using a neighborhood size of 17 was found to be the most accurate, with a misclassification rate of approximately 17%. In the within-subjects approach, the GEV distribution using a neighborhood size of 19 was found to be the most accurate, with a misclassification rate of approximately 15%. As expected, the majority of misclassified pixels were located near the boundaries between tissue and blood pool regions for both methods.
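The classification step described above can be illustrated with a simplified sketch: fit an intensity model to training pixels from each class, then label each pixel by comparing neighbourhood-summed log-likelihoods. For brevity this sketch uses Gaussian models rather than the GEV distribution the paper found most accurate; all names are illustrative:

```python
import numpy as np

def gauss_loglik(x, mu, sigma):
    """Per-pixel Gaussian log-likelihood."""
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify_blood_tissue(image, blood_train, tissue_train, n=5):
    """Label each pixel 0 (blood) or 1 (tissue) by a likelihood ratio test
    over an n x n neighbourhood, standing in for the paper's GLRT."""
    img = image.astype(float)
    pad = n // 2
    scores = []
    for train in (blood_train, tissue_train):
        ll = gauss_loglik(img, np.mean(train), np.std(train))
        padded = np.pad(ll, pad, mode='edge')
        # Sum log-likelihoods over the neighbourhood (box filter)
        s = np.zeros_like(ll)
        for dy in range(n):
            for dx in range(n):
                s += padded[dy:dy + ll.shape[0], dx:dx + ll.shape[1]]
        scores.append(s)
    return (scores[1] > scores[0]).astype(int)
```

Larger neighbourhoods smooth the decision, which mirrors the paper's finding that misclassifications concentrate near blood/tissue boundaries.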
Siddiqui, M Minhaj; Truong, Hong; Rais-Bahrami, Soroush; Stamatakis, Lambros; Logan, Jennifer; Walton-Diaz, Annerleim; Turkbey, Baris; Choyke, Peter L; Wood, Bradford J; Simon, Richard M; Pinto, Peter A
2015-06-01
Multiparametric magnetic resonance imaging may be beneficial in the search for rational ways to decrease prostate cancer intervention in patients on active surveillance. We applied a previously generated nomogram based on multiparametric magnetic resonance imaging to predict active surveillance eligibility based on repeat biopsy outcomes. We reviewed the records of 85 patients who met active surveillance criteria at study entry based on initial biopsy and who then underwent 3.0 Tesla multiparametric magnetic resonance imaging with subsequent magnetic resonance imaging/ultrasound fusion guided prostate biopsy between 2007 and 2012. We assessed the accuracy of a previously published nomogram in patients on active surveillance before confirmatory biopsy. For each cutoff we determined the number of biopsies avoided (i.e., reliance on magnetic resonance imaging alone without rebiopsy) over the full range of nomogram cutoffs. We assessed the performance of the multiparametric magnetic resonance imaging active surveillance nomogram based on a decision to perform biopsy at various nomogram-generated probabilities. Based on cutoff probabilities of 19% to 32% on the nomogram, the number of patients who could be spared repeat biopsy was 27% to 68% of the active surveillance cohort. The sensitivity of the test in this interval was 97% to 71% and the negative predictive value was 91% to 81%. Multiparametric magnetic resonance imaging based nomograms may reasonably decrease the number of repeat biopsies in patients on active surveillance by as much as 68%. Analysis over the full range of nomogram-generated probabilities allows patient and caregiver preference based decision making on the risk assumed for the benefit of fewer repeat biopsies. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
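The cutoff analysis described above reduces to sweeping a probability threshold and, at each value, counting biopsies avoided against cancers missed. A small illustrative sketch with hypothetical data (this is not the study's nomogram; the names are invented for illustration):

```python
def threshold_metrics(probs, has_cancer, cutoff):
    """Patients whose nomogram probability falls below `cutoff` skip the
    repeat biopsy. Returns (fraction spared, sensitivity, negative
    predictive value) for that decision rule."""
    spared = [c for p, c in zip(probs, has_cancer) if p < cutoff]
    biopsied = [c for p, c in zip(probs, has_cancer) if p >= cutoff]
    tp, fn = sum(biopsied), sum(spared)   # cancers caught vs. missed
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    npv = 1 - fn / len(spared) if spared else 1.0
    return len(spared) / len(probs), sensitivity, npv
```

Sweeping `cutoff` over the nomogram's output range reproduces the kind of trade-off curve reported here: raising the cutoff spares more biopsies but lowers sensitivity.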
On-Wafer Measurement of a Silicon-Based CMOS VCO at 324 GHz
NASA Technical Reports Server (NTRS)
Samoska, Lorene; Man Fung, King; Gaier, Todd; Huang, Daquan; Larocca, Tim; Chang, M. F.; Campbell, Richard; Andrews, Michael
2008-01-01
The world's first silicon-based complementary metal oxide/semiconductor (CMOS) integrated-circuit voltage-controlled oscillator (VCO) operating in a frequency range around 324 GHz has been built and tested. Concomitantly, equipment for measuring the performance of this oscillator has been built and tested. These accomplishments are intermediate steps in a continuing effort to develop low-power-consumption, low-phase-noise, electronically tunable signal generators as local oscillators for heterodyne receivers in submillimeter-wavelength (frequency > 300 GHz) scientific instruments and imaging systems. Submillimeter-wavelength imaging systems are of special interest for military and law-enforcement use because they could, potentially, be used to detect weapons hidden behind clothing and other opaque dielectric materials. In comparison with prior submillimeter-wavelength signal generators, CMOS VCOs offer significant potential advantages, including great reductions in power consumption, mass, size, and complexity. In addition, there is potential for on-chip integration of CMOS VCOs with other CMOS integrated circuitry, including phase-lock loops, analog-to-digital converters, and advanced microprocessors.
A simple method for MR elastography: a gradient-echo type multi-echo sequence.
Numano, Tomokazu; Mizuhara, Kazuyuki; Hata, Junichi; Washio, Toshikatsu; Homma, Kazuhiro
2015-01-01
To demonstrate the feasibility of a novel MR elastography (MRE) technique based on a conventional gradient-echo type multi-echo MR sequence which does not need additional bipolar magnetic field gradients (motion-encoding gradients: MEG), yet is sensitive to vibration. In a gradient-echo type multi-echo MR sequence, several images are produced from each echo of the train with different echo times (TEs). If these echoes are synchronized with the vibration, each readout's gradient lobes achieve an MEG-like effect, and later generated echoes accumulate a greater MEG-like effect. The sequence was tested on tissue-mimicking agarose gel phantoms and the psoas major muscles of healthy volunteers. It was confirmed that the readout gradient lobes caused an MEG-like effect and that the later-TE images had higher sensitivity to vibrations. The magnitude image of the later generated echoes suffered from T2 decay and susceptibility artifacts, but the corresponding wave image and elastogram were unaffected by these effects. In in vivo experiments, this method was able to measure the mean shear modulus of the psoas major muscle. The results of the phantom experiments and volunteer studies showed that this method has potential for clinical application. Copyright © 2014 Elsevier Inc. All rights reserved.
Horga, Guillermo; Kaur, Tejal; Peterson, Bradley S
2014-06-01
The widespread use of Magnetic Resonance Imaging (MRI) in the study of child- and adult-onset developmental psychopathologies has generated many investigations that have measured brain structure and function in vivo throughout development, often generating great excitement over our ability to visualize the living, developing brain using the attractive, even seductive images that these studies produce. Often lost in this excitement is the recognition that brain imaging generally, and MRI in particular, is simply a technology, one that does not fundamentally differ from any other technology, be it a blood test, a genotyping assay, a biochemical assay, or a behavioral test. No technology alone can generate valid scientific findings. Rather, it is only technology coupled with a strong experimental design that can generate valid and reproducible findings that lead to new insights into the mechanisms of disease and therapeutic response. In this review we discuss selected studies to illustrate the most common and important limitations of MRI study designs as most commonly implemented thus far, as well as the misunderstandings that interpretations of findings from those studies can create for our theories of developmental psychopathologies. These common limitations of study design are in large part responsible for the generally poor reproducibility of findings across studies, poor generalizability to the larger population, failure to identify developmental trajectories, inability to distinguish causes from effects of illness, and poor ability to infer causal mechanisms in most MRI studies of developmental psychopathologies.
For each of these limitations in study design and the difficulties they entail for the interpretation of findings, we discuss various approaches that numerous laboratories are now taking to address those difficulties, which have in common the yoking of brain imaging technologies to studies with inherently stronger designs that permit more valid and more powerful causal inferences. Those study designs include epidemiological, longitudinal, high-risk, clinical trials, and multimodal imaging studies. We highlight several studies that have yoked brain imaging technologies to these stronger designs to illustrate how doing so can aid our understanding of disease mechanisms and in the foreseeable future can improve clinical diagnosis, prevention, and treatment planning for developmental psychopathologies. © 2014 The Authors. Journal of Child Psychology and Psychiatry © 2014 Association for Child and Adolescent Mental Health.
Newell, John D; Fuld, Matthew K; Allmendinger, Thomas; Sieren, Jered P; Chan, Kung-Sik; Guo, Junfeng; Hoffman, Eric A
2015-01-01
The purpose of this study was to evaluate the impact of ultralow radiation dose single-energy computed tomographic (CT) acquisitions with Sn prefiltration and third-generation iterative reconstruction on density-based quantitative measures of growing interest in phenotyping pulmonary disease. The effects of both decreasing dose and different body habitus on the accuracy of the mean CT attenuation measurements and the level of image noise (SD) were evaluated using the COPDGene 2 test object, containing 8 different materials of interest ranging from air to acrylic and including various density foams. A third-generation dual-source multidetector CT scanner (Siemens SOMATOM FORCE; Siemens Healthcare AG, Erlangen, Germany) running advanced modeled iterative reconstruction (ADMIRE) software (Siemens Healthcare AG) was used. We used normal and very large body habitus rings at dose levels varying from 1.5 to 0.15 mGy using a spectral-shaped (0.6-mm Sn) tube output of 100 kVp. Three CT scans were obtained at each dose level using both rings. Regions of interest for each material in the test object scans were automatically extracted. The Hounsfield unit value of each material using weighted filtered back projection (WFBP) at 1.5 mGy was used as the reference value to evaluate shifts in CT attenuation at lower dose levels using either WFBP or ADMIRE. Statistical analysis included basic statistics, Welch t tests, and a multivariable covariate model, using the F test to assess the significance of the explanatory (independent) variables, including reconstruction method, on the response (dependent) variable, mean CT attenuation. Multivariable regression analysis of the mean CT attenuation values showed a significant difference with decreasing dose between ADMIRE and WFBP. ADMIRE has reduced noise and more stable CT attenuation compared with WFBP.
There was a strong effect on the mean CT attenuation values of the scanned materials for ring size (P < 0.0001) and dose level (P < 0.0001). The number of voxels in the region of interest for the particular material studied did not demonstrate a significant effect (P > 0.05). The SD was lower with ADMIRE compared with WFBP at all dose levels and ring sizes (P < 0.05). Third-generation dual-source CT scanners using third-generation iterative reconstruction methods can acquire accurate quantitative CT images with acceptable image noise at very low dose levels (0.15 mGy). This opens up new diagnostic and research opportunities in CT phenotyping of the lung for developing new treatments and increasing understanding of pulmonary disease.
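The accuracy metrics reported above come down to the mean and standard deviation of Hounsfield unit values inside each material's region of interest, with the shift measured against the 1.5 mGy WFBP reference. A minimal numpy sketch (function names are illustrative, not from the study's analysis code):

```python
import numpy as np

def roi_stats(image, mask):
    """Mean CT number (HU) and noise (SD) over a material's ROI."""
    vals = image[mask]
    return float(vals.mean()), float(vals.std())

def attenuation_shift(image, mask, reference_mean):
    """Shift of the ROI mean HU relative to the reference acquisition
    (here, the 1.5 mGy WFBP reconstruction)."""
    mean_hu, _ = roi_stats(image, mask)
    return mean_hu - reference_mean
```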
MRI-based treatment planning with pseudo CT generated through atlas registration.
Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho
2014-05-01
To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). 
The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.
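Of the synthesis schemes compared above, the arithmetic mean process is the simplest: a voxel-wise average of the deformed atlas CTs, assessed against the real CT by correlation. A hedged numpy sketch (illustrative only; real use requires the nonlinear MR-to-MR registration step the paper describes to deform the atlases first):

```python
import numpy as np

def mean_pseudo_ct(deformed_atlas_cts):
    """Arithmetic-mean synthesis: voxel-wise average of deformed atlas CTs."""
    return np.mean(np.stack(deformed_atlas_cts), axis=0)

def ct_similarity(pseudo_ct, real_ct):
    """Voxel-wise Pearson correlation between pseudo CT and real CT."""
    return float(np.corrcoef(pseudo_ct.ravel(), real_ct.ravel())[0, 1])
```

Averaging several imperfect atlas CTs suppresses registration error in roughly the way averaging suppresses noise, which is consistent with the multiple-atlas schemes outperforming the single-atlas one.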
Mirage: a visible signature evaluation tool
NASA Astrophysics Data System (ADS)
Culpepper, Joanne B.; Meehan, Alaster J.; Shao, Q. T.; Richards, Noel
2017-10-01
This paper presents the Mirage visible signature evaluation tool, designed to provide a visible signature evaluation capability that will appropriately reflect the effect of scene content on the detectability of targets, providing a capability to assess visible signatures in the context of the environment. Mirage is based on a parametric evaluation of input images, assessing the value of a range of image metrics and combining them using the boosted decision tree machine learning method to produce target detectability estimates. It has been developed using experimental data from photosimulation experiments, where human observers search for vehicle targets in a variety of digital images. The images used for tool development are synthetic (computer generated) images, showing vehicles in many different scenes and exhibiting a wide variation in scene content. A preliminary validation has been performed using k-fold cross validation, where 90% of the image data set was used for training and 10% of the image data set was used for testing. The results of the k-fold validation from 200 independent tests show a prediction accuracy between Mirage predictions of detection probability and observed probability of detection of r(262) = 0.63, p < 0.0001 (Pearson correlation) and a MAE = 0.21 (mean absolute error).
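The preliminary validation above reduces to k-fold splitting plus Pearson correlation and mean absolute error between predicted and observed detection probabilities. A small sketch of those pieces (the boosted-decision-tree model itself is omitted; names are illustrative):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle 0..n-1 and yield (train, test) index arrays for k folds,
    so each fold tests on ~1/k of the data and trains on the rest."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n)
    for test in np.array_split(order, k):
        train = np.setdiff1d(order, test)
        yield train, test

def pearson_r(pred, obs):
    """Pearson correlation between predicted and observed values."""
    return float(np.corrcoef(pred, obs)[0, 1])

def mean_abs_error(pred, obs):
    """Mean absolute error between predicted and observed values."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(obs))))
```

With k = 10 this reproduces the 90%/10% train/test split used for the tool's validation.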
Ocular screening tests of elementary school children
NASA Technical Reports Server (NTRS)
Richardson, J.
1983-01-01
This report presents an analysis of 507 abnormal retinal reflex images taken of Huntsville kindergarten and first grade students. The retinal reflex images were obtained by using an MSFC-developed Generated Retinal Reflex Image System (GRRIS) photorefractor. The system uses a 35 mm camera with a telephoto lens and an electronic flash attachment. Slide images of the eyes were examined for abnormalities. Of a total of 1835 students screened for ocular abnormalities, 507 were found to have abnormal retinal reflexes. The types of ocular abnormalities detected were hyperopia, myopia, astigmatism, esotropia, exotropia, strabismus, and lens obstructions. The report shows that the use of the photorefractor screening system is an effective low-cost means of screening school children for abnormalities.
Narita, Akihiro; Ohkubo, Masaki; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2017-10-01
The aim of this feasibility study using phantoms was to propose a novel method for obtaining computer-generated realistic virtual nodules in lung computed tomography (CT). In the proposed methodology, pulmonary nodule images obtained with a CT scanner are deconvolved with the point spread function (PSF) in the scan plane and slice sensitivity profile (SSP) measured for the scanner; the resultant images are referred to as nodule-like object functions. Next, by convolving the nodule-like object function with the PSF and SSP of another (target) scanner, the virtual nodule can be generated so that it has the characteristics of the spatial resolution of the target scanner. To validate the methodology, the authors applied physical nodules of 5-, 7- and 10-mm-diameter (uniform spheres) included in a commercial CT test phantom. The nodule-like object functions were calculated from the sphere images obtained with two scanners (Scanner A and Scanner B); these functions were referred to as nodule-like object functions A and B, respectively. From these, virtual nodules were generated based on the spatial resolution of another scanner (Scanner C). By investigating the agreement of the virtual nodules generated from the nodule-like object functions A and B, the equivalence of the nodule-like object functions obtained from different scanners could be assessed. In addition, these virtual nodules were compared with the real (true) sphere images obtained with Scanner C. As a practical validation, five types of laboratory-made physical nodules with various complicated shapes and heterogeneous densities, similar to real lesions, were used. The nodule-like object functions were calculated from the images of these laboratory-made nodules obtained with Scanner A. From them, virtual nodules were generated based on the spatial resolution of Scanner C and compared with the real images of laboratory-made nodules obtained with Scanner C. 
Good agreement of the virtual nodules generated from the nodule-like object functions A and B of the phantom spheres was found, suggesting the validity of the nodule-like object functions. The virtual nodules generated from the nodule-like object function A of the phantom spheres were similar to the real images obtained with Scanner C; the root mean square errors (RMSEs) between them were 10.8, 11.1, and 12.5 Hounsfield units (HU) for 5-, 7-, and 10-mm-diameter spheres, respectively. The equivalent results (RMSEs) using the nodule-like object function B were 15.9, 16.8, and 16.5 HU, respectively. These RMSEs were small considering the high contrast between the sphere density and background density (approximately 674 HU). The virtual nodules generated from the nodule-like object functions of the five laboratory-made nodules were similar to the real images obtained with Scanner C; the RMSEs between them ranged from 6.2 to 8.6 HU in five cases. The nodule-like object functions calculated from real nodule images would be effective to generate realistic virtual nodules. The proposed method would be feasible for generating virtual nodules that have the characteristics of the spatial resolution of the CT system used in each institution, allowing for site-specific nodule generation. © 2017 American Association of Physicists in Medicine.
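The core operation described above, deconvolving by the source scanner's PSF and reconvolving with the target scanner's, can be sketched in 1D with FFTs and a regularised (Wiener-style) division. The Gaussian PSFs and all names below are illustrative assumptions, not the paper's measured PSF/SSP:

```python
import numpy as np

def gaussian_psf(n, fwhm):
    """Centred 1D Gaussian PSF of length n, normalised to unit area."""
    sigma = fwhm / 2.355
    x = np.arange(n) - n // 2
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def convolve(signal, psf):
    """Circular convolution with a centred PSF via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(signal) *
                               np.fft.fft(np.fft.ifftshift(psf))))

def retarget_nodule(profile, psf_a, psf_c, eps=1e-3):
    """Deconvolve a nodule profile measured on scanner A by A's PSF
    (regularised division), then reconvolve with scanner C's PSF to
    synthesise the profile scanner C would have measured."""
    F = np.fft.fft(profile)
    Ha = np.fft.fft(np.fft.ifftshift(psf_a))
    Hc = np.fft.fft(np.fft.ifftshift(psf_c))
    obj = F * np.conj(Ha) / (np.abs(Ha) ** 2 + eps)  # nodule-like object function
    return np.real(np.fft.ifft(obj * Hc))
```

The regularisation term `eps` plays the role of noise suppression in the deconvolution; the method works when the target scanner's resolution is not finer than the source scanner's, so that frequencies lost to scanner A are also negligible for scanner C.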
Retinal Image Simulation of Subjective Refraction Techniques.
Perches, Sara; Collados, M Victoria; Ares, Jorge
2016-01-01
Refraction techniques make it possible to determine the most appropriate sphero-cylindrical lens prescription to achieve the best possible visual quality. Among these techniques, subjective refraction (i.e., refraction guided by the patient's responses) is the most commonly used approach. In this context, this paper's main goal is to present simulation software that implements, in a virtual manner, various subjective refraction techniques, including the Jackson Cross-Cylinder (JCC) test, all relying on the observation of computer-generated retinal images. This software has also been used to evaluate visual quality when the JCC test is performed on multifocal contact lens wearers. The results reveal the software's usefulness for simulating the retinal image quality that a particular visual compensation provides. Moreover, it can help to gain deeper insight into, and to improve, existing refraction techniques, and it can be used for simulated training.
Characterization of equipment for shaping and imaging hadron minibeams
NASA Astrophysics Data System (ADS)
Pugatch, V.; Brons, S.; Campbell, M.; Kovalchuk, O.; Llopart, X.; Martínez-Rovira, I.; Momot, Ie.; Okhrimenko, O.; Prezado, Y.; Sorokin, Yu.
2017-11-01
For feasibility studies of spatially fractionated hadron therapy, prototypes of equipment for hadron minibeam shaping and monitoring have been designed, built, and tested. The collimator design was based on Monte Carlo simulations (GATE v6.2). Slit and matrix collimators were used for minibeam shaping. Gafchromic films and Timepix micropixel detectors, in both hybrid and metal mode, were tested for measuring the hadron intensity distribution in the minibeams. The overall beam profile was measured by a metal microstrip detector. The performance of the minibeam shaping and monitoring equipment was characterized using low-energy protons at the KINR Tandem generator as well as high-energy carbon and oxygen ion beams at HIT (Heidelberg). The results demonstrate reliable performance of the tested equipment for shaping and imaging hadron minibeam structures.
NASA Astrophysics Data System (ADS)
Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Pfefer, T. Joshua
2017-03-01
As Photoacoustic Tomography (PAT) matures and undergoes clinical translation, objective performance test methods are needed to facilitate device development, regulatory clearance and clinical quality assurance. For mature medical imaging modalities such as CT, MRI, and ultrasound, tissue-mimicking phantoms are frequently incorporated into consensus standards for performance testing. A well-validated set of phantom-based test methods is needed for evaluating performance characteristics of PAT systems. To this end, we have constructed phantoms using a custom tissue-mimicking material based on PVC plastisol with tunable, biologically-relevant optical and acoustic properties. Each phantom is designed to enable quantitative assessment of one or more image quality characteristics including 3D spatial resolution, spatial measurement accuracy, ultrasound/PAT co-registration, uniformity, penetration depth, geometric distortion, sensitivity, and linearity. Phantoms contained targets including high-intensity point source targets and dye-filled tubes. This suite of phantoms was used to measure the dependence of performance of a custom PAT system (equipped with four interchangeable linear array transducers of varying design) on design parameters (e.g., center frequency, bandwidth, element geometry). Phantoms also allowed comparison of image artifacts, including surface-generated clutter and bandlimited sensing artifacts. Results showed that transducer design parameters create strong variations in performance including a trade-off between resolution and penetration depth, which could be quantified with our method. This study demonstrates the utility of phantom-based image quality testing in device performance assessment, which may guide development of consensus standards for PAT systems.
A study on scattering correction for γ-photon 3D imaging test method
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao
2018-03-01
A pair of 511 keV γ-photons, travelling in directions that differ by 180°, is generated during a positron annihilation. Their paths and energy information can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are the major factor degrading the imaging precision of the test method. This study proposes a single-scattering correction method for γ-photons from the perspective of spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits a detector pair. The range of scattering angles can then be calculated according to the energy window. Finally, the number of scattered γ-photons is estimated from the attenuation of the total scattered γ-photons along the moving path. The corrected γ-photons are obtained by deducting the scattered γ-photons from the original ones. Two experiments were conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed method can efficiently correct for scattered γ-photons and improve the test accuracy.
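The scattering-angle range implied by an energy window follows from the Compton relation for 511 keV photons, E'(θ) = 511 / (2 − cos θ) keV. A hedged sketch of that step (the 425 keV lower window edge used in the test is illustrative, not a value from the paper):

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy, equal to the annihilation photon energy

def scattered_energy(theta_rad: float) -> float:
    """Energy (keV) of a 511 keV photon after Compton scattering by theta."""
    return M_E_C2_KEV / (2.0 - math.cos(theta_rad))

def max_accepted_angle(e_low_kev: float) -> float:
    """Largest scattering angle (rad) whose scattered photon still falls
    above the lower edge of the detector's energy window."""
    c = 2.0 - M_E_C2_KEV / e_low_kev
    if c <= -1.0:
        return math.pi          # the window accepts every scattering angle
    return math.acos(min(c, 1.0))
```

Photons detected inside the window but with a geometrically admissible scattering angle are the ones the correction then deducts from the raw counts.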
NASA Astrophysics Data System (ADS)
Hess, M.; Robson, S.
2012-07-01
3D colour image data generated for the recording of small museum objects and archaeological finds are highly variable in quality and fitness for purpose. Whilst current technology is capable of extremely high quality outputs, there are currently no common standards or applicable guidelines, in either the museum or the engineering domain, suited to scientific evaluation, understanding, and tendering for 3D colour digital data. This paper first explains the rationale for, and requirements of, 3D digital documentation in museums. Second, it describes the design process, development, and use of a new portable test object suited to sensor evaluation and the provision of user-acceptance metrics. The test object is specifically designed for museums and heritage institutions and includes known surface and geometric properties which support quantitative and comparative imaging on different systems. The development of a supporting protocol will allow object reference data to be included in the data-processing workflow, with specific reference to conservation and curation.
Human perception testing methodology for evaluating EO/IR imaging systems
NASA Astrophysics Data System (ADS)
Graybeal, John J.; Monfort, Samuel S.; Du Bosq, Todd W.; Familoni, Babajide O.
2018-04-01
The U.S. Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) Perception Lab is tasked with supporting the development of sensor systems for the U.S. Army by evaluating human performance of emerging technologies. Typical research questions involve detection, recognition and identification as a function of range, blur, noise, spectral band, image processing techniques, image characteristics, and human factors. NVESD's Perception Lab provides an essential bridge between the physics of the imaging systems and the performance of the human operator. In addition to quantifying sensor performance, perception test results can also be used to generate models of human performance and to drive future sensor requirements. The Perception Lab seeks to develop and employ scientifically valid and efficient perception testing procedures within the practical constraints of Army research, including rapid development timelines for critical technologies, unique guidelines for ethical testing of Army personnel, and limited resources. The purpose of this paper is to describe NVESD Perception Lab capabilities, recent methodological improvements designed to align our methodology more closely with scientific best practice, and to discuss goals for future improvements and expanded capabilities. Specifically, we discuss modifying our methodology to improve training, to account for human fatigue, to improve assessments of human performance, and to increase experimental design consultation provided by research psychologists. Ultimately, this paper outlines a template for assessing human perception and overall system performance related to EO/IR imaging systems.
NASA Technical Reports Server (NTRS)
Roback, Vincent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.
2013-01-01
A second generation, compact, real-time, air-cooled 3-D imaging Flash Lidar sensor system, developed from a number of cutting-edge components from industry and NASA, is lab characterized and helicopter flight tested under the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project. The ALHAT project is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar incorporates a 3-D imaging video camera based on Indium-Gallium-Arsenide Avalanche Photo Diode and novel micro-electronic technology for a 128 x 128 pixel array operating at a video rate of 20 Hz, a high pulse-energy 1.06 µm Neodymium-doped: Yttrium Aluminum Garnet (Nd:YAG) laser, a remote laser safety termination system, high performance transmitter and receiver optics with one and five degrees field-of-view (FOV), enhanced onboard thermal control, as well as a compact and self-contained suite of support electronics housed in a single box and built around a PC-104 architecture to enable autonomous operations. The Flash Lidar was developed and then characterized at two NASA-Langley Research Center (LaRC) outdoor laser test range facilities both statically and dynamically, integrated with other ALHAT GN&C subsystems from partner organizations, and installed onto a Bell UH-1H Iroquois "Huey" helicopter at LaRC. The integrated system was flight tested at the NASA-Kennedy Space Center (KSC) on simulated lunar approach to a custom hazard field consisting of rocks, craters, hazardous slopes, and safe-sites near the Shuttle Landing Facility runway starting at slant ranges of 750 m. 
In order to evaluate different methods of achieving hazard detection, the lidar, in conjunction with the ALHAT hazard detection and GN&C system, operates both in a narrow 1-deg FOV raster-scanning mode, in which successive gimbaled images of the hazard field are mosaicked together, and in a wider 4.85-deg FOV staring mode, in which digital magnification, via a novel 3-D super-resolution technique, is used to effectively achieve the same spatial precision attained with the narrower FOV optics. The lidar generates calibrated and corrected 3-D range images of the hazard field in real time and passes them to the ALHAT Hazard Detection System (HDS), which stitches the images together to generate on-the-fly Digital Elevation Maps (DEMs) and identifies hazards and safe landing sites which the ALHAT GN&C system can then use to guide the host vehicle to a safe landing on the selected site. Results indicate that, for the KSC hazard field, the lidar operational range extends from 100 m to 1.35 km for a 30-degree line-of-sight angle, with a range precision as low as 8 cm, which permits hazards as small as 25 cm to be identified. Based on the Flash Lidar images, the HDS correctly found and reported safe sites in near real time during several of the flights. A follow-on field test, planned for 2013, seeks to complete the closing of the GN&C loop for fully autonomous operations on board the Morpheus robotic, rocket-powered, free-flyer test bed, in which the ALHAT system would scan the KSC hazard field (which was vetted during the present testing) and command the vehicle to land on one of the selected safe sites.
Motion effects in multistatic millimeter-wave imaging systems
NASA Astrophysics Data System (ADS)
Schiessl, Andreas; Ahmed, Sherif Sayed; Schmidt, Lorenz-Peter
2013-10-01
At airport security checkpoints, authorities are demanding improved personnel screening devices for increased security. Active mm-wave imaging systems deliver the high quality images needed for reliable automatic detection of hidden threats. As mm-wave imaging systems assume static scenarios, motion effects caused by the movement of persons during the screening procedure can degrade image quality, so a very short measurement time is required. Multistatic imaging array designs and fully electronic scanning in combination with digital beamforming offer short measurement time together with high resolution and high image dynamic range, which are critical parameters for imaging systems used for passenger screening. In this paper, operational principles of such systems are explained, and the performance of the imaging systems with respect to motion within the scenarios is demonstrated using mm-wave images of different test objects and of standing as well as moving persons. Electronic microwave imaging systems using multistatic sparse arrays are suitable for next-generation screening systems, which will support on-the-move screening of passengers.
More About The Video Event Trigger
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
1996-01-01
Report presents additional information about system described in "Video Event Trigger" (LEW-15076). Digital electronic system processes video-image data to generate trigger signal when image shows significant change, such as motion, or appearance, disappearance, change in color, brightness, or dilation of object. Potential uses include monitoring of hallways, parking lots, and other areas during hours when supposed unoccupied, looking for fires, tracking airplanes or other moving objects, identification of missing or defective parts on production lines, and video recording of automobile crash tests.
Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight
NASA Technical Reports Server (NTRS)
Suorsa, Raymond; Sridhar, Banavar
1991-01-01
A validation facility in use at the NASA Ames Research Center is described, aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera six-degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms, along with truth data, using powerful window-based visualization software.
NASA Astrophysics Data System (ADS)
Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin
2017-08-01
Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing prototypes of the FY-4 science algorithms, two science product algorithm testbeds, for imagers and sounders, have been developed by the scientists of the FY-4 Algorithm Working Group (AWG). Both testbeds, written in the FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully by using a proxy imager, the Himawari-8/Advanced Himawari Imager (AHI), and sounder data obtained from the Atmospheric InfraRed Sounder, thus demonstrating their robustness. In addition, in early 2016 the FY-4 AWG developed, based on the imager testbed, a near-real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, these robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.
Soft x-ray submicron imaging detector based on point defects in LiF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldacchini, G.; Bollanti, S.; Bonfigli, F.
2005-11-15
The use of lithium fluoride (LiF) crystals and films as imaging detectors for EUV and soft-x-ray radiation is discussed. EUV or soft-x-ray radiation can generate stable color centers, which emit intense visible fluorescence from the exposed areas. The high dynamic response of the material to the received dose and the atomic scale of the color centers make this detector extremely interesting for imaging at a spatial resolution which can be much smaller than the light wavelength. Experimental results of contact-microscopy imaging of test meshes demonstrate a resolution of the order of 400 nm. This high spatial resolution has been obtained in a wide field of view, up to several mm². Images obtained on different biological samples, as well as an investigation of a soft-x-ray laser beam, are presented. The behavior of the generated color-center density as a function of the deposited x-ray dose is analyzed, and the advantages of this new diagnostic technique for both coherent and noncoherent EUV sources, compared with CCD detectors, photographic films, and photoresists, are discussed.
Halo-free phase contrast microscopy (Conference Presentation)
NASA Astrophysics Data System (ADS)
Nguyen, Tan H.; Kandel, Mikhail E.; Shakir, Haadi M.; Best, Catherine; Do, Minh N.; Popescu, Gabriel
2017-02-01
The phase contrast (PC) method is one of the most impactful developments in the four-century history of microscopy. It allows intrinsic, nondestructive contrast of transparent specimens, such as live cells. However, PC is plagued by the halo artifact, a result of insufficient spatial coherence in the illumination field, which limits its applicability. We present a new approach for retrieving halo-free phase contrast microscopy (hfPC) images by upgrading the conventional PC microscope with an external interferometric module, which generates sufficient data for reversing the halo artifact. Measuring four independent intensity images, our approach first measures haloed phase maps of the sample. We then solve for the halo-free sample transmission function by using a physical model of image formation under partial spatial coherence. Using this halo-free sample transmission, we can numerically generate artifact-free PC images. Furthermore, the transmission can be used to obtain quantitative information about the sample, e.g., thickness given known refractive indices, or the dry mass of live cells during their cycles. We tested our hfPC method on various control samples (e.g., beads and pillars) and validated its potential for biological investigation by imaging live HeLa cells, red blood cells, and neurons.
Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.
Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B
2018-02-01
Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study of simultaneous PET/MR imaging was carried out in five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. 
This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared with current MR imaging-based AC approaches. © RSNA, 2017 Online supplemental material is available for this article.
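The per-tissue Dice coefficients reported above can be computed from discrete label maps as follows. This is a minimal sketch; the 0/1/2 encoding for air/soft tissue/bone is our assumption, not the paper's.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(a, b).sum() / denom)

def dice_per_class(pseudo_ct, ref_ct, labels=(0, 1, 2)):
    """Per-class Dice between two label maps, e.g. air/soft tissue/bone
    in a pseudo CT versus an acquired CT segmentation."""
    return {k: dice(pseudo_ct == k, ref_ct == k) for k in labels}
```

Applying this per class over each patient's volumes and averaging is how figures such as 0.971 (air) and 0.803 (bone) would be obtained.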
Automated synthesis, insertion and detection of polyps for CT colonography
NASA Astrophysics Data System (ADS)
Sezille, Nicolas; Sadleir, Robert J. T.; Whelan, Paul F.
2003-03-01
CT Colonography (CTC) is a new non-invasive colon imaging technique which has the potential to replace conventional colonoscopy for colorectal cancer screening. A novel system which facilitates automated detection of colorectal polyps at CTC is introduced. As exhaustive testing of such a system using real patient data is not feasible, more complete testing is achieved through the synthesis of artificial polyps and their insertion into real datasets. The polyp insertion is semi-automatic: candidate points are manually selected using a custom GUI, and suitable points are determined automatically from an analysis of the local neighborhood surrounding each candidate point. Local density and orientation information are used to generate polyps based on an elliptical model. Anomalies are identified from the modified dataset by analyzing the axial images. Detected anomalies are classified as potential polyps or natural features using 3D morphological techniques, and the final results are flagged for review. The system was evaluated using 15 scenarios. The sensitivity of the system was found to be 65%, with 34% false-positive detections. Automated diagnosis at CTC is therefore possible, and thorough testing is facilitated by augmenting real patient data with computer-generated polyps. Ultimately, automated diagnosis will enhance standard CTC and increase performance.
The analysis of optical-electro collimated light tube measurement system
NASA Astrophysics Data System (ADS)
Li, Zhenhui; Jiang, Tao; Cao, Guohua; Wang, Yanfei
2005-12-01
A new type of collimated light tube (CLT) is presented in this paper, and its structure and analysis are described in detail. The conventional reticle and resolution test board are replaced by an opto-electronic graphics generator, a DLP (Digital Light Processor). The DLP displays arbitrary computer-controlled test patterns, and its emitting surface lies at the focus of the CLT. The light passes through the CLT and the product under test, and the image of the target is captured by a variable-focus CCD camera. The image is then processed by computer to obtain basic optical parameters such as optical aberration and image tilt. At the same time, a motorized translation stage moves the DLP to simulate targets at finite distances, with a grating ruler recording the DLP's displacement. The key technique is opto-electronic auto-focus: the best imaging quality is obtained by moving a six-axis motorized positioning stage. Several key problems are solved in this device, including target generation, the structure of the receiving system, and optical matching.
Second harmonic generation microscopy of the living human cornea
NASA Astrophysics Data System (ADS)
Artal, Pablo; Ávila, Francisco; Bueno, Juan
2018-02-01
Second Harmonic Generation (SHG) microscopy provides high-resolution structural imaging of the corneal stroma without the need for labelling techniques. This powerful tool had never before been applied to living human eyes. Here, we present a new compact SHG microscope specifically developed to image the structural organization of the corneal lamellae in living healthy human volunteers. The research prototype incorporates a long-working-distance dry objective that allows non-contact three-dimensional SHG imaging of the cornea. Safety and effectiveness of the system were first tested in fresh ex vivo eyes. The maximum average power of the illumination laser was 20 mW, more than 10 times below the maximum permissible exposure (according to ANSI Z136.1-2000). The instrument was successfully employed to obtain non-contact and non-invasive SHG images of the living human eye within well-established light safety limits. This represents the first recording of in vivo SHG images of the human cornea using a compact multiphoton microscope. It might become an important tool in ophthalmology for the early diagnosis and tracking of ocular pathologies.
Quantitative evaluation of skeletal muscle defects in second harmonic generation images.
Liu, Wenhua; Raben, Nina; Ralston, Evelyn
2013-02-01
Skeletal muscle pathologies cause irregularities in the normally periodic organization of the myofibrils. Objective grading of muscle morphology is necessary to assess muscle health, compare biopsies, and evaluate treatments and the evolution of disease. To facilitate such quantitation, we have developed a fast, sensitive, automatic imaging analysis software. It detects major and minor morphological changes by combining texture features and Fourier transform (FT) techniques. We apply this tool to second harmonic generation (SHG) images of muscle fibers which visualize the repeating myosin bands. Texture features are then calculated by using a Haralick gray-level cooccurrence matrix in MATLAB. Two scores are retrieved from the texture correlation plot by using FT and curve-fitting methods. The sensitivity of the technique was tested on SHG images of human adult and infant muscle biopsies and of mouse muscle samples. The scores are strongly correlated to muscle fiber condition. We named the software MARS (muscle assessment and rating scores). It is executed automatically and is highly sensitive even to subtle defects. We propose MARS as a powerful and unbiased tool to assess muscle health.
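The texture-correlation analysis at the heart of MARS can be sketched by computing the Haralick correlation of a gray-level co-occurrence matrix (GLCM) as a function of pixel offset; on a periodic myosin-band pattern the resulting curve oscillates with the band period, and its regularity reflects muscle condition. The compact NumPy implementation below is our stand-in for the MATLAB routine; it assumes quantized integer images and non-negative offsets.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for one pixel
    offset (dx, dy >= 0); img must hold integers in [0, levels)."""
    p = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    a = img[: h - dy, : w - dx]        # reference pixels
    b = img[dy:, dx:]                  # neighbors at offset (dx, dy)
    np.add.at(p, (a.ravel(), b.ravel()), 1.0)
    p = p + p.T                        # symmetrize
    return p / p.sum()

def glcm_correlation(p):
    """Haralick correlation feature of a normalized GLCM."""
    i = np.arange(p.shape[0], dtype=float)
    px = p.sum(axis=1)                 # marginal (both axes equal: symmetric)
    mu = (i * px).sum()
    var = ((i - mu) ** 2 * px).sum()
    if var == 0:
        return 1.0
    ii, jj = np.meshgrid(i, i, indexing="ij")
    return float(((ii - mu) * (jj - mu) * p).sum() / var)
```

Evaluating `glcm_correlation(glcm(img, levels, dx=d))` over a range of `d` produces the texture correlation plot from which FT and curve-fitting scores can then be extracted.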
NASA Astrophysics Data System (ADS)
Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.
2014-04-01
The aim of the project was to develop software that extracts the characteristics of a greenhouse tomato from its image. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program processes pictures in JPEG format, acquires statistical information about each picture, and exports it to an external file. The software is intended for batch analysis of the collected research material, with the obtained information saved as a CSV file. It computes 33 independent parameters to describe each tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyze other fruits and vegetables of spherical shape.
Meteor localization via statistical analysis of spatially temporal fluctuations in image sequences
NASA Astrophysics Data System (ADS)
Kukal, Jaromír; Klimt, Martin; Šihlík, Jan; Fliegel, Karel
2015-09-01
Meteor detection is one of the most important procedures in astronomical imaging. A meteor's path in the Earth's atmosphere is traditionally reconstructed from a double-station video observation system generating 2D image sequences. However, atmospheric turbulence and other factors cause spatio-temporal fluctuations of the image background, which makes localization of the meteor path more difficult. Our approach is based on nonlinear preprocessing of the image intensity using the Box-Cox transform, with the logarithmic transform as its particular case. The transformed image sequences are then differentiated along the discrete coordinates to obtain a statistical description of the sky background fluctuations, which can be modeled by a multivariate normal distribution. After verification and hypothesis testing, we use the statistical model for outlier detection. While isolated outlier points are ignored, a compact cluster of outliers indicates the presence of a meteoroid after ignition.
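The preprocessing chain described (logarithmic intensity transform, differencing along discrete coordinates, normal background model, outlier thresholding) can be sketched as follows. The reduction to a single scalar z-score and the threshold value are our simplifications of the paper's multivariate model.

```python
import numpy as np

def detect_outliers(frames, eps=1.0, z_thresh=5.0):
    """Log-transform the sequence (Box-Cox in the lambda -> 0 limit),
    difference consecutive frames, fit a normal background model to the
    fluctuations, and flag pixels that depart from it."""
    x = np.log(np.asarray(frames, dtype=float) + eps)
    d = np.diff(x, axis=0)                 # temporal differences
    mu, sigma = d.mean(), d.std()          # normal background fit
    z = (d - mu) / sigma
    return np.abs(z) > z_thresh            # boolean outlier mask per frame pair
```

A subsequent clustering step would then discard isolated flagged pixels and keep only compact clusters, which mark a candidate meteor track.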
Face recognition based on symmetrical virtual image and original training image
NASA Astrophysics Data System (ADS)
Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao
2018-02-01
In face representation-based classification methods, a high recognition rate can be obtained if a face has enough available training samples. However, in practical applications we only have a limited number of training samples. To obtain enough training samples, many methods simultaneously use the original training samples and corresponding virtual samples to strengthen the representation of the test sample. One approach directly uses the original training samples and the corresponding mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the integration of the original training and mirror samples might not represent the test sample well. To tackle this problem, we propose in this paper a novel method to obtain a kind of virtual sample generated by averaging the original training samples and the corresponding mirror samples. The original training samples and the virtual samples are then integrated to recognize the test sample. Experimental results on five face databases show that the proposed method is able to partly overcome the challenges posed by the various poses, facial expressions, and illuminations of the original face images.
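The virtual-sample construction described, averaging each training image with its horizontal mirror, is simple to express. A minimal sketch (the function names are ours):

```python
import numpy as np

def virtual_sample(img):
    """Virtual training sample: the average of a face image and its
    horizontal mirror. The result is exactly left-right symmetric,
    which softens pose asymmetry in the training set."""
    img = np.asarray(img, dtype=float)
    return (img + np.fliplr(img)) / 2.0

def augmented_training_set(samples):
    """Original samples plus their averaged virtual counterparts,
    the combined set used to represent the test sample."""
    return list(samples) + [virtual_sample(s) for s in samples]
```

Because the virtual sample is symmetric by construction, it complements asymmetric originals when the test face is nearly symmetrical, which is the failure case the paper targets.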
Feasibility of fabricating personalized 3D-printed bone grafts guided by high-resolution imaging
NASA Astrophysics Data System (ADS)
Hong, Abigail L.; Newman, Benjamin T.; Khalid, Arbab; Teter, Olivia M.; Kobe, Elizabeth A.; Shukurova, Malika; Shinde, Rohit; Sipzner, Daniel; Pignolo, Robert J.; Udupa, Jayaram K.; Rajapakse, Chamith S.
2017-03-01
Current methods of bone graft treatment for critical-size bone defects can lead to several clinical complications, such as the limited bone available for autografts, non-matching bone structure, a lack of strength which can compromise the patient's skeletal system, and, in the case of allografts, sterilization processes that can prevent osteogenesis. We intend to overcome these disadvantages by generating a patient-specific 3D-printed bone graft guided by high-resolution medical imaging. Our synthetic model allows us to customize the graft to the patient's macro- and microstructure and to correct any structural deficiencies in the re-meshing process. These 3D-printed models can presumptively serve as the scaffolding for human mesenchymal stem cell (hMSC) engraftment in order to facilitate bone growth. We performed high-resolution CT imaging of a cadaveric human proximal femur at 0.030-mm isotropic voxels. We used these images to generate a 3D computer model that mimics the bone geometry from the micro to the macro scale, represented in STereoLithography (STL) format. These models were then converted to a format that can be interpreted by the 3D printer. To assess how much of the microstructure was replicated, the 3D-printed models were re-imaged using micro-CT at 0.025-mm isotropic voxels and compared to the original high-resolution CT images used to generate the 3D model in 32 sub-regions. We found a strong correlation between the 3D-printed bone volume and the volume of bone in the original images used for 3D printing (R² = 0.97). We expect to further refine our approach with additional testing to create a viable synthetic bone graft with clinical functionality.
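The reported agreement (R² = 0.97 across 32 sub-regions) corresponds to a computation like the following. The threshold-based volume measure is our simplification of the authors' bone-volume analysis; the threshold value would be chosen per modality.

```python
import numpy as np

def bone_volume_fraction(region, threshold):
    """Fraction of voxels above a bone-density threshold in a sub-region."""
    return float((np.asarray(region) > threshold).mean())

def r_squared(x, y):
    """Coefficient of determination of the linear relation between two
    volume measurements (here: printed vs. original sub-region volumes)."""
    r = np.corrcoef(x, y)[0, 1]
    return float(r * r)
```

Applied to the 32 paired sub-region volumes from the micro-CT of the print and the original CT, `r_squared` quantifies how faithfully the printer reproduced the microstructure.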
Modelling the degree of porosity of the ceramic surface intended for implants.
Stach, Sebastian; Kędzia, Olga; Garczyk, Żaneta; Wróbel, Zygmunt
2018-05-18
The main goal of the study was to develop a model of the degree of surface porosity of a biomaterial intended for implants. The model was implemented using MATLAB. A computer simulation was carried out based on the developed model, which resulted in a two-dimensional image of the modelled surface. Then, an algorithm for computerised image analysis of the surface of the actual oxide bioceramic layer was developed, which enabled determining its degree of porosity. In order to obtain the confocal micrographs of a few areas of the biomaterial, measurements were performed using the LEXT OLS4000 confocal laser microscope. The image analysis was carried out using MountainsMap Premium and SPIP. The obtained results allowed determining the input parameters of the program, on the basis of which porous biomaterial surface images were generated. The last part of the study involved verification of the developed model. The modelling method was tested by comparing the obtained results with the experimental data obtained from the analysis of surface images of the test material.
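A degree-of-porosity computation on a segmented surface image can be illustrated with a minimal sketch; the synthetic pore generator and the threshold below are illustrative assumptions, not the MATLAB model developed in the study:

```python
import numpy as np

def degree_of_porosity(gray, threshold):
    """Fraction of pixels classified as pores (intensity below
    `threshold`) in a 2-D grayscale surface image."""
    pores = gray < threshold
    return pores.sum() / gray.size

def synthetic_porous_surface(shape, n_pores, radius, rng):
    """Generate a toy surface image with circular pores
    (0 = pore, 1 = solid) at random positions."""
    img = np.ones(shape)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for cy, cx in rng.integers(0, shape, size=(n_pores, 2)):
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 0.0
    return img
```

In practice the input would be a thresholded confocal micrograph rather than a generated image; the porosity measure itself is the same area ratio either way.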
An atmospheric turbulence and telescope simulator for the development of AOLI
NASA Astrophysics Data System (ADS)
Puga, Marta; López, Roberto; King, David; Oscoz, Alejandro
2014-08-01
AOLI, the Adaptive Optics Lucky Imager, is the next generation of extremely high resolution instruments in the optical range, combining two of the most promising techniques: adaptive optics and lucky imaging. The possibility of reaching fainter objects at maximum resolution implies a better use of the weak energy in each lucky image. AOLI aims to achieve this by using an adaptive optics system to reduce the dispersion that seeing causes on the spot, thereby increasing the number of optimal images to accumulate and maximizing the efficiency of the lucky imaging technique. The complexity of developments in hardware, control, and software for on-site telescope tests calls for a system to simulate the telescope performance. This paper outlines the requirements and a concept/preliminary design for the William Herschel Telescope (WHT) and atmospheric turbulence simulator. The design consists of a reproduction of the telescope pupil, a variable-intensity point source, phase plates, and a focal plane mask to assist in the alignment, diagnostics and calibration of the AOLI wavefront sensor, AO loop and science detectors, as well as enabling stand-alone test operation of AOLI.
X-ray phase scanning setup for non-destructive testing using Talbot-Lau interferometer
NASA Astrophysics Data System (ADS)
Bachche, S.; Nonoguchi, M.; Kato, K.; Kageyama, M.; Koike, T.; Kuribayashi, M.; Momose, A.
2016-09-01
X-ray grating interferometry has great potential for X-ray phase imaging over conventional X-ray absorption imaging, which does not provide significant contrast for weakly absorbing objects and soft biological tissues. X-ray Talbot and Talbot-Lau interferometers, which are composed of transmission gratings and measure differential X-ray phase shifts, have gained popularity because they operate with polychromatic beams. In X-ray radiography, especially for non-destructive testing in industrial applications, the feasibility of continuous sample scanning has not yet been fully explored. A scanning setup is frequently advantageous compared to direct 2D static image acquisition in terms of field of view, exposure time, illuminating radiation, etc. This paper demonstrates an efficient scanning setup for grating-based X-ray phase imaging using a laboratory-based X-ray source. An apparatus consisting of an X-ray source that emits X-rays vertically, optical gratings, and a photon-counting detector was used, with which objects moving continuously across the field of view, as on a conveyor-belt system, can be imaged. The imaging performance of the phase scanner was tested by scanning a long, continuously moving sample at a speed of 5 mm/s, and absorption, differential-phase and visibility images were generated by processing the non-uniform moiré movie with our specially designed phase measurement algorithm. A brief discussion of the feasibility of the phase scanner with the scanning setup approach, including X-ray phase imaging performance, is reported. The successful results suggest a breakthrough for non-destructively scanning objects that move continuously on a conveyor-belt system using X-ray phase imaging.
Delisser, Peter J; Carwardine, Darren
2017-11-29
Diagnostic imaging technology is becoming more advanced and widely available to veterinary patients with the growing popularity of veterinary-specific computed tomography (CT) and magnetic resonance imaging (MRI). Veterinary students must, therefore, be familiar with these technologies and understand the importance of sound anatomic knowledge for interpretation of the resultant images. Anatomy teaching relies heavily on visual perception of structures and their function. In addition, visual spatial ability (VSA) positively correlates with anatomy test scores. We sought to assess the impact of including more diagnostic imaging, particularly CT/MRI, in the teaching of veterinary anatomy on the students' perceived usefulness and ease of understanding of the content. Finally, we investigated the relationship of survey answers to the students' inherent baseline VSA, measured by a standard Mental Rotations Test. Students viewed diagnostic imaging as a useful inclusion that provided clear links to clinical relevance, thus improving the students' perceived benefits of its use. Use of CT and MRI images was not viewed as more beneficial, more relevant, or more useful than the use of radiographs. Furthermore, students felt that the usefulness of CT/MRI inclusion was limited by the lack of prior formal instruction on the basics of CT/MRI image generation and interpretation. The addition of learning resources labeling relevant anatomy in tomographic images would significantly improve the utility of this novel teaching resource. The present study failed to find any correlation between student perceptions of diagnostic imaging in anatomy teaching and their VSA.
SOFIA Science Instruments: Commissioning, Upgrades and Future Opportunities
NASA Technical Reports Server (NTRS)
Smith, Erin C.
2014-01-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) is the world's largest airborne observatory, featuring a 2.5 meter telescope housed in the aft section of a Boeing 747SP aircraft. SOFIA's current instrument suite includes: FORCAST (Faint Object InfraRed CAmera for the SOFIA Telescope), a 5-40 µm dual band imager/grism spectrometer developed at Cornell University; HIPO (High-speed Imaging Photometer for Occultations), a 0.3-1.1 micron imager built by Lowell Observatory; FLITECAM (First Light Infrared Test Experiment CAMera), a 1-5 micron wide-field imager/grism spectrometer developed at UCLA; FIFI-LS (Far-Infrared Field-Imaging Line Spectrometer), a 42-210 micron IFU grating spectrograph completed by the University of Stuttgart; and EXES (Echelon-Cross-Echelle Spectrograph), a 5-28 micron high-resolution spectrometer being completed by UC Davis and NASA Ames. A second-generation instrument, HAWC+ (High-resolution Airborne Wideband Camera), is a 50-240 micron imager being upgraded at JPL to add polarimetry and new detectors developed at GSFC. SOFIA will continually update its instrument suite with new instrumentation, technology demonstration experiments and upgrades to the existing instrument suite. This paper details instrument capabilities and status as well as plans for future instrumentation, including the call for proposals for third-generation SOFIA science instruments.
Rhoades, Glendon W; Belev, George S; Chapman, L Dean; Wiebe, Sheldon P; Cooper, David M; Wong, Adelaine TF; Rosenberg, Alan M
2015-01-01
The objective of this project was to develop and test a new technology for imaging growing joints by means of diffraction-enhanced imaging (DEI) combined with CT and using a synchrotron radiation source. DEI-CT images of an explanted 4-wk-old piglet stifle joint were acquired by using a 40-keV beam. The series of scanned slices was later ‘stitched’ together, forming a 3D dataset. High-resolution DEI-CT images demonstrated fine detail within all joint structures and tissues. Striking detail of vasculature traversing between bone and cartilage, a characteristic of growing but not mature joints, was demonstrated. This report documents for the first time that DEI combined with CT and a synchrotron radiation source can generate more detailed images of intact, growing joints than can currently available conventional imaging modalities. PMID:26310464
Automatic face recognition in HDR imaging
NASA Astrophysics Data System (ADS)
Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.
2014-05-01
The growing popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping methods for appropriate visualization on conventional, inexpensive LDR displays. These visualization methods can produce markedly different renderings, raising several privacy-intrusion issues. In fact, some visualization methods allow perceptual recognition of the individuals, while others obscure identity entirely. Given that perceptual recognition is sometimes possible, a natural question arises: how will computer-based recognition perform on tone-mapped images? In this paper, automatic face recognition based on sparse representation is tested on images produced by common tone-mapping operators applied to HDR images, and its face-identification performance is described. Furthermore, typical LDR images are used for the face recognition training.
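To make the tone-mapping step concrete, here is a minimal sketch of one common global operator (a Reinhard-style mapping); the record does not say which operators the authors tested, so this is purely illustrative:

```python
import numpy as np

def reinhard_tonemap(luminance, a=0.18):
    """Reinhard-style global tone mapping: scale HDR luminance by the
    key value `a` over the log-average luminance, then compress to
    [0, 1) with L/(1+L). Different operators can produce renderings
    different enough to change recognition results."""
    eps = 1e-6  # avoid log(0) on black pixels
    l_avg = np.exp(np.mean(np.log(luminance + eps)))
    L = a * luminance / l_avg
    return L / (1.0 + L)
```

The mapping is monotone, so pixel ordering is preserved even though absolute contrast, which recognition features depend on, is strongly compressed at high luminance.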
Next-generation spectrometer aids study of Mediterranean
NASA Astrophysics Data System (ADS)
Abrams, M. J.; Bianchi, R.; Buongiorno, M. F.
The Mediterranean region's highly diverse topography, lithology, soils, microclimates, vegetation, and seawater result in a variety of ecosystems. Remote sensing techniques, especially imaging spectrometry, have the potential to provide data for environmental studies on a regional scale in this part of the world. A test deployment of the multispectral infrared and visible imaging spectrometer (MIVIS), a new 102-channel imaging spectrometer, was carried out in Sicily in July 1994. Active volcanoes were surveyed to differentiate volcanic products and determine SO2 emissions in plumes (Figure 1), coastlines were imaged jointly with LIDAR to study pollution, ecosystems at several ocean areas were monitored, vegetated areas were imaged to determine the health of the biota, and archeological sites were studied to reconstruct ancient land use practices. For sites, refer to Figure 2.
1993-11-01
Despite the emergence of several alternative angiographic imaging techniques (i.e., magnetic resonance imaging, computed tomography, and ultrasound angiography), x-ray angiography remains the predominant vascular imaging modality, generating over $4 billion in revenue a year in U.S. hospitals. In this issue, we provide a brief overview of the various angiographic imaging techniques, comparing them with x-ray angiography, and discuss the clinical aspects of x-ray vascular imaging, including catheterization and clinical applications. Clinical, cost, usage, and legal issues related to contrast media are discussed in "Contrast Media: Ionic versus Nonionic and Low-osmolality Agents." We also provide a technical overview and selection guidance for a basic x-ray angiography imaging system, including the gantry and table system, x-ray generator, x-ray tube, image intensifier, video camera and display monitors, image-recording devices, and digital acquisition and processing systems. This issue also contains our Evaluation of the GE Advantx L/C cardiac angiography system and the GE Advantx AFM general-purpose angiography system; the AFM can be used for peripheral, pulmonary, and cerebral vascular studies, among others, and can also be configured for cardiac angiography. Many features of the Advantx L/C system, including generator characteristics and ease of use, also apply to the Advantx AFM as configured for cardiac angiography. Our ratings are based on the systems' ability to provide the best possible image quality for diagnosis and therapy while minimizing patient and personnel exposure to radiation, as well as their ability to minimize operator effort and inconvenience. Both units are rated Acceptable. 
In the Guidance Section, "Radiation Safety and Protection," we discuss the importance of keeping patient and personnel exposures to radiation as low as reasonably possible, especially in procedures such as cardiac catheterization, angiographic imaging for special procedures, and interventional radiology, which produce among the highest radiation exposures of all x-ray imaging techniques. We also provide recommendations for minimizing personnel and patient exposures to radiation. For more information about x-ray angiography systems and similar devices, as well as for additional perspectives on which we based this study, see the following Health Devices Evaluations: "Mobile C-arm Units" (19[8], August 1990) and "Noninvasive Electronic Quality Control Devices for X-ray Generator Testing" (21[6-7], June-July 1992).
Choi, Jaewon; Jung, Hyung-Sup; Yun, Sang-Ho
2015-03-09
As the aerospace industry grows, images obtained from Earth observation satellites have been successfully used in various fields. Specifically, the demand for high-resolution (HR) optical images is gradually increasing, and hence the generation of high-quality mosaic images is becoming an important issue. In this paper, we propose an efficient mosaic algorithm for HR optical images that differ significantly due to seasonal change. The algorithm includes three main steps: (1) seamline extraction from gradient magnitude and seam images; (2) histogram matching; and (3) image feathering. Eleven Kompsat-2 images characterized by seasonal variations are used for the performance validation of the proposed method. The results of the performance test show that the proposed method effectively mosaics adjacent Kompsat-2 images including severe seasonal changes. Moreover, the results reveal that the proposed method is applicable to HR optical images such as GeoEye, IKONOS, QuickBird, RapidEye, SPOT, WorldView, etc.
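Steps (2) and (3) can be sketched as follows; this is a minimal illustration using standard CDF-based histogram matching and a linear feather over an overlap strip, not the authors' exact implementation:

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so their empirical CDF matches the
    reference image's CDF (monotone lookup via interpolation)."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)

def linear_feather(left, right):
    """Blend two overlapping strips with linearly varying weights
    (full weight on `left` at the left edge, on `right` at the right)."""
    w = np.linspace(1.0, 0.0, left.shape[1])
    return left * w + right * (1.0 - w)
```

In a real mosaic the feathering would follow the extracted seamline rather than a straight column, but the weighting idea is the same.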
Automatic retinal interest evaluation system (ARIES).
Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang
2014-01-01
In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. However, in practice, retinal image quality is a major concern, as automatic systems that do not account for degraded image quality are likely to generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both the quality of the whole image and that of focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high-level image quality measures (HIQM) to perform image quality assessment, and achieves areas under curve (AUCs) of 0.958 and 0.987 for the whole image and the optic disk region, respectively, in a testing dataset of 370 images. ARIES acts as a form of automatic quality control that ensures good quality images are used for processing, and can also be used to alert operators to poor quality images at the time of acquisition.
Effect of different runway size on pilot performance during simulated night landing approaches.
DOT National Transportation Integrated Search
1981-02-01
In Experiment I, three pilots flew simulated approaches and landings in a fixed-base simulator with a computer-generated-image visual display. Practice approaches were flown with an 8,000-ft-long runway that was either 75, 150, or 300 ft wide; test a...
Kokki, Tommi; Sipilä, Hannu T; Teräs, Mika; Noponen, Tommi; Durand-Schaefer, Nicolas; Klén, Riku; Knuuti, Juhani
2010-01-01
In PET imaging, respiratory and cardiac contraction motions interfere with imaging of the heart. The aim was to develop and evaluate a dual gating method for improving the detection of small targets of the heart. The method utilizes two independent triggers, which are sent periodically into the list mode data based on the respiratory and ECG cycles. An algorithm for generating dual gated segments from list mode data was developed. The test measurements showed that rotational and axial movements of a point source can be separated spatially into different segments with well-defined borders. The effect of dual gating on the detection of small moving targets was tested with a moving heart phantom. Dual gated images showed 51% elimination (3.6 mm out of 7.0 mm) of the contraction motion of a hot spot (diameter 3 mm) and 70% elimination (14 mm out of 20 mm) of respiratory motion. The averaged activity value of the hot spot increased by 89% compared to non-gated images. A patient study of suspected cardiac sarcoidosis shows a sharper spatial myocardial uptake profile and improved detection of small myocardial structures such as papillary muscles. The dual gating method improves detection of small moving targets in a phantom and is feasible in clinical situations.
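The phase-based assignment of list-mode events to dual gates can be sketched as below; the trigger handling and bin counts are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def dual_gate(event_times, ecg_triggers, resp_triggers,
              n_cardiac=8, n_resp=4):
    """Assign each list-mode event a (cardiac, respiratory) gate index
    from its fractional phase within the enclosing trigger-to-trigger
    cycle. Events outside the trigger span are clipped to the
    nearest cycle."""
    def phase_bins(times, trig, n):
        idx = np.searchsorted(trig, times, side='right') - 1
        idx = np.clip(idx, 0, len(trig) - 2)
        frac = (times - trig[idx]) / (trig[idx + 1] - trig[idx])
        return np.clip((frac * n).astype(int), 0, n - 1)
    return (phase_bins(event_times, ecg_triggers, n_cardiac),
            phase_bins(event_times, resp_triggers, n_resp))
```

Histogramming events per (cardiac, respiratory) pair then yields the dual-gated segments from which the motion-frozen images are reconstructed.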
Ober, Christopher P
Second-year veterinary students are often challenged by concepts in veterinary radiology, including the fundamentals of image quality and generation of differential lists. Four card games were developed to provide veterinary students with a supplemental means of learning about radiographic image quality and differential diagnoses in urogenital imaging. Students played these games and completed assessments of their subject knowledge before and after playing. The hypothesis was that playing each game would improve students' understanding of the topic area. For each game, students who played the game performed better on the post-test than students who did not play that game (all p<.01). For three of the four games, students who played each respective game demonstrated significant improvement in scores between the pre-test and the post-test (p<.002). The majority of students expressed that the games were both helpful and enjoyable. Educationally focused games can help students learn classroom and laboratory material. However, game design is important, as the game using the most passive learning process also demonstrated the weakest results. In addition, based on participants' comments, the games were very useful in improving student engagement in the learning process. Thus, use of games in the classroom and laboratory setting seems to benefit the learning process.
Computational analysis of Pelton bucket tip erosion using digital image processing
NASA Astrophysics Data System (ADS)
Shrestha, Bim Prasad; Gautam, Bijaya; Bajracharya, Tri Ratna
2008-03-01
Erosion of hydro turbine components by sand-laden river water is one of the biggest problems in the Himalayas. Even with sediment trapping systems, complete removal of fine sediment from water is impossible and uneconomical; hence most turbine components in Himalayan rivers are exposed to sand-laden water and subject to erosion. Pelton buckets, which are widely used in hydropower generation plants, erode in the continuous presence of sand particles in water. The subsequent erosion causes an increase in splitter thickness, which is theoretically supposed to be zero. This increase in splitter thickness gives rise to back-hitting of water, followed by a decrease in turbine efficiency. This paper describes the process of measuring sharp edges such as the bucket tip using digital image processing. An image of each bucket is captured before the bucket is run for 72 hours; the sand concentration in the water hitting the bucket is closely controlled and monitored. Afterwards, an image of the test bucket is taken under the same conditions. The process is repeated 10 times. The digital image processing applied here encompasses processes that perform image enhancement in both the spatial and frequency domains, in addition to processes that extract attributes from images, up to and including measurement of the splitter's tip. Image processing was done on the MATLAB 6.5 platform. The results show that edge erosion of sharp edges can be accurately detected and the erosion profile generated using image processing techniques.
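Measuring splitter-tip thickness from a thresholded image might look like the following minimal sketch; the 1-D profile extraction and the threshold are illustrative assumptions, not the authors' MATLAB pipeline:

```python
import numpy as np

def splitter_thickness(profile, threshold):
    """Thickness (in pixels) of the splitter edge, taken as the
    longest contiguous run of above-threshold pixels along a 1-D
    intensity profile sampled perpendicular to the edge."""
    above = profile > threshold
    best = run = 0
    for a in above:
        run = run + 1 if a else 0
        best = max(best, run)
    return best
```

Repeating the measurement on profiles at several positions along the splitter, before and after each 72-hour run, would produce the erosion profile described above (pixel counts would still need calibration to millimetres).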
Supervised learning of tools for content-based search of image databases
NASA Astrophysics Data System (ADS)
Delanoy, Richard L.
1996-03-01
A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.
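The mistake-driven learning loop can be loosely illustrated with a perceptron-style update; TIM's functional templates are considerably richer than a linear filter, so this is only a conceptual stand-in:

```python
import numpy as np

def mistake_driven_update(w, features, label, lr=0.1):
    """One perceptron-style correction: when the user flags a
    misclassified pixel/region (label +1 or -1), nudge the linear
    matching template toward (or away from) its feature vector.
    A loose stand-in for TIM's functional-template refinement."""
    pred = 1 if w @ features > 0 else -1
    if pred != label:
        w = w + lr * label * features
    return w
```

The key property mirrored here is that only user-identified mistakes drive learning: correctly matched regions leave the template unchanged.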
Satellite angular velocity estimation based on star images and optical flow techniques.
Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele
2013-09-25
An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus also be used to deliver angular rate information when attitude determination is not possible, as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along the boresight is about one order of magnitude worse than for the other two components.
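The least-squares step can be sketched from the rigid-rotation model du/dt = -ω × u for a star in direction u; the formulation below is a standard textbook approach under that model, not necessarily the authors' exact one:

```python
import numpy as np

def estimate_angular_velocity(directions, flows):
    """Least-squares estimate of the angular velocity vector from unit
    star directions u_i and their measured rates du_i/dt, using the
    rigid-rotation model du/dt = -omega x u (omega in sensor frame)."""
    A_rows, b_rows = [], []
    for u, du in zip(directions, flows):
        # du/dt = -omega x u = u x omega = [u]_x omega,
        # with [u]_x the skew-symmetric matrix of u
        ux, uy, uz = u
        skew = np.array([[0.0, -uz,  uy],
                         [ uz, 0.0, -ux],
                         [-uy,  ux, 0.0]])
        A_rows.append(skew)
        b_rows.append(du)
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega
```

Stacking all stars into one over-determined system is also what makes the boresight component harder to observe: near the boresight, rotation about it produces very small image motion, which is consistent with the accuracy asymmetry reported above.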
Improved grid-noise removal in single-frame digital moiré 3D shape measurement
NASA Astrophysics Data System (ADS)
Mohammadi, Fatemeh; Kofman, Jonathan
2016-11-01
A single-frame grid-noise removal technique was developed for application in single-frame digital-moiré 3D shape measurement. The ability of the stationary wavelet transform (SWT) to prevent oscillation artifacts near discontinuities, and the ability of the Fourier transform (FFT) applied to wavelet coefficients to separate grid-noise from useful image information, were combined in a new technique, SWT-FFT, to remove grid-noise from moiré-pattern images generated by digital moiré. In comparison to previous grid-noise removal techniques in moiré, SWT-FFT avoids the requirement for mechanical translation of optical components and capture of multiple frames, to enable single-frame moiré-based measurement. Experiments using FFT, Discrete Wavelet Transform (DWT), DWT-FFT, and SWT-FFT were performed on moiré-pattern images containing grid noise, generated by digital moiré, for several test objects. SWT-FFT had the best performance in removing high-frequency grid-noise, both straight and curved lines, minimizing artifacts, and preserving the moiré pattern without blurring and degradation. SWT-FFT also had the lowest noise amplitude in the reconstructed height and lowest roughness index for all test objects, indicating best grid-noise removal in comparison to the other techniques.
Smart Image Enhancement Process
NASA Technical Reports Server (NTRS)
Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)
2012-01-01
Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
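The decision cascade can be paraphrased as follows; the actual contrast/lightness/sharpness measures and enhancement operators are not reproduced here and are passed in as caller-supplied callables, so this is only a sketch of the control flow:

```python
def smart_enhance(image, classify, enhance, sharpness, sharpen,
                  good_score=0.5, sharp_threshold=0.5):
    """Decision cascade paraphrased from the abstract.
    `classify(img)` -> (is_turbid, merged_contrast_lightness_score);
    `enhance`, `sharpness` and `sharpen` are supplied by the caller
    (the patent's actual measures are not reproduced here)."""
    is_turbid, _ = classify(image)
    if is_turbid:
        selected = enhance(image)          # first enhanced image
    else:
        selected = image
        for _ in range(2):                 # second, then third pass
            if classify(selected)[1] >= good_score:
                break
            selected = enhance(selected)
    # sharpen only the finally selected image, and only if not sharp
    if sharpness(selected) < sharp_threshold:
        selected = sharpen(selected)
    return selected
```

Note that sharpness is evaluated once, on whichever image survives the contrast/lightness cascade, exactly as the text specifies.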
NASA Astrophysics Data System (ADS)
Moorhead, Ian R.; Gilmore, Marilyn A.; Houlbrook, Alexander W.; Oxford, David E.; Filbee, David R.; Stroud, Colin A.; Hutchings, G.; Kirk, Albert
2001-09-01
Assessment of camouflage, concealment, and deception (CCD) methodologies is not a trivial problem; conventionally the only method has been to carry out field trials, which are both expensive and subject to the vagaries of the weather. In recent years computing power has increased such that there are now many research programs using synthetic environments for CCD assessments. Such an approach is attractive; the user has complete control over the environmental parameters and many more scenarios can be investigated. The UK Ministry of Defence is currently developing a synthetic scene generation tool for assessing the effectiveness of air vehicle camouflage schemes. The software is sufficiently flexible to allow it to be used in a broader range of applications, including full CCD assessment. The synthetic scene simulation system (CAMEO-SIM) has been developed, as an extensible system, to provide imagery within the 0.4 to 14 micrometer spectral band with as high a physical fidelity as possible. It consists of a scene design tool, an image generator that incorporates both radiosity and ray-tracing processes, and an experimental trials tool. The scene design tool allows the user to develop a 3D representation of the scenario of interest from a fixed viewpoint. Target(s) of interest can be placed anywhere within this 3D representation and may be either static or moving. Different illumination conditions and atmospheric effects can be modeled, together with directional reflectance effects. The user has complete control over the level of fidelity of the final image. The output from the rendering tool is a sequence of radiance maps, which may be used by sensor models or for experimental trials in which observers carry out target acquisition tasks. The software also maintains an audit trail of all data selected to generate a particular image, both in terms of the material properties used and the rendering options chosen. 
A range of verification tests has shown that the software computes the correct values for analytically tractable scenarios. Validation tests using simple scenes have also been undertaken. More complex validation tests using observer trials are planned. The current version of CAMEO-SIM and how its images are used for camouflage assessment are described. The verification and validation tests undertaken are discussed. In addition, example images are used to demonstrate the significance of different effects, such as spectral rendering and shadows. Planned developments of CAMEO-SIM are also outlined.
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
An Explorative Study to Use DBD Plasma Generation for Aircraft Icing Mitigation
NASA Astrophysics Data System (ADS)
Hu, Hui; Zhou, Wenwu; Liu, Yang; Kolbakir, Cem
2017-11-01
An explorative investigation was performed to demonstrate the feasibility of utilizing the thermal effect induced by Dielectric-Barrier-Discharge (DBD) plasma generation for aircraft icing mitigation. The experimental study was performed in the Icing Research Tunnel available at Iowa State University (ISU-IRT). A NACA0012 airfoil/wing model embedded with DBD plasma actuators was installed in ISU-IRT under typical glaze icing conditions pertinent to aircraft inflight icing phenomena. While a high-speed imaging system was used to record the dynamic ice accretion process over the airfoil surface for the test cases with and without switching on the DBD plasma actuators, an infrared (IR) thermal imaging system was utilized to map the corresponding temperature distributions to quantify the unsteady heat transfer and phase changing process over the airfoil surface. The thermal effect induced by DBD plasma generation was demonstrated to keep the airfoil surface free of ice throughout the entire ice accretion experiment. The measured quantitative surface temperature distributions were correlated with the acquired images of the dynamic ice accretion and water runback processes to elucidate the underlying physics. National Science Foundation CBET-1064196 and CBET-1435590.
Parallel traveling-wave MRI: a feasibility study.
Pang, Yong; Vigneron, Daniel B; Zhang, Xiaoliang
2012-04-01
Traveling-wave magnetic resonance imaging utilizes the far fields of a single-piece patch antenna in the magnet bore to generate radio frequency fields for imaging large samples, such as the human body. In this work, the feasibility of applying the traveling-wave technique to parallel imaging is studied using microstrip patch antenna arrays with both numerical analysis and experimental tests. A specific patch array model is built in which each array element is a microstrip patch antenna. Bench tests show that decoupling between two adjacent elements is better than -26 dB while matching of each element reaches -36 dB, demonstrating excellent isolation performance and impedance match capability. The sensitivity patterns are simulated and g-factors are calculated for both unloaded and loaded cases. The results on B(1) sensitivity patterns and g-factors demonstrate the feasibility of traveling-wave parallel imaging. Simulations also suggest that different array configurations, such as patch shape, position, and orientation, lead to different sensitivity patterns and g-factor maps, which provides a way to manipulate B(1) fields and improve parallel imaging performance. The proposed method is also validated by 7T MR imaging experiments. Copyright © 2011 Wiley-Liss, Inc.
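The g-factor maps mentioned above can be computed from coil sensitivity profiles with the standard SENSE noise-amplification formula; a minimal numpy sketch, assuming an identity noise covariance and a hypothetical 4-element, reduction-factor-2 configuration (not the authors' actual patch array geometry):

```python
import numpy as np

def g_factor(S):
    """SENSE g-factor for one group of aliased voxels.

    S: (n_coils, R) complex sensitivity matrix for the R voxels that
    fold onto each other at reduction factor R; identity noise
    covariance is assumed for simplicity.
    """
    SHS = S.conj().T @ S                      # R x R coil-correlation matrix
    g2 = np.real(np.diag(np.linalg.inv(SHS)) * np.diag(SHS))
    return np.sqrt(g2)

# hypothetical 4-element array, reduction factor R = 2
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
g = g_factor(S)
```

By construction g >= 1 everywhere; values near 1 indicate sensitivity patterns that separate the aliased voxels well, which is what the array-configuration study manipulates.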
Image enhancement by spatial frequency post-processing of images obtained with pupil filters
NASA Astrophysics Data System (ADS)
Estévez, Irene; Escalera, Juan C.; Stefano, Quimey Pears; Iemmi, Claudio; Ledesma, Silvia; Yzuel, María J.; Campos, Juan
2016-12-01
The use of apodizing or superresolving filters improves the performance of an optical system in different frequency bands. This improvement can be seen as an increase in the OTF value compared to the OTF for the clear aperture. In this paper we propose a method to enhance the contrast of an image in both its low and its high frequencies. The method is based on the generation of a synthetic Optical Transfer Function, by multiplexing the OTFs given by the use of different non-uniform transmission filters on the pupil. We propose to capture three images, one obtained with a clear pupil, one obtained with an apodizing filter that enhances the low frequencies and another one taken with a superresolving filter that improves the high frequencies. In the Fourier domain the three spectra are combined by using smoothed passband filters, and then the inverse transform is performed. We show that we can create an enhanced image better than the image obtained with the clear aperture. To evaluate the performance of the method, bar tests (sinusoidal tests) with different frequency content are used. The results show that a contrast improvement in the high and low frequencies is obtained.
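The Fourier-domain multiplexing step can be sketched as follows; the radial cutoffs and sigmoid edge smoothing below are illustrative stand-ins for the smoothed passband filters described above, not the authors' actual filter shapes:

```python
import numpy as np

def fuse_spectra(im_clear, im_apod, im_super, f_lo=0.08, f_hi=0.25, w=0.02):
    """Combine three registered images in the Fourier domain: low
    frequencies from the apodized-pupil image, high frequencies from
    the superresolving-pupil image, and the mid band from the clear
    pupil. Sigmoid passband edges keep the transitions smooth, and the
    three weights sum to one everywhere by construction."""
    fx = np.fft.fftfreq(im_clear.shape[1])
    fy = np.fft.fftfreq(im_clear.shape[0])
    r = np.hypot(*np.meshgrid(fx, fy))
    lo = 1.0 / (1.0 + np.exp((r - f_lo) / w))     # ~1 below f_lo
    hi = 1.0 / (1.0 + np.exp((f_hi - r) / w))     # ~1 above f_hi
    mid = 1.0 - lo - hi
    F = (np.fft.fft2(im_apod) * lo + np.fft.fft2(im_clear) * mid
         + np.fft.fft2(im_super) * hi)
    return np.real(np.fft.ifft2(F))

# sanity check: fusing three copies of one image must return that image
img = np.random.default_rng(1).random((64, 64))
out = fuse_spectra(img, img, img)
```

Because the three weights partition unity, the synthetic OTF reduces to the ordinary one when the three inputs coincide; the gain comes from feeding it images whose individual OTFs are stronger in the selected bands.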
Congruence analysis of point clouds from unstable stereo image sequences
NASA Astrophysics Data System (ADS)
Jepping, C.; Bethmann, F.; Luhmann, T.
2014-06-01
This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective of this research is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
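The RANSAC congruence analysis can be sketched as below, using Umeyama's closed-form least-squares similarity transform; the group size, inlier tolerance, and iteration count are illustrative, not the paper's settings:

```python
import numpy as np

def similarity_transform(A, B):
    """Least-squares 3D similarity transform (scale s, rotation R,
    translation t) mapping point set A onto B; Umeyama's method.
    A, B: (n, 3) arrays of corresponding points."""
    muA, muB = A.mean(0), B.mean(0)
    A0, B0 = A - muA, B - muB
    U, D, Vt = np.linalg.svd(B0.T @ A0 / len(A))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                         # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / A0.var(0).sum()
    t = muB - s * R @ muA
    return s, R, t

def congruence_analysis(P1, P2, n_iter=200, tol=0.05, seed=0):
    """RANSAC congruence analysis: repeatedly fit a similarity
    transform to a randomly selected point group from epoch P1 to
    epoch P2 and keep the largest consensus set, i.e. the points that
    moved rigidly between epochs (the stable areas)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P1), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(P1), 4, replace=False)
        s, R, t = similarity_transform(P1[idx], P2[idx])
        resid = np.linalg.norm(P2 - (s * P1 @ R.T + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# synthetic check: 25 stable points, 5 deformed ones
rng = np.random.default_rng(2)
P1 = rng.random((30, 3))
ang = 0.4
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
P2 = 1.2 * P1 @ R_true.T + np.array([1.0, 2.0, 3.0])
P2[25:] += 1.0                                 # simulated deformation
stable = congruence_analysis(P1, P2)
```

Points flagged as stable can then be used to correct the exterior orientation, since only the rigidly moving subset constrains the camera motion.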
EyeMIAS: a cloud-based ophthalmic image reading and auxiliary diagnosis system
NASA Astrophysics Data System (ADS)
Wu, Di; Zhao, Heming; Yu, Kai; Chen, Xinjian
2018-03-01
Relying solely on ophthalmic equipment cannot meet present health needs. It is urgent to find an efficient way to provide quick screening and early diagnosis of diabetic retinopathy and other ophthalmic diseases. The purpose of this study is to develop a cloud-based system for storing, viewing, and processing medical images, especially ophthalmic images, and to accelerate screening and diagnosis. For this purpose, a system comprising a web application, an upload client, storage dependencies, and algorithm support was implemented. After five alpha tests, the system sustained thousands of high-traffic accesses and generated hundreds of reports with diagnoses.
Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method
NASA Astrophysics Data System (ADS)
Asavaskulkiet, Krissada
2018-04-01
In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments producing high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korhonen, Juha, E-mail: juha.p.korhonen@hus.fi; Department of Oncology, Helsinki University Central Hospital, POB-180, 00029 HUS; Kapanen, Mika
2014-01-15
Purpose: The lack of electron density information in magnetic resonance images (MRI) poses a major challenge for MRI-based radiotherapy treatment planning (RTP). In this study the authors convert MRI intensity values into Hounsfield units (HUs) in the male pelvis and thus enable accurate MRI-based RTP for prostate cancer patients with varying tissue anatomy and body fat contents. Methods: T{sub 1}/T{sub 2}*-weighted MRI intensity values and standard computed tomography (CT) image HUs in the male pelvis were analyzed using image data of 10 prostate cancer patients. The collected data were utilized to generate a dual model HU conversion technique from MRI intensity values of the single image set separately within and outside of contoured pelvic bones. Within the bone segment local MRI intensity values were converted to HUs by applying a second-order polynomial model. This model was tuned for each patient by two patient-specific adjustments: MR signal normalization to correct shifts in absolute intensity level and application of a cutoff value to accurately represent low density bony tissue HUs. For soft tissues, such as fat and muscle, located outside of the bone contours, a threshold-based segmentation method without requirements for any patient-specific adjustments was introduced to convert MRI intensity values into HUs. The dual model HU conversion technique was implemented by constructing pseudo-CT images for 10 other prostate cancer patients. The feasibility of these images for RTP was evaluated by comparing HUs in the generated pseudo-CT images with those in standard CT images, and by determining deviations in MRI-based dose distributions compared to those in CT images with 7-field intensity modulated radiation therapy (IMRT) with the anisotropic analytical algorithm and 360° volumetric-modulated arc therapy (VMAT) with the Voxel Monte Carlo algorithm. 
Results: The average HU differences between the constructed pseudo-CT images and standard CT images of each test patient ranged from −2 to 5 HUs and from 22 to 78 HUs in soft and bony tissues, respectively. The average local absolute value differences were 11 HUs in soft tissues and 99 HUs in bones. The planning target volume doses (volumes 95%, 50%, 5%) in the pseudo-CT images were within 0.8% compared to those in CT images in all of the 20 treatment plans. The average deviation was 0.3%. With all the test patients over 94% (IMRT) and 92% (VMAT) of dose points within body (lower than 10% of maximum dose suppressed) passed the 1 mm and 1% 2D gamma index criterion. The statistical tests (t- and F-tests) showed significantly improved (p ≤ 0.05) HU and dose calculation accuracies with the soft tissue conversion method instead of homogeneous representation of these tissues in MRI-based RTP images. Conclusions: This study indicates that it is possible to construct high quality pseudo-CT images by converting the intensity values of a single MRI series into HUs in the male pelvis, and to use these images for accurate MRI-based prostate RTP dose calculations.
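The dual-model conversion described above can be sketched as follows; the polynomial coefficients, normalization target, cutoff, threshold, and bulk HU values below are illustrative placeholders, not the values fitted by the authors:

```python
import numpy as np

def mri_to_pseudo_hu(mri, bone_mask,
                     poly=(-400.0, 0.9, -2.0e-4),   # a0 + a1*I + a2*I^2 (illustrative)
                     hu_cutoff=100.0,
                     fat_thresh=550.0, hu_fat=-90.0, hu_muscle=40.0):
    """Dual-model MRI-intensity-to-HU conversion.

    Inside the contoured bone mask, intensities are first normalized
    per patient (correcting shifts in absolute signal level) and then
    mapped through a second-order polynomial, clipped below at
    `hu_cutoff` so low-density bone keeps bone-like HUs. Outside the
    mask, a simple threshold segments bright (fat-like) from darker
    (muscle-like) voxels and assigns bulk HUs, with no patient-specific
    tuning."""
    norm = mri * (1000.0 / mri[bone_mask].mean())   # patient-specific normalization
    hu = np.empty_like(norm)
    I = norm[bone_mask]
    hu[bone_mask] = np.maximum(poly[0] + poly[1] * I + poly[2] * I * I, hu_cutoff)
    soft = ~bone_mask
    hu[soft] = np.where(norm[soft] > fat_thresh, hu_fat, hu_muscle)
    return hu

rng = np.random.default_rng(3)
mri = rng.uniform(100.0, 2000.0, size=(16, 16))
bone_mask = np.zeros((16, 16), bool)
bone_mask[4:12, 4:12] = True
hu = mri_to_pseudo_hu(mri, bone_mask)
```

The point of the split is visible in the output: bone voxels vary continuously with intensity while soft-tissue voxels collapse onto a small set of bulk values, which is what the t-/F-tests above compare against a homogeneous representation.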
Online biospeckle assessment without loss of definition and resolution by motion history image
NASA Astrophysics Data System (ADS)
Godinho, R. P.; Silva, M. M.; Nozela, J. R.; Braga, R. A.
2012-03-01
The application of dynamic laser speckle as a reliable instrument for producing activity maps of biological material is well documented in the optics and laser literature. Its application, particularly to live specimens such as animals and human beings, has necessitated approaches to deal with movement of the bodies, which creates changes in the patterns that undermine the biological activity under monitoring. The adoption of online techniques circumvented the noise generated by such movement, however with a considerable reduction in the resolution and definition of the activity maps. This work presents a feasible alternative to the routine online methods, based on the Motion History Image (MHI) methodology. The adoption of MHI was tested on biological and non-biological samples and compared with online as well as offline procedures of biospeckle image analysis. Tests on paint drying were associated with alcohol volatilization, and tests on a maize seed and on the growing of roots confirmed the hypothesis that MHI can implement an online approach without reducing the resolution and definition of the resultant images, in some cases yielding results comparable to the offline procedures.
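The MHI computation itself is standard (in the Bobick-Davis form); a minimal sketch assuming a grayscale frame stack, with illustrative threshold and decay values rather than those tuned for biospeckle data:

```python
import numpy as np

def motion_history_image(frames, diff_thresh=10.0, tau=255.0, delta=32.0):
    """Motion History Image over a grayscale stack (T x H x W).

    A pixel showing inter-frame activity is set to `tau`; otherwise its
    value decays by `delta` per frame. The map therefore encodes both
    where and how recently activity occurred, at the full spatial
    resolution of the input frames (no spatial binning)."""
    mhi = np.zeros(frames.shape[1:], dtype=float)
    for prev, cur in zip(frames[:-1], frames[1:]):
        moving = np.abs(cur - prev) > diff_thresh
        mhi = np.where(moving, tau, np.maximum(mhi - delta, 0.0))
    return mhi

# toy sequence: pixel (0,0) moves once early, (1,1) moves at every step
frames = np.zeros((5, 2, 2))
frames[1:, 0, 0] = 100.0        # single change between frames 0 and 1
frames[[1, 3], 1, 1] = 100.0    # flickers at every frame transition
mhi = motion_history_image(frames)
```

Because the update is per pixel and incremental, the map can be maintained online while preserving definition, which is the property the paper exploits.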
Medical ultrasonic tomographic system
NASA Technical Reports Server (NTRS)
Heyser, R. C.; Lecroissette, D. H.; Nathan, R.; Wilson, R. L.
1977-01-01
An electro-mechanical scanning assembly was designed and fabricated for the purpose of generating ultrasound tomograms. A low-cost modality was demonstrated in which analog instrumentation methods formed a tomogram on photographic film. Successful tomogram reconstructions were obtained on in vitro test objects by using the attenuation of the first-path ultrasound signal as it passed through the test object. Tomographic methods in use for nearly half a century in X-ray analysis were verified as being useful for ultrasound imaging.
Retinal Image Simulation of Subjective Refraction Techniques
Perches, Sara; Collados, M. Victoria; Ares, Jorge
2016-01-01
Refraction techniques make it possible to determine the most appropriate sphero-cylindrical lens prescription to achieve the best possible visual quality. Among these techniques, subjective refraction (i.e., patient’s response-guided refraction) is the most commonly used approach. In this context, this paper’s main goal is to present a simulation software that implements in a virtual manner various subjective-refraction techniques—including Jackson’s Cross-Cylinder test (JCC)—relying all on the observation of computer-generated retinal images. This software has also been used to evaluate visual quality when the JCC test is performed in multifocal-contact-lens wearers. The results reveal this software’s usefulness to simulate the retinal image quality that a particular visual compensation provides. Moreover, it can help to gain a deeper insight and to improve existing refraction techniques and it can be used for simulated training. PMID:26938648
Enhancement of the MODIS Snow and Ice Product Suite Utilizing Image Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Hall, Dorothy K.; Riggs, George A.
2006-01-01
A problem has been noticed with the current MODIS Snow and Ice Product in that fringes of certain snow fields are labeled as "cloud," whereas close inspection of the data indicates that the correct labeling is a non-cloud category such as snow or land. This occurs because the current MODIS Snow and Ice Product generation algorithm relies solely on the MODIS Cloud Mask Product for the labeling of image pixels as cloud. It is proposed here that information obtained from image segmentation can be used to determine when it is appropriate to override the cloud indication from the cloud mask product. Initial tests show that this approach can significantly reduce the cloud "fringing" in the modified snow cover labeling. More comprehensive testing is required to determine whether or not this approach consistently improves the accuracy of the snow and ice product.
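The override idea can be sketched as a simple per-segment majority rule; the label codes and the 60% majority threshold are hypothetical, not the product algorithm's actual decision logic:

```python
import numpy as np

SNOW, CLOUD = 1, 2  # hypothetical label codes

def override_cloud_fringe(labels, segments, majority=0.6):
    """Use image-segmentation evidence to override the per-pixel cloud
    mask: a 'cloud' pixel is relabeled 'snow' when the segment it
    belongs to is predominantly snow, the fringe situation described
    above. `segments` is an integer segment-id map of the same shape
    as `labels`."""
    out = labels.copy()
    for seg_id in np.unique(segments):
        in_seg = segments == seg_id
        if np.mean(labels[in_seg] == SNOW) > majority:
            out[in_seg & (labels == CLOUD)] = SNOW
    return out

labels = np.array([[1, 1, 1, 2],
                   [2, 2, 2, 2]])
segments = np.array([[0, 0, 0, 0],
                     [1, 1, 1, 1]])
relabeled = override_cloud_fringe(labels, segments)
```

Segment 0 is mostly snow, so its lone cloud pixel is relabeled; segment 1 is genuinely cloudy and is left untouched, which is the asymmetry that removes fringing without erasing real cloud.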
3D Deep Learning Angiography (3D-DLA) from C-arm Conebeam CT.
Montoya, J C; Li, Y; Strother, C; Chen, G-H
2018-05-01
Deep learning is a branch of artificial intelligence that has demonstrated unprecedented performance in many medical imaging applications. Our purpose was to develop a deep learning angiography method to generate 3D cerebral angiograms from a single contrast-enhanced C-arm conebeam CT acquisition in order to reduce image artifacts and radiation dose. A set of 105 3D rotational angiography examinations were randomly selected from an internal database. All were acquired using a clinical system in conjunction with a standard injection protocol. More than 150 million labeled voxels from 35 subjects were used for training. A deep convolutional neural network was trained to classify each image voxel into 3 tissue types (vasculature, bone, and soft tissue). The trained deep learning angiography model was then applied for tissue classification into a validation cohort of 8 subjects and a final testing cohort of the remaining 62 subjects. The final vasculature tissue class was used to generate the 3D deep learning angiography images. To quantify the generalization error of the trained model, we calculated the accuracy, sensitivity, precision, and Dice similarity coefficients for vasculature classification in relevant anatomy. The 3D deep learning angiography and clinical 3D rotational angiography images were subjected to a qualitative assessment for the presence of intersweep motion artifacts. Vasculature classification accuracy and 95% CI in the testing dataset were 98.7% (98.3%-99.1%). No residual signal from osseous structures was observed for any 3D deep learning angiography testing cases except for small regions in the otic capsule and nasal cavity compared with 37% (23/62) of the 3D rotational angiographies. Deep learning angiography accurately recreated the vascular anatomy of the 3D rotational angiography reconstructions without a mask. 
Deep learning angiography reduced misregistration artifacts induced by intersweep motion, and it reduced radiation exposure required to obtain clinically useful 3D rotational angiography. © 2018 by American Journal of Neuroradiology.
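The four generalization-error metrics named above are standard voxelwise binary-classification measures; a minimal sketch for the vasculature class (the arrays here are toy stand-ins, not the study's data):

```python
import numpy as np

def classification_metrics(pred, truth):
    """Voxelwise binary metrics for one tissue class: accuracy,
    sensitivity (recall), precision, and Dice similarity coefficient,
    computed from the confusion-matrix counts."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / pred.size
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return accuracy, sensitivity, precision, dice

truth = np.array([1, 1, 0, 0], bool)
pred = np.array([1, 0, 1, 0], bool)
acc, sens, prec, dice = classification_metrics(pred, truth)
```

Note that accuracy counts true negatives (the abundant background voxels) while Dice does not, which is why Dice is the more demanding measure for sparse structures like vasculature.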
NASA Astrophysics Data System (ADS)
Chen, Andrew A.; Meng, Frank; Morioka, Craig A.; Churchill, Bernard M.; Kangarloo, Hooshang
2005-04-01
Managing pediatric patients with neurogenic bladder (NGB) involves regular laboratory, imaging, and physiologic testing. Using input from domain experts and current literature, we identified specific data points from these tests to develop the concept of an electronic disease vector for NGB. An information extraction engine was used to extract the desired data elements from free-text and semi-structured documents retrieved from the patient's medical record. Finally, a Java-based presentation engine created graphical visualizations of the extracted data. After precision, recall, and timing evaluation, we conclude that these tools may enable clinically useful, automatically generated, and diagnosis-specific visualizations of patient data, potentially improving compliance and ultimately, outcomes.
High-speed 3D imaging using digital binary defocusing method vs sinusoidal method
NASA Astrophysics Data System (ADS)
Zhang, Song; Hyun, Jae-Sang; Li, Beiwen
2017-02-01
This paper presents our research findings on high-speed 3D imaging using digital light processing (DLP) technologies. In particular, we compare two different sinusoidal fringe generation techniques using DLP projection devices: direct projection of 8-bit computer-generated sinusoidal patterns (a.k.a. the sinusoidal method), and the creation of sinusoidal patterns by defocusing binary patterns (a.k.a. the binary defocusing method). This paper mainly examines their performance in high-accuracy measurement applications under precisely controlled settings. Two different projection systems were tested in this study: a commercially available inexpensive projector and a DLP development kit. Experimental results demonstrated that the binary defocusing method always outperforms the sinusoidal method if a sufficient number of phase-shifted fringe patterns can be used.
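The principle behind the binary defocusing method can be illustrated in 1D: a square (binary) pattern passed through a Gaussian blur, standing in for a defocused projector lens, approaches an ideal sinusoid because defocus suppresses the odd harmonics of the square wave. The pattern period and blur width below are illustrative:

```python
import numpy as np

def binary_defocused_fringe(n=512, period=32, sigma=6.0):
    """1D sketch of the binary defocusing method: generate a binary
    (square-wave) pattern and apply a Gaussian blur as a transfer
    function in the Fourier domain (a stand-in for lens defocus)."""
    x = np.arange(n)
    binary = ((x % period) < period // 2).astype(float)
    f = np.fft.fftfreq(n)
    gaussian_tf = np.exp(-2.0 * (np.pi * f * sigma) ** 2)
    fringe = np.fft.ifft(np.fft.fft(binary) * gaussian_tf).real
    return binary, fringe

binary, fringe = binary_defocused_fringe()
spec_b = np.abs(np.fft.fft(binary))
spec_f = np.abs(np.fft.fft(fringe))
# harmonic distortion: third-harmonic to fundamental amplitude ratio
thd_binary = spec_b[48] / spec_b[16]    # bins 16/48 = 1st/3rd harmonic here
thd_fringe = spec_f[48] / spec_f[16]
```

The square wave carries roughly a one-third-amplitude third harmonic, while the blurred pattern's harmonic content is negligible; since binary patterns are immune to projector gamma and allow much faster switching, this is the trade the method exploits.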
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of two performance measures, correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance, where the true positive rate increased for the same average false positives per image.
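The adaptive-step search can be sketched generically; the toy quadratic score below stands in for the composite feedback measure (correlation peak height and peak-to-sidelobe ratio), and the step-adaptation constants are illustrative:

```python
import numpy as np

def adaptive_gradient_ascent(score, p0, step=0.5, grow=1.2, shrink=0.5,
                             eps=1e-3, n_iter=60):
    """Adaptive-step gradient ascent over three parameters (here
    standing in for the OT-MACH alpha, beta, gamma). The gradient is
    approximated by central finite differences of the score; the step
    grows after an improving move and shrinks after a failed one."""
    p = np.asarray(p0, float)
    best = score(p)
    for _ in range(n_iter):
        g = np.array([(score(p + eps * e) - score(p - eps * e)) / (2 * eps)
                      for e in np.eye(3)])
        norm = np.linalg.norm(g)
        if norm == 0.0:
            break                      # flat point: nothing to climb
        trial = p + step * g / norm
        if score(trial) > best:
            p, best = trial, score(trial)
            step *= grow
        else:
            step *= shrink
    return p, best

# toy composite score with a single optimum (stand-in for filter tests)
target = np.array([0.3, 0.5, 0.2])
score = lambda q: -np.sum((np.asarray(q) - target) ** 2)
p_opt, s_opt = adaptive_gradient_ascent(score, [0.9, 0.1, 0.8])
```

In the real system each `score` evaluation means building an OT-MACH filter with the trial (alpha, beta, gamma) and correlating it against test imagery, so minimizing the number of evaluations is the point of the adaptive step.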
Optical transmission testing based on asynchronous sampling techniques
NASA Astrophysics Data System (ADS)
Mrozek, T.; Perlicki, K.; Wilczewski, G.
2016-09-01
This paper presents a method of analysis of images obtained with the Asynchronous Delay Tap Sampling technique, which is used for simultaneous monitoring of a number of phenomena in the physical layer of an optical network. The method allows visualization of results in the form of an optical signal's waveform (characteristics depicting phase portraits). Depending on the specific phenomenon being observed (i.e., chromatic dispersion, polarization mode dispersion, or ASE noise), the shape of the waveform changes. The original waveforms presented herein were acquired using the OptSim 4.0 simulation package. After simulation testing, the obtained numerical data were transformed into image form and subjected to analysis using the authors' custom algorithms. These algorithms apply various pixel operations and create a report characterizing each image; each individual report gives the number of black pixels present in each image segment. Afterwards, the generated reports are compared with each other across the original-impaired relationship. A differential report is created, consisting of a "binary key" that shows the increase in the number of pixels in each particular segment. The ultimate aim of this work is to find the correlation between the generated binary keys and the common phenomenon being observed, allowing identification of the type of interference occurring; determining the respective impairment values remains for the further course of the work. The presented work delivers the first objective: the ability to recognize the interference.
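The report and binary-key construction can be sketched as follows; the 4x4 segmentation grid and darkness threshold are illustrative choices, not the authors' parameters:

```python
import numpy as np

def segment_report(img, n=4, dark_thresh=0.5):
    """Report for one phase-portrait image: divide the image (2D,
    values in [0, 1]) into an n x n grid of segments and count the
    black (dark) pixels in each segment."""
    H, W = img.shape
    report = np.zeros((n, n), int)
    for i in range(n):
        for j in range(n):
            cell = img[i * H // n:(i + 1) * H // n,
                       j * W // n:(j + 1) * W // n]
            report[i, j] = int(np.sum(cell < dark_thresh))
    return report

def binary_key(report_original, report_impaired):
    """Differential report between original and impaired portraits:
    1 where the impaired image gained dark pixels, else 0."""
    return (report_impaired > report_original).astype(int)

original = np.ones((8, 8))
impaired = original.copy()
impaired[0, 0] = 0.0            # one extra dark pixel in the top-left segment
key = binary_key(segment_report(original), segment_report(impaired))
```

Each impairment type deforms the phase portrait in a characteristic spatial pattern, so the spatial layout of ones in the key acts as the signature to be matched.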
Images of turbulent, absorbing-emitting atmospheres and their application to windshear detection
NASA Astrophysics Data System (ADS)
Watt, David W.; Philbrick, Daniel A.
1991-03-01
The simulation of images generated by thermally radiating, optically thick turbulent media is discussed and the time-dependent evolution of these images is modeled. The characteristics of these images are particularly applicable to the atmosphere in the 13-15 micrometer band, and their behavior may have application in detecting aviation hazards. The image is generated by volumetric thermal emission from atmospheric constituents within the field of view of the detector. The structure of the turbulent temperature field and the attenuating properties of the atmosphere interact with the field-of-view geometry to produce a localized region which dominates the optical flow of the image. The simulations discussed in this paper model the time-dependent behavior of images generated by atmospheric flows viewed from an airborne platform. The images are modeled by (1) generating a random field of temperature fluctuations having the proper spatial structure, (2) adding these fluctuations to the baseline temperature field of the atmospheric event, (3) accumulating the image on the detector from radiation emitted in the imaging volume, (4) allowing the individual radiating points within the imaging volume to move with the local velocity, and (5) recalculating the thermal field and generating a new image. This approach was used to simulate the images generated by the temperature and velocity fields of a windshear. The simulation generated pairs of images separated by a small time interval. These image pairs were analyzed by image cross-correlation, and the displacement of the cross-correlation peak was used to infer the velocity at the localized region. The localized region was found to depend weakly on the shape of the velocity profile. Prediction of the localized region, the effects of imaging from a moving platform, alternative image analysis schemes, and possible applications to aviation hazards are discussed.
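The cross-correlation analysis of an image pair can be sketched with an FFT-based circular correlation; the random test image and known shift below are a self-check, not the windshear simulation data:

```python
import numpy as np

def correlation_peak_shift(im1, im2):
    """Displacement of the cross-correlation peak between two images
    taken a small time interval apart: FFT-based circular
    cross-correlation of the mean-removed images, with the peak offset
    from zero lag wrapped back to a signed shift. That offset is what
    estimates the apparent velocity of the dominant localized region."""
    F1 = np.fft.fft2(im1 - im1.mean())
    F2 = np.fft.fft2(im2 - im2.mean())
    corr = np.real(np.fft.ifft2(np.conj(F1) * F2))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(4)
im1 = rng.random((32, 32))
im2 = np.roll(im1, (3, 5), axis=(0, 1))    # known displacement
shift = correlation_peak_shift(im1, im2)
```

Dividing the recovered pixel shift by the inter-image time interval and the angular pixel scale gives the inferred velocity at the localized region.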
Trabecular Bone Mechanical Properties and Fractal Dimension
NASA Technical Reports Server (NTRS)
Hogan, Harry A.
1996-01-01
Countermeasures for reducing bone loss and muscle atrophy due to extended exposure to the microgravity environment of space are continuing to be developed and improved. An important component of this effort is finite element modeling of the lower extremity and spinal column. These models will permit analysis and evaluation specific to each individual and thereby provide more efficient and effective exercise protocols. Inflight countermeasures and post-flight rehabilitation can then be customized and targeted on a case-by-case basis. Recent Summer Faculty Fellowship participants have focused upon finite element mesh generation, muscle force estimation, and fractal calculations of trabecular bone microstructure. Methods have been developed for generating the three-dimensional geometry of the femur from serial section magnetic resonance images (MRI). The use of MRI as an imaging modality avoids excessive exposure to radiation associated with X-ray based methods. These images can also detect trabecular bone microstructure and architecture. The goal of the current research is to determine the degree to which the fractal dimension of trabecular architecture can be used to predict the mechanical properties of trabecular bone tissue. The elastic modulus and the ultimate strength (or strain) can then be estimated from non-invasive, non-radiating imaging and incorporated into the finite element models to more accurately represent the bone tissue of each individual of interest. Trabecular bone specimens from the proximal tibia are being studied in this first phase of the work. Detailed protocols and procedures have been developed for carrying test specimens through all of the steps of a multi-faceted test program. The test program begins with MRI and X-ray imaging of the whole bones before excising a smaller workpiece from the proximal tibia region. High resolution MRI scans are then made and the piece further cut into slabs (roughly 1 cm thick). 
The slabs are X-rayed again and also scanned using dual-energy X-ray absorptiometry (DEXA). Cube specimens are then cut from the slabs and tested mechanically in compression. Correlations between mechanical properties and fractal dimension will then be examined to assess and quantify the predictive capability of the fractal calculations.
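The fractal dimension of a segmented trabecular image is commonly estimated by box counting; a minimal sketch assuming a square binary image with power-of-two side length (the study's actual estimator and imaging pipeline are not specified here):

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting estimate of fractal dimension for a square binary
    image whose side is a power of two: count the boxes containing any
    foreground at dyadic box sizes, then fit the slope of log N(size)
    versus log(1/size)."""
    n = mask.shape[0]
    sizes, counts = [], []
    size = n
    while size >= 1:
        blocks = mask.reshape(n // size, size, n // size, size)
        counts.append(blocks.any(axis=(1, 3)).sum())
        sizes.append(size)
        size //= 2
    slope = np.polyfit(np.log(1.0 / np.asarray(sizes, float)),
                       np.log(np.asarray(counts, float)), 1)[0]
    return slope

filled = np.ones((64, 64), bool)     # a solid 2D region: dimension 2
line = np.zeros((64, 64), bool)
line[0, :] = True                    # a 1D line: dimension 1
d_filled = box_counting_dimension(filled)
d_line = box_counting_dimension(line)
```

Trabecular cross-sections typically fall between these two extremes, and it is that intermediate dimension the study proposes to correlate with elastic modulus and ultimate strength.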
JPSS-1 VIIRS Pre-Launch Radiometric Performance
NASA Technical Reports Server (NTRS)
Oudrari, Hassan; McIntire, Jeff; Xiong, Xiaoxiong; Butler, James; Efremova, Boryana; Ji, Jack; Lee, Shihyan; Schwarting, Tom
2015-01-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) on board the first Joint Polar Satellite System (JPSS) satellite completed its sensor-level testing in December 2014. The JPSS-1 (J1) mission is scheduled to launch in December 2016 and will be very similar to the Suomi National Polar-orbiting Partnership (SNPP) mission. The VIIRS instrument was designed to provide measurements of the globe twice daily. It is a wide-swath (3,040 km) cross-track scanning radiometer with spatial resolutions of 370 and 740 m at nadir for imaging and moderate bands, respectively. It covers the wavelength spectrum from reflective to long-wave infrared through 22 spectral bands [0.412 microns to 12.01 microns]. VIIRS observations are used to generate 22 environmental data records (EDRs). This paper briefly describes the J1 VIIRS characterization and calibration performance and the methodologies executed during the pre-launch testing phases by the independent government team to generate the at-launch baseline radiometric performance and the metrics needed to populate the sensor data record (SDR) look-up tables (LUTs). This paper also provides an assessment of the sensor's pre-launch radiometric performance, such as the signal-to-noise ratios (SNRs), dynamic range, reflective and emissive band calibration performance, polarization sensitivity, band spectral performance, response-versus-scan (RVS), and near-field and stray light responses. A set of performance metrics generated during the pre-launch testing program is compared to the SNPP VIIRS pre-launch performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, D
2015-06-15
Purpose: AAPM radiation therapy committee task group No. 66 (TG-66) published a report which described a general approach to CT simulator QA. The report outlines the testing procedures and specifications for the evaluation of patient dose, radiation safety, electromechanical components, and image quality for a CT simulator. The purpose of this study is to thoroughly evaluate the performance of a second generation Toshiba Aquilion Large Bore CT simulator with 90 cm bore size (Toshiba, Nasu, JP) based on the TG-66 criteria. The testing procedures and results from this study provide baselines for a routine QA program. Methods: Different measurements and analyses were performed, including CTDIvol measurements, alignment and orientation of gantry lasers, orientation of the tabletop with respect to the imaging plane, table movement and indexing accuracy, scanogram location accuracy, high contrast spatial resolution, low contrast resolution, field uniformity, CT number accuracy, mA linearity, and mA reproducibility, using a number of different phantoms and measuring devices, such as a CTDI phantom, ACR image quality phantom, TG-66 laser QA phantom, pencil ion chamber (Fluke Victoreen), and electrometer (RTI Solidose 400). Results: The CTDI measurements were within 20% of the console displayed values. The alignment and orientation of both the gantry lasers and the tabletop, as well as the table movement and indexing and scanogram location accuracy, were within 2 mm as specified in TG-66. The spatial resolution, low contrast resolution, field uniformity, and CT number accuracy were all within ACR's recommended limits. The mA linearity and reproducibility were both well below the TG-66 threshold. Conclusion: The 90 cm bore size second generation Toshiba Aquilion Large Bore CT simulator with 70 cm true FOV can consistently meet various clinical needs. 
The results demonstrate that this simulator complies with the TG-66 protocol in all aspects evaluated: electromechanical components, radiation safety, and image quality. Disclosure: an author is an employee of Toshiba America Medical Systems.
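The CTDIvol comparison above follows the standard weighted-CTDI relations; a minimal sketch of the tolerance check, with hypothetical chamber readings:

```python
def ctdi_vol(ctdi_center, ctdi_periphery, pitch):
    """Volume CTDI (mGy) from center/periphery 100-mm chamber readings:
    CTDIw = (1/3)*center + (2/3)*periphery, CTDIvol = CTDIw / pitch."""
    ctdi_w = ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0
    return ctdi_w / pitch

def within_tolerance(measured, displayed, tol=0.20):
    """TG-66-style check: measured CTDIvol within +/-20% of the console value."""
    return abs(measured - displayed) <= tol * displayed

# Hypothetical readings (mGy), for illustration only
measured = ctdi_vol(10.0, 12.0, pitch=1.0)
print(round(measured, 2), within_tolerance(measured, 12.0))  # 11.33 True
```

The 20% tolerance mirrors the agreement reported above between measured and console-displayed CTDI values.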
NASA Technical Reports Server (NTRS)
Wiegman, E. J.; Evans, W. E.; Hadfield, R.
1975-01-01
Measurements of snow coverage during the 1973 and 1974 snow-melt seasons, derived from LANDSAT imagery for three Columbia River subbasins, are examined. Satellite-derived snow cover inventories for the three test basins were obtained as an alternative to the current operational practice of inventorying from small-aircraft flights over selected snow fields. The accuracy and precision versus cost of several interactive image analysis procedures were investigated using a display device, the Electronic Satellite Image Analysis Console. Single-band radiance thresholding was the principal snow-detection technique, supplemented by an editing procedure that referenced hand-generated elevation contours. For each date and view measured, a binary thematic map or "mask" depicting the snow cover was generated by a combination of objective and subjective procedures. Photographs of the data analysis equipment (displays) are shown.
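The single-band thresholding and elevation-contour editing described above can be sketched as follows (the threshold and snowline values are illustrative, not from the study):

```python
def snow_mask(radiance, threshold):
    """Binary snow mask by single-band radiance thresholding."""
    return [[1 if px >= threshold else 0 for px in row] for row in radiance]

def mask_with_elevation(mask, elevation, snowline):
    """Edit the mask with an elevation contour: suppress 'snow' below the snowline."""
    return [[m if z >= snowline else 0 for m, z in zip(mrow, zrow)]
            for mrow, zrow in zip(mask, elevation)]

band = [[10, 80], [90, 20]]            # toy radiance values
elev = [[1500, 1500], [900, 2000]]     # toy elevations (m)
raw = snow_mask(band, threshold=50)                      # [[0, 1], [1, 0]]
edited = mask_with_elevation(raw, elev, snowline=1000)   # [[0, 1], [0, 0]]
```

The editing step removes the bright low-elevation pixel that thresholding alone would have misclassified as snow.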
NASA Astrophysics Data System (ADS)
Yu, Fenghai; Zhang, Jianguo; Chen, Xiaomeng; Huang, H. K.
2005-04-01
Next Generation Internet (NGI) technology, with the new communication protocol IPv6, has emerged as a potential solution for low-cost, high-speed networks for image data transmission. IPv6 is designed to solve many of the problems of the current version of IP (IPv4) with regard to address depletion, security, autoconfiguration, extensibility, and more. We chose the CTN (Central Test Node) DICOM software developed by the Mallinckrodt Institute of Radiology to implement IPv6/IPv4-enabled DICOM communication software on different operating systems (Windows/Linux), and used this software to evaluate the performance of IPv6/IPv4-enabled DICOM image communication under different security settings and environments. We compared secure communication via IPsec with SSL/TLS on both TCP/IP protocols (IPv6/IPv4) and found that there are trade-offs between IPsec and SSL/TLS when choosing a security solution for IPv6/IPv4 communication networks.
Multiport backside-illuminated CCD imagers for high-frame-rate camera applications
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Sauer, Donald J.; Hseuh, Fu-Lung; Shallcross, Frank V.; Taylor, Gordon C.; Meray, Grazyna M.; Tower, John R.; Harrison, Lorna J.; Lawler, William B.
1994-05-01
Two multiport, second-generation CCD imager designs have been fabricated and successfully tested: a 16-port 512 x 512 array and a 32-port 1024 x 1024 array. Both designs are back-illuminated, have on-chip CDS and lateral blooming control, and use a split vertical frame-transfer architecture with full frame storage. The 512 x 512 device has been operated at rates over 800 frames per second; the 1024 x 1024 device at rates over 300 frames per second. The major changes incorporated in the second-generation design are a reduced gate length in the output area for improved high-clock-rate performance, modified on-chip CDS circuitry for reduced noise, and optimized implants to improve blooming control at lower clock amplitude. This paper discusses the imager design improvements and presents measured performance results at high and moderate frame rates. The design and performance of three moderate-frame-rate cameras are also discussed.
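The quoted frame rates make the motivation for multiple output ports clear; a quick check of the per-port readout arithmetic (active-pixel counts only, ignoring overscan):

```python
def per_port_rate(width, height, fps, ports):
    """Pixel rate each output port must sustain, in megapixels per second."""
    return width * height * fps / ports / 1e6

print(per_port_rate(512, 512, 800, 16))    # ~13.1 Mpixel/s per port
print(per_port_rate(1024, 1024, 300, 32))  # ~9.8 Mpixel/s per port
```

Splitting the readout across 16 and 32 ports keeps each output's clock rate in a comparable, practical range for both devices.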
A Cloud Boundary Detection Scheme Combining ASLIC and CNN Using ZY-3 and GF-1/2 Satellite Imagery
NASA Astrophysics Data System (ADS)
Guo, Z.; Li, C.; Wang, Z.; Kwok, E.; Wei, X.
2018-04-01
Cloud detection in remote sensing optical imagery is one of the most important problems in remote sensing data processing. To address the information loss caused by cloud cover, a cloud detection method based on a convolutional neural network (CNN) is presented in this paper. First, a deep CNN is used to learn a multi-level feature generation model of clouds from the training samples. Second, the adaptive simple linear iterative clustering (ASLIC) method is used to divide the detected images into superpixels. Finally, the probability that each superpixel belongs to the cloud region is predicted by the trained network model, generating a cloud probability map. Typical regions of GF-1/2 and ZY-3 imagery were selected for the cloud detection test and compared with the traditional SLIC method. The experimental results show that the average accuracy of cloud detection increased by more than 5%, and that the method detects both thin and thick clouds, and the whole cloud boundary, well on different imaging platforms.
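The final step, painting each superpixel with a cloud probability, can be realized in several ways; one simple sketch, assuming per-pixel CNN scores are averaged within each superpixel (the labels and scores below are illustrative stand-ins for SLIC/ASLIC output and CNN output):

```python
def cloud_probability_map(superpixels, pixel_probs):
    """Average the per-pixel cloud scores inside each superpixel and paint
    that value back onto every member pixel (flat lists for brevity)."""
    sums, counts = {}, {}
    for label, p in zip(superpixels, pixel_probs):
        sums[label] = sums.get(label, 0.0) + p
        counts[label] = counts.get(label, 0) + 1
    means = {k: sums[k] / counts[k] for k in sums}
    return [means[label] for label in superpixels]

labels = [0, 0, 1, 1]            # superpixel label per pixel
scores = [0.5, 1.0, 0.0, 0.5]    # hypothetical CNN cloud scores
print(cloud_probability_map(labels, scores))  # [0.75, 0.75, 0.25, 0.25]
```

Aggregating at the superpixel level is what lets the method respect cloud boundaries that raw per-pixel classification would blur.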
Mosier, Jarrod; Joseph, Bellal; Sakles, John C
2013-02-01
Since the first remote intubation with telemedicine guidance, wireless technology has advanced to enable more portable methods of telemedicine involvement in remote airway management. Three voice over Internet protocol (VoIP) services were evaluated for quality of image transmitted, data lag, and audio quality with remotely observed and assisted intubations in an academic emergency department. The VoIP clients evaluated were Apple (Cupertino, CA) FaceTime(®), Skype™ (a division of Microsoft, Luxembourg City, Luxembourg), and Tango(®) (TangoMe, Palo Alto, CA). Each client was tested over a Wi-Fi network as well as cellular third generation (3G) (Skype and Tango). All three VoIP clients provided acceptable image and audio quality. There is a significant data lag in image transmission and quality when VoIP clients are used over cellular broadband (3G) compared with Wi-Fi. Portable remote telemedicine guidance is possible with newer technology devices such as a smartphone or tablet, as well as VoIP clients used over Wi-Fi or cellular broadband.
Hosseini, Zahra; Liu, Junmin; Solovey, Igor; Menon, Ravi S; Drangova, Maria
2017-04-01
To implement and optimize a new approach for susceptibility-weighted image (SWI) generation from multi-echo multi-channel image data and compare its performance against optimized traditional SWI pipelines. Five healthy volunteers were imaged at 7 Tesla. The inter-echo-variance (IEV) channel combination, which uses the variance of the local frequency shift at multiple echo times as a weighting factor during channel combination, was used to calculate multi-echo local phase shift maps. Linear phase masks were combined with the magnitude to generate IEV-SWI. The performance of the IEV-SWI pipeline was compared with that of two accepted SWI pipelines: channel combination followed by (i) homodyne filtering (HPH-SWI) and (ii) unwrapping and high-pass filtering (SVD-SWI). The filtering steps of each pipeline were optimized. Contrast-to-noise ratio was used as the comparison metric; qualitative assessment of artifact and vessel conspicuity was performed, and the processing time of each pipeline was evaluated. The optimized IEV-SWI pipeline (σ = 7 mm) resulted in continuous vessel visibility throughout the brain. IEV-SWI had significantly higher contrast than HPH-SWI and SVD-SWI (P < 0.001, Friedman nonparametric test). Residual background fields and phase wraps in HPH-SWI and SVD-SWI corrupted the vessel signal and/or generated vessel-mimicking artifact. The optimized IEV-SWI implementation processed a six-echo 16-channel dataset in under 10 min. IEV-SWI benefits from channel-by-channel processing of phase data and yields high-contrast images with an optimal balance between contrast and background noise removal, demonstrating the importance of the order in which postprocessing techniques are applied in multi-channel SWI generation. Level of Evidence: 2. J. Magn. Reson. Imaging 2017;45:1113-1124. © 2016 International Society for Magnetic Resonance in Medicine.
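The IEV weighting idea, down-weighting channels whose frequency-shift estimates vary strongly across echoes, can be sketched as follows (a toy version for one voxel; the published pipeline applies this voxel-wise across full phase maps):

```python
from statistics import pvariance

def iev_weights(freq_shifts_per_channel):
    """Inverse inter-echo-variance weights: channels whose local frequency
    shift is stable across echo times receive more weight in the channel
    combination. A small epsilon guards against division by zero."""
    inv = [1.0 / (pvariance(shifts) + 1e-12) for shifts in freq_shifts_per_channel]
    total = sum(inv)
    return [w / total for w in inv]

# Two hypothetical channels at one voxel, three echoes each:
# the first is stable across echoes, the second is noisy.
w = iev_weights([[5.0, 5.1, 4.9], [5.0, 8.0, 2.0]])
assert w[0] > w[1] and abs(sum(w) - 1.0) < 1e-9
```

The normalized weights sum to one, so the combined phase map remains correctly scaled regardless of how many channels contribute.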
2011-01-01
Background: Massive datasets of high-resolution images, generated in neuroimaging studies and clinical imaging research, increasingly challenge our ability to analyze, share, and filter images in clinical and basic translational research. Pivot collection exploratory analysis gives each user the ability to interact fully with massive amounts of visual data, with the sorting flexibility and speed to fluidly access, explore, and analyze large sets of high-resolution images and their associated metadata, such as the neuroimaging databases of the Allen Brain Atlas. It supports clustering, filtering, data sharing, and classification of the visual data into deep-zoom levels and metadata categories to detect hidden patterns within the dataset. Method: We deployed prototype Pivot collections using Linux CentOS running the Apache web server, and also tested the prototype on other operating systems such as Windows (the most common variants) and UNIX. The approach yields very good results when compared with other approaches used for the generation, creation, and clustering of massive image collections, such as the coronal and horizontal sections of the mouse brain from the Allen Brain Atlas. Results: Pivot visual analytics was used to analyze a prototype dataset of Dab2 co-expressed genes from the Allen Brain Atlas. The metadata and high-resolution images were extracted automatically using the Allen Brain Atlas API, and then used to identify hidden information based on the categories and conditions applied via options generated from the automated collection.
A metadata category such as chromosome, as well as data for individual cases such as sex, age, and plane attributes of a particular gene, is used to filter and sort, and to determine whether other genes have characteristics similar to Dab2. Online access to the mouse brain Pivot collection is available at http://edtech-dev.uthsc.edu/CTSI/teeDev1/unittest/PaPa/collection.html (user name: tviangte; password: demome). Conclusions: Our proposed algorithm automates the creation of large image Pivot collections; this will enable investigators in clinical research projects to analyze image collections easily and quickly, from a perspective useful for making critical decisions about the image patterns discovered. PMID:21884637
Mechanism for detecting NAPL using electrical resistivity imaging.
Halihan, Todd; Sefa, Valina; Sale, Tom; Lyverse, Mark
2017-10-01
The detection of non-aqueous phase liquid (NAPL) impacts in freshwater environments by electrical resistivity imaging (ERI) has been clearly demonstrated under field conditions, but the mechanism generating the resistive signature is poorly understood. An electrical barrier mechanism that allows NAPLs to be detected with ERI is tested by developing a theoretical basis for the mechanism, testing it in a two-dimensional sand tank with ERI, and performing forward modeling of the laboratory experiment. The NAPL barrier theory assumes that, at low bulk soil NAPL concentrations, thin NAPL-saturated barriers can block pore throats and generate a detectable electrically resistive signal. The sand tank experiment used a photographic technique to quantify petroleum saturation and to help determine whether ERI can detect and quantify NAPL across the water table. The experiment demonstrates that electrical imaging methods can detect small quantities of NAPL of sufficient thickness in formations. The bulk volume of NAPL is not the controlling variable for the amount of resistivity signal generated; the signal is primarily due to a zone of high-resistivity separate-phase liquid blocking current flow through the fully NAPL-saturated pore spaces. For the conditions in this tank experiment, NAPL thicknesses of 3.3 cm and greater in the formation were the threshold for detectable resistivity changes of 3% or more, and the maximum change in resistivity due to the presence of NAPL was an increase of 37%. Forward resistivity models of the experiment confirm the barrier mechanism theory for the tank experiment. Copyright © 2017 Elsevier B.V. All rights reserved.
Programmable CGH on photochromic material using DMD generated masks
NASA Astrophysics Data System (ADS)
Alata, Romain; Zamkotsian, Frédéric; Lanzoni, Patrick; Pariani, Giorgio; Bianco, Andrea; Bertarelli, Chiara
2018-02-01
Computer Generated Holograms (CGHs) are used for wavefront shaping and complex optics testing, including aspherical and free-form optics. Today, CGHs are recorded directly with a laser or via intermediate masks, allowing only the realization of binary CGHs; these are efficient but can reconstruct only pixelated images. We propose a Digital Micromirror Device (DMD) as a reconfigurable mask to record rewritable binary and grayscale CGHs on a photochromic plate. The DMD is composed of 2048x1080 individually controllable micromirrors with a pitch of 13.68 μm, making it a real-time reconfigurable mask well suited to recording CGHs. The photochromic plate is opaque at rest and becomes transparent when illuminated with visible light of a suitable wavelength. We have successfully recorded the first amplitude grayscale CGH with equally spaced levels, a so-called stepped CGH. We recorded CGHs of up to 1000x1000 pixels with a contrast greater than 50, using both Fresnel and Fourier coding schemes. Fresnel CGHs are obtained by calculating the inverse Fresnel transform of the original image at a given focus, ranging from 50 cm to 2 m. Reconstruction of the recorded images with a 632.8 nm He-Ne laser beam yields images with high fidelity in shape, intensity, size, and location. These results reveal the high potential of this method, combining DMDs and photochromic substrates, for generating programmable/rewritable grayscale CGHs.
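The Fourier coding scheme mentioned above amounts to writing a Fourier transform of the target image onto the plate and replaying it with a laser. A 1-D toy sketch with a delta target (a real CGH is 2-D, and the Fresnel scheme adds a quadratic phase factor for a chosen focus distance):

```python
import cmath

def fourier_cgh_1d(target):
    """Amplitude hologram of a 1-D target via inverse DFT. Keeping only the
    amplitude models a grayscale (amplitude-only) CGH."""
    n = len(target)
    field = [sum(target[k] * cmath.exp(2j * cmath.pi * k * m / n)
                 for k in range(n)) / n
             for m in range(n)]
    return [abs(v) for v in field]

def reconstruct_1d(hologram):
    """Forward DFT models replay of the hologram by the laser beam."""
    n = len(hologram)
    return [abs(sum(hologram[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m in range(n)))
            for k in range(n)]

# A delta target yields a flat hologram, which replays back to the delta.
h = fourier_cgh_1d([1.0, 0.0, 0.0, 0.0])
r = reconstruct_1d(h)
```

For general targets an amplitude-only hologram discards phase, which is exactly why grayscale (multi-level) recording, as demonstrated in the paper, improves reconstruction fidelity over binary CGHs.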
Imaging mechanical properties of hepatic tissue by magnetic resonance elastography
NASA Astrophysics Data System (ADS)
Yin, Meng; Rouviere, Olivier; Burgart, Lawrence J.; Fidler, Jeff L.; Manduca, Armando; Ehman, Richard L.
2006-03-01
PURPOSE: To assess the feasibility of a modified phase-contrast MRI technique (MR elastography) for quantitatively assessing the mechanical properties of hepatic tissues by imaging propagating acoustic shear waves. MATERIALS AND METHODS: Both phantom and human studies were performed to develop and optimize a practical imaging protocol by visualizing and investigating the diffraction field of shear waves generated by pneumatic longitudinal drivers. The effects of interposed ribs in a transcostal approach were also investigated. A gradient-echo MRE pulse sequence was adapted for shear wave imaging in the liver during suspended respiration, and then used to measure hepatic shear stiffness in 13 healthy volunteers and 1 patient with chronic liver disease to determine the potential for non-invasively detecting liver fibrosis. RESULTS: Phantom studies demonstrate that longitudinal waves generated by the driver are mode-converted to shear waves in a distribution governed by diffraction principles. The transcostal approach was the most effective method for generating shear waves in the human studies. Hepatic stiffness measurements in the 13 normal volunteers had a mean value of 2.0 ± 0.2 kPa; the shear stiffness in the patient was much higher, at 8.5 kPa. CONCLUSION: MR elastography of the liver shows promise as a method to non-invasively detect and characterize diffuse liver disease, potentially reducing the need for biopsy to diagnose hepatic fibrosis.
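The stiffness values follow from the relation commonly used in MR elastography, μ = ρ(fλ)², assuming an elastic, isotropic medium. As a worked check under an assumed 60 Hz drive frequency (illustrative; the paper does not state these exact numbers), a shear wavelength of about 23.6 mm corresponds to the normal-liver value of roughly 2 kPa:

```python
def shear_stiffness(wavelength_m, frequency_hz, density=1000.0):
    """Effective shear stiffness mu = rho * (f * lambda)^2 in Pa, for an
    elastic, isotropic medium with tissue-like density ~1000 kg/m^3."""
    return density * (frequency_hz * wavelength_m) ** 2

mu = shear_stiffness(0.0236, 60.0)   # ~2.0 kPa
print(round(mu / 1000, 1), "kPa")
```

The quadratic dependence on wavelength is why the fibrotic liver's 8.5 kPa reading implies a markedly longer shear wavelength than in the healthy volunteers.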
Automated assembly of camera modules using active alignment with up to six degrees of freedom
NASA Astrophysics Data System (ADS)
Bräuniger, K.; Stickler, D.; Winters, D.; Volmer, C.; Jahn, M.; Krey, S.
2014-03-01
With the advent of Ultra High Definition (UHD) cameras, accurate alignment of the optical system with respect to the UHD image sensor becomes increasingly important: even with a perfect objective lens, image quality deteriorates when the lens is poorly aligned to the sensor. The Modulation Transfer Function (MTF) is used as the most widely accepted test of imaging quality. The first part describes how the alignment errors that lead to low imaging quality can be measured. Collimators with crosshairs at defined field positions, or a test chart, are used as object generators for infinite-finite or finite-finite conjugation, respectively. The process of accurately aligning the image sensor to the optical system is then described: the focus position, shift, tilt, and rotation of the image sensor are automatically corrected to obtain an optimized MTF for all field positions, including the center. The software algorithm that grabs images, calculates the MTF, and adjusts the image sensor in six degrees of freedom within less than 30 seconds per UHD camera module is described. The resulting accuracy of the image sensor rotation is better than 2 arcmin, and the positional alignment accuracy in x, y, z is better than 2 μm. Finally, the process of gluing and UV curing, and how it is managed within the integrated process, is described.
NASA Astrophysics Data System (ADS)
Finke, U.; Blakeslee, R. J.; Mach, D. M.
2017-12-01
The next generation of European geostationary weather satellites (MTG) will operate an optical lightning location instrument (LI) very similar to the Geostationary Lightning Mapper (GLM) on board GOES-R. For the development and verification of the product processing algorithms, realistic test data are necessary. This paper presents a method of test data generation based on optical lightning data from the LIS instrument and cloud image data from the SEVIRI radiometer. The basis is the lightning data gathered during the 15-year LIS operation, particularly the empirical distribution functions of optical pulse size, duration, and radiance, as well as the space-time inter-correlation of lightning; these allow a realistically structured simulation of lightning test data. Because of its low orbit, the instantaneous field of view of LIS is limited and moves with time, so the LIS data must be extended to generate test data covering the geostationary visible disk. This is realized by (1) simulating random lightning pulses according to the established distribution functions of the lightning parameters, and (2) using cloud radiometer data from the SEVIRI instrument on board the geostationary Meteosat Second Generation (MSG) satellite. In particular, the cloud top height (CTH) product identifies convective storm clouds, within which the simulation places random lightning pulses. The LIS instrument was recently deployed on the International Space Station (ISS), whose orbit reaches higher latitudes, notably Europe. The ISS-LIS data are analyzed for single observation days, and the statistical distributions of parameters such as radiance, footprint size, and the space-time correlation of groups are compared against the long-term statistics from TRMM-LIS. Optical lightning detection efficiency from space is affected by solar radiation reflected from the clouds, an effect that changes with the day and night areas across the field of view; for a realistic simulation of this cloud background radiance, the SEVIRI visual channel VIS08 data are used. In addition to the test data study, this paper compares the MTG-LI to the GLM and discusses differences in instrument design, product definition and generation, and the merging of data from both geostationary instruments.
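The two-step recipe, sampling pulse parameters from empirical distributions and placing pulses only where the CTH product indicates convection, might look like this in outline (the grid, threshold, and inverse-CDF function are illustrative stand-ins, not the paper's actual data):

```python
import random

def simulate_pulses(cth_grid, cth_threshold_m, n_pulses, radiance_icdf, rng):
    """Place simulated optical lightning pulses at random pixels whose
    cloud-top height marks convective cloud, drawing each pulse radiance
    from an empirical distribution via inverse-CDF sampling."""
    convective = [(i, j) for i, row in enumerate(cth_grid)
                  for j, h in enumerate(row) if h >= cth_threshold_m]
    pulses = []
    for _ in range(n_pulses):
        i, j = rng.choice(convective)               # random convective pixel
        pulses.append((i, j, radiance_icdf(rng.random())))
    return pulses

rng = random.Random(0)
cth = [[12000, 3000], [2000, 11000]]                # toy CTH grid (m)
# Toy inverse CDF: uniform radiance between 10 and 100 (arbitrary units)
pulses = simulate_pulses(cth, 8000, 5, lambda u: 10.0 + 90.0 * u, rng)
```

In the real method the inverse CDF would be built from the 15-year LIS climatology, and the same idea extends to pulse size, duration, and space-time clustering.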
An approach for quantitative image quality analysis for CT
NASA Astrophysics Data System (ADS)
Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe
2016-03-01
An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanner used in security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with the feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses of different CT imaging technologies in transportation security. To that end, we have designed, developed, and constructed phantoms that allow systematic and repeatable measurement of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path-length consistency, and object registration. Furthermore, we have developed a MATLAB-based image analysis toolkit to analyze CT-generated images of the phantoms and report these metrics in a format that is standardized across the considered models of CT scanner, allowing comparative image quality analysis within a CT model or between different models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates PCA components with sparse loadings, used in conjunction with Hotelling T² statistical analysis to compare, qualify, and detect faults in the tested systems.
Formation of parametric images using mixed-effects models: a feasibility study.
Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh
2016-03-01
Mixed-effects models have been widely used in the analysis of longitudinal data. By representing the parameters as a combination of fixed and random effects, mixed-effects models incorporate both within- and between-subject variation and can thereby improve parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach to generate parametric images from the medical imaging data of a single study. Assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters, including the perfusion fraction, pseudo-diffusion coefficient, and true diffusion coefficient, were estimated from diffusion-weighted MR images by fitting the IVIM model with NLME. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provided more accurate and precise estimates of the diffusion parameters than NLLS. Similarly, we found that NLME improved the signal-to-noise ratio of parametric images obtained from rat brain data. These data show that it is feasible to apply NLME to parametric image generation, and that parametric image quality can be improved accordingly. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool for improving parametric image quality. Copyright © 2015 John Wiley & Sons, Ltd.
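For context, the bi-exponential IVIM signal model and a segmented variant of the conventional per-voxel fit (one common form of the NLLS baseline the NLME approach is compared against) can be sketched as follows; the b-values and parameters are illustrative:

```python
import math

def ivim_signal(b, s0, f, d_star, d):
    """IVIM model: S(b) = S0 * (f*exp(-b*D*) + (1-f)*exp(-b*D))."""
    return s0 * (f * math.exp(-b * d_star) + (1 - f) * math.exp(-b * d))

def segmented_fit(bvals, signals, b_cut=200.0):
    """Segmented IVIM fit: at high b the perfusion term has decayed, so a
    log-linear fit of the tail gives D, and extrapolating its intercept to
    b=0 gives S0*(1-f), hence f. Assumes bvals[0] == 0 so signals[0] = S0."""
    s0 = signals[0]
    hi = [(b, math.log(s)) for b, s in zip(bvals, signals) if b >= b_cut]
    n = len(hi)
    mb = sum(b for b, _ in hi) / n
    ml = sum(l for _, l in hi) / n
    d = -sum((b - mb) * (l - ml) for b, l in hi) / sum((b - mb) ** 2 for b, _ in hi)
    intercept = math.exp(ml + d * mb)      # extrapolated S0*(1-f)
    f = 1.0 - intercept / s0
    return f, d

bvals = [0, 50, 200, 400, 600, 800]        # s/mm^2, illustrative
sig = [ivim_signal(b, 1.0, 0.1, 0.02, 0.001) for b in bvals]
f_hat, d_hat = segmented_fit(bvals, sig)   # recovers f ~ 0.1, D ~ 0.001
```

Fitting each voxel in isolation like this is exactly what makes NLLS noise-sensitive; NLME instead shares information across voxels through the random-effects distribution.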
Utilization of 3D imaging flash lidar technology for autonomous safe landing on planetary bodies
NASA Astrophysics Data System (ADS)
Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrottet, Diego; Busch, George; Bulyshev, Alexander
2010-01-01
NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed-wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify areas for technology improvement. The ongoing technology advancement activities are then explained and their goals described.
Utilization of 3-D Imaging Flash Lidar Technology for Autonomous Safe Landing on Planetary Bodies
NASA Technical Reports Server (NTRS)
Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrotter, Diego; Busch, George; Bulyshev, Alexander
2010-01-01
NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify the areas of technology improvement. The ongoing technology advancement activities are then explained and their goals are described.
NASA Astrophysics Data System (ADS)
Ben-Zikri, Yehuda Kfir; Linte, Cristian A.
2016-03-01
Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration, and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if identified inconsistently, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, leading to incomplete assessment during computer-aided diagnosis, or to incomplete visualization or identification of the surgical targets when employed for pre-procedural planning or image-guided interventions. The need for robust, accurate, and computationally efficient region of interest localization is therefore prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short-axis left ventricle slice images, with a small training set generated from less than 10% of the population. It first uses histogram-of-oriented-gradients features weighted by local intensities to identify an initial region of interest, depicting the left and right ventricles, that exhibits the greatest extent of cardiac motion. This region is correlated, using feature-vector correlation techniques, with the homologous region of the training dataset that best matches the test image. Lastly, the optimal left ventricle region of interest of the test image is identified based on the correlation of the known ground-truth segmentations associated with the training data deemed closest to the test image.
The proposed approach was tested on a population of 100 patient datasets and validated against ground-truth regions of interest manually annotated by experts. The tool successfully identified a mask around the LV and RV and, furthermore, the minimal region of interest around the LV that fully enclosed the left ventricle in all testing datasets, yielding a 98% overlap with the corresponding ground truth. The mean absolute distance error between the two contours, normalized by the radius of the ground truth, was 0.20 ± 0.09.
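The motion cue underlying the initial ROI step can be illustrated with a toy temporal-variance map; the paper's actual features are HOG descriptors weighted by local intensity, but the same principle applies: the cardiac region is where intensity changes most from frame to frame.

```python
from statistics import pvariance

def motion_map(frames):
    """Per-pixel intensity variance across cine frames; the beating heart
    produces the largest frame-to-frame variation."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[pvariance([f[i][j] for f in frames]) for j in range(cols)]
            for i in range(rows)]

def peak_pixel(vmap):
    """Pixel with the largest temporal variance seeds the region of interest."""
    return max(((i, j) for i in range(len(vmap)) for j in range(len(vmap[0]))),
               key=lambda ij: vmap[ij[0]][ij[1]])

# Three toy 2x2 frames: only pixel (1, 1) changes substantially over time.
frames = [[[0, 0], [0, 10]], [[0, 0], [0, 50]], [[0, 1], [0, 90]]]
print(peak_pixel(motion_map(frames)))  # (1, 1)
```

A real implementation would grow a window around this seed rather than use a single pixel, then match it against the training set as described above.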
Srivastava, Nishant R; Troyk, Philip R; Dagnelie, Gislin
2014-01-01
In order to assess visual performance using a future cortical prosthesis device, the ability of normally sighted and low-vision subjects to adapt to a dotted 'phosphene' image was studied. Similar studies have been conducted in the past, and adaptation to phosphene maps has been shown, but the phosphene maps used were square or hexagonal in pattern. The phosphene map implemented for this testing is what is expected from cortical implantation of arrays of intracortical electrodes generating multiple phosphenes; the dotted image created depends on the surgical location of the electrodes chosen for implantation and the expected cortical response. The subjects under test were required to perform tasks requiring visual inspection, eye–hand coordination, and wayfinding. The subjects had no tactile feedback, and the visual information provided consisted of live dotted images captured by a camera on a head-mounted low-vision enhancing system and processed through a filter generating images similar to those we expect blind persons to perceive. The images were locked to the subject's gaze by means of video-based pupil tracking. In the detection and visual inspection task, the subject scanned a modified checkerboard and counted the number of square white fields; in the eye–hand coordination task, the subject placed black checkers on the white fields of the checkerboard; and in the wayfinding task, the subjects maneuvered through a virtual maze using a game controller. Accuracy and time to complete each task were the measured outcomes. Surgical studies by this research group suggest it might be possible to implant up to 650 electrodes; hence, 650 dots were used to create the images, and performance was studied under 0% dropout (650 dots), 25% dropout (488 dots), and 50% dropout (325 dots) conditions.
It was observed that all the subjects under test were able to learn the given tasks and showed improvement in performance with practice even with a dropout condition of 50% (325 dots). Hence, if a cortical prosthesis is implanted in human subjects, they might be able to perform similar tasks and with practice should be able to adapt to dotted images even with a low resolution of 325 dots of phosphene. PMID:19458397
Effective structural descriptors for natural and engineered radioactive waste confinement barriers
NASA Astrophysics Data System (ADS)
Lemmens, Laurent; Rogiers, Bart; De Craen, Mieke; Laloy, Eric; Jacques, Diederik; Huysmans, Marijke; Swennen, Rudy; Urai, Janos L.; Desbois, Guillaume
2017-04-01
The microstructure of a radioactive waste confinement barrier strongly influences its flow and transport properties. Numerical flow and transport simulations for these porous media at the pore scale therefore require input data that describe the microstructure as accurately as possible. To date, no imaging method can resolve all heterogeneities within important radioactive waste confinement barrier materials, such as hardened cement paste and natural clays, at the microscale (nm to cm). It is therefore necessary to merge information from different 2D and 3D imaging methods using porous media reconstruction techniques. To qualitatively compare the results of different reconstruction techniques, visual inspection might suffice; to quantitatively compare training-image based algorithms, Tan et al. (2014) proposed an algorithm using an analysis of distance. However, the resulting ranking depends on the choice of structural descriptor, in their case multiple-point or cluster-based histograms. We present preliminary work in which we review different structural descriptors and test their effectiveness in capturing the main structural characteristics of radioactive waste confinement barrier materials, in order to determine the descriptors to use in the analysis of distance. The investigated descriptors are particle size distributions, surface area distributions, two-point probability functions, multiple-point histograms, linear functions, and two-point cluster functions. The descriptor testing consists of stochastically generating realizations from a reference image using the simulated annealing optimization procedure introduced by Karsanina et al. (2015). This procedure minimizes the differences between pre-specified descriptor values associated with the training image and the image being produced. The most efficient descriptor set can therefore be identified by comparing image generation quality among the tested descriptor combinations.
The quality of the simulations will be assessed by combining all considered descriptors. Once the set of the most efficient descriptors is determined, it can be used in the analysis of distance to rank different reconstruction algorithms more objectively in future work. Karsanina, M. V., Gerke, K. M., Skvortsova, E. B., and Mallants, D. (2015). Universal spatial correlation functions for describing and reconstructing soil microstructure. PLoS ONE 10(5): e0126515. doi:10.1371/journal.pone.0126515. Tan, X., Tahmasebi, P., and Caers, J. (2014). Comparing training-image based algorithms using an analysis of distance. Mathematical Geosciences 46(2): 149-169.
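The simulated annealing procedure described above can be sketched in a minimal form: starting from a random binary image with (approximately) the training image's solid fraction, pairs of pixels are swapped and a swap is kept whenever it reduces the mismatch between a chosen descriptor of the realization and that of the training image. The sketch below uses a single two-point probability function along one axis; the function names, single-descriptor setup, and cooling schedule are illustrative simplifications, not the authors' implementation.

```python
import numpy as np

def two_point_probability(img, max_lag=8):
    """S2(r): probability that two pixels a lag r apart (along x) are both solid."""
    return np.array([(img * np.roll(img, r, axis=1)).mean()
                     for r in range(1, max_lag + 1)])

def anneal_reconstruct(train, shape, steps=20000, t0=1e-3, seed=0):
    """Generate a realization whose descriptor matches that of the training image."""
    rng = np.random.default_rng(seed)
    target = two_point_probability(train)
    # random start with (approximately) the training image's solid fraction
    img = (rng.random(shape) < train.mean()).astype(np.uint8)
    energy = float(np.sum((two_point_probability(img) - target) ** 2))
    for k in range(steps):
        # swap one solid and one void pixel; this preserves the solid fraction
        solid = np.argwhere(img == 1)
        void = np.argwhere(img == 0)
        a = tuple(solid[rng.integers(len(solid))])
        b = tuple(void[rng.integers(len(void))])
        img[a], img[b] = img[b], img[a]
        new_energy = float(np.sum((two_point_probability(img) - target) ** 2))
        temp = max(t0 * (1.0 - k / steps), 1e-12)   # linear cooling schedule
        if new_energy > energy and rng.random() >= np.exp((energy - new_energy) / temp):
            img[a], img[b] = img[b], img[a]          # reject: undo the swap
        else:
            energy = new_energy
    return img, energy
```

In practice several descriptors would be summed into one energy term, which is exactly the comparison the preliminary work above sets out to make.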
NASA Astrophysics Data System (ADS)
Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei
2018-02-01
Based on the discrete algebraic reconstruction technique (DART), this study develops and tests an improved algorithm, DART-ALBM, applied to incomplete projection data to generate high-quality reconstructed images by reducing artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian method based on compressed sensing is first used in the initial reconstruction for the segmentation step of DART, yielding higher contrast for boundary and non-boundary pixels. A block-matching 3D filtering operator is then used to suppress noise and improve the gray-level distribution of the reconstructed image. Finally, simulation studies on a polychromatic spectrum were performed to test the performance of the new algorithm. The results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of images reconstructed from incomplete data: the SNRs and AGs of images reconstructed by DART-ALBM were on average 30%-40% and 10% higher, respectively, than those of images reconstructed by DART. Because DART-ALBM is more robust in limited-view reconstruction, producing clear image edges as well as a better gray-level distribution of non-boundary pixels, it has the potential to improve image quality from incomplete or sparse projections.
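The study quantifies reconstruction quality with SNR and average gradient. The abstract does not give the exact formulas, so the sketch below uses two conventional definitions: SNR as the ratio of reference signal power to reconstruction error power in dB, and average gradient as the mean magnitude of the image gradient.

```python
import numpy as np

def snr_db(reference, reconstructed):
    """SNR in dB: reference signal power over reconstruction error power."""
    ref = reference.astype(float)
    noise = ref - reconstructed.astype(float)
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

def average_gradient(img):
    """Average gradient: mean of sqrt((gx^2 + gy^2) / 2); higher means sharper detail."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

A 30%-40% SNR gain and a 10% AG gain, as reported, would correspond to both less residual error and better-preserved edges under these definitions.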
Miki, Kohei; Masamune, Ken
2015-10-01
Low-field open magnetic resonance imaging (MRI) is frequently used for performing image-guided neurosurgical procedures. Intraoperative magnetic resonance (MR) images are useful for tracking brain shifts and verifying residual tumors. However, it is difficult to precisely determine the boundary of the brain tumors and normal brain tissues because the MR image resolution is low, especially when using a low-field open MRI scanner. To overcome this problem, a high-resolution MR image acquisition system was developed and tested. An MR-compatible manipulator with pneumatic actuators containing an MR signal receiver with a small radiofrequency (RF) coil was developed. The manipulator had five degrees of freedom for position and orientation control of the RF coil. An 8-mm planar RF coil with resistance and inductance of 2.04 [Formula: see text] and 1.00 [Formula: see text] was attached to the MR signal receiver at the distal end of the probe. MR images of phantom test devices were acquired using the MR signal receiver and normal head coil for signal-to-noise ratio (SNR) testing. The SNR of MR images acquired using the MR signal receiver was 8.0 times greater than that of MR images acquired using the normal head coil. The RF coil was moved by the manipulator, and local MR images of a phantom with a 2-mm grid were acquired using the MR signal receiver. A wide field-of-view MR image was generated from a montage of local MR images. A small field-of-view RF system with a pneumatic manipulator was integrated in a low-field MRI scanner to allow acquisition of both wide field-of-view and high-resolution MR images. This system is promising for image-guided neurosurgery as it may allow brain tumors to be observed more clearly and removed precisely.
Single-pixel computational ghost imaging with helicity-dependent metasurface hologram.
Liu, Hong-Chao; Yang, Biao; Guo, Qinghua; Shi, Jinhui; Guan, Chunying; Zheng, Guoxing; Mühlenbernd, Holger; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang
2017-09-01
Different optical imaging techniques are based on different characteristics of light. By controlling the abrupt phase discontinuities with different polarized incident light, a metasurface can host a phase-only and helicity-dependent hologram. In contrast, ghost imaging (GI) is an indirect imaging modality to retrieve the object information from the correlation of the light intensity fluctuations. We report single-pixel computational GI with a high-efficiency reflective metasurface in both simulations and experiments. Playing a fascinating role in switching the GI target with different polarized light, the metasurface hologram generates helicity-dependent reconstructed ghost images and successfully introduces an additional security lock in a proposed optical encryption scheme based on the GI. The robustness of our encryption scheme is further verified with the vulnerability test. Building the first bridge between the metasurface hologram and the GI, our work paves the way to integrate their applications in the fields of optical communications, imaging technology, and security.
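Computational GI of the kind described retrieves the object from correlations between known illumination patterns and single-pixel (bucket) detector readings. A minimal simulation sketch follows, using the standard covariance estimator; the object, pattern statistics, and pattern count are illustrative stand-ins for the metasurface-generated holographic patterns of the paper.

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Reconstruct an object as the covariance <B_i P_i> - <B><P>.

    patterns : (n, h, w) known illumination patterns
    bucket   : (n,) single-pixel detector readings, B_i = sum(P_i * object)
    """
    b = bucket - bucket.mean()
    return np.tensordot(b, patterns - patterns.mean(axis=0), axes=1) / len(bucket)

# simulate single-pixel measurements of a hypothetical binary object
rng = np.random.default_rng(1)
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0
patterns = rng.random((4000, 16, 16))
bucket = np.tensordot(patterns, obj, axes=2)   # one bucket value per pattern
g = ghost_image(patterns, bucket)              # bright where the object is
```

The helicity-dependent hologram in the paper effectively switches which object the patterns encode, which is what enables the polarization-keyed encryption scheme.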
Scalable ranked retrieval using document images
NASA Astrophysics Data System (ADS)
Jain, Rajiv; Oard, Douglas W.; Doermann, David
2013-12-01
Despite the explosion of text on the Internet, hard copy documents that have been scanned as images still play a significant role in some tasks. The best method for performing ranked retrieval on a large corpus of document images, however, remains an open research question. The most common approach has been to perform text retrieval using terms generated by optical character recognition. This paper, by contrast, examines whether a scalable segmentation-free image retrieval algorithm, which matches sub-images containing text or graphical objects, can provide additional benefit in satisfying a user's information needs on a large, real-world dataset. Results on 7 million scanned pages from the CDIP v1.0 test collection show that content-based image retrieval finds a substantial number of documents that text retrieval misses, and that, when used as a basis for relevance feedback, it can yield improvements in retrieval effectiveness.
Development and validation of a Social Images Evaluation Questionnaire for youth in residential care
2017-01-01
Social images are defined as prevailing shared ideas about specific groups or societies without concrete or objective evidence of their accuracy or truthfulness. These images frequently have a negative impact on individuals and groups. Although of utmost importance, the study of the social images of youth in residential care is still scarce. In this article we present two studies for the development and validation of the Social Images Evaluation Questionnaire (SIEQ). In study 1, participants were asked to freely generate words that could be associated with youth in residential care in order to obtain a list of attributes to be used in the SIEQ. In study 2, the main psychometric characteristics of the SIEQ were tested with samples of laypeople and professionals. The main results support the proposal of a new and psychometrically sound measure, the SIEQ, to analyze the social images of youth in residential care. PMID:28662056
Zhao, Ming; Li, Yu; Peng, Leilei
2014-01-01
We report a fast non-iterative lifetime data analysis method for the Fourier multiplexed frequency-sweeping confocal FLIM (Fm-FLIM) system [Opt. Express 22, 10221 (2014); PMID: 24921725]. The new method, named the R-method, allows fast multi-channel lifetime image analysis in the system’s FPGA data processing board. Experimental tests proved that the performance of the R-method is equivalent to that of single-exponential iterative fitting, and its sensitivity is well suited for time-lapse FLIM-FRET imaging of live cells, for example cyclic adenosine monophosphate (cAMP) level imaging with GFP-Epac-mCherry sensors. With the R-method and its FPGA implementation, multi-channel lifetime images can now be generated in real time on the multi-channel frequency-sweeping FLIM system, and live readout of FRET sensors can be performed during time-lapse imaging. PMID:25321778
NASA Astrophysics Data System (ADS)
Kamangir, H.; Momeni, M.; Satari, M.
2017-09-01
This paper presents an automatic method for extracting road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered with multiple natural and artificial objects such as trees, vehicles, and shadows of buildings or trees. To achieve precise road extraction, the method comprises three stages: classification of images based on the maximum likelihood algorithm to categorize images into the classes of interest; modification of the classified images by connected-component and morphological operators to extract pixels of desired objects while removing undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. To evaluate the proposed method, the generated results are compared with a ground truth road map as a reference. The evaluation on representative test images shows completeness values ranging between 77% and 93%.
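The final stage relies on RANSAC to fit lines through candidate road pixels. A minimal sketch of RANSAC line fitting on 2D points follows; the function name, iteration count, and tolerance are illustrative, since the paper's parameterization is not given in the abstract.

```python
import numpy as np

def ransac_line(points, n_iter=500, tol=1.0, seed=0):
    """Fit a line a*x + b*y = c (a^2 + b^2 = 1) to 2D points, tolerating outliers."""
    rng = np.random.default_rng(seed)
    best_params, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        # hypothesize a line through two randomly chosen points
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.hypot(d[0], d[1])
        if norm == 0.0:
            continue
        a, b = -d[1] / norm, d[0] / norm        # unit normal of the candidate line
        c = a * points[i, 0] + b * points[i, 1]
        # count points within tol of the candidate line
        inliers = np.abs(points @ np.array([a, b]) - c) < tol
        if inliers.sum() > best_inliers.sum():
            best_params, best_inliers = (a, b, c), inliers
    return best_params, best_inliers
```

Repeatedly fitting, removing inliers, and refitting would extract multiple road segments from the classified pixel mask.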
Van den Abbeele, Annick D; Krajewski, Katherine M; Tirumani, Sree Harsha; Fennessy, Fiona M; DiPiro, Pamela J; Nguyen, Quang-Dé; Harris, Gordon J; Jacene, Heather A; Lefever, Greg; Ramaiya, Nikhil H
2016-04-01
The authors propose one possible vision for the transformative role that cancer imaging in an academic setting can play in the current era of personalized and precision medicine by sharing a conceptual model that is based on experience and lessons learned designing a multidisciplinary, integrated clinical and research practice at their institution. The authors' practice and focus are disease-centric rather than imaging-centric. A "wall-less" infrastructure has been developed, with bidirectional integration of preclinical and clinical cancer imaging research platforms, enabling rapid translation of novel cancer drugs from discovery to clinical trial evaluation. The talents and expertise of medical professionals, scientists, and staff members have been coordinated in a horizontal and vertical fashion through the creation of Cancer Imaging Consultation Services and the "Adopt-a-Radiologist" campaign. Subspecialized imaging consultation services at the hub of an outpatient cancer center facilitate patient decision support and management at the point of care. The Adopt-a-Radiologist campaign has led to the creation of a novel generation of imaging clinician-scientists, fostered new collaborations, increased clinical and academic productivity, and improved employee satisfaction. Translational cancer research is supported, with a focus on early in vivo testing of novel cancer drugs, co-clinical trials, and longitudinal tumor imaging metrics through the imaging research core laboratory. Finally, a dedicated cancer imaging fellowship has been developed, promoting the future generation of cancer imaging specialists as multidisciplinary, multitalented professionals who are trained to effectively communicate with clinical colleagues and positively influence patient care. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Minato, Shohei; Ghose, Ranajit; Tsuji, Takeshi; Ikeda, Michiharu; Onishi, Kozo
2017-10-01
Fluid-filled fractures and fissures often determine the pathways and volume of fluid movement. They are critically important in crustal seismology and in the exploration of geothermal and hydrocarbon reservoirs. We introduce a model for tube wave scattering and generation at dipping, parallel-wall fractures intersecting a fluid-filled borehole. A new equation reveals the interaction of the tube wavefield with multiple, closely spaced fractures, showing that the fracture dip significantly affects the tube waves. Numerical modeling demonstrates the possibility of imaging these fractures using a focusing analysis. The focused traces correspond well with the known fracture density, aperture, and dip angles. Testing the method on a VSP data set obtained at a fault-damaged zone in the Median Tectonic Line, Japan, presents evidence of tube waves being generated and scattered at open fractures and thin cataclasite layers. This finding leads to a new possibility for imaging, characterizing, and monitoring in situ hydraulic properties of dipping fractures using the tube wavefield.
Ohara, Nobumasa; Kaneko, Masanori; Kitazawa, Masaru; Uemura, Yasuyuki; Minagawa, Shinichi; Miyakoshi, Masashi; Kaneko, Kenzo; Kamoi, Kyuzi
2017-02-06
Graves' disease is an autoimmune thyroid disorder characterized by hyperthyroidism, and patients exhibit thyroid-stimulating hormone receptor antibody. The major methods of measuring circulating thyroid-stimulating hormone receptor antibody include the thyroid-stimulating hormone-binding inhibitory immunoglobulin assays. Although the diagnostic accuracy of these assays has been improved, a minority of patients with Graves' disease test negative even on second-generation and third-generation thyroid-stimulating hormone-binding inhibitory immunoglobulins. We report a rare case of a thyroid-stimulating hormone-binding inhibitory immunoglobulin-positive patient with Graves' disease who showed rapid lowering of thyroid-stimulating hormone-binding inhibitory immunoglobulin levels following administration of the anti-thyroid drug thiamazole, but still experienced Graves' hyperthyroidism. A 45-year-old Japanese man presented with severe hyperthyroidism (serum free triiodothyronine >25.0 pg/mL; reference range 1.7 to 3.7 pg/mL) and tested weakly positive for thyroid-stimulating hormone-binding inhibitory immunoglobulins on second-generation tests (2.1 IU/L; reference range <1.0 IU/L). Within 9 months of treatment with oral thiamazole (30 mg/day), his thyroid-stimulating hormone-binding inhibitory immunoglobulin titers had normalized, but he experienced sustained hyperthyroidism for more than 8 years, requiring 15 mg/day of thiamazole to correct. During that period, he tested negative on all first-generation, second-generation, and third-generation thyroid-stimulating hormone-binding inhibitory immunoglobulin assays, but thyroid scintigraphy revealed diffuse and increased uptake, and thyroid ultrasound and color flow Doppler imaging showed typical findings of Graves' hyperthyroidism. 
The possible explanations for serial changes in the thyroid-stimulating hormone-binding inhibitory immunoglobulin results in our patient include the presence of thyroid-stimulating hormone receptor antibody, which is bioactive but less reactive on thyroid-stimulating hormone-binding inhibitory immunoglobulin assays, or the effect of reduced levels of circulating thyroid-stimulating hormone receptor antibody upon improvement of thyroid autoimmunity with thiamazole treatment. Physicians should keep in mind that patients with Graves' disease may show thyroid-stimulating hormone-binding inhibitory immunoglobulin assay results that do not reflect the severity of Graves' disease or indicate the outcome of the disease, and that active Graves' disease may persist even after negative results on thyroid-stimulating hormone-binding inhibitory immunoglobulin assays. Timely performance of thyroid function tests in combination with sensitive imaging tests, including thyroid ultrasound and scintigraphy, are necessary to evaluate the severity of Graves' disease and treatment efficacy.
Language Lateralization in Children Using Functional Transcranial Doppler Sonography
ERIC Educational Resources Information Center
Haag, Anja; Moeller, Nicola; Knake, Susanne; Hermsen, Anke; Oertel, Wolfgang H.; Rosenow, Felix; Hamer, Hajo M.
2010-01-01
Aim: Language lateralization with functional transcranial Doppler sonography (fTCD) and lexical word generation has been shown to have high concordance with the Wada test and functional magnetic resonance imaging in adults. We evaluated a nonlexical paradigm to determine language dominance in children. Method: In 23 right-handed children (12…
A Psychophysical Test of the Visual Pathway of Children with Autism
ERIC Educational Resources Information Center
Sanchez-Marin, Francisco J.; Padilla-Medina, Jose A.
2008-01-01
Signal detection psychophysical experiments were conducted to investigate the visual path of children with autism. Computer generated images with Gaussian noise were used. Simple signals, still and in motion were embedded in the background noise. The computer monitor was linearized to properly display the contrast changes. To our knowledge, this…
Infrared thermal imagers for avionic applications
NASA Astrophysics Data System (ADS)
Uda, Gianni; Livi, Massimo; Olivieri, Monica; Sabatini, Maurizio; Torrini, Daniele; Baldini, Stefano; Bardazzi, Riccardo; Falli, Pietro; Maestrini, Mauro
1999-07-01
This paper deals with the design of two second-generation thermal imagers that Alenia Difesa OFFICINE GALILEO has successfully developed for the navigation FLIR of the NH90 Tactical Transportation Helicopter (NH90 TTH) and for the Electro-Optical Surveillance and Tracking System for the Italian 'Guardia di Finanza' ATR42 Maritime Patrol Aircraft (ATR42 MPA). Small size, light weight and low power consumption have been the main design goals of the two programs. In particular, the NH90 TTH thermal imager is a compact camera operating in the 8-12 micrometer band with a single wide field of view. The thermal imager developed for the ATR42 MPA features an objective with three remotely switchable fields of view, equipped with diffractive optics. Performance goals, innovative design aspects and test results of these two thermal imagers are reported.
NASA Technical Reports Server (NTRS)
Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.
1991-01-01
The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.
Mennes, Maarten
2016-03-01
'Big Data' and 'Population Imaging' are becoming integral parts of inspiring research aimed at delineating the biological underpinnings of psychiatric disorders. The scientific strategies currently associated with big data and population imaging are typically embedded in so-called discovery science, thereby pointing to the hypothesis-generating rather than hypothesis-testing nature of discovery science. In this issue, Yihong Zhao and F. Xavier Castellanos provide a compelling overview of strategies for discovery science aimed at progressing our understanding of neuropsychiatric disorders. In particular, they focus on efforts in genetic and neuroimaging research, which, together with extended behavioural testing, form the main pillars of psychopathology research. © 2016 Association for Child and Adolescent Mental Health.
A Comparison of the AVS-9 and the Panoramic Night Vision Goggles During Rotorcraft Hover and Landing
NASA Technical Reports Server (NTRS)
Szoboszlay, Zoltan; Haworth, Loran; Simpson, Carol
2000-01-01
A flight test was conducted to assess any differences in pilot-vehicle performance and pilot opinion between the use of a current generation night vision goggle (the AVS-9) and one variant of the prototype panoramic night vision goggle (the PNVGII). The panoramic goggle has more than double the horizontal field-of-view of the AVS-9, but reduced image quality. Overall the panoramic goggles compared well to the AVS-9 goggles. However, pilot comment and data are consistent with the assertion that some of the benefits of additional field-of-view with the panoramic goggles were negated by the reduced image quality of the particular variant of the panoramic goggles tested.
A Kinect(™) camera based navigation system for percutaneous abdominal puncture.
Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao
2016-08-07
Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. With the recent release of the second-generation Kinect(™), we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and compared its needle insertion guidance performance with that of the first-generation Kinect(™). For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect(™) depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and on six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect(™) for Windows version 2 (Kinect(™) V2). The target registration error (TRE), user error, and TPE were 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE with respect to operator's skill or trajectory were observed. Additionally, a Kinect(™) for Windows version 1 (Kinect(™) V1) was tested with 12 insertions; the TRE evaluated with the Kinect(™) V1 was statistically significantly larger than that with the Kinect(™) V2. For the animal experiment, fifteen artificial liver tumors were punctured under the guidance of the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively.
This study demonstrates that the navigation accuracy of the proposed system is acceptable, and that the second generation Kinect(™)-based navigation is superior to the first-generation Kinect(™), and has potential of clinical application in percutaneous abdominal puncture.
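The surface matching step described above uses the iterative closest point algorithm. A minimal point-to-point ICP sketch follows, alternating brute-force nearest-neighbour matching with an SVD (Kabsch) rigid-transform solve; the authors' actual implementation, initialization, and surface sampling are not reproduced here.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Iterative closest point: alternate nearest-neighbour matching and alignment."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur
```

Because plain ICP only converges from a close initial pose, the paper's 2D shape-based correspondence search to seed the alignment is an important complement to this loop.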
NASA Technical Reports Server (NTRS)
2006-01-01
Topics covered include: Medical Signal-Conditioning and Data-Interface System; Instruments for Reading Direct-Marked Data-Matrix Symbols; Processing EOS MLS Level-2 Data; Ground Processing of Data From the Mars Exploration Rovers; Estimating Total Electron Content Using 1,000+ GPS Receivers; NASA Solar Array Demonstrates Commercial Potential; Improved Control of Charging Voltage for Li-Ion Battery; Programmable Pulse-Position-Modulation Encoder; Wavelength-Agile External-Cavity Diode Laser for DWDM; Pattern-Recognition Processor Using Holographic Photopolymer; Submicrosecond Power-Switching Test Circuit; Three-Function Logic Gate Controlled by Analog Voltage; Integrated System for Autonomous Science; Montage Version 3.0; Utilizing AI in Temporal, Spatial, and Resource Scheduling; Satellite Image Mosaic Engine; Architecture for Control of the K9 Rover; HFGMC Enhancement of MAC/GMC; Automated Activation and Deactivation of a System Under Test; Cleaning Carbon Nanotubes by Use of Mild Oxygen Plasmas; Generating Aromatics From CO2 on Mars or Natural Gas on Earth; Attaching Thermocouples by Peening or Crimping; Heat Treatment of Friction-Stir-Welded 7050 Aluminum Plates; Generating Breathable Air Through Dissociation of N2O; High-Performance Scanning Acousto-Ultrasonic System; Correction for Thermal EMFs in Thermocouple Feedthroughs; Using Quasiparticle Poisoning To Detect Photons; Estimating Resolution Lengths of Hybrid Turbulence Models; Education and Training Module in Alertness Management; Cargo-Positioning System for Next-Generation Spacecraft; Micro-Imagers for Spaceborne Cell-Growth Experiments; Holographic Solar Photon Thrusters; Plasma-Based Detector of Outer-Space Dust Particles; and Generation of Data-Rate Profiles of Ka-Band Deep-Space Links.
Neural Network for Nanoscience Scanning Electron Microscope Image Recognition.
Modarres, Mohammad Hadi; Aversa, Rossella; Cozzini, Stefano; Ciancio, Regina; Leto, Angelo; Brandino, Giuseppe Piero
2017-10-16
In this paper we applied transfer learning techniques for image recognition, automatic categorization, and labeling of nanoscience images obtained by scanning electron microscope (SEM). Roughly 20,000 SEM images were manually classified into 10 categories to form a labeled training set, which can be used as a reference set for future applications of deep-learning-enhanced algorithms in the nanoscience domain. The categories chosen spanned the range of 0-dimensional (0D) objects such as particles, 1D nanowires and fibres, 2D films and coated surfaces, and 3D patterned surfaces such as pillars. The training set was used to retrain several convolutional neural network models (Inception-v3, Inception-v4, ResNet) on the SEM dataset and to compare their performance. Compatible results were obtained by performing feature extraction with the different models on the same dataset. We performed additional analysis of the classifier on a second test set to further investigate the results, both for particular cases and from a statistical point of view. Our algorithm was able to successfully classify around 90% of a test dataset consisting of SEM images, while reduced accuracy was found for images at the boundary between two categories or containing elements of multiple categories. In these cases, the image classification did not identify a predominant category with a high score. We used the statistical outcomes from testing to deploy a semi-automatic workflow able to classify and label images generated by the SEM. Finally, a separate training was performed to determine the volume fraction of coherently aligned nanowires in SEM images, and the results were compared with those obtained using the Local Gradient Orientation method. This example demonstrates the versatility and the potential of transfer learning to address specific tasks of interest in nanoscience applications.
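Transfer learning of the kind described typically freezes a pretrained CNN and retrains only the final classification layer on extracted feature vectors. The sketch below trains such a softmax head with plain numpy gradient descent; synthetic feature vectors stand in for Inception/ResNet activations, and the hyperparameters are illustrative, not the paper's.

```python
import numpy as np

def train_softmax_head(feats, labels, n_classes, lr=0.5, epochs=200):
    """Train only a softmax output layer on fixed (frozen) CNN feature vectors."""
    n, d = feats.shape
    W, b = np.zeros((d, n_classes)), np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]                    # one-hot targets
    for _ in range(epochs):
        z = feats @ W + b
        z -= z.max(axis=1, keepdims=True)            # numerical stability
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        g = (p - Y) / n                              # softmax cross-entropy gradient
        W -= lr * feats.T @ g
        b -= lr * g.sum(axis=0)
    return W, b

def predict(feats, W, b):
    return (feats @ W + b).argmax(axis=1)
```

The reported drop in accuracy for boundary-case images corresponds, in this formulation, to test samples for which no output of the softmax dominates.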
A signature dissimilarity measure for trabecular bone texture in knee radiographs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woloszynski, T.; Podsiadlo, P.; Stachowiak, G. W.
Purpose: The purpose of this study is to develop a dissimilarity measure for the classification of trabecular bone (TB) texture in knee radiographs. Problems associated with the traditional extraction and selection of texture features and with invariance to imaging conditions such as image size, anisotropy, noise, blur, exposure, magnification, and projection angle were addressed. Methods: In the method developed, called a signature dissimilarity measure (SDM), a sum of earth mover's distances calculated for roughness and orientation signatures is used to quantify dissimilarities between textures. Scale-space theory was used to ensure scale and rotation invariance. The effects of image size, anisotropy, noise, and blur on the SDM developed were studied using computer-generated fractal texture images. The invariance of the measure to image exposure, magnification, and projection angle was studied using x-ray images of a human tibia head. For these studies, Mann-Whitney tests with a significance level of 0.01 were used. A comparison study was conducted between the performance of an SDM-based classification system and that of two other systems in the classification of Brodatz textures and the detection of knee osteoarthritis (OA). The other systems are based on weighted neighbor distance using a compound hierarchy of algorithms representing morphology (WND-CHARM) and on local binary patterns (LBP). Results: The results obtained indicate that the SDM developed is invariant to image exposure (2.5-30 mA s), magnification (×1.00-×1.35), noise associated with film graininess and quantum mottle (<25%), blur generated by a sharp film screen, and image size (>64×64 pixels). However, the measure is sensitive to changes in projection angle (>5 deg.), image anisotropy (>30 deg.), and blur generated by a regular film screen. For the classification of Brodatz textures, the SDM-based system produced results comparable to the LBP system.
For the detection of knee OA, the SDM-based system achieved 78.8% classification accuracy and outperformed the WND-CHARM system (64.2%). Conclusions: The SDM is well suited for the classification of TB texture images in knee OA detection and may be useful for the texture classification of medical images in general.
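The SDM sums earth mover's distances over roughness and orientation signatures. For 1D signatures with uniform bin spacing, the EMD reduces to the L1 distance between normalized cumulative distributions, which the sketch below exploits; the equal weighting of the two signatures is an assumption, not necessarily the authors' exact formulation.

```python
import numpy as np

def emd_1d(sig_a, sig_b):
    """EMD between two 1D signatures; for unit bin spacing it equals the L1
    distance between the normalized cumulative distributions."""
    a = np.asarray(sig_a, dtype=float)
    b = np.asarray(sig_b, dtype=float)
    return float(np.abs(np.cumsum(a / a.sum() - b / b.sum())).sum())

def signature_dissimilarity(rough_a, rough_b, orient_a, orient_b):
    """SDM-style score: sum of the EMDs of roughness and orientation signatures."""
    return emd_1d(rough_a, rough_b) + emd_1d(orient_a, orient_b)
```

Because the EMD compares whole distributions rather than selected scalar features, it sidesteps the feature extraction and selection problems the study sets out to avoid.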
Enhancement of digital radiography image quality using a convolutional neural network.
Sun, Yuewen; Li, Litao; Cong, Peng; Wang, Zhentao; Guo, Xiaojing
2017-01-01
Digital radiography systems are widely used for noninvasive security checks and medical imaging examinations. However, such systems are limited by lower image quality in terms of spatial resolution and signal-to-noise ratio. In this study, we explored whether the image quality achieved by a digital radiography system can be improved with a modified convolutional neural network that generates high-resolution images with reduced noise from the original low-quality images. An experiment on a test dataset containing 5 X-ray images showed that the proposed method outperformed traditional methods (i.e., bicubic interpolation and the 3D block-matching approach) by about 1.3 dB in peak signal-to-noise ratio (PSNR), while keeping the processing time within one second. The experimental results demonstrated that a residual-to-residual (RTR) convolutional neural network remarkably improves the image quality of object structural details by increasing image resolution and reducing image noise. This study thus indicates that applying the RTR convolutional neural network is useful for improving the image quality acquired by digital radiography systems.
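The reported 1.3 dB improvement is measured in PSNR. A minimal implementation of the standard definition follows; the peak value of 255 for 8-bit images is an assumption, since the abstract does not state the bit depth used.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a processed image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    # identical images have zero error and hence infinite PSNR
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```

Since PSNR is logarithmic in the mean squared error, a 1.3 dB gain corresponds to roughly a 26% reduction in MSE.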
Processing and performance of self-healing materials
NASA Astrophysics Data System (ADS)
Tan, P. S.; Zhang, M. Q.; Bhattacharyya, D.
2009-08-01
Two self-healing methods were implemented in composite materials, using hollow glass fibres (HGF) and microencapsulated epoxy resin with mercaptan as the hardener. For the HGF approach, two perpendicular layers of HGF were embedded in an E-glass/epoxy composite and filled with coloured epoxy resin and hardener. The HGF samples were subjected to a novel ball indentation test and analysed using micro-CT scanning, confocal microscopy and penetrant dye. Micro-CT and confocal microscopy met with limited success, but their viability was established. Penetrant dye images showed resin obstructing the flow of dye through damaged regions, suggesting infiltration of resin into cracks. Three-point bend tests showed that overall performance could be affected by the flaws arising from embedding HGF in the material. For the microcapsule approach, samples were prepared for novel double-torsion tests used to generate large cracks. The samples were compared with pure resin samples by analysing crack surfaces with photoelastic imaging and scanning electron microscopy (SEM). Photoelastic imaging established the consolidation of cracks, while SEM showed a wide spread of microcapsules whose distribution was affected by gravity. Further double-torsion testing showed that healing recovered approximately 24% of material strength.
Multiple directed graph large-class multi-spectral processor
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki
1988-01-01
Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.
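The per-node criterion of the tree generator described above is the two-class Fisher linear discriminant. A minimal sketch of that criterion follows; the multiple-directed-graph machinery of the paper is not reproduced here, and the sample data are illustrative.

```python
import numpy as np

def fisher_direction(X1, X2):
    # Two-class Fisher linear discriminant: w = Sw^{-1} (m1 - m2),
    # where Sw is the pooled within-class scatter matrix. A tree
    # builder like the one described can apply this split at each node.
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1.T, bias=True) * len(X1) + np.cov(X2.T, bias=True) * len(X2)
    # Small ridge keeps the solve stable if Sw is near-singular.
    w = np.linalg.solve(Sw + 1e-9 * np.eye(Sw.shape[0]), m1 - m2)
    return w / np.linalg.norm(w)

X1 = np.array([[0., 0.], [1., 1.], [1., -1.], [-1., 1.], [-1., -1.], [0., 0.]])
X2 = X1 + np.array([10., 0.])   # second class shifted along the x axis
w = fisher_direction(X1, X2)    # points (anti)parallel to the x axis
```

Projecting spectra onto `w` maximizes between-class separation relative to within-class spread, which is why the criterion is a natural choice for automatically splitting a 500-class problem into a tree of binary decisions.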
Characterization techniques for incorporating backgrounds into DIRSIG
NASA Astrophysics Data System (ADS)
Brown, Scott D.; Schott, John R.
2000-07-01
The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms require well characterized field collection campaigns that can be time and cost prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement, reducing the scope of expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real world scenes, and that provide the large area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real world scene pixels, and applying target building techniques to them is time and resource prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide area coverage.
The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual sensors, temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods that demonstrate target embedding in real world imagery.
Photoacid generator study for chemically amplified negative resists for high-resolution lithography
NASA Astrophysics Data System (ADS)
Dentinger, Paul M.; Knapp, Kurtis G.; Reynolds, Geoffrey W.; Taylor, James W.; Fedynyshyn, Theodore H.; Richardson, Todd A.
1998-06-01
The effect of photoacid generator and photogenerated acid molecular structures on a negative-tone chemically-amplified resist was tested using two different sets of acid generators, each set with one formulation creating a 'volatile' acid and the other creating a 'non-volatile' acid when exposed to x-rays. The acids from one set were generated from an iodonium salt derivative and the acids from the other set were generated from a covalently bound photoacid generator. Both sets were compared to Shipley SAL 605 resist. In this study of five formulations, normalized remaining thickness (NRT) curves, SEM images of printed lines, spectrophotometric titration of the photogenerated acid, real-time FTIR for kinetics of the PEB reaction, dissolution rate measurements, and atomic force microscopy for surface roughness were employed. RT-FTIR suggested that both the proposed 'volatile' and 'non-volatile' acids were retained to approximately the same extent within the films cast from these formulations. A mechanism is suggested whereby the type of photogenerated acid has an effect on the kinetics of the reaction, and the photogenerated acid or photoacid generator has a large effect on the ability of the aqueous developer to penetrate or dissolve the film.
Identification of uncommon objects in containers
Bremer, Peer-Timo; Kim, Hyojin; Thiagarajan, Jayaraman J.
2017-09-12
A system for identifying in an image an object that is commonly found in a collection of images and for identifying a portion of an image that represents an object based on a consensus analysis of segmentations of the image. The system collects images of containers that contain objects for generating a collection of common objects within the containers. To process the images, the system generates a segmentation of each image. The image analysis system may also generate multiple segmentations for each image by introducing variations in the selection of voxels to be merged into a segment. The system then generates clusters of the segments based on similarity among the segments. Each cluster represents a common object found in the containers. Once the clustering is complete, the system may be used to identify common objects in images of new containers based on similarity between segments of images and the clusters.
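The clustering step described above can be sketched with a greedy pass over segments, grouping them by overlap similarity. This is a simplified stand-in for the system's consensus analysis: segments are modeled as sets of voxel ids, the similarity is plain Jaccard overlap, and the 0.4 threshold is an illustrative value.

```python
def jaccard(a, b):
    # Overlap similarity between two segments (sets of voxel ids).
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster_segments(segments, threshold=0.4):
    # Greedy single-pass clustering: a segment joins the first cluster
    # whose representative it overlaps enough, otherwise it starts a
    # new cluster. Each resulting cluster stands for a common object.
    clusters = []
    for seg in segments:
        for cluster in clusters:
            if jaccard(seg, cluster[0]) >= threshold:
                cluster.append(seg)
                break
        else:
            clusters.append([seg])
    return clusters

# Two near-identical segmentations of one object, plus a distinct object.
clusters = cluster_segments([{1, 2, 3}, {1, 2, 4}, {9, 10}])
```

New container images can then be matched against cluster representatives with the same similarity, flagging segments that fall near no cluster as uncommon.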
NASA Astrophysics Data System (ADS)
McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.
2010-01-01
In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. 
The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to learn colours as familiar, demonstrating its fast learning capability.
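The colour-based novelty detection described above can be illustrated with a much simpler stand-in than the Hopfield network actually used: flag a colour as novel when it lies far from every previously learned colour. The distance threshold here is an illustrative value, not a parameter from the paper.

```python
import numpy as np

def is_novel(colour, familiar, tol=0.15):
    # A colour is flagged as novel when it lies farther than `tol`
    # (in normalized RGB space) from every previously learned colour.
    c = np.asarray(colour, dtype=float)
    return all(np.linalg.norm(c - np.asarray(f, dtype=float)) > tol
               for f in familiar)

learned = [[0.4, 0.5, 0.3]]   # e.g. colours of already-observed rock units
lichen = [0.6, 0.7, 0.2]      # a markedly different colour
```

As with the fielded system, familiar colours can be learned from a single image or a few images by simply appending them to the learned set.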
High Speed Thermal Imaging on Ballistic Impact of Triaxially Braided Composites
NASA Technical Reports Server (NTRS)
Johnston, Joel P.; Pereira, J. Michael; Ruggeri, Charles R.; Roberts, Gary D.
2017-01-01
Ballistic impact experiments were performed on triaxially braided polymer matrix composites to study the heat generated in the material due to projectile velocity and penetration damage. Quantifying the heat generation phenomenon is crucial for attaining a better understanding of composite behavior and failure under impact loading. The knowledge gained can also be used to improve physics-based models which can numerically simulate impact of composites. Triaxially braided (0/+60/-60) composite panels were manufactured with T700S standard modulus carbon fiber and two epoxy resins. The PR520 (toughened) and 3502 (untoughened) resin systems were used to make different panels to study the effects of resin properties on temperature rise. Ballistic impact tests were conducted on these composite panels using a gas gun, and different projectile velocities were applied to study the effect on the temperature results. Temperature contours were obtained from the rear surface of the panel during the test through a high speed, infrared (IR) thermal imaging system. The contours show that high temperatures were locally generated and more pronounced along the axial tows for the T700S/PR520 composite specimens; whereas, tests performed on T700S/3502 composite panels using similar impact velocities demonstrated a widespread area of lower temperature rises. Nondestructive, ultrasonic C-scan analyses were performed to observe and verify the failure patterns in the impacted panels. Overall, the impact experimentation showed temperatures exceeding 525 K (485 °F) in both composites, which is well above the respective glass transition temperatures for the polymer constituents. This underscores the need for further high strain rate testing and measurement of the temperature and deformation fields to fully understand the complex behavior and failure of the material in order to improve the confidence in designing aerospace components with these materials.
Automated Construction of Coverage Catalogues of Aster Satellite Image for Urban Areas of the World
NASA Astrophysics Data System (ADS)
Miyazaki, H.; Iwao, K.; Shibasaki, R.
2012-07-01
We developed an algorithm to determine a combination of satellite images according to observation extent and image quality. The algorithm tested each image for its necessity in completing coverage of the search extent, excluding unnecessary low-quality images and preserving necessary good-quality images. The search conditions for the satellite images could be extended, so the catalogue could be constructed for the specified periods required for time series analysis. We applied the method to a database of metadata of ASTER satellite images archived in GEO Grid of the National Institute of Advanced Industrial Science and Technology (AIST), Japan. As indexes of populated places with geographical coordinates, we used a database of 3372 populated places with populations of more than 0.1 million, retrieved from GRUMP Settlement Points, a global gazetteer of cities, which associates geographical names of populated places with geographical coordinates and population data. From the coordinates of the populated places, 3372 extents were generated with radii of 30 km, half the swath width of ASTER satellite images. By merging extents overlapping each other, they were assembled into 2214 extents. As a result, we acquired combinations of good quality for 1244 extents, combinations of low quality for 96 extents, and incomplete combinations for 611 extents. Further improvements would be expected by introducing pixel-based cloud assessment and pixel value correction over seasonal variations.
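The extent-merging step above (3372 extents assembled into 2214 by merging overlaps) can be sketched as a union-find pass over circular extents. This is an illustrative planar simplification; the real computation works on geographic coordinates, where great-circle distances would be needed.

```python
import math

def merge_extents(centers, radius=30.0):
    # Union-find grouping of circular extents: two extents overlap
    # when their centers are closer than twice the radius.
    parent = list(range(len(centers)))

    def find(i):
        # Follow parent links to the group root, with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if math.dist(centers[i], centers[j]) < 2 * radius:
                parent[find(i)] = find(j)   # merge the two groups

    groups = {}
    for i in range(len(centers)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Centers in km: the first two extents overlap, the third stands alone.
groups = merge_extents([(0, 0), (40, 0), (200, 0)])
```

Each merged group then defines one search extent for which the algorithm assembles a covering combination of images.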
Analysis of second-harmonic-generation microscopy in a mouse model of ovarian carcinoma
NASA Astrophysics Data System (ADS)
Watson, Jennifer M.; Rice, Photini F.; Marion, Samuel L.; Brewer, Molly A.; Davis, John R.; Rodriguez, Jeffrey J.; Utzinger, Urs; Hoyer, Patricia B.; Barton, Jennifer K.
2012-07-01
Second-harmonic-generation (SHG) imaging of mouse ovaries ex vivo was used to detect collagen structure changes accompanying ovarian cancer development. Dosing with 4-vinylcyclohexene diepoxide and 7,12-dimethylbenz[a]anthracene resulted in histologically confirmed cases of normal, benign abnormality, dysplasia, and carcinoma. Parameters for each SHG image were calculated using the Fourier transform matrix and gray-level co-occurrence matrix (GLCM). Cancer versus normal and cancer versus all other diagnoses showed the greatest separation using the parameters derived from power in the highest-frequency region and GLCM energy. Mixed effects models showed that these parameters were significantly different between cancer and normal (P<0.008). Images were classified with a support vector machine, using 25% of the data for training and 75% for testing. Utilizing all images with signal greater than the noise level, cancer versus not-cancer specimens were classified with 81.2% sensitivity and 80.0% specificity, and cancer versus normal specimens were classified with 77.8% sensitivity and 79.3% specificity. Utilizing only images with greater than 75% of the field of view containing signal improved sensitivity and specificity for cancer versus normal to 81.5% and 81.1%. These results suggest that using SHG to visualize collagen structure in ovaries could help with early cancer detection.
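Of the parameters above, GLCM energy is the simplest to make concrete. The following is a toy implementation for a single pixel offset, not the study's code; the grey-level count and offset are illustrative.

```python
import numpy as np

def glcm_energy(img, levels, dx=1, dy=0):
    # Build the grey-level co-occurrence matrix for one offset (dx, dy),
    # then return the "energy" (angular second moment): the sum of
    # squared co-occurrence probabilities. Uniform textures score 1.0.
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()
    return float((p ** 2).sum())

uniform = glcm_energy(np.zeros((4, 4), dtype=int), levels=2)  # 1.0
mixed = glcm_energy(np.array([[0, 1, 0, 1]]), levels=2)       # 5/9
```

High energy means the texture's grey-level transitions are concentrated in a few patterns, which is why the measure separates ordered from disordered collagen structure.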
Automated optical testing of LWIR objective lenses using focal plane array sensors
NASA Astrophysics Data System (ADS)
Winters, Daniel; Erichsen, Patrik; Domagalski, Christian; Peter, Frank; Heinisch, Josef; Dumitrescu, Eugen
2012-10-01
The image quality of today's state-of-the-art IR objective lenses is constantly improving while at the same time the market for thermography and vision grows strongly. Because of increasing demands on the quality of IR optics and increasing production volumes, the standards for image quality testing rise and tests need to be performed in shorter time. Most high-precision MTF testing equipment for the IR spectral bands in use today relies on the scanning slit method, which scans a 1D detector over a pattern in the image generated by the lens under test, followed by image analysis to extract performance parameters. The disadvantages of this approach are that it is relatively slow, it requires highly trained operators to align the sample, and the number of parameters that can be extracted is limited. In this paper we present lessons learned from the R&D process of using focal plane array (FPA) sensors for testing long-wave IR (LWIR, 8-12 μm) optics. Factors that need to be taken into account when switching from scanning slit to FPAs include: the thermal background from the environment, the low scene contrast in the LWIR, the need for advanced image processing algorithms to pre-process camera images for analysis, and camera artifacts. Finally, we discuss two measurement systems for LWIR lens characterization that we recently developed for different target applications: 1) A fully automated system suitable for production testing and metrology that uses uncooled microbolometer cameras to automatically measure MTF (on-axis and at several off-axis positions) and parameters like EFL, FFL, autofocus curves, image plane tilt, etc. for LWIR objectives with an EFL between 1 and 12 mm. The measurement cycle time for one sample is typically between 6 and 8 s. 2) A high-precision research-grade system, again using an uncooled LWIR camera as detector, that is very simple to align and operate. A wide range of lens parameters (MTF, EFL, astigmatism, distortion, etc.)
can be easily and accurately measured with this system.
NASA Astrophysics Data System (ADS)
De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.
2016-05-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinate system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
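The "3-sigma error" defined above is simply an empirical percentile, which can be made concrete in one line. This is the definition as stated in the abstract, not IPATS code; the sample data are illustrative.

```python
import numpy as np

def three_sigma_error(errors):
    # "3-sigma" metric as defined above: an estimate of the 99.73rd
    # percentile of the absolute errors accumulated over the
    # 24 hour data collection period.
    return float(np.percentile(np.abs(errors), 99.73))

# Illustrative error population: for 0..10000, the 99.73rd
# percentile lands at 9973.0 under linear interpolation.
metric = three_sigma_error(np.arange(10001))
```

The name comes from the normal distribution, where ±3 standard deviations cover 99.73% of the mass; using the percentile directly avoids assuming the errors are Gaussian.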
NASA Astrophysics Data System (ADS)
Erdélyi, Miklós; Sinkó, József; Gajdos, Tamás; Novák, Tibor
2017-02-01
Optical super-resolution techniques such as single molecule localization have become one of the most dynamically developing areas in optical microscopy. These techniques routinely provide images of fixed cells or tissues with sub-diffraction spatial resolution, and can even be applied to live cell imaging under appropriate circumstances. Localization techniques are based on the precise fitting of point spread functions (PSF) to the measured images of stochastically excited, identical fluorescent molecules. These techniques require controlling the rates of transitions between the on, off and bleached states, keeping the number of active fluorescent molecules at an optimum value so that their diffraction limited images can be detected separately both spatially and temporally. Because of the numerous (and sometimes unknown) parameters, the imaging system can only be handled stochastically. For example, the rotation of the dye molecules obscures the polarization dependent PSF shape, and only an averaged distribution - typically estimated by a Gaussian function - is observed. The TestSTORM software was developed to generate image stacks for traditional localization microscopy, where localization meant the precise determination of the spatial position of the molecules. However, additional optical properties (polarization, spectrum, etc.) of the emitted photons can be used to further monitor the chemical and physical properties (viscosity, pH, etc.) of the local environment. The image stack generating program was therefore upgraded with several new features, such as multicolour imaging, polarization dependent PSFs, built-in 3D visualization, and structured background. These features make the program an ideal tool for optimizing imaging and sample preparation conditions.
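The localization step described above can be illustrated with the crudest possible estimator: the intensity-weighted centroid of a measured PSF image. Localization software fits a full Gaussian (or richer) PSF model instead, but the centroid already shows how sub-pixel positions are recovered from diffraction limited spots.

```python
import numpy as np

def localize(psf_img):
    # Intensity-weighted centroid of a PSF image, returned as
    # (row, column) in sub-pixel units.
    ys, xs = np.indices(psf_img.shape)
    total = psf_img.sum()
    return (float((ys * psf_img).sum() / total),
            float((xs * psf_img).sum() / total))

# Synthetic Gaussian spot centered at (5, 7), as a TestSTORM-style
# simulator might place one molecule's diffraction limited image.
ys, xs = np.indices((11, 15))
spot = np.exp(-((ys - 5.0) ** 2 + (xs - 7.0) ** 2) / 2.0)
y0, x0 = localize(spot)
```

On simulated stacks with known ground-truth positions, comparing such estimates against the generating coordinates is exactly how localization precision is benchmarked.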
NASA Astrophysics Data System (ADS)
López-Coto, R.; Mazin, D.; Paoletti, R.; Blanch Bigas, O.; Cortina, J.
2016-04-01
Imaging atmospheric Cherenkov telescopes (IACTs) such as the Major Atmospheric Gamma-ray Imaging Cherenkov (MAGIC) telescopes endeavor to reach the lowest possible energy threshold. In doing so the trigger system is a key element. Reducing the trigger threshold is hampered by the rapid increase of accidental triggers generated by ambient light (the so-called Night Sky Background, NSB). In this paper we present a topological trigger, dubbed Topo-trigger, which rejects events on the basis of their relative orientation in the telescope cameras. We have simulated and tested the trigger selection algorithm in the MAGIC telescopes. The algorithm was tested using Monte Carlo simulations and shows a rejection of 85% of the accidental stereo triggers while preserving 99% of the gamma rays. A full implementation of this trigger system would achieve an increase in collection area between 10 and 20% at the energy threshold. The analysis energy threshold of the instrument is expected to decrease by ~8%. The selection algorithm was tested on real MAGIC data taken with the current trigger configuration and no γ-like events were found to be lost.
Pan, Deyun; Sun, Ning; Cheung, Kei-Hoi; Guan, Zhong; Ma, Ligeng; Holford, Matthew; Deng, Xingwang; Zhao, Hongyu
2003-11-07
To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications to date, however, complex pathways have been displayed with static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need to provide useful tools that can generate pathways without manually building images and allow gene expression data to be integrated and analyzed at pathway levels for such experimental organisms as Arabidopsis. We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expressions for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at pathway level. The first feature allows for the periodical updating of genomic data for pathways, while the second feature can provide insight into how treatments affect relevant pathways for the selected experiment(s).
The Pan-STARRS PS1 Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Magnier, E.
The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It also is responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel HW configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.
Post Launch Calibration and Testing of the Advanced Baseline Imager on the GOES-R Satellite
NASA Technical Reports Server (NTRS)
Lebair, William; Rollins, C.; Kline, John; Todirita, M.; Kronenwetter, J.
2016-01-01
The Geostationary Operational Environmental Satellite R (GOES-R) series is the planned next generation of operational weather satellites for the United States' National Oceanic and Atmospheric Administration. The first launch of the GOES-R series is planned for October 2016. The GOES-R series satellites and instruments are being developed by the National Aeronautics and Space Administration (NASA). One of the key instruments on the GOES-R series is the Advanced Baseline Imager (ABI). The ABI is a multi-channel, visible-through-infrared, passive imaging radiometer. The ABI will provide moderate spatial and spectral resolution at high temporal and radiometric resolution to accurately monitor rapidly changing weather. Initial on-orbit calibration and performance characterization is crucial to establishing the baseline used to maintain performance throughout the mission life. A series of tests has been planned to establish post-launch performance and the parameters needed to process the data in the Ground Processing Algorithm. The large number of detectors for each channel, required to provide the needed temporal coverage, presents unique challenges for accurately calibrating the ABI and minimizing striping. This paper discusses the planned tests to be performed on the ABI over the six-month Post Launch Test period and the expected performance as it relates to ground tests.
Post launch calibration and testing of the Advanced Baseline Imager on the GOES-R satellite
NASA Astrophysics Data System (ADS)
Lebair, William; Rollins, C.; Kline, John; Todirita, M.; Kronenwetter, J.
2016-05-01
The Geostationary Operational Environmental Satellite R (GOES-R) series is the planned next generation of operational weather satellites for the United States' National Oceanic and Atmospheric Administration. The first launch of the GOES-R series is planned for October 2016. The GOES-R series satellites and instruments are being developed by the National Aeronautics and Space Administration (NASA). One of the key instruments on the GOES-R series is the Advanced Baseline Imager (ABI). The ABI is a multi-channel, visible-through-infrared, passive imaging radiometer. The ABI will provide moderate spatial and spectral resolution at high temporal and radiometric resolution to accurately monitor rapidly changing weather. Initial on-orbit calibration and performance characterization is crucial to establishing the baseline used to maintain performance throughout the mission life. A series of tests has been planned to establish post-launch performance and the parameters needed to process the data in the Ground Processing Algorithm. The large number of detectors for each channel, required to provide the needed temporal coverage, presents unique challenges for accurately calibrating the ABI and minimizing striping. This paper discusses the planned tests to be performed on the ABI over the six-month Post Launch Test period and the expected performance as it relates to ground tests.
Bansal, Ravi; Hao, Xuejun; Liu, Jun; Peterson, Bradley S.
2014-01-01
Many investigators have tried to apply machine learning techniques to magnetic resonance images (MRIs) of the brain in order to diagnose neuropsychiatric disorders. Usually the number of brain imaging measures (such as measures of cortical thickness and measures of local surface morphology) derived from the MRIs (i.e., their dimensionality) has been large (e.g. >10) relative to the number of participants who provide the MRI data (<100). Sparse data in a high dimensional space increases the variability of the classification rules that machine learning algorithms generate, thereby limiting the validity, reproducibility, and generalizability of those classifiers. The accuracy and stability of the classifiers can improve significantly if the multivariate distributions of the imaging measures can be estimated accurately. To accurately estimate the multivariate distributions using sparse data, we propose to estimate first the univariate distributions of imaging data and then combine them using a Copula to generate more accurate estimates of their multivariate distributions. We then sample the estimated Copula distributions to generate dense sets of imaging measures and use those measures to train classifiers. We hypothesize that the dense sets of brain imaging measures will generate classifiers that are stable to variations in brain imaging measures, thereby improving the reproducibility, validity, and generalizability of diagnostic classification algorithms in imaging datasets from clinical populations. In our experiments, we used both computer-generated and real-world brain imaging datasets to assess the accuracy of multivariate Copula distributions in estimating the corresponding multivariate distributions of real-world imaging data. 
Our experiments showed that diagnostic classifiers generated using imaging measures sampled from the Copula were significantly more accurate and more reproducible than were the classifiers generated using either the real-world imaging measures or their multivariate Gaussian distributions. Thus, our findings demonstrate that estimated multivariate Copula distributions can generate dense sets of brain imaging measures that can in turn be used to train classifiers, and those classifiers are significantly more accurate and more reproducible than are those generated using real-world imaging measures alone. PMID:25093634
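The copula construction described above can be sketched in a few lines: estimate the univariate margins empirically, move the data to normal scores, estimate the copula correlation there, then sample densely and map back through the empirical quantiles. The following is a minimal illustrative Python sketch of Gaussian-copula resampling, not the authors' implementation; the synthetic "imaging measures" x and y, the sample sizes, and all helper names are assumptions for demonstration.

```python
import random
from math import sqrt, erf

random.seed(0)
# sparse synthetic "imaging measures": two correlated features, 30 "participants"
x = [random.gauss(0, 1) for _ in range(30)]
y = [0.8 * xi + 0.6 * random.gauss(0, 1) for xi in x]

def ecdf_ranks(v):
    # map each value to its scaled rank in (0, 1): the empirical margin
    order = sorted(range(len(v)), key=lambda i: v[i])
    u = [0.0] * len(v)
    for r, i in enumerate(order):
        u[i] = (r + 1) / (len(v) + 1)
    return u

def norm_ppf(p):
    # crude inverse normal CDF via bisection on Phi
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * (1 + erf(mid / sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / sqrt(va * vb)

# 1) univariate margins -> normal scores
zx = [norm_ppf(u) for u in ecdf_ranks(x)]
zy = [norm_ppf(u) for u in ecdf_ranks(y)]
# 2) copula correlation estimated on the normal scores
rho = pearson(zx, zy)
# 3) sample a dense set from the Gaussian copula and map back
#    through the empirical quantiles of the original margins
xs, ys = sorted(x), sorted(y)
def emp_quantile(v_sorted, u):
    return v_sorted[min(len(v_sorted) - 1, int(u * len(v_sorted)))]
dense = []
for _ in range(500):
    a = random.gauss(0, 1)
    b = rho * a + sqrt(1 - rho ** 2) * random.gauss(0, 1)
    ua = 0.5 * (1 + erf(a / sqrt(2)))
    ub = 0.5 * (1 + erf(b / sqrt(2)))
    dense.append((emp_quantile(xs, ua), emp_quantile(ys, ub)))
```

The 500 resampled pairs preserve both the empirical margins and the rank dependence of the sparse input, which is the property the classifiers above rely on.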
Sekine, Tetsuro; Burgos, Ninon; Warnock, Geoffrey; Huellner, Martin; Buck, Alfred; Ter Voert, Edwin E G W; Cardoso, M Jorge; Hutton, Brian F; Ourselin, Sebastien; Veit-Haibach, Patrick; Delso, Gaspar
2016-08-01
In this work, we assessed the feasibility of attenuation correction (AC) based on a multi-atlas-based method (m-Atlas) by comparing it with a clinical AC method (single-atlas-based method [s-Atlas]), on a time-of-flight (TOF) PET/MRI scanner. We enrolled 15 patients. The median patient age was 59 y (age range, 31-80). All patients underwent clinically indicated whole-body (18)F-FDG PET/CT for staging, restaging, or follow-up of malignant disease. All patients volunteered for an additional PET/MRI scan of the head (no additional tracer being injected). For each patient, 3 AC maps were generated. Both s-Atlas and m-Atlas AC maps were generated from the same patient-specific LAVA-Flex T1-weighted images being acquired by default on the PET/MRI scanner during the first 18 s of the PET scan. An s-Atlas AC map was extracted by the PET/MRI scanner, and an m-Atlas AC map was created using a Web service tool that automatically generates m-Atlas pseudo-CT images. For comparison, the AC map generated by PET/CT was registered and used as a gold standard. PET images were reconstructed from raw data on the TOF PET/MRI scanner using each AC map. All PET images were normalized to the SPM5 PET template, and (18)F-FDG accumulation was quantified in 67 volumes of interest (VOIs; automated anatomic labeling atlas). Relative (%diff) and absolute differences (|%diff|) between images based on each atlas AC and CT-AC were calculated. (18)F-FDG uptake in all VOIs and generalized merged VOIs were compared using the paired t test and Bland-Altman test. The range of error on m-Atlas in all 1,005 VOIs was -4.99% to 4.09%. The |%diff| on the m-Atlas was improved by about 20% compared with s-Atlas (s-Atlas vs. m-Atlas: 1.49% ± 1.06% vs. 1.21% ± 0.89%, P < 0.01). In generalized VOIs, %diff on m-Atlas in the temporal lobe and cerebellum was significantly smaller (s-Atlas vs. m-Atlas: temporal lobe, 1.49% ± 1.37% vs. -0.37% ± 1.41%, P < 0.01; cerebellum, 1.55% ± 1.97% vs. -1.15% ± 1.72%, P < 0.01). 
The errors introduced using either s-Atlas or m-Atlas did not exceed 5% in any brain region investigated. When compared with the clinical s-Atlas, m-Atlas is more accurate, especially in regions close to the skull base. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Three-Dimensional Super-Resolution: Theory, Modeling, and Field Tests Results
NASA Technical Reports Server (NTRS)
Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Vincent E.; Hines, Glenn; Pierrottet, Diego; Reisse, Robert
2014-01-01
Many flash lidar applications continue to demand three-dimensional image resolution beyond the current state-of-the-art technology of detector arrays and their associated readout circuits. Even with the available number of focal plane pixels, the number of photons required to illuminate all the pixels may impose impractical requirements on the laser pulse energy or the receiver aperture size. Therefore, image resolution enhancement by means of a super-resolution algorithm in near real time presents a very attractive solution for a wide range of flash lidar applications. This paper describes a super-resolution technique and illustrates its performance and merits for generating three-dimensional image frames at a video rate.
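As a toy illustration of how sub-pixel-shifted low-resolution frames can be combined onto a finer grid, here is a minimal 1D shift-and-add sketch in Python. This is not the paper's algorithm (which targets 3D flash lidar frames in near real time); the function name, the ramp data, and the upsampling factor are assumptions for demonstration.

```python
def shift_and_add(frames, shifts, factor):
    """frames: low-res sample lists; shifts: known subpixel offsets in
    high-res grid units; factor: upsampling factor."""
    n_hi = len(frames[0]) * factor
    acc = [0.0] * n_hi
    cnt = [0] * n_hi
    for frame, s in zip(frames, shifts):
        for i, v in enumerate(frame):
            j = i * factor + s  # where this sample lands on the fine grid
            if 0 <= j < n_hi:
                acc[j] += v
                cnt[j] += 1
    # average where samples landed; carry the last value across gaps
    out, last = [], 0.0
    for a, c in zip(acc, cnt):
        last = a / c if c else last
        out.append(last)
    return out

# two half-pixel-shifted views of a ramp recover the finer ramp
truth = [0, 1, 2, 3, 4, 5, 6, 7]
f0 = truth[0::2]  # samples at even fine-grid positions
f1 = truth[1::2]  # samples shifted by one fine-grid step
hi = shift_and_add([f0, f1], [0, 1], 2)
```

With exact, known shifts the two half-resolution views interleave perfectly; real super-resolution must also estimate the shifts and handle noise.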
Parallel object-oriented data mining system
Kamath, Chandrika; Cantu-Paz, Erick
2004-01-06
A data mining system uncovers patterns, associations, anomalies and other statistically significant structures in data. Data files are read and displayed. Objects in the data files are identified. Relevant features for the objects are extracted. Patterns among the objects are recognized based upon the features. Data from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) sky survey were used to search for bent doubles. This test was conducted on data from the Very Large Array in New Mexico; the survey seeks to locate a special type of quasar (radio-emitting stellar object) called a bent double. The FIRST survey has generated more than 32,000 images of the sky to date. Each image is 7.1 megabytes, yielding more than 100 gigabytes of image data in the entire data set.
NASA Astrophysics Data System (ADS)
Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas
2016-05-01
We present a 220 GHz 3D imaging 'Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm³ volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.
Dutra, Kamile Leonardi; Pachêco-Pereira, Camila; Bortoluzzi, Eduardo Antunes; Flores-Mir, Carlos; Lagravère, Manuel O; Corrêa, Márcio
2017-07-01
Investigating the vertical root fracture (VRF) pathway under different clinical scenarios may help to diagnose this condition properly. We aimed to determine the capability and intrarater reliability of VRF pathway detection through cone-beam computed tomographic (CBCT) imaging, as well as to analyze the influence of different intracanal and crown materials. VRFs were mechanically induced in 30 teeth, and 4 clinical situations were reproduced in vitro: no filling, gutta-percha, post, and metal crown. A Prexion (San Mateo, CA) 3-dimensional tomographic device was used to generate 104 CBCT scans. The VRF pathway was determined using landmarks in the Avizo software (Version 8.1; FEI Visualization Sciences Group, Burlington, MA) by 1 observer, repeated 3 times. Analysis of variance and post hoc tests were applied to compare groups. Intrarater reliability demonstrated excellent agreement (intraclass correlation coefficient mean = 0.93). Descriptive analysis showed that the fracture line measurement was smaller in the post and metal crown groups than in the no-filling and gutta-percha groups. The 1-way analysis of variance test found statistically significant differences among the groups' measurements. The Bonferroni correction showed statistically significant differences between the no-filling and gutta-percha groups versus the post and metal crown groups. The VRF pathway can be accurately detected in a nonfilled tooth using limited field of view CBCT imaging. The presence of gutta-percha generated a low beam-hardening artifact that did not hinder assessment of the VRF extent. The presence of an intracanal gold post made the fracture line appear smaller than it really was in the sagittal images; in the axial images, a VRF was only detected when the apical third was involved. The presence of a metal crown did not generate additional artifacts on the root surface compared to the intracanal gold post by itself. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. 
All rights reserved.
Nested Focusing Optics for Compact Neutron Sources
NASA Technical Reports Server (NTRS)
Nabors, Sammy A.
2015-01-01
NASA's Marshall Space Flight Center, the Massachusetts Institute of Technology (MIT), and the University of Alabama in Huntsville (UAH) have developed novel neutron grazing incidence optics for use with small-scale portable neutron generators. The technology was developed to enable the use of commercially available neutron generators for applications requiring high flux densities, including high performance imaging and analysis. Nested grazing incidence mirror optics, with high collection efficiency, are used to produce divergent, parallel, or convergent neutron beams. Ray tracing simulations of the system (with source-object separation of 10 m for 5 meV neutrons) show nearly an order of magnitude neutron flux increase on a 1-mm diameter object. The technology is a result of joint development efforts between NASA and MIT researchers seeking to maximize neutron flux from diffuse sources for imaging and testing applications.
Saliency detection by conditional generative adversarial network
NASA Astrophysics Data System (ADS)
Cai, Xiaoxu; Yu, Hui
2018-04-01
Detecting salient objects in images has been a fundamental problem in computer vision. In recent years, deep learning has shown its impressive performance in dealing with many kinds of vision tasks. In this paper, we propose a new method to detect salient objects by using Conditional Generative Adversarial Network (GAN). This type of network not only learns the mapping from RGB images to salient regions, but also learns a loss function for training the mapping. To the best of our knowledge, this is the first time that Conditional GAN has been used in salient object detection. We evaluate our saliency detection method on 2 large publicly available datasets with pixel accurate annotations. The experimental results have shown the significant and consistent improvements over the state-of-the-art method on a challenging dataset, and the testing speed is much faster.
Ou, Jao J.; Ong, Rowena E.; Miga, Michael I.
2013-01-01
Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical input to the algorithm and are often determined by time-consuming point correspondence methods requiring manual user input. This study presents a novel method of automatically generating boundary conditions by nonrigidly registering two image sets with a demons diffusion-based registration algorithm. The use of this method was successfully performed in silico using magnetic resonance and X-ray-computed tomography image data with known boundary conditions. These preliminary results produced boundary conditions with an accuracy of up to 80% compared to the known conditions. Demons-based boundary conditions were utilized within a 3-D MIE reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Two phantom experiments were then conducted to further test the accuracy of the demons boundary conditions and the MIE reconstruction arising from the use of these conditions. Preliminary results show a reasonable characterization of the material properties on this first attempt and a significant improvement in the automation level and viability of the method. PMID:21690002
Pheiffer, Thomas S; Ou, Jao J; Ong, Rowena E; Miga, Michael I
2011-09-01
Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical input to the algorithm and are often determined by time-consuming point correspondence methods requiring manual user input. This study presents a novel method of automatically generating boundary conditions by nonrigidly registering two image sets with a demons diffusion-based registration algorithm. The use of this method was successfully performed in silico using magnetic resonance and X-ray-computed tomography image data with known boundary conditions. These preliminary results produced boundary conditions with an accuracy of up to 80% compared to the known conditions. Demons-based boundary conditions were utilized within a 3-D MIE reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Two phantom experiments were then conducted to further test the accuracy of the demons boundary conditions and the MIE reconstruction arising from the use of these conditions. Preliminary results show a reasonable characterization of the material properties on this first attempt and a significant improvement in the automation level and viability of the method.
Image analysis of single event transient effects on charge coupled devices irradiated by protons
NASA Astrophysics Data System (ADS)
Wang, Zujun; Xue, Yuanyuan; Liu, Jing; He, Baoping; Yao, Zhibin; Ma, Wuying
2016-10-01
Experiments on single event transient (SET) effects in charge coupled devices (CCDs) irradiated by protons are presented. The radiation experiments were carried out at a proton accelerator with energies of 200 MeV and 60 MeV. The incident angles of the protons were 30° and 90° to the plane of the CCDs to obtain images under inclined and perpendicular incidence. The experimental results show that the typical signature of SET effects on a CCD induced by protons is the generation of a large number of dark signal spikes (hot pixels) randomly distributed in "pepper" images. The characteristics of the SET effects were investigated by observing the same imaging area at different times during proton irradiation to verify the transient nature of the effects. The results also show that the number of dark signal spikes increases with increasing integration time during proton irradiation. The CCDs were tested both on-line and off-line to distinguish the radiation damage induced by the SET effects from that induced by displacement damage (DD) effects. The mechanisms of dark signal spike generation induced by the SET effects and the DD effects are discussed, respectively.
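A simple way to flag dark-signal spikes of the kind described above is to threshold a dark frame against its own statistics. The sketch below is illustrative only: the frame is synthetic, and the 5-sigma criterion is an assumption, since the paper does not specify its spike-counting rule.

```python
import random
import statistics

random.seed(1)
W = H = 64
# baseline dark level with read noise, plus a few proton-induced spikes
frame = [[random.gauss(100.0, 2.0) for _ in range(W)] for _ in range(H)]
spikes = {(5, 7), (20, 33), (40, 41), (60, 2)}
for r, c in spikes:
    frame[r][c] += 500.0  # large transient dark-signal spike ("hot pixel")

flat = [v for row in frame for v in row]
mu = statistics.mean(flat)
sigma = statistics.pstdev(flat)

# flag a pixel as a spike if it exceeds mean + 5 sigma of the whole frame
found = {(r, c) for r in range(H) for c in range(W)
         if frame[r][c] > mu + 5.0 * sigma}
```

Counting `found` on frames taken at different integration times would reproduce the kind of spike-count-versus-integration-time curve the experiment describes.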
Metabolic microscopy of head and neck cancer organoids
NASA Astrophysics Data System (ADS)
Shah, Amy T.; Skala, Melissa C.
2016-03-01
Studies for head and neck cancer have primarily relied on cell lines or in vivo animal studies. However, a technique that combines the benefits of high-throughput in vitro studies with a complex, physiologically relevant microenvironment would be advantageous for understanding drug effects. Organoids provide a unique platform that fulfills these goals. Organoids are generated from excised and digested tumor tissue and are grown in culture. Fluorescence microscopy provides high-resolution images on a similar spatial scale as organoids. In particular, autofluorescence imaging of the metabolic cofactors NAD(P)H and FAD can provide insight into response to anti-cancer treatment. The optical redox ratio reflects relative amounts of NAD(P)H and FAD, and the fluorescence lifetime reflects enzyme activity of NAD(P)H and FAD. This study optimizes and characterizes the generation and culture of organoids grown from head and neck cancer tissue. Additionally, organoids were treated for 24 hours with a standard chemotherapy, and metabolic response in the organoids was measured using optical metabolic imaging. Ultimately, combining head and neck cancer organoids with optical metabolic imaging could be applied to test drug sensitivity for drug development studies as well as treatment planning for cancer patients.
Wu, Mingquan; Li, Hua; Huang, Wenjiang; Niu, Zheng; Wang, Changyao
2015-08-01
There is a shortage of daily, high-spatial-resolution land surface temperature (LST) data for use in high spatial and temporal resolution environmental process monitoring. To address this shortage, this work used the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), and the Spatial and Temporal Data Fusion Approach (STDFA) to estimate high spatial and temporal resolution LST by combining Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) LST and Moderate Resolution Imaging Spectroradiometer (MODIS) LST products. Actual ASTER LST products were used to evaluate the precision of the synthetic LST images using correlation analysis. The method was tested and validated in study areas located in Gansu Province, China. The results show that all the models can generate daily synthetic LST images with a high correlation coefficient (r) of 0.92 between the synthetic images and the actual ASTER LST observations. The ESTARFM had the best performance, followed by the STDFA and the STARFM. The models performed better in desert areas than in cropland, and the STDFA had better noise immunity than the other two models.
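At their core, STARFM-family methods rest on a simple temporal-fusion identity: the fine-resolution image at the prediction date is the fine image at the base date plus the change observed between the coarse images. The sketch below shows only that identity, with invented toy grids; the real STARFM/ESTARFM/STDFA models add spectrally similar neighbour weighting and sensor-difference handling on top.

```python
def fuse(fine_t1, coarse_t1, coarse_t2):
    """Predict the fine-resolution grid at t2 from the fine grid at t1
    plus the per-pixel change seen in the coarse grids."""
    return [[f + (c2 - c1) for f, c1, c2 in zip(fr, c1r, c2r)]
            for fr, c1r, c2r in zip(fine_t1, coarse_t1, coarse_t2)]

# toy 2x2 LST grids in kelvin: the coarse sensor sees a uniform 2 K warming
fine_t1   = [[300.0, 301.0], [302.0, 303.0]]
coarse_t1 = [[300.5, 300.5], [302.5, 302.5]]
coarse_t2 = [[302.5, 302.5], [304.5, 304.5]]
fine_t2 = fuse(fine_t1, coarse_t1, coarse_t2)
```

Here the predicted fine grid inherits the fine spatial pattern of `fine_t1` while tracking the coarse-sensor temperature change, which is the behaviour the correlation analysis above evaluates.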
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
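The splitting idea can be illustrated with a minimal bit-plane example: a high-gray-range image is remapped into a coarse part and a residual, each with a smaller gray range, and the two parts can then be coded separately and merged losslessly. This is a simplified sketch under assumed parameters; the paper's actual remapping may differ.

```python
BITS_LOW = 4  # assumed split point: residual keeps the low 4 bits

def split(pixels):
    """Split each pixel into a remapped coarse value and a residual."""
    high = [p >> BITS_LOW for p in pixels]              # coarse image
    low = [p & ((1 << BITS_LOW) - 1) for p in pixels]   # residual detail
    return high, low

def merge(high, low):
    """Losslessly reassemble the original gray levels."""
    return [(h << BITS_LOW) | l for h, l in zip(high, low)]

ct_row = [0, 17, 255, 1023, 4095]  # sample 12-bit CT gray levels
high, low = split(ct_row)
restored = merge(high, low)
```

After the split, each part spans a smaller dynamic range (here 8 bits and 4 bits instead of 12), which is what makes the decomposition attractive before transform or vector-quantization coding.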
Meyer, Mathias; Haubenreisser, Holger; Raupach, Rainer; Schmidt, Bernhard; Lietzmann, Florian; Leidecker, Christianne; Allmendinger, Thomas; Flohr, Thomas; Schad, Lothar R; Schoenberg, Stefan O; Henzler, Thomas
2015-01-01
To prospectively evaluate radiation dose and image quality of a third generation dual-source CT (DSCT) without a z-axis filter behind the patient for temporal bone CT. Forty-five patients were examined on either a first, second, or third generation DSCT in an ultra-high-resolution (UHR) temporal bone imaging mode. On the third generation DSCT system, the tighter focal spot of 0.2 mm² removes the need for an additional z-axis filter, leading to improved z-axis radiation dose efficiency. Images of 0.4 mm were reconstructed using standard filtered back-projection or iterative reconstruction (IR) for the previous generations of DSCT and a novel IR algorithm for the third generation DSCT. Radiation dose and image quality were compared between the three DSCT systems. Subjective and objective image quality was statistically significantly highest for the third generation DSCT when compared to the first and second generation DSCT systems (all p < 0.05). Total effective dose was 63%/39% lower for the third generation examination compared to the first and second generation DSCT, respectively. Temporal bone imaging without a z-axis UHR filter and with a novel third generation IR algorithm allows for significantly higher image quality while lowering effective dose when compared to the first two generations of DSCTs. • Omitting the z-axis filter allows a reduction in radiation dose of 50% • A smaller focal spot of 0.2 mm² significantly improves spatial resolution • Ultra-high-resolution temporal bone CT helps to gain diagnostic information on the middle/inner ear.
Integration of optical imaging with a small animal irradiator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weersink, Robert A., E-mail: robert.weersink@rmp.uhn.on.ca; Ansell, Steve; Wang, An
Purpose: The authors describe the integration of optical imaging with a targeted small animal irradiator device, focusing on design, instrumentation, 2D to 3D image registration, 2D targeting, and the accuracy of recovering and mapping the optical signal to a 3D surface generated from the cone-beam computed tomography (CBCT) imaging. The integration of optical imaging will improve targeting of the radiation treatment and offer longitudinal tracking of tumor response of small animal models treated using the system. Methods: The existing image-guided small animal irradiator consists of a variable kilovolt (peak) x-ray tube mounted opposite an aSi flat panel detector, both mounted on a c-arm gantry. The tube is used for both CBCT imaging and targeted irradiation. The optical component employs a CCD camera perpendicular to the x-ray treatment/imaging axis with a computer controlled filter for spectral decomposition. Multiple optical images can be acquired at any angle as the gantry rotates. The optical to CBCT registration, which uses a standard pinhole camera model, was modeled and tested using phantoms with markers visible in both optical and CBCT images. Optically guided 2D targeting in the anterior/posterior direction was tested on an anthropomorphic mouse phantom with embedded light sources. The accuracy of the mapping of optical signal to the CBCT surface was tested using the same mouse phantom. A surface mesh of the phantom was generated based on the CBCT image and optical intensities projected onto the surface. The measured surface intensity was compared to calculated surface for a point source at the actual source position. The point-source position was also optimized to provide the closest match between measured and calculated intensities, and the distance between the optimized and actual source positions was then calculated. This process was repeated for multiple wavelengths and sources. Results: The optical to CBCT registration error was 0.8 mm. 
Two-dimensional targeting of a light source in the mouse phantom based on optical imaging along the anterior/posterior direction was accurate to 0.55 mm. The mean square residual error in the normalized measured projected surface intensities versus the calculated normalized intensities ranged between 0.0016 and 0.006. Optimizing the position reduced this error from 0.00016 to 0.0004 with distances ranging between 0.7 and 1 mm between the actual and calculated position source positions. Conclusions: The integration of optical imaging on an existing small animal irradiation platform has been accomplished. A targeting accuracy of 1 mm can be achieved in rigid, homogeneous phantoms. The combination of optical imaging with a CBCT image-guided small animal irradiator offers the potential to deliver functionally targeted dose distributions, as well as monitor spatial and temporal functional changes that occur with radiation therapy.
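The optical-to-CBCT registration above relies on a standard pinhole camera model. A minimal sketch of that projection step follows, with made-up intrinsic parameters (focal length and principal point); the paper's calibration values are not given here, and lens distortion is ignored.

```python
def project(point3d, f, cx, cy):
    """Project a camera-frame point (x, y, z), z > 0, through a pinhole
    camera with focal length f (in pixels) and principal point (cx, cy)."""
    x, y, z = point3d
    return (f * x / z + cx, f * y / z + cy)

# a CBCT marker 200 mm in front of the camera, 10 mm off-axis,
# with assumed intrinsics f = 1000 px, principal point (320, 240)
u, v = project((10.0, 0.0, 200.0), f=1000.0, cx=320.0, cy=240.0)
```

Registration then amounts to finding the camera pose and intrinsics that make the projected marker positions agree with the markers seen in the optical images; the 0.8 mm figure above is the residual of that fit.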
Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho
2018-01-01
To evaluate observer preference for the image quality of chest radiography using a deconvolution algorithm based on the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with original chest radiography for visualization of anatomic regions of the chest. Fifty prospectively enrolled pairs of posteroanterior chest radiographs, collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by the scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality on a 5-point preference scale. The significance of the differences in reader preference was tested with the Wilcoxon signed-rank test. All four readers preferred the images processed with the algorithm over those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.001) and for overall image quality (mean, 3.8; range, 3.3-4.0; p < 0.001). The most preferred anatomical regions were the azygoesophageal recess, thoracic spine, and unobscured lung. The visibility of chest anatomical structures with the PSF deconvolution algorithm was superior to that of the original chest radiography.
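PSF deconvolution itself can be illustrated with a classic iterative scheme. The sketch below runs Richardson-Lucy deconvolution on a 1D signal; this is a generic stand-in, not the proprietary TRUVIEW ART algorithm, and the PSF, signal, and iteration count are invented for illustration.

```python
def conv(signal, kernel):
    """'Same'-size correlation with zero padding (symmetric kernel here,
    so correlation and convolution coincide)."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                s += signal[idx] * w
        out.append(s)
    return out

psf = [0.25, 0.5, 0.25]                 # assumed blur kernel, sums to 1
truth = [0, 0, 10, 0, 0, 0, 8, 0, 0]    # two point-like features
blurred = conv(truth, psf)

# Richardson-Lucy: multiplicative updates that sharpen the estimate
est = [1.0] * len(blurred)
for _ in range(200):
    denom = conv(est, psf)
    ratio = [b / max(d, 1e-12) for b, d in zip(blurred, denom)]
    est = [e * c for e, c in zip(est, conv(ratio, psf[::-1]))]
```

After the iterations, `est` re-concentrates the blurred energy back onto the two original peaks, which is the kind of sharpening the readers were rating.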
NASA Astrophysics Data System (ADS)
Koh, Jaehan; Alomari, Raja S.; Chaudhary, Vipin; Dhillon, Gurmeet
2011-03-01
An imaging test plays an important role in the diagnosis of lumbar abnormalities since it allows examination of the internal structure of soft tissues and bony elements without unnecessary surgery and recovery time. For the past decade, among various imaging modalities, magnetic resonance imaging (MRI) has played a significant part in the clinical evaluation of the lumbar spine. This is mainly due to technological advancements that have improved imaging devices in spatial resolution, contrast resolution, and multi-planar capabilities. In addition, the noninvasive nature of MRI makes it easy to diagnose many common causes of low back pain, such as disc herniation, spinal stenosis, and degenerative disc diseases. In this paper, we propose a method to diagnose lumbar spinal stenosis (LSS), a narrowing of the spinal canal, from magnetic resonance myelography (MRM) images. Our method segments the thecal sac in the preprocessing stage, generates features based on inter- and intra-context information, and diagnoses lumbar spinal stenosis. Experiments with 55 subjects show that our method achieves 91.3% diagnostic accuracy. In the future, we plan to test our method on more subjects.
Statistical modeling, detection, and segmentation of stains in digitized fabric images
NASA Astrophysics Data System (ADS)
Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.
2007-02-01
This paper describes a novel, automated computer-vision system for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian mixture model (GMM). Stain detection is posed as a decision-theoretic problem in which the null hypothesis corresponds to the absence of a stain. The null and alternate hypotheses mathematically translate into a first-order GMM and a second-order GMM, respectively. The parameters of the GMM are estimated using a modified expectation-maximization (EM) algorithm. Minimum description length (MDL) is then used as the test statistic to decide the validity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, Ragu sauce, Revlon makeup, and grape juice. The decision-theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false-alarm rate of 5% on this set of images.
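The model-order test at the core of this approach can be sketched with an off-the-shelf GMM fit. This illustration uses synthetic pixel intensities and scikit-learn's BIC as a practical stand-in for the paper's MDL statistic and modified EM (BIC and MDL are closely related penalized-likelihood criteria; nothing here is the authors' implementation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "pixel intensities": a background cluster plus a stain cluster.
background = rng.normal(0.8, 0.03, size=(2000, 1))
stain = rng.normal(0.5, 0.05, size=(300, 1))
pixels = np.vstack([background, stain])

# Null hypothesis (no stain): 1-component GMM. Alternative: 2 components.
# The hypothesis with the lower description-length criterion wins.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(pixels).bic(pixels)
       for k in (1, 2)}
stain_detected = bic[2] < bic[1]
print(bic, stain_detected)
```

On clearly bimodal data the two-component model pays for its extra parameters and still wins, so the null hypothesis is rejected.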
Dust-penetrating (DUSPEN) see-through lidar for helicopter situational awareness in DVE
NASA Astrophysics Data System (ADS)
Murray, James T.; Seely, Jason; Plath, Jeff; Gotfredson, Eric; Engel, John; Ryder, Bill; Van Lieu, Neil; Goodwin, Ron; Wagner, Tyler; Fetzer, Greg; Kridler, Nick; Melancon, Chris; Panici, Ken; Mitchell, Anthony
2013-10-01
Areté Associates recently developed and flight tested a next-generation low-latency near real-time dust-penetrating (DUSPEN) imaging lidar system. These tests were accomplished for Naval Air Warfare Center (NAWC) Aircraft Division (AD) 4.5.6 (EO/IR Sensor Division) under the Office of Naval Research (ONR) Future Naval Capability (FNC) Helicopter Low-Level Operations (HELO) Product 2 program. Areté's DUSPEN system captures full lidar waveforms and uses sophisticated real-time detection and filtering algorithms to discriminate hard target returns from dust and other obscurants. Down-stream 3D image processing methods are used to enhance pilot visualization of threat objects and ground features during severe DVE conditions. This paper presents results from these recent flight tests in full brown-out conditions at Yuma Proving Grounds (YPG) from a CH-53E Super Stallion helicopter platform.
Dust-Penetrating (DUSPEN) "see-through" lidar for helicopter situational awareness in DVE
NASA Astrophysics Data System (ADS)
Murray, James T.; Seely, Jason; Plath, Jeff; Gotfredson, Eric; Engel, John; Ryder, Bill; Van Lieu, Neil; Goodwin, Ron; Wagner, Tyler; Fetzer, Greg; Kridler, Nick; Melancon, Chris; Panici, Ken; Mitchell, Anthony
2013-05-01
Areté Associates recently developed and flight tested a next-generation low-latency near real-time dust-penetrating (DUSPEN) imaging lidar system. These tests were accomplished for Naval Air Warfare Center (NAWC) Aircraft Division (AD) 4.5.6 (EO/IR Sensor Division) under the Office of Naval Research (ONR) Future Naval Capability (FNC) Helicopter Low-Level Operations (HELO) Product 2 program. Areté's DUSPEN system captures full lidar waveforms and uses sophisticated real-time detection and filtering algorithms to discriminate hard target returns from dust and other obscurants. Down-stream 3D image processing methods are used to enhance pilot visualization of threat objects and ground features during severe DVE conditions. This paper presents results from these recent flight tests in full brown-out conditions at Yuma Proving Grounds (YPG) from a CH-53E Super Stallion helicopter platform.
Fatigue Crack Closure Analysis Using Digital Image Correlation
NASA Technical Reports Server (NTRS)
Leser, William P.; Newman, John A.; Johnston, William M.
2010-01-01
Fatigue crack closure during crack growth testing is analyzed in order to evaluate the criteria of ASTM Standard E647 for measurement of fatigue crack growth rates. Of specific concern is remote closure, which occurs away from the crack tip and is a product of the load history during crack-driving-force-reduction fatigue crack growth testing. Crack closure behavior is characterized using relative displacements determined from a series of high-magnification digital images acquired as the crack is loaded. Changes in the relative displacements of features on opposite sides of the crack are used to generate crack closure data as a function of crack wake position. For the results presented in this paper, remote closure did not affect fatigue crack growth rate measurements when ASTM Standard E647 was strictly followed and only became a problem when testing parameters (e.g., load shed rate, initial crack driving force, etc.) greatly exceeded the guidelines of the accepted standard.
Optimization of super-resolution processing using incomplete image sets in PET imaging.
Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2008-12-01
Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (the two sides and the diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 is generated using images from the two sides of the N x N matrix, while ISR-2 is generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve performance in contrast and signal-to-noise ratio (SNR) similar to the SR image generated from a complete set of low-resolution images (CSR), using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all, as well as two different subsets of, the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purposes, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and SNR differences between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively.
Line profiles of the point source study showed that the three SR images exhibited similar signal amplitudes and FWHM. The NEMA/IEC study showed that the average pairwise difference in SNR among the three SR images was 2.1%, and the images contained similar noise structure. ISR-1 and ISR-2 can therefore be used to replace CSR, reducing the total SR processing time and memory storage while maintaining similar contrast, resolution, SNR, and noise structure.
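The basic SR combination step (before the ISR-1/ISR-2 subset selection) amounts to interleaving an N x N grid of sub-pixel-shifted low-resolution images onto a finer grid. A toy shift-and-place sketch, not the authors' implementation:

```python
import numpy as np

def combine_sr(lowres_stack, n):
    """Interleave an n x n grid of sub-pixel-shifted low-resolution images
    into one super-resolution image (simple shift-and-place sketch)."""
    h, w = lowres_stack[0][0].shape
    hi = np.zeros((h * n, w * n))
    for i in range(n):
        for j in range(n):
            # image (i, j) fills every n-th pixel, offset by its POV shift
            hi[i::n, j::n] = lowres_stack[i][j]
    return hi

# 2x2 toy example: four 2x2 "POV" images interleave into one 4x4 image.
stack = [[np.full((2, 2), 10 * i + j) for j in range(2)] for i in range(2)]
sr = combine_sr(stack, 2)
print(sr)
```

The ISR variants would feed this step only the side or diagonal members of the grid, which is where the processing-time and storage savings come from.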
Orhan, Kaan; Misirli, Melis; Aksoy, Secil; Seki, Umut; Hincal, Evren; Ormeci, Tugrul; Arslan, Ahmet
2016-01-01
The aim of this study was to examine the anatomy and variations of the infraorbital foramen and its surroundings via morphometric measurements using cone beam computed tomography (CBCT) scans derived from a 3D volumetric rendering program. 354 sides of CBCT scans from 177 patients were examined in this study. DICOM data from these images were exported to Maxilim® software in order to generate 3D surface models. Morphometric measurements were made for the infraorbital foramen (IOF), infraorbital groove (IOG), and infraorbital canal (IOC). All images were evaluated by one radiologist. To assess intra-observer reliability, the Wilcoxon matched-pairs signed-rank test was used. Differences between sex, side, age, and measurements were evaluated using chi-square and paired t-tests, and measurements were evaluated using one-way ANOVA tests. Differences were considered significant when p < 0.05. The most common shape was oval for the IOF and parallel for the IOC, without any accessory foramen. The results showed that females have smaller dimensions for the measurements between the two foramen rotundum (FR), FR-IOF, sella-FR, center of the IOF (cIOF)-nasion (N), and cIOF-NB (nasion-B) (p > 0.05). No significant difference was found according to age group (p > 0.05). These results provide detailed knowledge of the anatomical characteristics of this particular area. CBCT imaging, with its lower radiation dose and thin slices, can be a powerful tool for anesthesia procedures such as infraorbital nerve blocks, for surgical approaches such as osteotomies and neurectomies, and also for generating artificial prostheses.
Longitudinal timed function tests in Duchenne muscular dystrophy: ImagingDMD cohort natural history.
Arora, Harneet; Willcocks, Rebecca J; Lott, Donovan J; Harrington, Ann T; Senesac, Claudia R; Zilke, Kirsten L; Daniels, Michael J; Xu, Dandan; Tennekoon, Gihan I; Finanger, Erika L; Russman, Barry S; Finkel, Richard S; Triplett, William T; Byrne, Barry J; Walter, Glenn A; Sweeney, H Lee; Vandenborne, Krista
2018-05-09
Tests of ambulatory function are common clinical trial endpoints in Duchenne muscular dystrophy (DMD). The ImagingDMD study has generated a large data set using these tests, which can describe the contemporary natural history of DMD in 5-12.9 year olds. 92 corticosteroid-treated boys with DMD and 45 controls participated in this longitudinal study. Subjects performed the 6-minute walk test (6MWT) and timed function tests (TFTs: 10m walk/run, 4 stairs, supine to stand). Boys with DMD had impaired functional performance even at 5-6.9 years. Boys older than 7 had significant declines in function over 1 year for the 10m walk/run and 6MWT. 80% of subjects could perform all functional tests at 9 years old. TFTs appear to be slightly more responsive and predictive of disease progression than the 6MWT in 7-12.9 year olds. This study provides insight into the contemporary natural history of key functional endpoints in DMD. © 2018 Wiley Periodicals, Inc.
Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B
2010-06-01
To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied to the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF to its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% ± 2.4%. The average 3D tumor localization error is 0.8 ± 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17-0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
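The PCA representation of the DVFs described above can be sketched as follows. Toy random vectors stand in for the registration output, and the projection-matching optimization and GPU implementation are omitted; only the eigenvector/coefficient mechanics are shown:

```python
import numpy as np

# N-1 deformation vector fields, each flattened to one row (toy sizes:
# 9 non-reference phases, 300 DVF components each).
rng = np.random.default_rng(1)
dvfs = rng.normal(size=(9, 300))

# PCA via SVD of the mean-centered DVFs.
mean = dvfs.mean(axis=0)
u, s, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
k = 3                      # keep a few eigenvectors, as in the paper
basis = vt[:k]             # rows are DVF-space eigenvectors

# New DVFs are generated by varying the PCA coefficients w; the
# reconstruction step would search over w to match a measured projection.
w = np.array([0.5, -1.0, 0.2])
new_dvf = mean + w @ basis

# Projecting a training DVF onto the basis and back gives its best
# rank-k approximation.
coeffs = (dvfs[0] - mean) @ basis.T
recon = mean + coeffs @ basis
print(new_dvf.shape, coeffs.shape)
```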
Design Method For Ultra-High Resolution Linear CCD Imagers
NASA Astrophysics Data System (ADS)
Sheu, Larry S.; Truong, Thanh; Yuzuki, Larry; Elhatem, Abdul; Kadekodi, Narayan
1984-11-01
This paper presents a design method to achieve ultra-high-resolution linear imagers. The method utilizes advanced design rules and novel staggered bilinear photosensor arrays with quadrilinear shift registers. Design constraints in the detector arrays and shift registers are analyzed. An imager architecture to achieve ultra-high resolution is presented. The characteristics of MTF, aliasing, speed, transfer efficiency, and the fine photolithography requirements associated with this architecture are also discussed. A CCD imager with an advanced 1.5 μm minimum feature size was fabricated. It is intended as a test vehicle for the next-generation small-sampling-pitch ultra-high-resolution CCD imager. Standard double-poly, two-phase shift registers were fabricated at an 8 μm pitch using the advanced design rules. A special process step that blocked the source-drain implant from the shift register area was invented. This guaranteed excellent performance of the shift registers regardless of the small poly overlaps. A charge transfer efficiency of better than 0.99995 and a maximum transfer speed of 8 MHz were achieved. The imager showed excellent performance: the dark current was less than 0.2 mV/ms, saturation 250 mV, adjacent photoresponse non-uniformity ±4%, and responsivity 0.7 V/μJ/cm2 for the 8 μm x 6 μm photosensor size. The MTF was 0.6 at 62.5 cycles/mm. These results confirm the feasibility of the next generation of ultra-high-resolution CCD imagers.
Fourier-based linear systems description of free-breathing pulmonary magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Capaldi, D. P. I.; Svenningsen, S.; Cunningham, I. A.; Parraga, G.
2015-03-01
Fourier decomposition of free-breathing pulmonary magnetic resonance imaging (FDMRI) was recently piloted as a way to provide rapid quantitative pulmonary maps of ventilation and perfusion without the use of exogenous contrast agents. This method exploits fast pulmonary MRI acquisition of free-breathing proton (1H) pulmonary images and non-rigid registration to compensate for changes in position and shape of the thorax associated with breathing. In this way, ventilation imaging using conventional MRI systems can be undertaken, but there has been no systematic evaluation of fundamental image quality measurements based on linear systems theory. We investigated the performance of free-breathing pulmonary ventilation imaging using a Fourier-based linear systems description of each operation required to generate FDMRI ventilation maps. Twelve subjects with chronic obstructive pulmonary disease (COPD) or bronchiectasis underwent pulmonary function tests and MRI. Non-rigid registration was used to co-register the temporal series of pulmonary images. Pulmonary voxel intensities were aligned along a time axis, and discrete Fourier transforms were performed on the periodic signal intensity pattern to generate frequency spectra. We determined the signal-to-noise ratio (SNR) of the FDMRI ventilation maps using a conventional approach (SNRC) and using the Fourier-based description (SNRF). Mean SNR was 4.7 ± 1.3 for subjects with bronchiectasis and 3.4 ± 1.8 for COPD subjects (p > 0.05). SNRF was significantly different from SNRC (p < 0.01), at approximately 50% of SNRC, suggesting that the conventional approach overestimates the SNR relative to the linear-systems description.
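The Fourier-decomposition step, taking a DFT of each registered voxel's intensity time course and reading off the amplitude at the breathing frequency, can be sketched on a synthetic signal. The frame rate, breathing rate, and signal amplitudes below are assumed values, not patient data:

```python
import numpy as np

# Toy registered free-breathing series: one voxel's intensity over time,
# oscillating at the breathing frequency.
fs = 4.0                      # frames per second (assumed)
t = np.arange(0, 32, 1 / fs)  # 32 s acquisition -> 128 frames
f_breath = 0.25               # breathing rate, Hz (~15 breaths/min)
signal = 100 + 8 * np.sin(2 * np.pi * f_breath * t)

# Discrete Fourier transform of the periodic intensity pattern;
# the ventilation-map value is the spectral amplitude at f_breath.
spectrum = np.fft.rfft(signal - signal.mean())
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
k = np.argmax(np.abs(spectrum))
peak_freq = freqs[k]
amplitude = 2 * np.abs(spectrum[k]) / len(signal)
print(peak_freq, amplitude)   # peak at 0.25 Hz, amplitude 8
```

Repeating this per voxel yields the ventilation map; the SNR comparison in the paper then operates on those amplitude maps.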
Computer image generation: Reconfigurability as a strategy in high fidelity space applications
NASA Technical Reports Server (NTRS)
Bartholomew, Michael J.
1989-01-01
The demand for realistic, high-fidelity computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is the establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high-fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with the reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at the Johnson Space Center Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear: the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.
Evolving, innovating, and revolutionary changes in cardiovascular imaging: We've only just begun!
Shaw, Leslee J; Hachamovitch, Rory; Min, James K; Di Carli, Marcelo; Mieres, Jennifer H; Phillips, Lawrence; Blankstein, Ron; Einstein, Andrew; Taqueti, Viviany R; Hendel, Robert; Berman, Daniel S
2018-06-01
In this review, we highlight the need for innovation and creativity to reinvent the field of nuclear cardiology. Revolutionary ideas brought forth today are needed to create greater value in patient care and highlight the need for more contemporary evidence supporting the use of nuclear cardiology practices. We put forth discussions on the need for disruptive innovation in imaging-guided care that places the imager as a central force in care coordination. Value-based nuclear cardiology is defined as care that is both efficient and effective. Novel testing strategies that defer testing in lower-risk patients are examples of the kind of innovation needed in today's healthcare environment. A major focus of current research is the evolving importance of ischemia and the prognostic significance of non-obstructive atherosclerotic plaque and coronary microvascular dysfunction. Embracing novel paradigms such as this can aid in the development of optimal strategies for coronary disease management. We hope that our article will spur the field toward greater innovation and a focus on transformative imaging, leading the way for new generations of novel cardiovascular care.
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image-intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible-region photons and an amplification process to produce high-contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible-region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high-fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which is a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying-sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms.
This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF modeling. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.
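One stage of such an imaging chain, an MTF applied in the frequency domain followed by shot and read noise, can be sketched generically. This is not DIRSIG's or the LLL sensor model's actual implementation; the MTF shape, noise levels, and scene are all assumed:

```python
import numpy as np

def sensor_stage(image, mtf2d, read_noise_sigma, rng):
    """One stage of a simple imaging-chain model: blur the scene with the
    stage MTF (multiplication in the frequency domain), then add photon
    shot noise (Poisson) and Gaussian read noise."""
    blurred = np.fft.ifft2(np.fft.fft2(image) * mtf2d).real
    noisy = rng.poisson(np.clip(blurred, 0, None)).astype(float)
    return noisy + rng.normal(0, read_noise_sigma, image.shape)

rng = np.random.default_rng(0)
scene = np.full((64, 64), 50.0)          # flat radiance field (toy, in photons)
fx = np.fft.fftfreq(64)
mtf = np.exp(-(fx[:, None]**2 + fx[None, :]**2) / 0.02)  # Gaussian MTF, 1 at DC
out = sensor_stage(scene, mtf, read_noise_sigma=2.0, rng=rng)
print(out.mean())                         # ~50, plus noise
```

A multi-stage model chains several such calls with the MTF and noise parameters appropriate to each element (intensifier, fiber taper, CCD, etc.).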
Implementation of digital image encryption algorithm using logistic function and DNA encoding
NASA Astrophysics Data System (ADS)
Suryadi, MT; Satria, Yudi; Fauzi, Muhammad
2018-03-01
Cryptography is a method of securing information, which may take the form of a digital image. Building on past research, an encryption algorithm using a logistic function and DNA encoding was proposed in order to increase the security level of chaos-based and DNA-based encryption algorithms. The algorithm uses DNA encoding to map pixel values to DNA bases and scrambles them with DNA addition, DNA complement, and XOR operations. The logistic function serves as the random number generator needed in the DNA complement and XOR operations. Test results show that the PSNR values of the cipher images are 7.98-7.99 dB, the entropy values are close to 8, the histograms of the cipher images are uniformly distributed, and the correlation coefficients of the cipher images are near 0. Thus, the cipher image can be decrypted perfectly, and the encryption algorithm has good resistance to entropy and statistical attacks.
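The role of the logistic function as a keystream generator can be sketched as follows. The DNA encoding, addition, and complement steps are omitted, and the map parameters below are assumptions for illustration, not values from the paper:

```python
def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes from the chaotic logistic map
    x_{k+1} = r * x_k * (1 - x_k), with r near 4 for chaotic behavior."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantize the orbit to a byte
    return bytes(out)

def xor_cipher(data, key):
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

plain = bytes([120, 45, 200, 3])              # toy "pixel values"
key = logistic_keystream(x0=0.3141, r=3.9999, n=len(plain))
cipher = xor_cipher(plain, key)
assert xor_cipher(cipher, key) == plain       # perfect decryption
print(cipher.hex())
```

Sensitivity to `x0` and `r` is what gives the scheme its large effective key space; changing either by a tiny amount yields an unrelated keystream.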
Harnessing the Power of Light to See and Treat Breast Cancer
2011-10-01
generate sarcomas include LSL- KrasG12D/+;Trp53Flox/Flox, BrafCa/+;Trp53 Flox/Flox and BrafCa/Ca;Trp53Flox/Flox.7,8 Soft tissue sarcomas were generated...temporally restricted mouse model of soft tissue sarcoma , Nat Med, 2007. 13(8): p. 992-7. 8. Dankort, D., et al., A new mouse model to explore the...resolution anatomical images of heterogeneous tissue. To do so we are employing the use of two ex vivo test beds: 1) murine sarcoma margins and 2
LANTCET: laser nanotechnology for screening and treating tumors ex vivo and in vivo
NASA Astrophysics Data System (ADS)
Lapotko, Dmitri O.; Lukianova-Hleb, Ekaterina Y.; Zhdanok, Sergei A.; Hafner, Jason H.; Rostro, Betty C.; Scully, Peter; Konopleva, Marina; Andreeff, Michael; Li, Chun; Hanna, Ehab Y.; Myers, Jeffrey N.; Oraevsky, Alexander A.
2007-06-01
LANTCET (laser-activated nano-thermolysis as cell elimination technology) was developed for selective detection and destruction of individual tumor cells through generation of photothermal bubbles around clusters of light-absorbing gold nanoparticles (nanorods and nanoshells) that are selectively formed in target tumor cells. We have applied bare nanoparticles and their conjugates with cell-specific vectors such as the monoclonal antibodies CD33 (specific for acute myeloid leukemia) and C225 (specific for carcinoma cells that express epidermal growth factor, EGF). Clusters were formed by using vector-receptor interactions with further clustering of nanoparticles due to endocytosis. Formation of clusters was verified directly with optical resonance scattering microscopy and microspectroscopy. The LANTCET method was tested in vitro on living cell samples with: (1) model myeloid K562 cells (CD33-positive), (2) primary human bone marrow CD33-positive blast cells from patients with the diagnosis of acute myeloid leukemia, (3) monolayers of living EGF-positive carcinoma cells (Hep-2C), and (4) human lymphocytes and red blood cells as normal cells. The LANTCET method was also tested in vivo using rats with experimental polymorphic sarcoma. Photothermal bubbles were generated and detected in vitro with a photothermal microscope equipped with a tunable Ti:Sa pulsed laser. We found that cluster formation caused an almost 100-fold decrease in the bubble generation threshold of laser pulse fluence in tumor cells compared to the threshold for normal cells. An animal tumor treated with a single laser pulse showed a necrotic area of diameter close to the pump laser beam diameter and a depth of 1-2 mm. Cell-level selectivity of tumor damage with a single laser pulse was demonstrated. Combining light-scattering imaging with bubble imaging, we introduced a new image-guided mode of LANTCET operation for screening and treatment of tumors ex vivo and in vivo.
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Gursel, Yekta; Sepulveda, Cesar A.; Anderson, Mark; La Baw, Clayton; Johnson, Kenneth R.; Deans, Matthew; Beegle, Luther; Boynton, John
2008-01-01
Conducting high-resolution field microscopy with coupled laser spectroscopy that can be used to selectively analyze the surface chemistry of individual pixels in a scene is an enabling capability for next-generation robotic and manned spaceflight missions and for civil and military applications. In the laboratory, we use a range of imaging and surface preparation tools that provide us with in-focus images, context imaging for identifying features that we want to investigate at high magnification, and surface-optical coupling that allows us to apply optical spectroscopic techniques for analyzing surface chemistry, particularly at high magnifications. The camera, hand lens, and microscope probe with scannable laser spectroscopy (CHAMP-SLS) is an imaging/spectroscopy instrument capable of imaging continuously from infinity down to high-resolution microscopy (approx. 1 micron/pixel in the final camera format); the closer CHAMP-SLS is placed to a feature, the higher the resultant magnification. At hand-lens to microscopic magnifications, the imaged scene can be selectively interrogated with point spectroscopic techniques such as Raman spectroscopy, microscopic laser-induced breakdown spectroscopy (micro-LIBS), laser ablation mass spectrometry, fluorescence spectroscopy, and/or reflectance spectroscopy. This paper summarizes the optical design, development, and testing of the CHAMP-SLS optics.
Agrawal, Anant; Chen, Chao-Wei; Baxi, Jigesh; Chen, Yu; Pfefer, T Joshua
2013-07-01
In optical coherence tomography (OCT), axial resolution is one of the most critical parameters impacting image quality. It is commonly measured by determining the point spread function (PSF) based on a specular surface reflection. The contrast transfer function (CTF) provides more insight into an imaging system's resolving characteristics and can be readily generated in a system-independent manner, without consideration of image pixel size. In this study, we developed a test method for determination of CTF based on multi-layer, thin-film phantoms, evaluated using spectral- and time-domain OCT platforms with different axial resolution values. Phantoms representing six spatial frequencies were fabricated and imaged. The fabrication process involved spin coating silicone films with precise thicknesses in the 8-40 μm range. Alternating layers were doped with a specified concentration of scattering particles. Validation of layer optical properties and thicknesses was achieved with spectrophotometry and stylus profilometry, respectively. OCT B-scans were used to calculate CTFs, and results were compared with conventional PSF measurements based on specular reflections. Testing of these phantoms indicated that our approach can provide direct access to axial resolution characteristics highly relevant to image quality. Furthermore, tissue phantoms based on our thin-film fabrication approach may have a wide range of additional applications in optical imaging and spectroscopy.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
Cocoa bean quality assessment by using hyperspectral images and fuzzy logic techniques
NASA Astrophysics Data System (ADS)
Soto, Juan; Granda, Guillermo; Prieto, Flavio; Ipanaque, William; Machacuay, Jorge
2015-04-01
Cocoa beans exported from Piura, Peru are currently receiving a positive international market response due to their inherent high quality. Nevertheless, subjective quality-assessment techniques such as the cut test waste grain and restrict selection and improvement approaches at earlier stages of quality optimization. Thus, in an attempt to standardize the internal features analyzed by the cut test, such as crack formation and internal color changes during fermentation, this research proposes an approach based on hyperspectral images, with the purpose of enabling quick and accurate analysis. The hyperspectral cube size was reduced using principal component analysis (PCA). The image generated by the first principal component (PC1) provides enough information to clearly distinguish the internal cracks of the cocoa bean, since the zones where these cracks occur have a negative correlation with PC1. The extracted features were processed by a fuzzy block able to describe cocoa bean quality. Three membership functions were defined for the output (unfermented, partly fermented, and well fermented) using trapezoidal-shaped and triangular-shaped functions, and a total of twelve rules were propounded. The bisector method was chosen for defuzzification.
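The fuzzy output partition described above (trapezoidal and triangular membership functions over three fermentation grades) can be sketched as follows. The 0-100 score axis and the breakpoints are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function: rises a->b, flat b->c, falls c->d."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0, 1)

def trimf(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

# Hypothetical output partitions on a 0-100 "fermentation score" axis.
score = 72.0
unfermented      = trapmf(score, -1, 0, 20, 40)
partly_fermented = trimf(score, 30, 50, 70)
well_fermented   = trapmf(score, 60, 80, 100, 101)
print(unfermented, partly_fermented, well_fermented)
```

A rule base then maps image features (e.g., PC1 crack response, internal color) onto these output sets, and the bisector of the aggregated membership area gives the crisp quality grade.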
Parametric Methods for Dynamic 11C-Phenytoin PET Studies.
Mansor, Syahir; Yaqub, Maqsood; Boellaard, Ronald; Froklage, Femke E; de Vries, Anke; Bakker, Esther D M; Voskuyl, Rob A; Eriksson, Jonas; Schwarte, Lothar A; Verbeek, Joost; Windhorst, Albert D; Lammertsma, Adriaan A
2017-03-01
In this study, the performance of various methods for generating quantitative parametric images of dynamic 11C-phenytoin PET studies was evaluated. Methods: Double-baseline 60-min dynamic 11C-phenytoin PET studies, including online arterial sampling, were acquired for 6 healthy subjects. Parametric images were generated using Logan plot analysis, a basis function method, and spectral analysis. Parametric distribution volume (VT) and influx rate (K1) were compared with those obtained from nonlinear regression analysis of time-activity curves. In addition, global and regional test-retest (TRT) variability was determined for parametric K1 and VT values. Results: Biases in VT observed with all parametric methods were less than 5%. For K1, spectral analysis showed a negative bias of 16%. The mean TRT variabilities of VT and K1 were less than 10% for all methods. Shortening the scan duration to 45 min provided similar VT and K1 with comparable TRT performance compared with 60-min data. Conclusion: Among the various parametric methods tested, the basis function method provided parametric VT and K1 values with the least bias compared with nonlinear regression data and showed TRT variabilities lower than 5%, also for smaller volume-of-interest sizes (i.e., higher noise levels) and shorter scan duration. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
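Logan plot analysis, one of the parametric methods compared above, estimates VT as the late-time slope of the plot of the integrated tissue curve over the tissue curve against the integrated plasma curve over the tissue curve. A minimal numpy sketch under standard assumptions (the t* cutoff and names are illustrative, not the authors' implementation):

```python
import numpy as np

def logan_vt(t, ct, cp, t_star):
    """Logan graphical analysis: after time t_star, the plot of
    int(CT)/CT versus int(Cp)/CT becomes linear with slope ~ VT."""
    def cumtrapz(y):
        out = np.zeros_like(y, dtype=float)
        out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
        return out
    m = t >= t_star
    x = cumtrapz(cp)[m] / ct[m]
    y = cumtrapz(ct)[m] / ct[m]
    slope, _intercept = np.polyfit(x, y, 1)
    return slope
```

For a one-tissue-compartment tracer with influx K1 and efflux k2, the slope converges to VT = K1/k2.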
Creep-Fatigue Damage Investigation and Modeling of Alloy 617 at High Temperatures
NASA Astrophysics Data System (ADS)
Tahir, Fraaz
The Very High Temperature Reactor (VHTR) is one of six conceptual designs proposed for Generation IV nuclear reactors. Alloy 617, a solid solution strengthened Ni-base superalloy, is currently the primary candidate material for the tubing of the Intermediate Heat Exchanger (IHX) in the VHTR design. Steady-state operation of the nuclear power plant at elevated temperatures leads to creep deformation, whereas loading transients including startup and shutdown generate fatigue. A detailed understanding of the creep-fatigue interaction in Alloy 617 is necessary before it can be considered as a material for nuclear construction in ASME Boiler and Pressure Vessel Code. Current design codes for components undergoing creep-fatigue interaction at elevated temperatures require creep-fatigue testing data covering the entire range from fatigue-dominant to creep-dominant loading. Classical strain-controlled tests, which produce stress relaxation during the hold period, show a saturation in cycle life with increasing hold periods due to the rapid stress-relaxation of Alloy 617 at high temperatures. Therefore, applying longer hold time in these tests cannot generate creep-dominated failure. In this study, uniaxial isothermal creep-fatigue tests with non-traditional loading waveforms were designed and performed at 850 and 950°C, with an objective of generating test data in the creep-dominant regime. The new loading waveforms are hybrid strain-controlled and force-controlled testing which avoid stress relaxation during the creep hold. The experimental data showed varying proportions of creep and fatigue damage, and provided evidence for the inadequacy of the widely-used time fraction rule for estimating creep damage under creep-fatigue conditions. Micro-scale damage features in failed test specimens, such as fatigue cracks and creep voids, were quantified using a Scanning Electron Microscope (SEM) to find a correlation between creep and fatigue damage. 
Quantitative statistical imaging analysis showed that the microstructural damage features (cracks and voids) are correlated with a new mechanical driving force parameter. The results from this image-based damage analysis were used to develop a phenomenological life-prediction methodology called the effective time fraction approach. Finally, the constitutive creep-fatigue response of the material at 950°C was modeled using a unified viscoplastic model coupled with a damage accumulation model. The simulation results were used to validate an energy-based constitutive life-prediction model, as a mechanistic model for potential component and structure level creep-fatigue analysis.
NASA Astrophysics Data System (ADS)
Schlueter, S.; Sheppard, A.; Wildenschild, D.
2013-12-01
Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies exist to date that validate the uniqueness of the Pc-Sw-Anw relationship under static conditions, and with current technological progress, direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, such as merging different scans of the same sample obtained at different beam energies into a single image, or the generation of isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably.
In a synthetic test image some local segmentation methods like Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires a lot of postprocessing in order to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals which is highly efficient and less error prone.
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.
2002-05-01
Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopted from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if any reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than those used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, the reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. All other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, its relevance must be shown (Criterion 4) and sufficient tests must be carried out to provide a foundation for statistical analysis (Criterion 5). In this paper, examples are given for each class of reference standards.
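The gold/silver/plastic classification over Criteria 1-3 reduces to a simple rule, which can be stated as code (the function name is ours):

```python
def reference_standard_class(reliance, equivalence, independence):
    """Classify a reference standard per Lehmann's Criteria 1-3:
    gold   = Criteria 1-3 all satisfied;
    silver = reliance plus exactly one of equivalence/independence;
    plastic = anything else (including standards lacking reliance)."""
    if reliance and equivalence and independence:
        return "gold"
    if reliance and (equivalence or independence):
        return "silver"
    return "plastic"
```

Note the asymmetry: a standard satisfying equivalence and independence but not reliance is still plastic, since both silver combinations require Criterion 1.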
Oliveira, M; Lopez, G; Geambastiani, P; Ubeda, C
2018-05-01
A quality assurance (QA) program is a valuable tool for the continuous production of optimal quality images. The aim of this paper is to assess newly developed automatic computer software for image quality (IQ) evaluation in fluoroscopy X-ray systems. Test object images were acquired using one fluoroscopy system, a Siemens Axiom Artis model (Siemens AG, Medical Solutions, Erlangen, Germany). The software was developed as an ImageJ plugin. Two image quality parameters were assessed: high-contrast spatial resolution (HCSR) and signal-to-noise ratio (SNR). The times required for manual and automatic image quality assessment procedures were compared. The paired t-test was used to assess the data. p values of less than 0.05 were considered significant. The Fluoro-QC software generated faster IQ evaluation results (mean = 0.31 ± 0.08 min) than the manual procedure (mean = 4.68 ± 0.09 min). The mean difference between techniques was 4.36 min. Discrepancies were identified in the region of interest (ROI) areas drawn manually, with evidence of user dependence. The new software presented the results of two tests (HCSR = 3.06, SNR = 5.17) and also collected information from the DICOM header. Significant differences were not identified between manual and automatic measures of SNR (p value = 0.22) and HCSR (p value = 0.46). The Fluoro-QC software is a feasible, fast and free-to-use method for evaluating image quality parameters on fluoroscopy systems. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
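The two quantities compared here, an ROI-based SNR and a paired t statistic for the manual-versus-automatic comparison, are straightforward to compute. A hedged numpy sketch (the ROI convention and function names are illustrative, not Fluoro-QC's API):

```python
import numpy as np

def roi_snr(image, roi):
    """Signal-to-noise ratio of a rectangular ROI: mean / standard deviation.
    `roi` = (row0, row1, col0, col1), half-open as in numpy slicing."""
    r0, r1, c0, c1 = roi
    patch = np.asarray(image, dtype=float)[r0:r1, c0:c1]
    return patch.mean() / patch.std(ddof=1)

def paired_t(a, b):
    """Paired t statistic for two matched measurement series."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
```

The automatic tool's advantage reported above comes precisely from fixing the ROI placement, which removes the user dependence seen in manually drawn ROIs.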
Aytac-Kipergil, Esra; Demirkiran, Aytac; Uluc, Nasire; Yavas, Seydi; Kayikcioglu, Tunc; Salman, Sarper; Karamuk, Sohret Gorkem; Ilday, Fatih Omer; Unlu, Mehmet Burcin
2016-12-08
Photoacoustic imaging is based on the detection of generated acoustic waves through thermal expansion of tissue illuminated by short laser pulses. Fiber lasers as an excitation source for photoacoustic imaging have recently been preferred for their high repetition frequencies. Here, we report a unique fiber laser developed specifically for multiwavelength photoacoustic microscopy system. The laser is custom-made for maximum flexibility in adjustment of its parameters; pulse duration (5-10 ns), pulse energy (up to 10 μJ) and repetition frequency (up to 1 MHz) independently from each other and covers a broad spectral region from 450 to 1100 nm and also can emit wavelengths of 532, 355, and 266 nm. The laser system consists of a master oscillator power amplifier, seeding two stages; supercontinuum and harmonic generation units. The laser is outstanding since the oscillator, amplifier and supercontinuum generation parts are all-fiber integrated with custom-developed electronics and software. To demonstrate the feasibility of the system, the images of several elements of standardized resolution test chart are acquired at multiple wavelengths. The lateral resolution of optical resolution photoacoustic microscopy system is determined as 2.68 μm. The developed system may pave the way for spectroscopic photoacoustic microscopy applications via widely tunable fiber laser technologies.
Analysis of Local Slopes at the InSight Landing Site on Mars
NASA Astrophysics Data System (ADS)
Fergason, R. L.; Kirk, R. L.; Cushing, G.; Galuszka, D. M.; Golombek, M. P.; Hare, T. M.; Howington-Kraus, E.; Kipp, D. M.; Redding, B. L.
2017-10-01
To evaluate the topography of the surface within the InSight candidate landing ellipses, we generated Digital Terrain Models (DTMs) at lander scales and those appropriate for entry, descent, and landing simulations, along with orthoimages of both images in each stereopair, and adirectional slope images. These products were used to assess the distribution of slopes for each candidate ellipse and terrain type in the landing site region, paying particular attention to how these slopes impact InSight landing and engineering safety, and results are reported here. Overall, this region has extremely low slopes at 1-meter baseline scales and meets the safety constraints of the InSight lander. The majority of the landing ellipse has a mean slope at 1-meter baselines of 3.2°. In addition, a mosaic of HRSC, CTX, and HiRISE DTMs within the final landing ellipse (ellipse 9) was generated to support entry, descent, and landing simulations and evaluations. Several methods were tested to generate this mosaic and the NASA Ames Stereo Pipeline program dem_mosaic produced the best results. For the HRSC-CTX-HiRISE DTM mosaic, more than 99 % of the mosaic has slopes less than 15°, and the introduction of artificially high slopes along image seams was minimized.
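Baseline slopes of the kind reported above can be derived from a DTM by finite differences: the steepest ("adirectional") slope at each cell is the arctangent of the gradient magnitude. A minimal numpy sketch (illustrative, not the authors' production pipeline):

```python
import numpy as np

def adirectional_slope(dtm, spacing=1.0):
    """Steepest-descent slope in degrees at each DTM cell, from
    finite-difference gradients at the given grid spacing (metres)."""
    dzdy, dzdx = np.gradient(np.asarray(dtm, float), spacing)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
```

Statistics such as the mean slope over a landing ellipse then follow directly from the resulting slope image.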
NASA Astrophysics Data System (ADS)
Adur, J.; Ferreira, A. E.; D'Souza-Li, L.; Pelegati, V. B.; de Thomaz, A. A.; Almeida, D. B.; Baratti, M. O.; Carvalho, H. F.; Cesar, C. L.
2012-03-01
Osteogenesis Imperfecta (OI) is a genetic disorder that leads to bone fractures due to mutations in the Col1A1 or Col1A2 genes that affect the primary structure of the collagen I chain, with the ultimate outcome of collagen I fibrils that are either reduced in quantity or abnormally organized throughout the body. A quick screening test would greatly reduce the number of samples to be studied by time-consuming molecular genetics techniques. For this reason, an assessment of the human skin collagen structure by Second Harmonic Generation (SHG) can be used as a screening technique to speed up the understanding of the correlation between genetics, phenotype, and OI type. In the present work we have used quantitative SHG imaging microscopy to investigate the collagen matrix organization of OI human skin samples compared with normal control patients. By comparing fibril collagen distribution and spatial organization, we calculated the anisotropy and texture patterns of this structural protein. The analysis of the anisotropy was performed by means of the two-dimensional Discrete Fourier Transform, and image pattern analysis was performed with the Gray-Level Co-occurrence Matrix (GLCM). From these results, we show that statistically different results are obtained for the normal and disease states of OI.
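The GLCM texture step can be sketched as follows: quantise the image into grey levels, count co-occurring level pairs at a fixed pixel offset, normalise, and compute texture statistics such as contrast. A minimal numpy illustration for images scaled to [0, 1] (the offset and level count are illustrative choices, not the paper's settings):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised grey-level co-occurrence matrix for offset (dx, dy)."""
    q = np.minimum((np.asarray(img, float) * levels).astype(int), levels - 1)
    h, w = q.shape
    # Pair each pixel with its neighbour at offset (dx, dy).
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

def glcm_contrast(m):
    """Contrast statistic: expected squared grey-level difference."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())
```

Homogeneous collagen texture concentrates mass near the GLCM diagonal (low contrast), while disorganized fibril patterns spread it off-diagonal, which is the kind of difference the anisotropy/texture comparison exploits.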
Paul, Jijo; Jacobi, Volkmar; Farhang, Mohammad; Bazrafshan, Babak; Vogl, Thomas J; Mbalisike, Emmanuel C
2013-06-01
To estimate the radiation dose and image quality of three X-ray volume imaging (XVI) systems. A total of 126 patients were examined using three XVI systems (groups 1-3) and their data were retrospectively analysed from 2007 to 2012. Each group consisted of 42 patients and each patient was examined using cone-beam computed tomography (CBCT), digital subtraction angiography (DSA) and digital fluoroscopy (DF). Dose parameters such as dose-area product (DAP) and skin entry dose (SED), and image quality parameters such as Hounsfield unit (HU), noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), were estimated and compared using appropriate statistical tests. Mean DAP and SED were lower with the recent XVI than with its previous counterparts in CBCT, DSA and DF. HU differences at all measured locations were non-significant between the groups except at the hepatic artery. Noise showed significant differences among groups (P < 0.05). Regarding CNR and SNR, the recent XVI showed higher values and significant differences compared to its previous versions. Qualitatively, CBCT showed significant differences between versions, whereas DSA and DF did not. A reduction of radiation dose was obtained with the recent-generation XVI system in CBCT, DSA and DF. Image noise was significantly lower, and SNR and CNR were higher than in previous versions. The technological advancements and the reduction in the number of frames led to a significant dose reduction and improved image quality with the recent-generation XVI system. • X-ray volume imaging (XVI) systems are increasingly used for interventional radiological procedures. • More modern XVI systems use lower radiation doses compared with earlier counterparts. • Furthermore, more modern XVI systems provide higher image quality. • Technological advances reduce radiation dose and improve image quality.
NASA Astrophysics Data System (ADS)
Anzalone, Anna; Isgrò, Francesco
2016-10-01
The JEM-EUSO (Japanese Experiment Module-Extreme Universe Space Observatory) telescope will measure Ultra High Energy Cosmic Ray properties by detecting the UV fluorescent light generated in the interaction between cosmic rays and the atmosphere. Cloud information is crucial for a proper interpretation of these data. The problem of recovering the cloud-top height from satellite images in the infrared has attracted attention over the last few decades as a valuable tool for atmospheric monitoring. A number of radiative methods exist, such as CO2 slicing and Split Window algorithms, using one or more infrared bands. A different way to tackle the problem is, when possible, to exploit the availability of multiple views and recover the cloud-top height through stereo imaging and triangulation. A crucial step in the 3D reconstruction is the process that attempts to match a characteristic point or features selected in one image with those detected in the second image. In this article the performance of a group of matching algorithms, including both area-based and global techniques, has been tested. They are applied to stereo pairs of satellite IR images with the final aim of evaluating the cloud-top height. Cloudy images from SEVIRI on the geostationary Meteosat Second Generation 9 and 10 (MSG-2, MSG-3) satellites have been selected. After applying the stereo-matching algorithms to the cloudy scenes, the resulting disparity maps are transformed into depth maps according to the geometry of the reference data system. As ground truth we have used the height maps provided by the database of MODIS (Moderate Resolution Imaging Spectroradiometer) on board the Terra/Aqua polar satellites, which contains images quasi-synchronous with the imaging provided by MSG.
High Energy Astronomy Observatory (HEAO)
1977-06-01
This photograph is of the High Energy Astronomy Observatory (HEAO)-2 telescope being checked by engineers in the X-Ray Calibration Facility at the Marshall Space Flight Center (MSFC). The MSFC was heavily engaged in the technical and scientific aspects, testing and calibration, of the HEAO-2 telescope. The HEAO-2 was the first imaging and largest x-ray telescope built to date. The X-Ray Calibration Facility was built in 1976 for testing MSFC's HEAO-2. The facility is the world's largest, most advanced laboratory for simulating x-ray emissions from distant celestial objects. It produced a space-like environment in which components related to x-ray telescope imaging are tested and the quality of their performance in space is predicted. The original facility contained a 1,000-foot long by 3-foot diameter vacuum tube (for the x-ray path) connecting an x-ray generator and an instrument test chamber. Recently, the facility was upgraded to evaluate the optical elements of NASA's Hubble Space Telescope, Chandra X-Ray Observatory and Compton Gamma-Ray Observatory.
NASA Astrophysics Data System (ADS)
Boxx, I.; Carter, C. D.; Meier, W.
2014-08-01
Tomographic particle image velocimetry (tomographic-PIV) is a recently developed measurement technique used to acquire volumetric velocity field data in liquid and gaseous flows. The technique relies on line-of-sight reconstruction of the rays between a 3D particle distribution and a multi-camera imaging system. In a turbulent flame, however, index-of-refraction variations resulting from local heat-release may inhibit reconstruction and thereby render the technique infeasible. The objective of this study was to test the efficacy of tomographic-PIV in a turbulent flame. An additional goal was to determine the feasibility of acquiring usable tomographic-PIV measurements in a turbulent flame at multi-kHz acquisition rates with current-generation laser and camera technology. To this end, a setup consisting of four complementary metal oxide semiconductor cameras and a dual-cavity Nd:YAG laser was implemented to test the technique in a lifted turbulent jet flame. While the cameras were capable of kHz-rate image acquisition, the laser operated at a pulse repetition rate of only 10 Hz. However, use of this laser allowed exploration of the required pulse energy and thus power for a kHz-rate system. The imaged region was 29 × 28 × 2.7 mm in size. The tomographic reconstruction of the 3D particle distributions was accomplished using the multiplicative algebraic reconstruction technique. The results indicate that volumetric velocimetry via tomographic-PIV is feasible with pulse energies of 25 mJ, which is within the capability of current-generation kHz-rate diode-pumped solid-state lasers.
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.
2018-01-01
In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles and volumetric extensions of them. Newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides the capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing, free from the "inverse crime". All core modules of the package are written in the C-OpenMP language and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
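The additive-combination idea behind such modular phantoms can be illustrated with two objects on a common grid (a toy numpy sketch, not TomoPhantom's C core or API; the object parameters are arbitrary):

```python
import numpy as np

def gaussian2d(n, x0, y0, sx, sy, amp):
    """One modular object: a 2D Gaussian on an n-by-n grid over [-1, 1]^2."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    return amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                          + (y - y0) ** 2 / (2 * sy ** 2)))

def ellipse2d(n, x0, y0, a, b, amp):
    """Another modular object: a uniform ellipse."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    return amp * ((((x - x0) / a) ** 2 + ((y - y0) / b) ** 2) <= 1.0)

# A phantom is simply the sum of its modular objects.
phantom = ellipse2d(128, 0.0, 0.0, 0.8, 0.6, 1.0) \
        + gaussian2d(128, 0.2, -0.1, 0.1, 0.1, 0.5)
```

Because each object has a closed form, line integrals (projections) can also be written analytically per object and summed, which is what enables "inverse crime"-free testing.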
Meso-Scale Wetting of Paper Towels
NASA Astrophysics Data System (ADS)
Abedsoltan, Hossein
In this study, a new experimental approach is proposed to investigate the absorption properties of selected retail paper towels. The samples were chosen from two important manufacturing processes: conventional wet pressing (CWP), considered value products, and through-air drying (TAD), considered high or premium products. The tested liquids were water, decane, dodecane, and tetradecane, with total volumes in the micro-liter range. The method involves point-source injection of liquid at different volumetric flowrates in the nano-liter per second range. The local site for injection was chosen arbitrarily on the sample surface. The absorption process was monitored and recorded as the liquid advanced, with two distinct imaging methods, infrared imaging and optical imaging. The microscopic images were analyzed to calculate the wetted regions during the absorption test, and absorption diagrams were generated. These absorption diagrams were dissected to illustrate the absorption phenomenon and the absorption properties of the samples. The local (regional) absorption rates were computed for Mardi Gras and Bounty Basic, the representative samples for CWP and TAD respectively, in order to compare them with the absorption capacity of these two samples. The absorption capacity was then chosen as an index factor to compare the absorption properties of all the tested paper towels.
Processing of 3-Dimensional Flash Lidar Terrain Images Generated From an Airborne Platform
NASA Technical Reports Server (NTRS)
Bulyshev, Alexander; Pierrottet, Diego; Amzajerdian, Farzin; Busch, George; Vanek, Michael; Reisse, Robert
2009-01-01
Data from the first Flight Test of the NASA Langley Flash Lidar system have been processed. Results of the analyses are presented and discussed. A digital elevation map of the test site is derived from the data, and is compared with the actual topography. The set of algorithms employed, starting from the initial data sorting, and continuing through to the final digital map classification is described. The accuracy, precision, and the spatial and angular resolution of the method are discussed.
Next-Generation Image and Sound Processing Strategies: Exploiting the Biological Model
2007-05-01
Several video game clips were recorded while observers interactively played the games, and feature vectors were derived from the recordings. In the test phase, a different video game clip is used to test the model; frames from the test clip are passed in parallel to a bottom-up (BU) saliency model as well as to a top-down (TD) model (Figure 6). We found that the TD model alone predicts where humans look about twice as well as does the BU model alone.
3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System
NASA Astrophysics Data System (ADS)
Kang, J.; Lee, I.
2016-06-01
Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to be transferred to indoor spaces. Constant development of technology has a significant impact on people's knowledge of services, such as location-awareness services, in indoor spaces. Thus, it is necessary to develop a low-cost system to create 3D models of indoor spaces for services based on indoor models. In this paper, we therefore introduce a rotating stereo frame camera system that has two cameras, and generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of data and choose several suitable combinations as input data. Next, we generated the 3D model of the test site using commercial software with the previously chosen input data. The last part of the process is to evaluate the accuracy of the generated indoor model from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate 3D models using images acquired by the system. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.
Detection of rip current using camera monitoring techniques
NASA Astrophysics Data System (ADS)
Kim, T.
2016-02-01
Rip currents are approximately shore-normal seaward flows which are strong, localized and rather narrow. It is known that water stacked by longshore currents can suddenly flow back out to sea as rip currents. They are transient phenomena and their generation time and location are unpredictable. They also play significant roles in offshore sediment transport and beach erosion. Rip currents can be very hazardous to swimmers or floaters because of their strong seaward flows and the sudden depth changes caused by narrow and strong flows. Because of their importance in terms of safety, shoreline evolution and pollutant transport, a number of studies have attempted to find out their mechanisms. However, understanding of rip currents is still not sufficient to warn people in the water by predicting their location and timing. This paper investigates the development of rip currents using camera images. Since rip currents are developed by longshore currents, the observed longshore current variations in space and time can be used to detect rip current generation. Most of the time, the convergence of two longshore currents in opposite directions marks the outbreak of a rip current. In order to observe longshore currents, an optical current meter (OCM) technique proposed by Chickadel et al. (2003) is used. The relationship between rip current generation time and the longshore current velocity variation observed by OCM is analyzed from images taken on the shore. Direct measurement of rip current velocity is also tested using image analysis techniques. Quantitative estimation of rip current strength is also conducted by using the average and variance images of the rip current area. These efforts will contribute to reducing the hazards to swimmers by predicting and warning of rip current generation.
Noise-cancellation-based nonuniformity correction algorithm for infrared focal-plane arrays.
Godoy, Sebastián E; Pezoa, Jorge E; Torres, Sergio N
2008-10-10
The spatial fixed-pattern noise (FPN) inherently generated in infrared (IR) imaging systems severely compromises the quality of the acquired imagery, even making such images inappropriate for some applications. FPN refers to the inability of the photodetectors in the focal-plane array to render a uniform output image when a uniform-intensity scene is imaged. We present a noise-cancellation-based algorithm that compensates for the additive component of the FPN. The proposed method relies on the assumption that a source of noise correlated with the additive FPN is available to the IR camera. An important feature of the algorithm is that all the calculations reduce to a single equation, which allows for bias compensation of the raw imagery. The algorithm's performance is tested using real IR image sequences and compared with some classical methodologies. (c) 2008 Optical Society of America
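The abstract does not reproduce the authors' single compensation equation, but a generic per-pixel adaptive noise canceller built on the same assumption, a reference noise source correlated with the additive FPN, might look like the following sketch. The LMS update rule and step size `mu` are illustrative assumptions, not the authors' formula:

```python
import numpy as np

def cancel_additive_fpn(frames, reference, mu=0.01):
    """Per-pixel LMS noise-cancellation sketch: estimate the additive
    FPN from a correlated reference and subtract it frame by frame.
    frames: (T, H, W) raw IR sequence; reference: (T, H, W) noise
    source assumed correlated with the additive FPN."""
    w = np.zeros(frames.shape[1:])             # per-pixel filter weight
    corrected = np.empty_like(frames, dtype=float)
    for t in range(frames.shape[0]):
        est = w * reference[t]                 # current FPN estimate
        corrected[t] = frames[t] - est         # cancel the additive bias
        w += mu * corrected[t] * reference[t]  # LMS weight update
    return corrected
```

With a zero-mean scene and a reference proportional to the true bias, the residual bias in the output decays geometrically as the weights converge.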
Smart CMOS image sensor for lightning detection and imaging.
Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor
2013-03-01
We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed as part of the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing, allowing efficient localization of a faint lightning pulse over the entire large-format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated in a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.
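The in-pixel detection principle, a frame-to-frame difference compared against an adjustable threshold, can be mimicked in software. This sketch only illustrates the principle; it is not the on-chip logic, and the frame size and threshold are invented:

```python
import numpy as np

def detect_lightning(prev_frame, cur_frame, threshold):
    """Software sketch of the in-pixel detection principle: a pixel
    fires when the brightness increase between two consecutive frames
    exceeds an adjustable threshold."""
    diff = cur_frame.astype(int) - prev_frame.astype(int)
    hits = diff > threshold                 # binary detection map
    # Localize the event as the coordinates of triggered pixels.
    return np.argwhere(hits)

prev = np.zeros((8, 8), dtype=np.uint8)
cur = prev.copy()
cur[3, 5] = 200                             # faint, localized pulse
print(detect_lightning(prev, cur, threshold=50))  # -> [[3 5]]
```

Running such a comparison at 1 kHz over a 256×256 array is what the on-chip digital processing makes feasible.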
Wilson, A J; Hodge, J C
1995-08-01
To evaluate the diagnostic performance of a teleradiology system in skeletal trauma. Radiographs from 180 skeletal trauma patients were digitized (matrix, 2,000 x 2,500) and transmitted to a remote digital viewing console (1,200-line monitor). Four radiologists interpreted both the original film images and the digital images. Each reader was asked to identify, locate, and characterize fractures and dislocations. Receiver operating characteristic curves were generated, and the results of the original and digitized film readings were compared. All readers performed better with the original films when interpreting fractures. Although the patterns varied between readers, all showed statistically significant differences (P < .01) between the two image types. There was no statistically significant difference in performance between the two image types when dislocations were diagnosed. The system tested is not a satisfactory alternative to the original radiograph for routine reading of fracture films.
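Receiver operating characteristic analysis of the kind used here typically summarizes each reader's performance as an area under the curve. A minimal sketch, assuming per-case confidence ratings and the rank-sum (Mann-Whitney) identity; the ratings below are invented, not the study's data:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity:
    AUC = P(score_positive > score_negative), ties counted as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented reader confidence ratings (1-5); label 1 = fracture present.
scores = [5, 4, 4, 2, 3, 1, 2, 1]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(roc_auc(scores, labels))  # -> 0.90625
```

Comparing each reader's AUC on film versus digitized images is the kind of paired contrast the significance tests above address.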
Sarkar, V; Gutierrez, A N; Stathakis, S; Swanson, G P; Papanikolaou, N
2009-01-01
The purpose of this project was to develop a software platform to produce a virtual fluoroscopic image as an aid for permanent prostate seed implants. Seed location information from a pre-plan was extracted and used as input to in-house developed software to produce a virtual fluoroscopic image. In order to account for differences in patient positioning on the day of treatment, the user was given the ability to make changes to the virtual image. The system has been shown to work as expected for all test cases. The system allows for quick (on average less than 10 sec) generation of a virtual fluoroscopic image of the planned seed pattern. The image can be used as a verification tool to aid the physician in evaluating how close the implant is to the planned distribution throughout the procedure and enable remedial action should a large deviation be observed.
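Generating a virtual fluoroscopic image of a planned seed pattern amounts to projecting the 3D seed coordinates from the pre-plan onto a detector plane. The sketch below assumes a simple point-source geometry with hypothetical source-to-axis (SAD) and source-to-detector (SDD) distances; it is not the in-house software described:

```python
def project_seeds(seeds_xyz, sdd=1000.0, sad=600.0):
    """Hypothetical perspective projection of planned seed positions
    (mm, z measured along the beam axis from isocenter) onto a virtual
    fluoroscopy detector; magnification follows the point-source
    geometry m = SDD / (SAD + z)."""
    out = []
    for x, y, z in seeds_xyz:
        m = sdd / (sad + z)
        out.append((x * m, y * m))
    return out

# Three invented planned seeds; a user could shift or rotate these to
# match day-of-treatment positioning before re-projecting.
seeds = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 10.0, 50.0)]
print(project_seeds(seeds))
```

Re-running such a projection after a user-applied transform is one way to support the repositioning adjustments the abstract describes.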