Sample records for source function imaging

  1. Point spread functions for earthquake source imaging: An interpretation based on seismic interferometry

    USGS Publications Warehouse

    Nakahara, Hisashi; Haney, Matt

    2015-01-01

    Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.
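
    For orientation, the blurring relation this abstract refers to can be written schematically as follows (notation assumed here, not quoted from the paper): the recovered image is the true source distribution smeared by the PSF, and for receivers completely surrounding the sources the PSF reduces to a band-limited kernel tied to the Green's function, which explains its low-pass behaviour.

```latex
% Schematic imaging relation (assumed notation): I = recovered source image,
% S = true source distribution, PSF = point spread function of the imaging system.
I(\mathbf{x}) \;=\; \int \mathrm{PSF}(\mathbf{x},\mathbf{x}')\, S(\mathbf{x}')\,\mathrm{d}\mathbf{x}'

% With complete receiver coverage, the PSF is related (up to normalization) to the
% imaginary part of the Green's function between the two source points,
% a band-limited, low-pass kernel:
\mathrm{PSF}(\mathbf{x},\mathbf{x}',\omega) \;\propto\; \Im\, G(\mathbf{x},\mathbf{x}',\omega)
```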

  2. Microseismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-07-01

    At the heart of microseismic event measurement is the task of estimating the location of the microseismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. Moreover, conventional microseismic source locating methods often require manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces strong nonlinearity due to the unknown source locations (space) and functions (time). We developed a source-function-independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Angle gathers are also calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those based on the Marmousi model and the SEG/EAGE overthrust model.
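
    A minimal sketch (not the authors' code) of the convolution-based, source-independent misfit the abstract describes: a reference trace from the modelled data is convolved with each observed trace, and vice versa, so the unknown source time function cancels in the comparison. Array names and the choice of reference trace are assumptions.

```python
import numpy as np

def source_independent_misfit(d_obs, d_syn, ref_idx=0):
    """Convolution-based misfit that is insensitive to the source time function.

    d_obs, d_syn : 2D arrays (n_receivers, n_samples) of observed and modelled traces.
    ref_idx      : index of the reference receiver (an arbitrary illustrative choice).
    """
    n_rec, _ = d_obs.shape
    misfit = 0.0
    for i in range(n_rec):
        # Observed trace i convolved with the modelled reference trace ...
        a = np.convolve(d_obs[i], d_syn[ref_idx])
        # ... compared with the modelled trace i convolved with the observed reference trace.
        b = np.convolve(d_syn[i], d_obs[ref_idx])
        misfit += 0.5 * np.sum((a - b) ** 2)
    return misfit
```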

  3. Micro-seismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurement is the task of estimating the location of the micro-seismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. Moreover, conventional micro-seismic source locating methods often require manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces strong nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Angle gathers are also calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those based on the Marmousi model and the SEG/EAGE overthrust model.

  4. Electrophysiological Source Imaging: A Noninvasive Window to Brain Dynamics.

    PubMed

    He, Bin; Sohrabpour, Abbas; Brown, Emery; Liu, Zhongming

    2018-06-04

    Brain activity and connectivity are distributed in the three-dimensional space and evolve in time. It is important to image brain dynamics with high spatial and temporal resolution. Electroencephalography (EEG) and magnetoencephalography (MEG) are noninvasive measurements associated with complex neural activations and interactions that encode brain functions. Electrophysiological source imaging estimates the underlying brain electrical sources from EEG and MEG measurements. It offers increasingly improved spatial resolution and intrinsically high temporal resolution for imaging large-scale brain activity and connectivity on a wide range of timescales. Integration of electrophysiological source imaging and functional magnetic resonance imaging could further enhance spatiotemporal resolution and specificity to an extent that is not attainable with either technique alone. We review methodological developments in electrophysiological source imaging over the past three decades and envision its future advancement into a powerful functional neuroimaging technology for basic and clinical neuroscience applications.

  5. IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java

    PubMed Central

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319

  6. IQM: an extensible and portable open source application for image and signal analysis in Java.

    PubMed

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis.

  7. Improving the convergence rate in affine registration of PET and SPECT brain images using histogram equalization.

    PubMed

    Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A

    2013-01-01

    A procedure to improve the convergence rate of affine registration methods for medical brain images when the images differ greatly from the template is presented. The methodology is based on a histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. A sum of squared differences between the source images and the template is used as the objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. Using histogram equalization as a preprocessing step improves the convergence rate of the affine registration algorithm for brain images, as we show in this work using SPECT and PET brain images.
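
    A small sketch of the preprocessing step described above, using CDF-based histogram matching of a source image to the template before any affine registration; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def histogram_match(source, template):
    """Map the intensities of `source` so its histogram matches that of `template`."""
    s_values, s_idx, s_counts = np.unique(source.ravel(),
                                          return_inverse=True, return_counts=True)
    t_values, t_counts = np.unique(template.ravel(), return_counts=True)

    # Empirical CDFs of source and template intensities.
    s_cdf = np.cumsum(s_counts).astype(np.float64) / source.size
    t_cdf = np.cumsum(t_counts).astype(np.float64) / template.size

    # For each source intensity, pick the template intensity with the closest CDF value.
    matched_values = np.interp(s_cdf, t_cdf, t_values)
    return matched_values[s_idx].reshape(source.shape)

# The matched image would then enter the 12-parameter affine registration,
# e.g. minimising the sum of squared differences with a Gauss-Newton scheme.
```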

  8. Characterisation of a resolution enhancing image inversion interferometer.

    PubMed

    Wicker, Kai; Sindbert, Simon; Heintzmann, Rainer

    2009-08-31

    Image inversion interferometers have the potential to significantly enhance the lateral resolution and light efficiency of scanning fluorescence microscopes. Self-interference of a point source's coherent point spread function with its inverted copy leads to a reduction in the integrated signal for off-axis sources compared to sources on the inversion axis. This can be used to enhance the resolution in a confocal laser scanning microscope. We present a simple image inversion interferometer relying solely on reflections off planar surfaces. Measurements of the detection point spread function for several types of light sources confirm the predicted performance and suggest its usability for scanning confocal fluorescence microscopy.

  9. A single-sided homogeneous Green's function representation for holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; Thorbecke, Jan; van der Neut, Joost

    2016-04-01

    Green's theorem plays a fundamental role in a diverse range of wavefield imaging applications, such as holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval. In many of those applications, the homogeneous Green's function (i.e. the Green's function of the wave equation without a singularity on the right-hand side) is represented by a closed boundary integral. In practical applications, sources and/or receivers are usually present only on an open surface, which implies that a significant part of the closed boundary integral is by necessity ignored. Here we derive a homogeneous Green's function representation for the common situation that sources and/or receivers are present on an open surface only. We modify the integrand in such a way that it vanishes on the part of the boundary where no sources and receivers are present. As a consequence, the remaining integral along the open surface is an accurate single-sided representation of the homogeneous Green's function. This single-sided representation accounts for all orders of multiple scattering. The new representation significantly improves the aforementioned wavefield imaging applications, particularly in situations where the first-order scattering approximation breaks down.
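
    For orientation (notation assumed, not quoted from the paper), the homogeneous Green's function and its classical closed-boundary, correlation-type representation in the acoustic case read, up to sign and normalisation conventions, roughly as follows:

```latex
% Homogeneous Green's function: the Green's function plus its complex conjugate,
% i.e. twice its real part (no singularity on the right-hand side of the wave equation).
G_h(\mathbf{x}_A,\mathbf{x}_B,\omega) = G(\mathbf{x}_A,\mathbf{x}_B,\omega)
                                      + G^{*}(\mathbf{x}_A,\mathbf{x}_B,\omega)

% Classical closed-boundary representation (closed surface \partial D with outward
% normal derivative \partial_n, mass density \rho):
G_h(\mathbf{x}_B,\mathbf{x}_A,\omega) \;\propto\;
  \oint_{\partial D} \frac{1}{j\omega\rho(\mathbf{x})}
  \Bigl[ G^{*}(\mathbf{x},\mathbf{x}_B,\omega)\,\partial_n G(\mathbf{x},\mathbf{x}_A,\omega)
       - \partial_n G^{*}(\mathbf{x},\mathbf{x}_B,\omega)\, G(\mathbf{x},\mathbf{x}_A,\omega)
  \Bigr]\,\mathrm{d}^2\mathbf{x}
```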

  10. An automated multi-scale network-based scheme for detection and location of seismic sources

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.

    2017-12-01

    We present a recently developed method - BackTrackBB (Poiata et al. 2016) - which allows imaging of energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field, recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the nonstationary signal (i.e., time series) by means of higher-order statistics or energy envelope characteristic functions. Such signal processing is designed to detect signal transients in time - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and the robustness of the detection-and-location step. The initial detection-location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during the multiple episodes of tectonic tremor activity in southwestern Japan.
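
    The core imaging step can be pictured with a short sketch (an illustration of the idea, not the BackTrackBB code): for every trial source position, station-pair likelihood functions are evaluated at the theoretical differential travel times and summed. Names and the grid/travel-time inputs are assumptions.

```python
import numpy as np

def backproject(cc_funcs, pairs, tt, lags, dt):
    """Stack station-pair time-delay likelihood functions onto a 3D grid.

    cc_funcs : dict {(i, j): 1D array} likelihood vs lag for each station pair
    pairs    : list of station index pairs (i, j)
    tt       : array (n_stations, nx, ny, nz) of theoretical travel times to each grid node
    lags     : 1D array of lag times for the samples of the likelihood functions
    dt       : sample interval of the likelihood functions
    """
    nx, ny, nz = tt.shape[1:]
    image = np.zeros((nx, ny, nz))
    for (i, j) in pairs:
        # Theoretical differential travel time for this pair at every grid node.
        dtt = tt[i] - tt[j]
        # Index of the corresponding lag sample at every node.
        idx = np.clip(np.round((dtt - lags[0]) / dt).astype(int), 0, len(lags) - 1)
        image += cc_funcs[(i, j)][idx]
    return image  # interpreted as a (relative) location likelihood of the source
```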

  11. Image features dependant correlation-weighting function for efficient PRNU based source camera identification.

    PubMed

    Tiwari, Mayank; Gupta, Bhupendra

    2018-04-01

    For source camera identification (SCI), photo response non-uniformity (PRNU) has been widely used as the fingerprint of the camera. The PRNU is extracted from the image by applying a de-noising filter and then taking the difference between the original image and the de-noised image. However, it is observed that intensity-based features and high-frequency details (edges and texture) of the image affect the quality of the extracted PRNU. This affects the correlation calculation and creates problems in SCI. To solve this problem, we propose a weighting function based on image features. We have experimentally identified the effect of image features (intensity and high-frequency content) on the estimated PRNU, and then developed a weighting function which gives higher weights to image regions that yield reliable PRNU and, at the same time, comparatively lower weights to image regions that do not. Experimental results show that the proposed weighting function is able to improve the accuracy of SCI to a great extent. Copyright © 2018 Elsevier B.V. All rights reserved.
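
    A compact sketch of the PRNU pipeline summarised above (the denoiser, the weighting heuristics and all names are stand-ins, not the authors' implementation): the noise residual is the image minus a de-noised copy, and its correlation with a camera fingerprint is computed with per-pixel weights that down-weight very dark/bright and strongly textured regions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, generic_gradient_magnitude, sobel

def noise_residual(img, sigma=2.0):
    """PRNU-style residual: image minus a de-noised version (Gaussian filter as a stand-in)."""
    return img - gaussian_filter(img, sigma)

def weighted_correlation(residual, fingerprint, img):
    """Correlate a residual with a camera fingerprint using image-feature-based weights."""
    # Heuristic weights (assuming 8-bit intensities): low weight for very dark/bright
    # pixels and for strong edges/texture.
    intensity_w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0
    edge_w = 1.0 / (1.0 + generic_gradient_magnitude(img, sobel))
    w = intensity_w * edge_w

    a = w * (residual - residual.mean())
    b = w * (fingerprint - fingerprint.mean())
    return np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))
```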

  12. APT: Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ

    2012-08-01

    Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It has a graphical user interface (GUI) which allows the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. Mouse-clicking on a source in the displayed image draws a circular or elliptical aperture and sky annulus around the source and computes the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs, including image histogram, x and y aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has functions for customizing calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. The radial-profile-interpolation source model, accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
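
    The basic calculation APT automates can be sketched as follows (a simplified stand-in, not APT's internal code): sum the pixels inside a circular aperture, estimate the local sky from an annulus, and propagate a simple uncertainty.

```python
import numpy as np

def aperture_photometry(img, x0, y0, r_ap, r_in, r_out, gain=1.0):
    """Circular-aperture source intensity with a sky annulus (simplified sketch)."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)

    in_aperture = r <= r_ap
    in_annulus = (r >= r_in) & (r <= r_out)

    sky_per_pixel = np.median(img[in_annulus])   # local sky background
    sky_sigma = np.std(img[in_annulus])          # sky variability
    n_ap = in_aperture.sum()

    source = img[in_aperture].sum() - n_ap * sky_per_pixel
    # Poisson term for the source plus a sky-subtraction term (gain in e-/ADU assumed known).
    err = np.sqrt(max(source, 0.0) / gain
                  + n_ap * sky_sigma**2 * (1.0 + n_ap / in_annulus.sum()))
    return source, err, sky_per_pixel, sky_sigma
```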

  13. Ultrahigh phase-stable swept-source optical coherence tomography as a cardiac imaging platform (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ling, Yuye; Hendon, Christine P.

    2016-02-01

    Functional extensions to optical coherence tomography (OCT) provide useful imaging contrasts that are complementary to conventional OCT. Our goal is to characterize tissue types within the myocardium due to remodeling and therapy. High-speed imaging is necessary to extract mechanical properties and dynamics of fiber orientation changes in a beating heart. Functional extensions of OCT such as polarization-sensitive imaging and optical coherence elastography (OCE) require high phase stability of the system, which is a drawback of current mechanically tuned swept source OCT systems. Here we present a high-speed functional imaging platform, which includes an ultrahigh-phase-stable swept source equipped with a KTN deflector from NTT-AT. The swept source does not require mechanical movements during the wavelength sweeping; it is electrically tuned. The inter-sweep phase variance of the system was measured to be less than 300 ps at a path length difference of ~2 mm. The axial resolution of the system is 20 µm and the -10 dB fall-off depth is about 3.2 mm. The sample arm has an 8 mm x 8 mm field of view with a lateral resolution of approximately 18 µm. The sample arm uses a two-axis MEMS mirror, which is programmable and capable of scanning arbitrary patterns at a sampling rate of 50 kHz. Preliminary imaging results showed differences in polarization properties and image penetration in ablated and normal myocardium. In the future, we will conduct dynamic stretching experiments with strips of human myocardial tissue to characterize mechanical properties using OCE. With high-speed imaging of 200 kHz and an all-fiber design, we will work towards catheter-based functional imaging.

  14. Deconvolution of post-adaptive optics images of faint circumstellar environments by means of the inexact Bregman procedure

    NASA Astrophysics Data System (ADS)

    Benfenati, A.; La Camera, A.; Carbillet, M.

    2016-02-01

    Aims: High-dynamic range images of astrophysical objects present some difficulties in their restoration because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking these sources into account and, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise aims to find the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional: the latter function is employed to preserve certain characteristics in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. This proposed scheme allows us to keep the level of inexactness arising in the computed solution under control and permits us to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' position is exactly known, this scheme provides us with very satisfactory results. In the case of inexact knowledge of the sources' position, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.

  15. Medical imaging systems

    DOEpatents

    Frangioni, John V

    2013-06-25

    A medical imaging system provides simultaneous rendering of visible light and diagnostic or functional images. The system may be portable, and may include adapters for connecting various light sources and cameras in open surgical environments or laparascopic or endoscopic environments. A user interface provides control over the functionality of the integrated imaging system. In one embodiment, the system provides a tool for surgical pathology.

  16. Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images

    PubMed Central

    Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.

    2010-01-01

    Enterococci are used nationwide as a water quality indicator of marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We have estimated their respective source functions by developing a counting methodology for individuals to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of people and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the larger source of enterococci relative to humans and birds. PMID:20381094

  17. Co-registered Frequency-Domain Photoacoustic Radar and Ultrasound System for Subsurface Imaging in Turbid Media

    NASA Astrophysics Data System (ADS)

    Dovlo, Edem; Lashkari, Bahman; Mandelis, Andreas

    2016-03-01

    Frequency-domain photoacoustic radar (FD-PAR) imaging of absorbers in turbid media and their comparison and/or validation as well as co-registration with their corresponding ultrasound (US) images are demonstrated in this paper. Also presented are the FD-PAR tomography and the effects of reducing the number of scan lines (or angles) on image quality, resolution, and contrast. The FD-PAR modality uses intensity-modulated (coded) continuous wave laser sources driven by frequency-swept (chirp) waveforms. The spatial cross-correlation function between the PA response and the reference signal used for laser source modulation produces the reconstructed image. Live animal testing is demonstrated, and images of comparable signal-to-noise ratio, contrast, and spatial resolution were obtained. Various image improvement techniques to further reduce absorber spread and artifacts in the images such as normalization, filtering, and amplification were also investigated. The co-registered image produced from the combined US and PA images provides more information than both images independently. The significance of this work lies in the fact that achieving PA imaging functionality on a commercial ultrasound instrument could accelerate its clinical acceptance and use. This work is aimed at functional PA imaging of small animals in vivo.

  18. A Method Based on Wavelet Transforms for Source Detection in Photon-counting Detector Images. II. Application to ROSAT PSPC Images

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1997-07-01

    We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector ``ribs,'' strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it on various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms those of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.

  19. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm ``WAVDETECT,'' part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or ``Mexican Hat'' wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low count regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to simulated Chandra ACIS-I image of the Lockman Hole region.
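
    A toy sketch of the correlation step described above (not the WAVDETECT implementation): a binned counts image is correlated with a scaled Marr ("Mexican hat") wavelet, and pixels whose coefficients exceed a background-dependent threshold are flagged as source candidates. The Gaussian threshold here is a crude stand-in for the Poisson sampling distribution the real algorithm uses.

```python
import numpy as np
from scipy.signal import fftconvolve

def mexican_hat(scale, size=None):
    """2D Marr ('Mexican hat') wavelet with characteristic scale `scale` (pixels)."""
    if size is None:
        size = int(8 * scale) | 1                 # odd kernel size
    y, x = np.indices((size, size)) - size // 2
    r2 = (x**2 + y**2) / scale**2
    return (2.0 - r2) * np.exp(-r2 / 2.0)

def detect(counts, background, scale, nsigma=5.0):
    """Correlate counts with the wavelet and flag pixels above a simple threshold."""
    w = mexican_hat(scale)
    coeff = fftconvolve(counts, w, mode="same")
    # Crude Gaussian approximation to the coefficient sampling distribution;
    # the real algorithm uses the Poisson distribution and the local exposure map.
    sigma = np.sqrt(np.sum(w**2) * np.maximum(background, 1e-6))
    return coeff > nsigma * sigma
```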

  20. A novel method for fast imaging of brain function, non-invasively, with light

    NASA Astrophysics Data System (ADS)

    Chance, Britton; Anday, Endla; Nioka, Shoko; Zhou, Shuoming; Hong, Long; Worden, Katherine; Li, C.; Murray, T.; Ovetsky, Y.; Pidikiti, D.; Thomas, R.

    1998-05-01

    Imaging of the human body by any non-invasive technique has been an appropriate goal of physics and medicine, and great success has been obtained with both Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) in brain imaging. Non-imaging responses to functional activation using near infrared spectroscopy of brain (fNIR) obtained in 1993 (Chance, et al. [1]) and in 1994 (Tamura, et al. [2]) are now complemented with images of pre-frontal and parietal stimulation in adults and pre-term neonates in this communication (see also [3]). Prior studies used continuous [4], pulsed [3] or modulated [5] light. The amplitude and phase cancellation of optical patterns as demonstrated for single source detector pairs affords remarkable sensitivity of small object detection in model systems [6]. The methods have now been elaborated with multiple source detector combinations (nine sources, four detectors). Using simple back projection algorithms it is now possible to image sensorimotor and cognitive activation of adult and pre- and full-term neonate human brain function in times < 30 sec and with two dimensional resolutions of < 1 cm in two dimensional displays. The method can be used in evaluation of adult and neonatal cerebral dysfunction in a simple, portable and affordable method that does not require immobilization, as contrasted to MRI and PET.

  1. Sources of Disconnection in Neurocognitive Aging: Cerebral White Matter Integrity, Resting-state Functional Connectivity, and White Matter Hyperintensity Volume

    PubMed Central

    Madden, David J.; Parks, Emily L.; Tallman, Catherine W.; Boylan, Maria A.; Hoagey, David A.; Cocjin, Sally B.; Packard, Lauren E.; Johnson, Micah A.; Chou, Ying-hui; Potter, Guy G.; Chen, Nan-kuei; Siciliano, Rachel E.; Monge, Zachary A.; Honig, Jesse A.; Diaz, Michele T.

    2017-01-01

    Age-related decline in fluid cognition can be characterized as a disconnection among specific brain structures, leading to a decline in functional efficiency. The potential sources of disconnection, however, are unclear. We investigated imaging measures of cerebral white matter integrity, resting-state functional connectivity, and white matter hyperintensity (WMH) volume as mediators of the relation between age and fluid cognition, in 145 healthy, community-dwelling adults 19–79 years of age. At a general level of analysis, with a single composite measure of fluid cognition and single measures of each of the three imaging modalities, age exhibited an independent influence on the cognitive and imaging measures, and the imaging variables did not mediate the age-cognition relation. At a more specific level of analysis, resting-state functional connectivity of sensorimotor networks was a significant mediator of the age-related decline in executive function. These findings suggest that different levels of analysis lead to different models of neurocognitive disconnection, and that resting-state functional connectivity, in particular, may contribute to age-related decline in executive function. PMID:28389085

  2. Evaluation of DICOM viewer software for workflow integration in clinical trials

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Page, Charles E.; Kabino, Klaus; Deserno, Thomas M.

    2015-03-01

    The digital imaging and communications in medicine (DICOM) protocol is nowadays the leading standard for capture, exchange and storage of image data in medical applications. A broad range of commercial, free, and open source software tools supporting a variety of DICOM functionality exists. However, unlike in hospital patient care, DICOM has not yet arrived in electronic data capture systems (EDCS) for clinical trials. Due to this missing integration, even simple visualization of patients' image data in electronic case report forms (eCRFs) is impossible. Four increasing levels of integration of DICOM components into EDCS are conceivable, with each level raising the functionality but also the demands on interfaces. Hence, in this paper, a comprehensive evaluation of 27 DICOM viewer software projects is performed, investigating viewing functionality as well as interfaces for integration. Concerning general, integration, and viewing requirements, the survey covers the criteria (i) license, (ii) support, (iii) platform, (iv) interfaces, (v) two-dimensional (2D) and (vi) three-dimensional (3D) image viewing functionality. Optimal viewers are suggested for applications in clinical trials for 3D imaging, hospital communication, and workflow. Focusing on open source solutions, the viewers ImageJ and MicroView are superior for 3D visualization, whereas GingkoCADx is advantageous for hospital integration. Concerning workflow optimization in multi-centered clinical trials, we suggest the open source viewer Weasis. Covering most use cases, an EDCS and PACS interconnection with Weasis is suggested.

  3. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general, we expect to obtain a higher quality source image by improving the observational input data (e.g., using more, higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low-frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) is used for a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched for directly. By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.

  4. Susceptibility-based functional brain mapping by 3D deconvolution of an MR-phase activation map.

    PubMed

    Chen, Zikuan; Liu, Jingyu; Calhoun, Vince D

    2013-05-30

    The underlying source of T2*-weighted magnetic resonance imaging (T2*MRI) for brain imaging is magnetic susceptibility (denoted by χ). T2*MRI outputs a complex-valued MR image consisting of magnitude and phase information. Recent research has shown that both the magnitude and the phase images are morphologically different from the source χ, primarily due to 3D convolution, and that the source χ can be reconstructed from complex MR images by computed inverse MRI (CIMRI). Thus, we can obtain a 4D χ dataset from a complex 4D MR dataset acquired from a brain functional MRI study by repeating CIMRI to reconstruct 3D χ volumes at each timepoint. Because the reconstructed χ is a more direct representation of neuronal activity than the MR image, we propose a method for χ-based functional brain mapping, which is numerically characterised by a temporal correlation map of χ responses to a stimulus task. Under the linear imaging conditions used for T2*MRI, we show that the χ activation map can be calculated from the MR phase map by CIMRI. We validate our approach using numerical simulations and Gd-phantom experiments. We also analyse real data from a finger-tapping visuomotor experiment and show that the χ-based functional mapping provides additional activation details (in the form of positive and negative correlation patterns) beyond those generated by conventional MR-magnitude-based mapping. Copyright © 2013 Elsevier B.V. All rights reserved.
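
    The "temporal correlation map" step can be illustrated in a few lines (an illustrative sketch only; the χ volumes would come from CIMRI reconstruction, which is not reproduced here): each voxel's reconstructed χ time course is correlated with the task time course.

```python
import numpy as np

def chi_activation_map(chi_4d, task):
    """Voxelwise Pearson correlation of a 4D susceptibility dataset with a task time course.

    chi_4d : array (nx, ny, nz, n_timepoints) of reconstructed susceptibility volumes
    task   : 1D array (n_timepoints,) task/stimulus regressor
    """
    chi = chi_4d - chi_4d.mean(axis=-1, keepdims=True)
    t = task - task.mean()
    num = (chi * t).sum(axis=-1)
    den = np.sqrt((chi**2).sum(axis=-1) * (t**2).sum()) + 1e-12
    return num / den   # values near +1/-1 mark positively/negatively correlated voxels
```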

  5. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    To address the issue that the fusion rules of available fusion methods cannot be self-adaptively adjusted according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The algorithm then designs the objective function as a weighted sum of evaluation indices and optimizes it with GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.
    • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    • This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.
    • This text comes up with the model operator and the observation operator as the fusion scheme of RS images based on GSDA.
    The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  6. Global high-frequency source imaging accounting for complexity in Green's functions

    NASA Astrophysics Data System (ADS)

    Lambert, V.; Zhan, Z.

    2017-12-01

    The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.

  7. White-Light Optical Information Processing and Holography.

    DTIC Science & Technology

    1983-05-03

    Keywords: white-light optical information processing, white-light holography, image subtraction, image deblurring, coherence requirement, apparent transfer function, source encoding, signal processing. The work in this period also demonstrated several color image processing capabilities, among them broadband color image deblurring and color image subtraction; topics covered include broadband image deblurring, color image subtraction, and rainbow holographic aberrations.

  8. Influence of the noise sources motion on the estimated Green's functions from ambient noise cross-correlations.

    PubMed

    Sabra, Karim G

    2010-06-01

    It has been demonstrated theoretically and experimentally that an estimate of the Green's function between two receivers can be obtained by cross-correlating acoustic (or elastic) ambient noise recorded at these two receivers. Coherent wavefronts emerge from the noise cross-correlation time function due to the accumulated contributions over time from noise sources whose propagation path pass through both receivers. Previous theoretical studies of the performance of this passive imaging technique have assumed that no relative motion between noise sources and receivers occurs. In this article, the influence of noise sources motion (e.g., aircraft or ship) on this passive imaging technique was investigated theoretically in free space, using a stationary phase approximation, for stationary receivers. The theoretical results were extended to more complex environments, in the high-frequency regime, using first-order expansions of the Green's function. Although sources motion typically degrades the performance of wideband coherent processing schemes, such as time-delay beamforming, it was found that the Green's function estimated from ambient noise cross-correlations are not expected to be significantly affected by the Doppler effect, even for supersonic sources. Numerical Monte-Carlo simulations were conducted to confirm these theoretical predictions for both cases of subsonic and supersonic moving sources.
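
    A minimal sketch of the passive Green's function estimate described above (illustrative only; the windowing and normalisation choices are assumptions): noise records at two stationary receivers are cross-correlated window by window and stacked.

```python
import numpy as np
from scipy.signal import correlate

def noise_cross_correlation(u1, u2, fs, win_sec=600.0):
    """Stack windowed cross-correlations of ambient noise at two receivers.

    u1, u2 : 1D noise records (same length, same sampling rate fs in Hz)
    Returns lag times and the stacked cross-correlation, whose causal/acausal
    parts approximate a scaled, band-limited Green's function between the receivers.
    """
    n_win = int(win_sec * fs)
    n_seg = len(u1) // n_win
    stack = np.zeros(2 * n_win - 1)
    for k in range(n_seg):
        a = u1[k * n_win:(k + 1) * n_win]
        b = u2[k * n_win:(k + 1) * n_win]
        a = (a - a.mean()) / (a.std() + 1e-12)   # simple amplitude normalisation per window
        b = (b - b.mean()) / (b.std() + 1e-12)
        stack += correlate(a, b, mode="full")
    lags = np.arange(-(n_win - 1), n_win) / fs
    return lags, stack / max(n_seg, 1)
```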

  9. Structure function monitor

    DOEpatents

    McGraw, John T [Placitas, NM; Zimmer, Peter C [Albuquerque, NM; Ackermann, Mark R [Albuquerque, NM

    2012-01-24

    Methods and apparatus for a structure function monitor provide for generation of parameters characterizing a refractive medium. In an embodiment, a structure function monitor acquires images of a pupil plane and an image plane and, from these images, retrieves the phase over an aperture, unwraps the retrieved phase, and analyzes the unwrapped retrieved phase. In an embodiment, analysis yields atmospheric parameters measured at spatial scales from zero to the diameter of a telescope used to collect light from a source.

  10. Studies of EGRET sources with a novel image restoration technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajima, Hiroyasu; Cohen-Tanugi, Johann; Kamae, Tuneyoshi

    2007-07-12

    We have developed an image restoration technique based on the Richardson-Lucy algorithm optimized for GLAST-LAT image analysis. Our algorithm is original since it utilizes the PSF (point spread function) that is calculated for each event. This is critical for EGRET and GLAST-LAT image analysis since the PSF depends on the energy and angle of incident gamma-rays and varies by more than one order of magnitude. EGRET and GLAST-LAT image analysis also faces Poisson noise due to low photon statistics. Our technique incorporates wavelet filtering to minimize noise effects. We present studies of EGRET sources using this novel image restoration technique for possible identification of extended gamma-ray sources.
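
    For reference, a bare-bones Richardson-Lucy iteration with a fixed PSF is shown below (a generic sketch; the method described above uses per-event PSFs and adds wavelet filtering, neither of which is reproduced here).

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    """Basic Richardson-Lucy deconvolution for Poisson-noise images (fixed PSF)."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + 1e-12
        ratio = image / blurred
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```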

  11. SU-G-IeP3-11: On the Utility of Pixel Variance to Characterize Noise for Image Receptors of Digital Radiography Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finley, C; Dave, J

    Purpose: To characterize noise for image receptors of digital radiography systems based on pixel variance. Methods: Nine calibrated digital image receptors associated with nine new portable digital radiography systems (Carestream Health, Inc., Rochester, NY) were used in this study. For each image receptor, thirteen images were acquired with RQA5 beam conditions for input detector air kerma ranging from 0 to 110 µGy, and linearized ‘For Processing’ images were extracted. Mean pixel value (MPV), standard deviation (SD) and relative noise (SD/MPV) were obtained from each image using ROI sizes varying from 2.5×2.5 to 20×20 mm². Variance (SD²) was plotted as a function of input detector air kerma and the coefficients of the quadratic fit were used to derive structured, quantum and electronic noise coefficients. Relative noise was also fitted as a function of input detector air kerma to identify noise sources. The fits used a least-squares approach. Results: The coefficient of variation values obtained using different ROI sizes were less than 1% for all the images. The structured, quantum and electronic coefficients obtained from the quadratic fit of variance (r>0.97) were 0.43±0.10, 3.95±0.27 and 2.89±0.74 (mean ± standard deviation), respectively, indicating that overall the quantum noise was the dominant noise source. However, for one system the electronic noise coefficient (3.91) was greater than the quantum noise coefficient (3.56), indicating electronic noise to be dominant. Using relative noise values, the power parameter of the fitting equation (|r|>0.93) showed a mean and standard deviation of 0.46±0.02. A value of 0.50 for this power parameter indicates quantum noise to be the dominant noise source, whereas values deviating from 0.50 indicate the presence of other noise sources. Conclusion: Characterizing noise from pixel variance assists in identifying contributions from various noise sources that, eventually, may affect image quality. This approach may be integrated during periodic quality assessments of digital image receptors.
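
    The noise decomposition described above amounts to a quadratic fit of pixel variance against input detector air kerma; a short sketch (variable names and the synthetic numbers are illustrative only) is:

```python
import numpy as np

def noise_coefficients(kerma, variance):
    """Fit variance = structured*K^2 + quantum*K + electronic (least squares).

    kerma    : 1D array of input detector air kerma values (e.g. in uGy)
    variance : 1D array of measured pixel variance (SD^2) from linearized images
    """
    structured, quantum, electronic = np.polyfit(kerma, variance, deg=2)
    return structured, quantum, electronic

# Example with synthetic data (for illustration only): quantum noise dominates
# when the linear coefficient exceeds the other two.
K = np.array([0.0, 10, 20, 40, 60, 80, 110], dtype=float)
V = 0.4 * K**2 + 4.0 * K + 3.0
print(noise_coefficients(K, V))   # ~ (0.4, 4.0, 3.0)
```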

  12. Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ R.; Gorjian, Varoujan; Rebull, Luisa M.; Masci, Frank J.; Fowler, John W.; Helou, George; Kulkarni, Shrinivas R.; Law, Nicholas M.

    2012-07-01

    Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It is a graphical user interface (GUI) designed to allow the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. The finely tuned layout of the GUI, along with judicious use of color-coding and alerting, is intended to give maximal user utility and convenience. Simply mouse-clicking on a source in the displayed image will instantly draw a circular or elliptical aperture and sky annulus around the source and will compute the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs with just the push of a button, including image histogram, x and y aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has many functions for customizing the calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. The radial-profile-interpolation source model, which is accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.

  13. VizieR Online Data Catalog: M33 SNR candidates properties (Lee+, 2014)

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Lee, M. G.

    2017-04-01

    We utilized the Hα and [S II] images in the LGGS to find new M33 remnants. The LGGS covered three 36' square fields of M33. We subtracted continuum sources from the narrowband images using R-band images. We smoothed the images with better seeing to match the point-spread function in the images with worse seeing, using the IRAF task psfmatch. We then scaled and subtracted the resulting continuum images from narrowband images. We selected M33 remnants considering three criteria: emission-line ratio ([S II]/Hα), the morphological structure, and the absence of blue stars inside the sources. Details are described in L14 (Lee et al. 2014ApJ...786..130L). We detected objects with [S II]/Hα>0.4 in emission-line ratio maps, and selected objects with round or shell structures in each narrowband image. As a result, we chose 435 sources. (2 data files).

  14. Imfit: A Fast, Flexible Program for Astronomical Image Fitting

    NASA Astrophysics Data System (ADS)

    Erwin, Peter

    2014-08-01

    Imfit is a fast, flexible, and highly extensible open-source astronomical image-fitting program specialized for galaxies but potentially useful for other sources. Its object-oriented design allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include Sersic, exponential, and Gaussian galaxy decompositions along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or the Cash statistic; the latter is particularly appropriate for cases of Poisson data in the low-count regime. The C++ source code for Imfit is available under the GNU Public License.
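
    As a flavour of the 2D image functions mentioned, here is a generic Sérsic surface-brightness profile written independently of the Imfit source code (a sketch, not Imfit's implementation):

```python
import numpy as np
from scipy.special import gammaincinv

def sersic(r, I_e, r_e, n):
    """Sersic profile I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)).

    I_e : surface brightness at the effective radius r_e
    n   : Sersic index; b_n is chosen so that r_e encloses half of the total light.
    """
    b_n = gammaincinv(2.0 * n, 0.5)   # solves P(2n, b_n) = 0.5 exactly
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))
```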

  15. Multidimensional incremental parsing for universal source coding.

    PubMed

    Bae, Soo Hyun; Juang, Biing-Hwang

    2008-10-01

    A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes, maximum decimation matching, hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, an underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended into the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The property of the hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than the Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images by the one with the local minimax distortion have a good perceptual fidelity among other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.

  16. Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm

    NASA Astrophysics Data System (ADS)

    Sarika, G.; Unnithan, Harikuttan; Peter, Smitha

    2011-10-01

    When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for grayscale images. In this paper, we propose a method for compressing an encrypted image. In the encoder, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values; it does not shuffle the pixel locations. After downsampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder, and the image is recovered using its local statistics. The decoder obtains only a lower-resolution version of the image. In addition, this method provides only partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and lower computational complexity.

  17. Source Finding in the Era of the SKA (Precursors): Aegean 2.0

    NASA Astrophysics Data System (ADS)

    Hancock, Paul J.; Trott, Cathryn M.; Hurley-Walker, Natasha

    2018-03-01

    In the era of the SKA precursors, telescopes are producing deeper, larger images of the sky on increasingly small time-scales. The greater size and volume of images place an increased demand on the software that we use to create catalogues, and so our source finding algorithms need to evolve accordingly. In this paper, we discuss some of the logistical and technical challenges that result from the increased size and volume of images that are to be analysed, and demonstrate how the Aegean source finding package has evolved to address these challenges. In particular, we address the issues of source finding on spatially correlated data, and on images in which the background, noise, and point spread function vary across the sky. We also introduce the concept of forced or prioritised fitting.

  18. 49 CFR 571.111 - Standard No. 111; Rear visibility.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... mirror that reflect images, excluding the mirror rim or mounting brackets. Environmental test fixture... defined in S15. Rearview image means a visual image, detected by means of a single source, of the area... function of producing the rearview image as required under this standard. Small manufacturer means an...

  19. Dynamic Initiator Imaging at the Advanced Photon Source: Understanding the early stages of initiator function and subsequent explosive interactions

    NASA Astrophysics Data System (ADS)

    Sanchez, Nate; Neal, Will; Jensen, Brian; Gibson, John; Martinez, Mike; Jaramillo, Dennis; Iverson, Adam; Carlson, Carl

    2017-06-01

    Recent advances in diagnostics coupled with synchrotron sources have allowed the in-situ investigation of exploding foil initiators (EFIs) during flight. We present the first images of EFIs in flight obtained with x-ray phase contrast imaging at the Advanced Photon Source (APS) at Argonne National Laboratory. These measurements have provided the DOE/DoD community with unprecedented images resolving micron-scale details of flyer formation, plasma instabilities, and in-flight characteristics, along with the subsequent interaction with high explosives on the nanosecond time scale. Phase contrast imaging has made it possible to take dynamic measurements on the length and time scales necessary to resolve initiator function and to provide insight into key design parameters. These efforts have also probed the fundamental physics at ``burst'' to better understand what burst means in a physical sense, rather than the traditional understanding of burst as a peak in voltage and an increase in resistance. This fundamental understanding has improved our knowledge of the mechanisms of burst and has allowed us to improve our predictive capability through magnetohydrodynamic modeling. Results will be presented from several EFI designs, along with a look ahead to upcoming work.

  20. Live imaging of rat embryos with Doppler swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Larina, Irina V.; Furushima, Kenryo; Dickinson, Mary E.; Behringer, Richard R.; Larin, Kirill V.

    2009-09-01

    The rat has long been considered an excellent system to study mammalian embryonic cardiovascular physiology, but has lacked the extensive genetic tools available in the mouse to be able to create single gene mutations. However, the recent establishment of rat embryonic stem cell lines facilitates the generation of new models in the rat embryo to link changes in physiology with altered gene function to define the underlying mechanisms behind congenital cardiovascular birth defects. Along with the ability to create new rat genotypes there is a strong need for tools to analyze phenotypes with high spatial and temporal resolution. Doppler OCT has been previously used for 3-D structural analysis and blood flow imaging in other model species. We use Doppler swept-source OCT for live imaging of early postimplantation rat embryos. Structural imaging is used for 3-D reconstruction of embryo morphology and dynamic imaging of the beating heart and vessels, while Doppler-mode imaging is used to visualize blood flow. We demonstrate that Doppler swept-source OCT can provide essential information about the dynamics of early rat embryos and serve as a basis for a wide range of studies on functional evaluation of rat embryo physiology.

  1. Live imaging of rat embryos with Doppler swept-source optical coherence tomography

    PubMed Central

    Larina, Irina V.; Furushima, Kenryo; Dickinson, Mary E.; Behringer, Richard R.; Larin, Kirill V.

    2009-01-01

    The rat has long been considered an excellent system to study mammalian embryonic cardiovascular physiology, but has lacked the extensive genetic tools available in the mouse to be able to create single gene mutations. However, the recent establishment of rat embryonic stem cell lines facilitates the generation of new models in the rat embryo to link changes in physiology with altered gene function to define the underlying mechanisms behind congenital cardiovascular birth defects. Along with the ability to create new rat genotypes there is a strong need for tools to analyze phenotypes with high spatial and temporal resolution. Doppler OCT has been previously used for 3-D structural analysis and blood flow imaging in other model species. We use Doppler swept-source OCT for live imaging of early postimplantation rat embryos. Structural imaging is used for 3-D reconstruction of embryo morphology and dynamic imaging of the beating heart and vessels, while Doppler-mode imaging is used to visualize blood flow. We demonstrate that Doppler swept-source OCT can provide essential information about the dynamics of early rat embryos and serve as a basis for a wide range of studies on functional evaluation of rat embryo physiology. PMID:19895102

  2. Imaging strategies using focusing functions with applications to a North Sea field

    NASA Astrophysics Data System (ADS)

    da Costa Filho, C. A.; Meles, G. A.; Curtis, A.; Ravasi, M.; Kritski, A.

    2018-04-01

    Seismic methods are used in a wide variety of contexts to investigate subsurface Earth structures, and to explore and monitor resources and waste-storage reservoirs in the upper ∼100 km of the Earth's subsurface. Reverse-time migration (RTM) is one widely used seismic method which constructs high-frequency images of subsurface structures. Unfortunately, RTM has certain disadvantages shared with other conventional single-scattering-based methods, such as not being able to correctly migrate multiply scattered arrivals. In principle, the recently developed Marchenko methods can be used to migrate all orders of multiples correctly. In practice, however, Marchenko methods are costlier to compute than RTM—for a single imaging location, the cost of performing the Marchenko method is several times that of standard RTM, and performing RTM itself requires dedicated use of some of the largest computers in the world for individual data sets. A different imaging strategy is therefore required. We propose a new set of imaging methods which use so-called focusing functions to obtain images with few artifacts from multiply scattered waves, while greatly reducing the number of points across the image at which the Marchenko method need be applied. Focusing functions are outputs of the Marchenko scheme: they are solutions of wave equations that focus in time and space at particular surface or subsurface locations. However, they are mathematical rather than physical entities, being defined only in reference media that are equal to the true Earth above their focusing depths but homogeneous below. Here, we use these focusing functions as virtual source/receiver surface seismic surveys, the upgoing focusing function being the virtual received wavefield that is created when the downgoing focusing function acts as a spatially distributed source. These source/receiver wavefields are used in three imaging schemes: one allows specific individual reflectors to be selected and imaged. The other two schemes provide either targeted or complete images with distinct advantages over current RTM methods, such as fewer artifacts and artifacts that occur in different locations. The latter property allows the recently published `combined imaging' method to remove almost all artifacts. We show several examples to demonstrate the methods: acoustic 1-D and 2-D synthetic examples, and a 2-D line from an ocean bottom cable field data set. We discuss an extension to elastic media, which is illustrated by a 1.5-D elastic synthetic example.

  3. Laser applications and system considerations in ocular imaging

    PubMed Central

    Elsner, Ann E.; Muller, Matthew S.

    2009-01-01

    We review laser applications for primarily in vivo ocular imaging techniques, describing their constraints based on biological tissue properties, safety, and the performance of the imaging system. We discuss the need for cost effective sources with practical wavelength tuning capabilities for spectral studies. Techniques to probe the pathological changes of layers beneath the highly scattering retina and diagnose the onset of various eye diseases are described. The recent development of several optical coherence tomography based systems for functional ocular imaging is reviewed, as well as linear and nonlinear ocular imaging techniques performed with ultrafast lasers, emphasizing recent source developments and methods to enhance imaging contrast. PMID:21052482

  4. Comparative analysis of numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.

    2017-07-01

    Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over the wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of the computational speed.

  5. Laboratory demonstration of Stellar Intensity Interferometry using a software correlator

    NASA Astrophysics Data System (ADS)

    Matthews, Nolan; Kieda, David

    2017-06-01

    In this talk I will present measurements of the spatial coherence function of laboratory thermal (black-body) sources using Hanbury-Brown and Twiss interferometry with a digital off-line correlator. Correlations in the intensity fluctuations of a thermal source, such as a star, allow retrieval of the second order coherence function which can be used to perform high resolution imaging and source geometry characterization. We also demonstrate that intensity fluctuations between orthogonal polarization states are uncorrelated but can be used to reduce systematic noise. The work performed here can readily be applied to existing and future Imaging Air-Cherenkov telescopes to measure spatial properties of stellar sources. Some possible candidates for astronomy applications include close binary star systems, fast rotators, Cepheid variables, and potentially even exoplanet characterization.
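
    The quantity retrieved by such a measurement is the normalized second-order correlation of the two intensity time series; a minimal sketch (assuming already background-subtracted, synchronized detector streams) is:

      import numpy as np

      def g2_zero_lag(i1, i2):
          """Normalized second-order (intensity) correlation at zero lag,
          g2 = <I1*I2> / (<I1><I2>), from two detector time series."""
          i1 = np.asarray(i1, dtype=float)
          i2 = np.asarray(i2, dtype=float)
          return np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2))

      # For a thermal source, g2 - 1 scales with the squared modulus of the spatial
      # coherence function sampled by the detector-pair baseline.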

  6. A functional magnetic resonance imaging investigation of short-term source and item memory for negative pictures.

    PubMed

    Mitchell, Karen J; Mather, Mara; Johnson, Marcia K; Raye, Carol L; Greene, Erich J

    2006-10-02

    We investigated the hypothesis that arousal recruits attention to item information, thereby disrupting working memory processes that help bind items to context. Using functional magnetic resonance imaging, we compared brain activity when participants remembered negative or neutral picture-location conjunctions (source memory) versus pictures only. Behaviorally, negative trials showed disruption of short-term source, but not picture, memory; long-term picture recognition memory was better for negative than for neutral pictures. Activity in areas involved in working memory and feature integration (precentral gyrus and its intersect with superior temporal gyrus) was attenuated on negative compared with neutral source trials relative to picture-only trials. Visual processing areas (middle occipital and lingual gyri) showed greater activity for negative than for neutral trials, especially on picture-only trials.

  7. Erratum: Sources of Image Degradation in Fundamental and Harmonic Ultrasound Imaging: A Nonlinear, Full-Wave, Simulation Study

    PubMed Central

    Pinton, Gianmarco F.; Trahey, Gregg E.; Dahl, Jeremy J.

    2015-01-01

    A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain. This numerical method is used to simulate propagation of a diagnostic ultrasound pulse through a measured representation of the human abdomen with heterogeneities in speed of sound, attenuation, density, and nonlinearity. Conventional delay-and-sum beamforming is used to generate point spread functions (PSFs) that display the effects of these heterogeneities. For the particular imaging configuration that is modeled, these PSFs reveal that the primary source of degradation in fundamental imaging is due to reverberation from near-field structures. Compared with fundamental imaging, reverberation clutter in harmonic imaging is 27.1 dB lower. Simulated tissue with uniform velocity but unchanged impedance characteristics indicates that for harmonic imaging, the primary source of degradation is phase aberration. PMID:21693410
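
    For reference, conventional delay-and-sum beamforming of the kind used to form these PSFs can be sketched as follows (a deliberately simplified version assuming a plane-wave transmit along depth, a known sound speed, and nearest-sample delays; real beamformers use apodization and sub-sample interpolation):

      import numpy as np

      def delay_and_sum(rf, element_x, focus, c, fs):
          """Sum receive-channel data `rf` (channels x samples) at one focal point.
          Each channel is delayed by its transmit-plus-receive travel time."""
          fx, fz = focus
          value = 0.0
          for ch, x in enumerate(element_x):
              t = (fz + np.hypot(fx - x, fz)) / c   # plane-wave transmit + receive path
              idx = int(round(t * fs))
              if 0 <= idx < rf.shape[1]:
                  value += rf[ch, idx]
          return value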

  8. The spatial coherence function in scanning transmission electron microscopy and spectroscopy.

    PubMed

    Nguyen, D T; Findlay, S D; Etheridge, J

    2014-11-01

    We investigate the implications of the form of the spatial coherence function, also referred to as the effective source distribution, for quantitative analysis in scanning transmission electron microscopy, and in particular for interpreting the spatial origin of imaging and spectroscopy signals. These questions are explored using three different source distribution models applied to a GaAs crystal case study. The shape of the effective source distribution was found to have a strong influence not only on the scanning transmission electron microscopy (STEM) image contrast, but also on the distribution of the scattered electron wavefield and hence on the spatial origin of the detected electron intensities. The implications this has for measuring structure, composition and bonding at atomic resolution via annular dark field, X-ray and electron energy loss STEM imaging are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Reconstructing cortical current density by exploring sparseness in the transform domain

    NASA Astrophysics Data System (ADS)

    Ding, Lei

    2009-05-01

    In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
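
    The sparsity-promoting step can be illustrated with a generic iterative soft-thresholding (ISTA) solver; note that SCCD applies the L1 penalty to the variation (transform-domain) map of the cortical sources rather than to the sources directly, so the sketch below shows only the generic sparse solver, not the published algorithm:

      import numpy as np

      def soft_threshold(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def ista_l1(A, y, lam, n_iter=200):
          """Iterative soft-thresholding for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
          L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth part
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - y)
              x = soft_threshold(x - grad / L, lam / L)
          return x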

  10. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  11. Green's function and image system for the Laplace operator in the prolate spheroidal geometry

    NASA Astrophysics Data System (ADS)

    Xue, Changfeng; Deng, Shaozhong

    2017-01-01

    In the present paper, electrostatic image theory is studied for Green's function for the Laplace operator in the case where the fundamental domain is either the exterior or the interior of a prolate spheroid. In either case, an image system is developed to consist of a point image inside the complement of the fundamental domain and an additional symmetric continuous surface image over a confocal prolate spheroid outside the fundamental domain, although the process of calculating such an image system is easier for the exterior than for the interior Green's function. The total charge of the surface image is zero and its centroid is at the origin of the prolate spheroid. In addition, if the source is on the focal axis outside the prolate spheroid, then the image system of the exterior Green's function consists of a point image on the focal axis and a line image on the line segment between the two focal points.

  12. SPARX, a new environment for Cryo-EM image processing.

    PubMed

    Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

    2007-01-01

    SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.

  13. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed with the GATE MC package, and images were reconstructed with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on aluminum (Al) foil substrates immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE)-OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP), and OSMAPOSL with quadratic prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.

  14. Affective attitudes to face images associated with intracerebral EEG source location before face viewing.

    PubMed

    Pizzagalli, D; Koenig, T; Regard, M; Lehmann, D

    1999-01-01

    We investigated whether different, personality-related affective attitudes are associated with different brain electric field (EEG) sources before any emotional challenge (stimulus exposure). A 27-channel EEG was recorded in 15 subjects during eyes-closed resting. After recording, subjects rated 32 images of human faces for affective appeal. The subjects in the first (i.e., most negative) and fourth (i.e., most positive) quartile of general affective attitude were further analyzed. The EEG data (mean = 25 ± 4.8 s/subject) were subjected to frequency-domain model dipole source analysis (FFT-Dipole-Approximation), resulting in 3-dimensional intracerebral source locations and strengths for the delta-theta, alpha, and beta EEG frequency band, and for the full range (1.5-30 Hz) band. Subjects with negative attitude (compared to those with positive attitude) showed the following source locations: more inferior for all frequency bands, more anterior for the delta-theta band, more posterior and more right for the alpha, beta and 1.5-30 Hz bands. One year later, the subjects were asked to rate the face images again. The rating scores for the same face images were highly correlated for all subjects, and original and retest affective mean attitude was highly correlated across subjects. The present results show that subjects with different affective attitudes to face images had different active, cerebral, neural populations in a task-free condition prior to viewing the images. We conclude that the brain functional state which implements affective attitude towards face images as a personality feature exists without elicitors, as a continuously present, dynamic feature of brain functioning. Copyright 1999 Elsevier Science B.V.

  15. Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.

    PubMed

    Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas

    2013-03-01

    The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used in the reconstruction process. The concept of assessing temporal resolution by means of the data employed for reconstruction can nicely be extended from single-source to dual-source CT. However, for advanced (possibly nonlinear iterative) reconstruction algorithms the examined approach fails to deliver accurate results. New methods and measures to assess the temporal resolution of CT images need to be developed to be able to accurately compare the performance of such algorithms.

  16. TRIPPy: Python-based Trailed Source Photometry

    NASA Astrophysics Data System (ADS)

    Fraser, Wesley C.; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michael E.; Pike, Rosemary E.; Kavelaars, JJ; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey

    2016-05-01

    TRIPPy (TRailed Image Photometry in Python) uses a pill-shaped aperture, a rectangle described by three parameters (trail length, angle, and radius) to improve photometry of moving sources over that done with circular apertures. It can generate accurate model and trailed point-spread functions from stationary background sources in sidereally tracked images. Appropriate aperture correction provides accurate, unbiased flux measurement. TRIPPy requires numpy, scipy, matplotlib, Astropy (ascl:1304.002), and stsci.numdisplay; emcee (ascl:1303.002) and SExtractor (ascl:1010.064) are optional.

  17. Dual source and dual detector arrays tetrahedron beam computed tomography for image guided radiotherapy.

    PubMed

    Kim, Joshua; Lu, Weiguo; Zhang, Tiezhi

    2014-02-07

    Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. However, suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source-dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear array x-ray sources mounted alongside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapped regions provide a stereoscopic field of view of approximately 10-15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed having a 45 cm field of view. Due to the scatter rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source-dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented.

  18. Dual source and dual detector arrays tetrahedron beam computed tomography for image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Kim, Joshua; Lu, Weiguo; Zhang, Tiezhi

    2014-02-01

    Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. However, suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source-dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear array x-ray sources mounted alongside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapped regions provide a stereoscopic field of view of approximately 10-15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed having a 45 cm field of view. Due to the scatter rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source-dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented.

  19. Simultaneous multimodal ophthalmic imaging using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    PubMed Central

    Malone, Joseph D.; El-Haddad, Mohamed T.; Bozic, Ivan; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2016-01-01

    Scanning laser ophthalmoscopy (SLO) benefits diagnostic imaging and therapeutic guidance by allowing for high-speed en face imaging of retinal structures. When combined with optical coherence tomography (OCT), SLO enables real-time aiming and retinal tracking and provides complementary information for post-acquisition volumetric co-registration, bulk motion compensation, and averaging. However, multimodality SLO-OCT systems generally require dedicated light sources, scanners, relay optics, detectors, and additional digitization and synchronization electronics, which increase system complexity. Here, we present a multimodal ophthalmic imaging system using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (SS-SESLO-OCT) for in vivo human retinal imaging. SESLO reduces the complexity of en face imaging systems by multiplexing spatial positions as a function of wavelength. SESLO image quality benefited from single-mode illumination and multimode collection through a prototype double-clad fiber coupler, which optimized scattered light throughput and reduced speckle contrast while maintaining lateral resolution. Using a shared 1060 nm swept-source, shared scanner and imaging optics, and a shared dual-channel high-speed digitizer, we acquired inherently co-registered en face retinal images and OCT cross-sections simultaneously at 200 frames per second. PMID:28101411

  20. Imaging the complex geometry of a magma reservoir using FEM-based linear inverse modeling of InSAR data: application to Rabaul Caldera, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martì Molist, Joan

    2017-06-01

    We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a-priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for a broad and long-term subsidence of the caldera between 2007 February and 2010 December. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements in order to obtain a 3-D image of the source of deformation. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in a pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive process of non-linear inversion and remeshing a variable geometry domain. Without assuming an a-priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the east side and of the past Vulcan volcano eruptions of more evolved materials on the west side. The interconnection and spatial distributions of sources correspond to the petrography of the volcanic products described in the literature and to the dynamics of the single and twin eruptions that characterize the caldera. The ability to image the complex geometry of deformation sources in both space and time can improve our ability to monitor active volcanoes, widen our understanding of the dynamics of active volcanic systems and improve the predictions of eruptions.
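
    The core linear step, assembling a Green's function matrix from the elementary sources and solving a regularized least-squares problem, can be sketched as follows (zeroth-order Tikhonov regularization is shown for simplicity; the study's actual regularization operator and positivity handling may differ):

      import numpy as np

      def tikhonov_inversion(G, d, alpha):
          """Solve (G^T G + alpha^2 I) m = G^T d. Columns of G hold line-of-sight
          displacement Green's functions, one per elementary pressure source; d holds
          the InSAR observations; m is the vector of source pressurizations."""
          n = G.shape[1]
          return np.linalg.solve(G.T @ G + alpha**2 * np.eye(n), G.T @ d)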

  1. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2016-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source location, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method that allows us to compute first and second derivatives of misfit functionals with respect to source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: the capability of different misfit functionals to image wave speed anomalies and the source distribution, and possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.

  2. "But I Like My Body": Positive body image characteristics and a holistic model for young-adult women.

    PubMed

    Wood-Barcalow, Nichole L; Tylka, Tracy L; Augustus-Horvath, Casey L

    2010-03-01

    Extant body image research has provided a rich understanding of negative body image but a rather underdeveloped depiction of positive body image. Thus, this study used Grounded Theory to analyze interviews from 15 college women classified as having positive body image and five body image experts. Many characteristics of positive body image emerged, including appreciating the unique beauty and functionality of their body, filtering information (e.g., appearance commentary, media ideals) in a body-protective manner, defining beauty broadly, and highlighting their body's assets while minimizing perceived imperfections. A holistic model emerged: when women processed mostly positive and rejected negative source information, their body investment decreased and body evaluation became more positive, illustrating the fluidity of body image. Women reciprocally influenced these sources (e.g., mentoring others to love their bodies, surrounding themselves with others who promote body acceptance, taking care of their health), which, in turn, promoted increased positive source information. Copyright 2010. Published by Elsevier Ltd.

  3. K-edge subtraction synchrotron X-ray imaging in bio-medical research.

    PubMed

    Thomlinson, W; Elleaume, H; Porra, L; Suortti, P

    2018-05-01

    High contrast in X-ray medical imaging, while maintaining acceptable radiation dose levels to the patient, has long been a goal. One of the most promising methods is that of K-edge subtraction imaging. This technique, first advanced as long ago as 1953 by B. Jacobson, uses the large difference in the absorption coefficient of elements at energies above and below the K-edge. Two images, one taken above the edge and one below the edge, are subtracted leaving, ideally, only the image of the distribution of the target element. This paper reviews the development of the KES techniques and technology as applied to bio-medical imaging from the early low-power tube sources of X-rays to the latest high-power synchrotron sources. Applications to coronary angiography, functional lung imaging and bone growth are highlighted. A vision of possible imaging with new compact sources is presented. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
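
    As a hedged illustration of the dual-energy decomposition underlying KES (the standard textbook form, not necessarily the authors' notation), the log-attenuations measured just below and just above the K-edge can be written as a 2x2 linear system in the projected mass densities of the contrast element (a_K) and the tissue background (a_t), and solved for a_K:

      M_E = \ln\frac{I_{0,E}}{I_E} = \left(\frac{\mu}{\rho}\right)_{K,E} a_K + \left(\frac{\mu}{\rho}\right)_{t,E} a_t,
      \qquad E \in \{\mathrm{lo}, \mathrm{hi}\}

      a_K = \frac{t_{\mathrm{hi}}\, M_{\mathrm{lo}} - t_{\mathrm{lo}}\, M_{\mathrm{hi}}}
                 {t_{\mathrm{hi}}\, K_{\mathrm{lo}} - t_{\mathrm{lo}}\, K_{\mathrm{hi}}},
      \qquad K_E \equiv \left(\frac{\mu}{\rho}\right)_{K,E},\quad t_E \equiv \left(\frac{\mu}{\rho}\right)_{t,E}.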

  4. Potency backprojection

    NASA Astrophysics Data System (ADS)

    Okuwaki, R.; Kasahara, A.; Yagi, Y.

    2017-12-01

    The backprojection (BP) method has been one of the powerful tools of tracking seismic-wave sources of the large/mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical-travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance depth-resolution of projected images and mitigate the dummy imaging of the depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal-intensity of the BP/HBP image at a source point is related to how much of observed waveforms was radiated from that point. Since the amplitude of the GF associated with the slip-rate increases with depth as the rigidity increases with depth, the intensity of the BP/HBP image inherently has depth dependence. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and discuss the rupture properties along the fault drawn from the waveforms in high- and low-frequencies with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have the variants of BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of the potency-rate density by introducing alternative normalizing factors in the conventional formulations. For the BP method, the observed waveform is normalized with the maximum amplitude of P-phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function with the squared-sum of the GF. The normalized waveforms or the cross-correlation functions are then stacked for all the stations to enhance the signal to noise ratio. We will present performance-tests of the new formulations by using synthetic waveforms and the real data of the Mw 8.3 2015 Illapel Chile earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used for exploring the rupture properties of the earthquakes.
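
    The basic projection-and-stack operation that both the BP and HBP methods build on can be sketched as follows (a simplified single-point version; the potency-oriented variants proposed above additionally normalize each trace, or its cross-correlation with the Green's function, before stacking):

      import numpy as np

      def backproject_point(waveforms, travel_times, dt, window):
          """Stack station waveforms at one candidate source point by shifting each
          trace by its theoretical travel time and summing over a short window.
          `waveforms` is (n_stations, n_samples); `travel_times` is in seconds."""
          stack = np.zeros(window)
          for trace, t in zip(waveforms, travel_times):
              i0 = int(round(t / dt))
              segment = trace[i0:i0 + window]
              if segment.size == window:
                  stack += segment
          return stack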

  5. Microlensing of an extended source by a power-law mass distribution

    NASA Astrophysics Data System (ADS)

    Congdon, Arthur B.; Keeton, Charles R.; Osmer, S. J.

    2007-03-01

    Microlensing promises to be a powerful tool for studying distant galaxies and quasars. As the data and models improve, there are systematic effects that need to be explored. Quasar continuum and broad-line regions may respond differently to microlensing due to their different sizes; to understand this effect, we study microlensing of finite sources by a mass function of stars. We find that microlensing is insensitive to the slope of the mass function but does depend on the mass range. For negative-parity images, diluting the stellar population with dark matter increases the magnification dispersion for small sources and decreases it for large sources. This implies that the quasar continuum and broad-line regions may experience very different microlensing in negative-parity lensed images. We confirm earlier conclusions that the surface brightness profile and geometry of the source have little effect on microlensing. Finally, we consider non-circular sources. We show that elliptical sources that are aligned with the direction of shear have larger magnification dispersions than sources with perpendicular alignment, an effect that becomes more prominent as the ellipticity increases. Elongated sources can lead to more rapid variability than circular sources, which raises the prospect of using microlensing to probe source shape.

  6. Operational rate-distortion performance for joint source and channel coding of images.

    PubMed

    Ruf, M J; Modestino, J W

    1999-01-01

    This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.

  7. TRIPPy: Trailed Image Photometry in Python

    NASA Astrophysics Data System (ADS)

    Fraser, Wesley; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michaël; Pike, Rosemary E.; Kavelaars, J. J.; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey

    2016-06-01

    Photometry of moving sources typically suffers from a reduced signal-to-noise ratio (S/N) or flux measurements biased to incorrect low values through the use of circular apertures. To address this issue, we present the software package TRIPPy: TRailed Image Photometry in Python. TRIPPy introduces the pill aperture, which is the natural extension of the circular aperture appropriate for linearly trailed sources. The pill shape is a rectangle with two semicircular end-caps and is described by three parameters: the trail length and angle, and the radius. The TRIPPy software package also includes a new technique to generate accurate model point-spread functions (PSFs) and trailed PSFs (TSFs) from stationary background sources in sidereally tracked images. The TSF is merely the convolution of the model PSF, which consists of a Moffat profile and a super-sampled lookup table. From the TSF, accurate pill aperture corrections can be estimated as a function of pill radius with an accuracy of 10 mmag for highly trailed sources. Analogous to the use of small circular apertures and associated aperture corrections, small radius pill apertures can be used to preserve S/Ns of low flux sources, with appropriate aperture correction applied to provide an accurate, unbiased flux measurement at all S/Ns.
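
    The pill geometry itself is simple to reason about: its area is that of a 2r-wide rectangle of length equal to the trail length plus a full circle of radius r, so it reduces to the ordinary circular aperture for an untrailed source. A small sketch (illustrative only, not TRIPPy's API):

      import math

      def pill_area(radius, trail_length):
          """Area of a pill aperture: a (2*radius)-wide rectangle of length
          `trail_length` capped by two semicircles of the same radius."""
          return math.pi * radius**2 + 2.0 * radius * trail_length

      # A pill with zero trail length reduces to a circular aperture:
      # pill_area(r, 0.0) == math.pi * r**2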

  8. EROS Main Image File: A Picture Perfect Database for Landsat Imagery and Aerial Photography.

    ERIC Educational Resources Information Center

    Jack, Robert F.

    1984-01-01

    Describes Earth Resources Observation System online database, which provides access to computerized images of Earth obtained via satellite. Highlights include retrieval system and commands, types of images, search strategies, other online functions, and interpretation of accessions. Satellite information, sources and samples of accessions, and…

  9. Magnetoacoustic Tomography with Magnetic Induction: Bioimpedance reconstruction through vector source imaging

    PubMed Central

    Mariappan, Leo; He, Bin

    2013-01-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is a technique proposed to reconstruct the conductivity distribution in biological tissue at ultrasound imaging resolution. A magnetic pulse is used to generate eddy currents in the object, which in the presence of a static magnetic field induce Lorentz-force-based acoustic waves in the medium. These time-resolved acoustic waves are collected with ultrasound transducers and, in the present work, are used to reconstruct the current source that gives rise to the MAT-MI acoustic signal using vector imaging point spread functions. The reconstructed source is then used to estimate the conductivity distribution of the object. Computer simulations and phantom experiments are performed to demonstrate conductivity reconstruction through vector source imaging in a circular scanning geometry with a limited bandwidth finite size piston transducer. The results demonstrate that the MAT-MI approach is capable of conductivity reconstruction in a physical setting. PMID:23322761

  10. Experimental evaluation and basis function optimization of the spatially variant image-space PSF on the Ingenuity PET/MR scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva

    2014-06-15

    Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as an MR-compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall the PET subsystem exhibits excellent resolution characteristics mainly due to the fact that the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and due to basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.

  11. Magnetoacoustic tomography with magnetic induction for high-resolution bioimpedance imaging through vector source reconstruction under the static field of MRI magnet.

    PubMed

    Mariappan, Leo; Hu, Gang; He, Bin

    2014-02-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on the acoustic measurements of Lorentz force induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is then used to estimate the object conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼1.5 mm spatial resolution corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experimental results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction.

  12. Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2010-02-01

    We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.

  13. Assessment of image quality in x-ray radiography imaging using a small plasma focus device

    NASA Astrophysics Data System (ADS)

    Kanani, A.; Shirani, B.; Jabbari, I.; Mokhtari, J.

    2014-08-01

    This paper offers a comprehensive investigation of image quality parameters for a small plasma focus as a pulsed hard x-ray source for radiography applications. A set of images were captured from some metal objects and electronic circuits using a low energy plasma focus at different voltages of capacitor bank and different pressures of argon gas. The x-ray source focal spot of this device was obtained to be about 0.6 mm using the penumbra imaging method. The image quality was studied by several parameters such as image contrast, line spread function (LSF) and modulation transfer function (MTF). Results showed that the contrast changes by variations in gas pressure. The best contrast was obtained at a pressure of 0.5 mbar and 3.75 kJ stored energy. The results of x-ray dose from the device showed that about 0.6 mGy is sufficient to obtain acceptable images on the film. The measurements of LSF and MTF parameters were carried out by means of a thin stainless steel wire 0.8 mm in diameter and the cut-off frequency was obtained to be about 1.5 cycles/mm.
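
    The LSF-to-MTF step described above can be sketched in a few lines; this is a generic illustration, not the authors' analysis code, and the Gaussian stand-in LSF, sampling pitch, and 10% modulation criterion are assumptions rather than their measured values.

      # Sketch: the MTF is the normalized magnitude of the Fourier transform
      # of the line spread function (LSF).
      import numpy as np

      dx = 0.05                                   # detector sampling, mm/sample
      x = np.arange(-10, 10, dx)
      lsf = np.exp(-0.5 * (x / 0.3) ** 2)         # stand-in for a measured LSF

      mtf = np.abs(np.fft.rfft(lsf))
      mtf /= mtf[0]                               # normalize to 1 at zero frequency
      freq = np.fft.rfftfreq(lsf.size, d=dx)      # cycles/mm

      cutoff = freq[np.argmax(mtf < 0.1)]         # first frequency below 10% modulation
      print(f"approx. cut-off frequency: {cutoff:.2f} cycles/mm")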

  14. GLACiAR, an Open-Source Python Tool for Simulations of Source Recovery and Completeness in Galaxy Surveys

    NASA Astrophysics Data System (ADS)

    Carrasco, D.; Trenti, M.; Mutch, S.; Oesch, P. A.

    2018-06-01

    The luminosity function is a fundamental observable for characterising how galaxies form and evolve throughout cosmic history. One key ingredient in deriving this measurement from the number counts in a survey is the characterisation of the completeness and redshift selection functions for the observations. In this paper, we present GLACiAR, an open-source Python tool available on GitHub for estimating the completeness and selection functions in galaxy surveys. The code is tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman-break technique, but it can be applied broadly. The code generates artificial galaxies that follow Sérsic profiles with different indices and with customisable size, redshift, and spectral energy distribution properties, adds them to input images, and measures the recovery rate. To illustrate this new software tool, we apply it to quantify the completeness and redshift selection functions for J-dropout sources (redshift z ~ 10 galaxies) in the Hubble Space Telescope Brightest of Reionizing Galaxies Survey. Our comparison with a previous completeness analysis on the same dataset shows overall agreement, but also highlights how different modelling assumptions for the artificial sources can impact completeness estimates.
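
    The injection/recovery logic behind such completeness estimates can be illustrated with a toy sketch. This is not GLACiAR itself; the Gaussian profile (in place of a Sérsic model), the threshold detector, and all numbers are stand-ins.

      # Sketch: inject artificial sources of known flux and measure the
      # fraction recovered by a simple detection step.
      import numpy as np

      rng = np.random.default_rng(0)

      def inject(image, x0, y0, flux, fwhm=3.0):
          """Add a Gaussian stand-in for a galaxy profile at (x0, y0)."""
          sigma = fwhm / 2.355
          y, x = np.indices(image.shape)
          image += flux * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

      def recovered(image, x0, y0, threshold):
          """Toy detector: peak pixel near the injected position above threshold."""
          cut = image[int(y0)-2:int(y0)+3, int(x0)-2:int(x0)+3]
          return cut.max() > threshold

      noise_sigma = 1.0
      completeness = {}
      for flux in [5, 20, 80]:
          hits, trials = 0, 200
          for _ in range(trials):
              img = rng.normal(0, noise_sigma, (64, 64))
              x0, y0 = rng.uniform(10, 54, 2)
              inject(img, x0, y0, flux)
              hits += recovered(img, x0, y0, threshold=3 * noise_sigma)
          completeness[flux] = hits / trials
      print(completeness)   # recovery fraction rises with injected flux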

  15. scarlet: Source separation in multi-band images by Constrained Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert

    2018-03-01

    SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
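
    The underlying model can be written compactly. The sketch below is not the scarlet code and omits its constraints and optimizer; the band count, source shapes, and noise level are made up. It only builds the SED-times-morphology forward model and a residual.

      # Sketch: each source k is an SED A[:, k] (one amplitude per band) times
      # a non-negative morphology S[k]; the multi-band scene is their sum.
      import numpy as np

      n_bands, ny, nx, n_src = 3, 32, 32, 2
      rng = np.random.default_rng(1)

      A = rng.uniform(0.5, 2.0, (n_bands, n_src))        # SEDs, one column per source
      S = np.zeros((n_src, ny, nx))                      # non-parametric morphologies
      S[0, 10:14, 10:14] = 1.0
      S[1, 20:25, 18:23] = 0.5

      # Forward model: sum_k A[b, k] * S[k] for each band b
      model = np.einsum('bk,kyx->byx', A, S)

      data = model + rng.normal(0, 0.01, model.shape)    # observed multi-band image
      residual = data - model
      print("per-band residual RMS:", residual.reshape(n_bands, -1).std(axis=1))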

  16. Airborne Open Polar/Imaging Nephelometer for Ice Particles in Cirrus Clouds and Aerosols Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martins, JV

    2016-04-01

    The Open Imaging Nephelometer (O-I-Neph) instrument is an adaptation of a proven laboratory instrument built and tested at the University of Maryland, Baltimore County (UMBC), the Polarized Imaging Nephelometer (PI-Neph). The instrument design of both imaging nephelometers uses a narrow-beam laser source and a wide-field-of-view imaging camera to capture the entire scattering-phase function in one image, quasi-instantaneously.

  17. High-throughput automated home-cage mesoscopic functional imaging of mouse cortex

    PubMed Central

    Murphy, Timothy H.; Boyd, Jamie D.; Bolaños, Federico; Vanni, Matthieu P.; Silasi, Gergely; Haupt, Dirk; LeDue, Jeff M.

    2016-01-01

    Mouse head-fixed behaviour coupled with functional imaging has become a powerful technique in rodent systems neuroscience. However, training mice can be time consuming and is potentially stressful for animals. Here we report a fully automated, open source, self-initiated head-fixation system for mesoscopic functional imaging in mice. The system supports five mice at a time and requires minimal investigator intervention. Using genetically encoded calcium indicator transgenic mice, we longitudinally monitor cortical functional connectivity up to 24 h per day in >7,000 self-initiated and unsupervised imaging sessions up to 90 days. The procedure provides robust assessment of functional cortical maps on the basis of both spontaneous activity and brief sensory stimuli such as light flashes. The approach is scalable to a number of remotely controlled cages that can be assessed within the controlled conditions of dedicated animal facilities. We anticipate that home-cage brain imaging will permit flexible and chronic assessment of mesoscale cortical function. PMID:27291514

  18. Multi-channel medical imaging system

    DOEpatents

    Frangioni, John V

    2013-12-31

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in the subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.

  19. Multi-channel medical imaging system

    DOEpatents

    Frangioni, John V.

    2016-05-03

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.

  20. Integration of optical imaging with a small animal irradiator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weersink, Robert A., E-mail: robert.weersink@rmp.uhn.on.ca; Ansell, Steve; Wang, An

    Purpose: The authors describe the integration of optical imaging with a targeted small animal irradiator device, focusing on design, instrumentation, 2D to 3D image registration, 2D targeting, and the accuracy of recovering and mapping the optical signal to a 3D surface generated from the cone-beam computed tomography (CBCT) imaging. The integration of optical imaging will improve targeting of the radiation treatment and offer longitudinal tracking of tumor response of small animal models treated using the system. Methods: The existing image-guided small animal irradiator consists of a variable kilovolt (peak) x-ray tube mounted opposite an aSi flat panel detector, both mounted on a c-arm gantry. The tube is used for both CBCT imaging and targeted irradiation. The optical component employs a CCD camera perpendicular to the x-ray treatment/imaging axis with a computer controlled filter for spectral decomposition. Multiple optical images can be acquired at any angle as the gantry rotates. The optical to CBCT registration, which uses a standard pinhole camera model, was modeled and tested using phantoms with markers visible in both optical and CBCT images. Optically guided 2D targeting in the anterior/posterior direction was tested on an anthropomorphic mouse phantom with embedded light sources. The accuracy of the mapping of optical signal to the CBCT surface was tested using the same mouse phantom. A surface mesh of the phantom was generated based on the CBCT image and optical intensities were projected onto the surface. The measured surface intensity was compared to the calculated surface intensity for a point source at the actual source position. The point-source position was also optimized to provide the closest match between measured and calculated intensities, and the distance between the optimized and actual source positions was then calculated. This process was repeated for multiple wavelengths and sources. Results: The optical to CBCT registration error was 0.8 mm. Two-dimensional targeting of a light source in the mouse phantom based on optical imaging along the anterior/posterior direction was accurate to 0.55 mm. The mean square residual error in the normalized measured projected surface intensities versus the calculated normalized intensities ranged between 0.0016 and 0.006. Optimizing the position reduced this error to between 0.00016 and 0.0004, with distances between the actual and optimized source positions ranging between 0.7 and 1 mm. Conclusions: The integration of optical imaging on an existing small animal irradiation platform has been accomplished. A targeting accuracy of 1 mm can be achieved in rigid, homogeneous phantoms. The combination of optical imaging with a CBCT image-guided small animal irradiator offers the potential to deliver functionally targeted dose distributions, as well as monitor spatial and temporal functional changes that occur with radiation therapy.
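
    The point-source position optimization described in the Results can be sketched generically as a least-squares fit. The 1/r^2 falloff, the flat surface, and all numbers below are assumptions rather than the authors' light-transport model.

      # Sketch: fit the position of a point source so that calculated surface
      # intensities best match (noisy) measured ones.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      surface_pts = rng.uniform(-10, 10, (200, 3))       # mm, toy mesh vertices
      surface_pts[:, 2] = 0.0                            # flat "surface" at z = 0
      true_src = np.array([1.0, -2.0, -5.0])             # mm, below the surface

      def calc_intensity(src, pts):
          r2 = np.sum((pts - src) ** 2, axis=1)
          I = 1.0 / r2                                   # simple falloff stand-in
          return I / I.max()                             # normalized intensities

      measured = calc_intensity(true_src, surface_pts)
      measured += rng.normal(0, 0.01, measured.shape)

      def cost(src):
          return np.mean((calc_intensity(src, surface_pts) - measured) ** 2)

      fit = minimize(cost, x0=np.array([0.0, 0.0, -3.0]), method='Nelder-Mead')
      print("recovered source position (mm):", np.round(fit.x, 2))
      print("distance from true position (mm):", np.linalg.norm(fit.x - true_src))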

  1. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    PubMed

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating perception of depth rely solely on one type of depth cue based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues, natural stereoscopic images were used in this study. Using slow cortical potentials and source localization, we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility of separating the processing of different depth cues.

  2. Performance evaluation of a direct-conversion flat-panel detector system in imaging and quality assurance for a high-dose-rate 192Ir source

    NASA Astrophysics Data System (ADS)

    Miyahara, Yoshinori; Hara, Yuki; Nakashima, Hiroto; Nishimura, Tomonori; Itakura, Kanae; Inomata, Taisuke; Kitagaki, Hajime

    2018-03-01

    In high-dose-rate (HDR) brachytherapy, a direct-conversion flat-panel detector (d-FPD) clearly depicts a 192Ir source without image halation, even under the emission of high-energy gamma rays. However, it was unknown why the iridium source is visible when using a d-FPD. The purpose of this study was to clarify the reasons for the visibility of the source core based on physical imaging characteristics, including the modulation transfer function (MTF), noise power spectrum (NPS), contrast transfer function, and the linearity of the d-FPD response to high-energy gamma rays. The acquired data included x-rays [X]; gamma rays [γ]; dual rays (X + γ) [D]; and subtracted data for depicting the source ([D] − [γ]). In the quality assurance (QA) test for the positional accuracy of the source core, the coordinates of each dwell point were compared between the planned and actual source core positions using a CT/MR-compatible ovoid applicator and a Fletcher-Williamson applicator. The profile curves of [X] and ([D] − [γ]) matched well on MTF and NPS. The contrast resolutions of [D] and [X] were equivalent. A strongly positive linear correlation was found between the output data of [γ] and source strength (r² > 0.99). With regard to the accuracy of the source core position, the largest coordinate difference (3D distance) was noted at the maximum curvature of the CT/MR-compatible ovoid and Fletcher-Williamson applicators, showing 1.74 ± 0.02 mm and 1.01 ± 0.01 mm, respectively. A d-FPD system provides high-quality images of a source even when high-energy gamma rays are emitted onto the detector, and positional accuracy tests with clinical applicators are useful for identifying source positions (source movements) within the applicator for QA.

  3. Functional magnetic resonance imaging study of external source memory and its relation to cognitive insight in non-clinical subjects.

    PubMed

    Buchy, Lisa; Hawco, Colin; Bodnar, Michael; Izadi, Sarah; Dell'Elce, Jennifer; Messina, Katrina; Lepage, Martin

    2014-09-01

    Previous research has linked cognitive insight (a measure of self-reflectiveness and self-certainty) in psychosis with neurocognitive and neuroanatomical disturbances in the fronto-hippocampal neural network. The authors' goal was to use functional magnetic resonance imaging (fMRI) to investigate the neural correlates of cognitive insight during an external source memory paradigm in non-clinical subjects. At encoding, 24 non-clinical subjects travelled through a virtual city where they came across 20 separate people, each paired with a unique object in a distinct location. fMRI data were then acquired while participants viewed images of the city, and completed source recognition memory judgments of where and with whom objects were seen, which is known to involve prefrontal cortex. Cognitive insight was assessed with the Beck Cognitive Insight Scale. External source memory was associated with neural activity in a widespread network consisting of frontal cortex, including ventrolateral prefrontal cortex (VLPFC), temporal and occipital cortices. Activation in VLPFC correlated with higher self-reflectiveness and activation in midbrain correlated with lower self-certainty during source memory attributions. Neither self-reflectiveness nor self-certainty significantly correlated with source memory accuracy. By means of virtual reality and in the context of an external source memory paradigm, the study identified a preliminary functional neural basis for cognitive insight in the VLPFC in healthy people that accords with our fronto-hippocampal theoretical model as well as recent neuroimaging data in people with psychosis. The results may facilitate the understanding of the role of neural mechanisms in psychotic disorders associated with cognitive insight distortions. © 2014 The Authors. Psychiatry and Clinical Neurosciences © 2014 Japanese Society of Psychiatry and Neurology.

  4. Bedside functional brain imaging in critically-ill children using high-density EEG source modeling and multi-modal sensory stimulation.

    PubMed

    Eytan, Danny; Pang, Elizabeth W; Doesburg, Sam M; Nenadovic, Vera; Gavrilovic, Bojan; Laussen, Peter; Guerguerian, Anne-Marie

    2016-01-01

    Acute brain injury is a common cause of death and critical illness in children and young adults. Fundamental management focuses on early characterization of the extent of injury and optimizing recovery by preventing secondary damage during the days following the primary injury. Currently, bedside technology for measuring neurological function is mainly limited to using electroencephalography (EEG) for detection of seizures and encephalopathic features, and evoked potentials. We present a proof of concept study in patients with acute brain injury in the intensive care setting, featuring a bedside functional imaging set-up designed to map cortical brain activation patterns by combining high density EEG recordings, multi-modal sensory stimulation (auditory, visual, and somatosensory), and EEG source modeling. Use of source-modeling allows for examination of spatiotemporal activation patterns at the cortical region level as opposed to the traditional scalp potential maps. The application of this system in both healthy and brain-injured participants is demonstrated with modality-specific source-reconstructed cortical activation patterns. By combining stimulation obtained with different modalities, most of the cortical surface can be monitored for changes in functional activation without having to physically transport the subject to an imaging suite. The results in patients in an intensive care setting with anatomically well-defined brain lesions suggest a topographic association between their injuries and activation patterns. Moreover, we report the reproducible application of a protocol examining a higher-level cortical processing with an auditory oddball paradigm involving presentation of the patient's own name. This study reports the first successful application of a bedside functional brain mapping tool in the intensive care setting. This application has the potential to provide clinicians with an additional dimension of information to manage critically-ill children and adults, and potentially patients not suited for magnetic resonance imaging technologies.

  5. Non-contact time-domain imaging of functional brain activation and heterogeneity of superficial signals

    NASA Astrophysics Data System (ADS)

    Wabnitz, H.; Mazurenka, M.; Di Sieno, L.; Contini, D.; Dalla Mora, A.; Farina, A.; Hoshi, Y.; Kirilina, E.; Macdonald, R.; Pifferi, A.

    2017-07-01

    Non-contact scanning at small source-detector separation enables imaging of cerebral and extracranial signals at high spatial resolution and their separation based on early and late photons accounting for the related spatio-temporal characteristics.

  6. A smartphone-based chip-scale microscope using ambient illumination.

    PubMed

    Lee, Seung Ah; Yang, Changhuei

    2014-08-21

    Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone's camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the image resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
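
    A much-simplified sketch of the pixel super-resolution idea follows (shift-and-add onto a finer grid). The actual reconstruction used with such lensless microscopes is more sophisticated, and all sizes, shifts, and noise levels here are invented.

      # Sketch: low-resolution shadow images taken at slightly different
      # illumination angles are placed onto a finer grid according to their
      # known sub-pixel shifts and averaged.
      import numpy as np

      def downsample(img, f):
          """Average f x f blocks (models the large sensor pixels)."""
          ny, nx = img.shape
          return img[:ny - ny % f, :nx - nx % f].reshape(ny // f, f, nx // f, f).mean(axis=(1, 3))

      rng = np.random.default_rng(3)
      f = 4                                              # upsampling factor
      truth = np.zeros((64, 64))
      truth[30:34, 30:32] = 1.0                          # high-resolution object

      hi_accum = np.zeros_like(truth)
      n_frames = 32
      for _ in range(n_frames):
          dy, dx = rng.integers(0, f, 2)                 # known sub-pixel shift (hi-res pixels)
          lo = downsample(np.roll(truth, (dy, dx), axis=(0, 1)), f)
          lo += rng.normal(0, 0.01, lo.shape)
          up = np.kron(lo, np.ones((f, f)))              # place low-res samples on the fine grid
          up = np.roll(up, (-dy, -dx), axis=(0, 1))      # undo the shift
          hi_accum += up

      super_res = hi_accum / n_frames
      print("peak of reconstruction near the object:", super_res[28:36, 28:36].max().round(3))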

  7. A smartphone-based chip-scale microscope using ambient illumination

    PubMed Central

    Lee, Seung Ah; Yang, Changhuei

    2014-01-01

    Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone’s camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the imaging resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction is performed on the device using a custom-built android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system. PMID:24964209

  8. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach

    PubMed Central

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo

    2016-01-01

    Objective Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source imaging algorithms to both find the network nodes (regions of interest) and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods Source imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulations studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ~20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion Our study indicates that combined source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network nodes location and internodal connectivity). Significance The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473

  9. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach.

    PubMed

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin

    2016-12-01

    Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source-imaging algorithms both to find the network nodes [regions of interest (ROI)] and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or magnetoencephalography (MEG). Localization errors of network nodes were less than 5 mm, and normalized connectivity errors were ∼20%, in estimating the underlying brain networks in the simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combining source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node locations and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.
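
    The Granger step on extracted time-courses can be illustrated with a minimal bivariate example. This is not the authors' pipeline; the model order, the simulated coupling, and the log variance-ratio statistic are illustrative choices.

      # Sketch: x "Granger-causes" y if past values of x reduce the prediction
      # error of y beyond what y's own past achieves.
      import numpy as np

      rng = np.random.default_rng(4)
      n, p = 2000, 2                                     # samples, model order (assumed)
      x = rng.normal(size=n)
      y = np.zeros(n)
      for t in range(p, n):                              # y is driven by past x
          y[t] = 0.4 * y[t-1] + 0.5 * x[t-1] + 0.1 * rng.normal()

      def ar_residual_var(target, regressors, p):
          """Residual variance of an order-p linear prediction of target."""
          rows = [np.concatenate([r[t-p:t] for r in regressors]) for t in range(p, len(target))]
          X = np.asarray(rows)
          beta, *_ = np.linalg.lstsq(X, target[p:], rcond=None)
          return np.var(target[p:] - X @ beta)

      var_restricted = ar_residual_var(y, [y], p)        # y's past only
      var_full = ar_residual_var(y, [y, x], p)           # y's past + x's past
      gc_x_to_y = np.log(var_restricted / var_full)      # log variance ratio
      print(f"Granger causality x -> y: {gc_x_to_y:.3f} (positive = x helps predict y)")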

  10. Long ranging swept-source optical coherence tomography-based angiography outperforms its spectral-domain counterpart in imaging human skin microcirculations

    NASA Astrophysics Data System (ADS)

    Xu, Jingjiang; Song, Shaozhen; Men, Shaojie; Wang, Ruikang K.

    2017-11-01

    There is an increasing demand for imaging tools in clinical dermatology that can perform in vivo wide-field morphological and functional examination from the surface to deep tissue regions at various skin sites of the human body. The conventional spectral-domain optical coherence tomography-based angiography (SD-OCTA) system has difficulty meeting these requirements due to its fundamental limitations in sensitivity roll-off, imaging range, and imaging speed. To mitigate these issues, we demonstrate a swept-source OCTA (SS-OCTA) system employing a swept source based on a vertical cavity surface-emitting laser. A series of comparisons between SS-OCTA and SD-OCTA are conducted. Benefiting from the high system sensitivity, long imaging range, and superior roll-off performance, the SS-OCTA system is demonstrated to perform better than the SD-OCTA system in imaging human skin. We show that the SS-OCTA permits remarkably deep visualization of both structure and vasculature (up to ∼2 mm penetration) with a wide field-of-view capability (up to 18 × 18 mm²), enabling a more comprehensive assessment of the morphological features as well as functional blood vessel networks from the superficial epidermal to deep dermal layers. It is expected that the advantages of the SS-OCTA system will provide grounds for clinical translation, benefiting existing dermatological practice.

  11. Simulation tools for analyzer-based x-ray phase contrast imaging system with a conventional x-ray source

    NASA Astrophysics Data System (ADS)

    Caudevilla, Oriol; Zhou, Wei; Stoupin, Stanislav; Verman, Boris; Brankov, J. G.

    2016-09-01

    Analyzer-based X-ray phase contrast imaging (ABI) belongs to a broader family of phase-contrast (PC) X-ray imaging modalities. Unlike conventional X-ray radiography, which measures only X-ray absorption, PC imaging also measures the X-ray deflection induced by the refractive properties of the object. It has been shown that refraction imaging provides better contrast when imaging soft tissue, which is of great interest in medical imaging applications. In this paper, we introduce a simulation tool specifically designed to simulate an analyzer-based X-ray phase contrast imaging system with a conventional polychromatic X-ray source. By utilizing ray tracing and basic physical principles of diffraction theory, our simulation tool can predict the X-ray beam profile shape, the energy content, and the total throughput (photon count) at the detector. In addition, we can evaluate the imaging system point-spread function for various system configurations.

  12. Determining Object Orientation from a Single Image Using Multiple Information Sources.

    DTIC Science & Technology

    1984-06-01

    Location of the image ellipse is accomplished by exploiting knowledge about object boundaries and image intensity gradients. The orientation information from each of these three methods is combined using a "plausibility" function.

  13. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
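
    The high-frequency fusion rule based on local area energy can be sketched as follows. This is a hedged toy version rather than the paper's full BEMD pipeline; the random IMF stand-ins and the 3 × 3 window size are assumptions.

      # Sketch: compare the local 3x3 energy of the two detail components and
      # keep, pixel by pixel, the coefficient with the larger local energy.
      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(5)
      imf_a = rng.normal(size=(64, 64))               # stand-ins for the first IMFs
      imf_b = rng.normal(size=(64, 64))               # of the two source images

      energy_a = uniform_filter(imf_a ** 2, size=3)   # local 3x3 mean of squared values
      energy_b = uniform_filter(imf_b ** 2, size=3)

      fused_hf = np.where(energy_a >= energy_b, imf_a, imf_b)
      print("fraction of pixels taken from image A:",
            np.mean(energy_a >= energy_b).round(3))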

  14. A LabVIEW Platform for Preclinical Imaging Using Digital Subtraction Angiography and Micro-CT.

    PubMed

    Badea, Cristian T; Hedlund, Laurence W; Johnson, G Allan

    2013-01-01

    CT and digital subtraction angiography (DSA) are ubiquitous in the clinic. Their preclinical equivalents are valuable imaging methods for studying disease models and treatment. We have developed a dual source/detector X-ray imaging system that we have used for both micro-CT and DSA studies in rodents. The control of such a complex imaging system requires substantial software development for which we use the graphical language LabVIEW (National Instruments, Austin, TX, USA). This paper focuses on a LabVIEW platform that we have developed to enable anatomical and functional imaging with micro-CT and DSA. Our LabVIEW applications integrate and control all the elements of our system including a dual source/detector X-ray system, a mechanical ventilator, a physiological monitor, and a power microinjector for the vascular delivery of X-ray contrast agents. Various applications allow cardiac- and respiratory-gated acquisitions for both DSA and micro-CT studies. Our results illustrate the application of DSA for cardiopulmonary studies and vascular imaging of the liver and coronary arteries. We also show how DSA can be used for functional imaging of the kidney. Finally, the power of 4D micro-CT imaging using both prospective and retrospective gating is shown for cardiac imaging.

  15. A LabVIEW Platform for Preclinical Imaging Using Digital Subtraction Angiography and Micro-CT

    PubMed Central

    Badea, Cristian T.; Hedlund, Laurence W.; Johnson, G. Allan

    2013-01-01

    CT and digital subtraction angiography (DSA) are ubiquitous in the clinic. Their preclinical equivalents are valuable imaging methods for studying disease models and treatment. We have developed a dual source/detector X-ray imaging system that we have used for both micro-CT and DSA studies in rodents. The control of such a complex imaging system requires substantial software development for which we use the graphical language LabVIEW (National Instruments, Austin, TX, USA). This paper focuses on a LabVIEW platform that we have developed to enable anatomical and functional imaging with micro-CT and DSA. Our LabVIEW applications integrate and control all the elements of our system including a dual source/detector X-ray system, a mechanical ventilator, a physiological monitor, and a power microinjector for the vascular delivery of X-ray contrast agents. Various applications allow cardiac- and respiratory-gated acquisitions for both DSA and micro-CT studies. Our results illustrate the application of DSA for cardiopulmonary studies and vascular imaging of the liver and coronary arteries. We also show how DSA can be used for functional imaging of the kidney. Finally, the power of 4D micro-CT imaging using both prospective and retrospective gating is shown for cardiac imaging. PMID:27006920

  16. Geometric error analysis for shuttle imaging spectrometer experiment

    NASA Technical Reports Server (NTRS)

    Wang, S. J.; Ih, C. H.

    1984-01-01

    The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  17. Near-Infrared Neuroimaging with NinPy

    PubMed Central

    Strangman, Gary E.; Zhang, Quan; Zeffiro, Thomas

    2009-01-01

    There has been substantial recent growth in the use of non-invasive optical brain imaging in studies of human brain function in health and disease. Near-infrared neuroimaging (NIN) is one of the most promising of these techniques and, although NIN hardware continues to evolve at a rapid pace, software tools supporting optical data acquisition, image processing, statistical modeling, and visualization remain less refined. Python, a modular and computationally efficient development language, can support functional neuroimaging studies of diverse design and implementation. In particular, Python's easily readable syntax and modular architecture allow swift prototyping followed by efficient transition to stable production systems. As an introduction to our ongoing efforts to develop Python software tools for structural and functional neuroimaging, we discuss: (i) the role of non-invasive diffuse optical imaging in measuring brain function, (ii) the key computational requirements to support NIN experiments, (iii) our collection of software tools to support NIN, called NinPy, and (iv) future extensions of these tools that will allow integration of optical with other structural and functional neuroimaging data sources. Source code for the software discussed here will be made available at www.nmr.mgh.harvard.edu/Neural_SystemsGroup/software.html. PMID:19543449

  18. Uncertainty quantification in volumetric Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos

    2016-11-01

    Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of reconstructed three-dimensional particle location that in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources are also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.

  19. Image reduction pipeline for the detection of variable sources in highly crowded fields

    NASA Astrophysics Data System (ADS)

    Gössl, C. A.; Riffeser, A.

    2002-01-01

    We present a reduction pipeline for CCD (charge-coupled device) images which was built to search for variable sources in highly crowded fields like the M 31 bulge and to handle extensive databases due to large time series. We describe all steps of the standard reduction in detail with emphasis on the realisation of per pixel error propagation: bias correction, treatment of bad pixels, flatfielding, and filtering of cosmic rays. The problems of conservation of the PSF (point spread function) and error propagation in our image alignment procedure as well as the detection algorithm for variable sources are discussed: we build difference images via image convolution with a technique called OIS (optimal image subtraction, Alard & Lupton 1998), proceed with an automatic detection of variable sources in noise dominated images and finally apply a PSF-fitting, relative photometry to the sources found. For the WeCAPP project (Riffeser et al. 2001) we achieve 3σ detections for variable sources with an apparent brightness of e.g. m = 24.9 mag at their minimum and a variation of Δm = 2.4 mag (or m = 21.9 mag brightness minimum and a variation of Δm = 0.6 mag) on a background signal of 18.1 mag/arcsec² based on a 500 s exposure with 1.5 arcsec seeing at a 1.2 m telescope. The complete per pixel error propagation allows us to give accurate errors for each measurement.
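
    The difference-imaging idea at the heart of the pipeline can be sketched simply. Note that OIS solves for the PSF-matching kernel, whereas this toy example assumes the kernel is already known; all source positions and noise levels are invented.

      # Sketch: degrade the reference to the seeing of the new frame, subtract,
      # and only sources that changed survive in the difference image.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(6)
      ny = nx = 64
      yy, xx = np.indices((ny, nx))

      def star(x0, y0, flux, sigma):
          return flux * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))

      reference = star(20, 20, 100, 1.0) + star(45, 40, 80, 1.0)        # good seeing
      science = gaussian_filter(reference, 1.5)                          # worse seeing
      science += star(32, 50, 5, np.sqrt(1.0**2 + 1.5**2))               # a new variable source
      science += rng.normal(0, 0.2, science.shape)

      matched_ref = gaussian_filter(reference, 1.5)    # reference matched to the same PSF
      difference = science - matched_ref               # constant stars cancel
      peak = np.unravel_index(np.argmax(difference), difference.shape)
      print("brightest residual at (y, x):", peak)     # should be near (50, 32)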

  20. WISE Photometry for 400 million SDSS sources

    DOE PAGES

    Lang, Dustin; Hogg, David W.; Schlegel, David J.

    2016-01-28

    Here, we present photometry of images from the Wide-Field Infrared Survey Explorer (WISE) of over 400 million sources detected by the Sloan Digital Sky Survey (SDSS). We also use a "forced photometry" technique, using measured SDSS source positions, star-galaxy classification, and galaxy profiles to define the sources whose fluxes are to be measured in the WISE images. We perform photometry with The Tractor image modeling code, working on our "unWISE" coadds and taking account of the WISE point-spread function and a noise model. The result is a measurement of the flux of each SDSS source in each WISE band. Many sources have little flux in the WISE bands, so often the measurements we report are consistent with zero given our uncertainties. But, for many sources we get 3σ or 4σ measurements; these sources would not be reported by the "official" WISE pipeline and will not appear in the WISE catalog, yet they can be highly informative for some scientific questions. In addition, these small-signal measurements can be used in stacking analyses at the catalog level. The forced photometry approach has the advantage that we measure a consistent set of sources between SDSS and WISE, taking advantage of the resolution and depth of the SDSS images to interpret the WISE images; objects that are resolved in SDSS but blended together in WISE still have accurate measurements in our photometry. Our results, and the code used to produce them, are publicly available at http://unwise.me.
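
    The forced-photometry measurement reduces, per source, to a linear least-squares flux fit at a fixed position. The sketch below is a generic illustration, not The Tractor; the Gaussian stand-in for the WISE PSF and the noise level are assumptions.

      # Sketch: with the source position fixed from a deeper catalog, the flux
      # is the single linear parameter and has a closed-form solution.
      import numpy as np

      rng = np.random.default_rng(7)
      ny = nx = 33
      yy, xx = np.indices((ny, nx))

      def psf(x0, y0, sigma=2.5):
          """Unit-flux Gaussian stand-in for the WISE PSF."""
          g = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
          return g / g.sum()

      true_flux = 12.0
      sky_sigma = 0.05
      image = true_flux * psf(16.2, 15.7) + rng.normal(0, sky_sigma, (ny, nx))

      # Known position from the optical catalog; only the flux is free.
      model = psf(16.2, 15.7).ravel()
      data = image.ravel()
      flux_hat = (model @ data) / (model @ model)       # least-squares solution
      flux_err = sky_sigma / np.sqrt(model @ model)     # 1-sigma uncertainty
      print(f"forced flux = {flux_hat:.2f} +/- {flux_err:.2f} (true {true_flux})")

    Even when the fitted flux is consistent with zero, the measurement and its uncertainty remain well defined, which is what makes the catalog-level stacking described above possible.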

  1. Spot size measurement of a flash-radiography source using the pinhole imaging method

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Qin; Chen, Nan; Cheng, Jin-Ming; Xie, Yu-Tong; Liu, Yun-Long; Long, Quan-Hong

    2016-07-01

    The spot size of the X-ray source is a key parameter of a flash-radiography facility, and is usually quoted as an evaluation of the resolving power. The pinhole imaging technique is applied to measure the spot size of the Dragon-I linear induction accelerator, by which a two-dimensional spatial distribution of the source spot is obtained. Experimental measurements are performed to measure the spot image when the transportation and focusing of the electron beam are tuned by adjusting the currents of solenoids in the downstream section. The spot size in terms of the full width at half maximum, and that defined from the spatial frequency at half the peak value of the modulation transfer function, are calculated and discussed.

  2. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can distinguish targets from background through clearly different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly concerns processing of the high-frequency part of the signal; for the low-frequency part, the usual weighted average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength within a 3 × 3 window is calculated, and the ratio of regional signal intensities between the source images is taken as the matching measure. The extraction method and decision mode for the details are determined by a decision-making module. The fusion result is closely related to the threshold set in this decision-making module. Instead of the commonly used empirical approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as the initial interpolation nodes, and the minimum of the quadratic interpolation function is computed. The best threshold is obtained by comparing the minima of the quadratic interpolation functions. A series of image quality evaluations shows that this method improves the fusion effect; moreover, it is effective not only for individual images but also for large numbers of images.
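
    The quadratic interpolation step for choosing the threshold can be sketched as follows. The objective function here is a hypothetical stand-in for an image-quality score, and the bracket-shrinking rule is an assumed detail, not the paper's exact procedure.

      # Sketch: fit a parabola through the endpoints and midpoint of the
      # threshold search interval and take its vertex as the next estimate.
      import numpy as np

      def fusion_quality(threshold):
          """Hypothetical quality score to minimize; a real one would score fused images."""
          return (threshold - 0.37) ** 2 + 0.05

      a, b = 0.0, 1.0
      for _ in range(5):
          m = 0.5 * (a + b)
          xs = np.array([a, m, b])
          ys = np.array([fusion_quality(x) for x in xs])
          coeffs = np.polyfit(xs, ys, 2)                 # parabola through the 3 nodes
          vertex = -coeffs[1] / (2 * coeffs[0])          # its minimum
          # shrink the bracket around the vertex for the next pass
          a, b = max(a, vertex - 0.25 * (b - a)), min(b, vertex + 0.25 * (b - a))

      print(f"estimated optimal threshold: {0.5 * (a + b):.3f}")   # near 0.37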

  3. Tomographic gamma ray apparatus and method

    DOEpatents

    Anger, Hal O.

    1976-09-07

    This invention provides a radiation detecting apparatus for imaging the distribution of radioactive substances in a three-dimensional subject such as a medical patient. Radiating substances introduced into the subject are viewed by a radiation image detector that provides an image of the distribution of radiating sources within its field of view. By viewing the area of interest from two or more positions, as by scanning the detector over the area, the radiating sources seen by the detector have relative positions that are a function of their depth in the subject. The images seen by the detector are transformed into first output signals which are combined in a readout device with second output signals that indicate the position of the detector relative to the subject. The readout device adjusts the signals and provides multiple radiation distribution readouts of the subject, each readout comprising a sharply resolved picture that shows the distribution and intensity of radiating sources lying in a selected plane in the subject, while sources lying on other planes are blurred in that particular readout.

  4. Incorporating modern neuroscience findings to improve brain-computer interfaces: tracking auditory attention.

    PubMed

    Wronkiewicz, Mark; Larson, Eric; Lee, Adrian Kc

    2016-10-01

    Brain-computer interface (BCI) technology allows users to generate actions based solely on their brain signals. However, current non-invasive BCIs generally classify brain activity recorded from surface electroencephalography (EEG) electrodes, which can hinder the application of findings from modern neuroscience research. In this study, we use source imaging, a neuroimaging technique that projects EEG signals onto the surface of the brain, in a BCI classification framework. This allowed us to incorporate prior research from functional neuroimaging to target activity from a cortical region involved in auditory attention. Classifiers trained to detect attention switches performed better with source imaging projections than with EEG sensor signals. Within source imaging, including subject-specific anatomical MRI information (instead of using a generic head model) further improved classification performance. This source-based strategy also reduced accuracy variability across three dimensionality reduction techniques, a major design choice in most BCIs. Our work shows that source imaging provides clear quantitative and qualitative advantages to BCIs and highlights the value of incorporating modern neuroscience knowledge and methods into BCI systems.

  5. Light source distribution and scattering phase function influence light transport in diffuse multi-layered media

    NASA Astrophysics Data System (ADS)

    Vaudelle, Fabrice; L'Huillier, Jean-Pierre; Askoura, Mohamed Lamine

    2017-06-01

    Red and near-infrared light is often used as a diagnostic and imaging probe for highly scattering media such as biological tissues, fruits and vegetables. Part of the diffusively reflected light gives interesting information related to the tissue subsurface, whereas light recorded at further distances may probe deeper into the interrogated turbid tissues. However, modelling diffusive events occurring at short source-detector distances requires considering both the distribution of the light sources and the scattering phase functions. In this report, a modified Monte Carlo model is used to compute light transport in curved and multi-layered tissue samples which are covered with a thin and highly diffusing tissue layer. Different light source distributions (ballistic, diffuse, or Lambertian) are tested with specific scattering phase functions (modified or unmodified Henyey-Greenstein, Gegenbauer, and Mie) to compute the amount of backscattered and transmitted light in apple and human skin structures. Comparisons between simulation results and experiments carried out with a multispectral imaging setup confirm the soundness of the theoretical strategy and may explain the role of the skin in light transport in whole and half-cut apples. Other computational results show that a Lambertian source distribution combined with a Henyey-Greenstein phase function provides a higher photon density in the stratum corneum than in the upper dermis layer. Furthermore, it is also shown that the scattering phase function may affect the shape and the magnitude of the Bidirectional Reflectance Distribution Function (BRDF) exhibited at the skin surface.
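
    One concrete ingredient of such Monte Carlo models is sampling scattering angles from the Henyey-Greenstein phase function. The sketch below uses the standard inverse-CDF formula; the anisotropy value is illustrative, not taken from the paper.

      # Sketch: draw cos(theta) values from the Henyey-Greenstein phase function.
      import numpy as np

      def sample_hg_costheta(g, n, rng):
          """Sample n cos(theta) values for anisotropy factor g."""
          u = rng.uniform(size=n)
          if abs(g) < 1e-6:
              return 2.0 * u - 1.0                      # isotropic limit
          frac = (1.0 - g**2) / (1.0 - g + 2.0 * g * u)
          return (1.0 + g**2 - frac**2) / (2.0 * g)

      rng = np.random.default_rng(8)
      g = 0.9                                           # forward-peaked, tissue-like
      cos_theta = sample_hg_costheta(g, 100_000, rng)
      print("mean cos(theta):", cos_theta.mean().round(3), "(should be close to g)")

    The sample mean of cos(theta) approaches the anisotropy factor g, which is the defining property of the Henyey-Greenstein model.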

  6. Simulation of an active underwater imaging through a wavy sea surface

    NASA Astrophysics Data System (ADS)

    Gholami, Ali; Saghafifar, Hossein

    2018-06-01

    A numerical simulation of underwater imaging through a wavy sea surface has been performed. We use a common approach to model the sea surface elevation and its slopes as an important source of image disturbance. The simulation algorithm is based on a combination of ray tracing and optical propagation, which takes different approaches for downwelling and upwelling beams. The random focusing and defocusing of surface waves produces a fluctuating irradiance distribution that illuminates the immersed object, and it also strongly disturbs the image through a coordinate change of image pixels. We have also used a modulation transfer function based on Wells' small-angle approximation to account for the effect of the underwater optical properties on the transferred image. As expected, absorption reduces the light intensity and scattering decreases image contrast by blurring the image.

  7. Structured illumination diffuse optical tomography for noninvasive functional neuroimaging in mice.

    PubMed

    Reisman, Matthew D; Markow, Zachary E; Bauer, Adam Q; Culver, Joseph P

    2017-04-01

    Optical intrinsic signal (OIS) imaging has been a powerful tool for capturing functional brain hemodynamics in rodents. Recent wide field-of-view implementations of OIS have provided efficient maps of functional connectivity from spontaneous brain activity in mice. However, OIS requires scalp retraction and is limited to superficial cortical tissues. Diffuse optical tomography (DOT) techniques provide noninvasive imaging, but previous DOT systems for rodent neuroimaging have been limited either by sparse spatial sampling or by slow speed. Here, we develop a DOT system with asymmetric source-detector sampling that combines the high-density spatial sampling (0.4 mm) detection of a scientific complementary metal-oxide-semiconductor camera with the rapid (2 Hz) imaging of a few structured illumination (SI) patterns. Analysis techniques are developed to take advantage of the system's flexibility and optimize trade-offs among spatial sampling, imaging speed, and signal-to-noise ratio. An effective source-detector separation for the SI patterns was developed and compared with light intensity for a quantitative assessment of data quality. The light fall-off versus effective distance was also used for in situ empirical optimization of our light model. We demonstrated the feasibility of this technique by noninvasively mapping the functional response in the somatosensory cortex of the mouse following electrical stimulation of the forepaw.

  8. Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.

    NASA Astrophysics Data System (ADS)

    Dodd, Stirling Scott

    1995-01-01

    Previously a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid-filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, and assumes that the reflection coefficient at the truncation is unity. The scattering form functions, calculated using this method, are applied as filters to a narrow bandwidth, high ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s_o and a_o Lamb waves is vividly apparent in the images.

  9. Center determination for trailed sources in astronomical observation images

    NASA Astrophysics Data System (ADS)

    Du, Jun Ju; Hu, Shao Ming; Chen, Xu; Guo, Di Fu

    2014-11-01

    Images with trailed sources can be obtained when observing near-Earth objects, such as small asteroids, space debris, major planets and their satellites, whether the telescope tracks at sidereal rate or at the rate of the target. The low centering accuracy of these trailed sources is one of the most important sources of astrometric uncertainty, but how to determine the central positions of trailed sources accurately remains a significant challenge for image processing techniques, especially in the study of faint or fast moving objects. According to the conditions of the one-meter telescope at Weihai Observatory of Shandong University, moment and point-spread-function (PSF) fitting were chosen to develop the image processing pipeline for space debris. The principles and implementations of both methods are introduced in this paper, and simulated images containing trailed sources are analyzed with each technique. The results show that the two methods are comparable in obtaining accurate central positions of trailed sources when the signal-to-noise ratio (SNR) is high. However, the moment method tends to fail for objects with low SNR. Compared with the moment method, PSF fitting is more robust and versatile, although it is quite time-consuming. Therefore, if there are enough bright stars in the field, or high astrometric accuracy is not necessary, the moment method is sufficient. Otherwise, the combination of moment and PSF fitting is recommended.
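
    The two centering approaches can be contrasted on a toy point source. This is not the Weihai pipeline; the source parameters, noise level, and background handling are assumptions made for illustration.

      # Sketch: intensity-weighted moments versus a 2D Gaussian PSF fit.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(9)
      ny = nx = 25
      yy, xx = np.indices((ny, nx))
      true_x, true_y = 12.3, 11.6

      def gauss2d(coords, amp, x0, y0, sigma, bkg):
          x, y = coords
          return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + bkg).ravel()

      image = gauss2d((xx, yy), 50.0, true_x, true_y, 1.8, 5.0).reshape(ny, nx)
      image += rng.normal(0, 2.0, image.shape)           # low SNR to stress the methods

      # Moment (centroid) estimate on the background-subtracted image
      sub = np.clip(image - np.median(image), 0, None)
      x_mom = (sub * xx).sum() / sub.sum()
      y_mom = (sub * yy).sum() / sub.sum()

      # PSF-fitting estimate
      p0 = (image.max(), nx / 2, ny / 2, 2.0, np.median(image))
      popt, _ = curve_fit(gauss2d, (xx, yy), image.ravel(), p0=p0)

      print(f"moment centre:  ({x_mom:.2f}, {y_mom:.2f})")
      print(f"PSF-fit centre: ({popt[1]:.2f}, {popt[2]:.2f})  vs true ({true_x}, {true_y})")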

  10. Radiometric analysis of photographic data by the effective exposure method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantine, B J

    1972-04-01

    The effective exposure method provides for radiometric analysis of photographic data. A three-dimensional model, where density is a function of energy and wavelength, is postulated to represent the film response function. Calibration exposures serve to eliminate the other factors which affect image density. The effective exposure causing an image can be determined by comparing the image density with that of a calibration exposure. If the relative spectral distribution of the source is known, irradiance and/or radiance can be unfolded from the effective exposure expression.

  11. Resolving z ~2 galaxy using adaptive coadded source plane reconstruction

    NASA Astrophysics Data System (ADS)

    Sharma, Soniya; Richard, Johan; Kewley, Lisa; Yuan, Tiantian

    2018-06-01

    Natural magnification provided by gravitational lensing, coupled with integral field spectrographic (IFS) observations and adaptive optics (AO) imaging techniques, has become the frontier of spatially resolved studies of high redshift galaxies (z > 1). Mass models of gravitational lenses hold the key to understanding the spatially resolved source-plane (unlensed) physical properties of the background lensed galaxies. Lensing mass models very sensitively control the accuracy and precision of source-plane reconstructions of the observed lensed arcs. The effective source-plane resolution defined by the image-plane (observed) point spread function (PSF) makes it challenging to recover the unlensed (source-plane) surface brightness distribution. We conduct a detailed study to recover the source-plane physical properties of a z = 2 lensed galaxy using spatially resolved observations from two different multiple images of the lensed target. To deal with PSFs from two data sets on different multiple images of the galaxy, we employ a forward (source to image) approach to merge these independent observations. Using our novel technique, we are able to present a detailed analysis of the source-plane dynamics at scales much better than previously attainable through traditional image inversion methods. Moreover, our technique is adapted to magnification, thus allowing us to achieve higher resolution in highly magnified regions of the source. We find that this lensed system shows strong evidence of a minor merger. In my talk, I present this case study of a z = 2 lensed galaxy and also discuss the applications of our algorithm to study the plethora of lensed systems that will be available through future telescopes like JWST and GMT.

  12. Sodium 3D COncentration MApping (COMA 3D) using 23Na and proton MRI

    NASA Astrophysics Data System (ADS)

    Truong, Milton L.; Harrington, Michael G.; Schepkin, Victor D.; Chekmenev, Eduard Y.

    2014-10-01

    Functional changes of sodium 3D MRI signals were converted into millimolar concentration changes using an open-source, fully automated MATLAB toolbox. These concentration changes are visualized via 3D sodium concentration maps, and they are overlaid on conventional 3D proton images to provide high-resolution co-registration for easy correlation of functional changes to anatomical regions. Concentration maps were generated at a rate of nearly 5000 per hour on a personal computer (ca. 2012) using 21.1 T 3D sodium MRI brain images of live rats with a spatial resolution of 0.8 × 0.8 × 0.8 mm³ and imaging matrices of 60 × 60 × 60. The produced concentration maps allowed for non-invasive quantitative measurement of in vivo sodium concentration in the normal rat brain as a functional response to migraine-like conditions. The presented work can also be applied to sodium-associated changes in migraine, cancer, and other metabolic abnormalities that can be sensed by molecular imaging. The MATLAB toolbox allows for automated image analysis of the 3D images acquired on the Bruker platform and can be extended to other imaging platforms. The resulting images are presented in the form of a series of 2D slices in all three dimensions in native MATLAB and PDF formats. The following is provided: (a) MATLAB source code for image processing, (b) the detailed processing procedures, (c) description of the code and all sub-routines, (d) example data sets of initial and processed data. The toolbox can be downloaded at: http://www.vuiis.vanderbilt.edu/ truongm/COMA3D/.
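
    The toolbox itself is written in MATLAB, but the core conversion step, mapping sodium-image intensities to millimolar concentrations via a calibration against reference signals of known concentration, can be sketched in a few lines. The linear calibration below is an assumption for illustration; the actual COMA 3D calibration may differ.

    ```python
    import numpy as np

    def signal_to_concentration(sodium_img, ref_signal, ref_mM):
        """Map 23Na image intensities to millimolar concentrations.

        `ref_signal` and `ref_mM` are matched arrays of reference/phantom
        intensities and their known concentrations; a least-squares line
        through them (an assumption -- the toolbox's exact calibration may
        differ) converts every voxel to mM.
        """
        slope, intercept = np.polyfit(ref_signal, ref_mM, 1)
        return slope * sodium_img + intercept
    ```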

  13. Microlensing of Relativistic Knots in the Quasar HE 1104-1805 AB

    NASA Astrophysics Data System (ADS)

    Schechter, Paul L.; Udalski, A.; Szymański, M.; Kubiak, M.; Pietrzyński, G.; Soszyński, I.; Woźniak, P.; Żebruń, K.; Szewczyk, O.; Wyrzykowski, Ł.

    2003-02-01

    We present 3 years of photometry of the "Double Hamburger" lensed quasar, HE 1104-1805 AB, obtained on 102 separate nights using the Optical Gravitational Lensing Experiment 1.3 m telescope. Both the A and B images show variations, but with substantial differences in the light curves at all time delays. At the 310 day delay reported by Wisotzki and collaborators, the difference light curve has an rms amplitude of 0.060 mag. The structure functions for the A and B images are quite different, with image A more than twice as variable as image B (a factor of 4 in structure function) on timescales of less than a month. Adopting microlensing as a working hypothesis for the uncorrelated variability, the short timescale argues for the relativistic motion of one or more components of the source. We argue that the small amplitude of the fluctuations is due to the finite size of the source with respect to the microlenses.
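
    As an aside on the analysis, a first-order structure function of an unevenly sampled light curve can be computed as sketched below (one common convention: the rms magnitude difference binned in time lag; the paper's exact normalization is not specified here, so treat this as illustrative).

    ```python
    import numpy as np

    def structure_function(t, mag, lag_bins):
        """First-order structure function of an unevenly sampled light curve.

        Returns the rms magnitude difference in each lag bin (one common
        convention; other normalizations exist).
        """
        t = np.asarray(t, dtype=float)
        mag = np.asarray(mag, dtype=float)
        i, j = np.triu_indices(len(t), k=1)       # all unique epoch pairs
        lags = np.abs(t[j] - t[i])
        dmag2 = (mag[j] - mag[i]) ** 2
        sf = np.full(len(lag_bins) - 1, np.nan)
        for k in range(len(lag_bins) - 1):
            sel = (lags >= lag_bins[k]) & (lags < lag_bins[k + 1])
            if sel.any():
                sf[k] = np.sqrt(dmag2[sel].mean())
        return sf
    ```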

  14. Source Monitoring 15 Years Later: What Have We Learned from fMRI about the Neural Mechanisms of Source Memory?

    ERIC Educational Resources Information Center

    Mitchell, Karen J.; Johnson, Marcia K.

    2009-01-01

    Focusing primarily on functional magnetic resonance imaging (fMRI), this article reviews evidence regarding the roles of subregions of the medial temporal lobes, prefrontal cortex, posterior representational areas, and parietal cortex in source memory. In addition to evidence from standard episodic memory tasks assessing accuracy for neutral…

  15. The Chandra Source Catalog: X-ray Aperture Photometry

    NASA Astrophysics Data System (ADS)

    Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) represents a reanalysis of the entire ACIS and HRC imaging observations over the 9-year Chandra mission. We describe here the method by which fluxes are measured for detected sources. Source detection is carried out on a uniform basis, using the CIAO tool wavdetect. Source fluxes are estimated post-facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which can be either non-informative or, when previous observations of the same source exist, strongly informative. The current implementation is, however, limited to non-informative priors. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it.
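
    For intuition, the gamma prior is conjugate to the Poisson likelihood, so the posterior for a source rate is again a gamma distribution. The sketch below ignores background, PSF, and contamination effects and is only a toy version of the catalog's full Bayesian treatment; the names and default prior are assumptions.

    ```python
    from scipy.stats import gamma

    def posterior_rate(counts, exposure, alpha_prior=0.5, beta_prior=0.0):
        """Posterior on a Poisson source rate with a conjugate gamma prior.

        Toy version ignoring background and PSF effects: for observed counts
        n over exposure E and prior Gamma(alpha, beta), the posterior is
        Gamma(alpha + n, beta + E).
        """
        alpha_post = alpha_prior + counts
        beta_post = beta_prior + exposure
        post = gamma(a=alpha_post, scale=1.0 / beta_post)
        mode = (alpha_post - 1) / beta_post if alpha_post > 1 else 0.0
        lo, hi = post.interval(0.90)           # robust 90% credible range
        return mode, (lo, hi)
    ```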

  16. The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images

    NASA Astrophysics Data System (ADS)

    Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.

    2001-06-01

    We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.

  17. Thermal Image Sensing Model for Robotic Planning and Search.

    PubMed

    Castro Jiménez, Lídice E; Martínez-García, Edgar A

    2016-08-08

    This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost, home-made passive IR visual sensor whose capability for detecting radiation spectra was experimentally characterized. The sensor data were fitted by an exponential model to estimate distance as a function of the IR image's intensity, and by a polynomial model to estimate temperature as a function of IR intensity. The two models are combined to deduce an exact nonlinear distance-temperature solution. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source position in global coordinates, and it assists an autonomous navigation controller in reaching the goal while avoiding collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission: a sine function produces attractive accelerations toward the IR source, and a cosine function produces repulsive accelerations away from obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach.
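
    The two calibration models described (an exponential intensity-to-distance model and a polynomial intensity-to-temperature model) can be fitted from calibration data roughly as follows; the functional forms, initial guesses, and parameter names are assumptions for illustration, not the authors' exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fit_ir_models(intensity, distance, temperature, poly_deg=2):
        """Fit the two calibration models described in the abstract:
        distance ~ a*exp(b*I) and temperature ~ polynomial(I)."""
        expo = lambda I, a, b: a * np.exp(b * I)
        (a, b), _ = curve_fit(expo, intensity, distance, p0=(1.0, -0.01))
        temp_coeffs = np.polyfit(intensity, temperature, poly_deg)
        dist_model = lambda I: a * np.exp(b * I)   # distance from intensity
        temp_model = np.poly1d(temp_coeffs)        # temperature from intensity
        return dist_model, temp_model
    ```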

  18. Wavelet transform analysis of the small-scale X-ray structure of the cluster Abell 1367

    NASA Technical Reports Server (NTRS)

    Grebeney, S. A.; Forman, W.; Jones, C.; Murray, S.

    1995-01-01

    We have developed a new technique based on a wavelet transform analysis to quantify the small-scale (less than a few arcminutes) X-ray structure of clusters of galaxies. We apply this technique to the ROSAT position sensitive proportional counter (PSPC) and Einstein high-resolution imager (HRI) images of the central region of the cluster Abell 1367 to detect sources embedded within the diffuse intracluster medium. In addition to detecting sources and determining their fluxes and positions, we show that the wavelet analysis allows a characterization of the sources' extents. In particular, the wavelet scale at which a given source achieves a maximum signal-to-noise ratio in the wavelet images provides an estimate of the angular extent of the source. Accounting for the widely varying point response of the ROSAT PSPC as a function of off-axis angle requires a quantitative measurement of the source size and a comparison to a calibration derived from the analysis of a Deep Survey image. Therefore, we assumed that each source could be described by an isotropic two-dimensional Gaussian and used the wavelet amplitudes, at different scales, to determine the equivalent Gaussian Full Width at Half-Maximum (FWHM) (and its uncertainty) appropriate for each source. In our analysis of the ROSAT PSPC image, we detect 31 X-ray sources above the diffuse cluster emission (within a radius of 24 min), 16 of which are apparently associated with cluster galaxies and two with serendipitous background quasars. We find that the angular extents of 11 sources exceed the nominal width of the PSPC point-spread function. Four of these extended sources were previously detected by Bechtold et al. (1983) as 1 sec scale features using the Einstein HRI. The same wavelet analysis technique was applied to the Einstein HRI image. We detect 28 sources in the HRI image, of which nine are extended. Eight of the extended sources correspond to sources previously detected by Bechtold et al. Overall, using both the PSPC and the HRI observations, we detect 16 extended features, of which nine have galaxies coincident with the X-ray-measured positions (within the positional error circles). These extended sources have luminosities lying in the range (3 - 30) x 10^40 ergs/s and gas masses of approximately (1 - 30) x 10^9 solar masses, if the X-rays are of thermal origin. We confirm the presence of extended features in A1367 first reported by Bechtold et al. (1983). The nature of these systems remains uncertain. The luminosities are large if the emission is attributed to single galaxies, and several of the extended features have no associated galaxy counterparts. The extended features may be associated with galaxy groups, as suggested by Canizares, Fabbiano, & Trinchieri (1987), although the number required is large.

  19. Youpi: YOUr processing PIpeline

    NASA Astrophysics Data System (ADS)

    Monnerville, Mathias; Sémah, Gregory

    2012-03-01

    Youpi is a portable, easy to use web application providing high level functionalities to perform data reduction on scientific FITS images. Built on top of various open source reduction tools released to the community by TERAPIX (http://terapix.iap.fr), Youpi can help organize data, manage processing jobs on a computer cluster in real time (using Condor) and facilitate teamwork by allowing fine-grain sharing of results and data. Youpi is modular and comes with plugins which perform, from within a browser, various processing tasks such as evaluating the quality of incoming images (using the QualityFITS software package), computing astrometric and photometric solutions (using SCAMP), resampling and co-adding FITS images (using SWarp) and extracting sources and building source catalogues from astronomical images (using SExtractor). Youpi is useful for small to medium-sized data reduction projects; it is free and is published under the GNU General Public License.

  20. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied with Jupyter notebook tutorials illustrating the main functionalities of the library.

  1. Technical note: DIRART--A software suite for deformable image registration and adaptive radiotherapy research.

    PubMed

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A

    2011-01-01

    Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.

  2. Lens-based wavefront sensorless adaptive optics swept source OCT

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Lee, Sujin; Ju, Myeong Jin; Heisler, Morgan; Ding, Weiguang; Zawadzki, Robert J.; Bonora, Stefano; Sarunic, Marinko V.

    2016-06-01

    Optical coherence tomography (OCT) has revolutionized modern ophthalmology, providing depth resolved images of the retinal layers in a system that is suited to a clinical environment. Although the axial resolution of an OCT system, which is a function of the light source bandwidth, is sufficient to resolve retinal features at a micrometer scale, the lateral resolution is dependent on the delivery optics and is limited by ocular aberrations. Through the combination of wavefront sensorless adaptive optics and the use of dual deformable transmissive optical elements, we present a compact lens-based OCT system at an imaging wavelength of 1060 nm for high resolution retinal imaging. We utilized a commercially available variable focal length lens to correct for a wide range of defocus commonly found in patients' eyes, and a novel multi-actuator adaptive lens for aberration correction to achieve near diffraction-limited imaging performance at the retina. With a parallel processing computational platform, high resolution cross-sectional and en face retinal image acquisition and display were performed in real time. In order to demonstrate the system functionality and clinical utility, we present images of the photoreceptor cone mosaic and other retinal layers acquired in vivo from research subjects.

  3. A dual-modal retinal imaging system with adaptive optics.

    PubMed

    Meadway, Alexander; Girkin, Christopher A; Zhang, Yuhua

    2013-12-02

    An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated.

  4. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-04-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine the noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure. By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.

  5. Magnetoacoustic tomography with magnetic induction for high-resolution bioimpedance imaging through vector source reconstruction under the static field of MRI magnet

    PubMed Central

    Mariappan, Leo; Hu, Gang; He, Bin

    2014-01-01

    Purpose: Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on the acoustic measurements of Lorentz force induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. Methods: In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. Results: The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼1.5 mm spatial resolution at the 500 kHz ultrasound frequency of the imaging system. In addition, the experimental results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. Conclusions: The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction. PMID:24506649

  6. Multimodal and Multi-tissue Measures of Connectivity Revealed by Joint Independent Component Analysis.

    PubMed

    Franco, Alexandre R; Ling, Josef; Caprihan, Arvind; Calhoun, Vince D; Jung, Rex E; Heileman, Gregory L; Mayer, Andrew R

    2008-12-01

    The human brain functions as an efficient system where signals arising from gray matter are transported via white matter tracts to other regions of the brain to facilitate human behavior. However, with a few exceptions, functional and structural neuroimaging data are typically optimized to maximize the quantification of signals arising from a single source. For example, functional magnetic resonance imaging (FMRI) is typically used as an index of gray matter functioning whereas diffusion tensor imaging (DTI) is typically used to determine white matter properties. While it is likely that these signals arising from different tissue sources contain complementary information, the signal processing algorithms necessary for the fusion of neuroimaging data across imaging modalities are still in a nascent stage. In the current paper we present a data-driven method for combining measures of functional connectivity arising from gray matter sources (FMRI resting state data) with different measures of white matter connectivity (DTI). Specifically, a joint independent component analysis (J-ICA) was used to combine these measures of functional connectivity following intensive signal processing and feature extraction within each of the individual modalities. Our results indicate that one of the most predominantly used measures of functional connectivity (activity in the default mode network) is highly dependent on the integrity of white matter connections between the two hemispheres (corpus callosum) and within the cingulate bundles. Importantly, the discovery of this complex relationship of connectivity was entirely facilitated by the signal processing and fusion techniques presented herein and could not have been revealed through separate analyses of both data types as is typically performed in the majority of neuroimaging experiments. We conclude by discussing future applications of this technique to other areas of neuroimaging and examining potential limitations of the methods.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brewer, Brendon J.; Foreman-Mackey, Daniel; Hogg, David W., E-mail: bj.brewer@auckland.ac.nz

    We present and implement a probabilistic (Bayesian) method for producing catalogs from images of stellar fields. The method is capable of inferring the number of sources N in the image and can also handle the challenges introduced by noise, overlapping sources, and an unknown point-spread function. The luminosity function of the stars can also be inferred, even when the precise luminosity of each star is uncertain, via the use of a hierarchical Bayesian model. The computational feasibility of the method is demonstrated on two simulated images with different numbers of stars. We find that our method successfully recovers the input parameter values along with principled uncertainties even when the field is crowded. We also compare our results with those obtained from the SExtractor software. While the two approaches largely agree about the fluxes of the bright stars, the Bayesian approach provides more accurate inferences about the faint stars and the number of stars, particularly in the crowded case.

  8. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

    Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and also in conjunction with other modalities like CT or MRI. Three dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo exist today. Although these systems have been found to be useful in real clinical setting, it is difficult to augment their functionality and integrate them in versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (i.e. dual view visualization, registration, real-time tracking, segmentation, etc) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is fully being integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.

  9. Interferometric superlocalization of two incoherent optical point sources.

    PubMed

    Nair, Ranjith; Tsang, Mankei

    2016-02-22

    A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.

  10. OsiriX: an open-source software for navigating in multidimensional DICOM images.

    PubMed

    Rosset, Antoine; Spadola, Luca; Ratib, Osman

    2004-09-01

    Multidimensional image navigation and display software was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphics capabilities of the OpenGL standard, which is widely used in computer games and takes advantage of any available hardware graphics accelerator boards. In the design of the software special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device widely used in the video and movie industry was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program's toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.

  11. Development and validation of an open source quantification tool for DSC-MRI studies.

    PubMed

    Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J

    2015-03-01

    This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms to allow external developers to implement their own quantification methods easily and without the need of paying for a development license. This quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that the addition of new methods can be done without breaking any of the existing functionalities. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package. The resulting perfusion parameters were then compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, an excellent agreement with the tool used as a gold-standard was obtained (R² > 0.8 and values are within 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated using a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. WISE PHOTOMETRY FOR 400 MILLION SDSS SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Dustin; Hogg, David W.; Schlegel, David J., E-mail: dstndstn@gmail.com

    2016-02-15

    We present photometry of images from the Wide-Field Infrared Survey Explorer (WISE) of over 400 million sources detected by the Sloan Digital Sky Survey (SDSS). We use a “forced photometry” technique, using measured SDSS source positions, star–galaxy classification, and galaxy profiles to define the sources whose fluxes are to be measured in the WISE images. We perform photometry with The Tractor image modeling code, working on our “unWISE” coadds and taking account of the WISE point-spread function and a noise model. The result is a measurement of the flux of each SDSS source in each WISE band. Many sources have little flux in the WISE bands, so often the measurements we report are consistent with zero given our uncertainties. However, for many sources we get 3σ or 4σ measurements; these sources would not be reported by the “official” WISE pipeline and will not appear in the WISE catalog, yet they can be highly informative for some scientific questions. In addition, these small-signal measurements can be used in stacking analyses at the catalog level. The forced photometry approach has the advantage that we measure a consistent set of sources between SDSS and WISE, taking advantage of the resolution and depth of the SDSS images to interpret the WISE images; objects that are resolved in SDSS but blended together in WISE still have accurate measurements in our photometry. Our results, and the code used to produce them, are publicly available at http://unwise.me.
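
    For a single isolated source at a fixed, known position, forced photometry reduces to a weighted linear least-squares fit of a unit-flux PSF model to the pixel data. The sketch below shows only that one-source special case; The Tractor solves the general problem with many overlapping sources and galaxy profiles, so this is an illustration, not the pipeline's method.

    ```python
    import numpy as np

    def forced_flux(image, invvar, psf_model):
        """Weighted least-squares flux for one source at a fixed position.

        `psf_model` is the unit-flux PSF rendered at the externally measured
        source position on the same pixel grid as `image`; `invvar` is the
        per-pixel inverse variance. The estimator and its uncertainty follow
        from minimizing sum(invvar * (image - flux*psf)**2) over `flux`.
        """
        num = np.sum(invvar * psf_model * image)
        den = np.sum(invvar * psf_model ** 2)
        flux = num / den
        flux_err = 1.0 / np.sqrt(den)
        return flux, flux_err
    ```

    Because the fit is linear, faint objects naturally return low-significance fluxes that may be consistent with zero, as described in the record.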

  13. Integrated radiotherapy imaging system (IRIS): design considerations of tumour tracking with linac gantry-mounted diagnostic x-ray systems with flat-panel detectors

    NASA Astrophysics Data System (ADS)

    Berbeco, Ross I.; Jiang, Steve B.; Sharp, Gregory C.; Chen, George T. Y.; Mostafavi, Hassan; Shirato, Hiroki

    2004-01-01

    The design of an integrated radiotherapy imaging system (IRIS), consisting of gantry mounted diagnostic (kV) x-ray tubes and fast read-out flat-panel amorphous-silicon detectors, has been studied. The system is meant to be capable of three main functions: radiographs for three-dimensional (3D) patient set-up, cone-beam CT and real-time tumour/marker tracking. The goal of the current study is to determine whether one source/panel pair is sufficient for real-time tumour/marker tracking and, if two are needed, the optimal position of each relative to other components and the isocentre. A single gantry-mounted source/imager pair is certainly capable of the first two of the three functions listed above and may also be useful for the third, if combined with prior knowledge of the target's trajectory. This would be necessary because only motion in two dimensions is visible with a single imager/source system. However, with previously collected information about the trajectory, the third coordinate may be derived from the other two with sufficient accuracy to facilitate tracking. This deduction of the third coordinate can only be made if the 3D tumour/marker trajectory is consistent from fraction to fraction. The feasibility of tumour tracking with one source/imager pair has been theoretically examined here using measured lung marker trajectory data for seven patients from multiple treatment fractions. The patients' selection criteria include minimum mean amplitudes of the tumour motions greater than 1 cm peak-to-peak. The marker trajectory for each patient was modelled using the first fraction data. Then for the rest of the data, marker positions were derived from the imager projections at various gantry angles and compared with the measured tumour positions. Our results show that, due to the three dimensionality and irregular trajectory characteristics of tumour motion, on a fraction-to-fraction basis, a 'monoscopic' system (single source/imager) is inadequate for consistent real-time tumour tracking, even with prior knowledge. We found that, among the seven patients studied with peak-to-peak marker motion greater than 1 cm, five cases have mean localization errors greater than 2 mm and two have mean errors greater than 3 mm. Because of this uncertainty associated with a monoscopic system, two source/imager pairs are necessary for robust 3D target localization. Dual orthogonal x-ray source/imager pairs mounted on the linac gantry are chosen for the IRIS. We further studied the placement of the x-ray sources/panel based on the geometric specifications of the Varian 21EX Clinac. The best configuration minimizes the localization error while maintaining a large field of view and avoiding collisions with the floor/ceiling or couch.

  14. Multiple Auto-Adapting Color Balancing for Large Number of Images

    NASA Astrophysics Data System (ADS)

    Zhou, X.

    2015-04-01

    This paper presents a powerful technology for color balance between images. It works not only for small numbers of images but also for arbitrarily large numbers of images. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed based on the so-called adaptive dodging window. The adaptive target colors are statistically computed according to multiple target models. The gamma function is derived from the adaptive target and the adaptive source local statistics and is applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (or single color), color grid, and 1st, 2nd and 3rd order 2D polynomials. Least-squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are computed automatically based on all source images or based on an external target image. Some special objects such as water and snow are filtered by a percentage cut or a given mask. Excellent results are achieved. The performance is extremely fast and supports on-the-fly color balancing for large numbers of images (possibly hundreds of thousands of images). The detailed algorithm and formulae are described, and rich examples including big mosaic datasets (e.g., one containing 36,006 images) are given. The results show that this technology can be used successfully on various imagery to obtain color-seamless mosaics. The algorithm has been used successfully in ESRI ArcGIS.
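
    One simple way to realize a gamma adjustment that pulls a local source mean toward an adaptive target value is sketched below for intensities normalized to (0, 1]; the paper's exact gamma derivation is not given here, so this formula and the function names are assumptions for illustration.

    ```python
    import numpy as np

    def gamma_toward_target(source_mean, target_mean):
        """Gamma exponent mapping a normalized source mean onto a target mean.

        For intensities scaled to (0, 1], applying I**gamma with
        gamma = log(target_mean) / log(source_mean) moves the local mean of
        the source toward the adaptive target color.
        """
        return np.log(target_mean) / np.log(source_mean)

    def apply_gamma(image, gamma):
        # Clip to avoid 0**gamma issues before exponentiation.
        return np.clip(image, 1e-6, 1.0) ** gamma
    ```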

  15. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  16. Medical imaging systems

    DOEpatents

    Frangioni, John V [Wayland, MA

    2012-07-24

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remains in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may also employ dyes or other fluorescent substances associated with antibodies, antibody fragments, or ligands that accumulate within a region of diagnostic significance. In one embodiment, the system provides an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide that is used to capture images. In another embodiment, the system is configured for use in open surgical procedures by providing an operating area that is closed to ambient light. More broadly, the systems described herein may be used in imaging applications where a visible light image may be usefully supplemented by an image formed from fluorescent emissions from a fluorescent substance that marks areas of functional interest.

  17. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind techniques, this approach is new: for the first time, the coded-aperture processing is carried out independently of the point spread function of the image diagnostic system. This overcomes the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the diagnostic system. Based on the theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse and volume scattering, and the penumbral image was taken with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good result.

  18. Design of a new type synchronous focusing mechanism

    NASA Astrophysics Data System (ADS)

    Zhang, Jintao; Tan, Ruijun; Chen, Zhou; Zhang, Yongqi; Fu, Panlong; Qu, Yachen

    2018-05-01

    This work addresses a dual-channel telescopic imaging system composed of an infrared imaging system, a low-light-level imaging system and an image fusion module. In the fusion of low-light-level and infrared images, clear source images make it much easier to obtain high-definition fused images. When the target is imaged at distances from 15 m to infinity, focusing is needed to ensure the imaging quality of the dual-channel system; therefore, a new type of synchronous focusing mechanism is designed. The mechanism realizes the focusing function through synchronous translation of the imaging devices and mainly comprises a screw-nut structure, a shaft-hole fit structure and a spring-loaded steel-ball structure for eliminating clearance. Starting from the synchronous focusing function of the two imaging devices, the structural characteristics of the synchronous focusing mechanism are introduced in detail and the focusing range is analyzed. The experimental results show that the synchronous focusing mechanism has the advantages of an ingenious design, high focusing accuracy, and stable and reliable operation.

  19. MRI tools for assessment of microstructure and nephron function of the kidney.

    PubMed

    Xie, Luke; Bennett, Kevin M; Liu, Chunlei; Johnson, G Allan; Zhang, Jeff Lei; Lee, Vivian S

    2016-12-01

    MRI can provide excellent detail of renal structure and function. Recently, novel MR contrast mechanisms and imaging tools have been developed to evaluate microscopic kidney structures including the tubules and glomeruli. Quantitative MRI can assess local tubular function and is able to determine the concentrating mechanism of the kidney noninvasively in real time. Measuring single nephron function is now a near possibility. In parallel to advancing imaging techniques for kidney microstructure is a need to carefully understand the relationship between the local source of MRI contrast and the underlying physiological change. The development of these imaging markers can impact the accurate diagnosis and treatment of kidney disease. This study reviews the novel tools to examine kidney microstructure and local function and demonstrates the application of these methods in renal pathophysiology. Copyright © 2016 the American Physiological Society.

  20. Gallium nitride light sources for optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Goldberg, Graham R.; Ivanov, Pavlo; Ozaki, Nobuhiko; Childs, David T. D.; Groom, Kristian M.; Kennedy, Kenneth L.; Hogg, Richard A.

    2017-02-01

    The advent of optical coherence tomography (OCT) has permitted high-resolution, non-invasive, in vivo imaging of the eye, skin and other biological tissue. The axial resolution is limited by source bandwidth and central wavelength. With the growing demand for short wavelength imaging, super-continuum sources and non-linear fibre-based light sources have been demonstrated in tissue imaging applications exploiting the near-UV and visible spectrum. Whilst the potential of using gallium nitride devices has been identified, owing to the relative maturity of the laser technology, there have been limited reports on using such low-cost, robust devices in imaging systems. A GaN super-luminescent light emitting diode (SLED) was first reported in 2009, using tilted facets to suppress lasing, with the focus since on high power, low speckle and relatively low bandwidth applications. In this paper we discuss a method of producing a GaN based broadband source, including a passive absorber to suppress lasing. The merits of this passive absorber are then discussed with regard to broad-bandwidth applications, rather than power applications. For the first time in GaN devices, the performance of the light sources developed is assessed through the point spread function (PSF) (which describes an imaging system's response to a point source), calculated from the emission spectra. We show a sub-7 μm resolution is possible without the use of special epitaxial techniques, ultimately outlining the suitability of these short wavelength, broadband GaN devices for use in OCT applications.

  1. Subdiffraction incoherent optical imaging via spatial-mode demultiplexing: Semiclassical treatment

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2018-02-01

    I present a semiclassical analysis of a spatial-mode demultiplexing (SPADE) measurement scheme for far-field incoherent optical imaging under the effects of diffraction and photon shot noise. Building on previous results that assume two point sources or the Gaussian point-spread function, I generalize SPADE for a larger class of point-spread functions and evaluate its errors in estimating the moments of an arbitrary subdiffraction object. Compared with the limits to direct imaging set by the Cramér-Rao bounds, the results show that SPADE can offer far superior accuracy in estimating second- and higher-order moments.

  2. Breaking the acoustic diffraction barrier with localization optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. Luís.; Razansky, Daniel

    2018-02-01

    Diffraction causes blurring of high-resolution features in images and has traditionally been associated with the resolution limit in light microscopy and other imaging modalities. The resolution of an imaging system can generally be assessed via its point spread function, corresponding to the image acquired from a point source. However, the precision in determining the position of an isolated source can greatly exceed the diffraction limit. By combining the estimated positions of multiple sources, localization-based imaging has resulted in groundbreaking methods such as super-resolution fluorescence optical microscopy and has also enabled ultrasound imaging of microvascular structures with unprecedented spatial resolution in deep tissues. Herein, we introduce localization optoacoustic tomography (LOT) and discuss the prospects of using localization imaging principles in optoacoustic imaging. LOT was experimentally implemented by real-time imaging of flowing particles in 3D with a recently-developed volumetric optoacoustic tomography system. Provided the particles were separated by a distance larger than the diffraction-limited resolution, their individual locations could be accurately determined in each frame of the acquired image sequence and the localization image was formed by superimposing a set of points corresponding to the localized positions of the absorbers. The presented results demonstrate that LOT can significantly enhance the well-established advantages of optoacoustic imaging by breaking the acoustic diffraction barrier in deep tissues and mitigating artifacts due to limited-view tomographic acquisitions.
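
    The localization step described, detecting well-separated absorbers frame by frame and accumulating their positions on a finer grid, can be sketched as follows; thresholding plus intensity-weighted centroids is one simple choice, and all names and parameters here are illustrative rather than the authors' implementation.

    ```python
    import numpy as np
    from scipy import ndimage

    def localization_map(frames, threshold, out_shape, upsample=4):
        """Accumulate sub-pixel positions of isolated absorbers over frames.

        Each frame is thresholded, connected blobs are labelled, and their
        intensity-weighted centroids are histogrammed on a finer grid.
        Assumes absorbers are separated by more than the diffraction-limited
        PSF width within any single frame.
        """
        ny, nx = out_shape
        loc_map = np.zeros((ny * upsample, nx * upsample))
        for frame in frames:
            labels, n = ndimage.label(frame > threshold)
            if n == 0:
                continue
            centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
            for cy, cx in centroids:
                iy = int(round(cy * upsample))
                ix = int(round(cx * upsample))
                if 0 <= iy < loc_map.shape[0] and 0 <= ix < loc_map.shape[1]:
                    loc_map[iy, ix] += 1
        return loc_map
    ```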

  3. How to COAAD Images. I. Optimal Source Detection and Photometry of Point Sources Using Ensembles of Images

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.

    2017-02-01

    Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition will result in loss of sensitivity. We argue that our method provides an increase of between a few percent and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method, which is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
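
    The core recipe, cross-correlating each image with its own PSF and only then summing with weights, can be sketched as below. Weighting each filtered image by its inverse background variance is a simplification of the full flux-zero-point-dependent weights derived in the paper, so this sketch is illustrative rather than a faithful reimplementation.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def matched_filter_coadd(images, psfs, variances):
        """Cross-correlate each image with its own PSF, then sum with weights.

        `images` and `psfs` are lists of 2-D arrays (one PSF per image);
        `variances` are the (assumed uniform) background variances of the
        individual images. Weighting by 1/variance is a simplification of
        the full flux-based weights.
        """
        coadd = np.zeros_like(images[0], dtype=float)
        for img, psf, var in zip(images, psfs, variances):
            # Cross-correlation equals convolution with the flipped PSF.
            filtered = fftconvolve(img, psf[::-1, ::-1], mode="same")
            coadd += filtered / var
        return coadd
    ```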

  4. Time encoded radiation imaging

    DOEpatents

    Marleau, Peter; Brubaker, Erik; Kiff, Scott

    2014-10-21

    The various technologies presented herein relate to detecting nuclear material at a large stand-off distance. An imaging system is presented which can detect nuclear material by utilizing time encoded imaging relating to maximum and minimum radiation particle count rates. The imaging system is integrated with a data acquisition system that can utilize variations in photon pulse shape to discriminate between neutron and gamma-ray interactions. Modulation in the detected neutron count rates as a function of the angular orientation of the detector, due to attenuation by neighboring detectors, is utilized to reconstruct the neutron source distribution over 360 degrees around the imaging system. Neutrons (e.g., fast neutrons) and/or gamma-rays are incident upon scintillation material in the imager; the photons generated by the scintillation material are converted to electrical energy from which the respective neutrons/gamma rays can be determined and, accordingly, a direction to, and the location of, a radiation source identified.

  5. Propagation-based phase-contrast x-ray tomography of cochlea using a compact synchrotron source.

    PubMed

    Töpperwien, Mareike; Gradl, Regine; Keppeler, Daniel; Vassholz, Malte; Meyer, Alexander; Hessler, Roland; Achterhold, Klaus; Gleich, Bernhard; Dierolf, Martin; Pfeiffer, Franz; Moser, Tobias; Salditt, Tim

    2018-03-21

    We demonstrate that phase retrieval and tomographic imaging at the organ level of small animals can be advantageously carried out using the monochromatic radiation emitted by a compact x-ray light source, without further optical elements apart from source and detector. This approach makes it possible to carry out microtomography experiments which, due to the large performance gap with respect to conventional laboratory instruments, have so far usually been limited to synchrotron sources. We demonstrate the potential by mapping the functional soft tissue within the guinea pig and marmoset cochlea, including in the latter case an electrical cochlear implant. We show how 3D microanatomical studies without dissection or microscopic imaging can enhance future research on cochlear implants.

  6. P- and S-wave Receiver Function Imaging with Scattering Kernels

    NASA Astrophysics Data System (ADS)

    Hansen, S. M.; Schmandt, B.

    2017-12-01

    Full waveform inversion provides a flexible approach to the seismic parameter estimation problem and can account for the full physics of wave propagation using numeric simulations. However, this approach requires significant computational resources due to the demanding nature of solving the forward and adjoint problems. This issue is particularly acute for temporary passive-source seismic experiments (e.g. PASSCAL) that have traditionally relied on teleseismic earthquakes as sources resulting in a global scale forward problem. Various approximation strategies have been proposed to reduce the computational burden such as hybrid methods that embed a heterogeneous regional scale model in a 1D global model. In this study, we focus specifically on the problem of scattered wave imaging (migration) using both P- and S-wave receiver function data. The proposed method relies on body-wave scattering kernels that are derived from the adjoint data sensitivity kernels which are typically used for full waveform inversion. The forward problem is approximated using ray theory yielding a computationally efficient imaging algorithm that can resolve dipping and discontinuous velocity interfaces in 3D. From the imaging perspective, this approach is closely related to elastic reverse time migration. An energy stable finite-difference method is used to simulate elastic wave propagation in a 2D hypothetical subduction zone model. The resulting synthetic P- and S-wave receiver function datasets are used to validate the imaging method. The kernel images are compared with those generated by the Generalized Radon Transform (GRT) and Common Conversion Point stacking (CCP) methods. These results demonstrate the potential of the kernel imaging approach to constrain lithospheric structure in complex geologic environments with sufficiently dense recordings of teleseismic data. This is demonstrated using a receiver function dataset from the Central California Seismic Experiment which shows several dipping interfaces related to the tectonic assembly of this region. Figure 1. Scattering kernel examples for three receiver function phases. A) direct P-to-s (Ps), B) direct S-to-p and C) free-surface PP-to-s (PPs).

  7. A microwave imaging-based 3D localization algorithm for an in-body RF source as in wireless capsule endoscopes.

    PubMed

    Chandra, Rohit; Balasingham, Ilangko

    2015-01-01

    A microwave imaging-based technique for 3D localization of an in-body RF source is presented. Such a technique can be useful for localizing an RF source, as in wireless capsule endoscopes, for positioning of any abnormality in the gastrointestinal tract. Microwave imaging is used to determine the dielectric properties (relative permittivity and conductivity) of the tissues that are required for precise localization. A 2D microwave imaging algorithm is used to determine the dielectric properties. A calibration method is developed to remove errors arising from applying the 2D imaging algorithm to data from a 3D body. The developed method is tested on a simple 3D heterogeneous phantom through finite-difference time-domain simulations. Additive white Gaussian noise at a signal-to-noise ratio of 30 dB is added to the simulated data to make them more realistic. The developed calibration method improves the imaging and the localization accuracy. Statistics on the localization accuracy are generated by randomly placing the RF source at various positions inside the small intestine of the phantom. The cumulative distribution function of the localization error is plotted. In 90% of the cases, the localization accuracy was found to be within 1.67 cm, showing the capability of the developed method for 3D localization.
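    The reported 90% figure comes from the empirical cumulative distribution function of the localization error. A minimal sketch of that evaluation step, run on synthetic positions rather than the paper's data, is:

```python
import numpy as np

def localization_error_cdf(estimated, true):
    """Empirical CDF of 3D localization error.

    estimated, true : (N, 3) arrays of positions in cm.
    Returns sorted errors and their cumulative probabilities.
    """
    err = np.linalg.norm(estimated - true, axis=1)
    err_sorted = np.sort(err)
    cdf = np.arange(1, len(err) + 1) / len(err)
    return err_sorted, cdf

# Synthetic example (not the paper's data): report the 90th-percentile error
rng = np.random.default_rng(1)
true_pos = rng.uniform(0, 10, size=(200, 3))
est_pos = true_pos + rng.normal(0, 0.8, size=(200, 3))
errs, cdf = localization_error_cdf(est_pos, true_pos)
print("90% of cases within", round(np.interp(0.9, cdf, errs), 2), "cm")
```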

  8. Technical Note: DIRART – A software suite for deformable image registration and adaptive radiotherapy research

    PubMed Central

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.

    2011-01-01

    Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse-consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for visualization, evaluation, and validation of DIR results. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. PMID:21361176

  9. Imaging the square of the correlated two-electron wave function of a hydrogen molecule

    DOE PAGES

    Waitz, M.; Bello, R. Y.; Metz, D.; ...

    2017-12-22

    The toolbox for imaging molecules is well-equipped today. Some techniques visualize the geometrical structure, others the electron density or electron orbitals. Molecules are many-body systems for which the correlation between the constituents is decisive and the spatial and the momentum distribution of one electron depends on those of the other electrons and the nuclei. Such correlations have escaped direct observation by imaging techniques so far. Here, we implement an imaging scheme which visualizes correlations between electrons by coincident detection of the reaction fragments after high energy photofragmentation. With this technique, we examine the H2 two-electron wave function in which electron-electron correlation beyond the mean-field level is prominent. We visualize the dependence of the wave function on the internuclear distance. High energy photoelectrons are shown to be a powerful tool for molecular imaging. Finally, our study paves the way for future time resolved correlation imaging at FELs and laser based X-ray sources.

  10. Imaging the square of the correlated two-electron wave function of a hydrogen molecule.

    PubMed

    Waitz, M; Bello, R Y; Metz, D; Lower, J; Trinter, F; Schober, C; Keiling, M; Lenz, U; Pitzer, M; Mertens, K; Martins, M; Viefhaus, J; Klumpp, S; Weber, T; Schmidt, L Ph H; Williams, J B; Schöffler, M S; Serov, V V; Kheifets, A S; Argenti, L; Palacios, A; Martín, F; Jahnke, T; Dörner, R

    2017-12-22

    The toolbox for imaging molecules is well-equipped today. Some techniques visualize the geometrical structure, others the electron density or electron orbitals. Molecules are many-body systems for which the correlation between the constituents is decisive and the spatial and the momentum distribution of one electron depends on those of the other electrons and the nuclei. Such correlations have escaped direct observation by imaging techniques so far. Here, we implement an imaging scheme which visualizes correlations between electrons by coincident detection of the reaction fragments after high energy photofragmentation. With this technique, we examine the H2 two-electron wave function in which electron-electron correlation beyond the mean-field level is prominent. We visualize the dependence of the wave function on the internuclear distance. High energy photoelectrons are shown to be a powerful tool for molecular imaging. Our study paves the way for future time resolved correlation imaging at FELs and laser based X-ray sources.

  11. Imaging the square of the correlated two-electron wave function of a hydrogen molecule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waitz, M.; Bello, R. Y.; Metz, D.

    The toolbox for imaging molecules is well-equipped today. Some techniques visualize the geometrical structure, others the electron density or electron orbitals. Molecules are many-body systems for which the correlation between the constituents is decisive and the spatial and the momentum distribution of one electron depends on those of the other electrons and the nuclei. Such correlations have escaped direct observation by imaging techniques so far. Here, we implement an imaging scheme which visualizes correlations between electrons by coincident detection of the reaction fragments after high energy photofragmentation. With this technique, we examine the H2 two-electron wave function in which electron-electron correlation beyond the mean-field level is prominent. We visualize the dependence of the wave function on the internuclear distance. High energy photoelectrons are shown to be a powerful tool for molecular imaging. Finally, our study paves the way for future time resolved correlation imaging at FELs and laser based X-ray sources.

  12. Development and Characterization of a High-Energy Neutron Time-of-Flight Imaging System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madden, Amanda Christine; Schirato, Richard C.; Swift, Alicia L.

    Los Alamos National Laboratory has developed a prototype of a high-energy neutron time-of-flight imaging system for the non-destructive evaluation of dense, massive, and/or high-atomic-number objects. High-energy neutrons provide the penetrating power, and thus the high dynamic range, necessary to image internal features and defects of such objects. The addition of a time-gating capability allows for scatter rejection when paired with a pulsed monoenergetic beam, or neutron energy selection when paired with a pulsed broad-spectrum neutron source. The Time Gating to Reject Scatter and Select Energy (TiGReSSE) system was tested at the Los Alamos Neutron Science Center's (LANSCE) Weapons Neutron Research (WNR) facility, a spallation neutron source, to provide proof-of-concept measurements and to characterize the instrument response. This paper shows results for several objects imaged during this run cycle. In addition, system performance metrics such as the modulation transfer function and the detective quantum efficiency, measured as a function of neutron energy, characterize the current system performance and inform the next generation of neutron imaging instruments.
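    The energy selection described above rests on the time-of-flight relation between arrival time and neutron kinetic energy. A minimal sketch is given below; the flight-path length and gate times are assumptions, and the non-relativistic form is adequate only at the lower end of the energy range produced by a spallation source.

```python
# Non-relativistic neutron time-of-flight to energy conversion: a time gate
# applied to the detector selects a neutron energy band for a known flight
# path. The flight-path length below is illustrative, not the WNR value.
NEUTRON_MASS = 1.674927e-27      # kg
EV = 1.602177e-19                # J per eV

def tof_to_energy_mev(flight_path_m, tof_s):
    """Kinetic energy (MeV) of a neutron arriving after tof_s seconds."""
    v = flight_path_m / tof_s
    return 0.5 * NEUTRON_MASS * v**2 / EV / 1e6

# A gate from 600 ns to 800 ns on an assumed 20 m flight path selects roughly:
print(tof_to_energy_mev(20.0, 800e-9), "to", tof_to_energy_mev(20.0, 600e-9), "MeV")
```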

  13. Development and Characterization of a High-Energy Neutron Time-of-Flight Imaging System

    DOE PAGES

    Madden, Amanda Christine; Schirato, Richard C.; Swift, Alicia L.; ...

    2017-02-09

    Los Alamos National Laboratory has developed a prototype of a high-energy neutron time-of-flight imaging system for the non-destructive evaluation of dense, massive, and/or high-atomic-number objects. High-energy neutrons provide the penetrating power, and thus the high dynamic range, necessary to image internal features and defects of such objects. The addition of a time-gating capability allows for scatter rejection when paired with a pulsed monoenergetic beam, or neutron energy selection when paired with a pulsed broad-spectrum neutron source. The Time Gating to Reject Scatter and Select Energy (TiGReSSE) system was tested at the Los Alamos Neutron Science Center's (LANSCE) Weapons Neutron Research (WNR) facility, a spallation neutron source, to provide proof-of-concept measurements and to characterize the instrument response. This paper shows results for several objects imaged during this run cycle. In addition, system performance metrics such as the modulation transfer function and the detective quantum efficiency, measured as a function of neutron energy, characterize the current system performance and inform the next generation of neutron imaging instruments.

  14. Design and validation of Segment--freely available software for cardiovascular image analysis.

    PubMed

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-11

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.

  15. Control Software for Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.

    2006-01-01

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.

  16. Multi-modal molecular diffuse optical tomography system for small animal imaging

    PubMed Central

    Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid

    2013-01-01

    A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977

  17. sfDM: Open-Source Software for Temporal Analysis and Visualization of Brain Tumor Diffusion MR Using Serial Functional Diffusion Mapping.

    PubMed

    Ceschin, Rafael; Panigrahy, Ashok; Gopalakrishnan, Vanathi

    2015-01-01

    A major challenge in the diagnosis and treatment of brain tumors is tissue heterogeneity leading to mixed treatment response. Additionally, these tumors are often difficult or very high risk to biopsy, further hindering the clinical management process. To overcome this, novel advanced imaging methods are increasingly being adapted clinically to identify useful noninvasive biomarkers capable of disease stage characterization and treatment response prediction. One promising technique is called functional diffusion mapping (fDM), which uses diffusion-weighted imaging (DWI) to generate parametric maps between two imaging time points in order to identify significant voxel-wise changes in water diffusion within the tumor tissue. Here we introduce serial functional diffusion mapping (sfDM), an extension of existing fDM methods, to analyze the entire tumor diffusion profile along the temporal course of the disease. sfDM provides the tools necessary to analyze a tumor data set in the context of spatiotemporal parametric mapping: the image registration pipeline, biomarker extraction, and visualization tools. We present the general workflow of the pipeline, along with a typical use case for the software. sfDM is written in Python and is freely available as an open-source package under the Berkeley Software Distribution (BSD) license to promote transparency and reproducibility.
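    The core fDM operation is a voxel-wise comparison of co-registered ADC maps from two time points. A minimal sketch, with an illustrative threshold rather than the test-retest-derived value used in published fDM work, is:

```python
import numpy as np

def functional_diffusion_map(adc_baseline, adc_followup, tumor_mask,
                             threshold=0.4e-3):
    """Voxel-wise fDM classification between two co-registered ADC maps.

    ADC values in mm^2/s; tumor_mask is a boolean array. The threshold
    (0.4e-3 mm^2/s here) is purely illustrative; published fDM studies
    derive it from test-retest data.
    Returns an int map: +1 increased, -1 decreased, 0 stable diffusion.
    """
    delta = adc_followup - adc_baseline
    fdm = np.zeros_like(delta, dtype=int)
    fdm[(delta > threshold) & tumor_mask] = 1
    fdm[(delta < -threshold) & tumor_mask] = -1
    return fdm

# Fractional tumor volume with increased diffusion (a common fDM biomarker):
# fdm = functional_diffusion_map(adc0, adc1, mask)
# fiV = (fdm == 1).sum() / mask.sum()
```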

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apte, A; Veeraraghavan, H; Oh, J

    Purpose: To present an open-source, free platform to facilitate radiomics research: the “Radiomics toolbox” in CERR. Method: There is a scarcity of open-source tools that support end-to-end modeling of image features to predict patient outcomes. The “Radiomics toolbox” strives to fill the need for such a software platform. The platform supports (1) import of various image modalities such as CT, PET, MR, SPECT, and US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features such as first-order statistics, gray-level co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality, including univariate correlations and Kaplan-Meier curves, and advanced functionality, including feature reduction and multivariate modeling. The graphical user interface and data management are implemented in MATLAB for ease of development and readability of code for a wide audience. Open-source software developed in other programming languages is integrated to enhance various components of the toolbox; for example, the Java-based DCM4CHE is used for DICOM import and R for statistical analysis. Results: The Radiomics toolbox will be distributed as open-source software under the GNU license. The toolbox was prototyped by modeling an oropharyngeal PET dataset at MSKCC; that analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the “Computational Environment for Radiotherapy Research” to the “Computational Environment for Radiological Research”.
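    As an illustration of the feature-extraction step, the sketch below computes a few generic first-order statistics inside a delineated structure; the function and feature names are examples only, not the toolbox's API.

```python
import numpy as np

def first_order_features(image, mask):
    """First-order intensity statistics inside a delineated structure.

    Generic examples of the kind of features a radiomics platform
    extracts; the feature names and the 64-bin histogram are assumptions.
    """
    vals = image[mask > 0].astype(float)
    mean, std = vals.mean(), vals.std()
    skewness = ((vals - mean) ** 3).mean() / (std ** 3 + 1e-12)
    hist, _ = np.histogram(vals, bins=64)
    p = hist[hist > 0] / vals.size
    entropy = -(p * np.log2(p)).sum()
    return {"mean": mean, "std": std, "skewness": skewness, "entropy": entropy}

# features = first_order_features(ct_volume, tumor_mask)
```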

  19. The Chandra Source Catalog: X-ray Aperture Photometry

    NASA Astrophysics Data System (ADS)

    Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog represents a reanalysis of the entire set of ACIS and HRC imaging observations over the 9-year Chandra mission. Source detection is carried out on a uniform basis, using the CIAO tool wavdetect, and source fluxes are estimated post-facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which can be either non-informative or, where previous observations of the same source exist, strongly informative. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it. We also determine limiting sensitivities at arbitrary locations in the field using the same formulation. This work was supported by CXC NASA contracts NAS8-39073 (VK) and NAS8-03060 (CSC).
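    The conjugate Poisson-gamma update that underlies this kind of aperture photometry can be sketched as follows; the treatment of the background and the prior parameters here are simplifications and assumptions, not the catalog's actual formulation, which marginalizes over background and accounts for PSF aperture fractions.

```python
from scipy.stats import gamma

def flux_credible_interval(src_counts, bkg_rate, exposure,
                           prior_shape=1.0, prior_rate=1e-6, cl=0.68):
    """Posterior credible interval for a Poisson source rate with a
    gamma prior (conjugate update).

    The background rate is treated as known and simply subtracted from
    the quantiles of the total rate, which is only a sketch of the real
    procedure. Prior parameters are illustrative (nearly non-informative).
    """
    post_shape = prior_shape + src_counts
    post_rate = prior_rate + exposure
    post = gamma(a=post_shape, scale=1.0 / post_rate)
    lo, hi = post.ppf([(1 - cl) / 2, (1 + cl) / 2])
    return max(lo - bkg_rate, 0.0), max(hi - bkg_rate, 0.0)

print(flux_credible_interval(src_counts=25, bkg_rate=0.002, exposure=5000.0))
```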

  20. Development of a low background test facility for the SPICA-SAFARI on-ground calibration

    NASA Astrophysics Data System (ADS)

    Dieleman, P.; Laauwen, W. M.; Ferrari, L.; Ferlet, M.; Vandenbussche, B.; Meinsma, L.; Huisman, R.

    2012-09-01

    SAFARI is a far-infrared camera to be launched in 2021 onboard the SPICA satellite. SAFARI offers imaging spectroscopy and imaging photometry in the wavelength range of 34 to 210 μm with a detector NEP of 2 × 10^-19 W/√Hz. A cryogenic test facility for SAFARI on-ground calibration and characterization is being developed. The main design driver is the required low background of a few attowatts per pixel. This prohibits optical access to room temperature, and hence all test equipment needs to be inside the cryostat at 4.5 K. The instrument parameters to be verified are the interfaces with the SPICA satellite, sensitivity, alignment, image quality, spectral response, frequency calibration, and point spread function. The instrument sensitivity is calibrated by a calibration source providing a spatially homogeneous signal at the attowatt level. This low light intensity is achieved by geometrical dilution of a 150 K source into an integrating sphere. The beam quality and point spread function are measured using a pinhole/mask plate wheel, back-illuminated by a second integrating sphere. This sphere is fed by a stable wide-band source, providing spectral lines via a cryogenic etalon.

  1. Standoff Mid-Infrared Emissive Imaging Spectroscopy for Identification and Mapping of Materials in Polychrome Objects.

    PubMed

    Gabrieli, Francesca; Dooley, Kathryn A; Zeibel, Jason G; Howe, James D; Delaney, John K

    2018-06-18

    Microscale mid-infrared (mid-IR) imaging spectroscopy is used for the mapping of chemical functional groups. The extension to macroscale imaging requires that either the mid-IR radiation reflected off or that emitted by the object be greater than the radiation from the thermal background. Reflectance spectra can be obtained using an active IR source to increase the amount of radiation reflected off the object, but rapid heating of greater than 4 °C can occur, which is a problem for paintings. Rather than using an active source, by placing a highly reflective tube between the painting and camera and introducing a low temperature source, thermal radiation from the room can be reduced, allowing the IR radiation emitted by the painting to dominate. Thus, emissivity spectra of the object can be recovered. Using this technique, mid-IR emissivity image cubes of paintings were collected at high collection rates with a low-noise, line-scanning imaging spectrometer, which allowed pigments and paint binders to be identified and mapped. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  3. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.

  4. SIMA: Python software for analysis of dynamic fluorescence imaging data.

    PubMed

    Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.

  5. Simple and cost-effective hardware and software for functional brain mapping using intrinsic optical signal imaging.

    PubMed

    Harrison, Thomas C; Sigler, Albrecht; Murphy, Timothy H

    2009-09-15

    We describe a simple and low-cost system for intrinsic optical signal (IOS) imaging using stable LED light sources, basic microscopes, and commonly available CCD cameras. IOS imaging measures activity-dependent changes in the light reflectance of brain tissue, and can be performed with a minimum of specialized equipment. Our system uses LED ring lights that can be mounted on standard microscope objectives or video lenses to provide a homogeneous and stable light source, with less than 0.003% fluctuation across images averaged from 40 trials. We describe the equipment and surgical techniques necessary for both acute and chronic mouse preparations, and provide software that can create maps of sensory representations from images captured by inexpensive 8-bit cameras or by 12-bit cameras. The IOS imaging system can be adapted to commercial upright microscopes or custom macroscopes, eliminating the need for dedicated equipment or complex optical paths. This method can be combined with parallel high resolution imaging techniques such as two-photon microscopy.
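    IOS maps of this kind are typically the trial-averaged fractional change in reflectance between post-stimulus and baseline frames. A minimal sketch, with illustrative frame windows rather than values from the paper, is:

```python
import numpy as np

def intrinsic_signal_map(frames, stim_onset, baseline_len=10, response_len=10):
    """Trial-averaged fractional reflectance change (dR/R).

    frames : array of shape (n_trials, n_frames, ny, nx)
    The frame windows are illustrative; real experiments choose them from
    the stimulus timing and the hemodynamic delay.
    """
    baseline = frames[:, stim_onset - baseline_len:stim_onset].mean(axis=1)
    response = frames[:, stim_onset:stim_onset + response_len].mean(axis=1)
    dr_over_r = (response - baseline) / baseline
    return dr_over_r.mean(axis=0)   # average across trials

# sensory_map = intrinsic_signal_map(frames, stim_onset=40)
```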

  6. Mass spectrometry detection and imaging of inorganic and organic explosive device signatures using desorption electro-flow focusing ionization.

    PubMed

    Forbes, Thomas P; Sisco, Edward

    2014-08-05

    We demonstrate the coupling of desorption electro-flow focusing ionization (DEFFI) with in-source collision induced dissociation (CID) for the mass spectrometric (MS) detection and imaging of explosive device components, including both inorganic and organic explosives and energetic materials. We utilize in-source CID to enhance ion collisions with atmospheric gas, thereby reducing adducts and minimizing organic contaminants. Optimization of the MS signal response as a function of in-source CID potential demonstrated contrasting trends for the detection of inorganic and organic explosive device components. DEFFI-MS and in-source CID enabled isotopic and molecular speciation of inorganic components, providing further physicochemical information. The developed system facilitated the direct detection and chemical mapping of trace analytes collected with Nomex swabs and spatially resolved distributions within artificial fingerprints from forensic lift tape. The results presented here provide the forensic and security sectors a powerful tool for the detection, chemical imaging, and inorganic speciation of explosives device signatures.

  7. Automated Adaptive Brightness in Wireless Capsule Endoscopy Using Image Segmentation and Sigmoid Function.

    PubMed

    Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A

    2016-08-01

    Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis from endoscopic images depends heavily on the quality of the captured images. Along with frame rate, the brightness of the image is an important parameter influencing image quality, which motivates the design of an efficient illumination system. Such a design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light-emitting diodes (LEDs) are normally used as sources, with modulated pulses controlling the LED brightness. In practice, under- and over-illumination are very common in WCE; the former produces dark images and the latter produces bright images with high power consumption. In this paper, we propose a low-power, efficient illumination system based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. Each captured image is segmented into four equal regions and the brightness level of each region is calculated. An adaptive sigmoid function is then used to find the optimized brightness level, and accordingly a new duty cycle of the modulated pulse is generated to capture future images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules such as Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm controls the brightness level according to the environmental conditions; as a result, good-quality images are captured at an average brightness level of 40%, which reduces the power consumption of the capsule.
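    The abstract describes the control loop but not its constants. The sketch below illustrates the idea of mapping quadrant brightness through a sigmoid to a new LED duty cycle; the target, gain, and duty-cycle limits are invented for illustration and are not the published values.

```python
import numpy as np

def next_duty_cycle(image, target=128.0, gain=0.05,
                    min_duty=0.05, max_duty=1.0):
    """Adaptive LED duty cycle from the brightness of four image quadrants.

    The image is split into four equal regions; their mean brightness is
    combined and mapped through a sigmoid so that dark frames push the
    duty cycle up and bright frames push it down. All constants are
    illustrative.
    """
    h, w = image.shape[:2]
    quads = [image[:h//2, :w//2], image[:h//2, w//2:],
             image[h//2:, :w//2], image[h//2:, w//2:]]
    brightness = np.mean([q.mean() for q in quads])
    # Sigmoid of the brightness error: equals 0.5 at the target brightness
    s = 1.0 / (1.0 + np.exp(gain * (brightness - target)))
    return min_duty + (max_duty - min_duty) * s

# A dark frame yields a high duty cycle, a bright frame a low one:
print(next_duty_cycle(np.full((480, 480), 40.0)),
      next_duty_cycle(np.full((480, 480), 220.0)))
```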

  8. PLUS: open-source toolkit for ultrasound-guided intervention systems.

    PubMed

    Lasso, Andras; Heffter, Tamas; Rankin, Adam; Pinter, Csaba; Ungi, Tamas; Fichtinger, Gabor

    2014-10-01

    A variety of advanced image analysis methods have been under development for ultrasound-guided interventions. Unfortunately, the transition from an image analysis algorithm to clinical feasibility trials as part of an intervention system requires integration of many components, such as imaging and tracking devices, data processing algorithms, and visualization software. The objective of our paper is to provide a freely available open-source software platform, PLUS (Public software Library for Ultrasound), to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research. PLUS provides a variety of methods for interventional tool pose and ultrasound image acquisition from a wide range of tracking and imaging devices, spatial and temporal calibration, volume reconstruction, simulated image generation, and recording and live streaming of the acquired data. This paper introduces PLUS, explains its functionality and architecture, and presents typical uses and performance in ultrasound-guided intervention systems. PLUS fulfills the essential requirements for the development of ultrasound-guided intervention systems and aspires to become a widely used translational research prototyping platform. PLUS is freely available as open-source software under the BSD license and can be downloaded from http://www.plustoolkit.org.

  9. Continuum generation in optical fibers for high-resolution holographic coherence domain imaging application

    NASA Astrophysics Data System (ADS)

    Li, Linghui; Gruzdev, Vitaly; Yu, Ping; Chen, J. K.

    2009-02-01

    High pulse energy continuum generation in conventional multimode optical fibers has been studied for potential applications to a holographic optical coherence imaging system. As a new modality for biological tissue imaging, high-resolution holographic optical coherence imaging requires a broadband light source with high brightness, relatively low spatial coherence, and high stability. A broadband femtosecond laser cannot be used as the light source of a holographic imaging system because it creates strong speckle patterns. By coupling high peak power femtosecond laser pulses into a multimode optical fiber, nonlinear optical effects cause continuum generation that can serve as a super-bright, broadband light source. In our experiment, an amplified femtosecond laser was coupled into the fiber through a microscope objective. We measured the FWHM of the continuum generation as a function of incident pulse energy from 80 nJ to 800 μJ. The maximum FWHM is about 8 times larger than that of the input pulses. The stability was analyzed at different pump energies, integration times, and fiber lengths. The spectral broadening and peak position show that more than two processes compete in the fiber.

  10. Multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography at 400 kHz

    NASA Astrophysics Data System (ADS)

    El-Haddad, Mohamed T.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Multimodal imaging systems that combine scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) have demonstrated the utility of concurrent en face and volumetric imaging for aiming, eye tracking, bulk motion compensation, mosaicking, and contrast enhancement. However, this additional functionality trades off with increased system complexity and cost because both SLO and OCT generally require dedicated light sources, galvanometer scanners, relay and imaging optics, detectors, and control and digitization electronics. We previously demonstrated multimodal ophthalmic imaging using swept-source spectrally encoded SLO and OCT (SS-SESLO-OCT). Here, we present system enhancements and a new optical design that increase our SS-SESLO-OCT data throughput by >7x and field-of-view (FOV) by >4x. A 200 kHz 1060 nm Axsun swept-source was optically buffered to 400 kHz sweep-rate, and SESLO and OCT were simultaneously digitized on dual input channels of a 4 GS/s digitizer at 1.2 GS/s per channel using a custom k-clock. We show in vivo human imaging of the anterior segment out to the limbus and retinal fundus over a >40° FOV. In addition, nine overlapping volumetric SS-SESLO-OCT volumes were acquired under video-rate SESLO preview and guidance. In post-processing, all nine SESLO images and en face projections of the corresponding OCT volumes were mosaicked to show widefield multimodal fundus imaging with a >80° FOV. Concurrent multimodal SS-SESLO-OCT may have applications in clinical diagnostic imaging by enabling aiming, image registration, and multi-field mosaicking and benefit intraoperative imaging by allowing for real-time surgical feedback, instrument tracking, and overlays of computationally extracted image-based surrogate biomarkers of disease.

  11. Analyzing huge pathology images with open source software.

    PubMed

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272.

  12. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272 PMID:23829479

  13. Nighttime image dehazing using local atmospheric selection rule and weighted entropy for visible-light systems

    NASA Astrophysics Data System (ADS)

    Park, Dubok; Han, David K.; Ko, Hanseok

    2017-05-01

    Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.
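    The final dehazing step inverts the standard haze imaging model I = J·t + A·(1 − t) using the two estimated quantities. A minimal sketch of that inversion is given below; the glow decomposition and the entropy-based estimators from the paper are not reproduced, and the transmission floor is an assumed value.

```python
import numpy as np

def remove_haze(image, atmospheric_light, transmission, t_min=0.1):
    """Invert the haze imaging model I = J*t + A*(1 - t).

    image             : observed hazy image, float array (H, W, 3) in [0, 1]
    atmospheric_light : estimated A, shape (3,)
    transmission      : estimated t, shape (H, W)
    t_min             : floor on t to avoid amplifying noise in dense haze
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]
    radiance = (image - atmospheric_light) / t + atmospheric_light
    return np.clip(radiance, 0.0, 1.0)
```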

  14. Characterization of the new neutron imaging and materials science facility IMAT

    NASA Astrophysics Data System (ADS)

    Minniti, Triestino; Watanabe, Kenichi; Burca, Genoveva; Pooley, Daniel E.; Kockelmann, Winfried

    2018-04-01

    IMAT is a new cold neutron imaging and diffraction instrument located at the second target station of the pulsed neutron spallation source ISIS, UK. A broad range of materials science and materials testing areas will be covered by IMAT. We present the characterization of the imaging part, including the energy-selective and energy-dispersive imaging options, and provide the basic parameters of the radiography and tomography instrument. In particular, we provide detailed studies of one- and two-dimensional neutron beam flux profiles, neutron flux as a function of neutron wavelength, spatial and energy-dependent neutron beam uniformity, guide artifacts, divergence and spatial resolution, and neutron pulse widths. An accurate characterization of the neutron beam at the sample position, located 56 m from the source, is required to optimize the collection of radiographic and tomographic data sets and, in particular, to perform energy-dispersive neutron imaging via time-of-flight methods.

  15. 2D dose distribution images of a hybrid low field MRI-γ detector

    NASA Astrophysics Data System (ADS)

    Abril, A.; Agulles-Pedrós, L.

    2016-07-01

    The proposed hybrid system is a combination of a low-field MRI and a dosimetric gel acting as a γ detector. The readout is based on the polymerization process induced in the gel by the radiation. A gel dose map is obtained, which represents the functional part of the hybrid image alongside the anatomical MRI image. Both images should be taken while the patient with a radiopharmaceutical is located inside the MRI system with a gel detector matrix. A relevant aspect of this proposal is that the dosimetric gel has never before been used to acquire medical images. The results presented show the interaction of the 99mTc source with the dosimetric gel simulated in Geant4. The purpose was to obtain the planar 2D γ image. Different source configurations are studied to explore the ability of the gel as a radiation detector through the following parameters: resolution, shape definition, and radiopharmaceutical concentration.

  16. Informatics in radiology (infoRAD): navigating the fifth dimension: innovative interface for multidimensional multimodality image navigation.

    PubMed

    Rosset, Antoine; Spadola, Luca; Pysher, Lance; Ratib, Osman

    2006-01-01

    The display and interpretation of images obtained by combining three-dimensional data acquired with two different modalities (eg, positron emission tomography and computed tomography) in the same subject require complex software tools that allow the user to adjust the image parameters. With the current fast imaging systems, it is possible to acquire dynamic images of the beating heart, which add a fourth dimension of visual information, the temporal dimension. Moreover, images acquired at different points during the transit of a contrast agent or during different functional phases add a fifth dimension, functional data. To facilitate real-time image navigation in the resultant large multidimensional image data sets, the authors developed a Digital Imaging and Communications in Medicine-compliant software program. The open-source software, called OsiriX, allows the user to navigate through multidimensional image series while adjusting the blending of images from different modalities, image contrast and intensity, and the rate of cine display of dynamic images. The software is available for free download at http://homepage.mac.com/rossetantoine/osirix. (c) RSNA, 2006.

  17. Sources of image degradation in fundamental and harmonic ultrasound imaging using nonlinear, full-wave simulations.

    PubMed

    Pinton, Gianmarco F; Trahey, Gregg E; Dahl, Jeremy J

    2011-04-01

    A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain (FDTD). This numerical method is used to simulate propagation of a diagnostic ultrasound pulse through a measured representation of the human abdomen with heterogeneities in speed of sound, attenuation, density, and nonlinearity. Conventional delay-and-sum beamforming is used to generate point spread functions (PSF) that display the effects of these heterogeneities. For the particular imaging configuration that is modeled, these PSFs reveal that the primary source of degradation in fundamental imaging is reverberation from near-field structures. Reverberation clutter in the harmonic PSF is 26 dB higher than in the fundamental PSF. An artificial medium with uniform velocity but unchanged impedance characteristics indicates that for the fundamental PSF, the primary source of degradation is phase aberration. An ultrasound image is created in silico using the same physical and algorithmic process used in an ultrasound scanner: a series of pulses are transmitted through heterogeneous scattering tissue and the received echoes are used in a delay-and-sum beamforming algorithm to generate images. These beamformed images are compared with images obtained from convolution of the PSF with a scatterer field to demonstrate that a very large portion of the PSF must be used to accurately represent the clutter observed in conventional imaging. © 2011 IEEE
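    For reference, the beamforming step named in the abstract can be sketched for a single image point as below; a plane-wave transmit along the depth axis and typical ultrasound values for sound speed and sampling rate are assumptions, not the paper's configuration.

```python
import numpy as np

def delay_and_sum_point(channel_data, element_x, focus_x, focus_z,
                        c=1540.0, fs=40e6):
    """Receive delay-and-sum for one image point.

    channel_data : (n_elements, n_samples) RF data, with t = 0 at transmit
    element_x    : element lateral positions (m); elements assumed at z = 0
    A plane-wave transmit along z is assumed, so the transmit delay is
    simply focus_z / c; sound speed and sampling rate are typical values.
    """
    t_tx = focus_z / c
    t_rx = np.sqrt((element_x - focus_x) ** 2 + focus_z ** 2) / c
    idx = np.round((t_tx + t_rx) * fs).astype(int)
    idx = np.clip(idx, 0, channel_data.shape[1] - 1)
    return channel_data[np.arange(len(element_x)), idx].sum()

# Looping this over a grid of (focus_x, focus_z) points yields the image.
```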

  18. Wide-area mapping of resting state hemodynamic correlations at microvascular resolution with multi-contrast optical imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Senarathna, Janaka; Hadjiabadi, Darian; Gil, Stacy; Thakor, Nitish V.; Pathak, Arvind P.

    2017-02-01

    Different brain regions exhibit complex information processing even at rest. Therefore, assessing temporal correlations between regions permits task-free visualization of their 'resting state connectivity'. Although functional MRI (fMRI) is widely used for mapping resting state connectivity in the human brain, it is not well suited for 'microvascular scale' imaging in rodents because of its limited spatial resolution. Moreover, co-registered cerebral blood flow (CBF) and total hemoglobin (HbT) data are often unavailable in conventional fMRI experiments. Therefore, we built a customized system that combines laser speckle contrast imaging (LSCI), intrinsic optical signal (IOS) imaging and fluorescence imaging (FI) to generate multi-contrast functional connectivity maps at a spatial resolution of 10 μm. This system comprised three illumination sources: a 632 nm HeNe laser (for LSCI), a 570 nm ± 5 nm filtered white light source (for IOS), and a 473 nm blue laser (for FI), as well as a sensitive CCD camera operating at 10 frames per second for image acquisition. The acquired data enabled visualization of changes in resting state neurophysiology at microvascular spatial scales. Moreover, concurrent mapping of CBF and HbT-based temporal correlations enabled in vivo mapping of how resting brain regions were linked in terms of their hemodynamics. Additionally, we complemented this approach by exploiting the transit times of a fluorescent tracer (Dextran-FITC) to distinguish arterial from venous perfusion. Overall, we demonstrated the feasibility of wide-area mapping of resting state connectivity at microvascular resolution and created a new toolbox for interrogating neurovascular function.
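    The LSCI channel of such a system maps flow through the local speckle contrast K = σ/⟨I⟩ computed over a small sliding window. A minimal sketch, using a common but arbitrary 7×7 window, is:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw_speckle_image, window=7):
    """Spatial speckle contrast K = sigma / mean over a sliding window.

    Lower K corresponds to faster flow (more blurring of the speckle
    pattern during the exposure). The 7x7 window size is a common but
    arbitrary choice.
    """
    img = raw_speckle_image.astype(float)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)
    return np.sqrt(var) / np.maximum(mean, 1e-12)
```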

  19. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

  20. unWISE: Unblurred Coadds of the WISE Imaging

    NASA Astrophysics Data System (ADS)

    Lang, Dustin

    2014-05-01

    The Wide-field Infrared Survey Explorer (WISE) satellite observed the full sky in four mid-infrared bands in the 2.8-28 μm range. The primary mission was completed in 2010. The WISE team has done a superb job of producing a series of high-quality, well-documented, complete data releases in a timely manner. However, the "Atlas Image" coadds that are part of the recent AllWISE and previous data releases were intentionally blurred. Convolving the images by the point-spread function while coadding results in "matched-filtered" images that are close to optimal for detecting isolated point sources. But these matched-filtered images are sub-optimal or inappropriate for other purposes. For example, we are photometering the WISE images at the locations of sources detected in the Sloan Digital Sky Survey through forward modeling, and this blurring decreases the available signal-to-noise by effectively broadening the point-spread function. This paper presents a new set of coadds of the WISE images that have not been blurred. These images retain the intrinsic resolution of the data and are appropriate for photometry preserving the available signal-to-noise. Users should be cautioned, however, that the W3- and W4-band coadds contain artifacts around large, bright structures (large galaxies, dusty nebulae, etc.); eliminating these artifacts is the subject of ongoing work. These new coadds, and the code used to produce them, are publicly available at http://unwise.me.

  1. Designing Tracking Software for Image-Guided Surgery Applications: IGSTK Experience

    PubMed Central

    Enquobahrie, Andinet; Gobbi, David; Turek, Matt; Cheng, Patrick; Yaniv, Ziv; Lindseth, Frank; Cleary, Kevin

    2009-01-01

    Objective Many image-guided surgery applications require tracking devices as part of their core functionality. The Image-Guided Surgery Toolkit (IGSTK) was designed and developed to interface tracking devices with software applications incorporating medical images. Methods IGSTK was designed as an open source C++ library that provides the basic components needed for fast prototyping and development of image-guided surgery applications. This library follows a component-based architecture with several components designed for specific sets of image-guided surgery functions. At the core of the toolkit is the tracker component that handles communication between a control computer and navigation device to gather pose measurements of surgical instruments present in the surgical scene. The representations of the tracked instruments are superimposed on anatomical images to provide visual feedback to the clinician during surgical procedures. Results The initial version of the IGSTK toolkit has been released in the public domain and several trackers are supported. The toolkit and related information are available at www.igstk.org. Conclusion With the increased popularity of minimally invasive procedures in health care, several tracking devices have been developed for medical applications. Designing and implementing high-quality and safe software to handle these different types of trackers in a common framework is a challenging task. It requires establishing key software design principles that emphasize abstraction, extensibility, reusability, fault-tolerance, and portability. IGSTK is an open source library that satisfies these needs for the image-guided surgery community. PMID:20037671

  2. Evaluation of the image quality of telescopes using the star test

    NASA Astrophysics Data System (ADS)

    Vazquez y Monteil, Sergio; Salazar Romero, Marcos A.; Gale, David M.

    2004-10-01

    The Point Spread Function (PSF), or star test, is one of the main criteria used to assess the quality of the image formed by a telescope. In a real system the distribution of irradiance in the image of a point source is given by the PSF, a function that is highly sensitive to aberrations. The PSF of a telescope may be determined by measuring the intensity distribution in the image of a star. Alternatively, if the aberrations present in the optical system are already known, diffraction theory may be used to calculate the function. In this paper we propose a method for determining the wavefront aberrations from the PSF, using genetic algorithms to perform an optimization that starts from the PSF instead of the more traditional approach of fitting an aberration polynomial. We show that this method of phase recovery is immune to noise-induced errors arising during image acquisition and registration. Some practical results are shown.
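
    For orientation, the forward model that such an optimization works against can be written in a few lines: given a pupil and an aberration phase, diffraction theory gives the PSF as the squared modulus of the Fourier transform of the pupil function. A sketch under assumed parameters follows (the optimizer described above would search the aberration coefficients until this model PSF matches the measured star image):

    ```python
    import numpy as np

    n = 256
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = (r <= 1.0).astype(float)

    # Example aberration: a small amount of coma, expressed in waves (assumed value).
    coma_waves = 0.3 * (3 * r**3 - 2 * r) * np.cos(theta)
    field = pupil * np.exp(2j * np.pi * coma_waves)

    # The PSF is the squared modulus of the Fourier transform of the pupil field.
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(4 * n, 4 * n))))**2
    psf /= psf.sum()
    ```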

  3. Hybrid imaging in foot and ankle disorders.

    PubMed

    García Jiménez, R; García-Gómez, F J; Noriega Álvarez, E; Calvo Morón, C; Martín-Marcuartu, J J

    Disorders of the foot and ankle are among the most frequent disorders affecting the musculoskeletal system and have a great impact on patients' quality of life. Accurate diagnosis is an important clinical challenge because of the complex anatomy and function of the foot, which make it difficult to locate the source of the pain by routine clinical examination. In the study of foot pathology, anatomical imaging (radiography, magnetic resonance imaging [MRI], ultrasound and computed tomography [CT]) and functional imaging (bone scan, positron emission tomography [PET] and MRI) techniques have been used. Hybrid imaging combines the advantages of morphological and functional studies in a synergistic way, helping the clinician manage complex problems. In this article we delve into the anatomy and biomechanics of the foot and ankle and describe the potential indications for the current hybrid techniques available for the study of foot and ankle disease. Copyright © 2017 Elsevier España, S.L.U. y SEMNIM. All rights reserved.

  4. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
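
    A minimal sketch of the amplitude step this calibration feeds into (function and variable names are assumptions, not the authors' code): in speckle amplitude reconstruction, the frame-averaged image power spectrum is divided by the model speckle transfer function, so any error in that model propagates directly into the recovered object amplitudes and hence into the photometric precision.

    ```python
    import numpy as np

    def object_amplitudes(frames, stf_model, eps=1e-6):
        """Frame-averaged power spectrum divided by a model speckle transfer function
        gives the object's Fourier amplitudes; the phases come from a separate step."""
        power = np.mean([np.abs(np.fft.fft2(f))**2 for f in frames], axis=0)
        return np.sqrt(np.maximum(power / (stf_model + eps), 0.0))
    ```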

  5. ART AND SCIENCE OF IMAGE MAPS.

    USGS Publications Warehouse

    Kidwell, Richard D.; McSweeney, Joseph A.

    1985-01-01

    The visual image of reflected light is influenced by the complex interplay of human color discrimination, spatial relationships, surface texture, and the spectral purity of light, dyes, and pigments. Scientific theories of image processing may not always achieve acceptable results, because the variety of factors involved, some of them psychological, is in part unpredictable. The tonal relationships that affect digital image processing, and the transfer functions used to transform the continuous-tone source image into a lithographic image, can be interpreted to give insight into where art and science fuse in the production process. The application of art and science in image map production at the U.S. Geological Survey is illustrated and discussed.

  6. Retooling Laser Speckle Contrast Analysis Algorithm to Enhance Non-Invasive High Resolution Laser Speckle Functional Imaging of Cutaneous Microcirculation

    NASA Astrophysics Data System (ADS)

    Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.

    2017-01-01

    Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. Functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address this weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving window averaging enhanced signal-to-noise ratio. Functional quantitative study of blood flow kinetics was performed on single gated microvessels using a freehand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to the study of a wide range of tissue systems, including the fine vascular network in murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware-agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond standard limits of the optical system.
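
    For readers unfamiliar with LSCI, the quantity being post-processed is the speckle contrast; a generic sketch of its computation follows (standard textbook form, not the authors' Matlab code; the window size and the 1/K^2 flow index are conventional choices, and the inverse-variance weighting and moving-window averaging described above would be applied on top of this):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(raw, win=7):
        """Spatial speckle contrast K = sigma/mean over a sliding window."""
        img = raw.astype(float)
        mean = uniform_filter(img, win)
        var = np.maximum(uniform_filter(img**2, win) - mean**2, 0.0)
        return np.sqrt(var) / np.maximum(mean, 1e-9)

    def flow_index(raw, win=7):
        """1/K^2 is commonly used as a relative flow measure (high flow -> low K)."""
        return 1.0 / np.maximum(speckle_contrast(raw, win), 1e-6) ** 2
    ```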

  7. Retooling Laser Speckle Contrast Analysis Algorithm to Enhance Non-Invasive High Resolution Laser Speckle Functional Imaging of Cutaneous Microcirculation

    PubMed Central

    Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.

    2017-01-01

    Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. Functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address this weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving window averaging enhanced signal-to-noise ratio. Functional quantitative study of blood flow kinetics was performed on single gated microvessels using a freehand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to the study of a wide range of tissue systems, including the fine vascular network in murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware-agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond standard limits of the optical system. PMID:28106129

  8. Method and apparatus for the simultaneous display and correlation of independently generated images

    DOEpatents

    Vaitekunas, Jeffrey J.; Roberts, Ronald A.

    1991-01-01

    An apparatus and method for location-by-location correlation of multiple images from Non-Destructive Evaluation (NDE) and other sources. Multiple images of a material specimen are displayed on one or more monitors of an interactive graphics system. Specimen landmarks are located in each image and mapping functions from a reference image to each other image are calculated using the landmark locations. A location selected by positioning a cursor in the reference image is mapped to the other images and location identifiers are simultaneously displayed in those images. Movement of the cursor in the reference image causes simultaneous movement of the location identifiers in the other images to positions corresponding to the location of the reference image cursor.
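
    The record does not specify the mapping functions; one common choice, sketched here purely as an illustration (the landmark coordinates are made up), is a least-squares affine transform fitted to the landmark pairs and then applied to the reference-image cursor position:

    ```python
    import numpy as np

    def fit_affine(ref_pts, img_pts):
        """Least-squares 2D affine map: [x', y'] = [x, y, 1] @ A, with A of shape (3, 2)."""
        ref = np.hstack([np.asarray(ref_pts, float), np.ones((len(ref_pts), 1))])
        A, *_ = np.linalg.lstsq(ref, np.asarray(img_pts, float), rcond=None)
        return A

    def map_point(A, xy):
        """Transfer a cursor position from the reference image to the other image."""
        return np.hstack([np.asarray(xy, float), 1.0]) @ A

    A = fit_affine([[10, 12], [80, 15], [45, 70]], [[13, 20], [85, 22], [50, 79]])
    print(map_point(A, [40, 40]))
    ```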

  9. The Main Sources of Intersubject Variability in Neuronal Activation for Reading Aloud

    ERIC Educational Resources Information Center

    Kherif, Ferath; Josse, Goulven; Seghier, Mohamed L.; Price, Cathy J.

    2009-01-01

    The aim of this study was to find the most prominent source of intersubject variability in neuronal activation for reading familiar words aloud. To this end, we collected functional imaging data from a large sample of subjects (n = 76) with different demographic characteristics such as handedness, sex, and age, while reading. The…

  10. Dependence of Microlensing on Source Size and Lens Mass

    NASA Astrophysics Data System (ADS)

    Congdon, A. B.; Keeton, C. R.

    2007-11-01

    In gravitationally lensed quasars, the magnification of an image depends on the configuration of stars in the lensing galaxy. We study the statistics of the magnification distribution for random star fields. The width of the distribution characterizes the amount by which the observed magnification is likely to differ from models in which the mass is smoothly distributed. We use numerical simulations to explore how the width of the magnification distribution depends on the mass function of stars, and on the size of the source quasar. We then propose a semi-analytic model to describe the distribution width for different source sizes and stellar mass functions.

  11. Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses

    NASA Astrophysics Data System (ADS)

    Wong, Stephen T. C.; Knowlton, Robert C.; Hoo, Kent S.; Huang, H. K.

    1995-05-01

    Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets, and thereby improve clinical decision making, has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the noninvasive presurgical evaluation of epilepsy patients. These techniques include online access to the picture archiving and communication systems (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantitation of structural and functional information contained in the registered images. For illustration, we describe the use of these techniques in a patient case of nonlesional neocortical epilepsy. We also present our future work based on preliminary studies.

  12. Towards a Full Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2015-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, for instance, wavefield diffusivity and equipartitioning, zero attenuation, etc., that are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation we attempt to develop a method that consistently accounts for noise distribution, 3D heterogeneous Earth structure and the full seismic wave propagation physics in order to improve the current resolution of tomographic images of the Earth. As an initial step towards a full waveform ambient noise inversion we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capabilities of different misfit functionals to image wave speed anomalies and source distribution and (2) possible source-structure trade-offs, especially to what extent unresolvable structure could be mapped into the inverted noise source distribution and vice versa.

  13. [Construction of DICOM-WWW gateway by open source, and application to PDAs using the high-speed mobile communications network].

    PubMed

    Yokohama, Noriya

    2003-09-01

    The author constructed a medical image network system using open-source software that took security into consideration. The system allowed images stored on a DICOM server to be searched and browsed with a WWW browser. To realize this function, software was developed in the PHP language to bridge the gap between the DICOM protocol and HTTP. Transmission speed was evaluated with respect to the difference between the DICOM and HTTP protocols. Furthermore, an attempt was made to evaluate the convenience of accessing medical images from a personal digital assistant via the Internet over the high-speed mobile communications network. The results suggested the feasibility of remote diagnosis and application to emergency care.

  14. Hemispherical reflectance model for passive images in an outdoor environment.

    PubMed

    Kim, Charles C; Thai, Bea; Yamaoka, Neil; Aboutalib, Omar

    2015-05-01

    We present a hemispherical reflectance model for simulating passive images in an outdoor environment where illumination is provided by natural sources such as the sun and the clouds. While the bidirectional reflectance distribution function (BRDF) accurately produces radiance from any objects after the illumination, using the BRDF in calculating radiance requires double integration. Replacing the BRDF by hemispherical reflectance under the natural sources transforms the double integration into a multiplication. This reduces both storage space and computation time. We present the formalism for the radiance of the scene using hemispherical reflectance instead of BRDF. This enables us to generate passive images in an outdoor environment taking advantage of the computational and storage efficiencies. We show some examples for illustration.
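
    To make the computational saving concrete, here is a toy sketch (an assumed Lambertian-style form with a made-up sky model, not the authors' formalism): the sky is integrated once into an irradiance E, after which each surface point needs only a multiplication by its hemispherical reflectance rather than a fresh double integral against the BRDF.

    ```python
    import numpy as np

    def irradiance(sky_radiance, n_theta=90, n_phi=180):
        """E = integral over the upper hemisphere of L_sky(theta, phi) cos(theta) sin(theta)."""
        theta = np.linspace(0.0, np.pi / 2, n_theta)
        phi = np.linspace(0.0, 2 * np.pi, n_phi)
        T, P = np.meshgrid(theta, phi, indexing="ij")
        dA = (theta[1] - theta[0]) * (phi[1] - phi[0])
        return np.sum(sky_radiance(T, P) * np.cos(T) * np.sin(T)) * dA

    def reflected_radiance(rho_hemi, E):
        """One multiplication replaces the per-pixel double integral over the BRDF."""
        return rho_hemi * E / np.pi

    E = irradiance(lambda t, p: 1.0 + 0.3 * np.cos(t))  # toy sky model (assumption)
    print(reflected_radiance(0.2, E))
    ```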

  15. A three-wavelength multi-channel brain functional imager based on digital lock-in photon-counting technique

    NASA Astrophysics Data System (ADS)

    Ding, Xuemei; Wang, Bingyuan; Liu, Dongyuan; Zhang, Yao; He, Jie; Zhao, Huijuan; Gao, Feng

    2018-02-01

    During the past two decades there has been a dramatic rise in the use of functional near-infrared spectroscopy (fNIRS) as a neuroimaging technique in cognitive neuroscience research. Diffuse optical tomography (DOT) and optical topography (OT) can be employed as the optical imaging techniques for brain activity investigation. However, most current imagers with analogue detection are limited by sensitivity and dynamic range. Although photon-counting detection can significantly improve detection sensitivity, the intrinsic nature of sequential excitations reduces temporal resolution. To improve temporal resolution, sensitivity and dynamic range, we develop a multi-channel continuous-wave (CW) system for brain functional imaging based on a novel lock-in photon-counting technique. The system consists of 60 light-emitting diode (LED) sources at three wavelengths of 660 nm, 780 nm and 830 nm, which are modulated by current-stabilized square-wave signals at different frequencies, and 12 photomultiplier tubes (PMTs) operated with the lock-in photon-counting technique. This design combines the ultra-high sensitivity of the photon-counting technique with the parallelism of the digital lock-in technique. We can therefore acquire the diffused light intensity for all the source-detector pairs (SD-pairs) in parallel. The performance assessments of the system are conducted using phantom experiments, and demonstrate its excellent measurement linearity, negligible inter-channel crosstalk, strong noise robustness and high temporal resolution.
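
    The key idea, sketched below under an assumed signal model (this is not the instrument's firmware), is that a single detector's photon-count stream can be demodulated digitally against the square-wave reference of each source, so all source-detector pairs are measured in parallel from one counting channel:

    ```python
    import numpy as np

    def lockin_amplitude(counts, fs, f_mod):
        """Demodulate a photon-count time series against square-wave references at one
        source's modulation frequency; contributions from other sources average out."""
        t = np.arange(len(counts)) / fs
        ref_i = np.sign(np.cos(2 * np.pi * f_mod * t))
        ref_q = np.sign(np.sin(2 * np.pi * f_mod * t))
        return np.hypot(np.mean(counts * ref_i), np.mean(counts * ref_q))
    ```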

  16. Fast deep-tissue multispectral optoacoustic tomography (MSOT) for preclinical imaging of cancer and cardiovascular disease

    NASA Astrophysics Data System (ADS)

    Taruttis, Adrian; Razansky, Daniel; Ntziachristos, Vasilis

    2012-02-01

    Optoacoustic imaging has enabled the visualization of optical contrast at high resolutions in deep tissue. Our multispectral optoacoustic tomography (MSOT) imaging results reveal internal tissue heterogeneity, where the underlying distribution of specific endogenous and exogenous sources of absorption can be resolved in detail. Technical advances in cardiac imaging allow motion-resolved multispectral measurements of the heart, opening the way for studies of cardiovascular disease. We further demonstrate the fast characterization of the pharmacokinetic profiles of light-absorbing agents. Overall, our MSOT findings indicate new possibilities in high-resolution imaging of functional and molecular parameters.

  17. Extending RTM Imaging With a Focus on Head Waves

    NASA Astrophysics Data System (ADS)

    Holicki, Max; Drijkoningen, Guy

    2016-04-01

    Conventional industry seismic imaging focuses predominantly on pre-critical reflections, muting post-critical arrivals in the process. This standard approach neglects much of the information present in the recorded wavefield. The neglect has been partially remedied by the inclusion of head waves in more advanced imaging techniques such as Full Waveform Inversion (FWI). We would like post-critical information to leave the realm of labour-intensive travel-time picking and tomographic inversion and move towards full migration, to improve subsurface imaging and parameter estimation. We present a novel seismic imaging approach aimed at exploiting post-critical information, using the constant travel path of head waves between shots. To this end, we propose to generalize conventional Reverse Time Migration (RTM) to scenarios where the sources of the forward and backward propagated wavefields do not coincide. RTM works on the principle that receiver data backward propagated from a given source location must overlap, at subsurface scatterers, with the source wavefield forward propagated from that same location. Where the wavefields overlap in the subsurface there is a peak in the zero-lag cross-correlation, and this peak is used for imaging. To include head waves, we propose to relax the condition of coincident sources. Wavefields from non-coincident sources will then no longer overlap properly in the subsurface, but they can be brought back into overlap by time-shifting either the forward or the backward propagated wavefield. This is equivalent to imaging at non-zero cross-correlation lags, where the lag is the travel-time difference between the two wavefields for a given event, and it allows us to steer which arrivals are used for imaging. In the simplest case we could use eikonal travel times to generate the migration image, or we could image the subsurface exclusively with the head wave from the nth layer. To illustrate the method we apply it to a five-layer Earth model and compare it to conventional RTM. We show that conventional RTM highlights interfaces, while our head-wave-based images highlight layers, producing fundamentally different images. We also demonstrate that the proposed imaging scheme is more sensitive to the velocity model than conventional RTM, which is important for improved velocity model building in the future.
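
    A schematic sketch of the imaging condition being generalized (array shapes and names are assumptions): conventional RTM takes the zero-lag cross-correlation of the forward and backward propagated wavefields, and the proposal above amounts to evaluating the same correlation at a chosen non-zero lag equal to the travel-time difference of the targeted arrival.

    ```python
    import numpy as np

    def imaging_condition(src_wf, rec_wf, lag=0):
        """src_wf, rec_wf: wavefields of shape (nt, nz, nx); returns an image (nz, nx).
        lag = 0 reproduces conventional RTM; a non-zero lag shifts one wavefield in
        time so that fields from non-coincident sources can be brought into overlap."""
        nt = src_wf.shape[0]
        if lag >= 0:
            s, r = src_wf[: nt - lag], rec_wf[lag:]
        else:
            s, r = src_wf[-lag:], rec_wf[: nt + lag]
        return np.sum(s * r, axis=0)
    ```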

  18. Gaia Data Release 1. Pre-processing and source list creation

    NASA Astrophysics Data System (ADS)

    Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.

    2016-11-01

    Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans, using a maximum likelihood method, under the assumption of a line spread function constant in time, and a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection will pick up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists in arranging the more than 10^10 individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment, which manage approximately 50 million focal plane transits daily, giving transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia health.

  19. Low-dose 4D cardiac imaging in small animals using dual source micro-CT

    NASA Astrophysics Data System (ADS)

    Holbrook, M.; Clark, D. P.; Badea, C. T.

    2018-01-01

    Micro-CT is widely used in preclinical studies, generating substantial interest in extending its capabilities in functional imaging applications such as blood perfusion and cardiac function. However, imaging cardiac structure and function in mice is challenging due to their small size and rapid heart rate. To overcome these challenges, we propose and compare improvements on two strategies for cardiac gating in dual-source, preclinical micro-CT: fast prospective gating (PG) and uncorrelated retrospective gating (RG). These sampling strategies combined with a sophisticated iterative image reconstruction algorithm provide faster acquisitions and high image quality in low-dose 4D (i.e. 3D  +  Time) cardiac micro-CT. Fast PG is performed under continuous subject rotation which results in interleaved projection angles between cardiac phases. Thus, fast PG provides a well-sampled temporal average image for use as a prior in iterative reconstruction. Uncorrelated RG incorporates random delays during sampling to prevent correlations between heart rate and sampling rate. We have performed both simulations and animal studies to validate these new sampling protocols. Sampling times for 1000 projections using fast PG and RG were 2 and 3 min, respectively, and the total dose was 170 mGy each. Reconstructions were performed using a 4D iterative reconstruction technique based on the split Bregman method. To examine undersampling robustness, subsets of 500 and 250 projections were also used for reconstruction. Both sampling strategies in conjunction with our iterative reconstruction method are capable of resolving cardiac phases and provide high image quality. In general, for equal numbers of projections, fast PG shows fewer errors than RG and is more robust to undersampling. Our results indicate that only 1000-projection based reconstruction with fast PG satisfies a 5% error criterion in left ventricular volume estimation. These methods promise low-dose imaging with a wide range of preclinical applications in cardiac imaging.

  20. Exact image theory for the problem of dielectric/magnetic slab

    NASA Technical Reports Server (NTRS)

    Lindell, I. V.

    1987-01-01

    Exact image method, recently introduced for the exact solution of electromagnetic field problems involving homogeneous half spaces and microstrip-like geometries, is developed for the problem of homogeneous slab of dielectric and/or magnetic material in free space. Expressions for image sources, creating the exact reflected and transmitted fields, are given and their numerical evaluation is demonstrated. Nonradiating modes, guided by the slab and responsible for the loss of convergence of the image functions, are considered and extracted. The theory allows, for example, an analysis of finite ground planes in microstrip antenna structures.

  1. Brain functional BOLD perturbation modelling for forward fMRI and inverse mapping

    PubMed Central

    Robinson, Jennifer; Calhoun, Vince

    2018-01-01

    Purpose: To computationally separate dynamic brain functional BOLD responses from the static background of brain functional activity, for forward fMRI signal analysis and inverse mapping. Methods: Brain functional activity is represented in terms of its magnetic source by a perturbation model, χ = χ0 + δχ, with δχ denoting the BOLD magnetic perturbations and χ0 the background. A brain fMRI experiment produces a time series of complex-valued images (T2* images), from which we extract the BOLD phase signals (denoted δP) by a complex division. By solving an inverse problem, we reconstruct the BOLD δχ dataset from the δP dataset, and the brain χ distribution from an (unwrapped) T2* phase image. Given a 4D dataset of task BOLD fMRI, we implement brain functional mapping by temporal correlation analysis. Results: Through a high-field (7 T), high-resolution (0.5 mm in-plane) task fMRI experiment, we demonstrated in detail the BOLD perturbation model for fMRI phase-signal separation (P + δP) and for reconstructing the intrinsic brain magnetic source (χ and δχ). We also applied it to a low-field (3 T), low-resolution (2 mm) task fMRI experiment in support of single-subject fMRI studies. Our experiments show that the δχ-depicted functional map reveals bidirectional BOLD χ perturbations during task performance. Conclusions: The BOLD perturbation model allows us to separate the fMRI phase signal (by complex division) and to perform inverse mapping for pure BOLD δχ reconstruction for intrinsic functional χ mapping. The full-brain χ reconstruction (from the unwrapped fMRI phase) provides a new brain tissue image that allows the brain tissue idiosyncrasy to be scrutinized against the pure BOLD δχ response through automatic function/structure co-localization. PMID:29351339
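
    A minimal sketch of the complex-division step (variable names are assumptions, not the authors' code): dividing the task-state complex image by a baseline complex image cancels the shared static background phase, so only the BOLD phase perturbation δP remains.

    ```python
    import numpy as np

    def bold_phase_perturbation(task_img, baseline_img):
        """task_img, baseline_img: complex-valued T2* images on the same grid.
        Complex division cancels the shared static background phase, leaving the
        BOLD phase perturbation (delta-P)."""
        ratio = task_img * np.conj(baseline_img) / (np.abs(baseline_img) ** 2 + 1e-12)
        return np.angle(ratio)
    ```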

  2. The GALEX Time Domain Survey. I. Selection and Classification of Over a Thousand Ultraviolet Variable Sources

    NASA Astrophysics Data System (ADS)

    Gezari, S.; Martin, D. C.; Forster, K.; Neill, J. D.; Huber, M.; Heckman, T.; Bianchi, L.; Morrissey, P.; Neff, S. G.; Seibert, M.; Schiminovich, D.; Wyder, T. K.; Burgett, W. S.; Chambers, K. C.; Kaiser, N.; Magnier, E. A.; Price, P. A.; Tonry, J. L.

    2013-03-01

    We present the selection and classification of over a thousand ultraviolet (UV) variable sources discovered in ~40 deg^2 of GALEX Time Domain Survey (TDS) NUV images observed with a cadence of 2 days and a baseline of observations of ~3 years. The GALEX TDS fields were designed to be in spatial and temporal coordination with the Pan-STARRS1 Medium Deep Survey, which provides deep optical imaging and simultaneous optical transient detections via image differencing. We characterize the GALEX photometric errors empirically as a function of mean magnitude, and select sources that vary at the 5σ level in at least one epoch. We measure the statistical properties of the UV variability, including the structure function on timescales of days and years. We report classifications for the GALEX TDS sample using a combination of optical host colors and morphology, UV light curve characteristics, and matches to archival X-ray and spectroscopy catalogs. We classify 62% of the sources as active galaxies (358 quasars and 305 active galactic nuclei), and 10% as variable stars (including 37 RR Lyrae, 53 M dwarf flare stars, and 2 cataclysmic variables). We detect a large-amplitude tail in the UV variability distribution for M-dwarf flare stars and RR Lyrae, reaching up to |Δm| = 4.6 mag and 2.9 mag, respectively. The mean amplitude of the structure function for quasars on year timescales is five times larger than observed at optical wavelengths. The remaining unclassified sources include UV-bright extragalactic transients, two of which have been spectroscopically confirmed to be a young core-collapse supernova and a flare from the tidal disruption of a star by a dormant supermassive black hole. We calculate a surface density for variable sources in the UV with NUV < 23 mag and |Δm| > 0.2 mag of ~8.0, 7.7, and 1.8 deg^-2 for quasars, active galactic nuclei, and RR Lyrae stars, respectively. We also calculate a surface density rate in the UV for transient sources, using the effective survey time at the cadence appropriate to each class, of ~15 and 52 deg^-2 yr^-1 for M dwarfs and extragalactic transients, respectively.
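
    As a pointer to what "structure function" means in this context, a generic sketch follows (one common first-order definition; the survey's exact estimator may differ): the RMS magnitude difference between all epoch pairs, binned by time lag.

    ```python
    import numpy as np

    def structure_function(t, mag, lag_bins):
        """First-order structure function: RMS magnitude difference vs. time lag."""
        t, mag = np.asarray(t, float), np.asarray(mag, float)
        i, j = np.triu_indices(len(t), k=1)
        lags = np.abs(t[i] - t[j])
        dmag2 = (mag[i] - mag[j]) ** 2
        which = np.digitize(lags, lag_bins)
        return np.array([np.sqrt(dmag2[which == b].mean()) if np.any(which == b) else np.nan
                         for b in range(1, len(lag_bins))])
    ```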

  3. Coherent optical processing using noncoherent light after source masking.

    PubMed

    Boopathi, V; Vasu, R M

    1992-01-10

    Coherent optical processing starting with spatially noncoherent illumination is described. Good spatial coherence is introduced in the far field by modulating a noncoherent source when masks with sharp autocorrelation are used. The far-field mutual coherence function of light is measured and it is seen that, for the masks and the source size used here, we get a fairly large area over which the mutual coherence function is high and flat. We demonstrate traditional coherent processing operations such as Fourier transformation and image deblurring when coherent light that is produced in the above fashion is used. A coherence-redundancy merit function is defined for this type of processing system. It is experimentally demonstrated that the processing system introduced here has superior blemish tolerance compared with a traditional processor that uses coherent illumination.

  4. Effects of the source, surface, and sensor couplings and colorimetric of laser speckle pattern on the performance of optical imaging system

    NASA Astrophysics Data System (ADS)

    Darwiesh, M.; El-Sherif, Ashraf F.; El-Ghandour, Hatem; Aly, Hussein A.; Mokhtar, A. M.

    2011-03-01

    Optical imaging systems are widely used in applications that include tracking for portable scanners; input pointing devices for laptop computers, cell phones, and cameras; fingerprint-identification scanners; optical navigation for target tracking; and optical computer mice. We present experimental work to measure and analyze the laser speckle pattern (LSP) produced by different optical sources (various color LEDs, a 3 mW diode laser, and a 10 mW He-Ne laser) on different fabricated operating surfaces (Gabor hologram diffusers), and to determine how these affect the performance of optical imaging systems in terms of speckle size and signal-to-noise ratio (the signal is represented by the speckle patches that carry information, and the noise by the remaining part of the selected image). Theoretical and experimental colorimetry studies of the optical sources are also presented: color correction is applied to the color images captured by the imaging system to produce realistic images, by selecting a suitable gray scale that retains most of the informative data in the image; this is done by calculating accurate Red-Green-Blue (RGB) color components from the measured source spectra and the color-matching functions of the International Telecommunication Union (ITU-R 709) standard for CRT phosphors (Trinitron, SONY model). These studies are used to relate the signal-to-noise ratios obtained with different diffusers for each light source. The source/surface coupling is discussed, and we conclude that the performance of the optical imaging system for a given source varies from worst to best depending on the operating surface. The sensor/surface coupling is studied for the case of the He-Ne laser; the speckle size ranges from 4.59 to 4.62 μm, essentially the same for all fabricated diffusers (consistent with the fact that the speckle size is independent of the illuminating surface). However, the calculated signal-to-noise ratio takes different values, ranging from 0.71 to 0.92, for the different diffusers. This means that the surface texture affects the performance of the optical sensor, because all images were captured under the same conditions [same source (He-Ne laser), same experimental distances, and same sensor (CCD camera)].

  5. Managing an archive of weather satellite images

    NASA Technical Reports Server (NTRS)

    Seaman, R. L.

    1992-01-01

    The author's experiences of building and maintaining an archive of hourly weather satellite pictures at NOAO are described. This archive has proven very popular with visiting and staff astronomers - especially on windy days and cloudy nights. Given access to a source of such pictures, a suite of simple shell and IRAF CL scripts can provide a great deal of robust functionality with little effort. These pictures and associated data products such as surface analysis (radar) maps and National Weather Service forecasts are updated hourly at anonymous ftp sites on the Internet, although your local Atmospheric Sciences Department may prove to be a more reliable source. The raw image formats are unfamiliar to most astronomers, but reading them into IRAF is straightforward. Techniques for performing this format conversion at the host computer level are described which may prove useful for other chores. Pointers are given to sources of data and of software, including a package of example tools. These tools include shell and Perl scripts for downloading pictures, maps, and forecasts, as well as IRAF scripts and host level programs for translating the images into IRAF and GIF formats and for slicing & dicing the resulting images. Hints for displaying the images and for making hardcopies are given.

  6. Review of current progress in nanometrology with the helium ion microscope

    NASA Astrophysics Data System (ADS)

    Postek, Michael T.; Vladár, András; Archie, Charles; Ming, Bin

    2011-02-01

    Scanning electron microscopy has been employed as an imaging and measurement tool for more than 50 years and it continues as a primary tool in many research and manufacturing facilities across the world. A new challenger to this work is the helium ion microscope (HIM). The HIM is a new imaging and metrology technology. Essentially, substitution of the electron source with a helium ion source yields a tool visually similar in function to the scanning electron microscope, but very different in the fundamental imaging and measurement process. The imaged and measured signal originates differently than in the scanning electron microscope, and that fact, together with the single-atom source diameter, may push the obtainable resolution lower, provide greater depth of field, and ultimately improve the metrology. Successful imaging and metrology with this instrument entails understanding and modeling of new ion beam/specimen interaction physics. As a new methodology, HIM is beginning to show promise and the abundance of potentially advantageous applications for nanometrology has yet to be fully exploited. This paper discusses some of the progress made at NIST in collaboration with IBM to understand the science behind this new technology.

  7. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfills formal aspects of the image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were developed in tight cooperation between scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    NASA Astrophysics Data System (ADS)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers cause the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - to not scale, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers several of the basic visualization and analysis functions commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open up several images on their web browser, adjust the intensity min/max cutoff, its scaling function, and the zoom level, apply color-maps, view position and FITS header information, execute typically used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects, etc.
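
    The cutoff-and-stretch step mentioned above is a standard display operation; a sketch of the idea follows (written in Python/numpy rather than the tool's HTML5/JavaScript, with assumed stretch constants):

    ```python
    import numpy as np

    def scale_for_display(pix, vmin, vmax, stretch="linear"):
        """Clip to the chosen min/max cutoffs, apply a stretch, return 8-bit values."""
        x = np.clip((np.asarray(pix, float) - vmin) / (vmax - vmin), 0.0, 1.0)
        if stretch == "log":
            x = np.log1p(1000.0 * x) / np.log(1001.0)
        elif stretch == "asinh":
            x = np.arcsinh(10.0 * x) / np.arcsinh(10.0)
        return (255.0 * x).astype(np.uint8)
    ```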

  9. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research.

    PubMed

    Campagnola, Luke; Kratz, Megan B; Manis, Paul B

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.

  10. Neuroimaging Evidence for Agenda-Dependent Monitoring of Different Features during Short-Term Source Memory Tests

    ERIC Educational Resources Information Center

    Mitchell, Karen J.; Raye, Carol L.; McGuire, Joseph T.; Frankel, Hillary; Greene, Erich J.; Johnson, Marcia K.

    2008-01-01

    A short-term source monitoring procedure with functional magnetic resonance imaging assessed neural activity when participants made judgments about the format of 1 of 4 studied items (picture, word), the encoding task performed (cost, place), or whether an item was old or new. The results support findings from long-term memory studies showing that…

  11. Emission source functions in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Shapoval, V. M.; Sinyukov, Yu. M.; Karpenko, Iu. A.

    2013-12-01

    Three-dimensional pion and kaon emission source functions are extracted from hydrokinetic model (HKM) simulations of central Au+Au collisions at the top Relativistic Heavy Ion Collider (RHIC) energy √s_NN = 200 GeV. The model describes well the experimental data, previously obtained by the PHENIX and STAR collaborations using the imaging technique. In particular, the HKM reproduces the non-Gaussian heavy tails of the source function in the pair transverse momentum (out) and beam (long) directions, observed in the pion case and practically absent for kaons. The role of rescatterings and long-lived resonance decays in forming the mentioned long-range tails is investigated. The particle rescattering contribution to the out tail seems to be dominating. The model calculations also show substantial relative emission times between pions (with mean value 13 fm/c in the longitudinally comoving system), including those coming from resonance decays and rescatterings. A prediction is made for the source functions in Large Hadron Collider (LHC) Pb+Pb collisions at √s_NN = 2.76 TeV, which are still not extracted from the measured correlation functions.

  12. Mitigating artifacts in back-projection source imaging with implications for frequency-dependent properties of the Tohoku-Oki earthquake

    NASA Astrophysics Data System (ADS)

    Meng, Lingsen; Ampuero, Jean-Paul; Luo, Yingdi; Wu, Wenbo; Ni, Sidao

    2012-12-01

    Comparing teleseismic array back-projection source images of the 2011 Tohoku-Oki earthquake with results from static and kinematic finite source inversions has revealed little overlap between the regions of high- and low-frequency slip. Motivated by this interesting observation, back-projection studies extended to intermediate frequencies, down to about 0.1 Hz, have suggested that a progressive transition of rupture properties as a function of frequency is observable. Here, by adapting the concept of array response function to non-stationary signals, we demonstrate that the "swimming artifact", a systematic drift resulting from signal non-stationarity, induces significant bias on beamforming back-projection at low frequencies. We introduce a "reference window strategy" into the multitaper-MUSIC back-projection technique and significantly mitigate the "swimming artifact" at high frequencies (1 s to 4 s). At lower frequencies, this modification yields notable, but significantly smaller, artifacts than time-domain stacking. We perform extensive synthetic tests that include a 3D regional velocity model for Japan. We analyze the recordings of the Tohoku-Oki earthquake at the USArray and at the European array at periods from 1 s to 16 s. The migration of the source location as a function of period, regardless of the back-projection methods, has characteristics that are consistent with the expected effect of the "swimming artifact". In particular, the apparent up-dip migration as a function of frequency obtained with the USArray can be explained by the "swimming artifact". This indicates that the most substantial frequency-dependence of the Tohoku-Oki earthquake source occurs at periods longer than 16 s. Thus, low-frequency back-projection needs to be further tested and validated in order to contribute to the characterization of frequency-dependent rupture properties.
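
    For context, the core of time-domain beamforming back-projection can be sketched in a few lines (names, array shapes, and the simple linear stack are assumptions; the multitaper-MUSIC variant discussed above replaces this stack with an eigen-decomposition of the array covariance):

    ```python
    import numpy as np

    def beam_power(traces, dt, travel_times, t0):
        """traces: (nsta, nt) waveforms sampled at interval dt (s); travel_times:
        (nsrc, nsta) predicted travel times (s) from each trial source point to each
        station; t0: trial origin time (s). Returns stacked beam power per trial point."""
        nsrc, nsta = travel_times.shape
        idx = np.clip(np.round((t0 + travel_times) / dt).astype(int), 0, traces.shape[1] - 1)
        beams = traces[np.arange(nsta)[None, :], idx]   # (nsrc, nsta) aligned samples
        return np.sum(beams, axis=1) ** 2
    ```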

  13. The Discovery of Lensed Radio and X-ray Sources Behind the Frontier Fields Cluster MACS J0717.5+3745 with the JVLA and Chandra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weeren, R. J. van; Ogrean, G. A.; Jones, C.

    We report on high-resolution JVLA and Chandra observations of the Hubble Space Telescope (HST) Frontier Cluster MACS J0717.5+3745. MACS J0717.5+3745 offers the largest contiguous magnified area of any known cluster, making it a promising target to search for lensed radio and X-ray sources. With the high-resolution 1.0–6.5 GHz JVLA imaging in A and B configuration, we detect a total of 51 compact radio sources within the area covered by the HST imaging. Within this sample, we find seven lensed sources with amplification factors larger than two. None of these sources are identified as multiply lensed. Based on the radio luminosities, the majority of these sources are likely star-forming galaxies with star-formation rates (SFRs) of 10–50 M_⊙ yr^-1 located at 1 ≲ z ≲ 2. Two of the lensed radio sources are also detected in the Chandra image of the cluster. These two sources are likely active galactic nuclei, given their 2–10 keV X-ray luminosities of ~10^43-44 erg s^-1. From the derived radio luminosity function, we find evidence for an increase in the number density of radio sources at 0.6 < z < 2.0, compared to a z < 0.3 sample. Lastly, our observations indicate that deep radio imaging of lensing clusters can be used to study star-forming galaxies, with SFRs as low as ~10 M_⊙ yr^-1, at the peak of cosmic star formation history.

  14. The Discovery of Lensed Radio and X-ray Sources Behind the Frontier Fields Cluster MACS J0717.5+3745 with the JVLA and Chandra

    DOE PAGES

    Weeren, R. J. van; Ogrean, G. A.; Jones, C.; ...

    2016-01-27

    We report on high-resolution JVLA and Chandra observations of the Hubble Space Telescope (HST) Frontier Cluster MACS J0717.5+3745. MACS J0717.5+3745 offers the largest contiguous magnified area of any known cluster, making it a promising target to search for lensed radio and X-ray sources. With the high-resolution 1.0–6.5 GHz JVLA imaging in A and B configuration, we detect a total of 51 compact radio sources within the area covered by the HST imaging. Within this sample, we find seven lensed sources with amplification factors larger than two. None of these sources are identified as multiply lensed. Based on the radio luminosities, the majority of these sources are likely star-forming galaxies with star-formation rates (SFRs) of 10–50 M_⊙ yr^-1 located at 1 ≲ z ≲ 2. Two of the lensed radio sources are also detected in the Chandra image of the cluster. These two sources are likely active galactic nuclei, given their 2–10 keV X-ray luminosities of ~10^43-44 erg s^-1. From the derived radio luminosity function, we find evidence for an increase in the number density of radio sources at 0.6 < z < 2.0, compared to a z < 0.3 sample. Lastly, our observations indicate that deep radio imaging of lensing clusters can be used to study star-forming galaxies, with SFRs as low as ~10 M_⊙ yr^-1, at the peak of cosmic star formation history.

  15. Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.

    2016-03-01

    A requirement for reconstructing a photoacoustic (PA) image is channel data acquisition synchronized with the laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm that uses the US B-mode image, which is readily available from clinical scanners. B-mode imaging involves a series of signal-processing steps: beamforming, followed by envelope detection, and ending with log compression. The B-mode image will, however, be defocused when PA signals are the input, because the delay function is incorrect for them. Our approach is to reverse the order of the image-processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic-aperture-based PA re-beamforming algorithm can then be applied. Taking the B-mode image as input, we first recover the US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to restore carrier-frequency information. The US post-beamformed RF data are then used as pre-beamformed RF data for the adaptive PA beamforming algorithm, and the new delay function is applied, taking into account that the focal depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation and demonstrated experimentally using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared with the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, owing to information loss during envelope detection and convolution of the RF information.
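
    A hedged sketch of the two recovery steps described above (the dynamic range, carrier frequency, and sampling rate are assumptions, and a measured impulse response would replace the ideal cosine carrier used here):

    ```python
    import numpy as np

    def log_decompress(bmode, dynamic_range_db=60.0):
        """Map 8-bit log-compressed B-mode pixels back to a linear envelope (max = 1)."""
        db = (np.asarray(bmode, float) / 255.0 - 1.0) * dynamic_range_db
        return 10.0 ** (db / 20.0)

    def remodulate(envelope, fc=5e6, fs=40e6):
        """envelope: 2D array (axial samples, lateral lines). Re-impose a carrier along
        the axial axis to approximate post-beamformed RF data for PA re-beamforming."""
        n = envelope.shape[0]
        t = np.arange(n) / fs
        return envelope * np.cos(2.0 * np.pi * fc * t)[:, None]
    ```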

  16. Initial Investigation of preclinical integrated SPECT and MR imaging.

    PubMed

    Hamamura, Mark J; Ha, Seunghoon; Roeck, Werner W; Wagenaar, Douglas J; Meier, Dirk; Patt, Bradley E; Nalcioglu, Orhan

    2010-02-01

    Single-photon emission computed tomography (SPECT) can provide specific functional information while magnetic resonance imaging (MRI) can provide high-spatial resolution anatomical information as well as complementary functional information. In this study, we utilized a dual modality SPECT/MRI (MRSPECT) system to investigate the integration of SPECT and MRI for improved image accuracy. The MRSPECT system consisted of a cadmium-zinc-telluride (CZT) nuclear radiation detector interfaced with a specialized radiofrequency (RF) coil that was placed within a whole-body 4 T MRI system. The importance of proper corrections for non-uniform detector sensitivity and Lorentz force effects was demonstrated. MRI data were utilized for attenuation correction (AC) of the nuclear projection data and optimized Wiener filtering of the SPECT reconstruction for improved image accuracy. Finally, simultaneous dual-imaging of a nude mouse was performed to demonstrate the utility of co-registration for accurate localization of a radioactive source.

  17. Initial Investigation of Preclinical Integrated SPECT and MR Imaging

    PubMed Central

    Hamamura, Mark J.; Ha, Seunghoon; Roeck, Werner W.; Wagenaar, Douglas J.; Meier, Dirk; Patt, Bradley E.; Nalcioglu, Orhan

    2014-01-01

    Single-photon emission computed tomography (SPECT) can provide specific functional information while magnetic resonance imaging (MRI) can provide high-spatial resolution anatomical information as well as complementary functional information. In this study, we utilized a dual modality SPECT/MRI (MRSPECT) system to investigate the integration of SPECT and MRI for improved image accuracy. The MRSPECT system consisted of a cadmium-zinc-telluride (CZT) nuclear radiation detector interfaced with a specialized radiofrequency (RF) coil that was placed within a whole-body 4 T MRI system. The importance of proper corrections for non-uniform detector sensitivity and Lorentz force effects was demonstrated. MRI data were utilized for attenuation correction (AC) of the nuclear projection data and optimized Wiener filtering of the SPECT reconstruction for improved image accuracy. Finally, simultaneous dual-imaging of a nude mouse was performed to demonstrate the utility of co-registration for accurate localization of a radioactive source. PMID:20082527

  18. A Complete Bank of Optical Images of the ICRF QSOs

    NASA Astrophysics Data System (ADS)

    Humberto Andrei, Alexandre; Taris, Francois; Anton, Sonia; Bourda, Geraldine; Damljanovic, Goran; Souchay, Jean; Vieira Martins, Roberto; Pursimo, Tapio; Barache, Christophe; Nepomuceno da Silva Neto, Dario; Fernandes Coelho, Bruno David

    2015-08-01

    We have been developing a systematic effort to collect good-quality images of the optical counterparts of ICRF sources, in particular those that have been regularly radio surveyed for future implementation at high frequencies and/or those that will serve as link sources between the ICRF and the Gaia CRF. Observations have been taken at LNA/Brazil, CASLEO/Argentina, NOT/Spain, LFOA/Austria, Rozhen/Bulgaria, and ASV/Serbia. In complement, images were collected from the SDSS. As a step towards implementing such an image data bank and making it publicly available through the IERS service, we present its description, which comprises for each source the number of measurements, filter, pixel scale, field size, and seeing at each observation. The photometric analysis is centered on the morphology, since there are still cases in which the host galaxy is overwhelming, and many cases in which the host requires non-stellar PSF modeling. On the basis of neighboring stars we assign magnitudes and variability whenever possible. Finally, assisted by the previous literature, the redshift and luminosity are used to derive astrophysical quantities, in particular the absolute magnitude, SED, and spectral index. Moreover, since Gaia will not obtain direct images of the observed sources, the morphology and magnitude become useful as templates for assembling and interpreting the one-dimensional, discontinuous line-spread-function samplings that Gaia will deliver for each QSO.

  19. THE GINI COEFFICIENT AS A MORPHOLOGICAL MEASUREMENT OF STRONGLY LENSED GALAXIES IN THE IMAGE PLANE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florian, Michael K.; Li, Nan; Gladders, Michael D.

    2016-12-01

    Characterization of the morphology of strongly lensed galaxies is challenging because images of such galaxies are typically highly distorted. Lens modeling and source plane reconstruction is one approach that can provide reasonably undistorted images from which morphological measurements can be made, though at the expense of a highly spatially variable telescope point-spread function (PSF) when mapped back to the source plane. Unfortunately, modeling the lensing mass is a time- and resource-intensive process, and in many cases there are too few constraints to precisely model the lensing mass. If, however, useful morphological measurements could be made in the image plane rather than the source plane, it would bypass this issue and obviate the need for a source reconstruction process for some applications. We examine the use of the Gini coefficient as one such measurement. Because it depends on the cumulative distribution of the light of a galaxy, but not the relative spatial positions, the fact that surface brightness is conserved by lensing means that the Gini coefficient may be well preserved by strong gravitational lensing. Through simulations, we test the extent to which the Gini coefficient is conserved, including by effects due to PSF convolution and pixelization, to determine whether it is invariant enough under lensing to be used as a measurement of galaxy morphology that can be made in the image plane.

  20. The GINI coefficient as a morphological measurement of strongly lensed galaxies in the image plane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florian, Michael K.; Li, Nan; Gladders, Michael D.

    2016-11-30

    Characterization of the morphology of strongly lensed galaxies is challenging because images of such galaxies are typically highly distorted. Lens modeling and source plane reconstruction is one approach that can provide reasonably undistorted images from which morphological measurements can be made, though at the expense of a highly spatially variable telescope point-spread function (PSF) when mapped back to the source plane. Unfortunately, modeling the lensing mass is a time- and resource-intensive process, and in many cases there are too few constraints to precisely model the lensing mass. If, however, useful morphological measurements could be made in the image plane rather than the source plane, it would bypass this issue and obviate the need for a source reconstruction process for some applications. We examine the use of the Gini coefficient as one such measurement. Because it depends on the cumulative distribution of the light of a galaxy, but not the relative spatial positions, the fact that surface brightness is conserved by lensing means that the Gini coefficient may be well preserved by strong gravitational lensing. Through simulations, we test the extent to which the Gini coefficient is conserved, including by effects due to PSF convolution and pixelization, to determine whether it is invariant enough under lensing to be used as a measurement of galaxy morphology that can be made in the image plane.

  1. X-ray propagation microscopy of biological cells using waveguides as a quasipoint source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giewekemeyer, K.; Krueger, S. P.; Kalbfleisch, S.

    2011-02-15

    We have used x-ray waveguides as highly confining optical elements for nanoscale imaging of unstained biological cells using the simple geometry of in-line holography. The well-known twin-image problem is effectively circumvented by a simple and fast iterative reconstruction. The algorithm, which combines elements of the classical Gerchberg-Saxton scheme and the hybrid input-output algorithm, is optimized for phase-contrast samples, an approximation well justified for imaging of cells at multi-keV photon energies. The experimental scheme allows for a quantitative phase reconstruction from a single holographic image without detailed knowledge of the complex illumination function incident on the sample, as demonstrated for freeze-dried cells of the eukaryotic amoeba Dictyostelium discoideum. The accessible resolution range is explored by simulations, indicating that resolutions on the order of 20 nm are within reach applying illumination times on the order of minutes at present synchrotron sources.

  2. Obtaining the phase in the star test using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Salazar Romero, Marcos A.; Vazquez-Montiel, Sergio; Cornejo-Rodriguez, Alejandro

    2004-10-01

    The star test is conceptually perhaps the most basic and simplest of all methods of testing image-forming optical systems; the irradiance distribution at the image of a point source (such as a star) is given by the Point Spread Function (PSF). The PSF is very sensitive to aberrations. One way to quantify the PSF is to measure the irradiance distribution in the image of the point source. On the other hand, if we know the aberrations introduced by the optical system, then using diffraction theory we can calculate the PSF. In this work we propose a method to find the wavefront aberrations starting from the PSF, transforming the problem of fitting an aberration polynomial into an optimization problem solved with a genetic algorithm. We also show that this method is robust to the noise introduced in the recording of the image. Results of these methods are shown.
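    As an illustrative sketch of this idea (not the authors' code), the snippet below builds a model PSF from a pupil carrying a few assumed Zernike-like aberration terms and recovers their coefficients by minimizing the mismatch to a "measured" PSF with SciPy's differential evolution, standing in for the genetic algorithm; grid size, aberration basis, and bounds are all hypothetical.

      import numpy as np
      from scipy.optimize import differential_evolution

      N = 64
      y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
      r, theta = np.hypot(x, y), np.arctan2(y, x)
      pupil = (r <= 1.0).astype(float)

      def psf_from_aberrations(coeffs):
          # Model PSF as |FFT(pupil * exp(i*phase))|^2 for defocus, astigmatism, coma.
          defocus, astig, coma = coeffs
          phase = (defocus * (2*r**2 - 1)
                   + astig * r**2 * np.cos(2*theta)
                   + coma * (3*r**3 - 2*r) * np.cos(theta))
          field = pupil * np.exp(1j * phase)
          psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
          return psf / psf.sum()

      # Simulated "measured" PSF; in practice this is the recorded star-test image.
      measured = psf_from_aberrations([0.8, -0.4, 0.2])

      cost = lambda c: np.sum((psf_from_aberrations(c) - measured)**2)
      result = differential_evolution(cost, bounds=[(-2, 2)] * 3, seed=0)
      print("recovered aberration coefficients:", result.x)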

  3. First imagery generated by near-field real-time aperture synthesis passive millimetre wave imagers at 94 GHz and 183 GHz

    NASA Astrophysics Data System (ADS)

    Salmon, Neil A.; Mason, Ian; Wilkinson, Peter; Taylor, Chris; Scicluna, Peter

    2010-10-01

    The first passive millimetre wave (PMMW) imagery is presented from two proof-of-concept aperture synthesis demonstrators, developed to investigate the use of aperture synthesis for personnel security screening and all-weather flying at 94 GHz, and satellite-based earth observation at 183 GHz [1]. Emission from point noise sources and discharge tubes is used to examine the coherence on system baselines and to measure the point spread functions, making comparisons with theory. Image quality is examined using near-field aperture synthesis and G-matrix calibration imaging algorithms. The radiometric sensitivity is measured using the emission from absorbers at elevated temperatures acting as extended sources and compared with theory. Capabilities of the latest Field Programmable Gate Array (FPGA) technologies for aperture synthesis PMMW imaging in all-weather and security screening applications are examined.

  4. Effects of photon noise on speckle image reconstruction with the Knox-Thompson algorithm. [in astronomy

    NASA Technical Reports Server (NTRS)

    Nisenson, P.; Papaliolios, C.

    1983-01-01

    An analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented. It is shown that the quantities resulting from the speckle average are biased, but that the biases are easily estimated and compensated. Calculations are also made of the convergence rate for the speckle average as a function of the source brightness. An illustration of the effects of photon noise on the image recovery process is included.

  5. Biobeam—Multiplexed wave-optical simulations of light-sheet microscopy

    PubMed Central

    Weigert, Martin; Bundschuh, Sebastian T.

    2018-01-01

    Sample-induced image-degradation remains an intricate wave-optical problem in light-sheet microscopy. Here we present biobeam, an open-source software package that enables simulation of operational light-sheet microscopes by combining data from 10⁵-10⁶ multiplexed and GPU-accelerated point-spread-function calculations. The wave-optical nature of these simulations leads to the faithful reproduction of spatially varying aberrations, diffraction artifacts, geometric image distortions, adaptive optics, and emergent wave-optical phenomena, and renders image-formation in light-sheet microscopy computationally tractable. PMID:29652879

  6. Structural and Functional Biomedical Imaging Using Polarization-Based Optical Coherence Tomography

    NASA Astrophysics Data System (ADS)

    Black, Adam J.

    Biomedical imaging has had an enormous impact in medicine and research. There are numerous imaging modalities covering a large range of spatial and temporal scales and penetration depths, along with indicators for function and disease. As these imaging technologies mature, the quality of the images they produce increases to resolve finer details with greater contrast at higher speeds, which aids a faster, more accurate diagnosis in the clinic. In this dissertation, polarization-based optical coherence tomography (OCT) systems are used and developed to image biological structure and function with greater speed, signal-to-noise ratio (SNR), and stability. OCT can image with spatial and temporal resolutions in the micro range. When imaging any sample, feedback is very important to verify the fidelity and desired location on the sample being imaged. To increase frame rates for display as well as data throughput, field-programmable gate arrays (FPGAs) were used with custom algorithms to realize real-time display and streaming output for continuous acquisition of large datasets from swept-source OCT systems. For spectral-domain (SD) OCT systems, significant increases in signal-to-noise ratio were achieved with a custom balanced-detection (BD) OCT system. The BD system doubled the measured signal while suppressing common-mode terms. For functional imaging, a real-time directed scanner was introduced to visualize the 3D image of a sample to identify regions of interest prior to recording. Elucidating the characteristics of functional OCT signals with the aid of simulations, novel processing methods were also developed to stabilize samples being imaged and to identify possible origins of the functional signals being measured. Polarization-sensitive OCT was used to image cardiac tissue before and after clearing to identify the regions of vascular perfusion from a coronary artery. The resulting 3D image provides a visualization of the perfusion boundaries for the tissue that would be damaged by a myocardial infarction, to possibly identify features that lead to fatal cardiac arrhythmias. 3D functional imaging was used to measure functional retinal activity from a light stimulus. In some cases, single-trial responses were possible, measured at the outer segment of the photoreceptor layer. The morphology and time course of these signals are similar to the intrinsic optical signals reported from phototransduction. Assessing function in the retina could aid in early detection of degenerative diseases of the retina, such as glaucoma and macular degeneration.

  7. Analyzing microtomography data with Python and the scikit-image library.

    PubMed

    Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan

    2017-01-01

    The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
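    A minimal sketch of the kind of workflow the abstract describes, assuming scikit-image is installed and using a hypothetical file name: denoise a slice, segment it by Otsu thresholding, then label and measure the resulting regions.

      import numpy as np
      from skimage import io, filters, measure, restoration

      slice_img = io.imread("tomo_slice.tif").astype(float)   # hypothetical input file

      # Restoration and segmentation.
      denoised = restoration.denoise_tv_chambolle(slice_img, weight=0.1)
      binary = denoised > filters.threshold_otsu(denoised)

      # Label connected components and report per-region measurements.
      labels = measure.label(binary)
      for region in measure.regionprops(labels):
          print(region.label, region.area, region.centroid)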

  8. On-Orbit Lunar Modulation Transfer Function Measurements for the Moderate Resolution Imaging Spectroradiometer

    NASA Technical Reports Server (NTRS)

    Choi, Taeyong; Xiong, Xiaoxiong; Wang, Zhipeng

    2013-01-01

    Spatial quality of an imaging sensor can be estimated by evaluating its modulation transfer function (MTF) from many different sources such as a sharp edge, a pulse target, or bar patterns with different spatial frequencies. These well-defined targets are frequently used for prelaunch laboratory tests, providing very reliable and accurate MTF measurements. A laboratory-quality edge input source was included in the spatial-mode operation of the Spectroradiometric Calibration Assembly (SRCA), which is one of the onboard calibrators of the Moderate Resolution Imaging Spectroradiometer (MODIS). Since not all imaging satellites have such an instrument, SRCA MTF estimations can be used as a reference for an on-orbit lunar MTF algorithm and results. In this paper, the prelaunch spatial quality characterization process from the Integrated Alignment Collimator and SRCA is briefly discussed. Based on prelaunch MTF calibration using the SRCA, a lunar MTF algorithm is developed and applied to the lifetime on-orbit Terra and Aqua MODIS lunar collections. In each lunar collection, multiple scan-direction Moon-to-background transition profiles are aligned by the subpixel edge locations from a parametric Fermi function fit. Corresponding accumulated edge profiles are filtered and interpolated to obtain the edge spread function (ESF). The MTF is calculated by applying a Fourier transformation to the line spread function, obtained through a simple differentiation of the ESF. The lifetime lunar MTF results are analyzed and filtered by a relationship with the Sun-Earth-MODIS angle. Finally, the filtered lunar MTF values are compared to the SRCA MTF results. This comparison provides the level of accuracy for on-orbit MTF estimations validated through prelaunch SRCA measurements. The lunar MTF values had larger uncertainty than the SRCA MTF results; however, the ratio mean of the lunar MTF fit and SRCA MTF values is within 2% in the 250- and 500-m bands. Based on the MTF measurement uncertainty range, the suggested lunar MTF algorithm can be applied to any on-orbit imaging sensor with lunar calibration capability.
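    The core ESF-to-MTF step described above can be sketched in a few lines: differentiate an oversampled edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform and normalize at zero frequency. The synthetic ESF and sample spacing below are illustrative only.

      import numpy as np

      dx = 0.1                                  # sub-pixel sample spacing (assumed)
      x = np.arange(-20, 20, dx)
      esf = 0.5 * (1 + np.tanh(x / 1.5))        # stand-in for the measured edge profile

      lsf = np.gradient(esf, dx)                # line spread function
      lsf /= lsf.sum()

      mtf = np.abs(np.fft.rfft(lsf))            # MTF = |FFT of LSF|
      mtf /= mtf[0]                             # normalize at zero spatial frequency
      freq = np.fft.rfftfreq(lsf.size, d=dx)    # spatial frequency axis (cycles/pixel)
      print(freq[:5], mtf[:5])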

  9. Depth-encoded all-fiber swept source polarization sensitive OCT

    PubMed Central

    Wang, Zhao; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Lee, ByungKun; Choi, WooJhon; Potsaid, Benjamin; Liu, Jonathan; Jayaraman, Vijaysekhar; Cable, Alex; Kraus, Martin F.; Liang, Kaicheng; Hornegger, Joachim; Fujimoto, James G.

    2014-01-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of conventional OCT and can assess depth-resolved tissue birefringence in addition to intensity. Most existing PS-OCT systems are relatively complex and their clinical translation remains difficult. We present a simple and robust all-fiber PS-OCT system based on swept source technology and polarization depth-encoding. Polarization multiplexing was achieved using a polarization maintaining fiber. Polarization sensitive signals were detected using fiber based polarization beam splitters, and polarization controllers were used to remove the polarization ambiguity. A simplified post-processing algorithm was proposed for speckle noise reduction, relaxing the demand for phase stability. We demonstrated system designs for both ophthalmic and catheter-based PS-OCT. For ophthalmic imaging, we used an optical clock frequency doubling method to extend the imaging range of a commercially available short cavity light source to improve polarization depth-encoding. For catheter based imaging, we demonstrated 200 kHz PS-OCT imaging using a MEMS-tunable vertical cavity surface emitting laser (VCSEL) and a high speed micromotor imaging catheter. The system was demonstrated in human retina, finger and lip imaging, as well as ex vivo swine esophagus and cardiovascular imaging. The all-fiber PS-OCT is easier to implement and maintain compared to previous PS-OCT systems and can be more easily translated to clinical applications due to its robust design. PMID:25401008

  10. Experimentally determining the locations of two astigmatic images for an underwater light source

    NASA Astrophysics Data System (ADS)

    Yang, Pao-Keng; Liu, Jian-You; Ying, Shang-Ping

    2015-05-01

    Images formed by an underwater object from light rays refracted in the sagittal and tangential planes are located at different positions for an oblique viewing position. The overlapping of these two images from the observer's perspective will thus prevent the image-splitting astigmatism from being directly observable. In this work, we present a heuristic method to experimentally visualize the astigmatism. A point light source is used as an underwater object and the emerging wave front is recorded using a Shack-Hartmann wave-front sensor. The wave front is found to deform from a circular paraboloid to an elliptic paraboloid as the viewing position changes from normal to oblique. Using geometric optics, we derive an analytical expression for the image position as a function of the rotating angle of an arm used to carry the wave-front sensor in our experimental setup. The measured results are seen to be in good agreement with the theoretical predictions.

  11. Retrieval of Garstang's emission function from all-sky camera images

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František

    2015-10-01

    The emission function from ground-based light sources predetermines the skyglow features to a large extent, while most mathematical models used to predict the night sky brightness require information on this function. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using the theoretical approach published only recently in MNRAS, 439, 3405-3413. We have made the experiments in two localities in Slovakia and Mexico by means of two digital single-lens reflex professional cameras operating with different lenses that limit the system's field-of-view to either 180° or 167°. The purpose of using two cameras was to identify variances between the two different apertures. Images are taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance relative to horizontal irradiance. Subsequently, the information on the fraction of the light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can properly identify the upward emissions with adequate reliability as long as the clear sky radiance distribution is dominated by the largest ground-based light source. Highly unstable turbidity conditions can also make the parameter F difficult or even impossible to retrieve. Measurements at low elevation angles should be avoided due to a potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.

  12. [Research on Time-frequency Characteristics of Magneto-acoustic Signal of Different Thickness Medium Based on Wave Summing Method].

    PubMed

    Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng

    2015-08-01

    Functional imaging of biological electrical characteristics based on the magneto-acoustic effect gives valuable information about tissue for early tumor diagnosis, wherein time- and frequency-domain analysis of the magneto-acoustic signal is important for image reconstruction. This paper proposes a wave-summing method based on the Green's function solution for the acoustic source of the magneto-acoustic effect. Simulations and analysis under a quasi-1D transmission condition are carried out on the time and frequency characteristics of the magneto-acoustic signal for models of different thickness. Simulation results of the magneto-acoustic signal were verified through experiments. Results of the simulations with different thicknesses showed that the time-frequency characteristics of the magneto-acoustic signal reflect the thickness of the sample. Thin samples, which are less than one wavelength of the pulse, and thick samples, which are larger than one wavelength, showed different summed waveforms and frequency characteristics, owing to the difference in summing thickness. Experimental results verified the theoretical analysis and simulation results. This research has laid a foundation for acoustic-source and conductivity reconstruction for media of different thickness in magneto-acoustic imaging.
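    The wave-summing idea can be illustrated with a simple numerical analogue (not the authors' code; sound speed, pulse shape, and thicknesses are assumed): the received signal is modeled as the sum of delayed pulses emitted from source points spread across the sample thickness, so sources in a thin sample add nearly in phase while those in a thick sample do not.

      import numpy as np

      c = 1500.0                       # assumed sound speed, m/s
      fs = 20e6                        # sampling rate, Hz
      t = np.arange(0, 20e-6, 1 / fs)

      def pulse(t0, f0=1e6):
          # Short tone burst emitted at time t0 (hypothetical source waveform).
          return np.exp(-((t - t0) * f0 * 2) ** 2) * np.sin(2 * np.pi * f0 * (t - t0))

      def summed_signal(thickness, n_sources=200):
          depths = np.linspace(0, thickness, n_sources)
          return sum(pulse(d / c) for d in depths) / n_sources

      thin = summed_signal(0.5e-3)     # thinner than one wavelength (~1.5 mm at 1 MHz)
      thick = summed_signal(5e-3)      # several wavelengths thick
      print(thin.max(), thick.max())   # the thin sample sums more coherently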

  13. A Complete Public Archive for the Einstein Imaging Proportional Counter

    NASA Technical Reports Server (NTRS)

    Helfand, David J.

    1996-01-01

    Consistent with our proposal to the Astrophysics Data Program in 1992, we have completed the design, construction, documentation, and distribution of a flexible and complete archive of the data collected by the Einstein Imaging Proportional Counter. Along with software and data delivered to the High Energy Astrophysics Science Archive Research Center at Goddard Space Flight Center, we have compiled and, where appropriate, published catalogs of point sources, soft sources, hard sources, extended sources, and transient flares detected in the database along with extensive analyses of the instrument's backgrounds and other anomalies. We include in this document a brief summary of the archive's functionality, a description of the scientific catalogs and other results, a bibliography of publications supported in whole or in part under this contract, and a list of personnel whose pre- and post-doctoral education consisted in part in participation in this project.

  14. On an image reconstruction method for ECT

    NASA Astrophysics Data System (ADS)

    Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro

    2007-04-01

    An image obtained by Eddy Current Testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source is described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method with deconvolution. In this method, the Point Spread Function (PSF) and Line Spread Function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc). Those data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the responses of the holes interfered. Also, the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by many results in which a much finer image than the original was reconstructed.
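    A one-dimensional sketch of the deconvolution step (assumed data and spread function, not the authors' implementation): the measured profile is modeled as the flaw shape convolved with an LSF, and a Wiener-type inverse filter recovers a sharper flaw estimate.

      import numpy as np

      x = np.arange(200)
      flaw = ((x > 80) & (x < 95)).astype(float)        # hypothetical line-flaw profile
      lsf = np.exp(-0.5 * ((x - 100) / 8.0) ** 2)       # assumed line spread function
      lsf /= lsf.sum()

      # Forward model: circular convolution plus a little measurement noise.
      H = np.fft.fft(np.fft.ifftshift(lsf))
      blurred = np.real(np.fft.ifft(np.fft.fft(flaw) * H))
      blurred += np.random.default_rng(0).normal(0, 0.01, x.size)

      # Wiener deconvolution in the frequency domain.
      snr = 100.0                                       # assumed signal-to-noise ratio
      G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
      restored = np.real(np.fft.ifft(np.fft.fft(blurred) * G))
      print(restored.argmax(), restored.max())          # peak near the true flaw location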

  15. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2004-12-01

    This paper describes the application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces the correlation between the spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation (AM) of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.

  16. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2005-01-01

    This paper describes the application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces the correlation between the spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation (AM) of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
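    The orthogonal-AM correlation described in the two records above can be mimicked in software (illustrative parameters; the real system performs the demodulation on the correlation image sensor): each spectral channel is modulated at its own carrier frequency, and synchronous demodulation weighted by a reference spectrum approximates the spectral correlation.

      import numpy as np

      fs = 10_000.0
      t = np.arange(0, 1.0, 1 / fs)
      carriers = np.array([100.0, 150.0, 200.0])    # one carrier per spectral channel (assumed)
      spectrum = np.array([0.2, 0.9, 0.5])          # object's spectral function (assumed)
      reference = np.array([0.1, 1.0, 0.4])         # reference spectral function (assumed)

      # Each channel is amplitude-modulated at its own carrier; the detector sums them.
      signal = sum(s * (1 + np.sin(2 * np.pi * f * t))
                   for s, f in zip(spectrum, carriers))

      # Synchronous demodulation of each channel, weighted by the reference,
      # approximates the spectral correlation sum(spectrum * reference).
      demod = sum(r * 2 * np.mean(signal * np.sin(2 * np.pi * f * t))
                  for r, f in zip(reference, carriers))
      print(demod, np.dot(spectrum, reference))     # the two values should agree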

  17. Imaging of neural oscillations with embedded inferential and group prevalence statistics.

    PubMed

    Donhauser, Peter W; Florin, Esther; Baillet, Sylvain

    2018-02-01

    Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors for source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience.

  18. Imaging of neural oscillations with embedded inferential and group prevalence statistics

    PubMed Central

    2018-01-01

    Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors for source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience. PMID:29408902

  19. ASTROPOP: ASTROnomical Polarimetry and Photometry pipeline

    NASA Astrophysics Data System (ADS)

    Campagnolo, Julio C. N.

    2018-05-01

    AstroPoP reduces almost any CCD photometry and image polarimetry data. For photometry reduction, the code performs source finding, aperture and PSF photometry, astrometry calibration using different automated and non-automated methods, and automated source identification and magnitude calibration based on online and local catalogs. For polarimetry, the code resolves linear and circular Stokes parameters produced by image beam splitter or polarizer polarimeters. In addition to the modular functions, ready-to-use pipelines based on configuration files and header keys are also provided with the code. AstroPOP was initially developed to reduce data from the IAGPOL polarimeter installed at Observatório Pico dos Dias (Brazil).

  20. Integrated semiconductor optical sensors for chronic, minimally-invasive imaging of brain function.

    PubMed

    Lee, Thomas T; Levi, Ofer; Cang, Jianhua; Kaneko, Megumi; Stryker, Michael P; Smith, Stephen J; Shenoy, Krishna V; Harris, James S

    2006-01-01

    Intrinsic optical signal (IOS) imaging is a widely accepted technique for imaging brain activity. We propose an integrated device consisting of interleaved arrays of gallium arsenide (GaAs) based semiconductor light sources and detectors operating at telecommunications wavelengths in the near-infrared. Such a device will allow for long-term, minimally invasive monitoring of neural activity in freely behaving subjects, and will enable the use of structured illumination patterns to improve system performance. In this work we describe the proposed system and show that near-infrared IOS imaging at wavelengths compatible with semiconductor devices can produce physiologically significant images in mice, even through skull.

  1. Integrated software environment based on COMKAT for analyzing tracer pharmacokinetics with molecular imaging.

    PubMed

    Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F

    2010-01-01

    An integrated software package, Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research and has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list, or create a novel model and perform parameter estimation, all without having to write any computer code. For image analysis, COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in COMKAT image tool. It also displays and automatically registers images from 2 modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speeds. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.

  2. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    NASA Astrophysics Data System (ADS)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

    This paper presents easily accessible, integrated web-based analysis of satellite images with plug-in based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use case scenario, we describe the ILWIS software and its toolbox to access satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine, without much effort, their own data with remotely available data and processing functionality. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are often a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS & Remote Sensing software package, comprising a complete suite of image processing, spatial analysis and digital mapping, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into a modular, plug-in-based open source software, and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-) services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software using the GPL license. GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes satellite images available via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins that convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data stored locally on their machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of plug-ins, and we outline our plans to implement other OGC standards, such as WCS and WPS, in the same context. In particular, the latter can be seen as a major step forward in terms of moving well-proven desktop-based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures or even execution in scalable and on-demand cloud computing environments.

  3. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    NASA Astrophysics Data System (ADS)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
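    For reference, a small sketch of the Sersic surface-brightness profile used for the lens-light model; the formula is standard, with a common approximation for b_n, and the parameter values below are arbitrary.

      import numpy as np

      def sersic(r, I_e, r_e, n):
          # Surface brightness at radius r for effective intensity I_e,
          # effective radius r_e, and Sersic index n.
          b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)   # common asymptotic approximation
          return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

      r = np.linspace(0.1, 10.0, 5)
      print(sersic(r, I_e=1.0, r_e=2.0, n=4.0))           # de Vaucouleurs-like profile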

  4. THE ALLEN TELESCOPE ARRAY Pi GHz SKY SURVEY. III. THE ELAIS-N1, COMA, AND LOCKMAN HOLE FIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Steve; Bower, Geoffrey C.; Whysong, David

    2013-01-10

    We present results from a total of 459 repeated 3.1 GHz radio continuum observations (of which 379 were used in a search for transient sources) of the ELAIS-N1, Coma, Lockman Hole, and NOAO Deep Wide Field Survey fields as part of the Pi GHz Sky Survey. The observations were taken approximately once per day between 2009 May and 2011 April. Each image covers 11.8 square degrees and has 100'' FWHM resolution. Deep images for each of the four fields have rms noise between 180 and 310 μJy, and the corresponding catalogs contain ~200 sources in each field. Typically 40-50 of these sources are detected in each single-epoch image. This represents one of the shortest cadence, largest area, multi-epoch surveys undertaken at these frequencies. We compare the catalogs generated from the combined images to those from individual epochs, and from monthly averages, as well as to legacy surveys. We undertake a search for transients, with particular emphasis on excluding false positive sources. We find no confirmed transients, defined here as sources that can be shown to have varied by at least a factor of 10. However, we find one source that brightened in a single-epoch image to at least six times the upper limit from the corresponding deep image. We also find a source associated with a z = 0.6 quasar which appears to have brightened by a factor of ~3 in one of our deep images, when compared to catalogs from legacy surveys. We place new upper limits on the number of transients brighter than 10 mJy: fewer than 0.08 transients deg⁻² with characteristic timescales of months to years; fewer than 0.02 deg⁻² with timescales of months; and fewer than 0.009 deg⁻² with timescales of days. We also plot upper limits as a function of flux density for transients on the same timescales.

  5. Fluid Registration of Diffusion Tensor Images Using Information Theory

    PubMed Central

    Chiang, Ming-Chang; Leow, Alex D.; Klunder, Andrea D.; Dutton, Rebecca A.; Barysheva, Marina; Rose, Stephen E.; McMahon, Katie L.; de Zubicaray, Greig I.; Toga, Arthur W.; Thompson, Paul M.

    2008-01-01

    We apply an information-theoretic cost metric, the symmetrized Kullback-Leibler (sKL) divergence, or J-divergence, to fluid registration of diffusion tensor images. The difference between diffusion tensors is quantified based on the sKL-divergence of their associated probability density functions (PDFs). Three-dimensional DTI data from 34 subjects were fluidly registered to an optimized target image. To allow large image deformations but preserve image topology, we regularized the flow with a large-deformation diffeomorphic mapping based on the kinematics of a Navier-Stokes fluid. A driving force was developed to minimize the J-divergence between the deforming source and target diffusion functions, while reorienting the flowing tensors to preserve fiber topography. In initial experiments, we showed that the sKL-divergence based on full diffusion PDFs is adaptable to higher-order diffusion models, such as high angular resolution diffusion imaging (HARDI). The sKL-divergence was sensitive to subtle differences between two diffusivity profiles, showing promise for nonlinear registration applications and multisubject statistical analysis of HARDI data. PMID:18390342
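    For the zero-mean Gaussian displacement PDFs defined by two diffusion tensors, the symmetrized KL divergence has a simple closed form (the log-determinant terms cancel), which a short sketch can compute; the tensor values below are arbitrary, and some papers include an extra factor of 1/2 in the symmetrization.

      import numpy as np

      def skl_divergence(D1, D2):
          # J(D1, D2) = 0.5 * [tr(D2^-1 D1) + tr(D1^-1 D2)] - n for zero-mean Gaussians.
          n = D1.shape[0]
          inv1, inv2 = np.linalg.inv(D1), np.linalg.inv(D2)
          return 0.5 * (np.trace(inv2 @ D1) + np.trace(inv1 @ D2)) - n

      D1 = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # prolate tensor (assumed units of mm^2/s)
      D2 = np.diag([1.0e-3, 1.0e-3, 1.0e-3])   # isotropic tensor
      print(skl_divergence(D1, D2))            # zero only when D1 == D2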

  6. Optical coherence tomography imaging based on non-harmonic analysis

    NASA Astrophysics Data System (ADS)

    Cao, Xu; Hirobayashi, Shigeki; Chong, Changho; Morosawa, Atsushi; Totsuka, Koki; Suzuki, Takuya

    2009-11-01

    A new processing technique called Non-Harmonic Analysis (NHA) is proposed for OCT imaging. Conventional Fourier-domain OCT relies on the FFT calculation, which depends on the window function and length. Axial resolution is inversely proportional to the frame length of the FFT, which is limited by the swept range of the swept source in SS-OCT or by the pixel count of the CCD in SD-OCT, and is therefore degraded in FD-OCT. However, the NHA process is intrinsically free from this trade-off; NHA can resolve high frequencies without being influenced by the window function or the frame length of the sampled data. In this study, the NHA process is explained, applied to OCT imaging, and compared with OCT images based on the FFT. In order to validate the benefit of NHA in OCT, we carried out OCT imaging based on NHA with three different samples: onion skin, human skin, and pig eye. The results show that the NHA process can realize a practical image resolution equivalent to that of a 100 nm swept range while using less than half of that wavelength range.

  7. Open-Source, Web-Based Dashboard Components for DICOM Connectivity.

    PubMed

    Bustamante, Catalina; Pineda, Julian; Rascovsky, Simon; Arango, Andres

    2016-08-01

    The administration of a DICOM network within an imaging healthcare institution requires tools that allow for monitoring of connectivity and availability for adequate uptime measurements and help guide technology management strategies. We present the implementation of an open-source widget for the Dashing framework that provides basic dashboard functionality allowing for monitoring of a DICOM network using network "ping" and DICOM "C-ECHO" operations.
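    A minimal connectivity probe in the same spirit (not the Dashing widget itself), assuming the pynetdicom library; the host, port, and AE titles are hypothetical.

      from pynetdicom import AE

      ae = AE(ae_title="MONITOR")
      # Request the Verification SOP Class ("C-ECHO") by UID.
      ae.add_requested_context("1.2.840.10008.1.1")

      assoc = ae.associate("pacs.example.org", 104, ae_title="ARCHIVE")
      if assoc.is_established:
          status = assoc.send_c_echo()
          print("C-ECHO status: 0x{:04X}".format(status.Status))
          assoc.release()
      else:
          print("DICOM association failed")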

  8. Image change detection systems, methods, and articles of manufacture

    DOEpatents

    Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.

    2010-01-05

    Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, and switching the display between the source image and the target image on the display device to enable identification of differences between the source image and the target image.

  9. Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.T.C.; Knowlton, R.; Hoo, K.S.

    1995-12-31

    Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets in order to improve clinical decision making is lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the non-invasive presurgical evaluation of epilepsy patients. These techniques include on-line access to the picture archiving and communication systems (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantification of the structural and functional information contained in the registered images. For illustration, the authors describe the use of these techniques in a patient case of non-lesional neocortical epilepsy. They also present future work based on preliminary studies.

  10. Coded aperture imaging with self-supporting uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.

    1983-01-01

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput. The balance correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
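    The balanced-correlation decoding can be illustrated with a toy example (a random mask stands in for a true URA, and all sizes are arbitrary): the detector image from a point source is a shifted copy of the aperture pattern, and cross-correlating it with the +1/-1 decoding array peaks at the source offset.

      import numpy as np
      from scipy.signal import correlate2d

      rng = np.random.default_rng(1)
      mask = rng.integers(0, 2, size=(11, 11)).astype(float)   # stand-in aperture (not a true URA)
      decoder = np.where(mask > 0, 1.0, -1.0)                   # balanced array: holes +1, non-holes -1

      # Detector image from a single point source: a cyclically shifted mask plus noise.
      detector = np.roll(mask, shift=(2, 3), axis=(0, 1))
      detector += rng.normal(0, 0.05, mask.shape)

      recon = correlate2d(detector, decoder, mode="same", boundary="wrap")
      print(np.unravel_index(recon.argmax(), recon.shape))      # peak offset ~ source position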

  11. Estimating Slopes In Images Of Terrain By Use Of BRDF

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1995-01-01

    Proposed method of estimating slopes of terrain features based on use of bidirectional reflectivity distribution function (BRDF) in analyzing aerial photographs, satellite video images, or other images produced by remote sensors. Estimated slopes integrated along horizontal coordinates to obtain estimated heights; generating three-dimensional terrain maps. Method does not require coregistration of terrain features in pairs of images acquired from slightly different perspectives nor requires Sun or other source of illumination to be low in sky over terrain of interest. On contrary, best when Sun is high. Works at almost all combinations of illumination and viewing angles.

  12. Reflective type objective based spectral-domain phase-sensitive optical coherence tomography for high-sensitive structural and functional imaging of cochlear microstructures through intact bone of an excised guinea pig cochlea

    NASA Astrophysics Data System (ADS)

    Subhash, Hrebesh M.; Wang, Ruikang K.; Chen, Fangyi; Nuttall, Alfred L.

    2013-03-01

    Most optical coherence tomographic (OCT) systems for high-resolution imaging of biological specimens are based on refractive-type microscope objectives, which are optimized for a specific wavelength of the optical source. In this study, we present the feasibility of using a commercially available reflective-type objective for high-sensitivity, high-resolution structural and functional imaging of cochlear microstructures of an excised guinea pig cochlea through the intact temporal bone. Unlike conventional refractive-type microscope objectives, reflective objectives are free from chromatic aberrations due to their all-reflecting nature and can support a broad spectral band with very high light collection efficiency.

  13. IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München

    2015-02-01

    I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
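    The low-count bias mentioned above can be demonstrated schematically (this is not IMFIT code; the model, amplitude, and noise realization are assumed): fit a single amplitude to Poisson data with a data-weighted chi-squared statistic and with a Poisson (Cash-like) likelihood statistic.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(2)
      profile = np.exp(-0.5 * (np.arange(100) - 50) ** 2 / 10.0 ** 2)   # assumed model shape
      data = rng.poisson(3.0 * profile)                                  # true amplitude = 3.0

      def chi2(a):
          # Chi-squared with per-pixel Gaussian errors estimated from the data.
          sigma2 = np.maximum(data, 1.0)
          return np.sum((data - a * profile) ** 2 / sigma2)

      def poisson_nll(a):
          # Poisson maximum-likelihood (Cash-like) statistic, constants dropped.
          m = np.maximum(a * profile, 1e-12)
          return 2.0 * np.sum(m - data * np.log(m))

      for stat in (chi2, poisson_nll):
          best = minimize_scalar(stat, bounds=(0.1, 10.0), method="bounded")
          print(stat.__name__, best.x)   # chi2 tends to be biased low at low counts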

  14. Singular value decomposition metrics show limitations of detector design in diffuse fluorescence tomography

    PubMed Central

    Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.

    2010-01-01

    The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566
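    The singular-value analysis can be sketched with a stand-in sensitivity matrix whose spectrum is prescribed to decay (all sizes and the noise level are assumed); counting singular values above a noise floor indicates how many image modes a given geometry can actually support.

      import numpy as np

      rng = np.random.default_rng(3)
      m, n = 128, 1000                                  # measurements x voxels (assumed)
      Q1, _ = np.linalg.qr(rng.normal(size=(m, m)))
      Q2, _ = np.linalg.qr(rng.normal(size=(n, m)))
      s_true = np.exp(-np.arange(m) / 15.0)             # prescribed, ill-conditioned spectrum
      J = Q1 @ np.diag(s_true) @ Q2.T                   # stand-in forward (sensitivity) matrix

      s = np.linalg.svd(J, compute_uv=False)
      noise_floor = 1e-3 * s[0]                         # assumed relative noise level
      usable = int(np.sum(s > noise_floor))
      print(usable, "of", s.size, "singular modes above the noise floor")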

  15. Improvements in Speed and Functionality of a 670-GHz Imaging Radar

    NASA Technical Reports Server (NTRS)

    Dengler, Robert J.; Cooper, Ken B.; Mehdi, Imran; Siegel, Peter H.; Tarsala, Jan A.; Bryllert, Thomas E.

    2011-01-01

    Significant improvements have been made in the instrument originally described in a prior NASA Tech Briefs article: Improved Speed and Functionality of a 580-GHz Imaging Radar (NPO-45156), Vol. 34, No. 7 (July 2010), p. 51. First, the wideband YIG oscillator has been replaced with a JPL-designed and built phase-locked, low-noise chirp source. Second, further refinements to the data acquisition and signal processing software have been made by moving critical code sections to C code and compiling those sections to Windows DLLs, which are then invoked from the main LabVIEW executive. This system is an active, single-pixel scanned imager operating at 670 GHz. The actual chirp signals for the RF and LO chains were generated by a pair of MITEQ 2.5-3.3 GHz chirp sources. Agilent benchtop synthesizers operating at fixed frequencies around 13 GHz were then used to up-convert the chirp sources to 15.5-16.3 GHz. The resulting signals were then multiplied 36 times by a combination of off-the-shelf millimeter-wave components and JPL-built 200-GHz doublers and 300- and 600-GHz triplers. The power required to drive the submillimeter-wave multipliers was provided by JPL-built W-band amplifiers. The receive and transmit signal paths were combined using a thin, high-resistivity silicon wafer as a beam splitter. While the results at present are encouraging, the system still lacks sufficient speed to be usable for practical applications in contraband detection. Ideally, an image acquisition speed of ten seconds, or a factor of 30 improvement, is desired. However, the system improvements to date have resulted in a factor of five increase in signal acquisition speed, as well as enhanced signal processing algorithms, permitting clearer imaging of contraband objects hidden underneath clothing. In particular, advances in three distinct areas have enabled these performance enhancements: base source phase noise reduction, chirp rate, and signal processing. Additionally, a second pixel was added, automatically reducing the imaging time by a factor of two. Although adding a second pixel to the system doubles the amount of submillimeter components required, some savings in microwave hardware can be realized by using a common low-noise source.

  16. Electrical Neuroimaging of Music Processing Reveals Mid-Latency Changes with Level of Musical Expertise

    PubMed Central

    James, Clara E.; Oechslin, Mathias S.; Michel, Christoph M.; De Pretto, Michael

    2017-01-01

    This original research focused on the effect of musical training intensity on cerebral and behavioral processing of complex music using high-density event-related potential (ERP) approaches. Recently we have been able to show progressive changes with training in gray and white matter, and higher order brain functioning using (f)MRI [(functional) Magnetic Resonance Imaging], as well as changes in musical and general cognitive functioning. The current study investigated the same population of non-musicians, amateur pianists and expert pianists using spatio-temporal ERP analysis, by means of microstate analysis, and ERP source imaging. The stimuli consisted of complex musical compositions containing three levels of transgression of musical syntax at closure that participants appraised. ERP waveforms, microstates and underlying brain sources revealed gradual differences according to musical expertise in a 300–500 ms window after the onset of the terminal chords of the pieces. Within this time-window, processing seemed to concern context-based memory updating, indicated by a P3b-like component or microstate for which underlying sources were localized in the right middle temporal gyrus, anterior cingulate and right parahippocampal areas. Given that the 3 expertise groups were carefully matched for demographic factors, these results provide evidence of the progressive impact of training on brain and behavior. PMID:29163017

  17. Electrical Neuroimaging of Music Processing Reveals Mid-Latency Changes with Level of Musical Expertise.

    PubMed

    James, Clara E; Oechslin, Mathias S; Michel, Christoph M; De Pretto, Michael

    2017-01-01

    This original research focused on the effect of musical training intensity on cerebral and behavioral processing of complex music using high-density event-related potential (ERP) approaches. Recently we have been able to show progressive changes with training in gray and white matter, and higher order brain functioning using (f)MRI [(functional) Magnetic Resonance Imaging], as well as changes in musical and general cognitive functioning. The current study investigated the same population of non-musicians, amateur pianists and expert pianists using spatio-temporal ERP analysis, by means of microstate analysis, and ERP source imaging. The stimuli consisted of complex musical compositions containing three levels of transgression of musical syntax at closure that participants appraised. ERP waveforms, microstates and underlying brain sources revealed gradual differences according to musical expertise in a 300-500 ms window after the onset of the terminal chords of the pieces. Within this time-window, processing seemed to concern context-based memory updating, indicated by a P3b-like component or microstate for which underlying sources were localized in the right middle temporal gyrus, anterior cingulate and right parahippocampal areas. Given that the 3 expertise groups were carefully matched for demographic factors, these results provide evidence of the progressive impact of training on brain and behavior.

  18. Analyser-based phase contrast image reconstruction using geometrical optics.

    PubMed

    Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A

    2007-07-21

    Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 μm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
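
    The rocking-curve fitting step described above lends itself to a short illustration: the sketch below fits one common parameterization of a symmetric Pearson type VII profile to a rocking curve using SciPy. The synthetic angle grid, noise level, and starting values are assumptions for illustration and do not reproduce the authors' implementation.

    ```python
    # Minimal sketch: fit a symmetric Pearson VII profile to an analyser
    # rocking curve with SciPy. The "measured" data below are synthetic
    # stand-ins for a real rocking curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def pearson_vii(theta, amplitude, center, width, shape):
        """One common parameterization of the symmetric Pearson VII profile."""
        return amplitude / (1.0 + ((theta - center) / width) ** 2) ** shape

    # Synthetic rocking curve (angle in arcsec vs. normalized intensity)
    theta = np.linspace(-20.0, 20.0, 201)
    measured = pearson_vii(theta, 1.0, 0.5, 4.0, 1.8)
    measured += np.random.normal(scale=0.01, size=theta.size)

    # Initial guess: peak height, peak position, half-width, shape exponent
    p0 = [measured.max(), theta[measured.argmax()], 5.0, 1.5]
    popt, pcov = curve_fit(pearson_vii, theta, measured, p0=p0)
    print("fitted (amplitude, center, width, shape):", popt)
    ```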

  19. Mesoscale brain explorer, a flexible python-based image analysis and visualization tool.

    PubMed

    Haupt, Dirk; Vanni, Matthieu P; Bolanos, Federico; Mitelut, Catalin; LeDue, Jeffrey M; Murphy, Tim H

    2017-07-01

    Imaging of mesoscale brain activity is used to map interactions between brain regions. This work has benefited from the pioneering studies of Grinvald et al., who employed optical methods to image brain function by exploiting the properties of intrinsic optical signals and small molecule voltage-sensitive dyes. Mesoscale interareal brain imaging techniques have been advanced by cell-targeted and selective recombinant indicators of neuronal activity. Spontaneous resting state activity is often collected during mesoscale imaging to provide the basis for mapping of connectivity relationships using correlation. However, the information content of mesoscale datasets is vast and is only superficially presented in manuscripts given the need to constrain measurements to a fixed set of frequencies, regions of interest, and other parameters. We describe a new open-source tool written in Python, termed mesoscale brain explorer (MBE), which provides an interface to process and explore these large datasets. The platform supports automated image processing pipelines with the ability to assess multiple trials and combine data from different animals. The tool provides functions for temporal filtering, averaging, and visualization of functional connectivity relations using time-dependent correlation. Here, we describe the tool and show applications in which previously published datasets were reanalyzed using MBE.
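
    To make the correlation mapping step concrete, here is a minimal sketch of seed-based functional connectivity on a mesoscale image stack (frames × height × width). It illustrates only the time-dependent correlation idea and is not the MBE API; the stack dimensions and random data are placeholders.

    ```python
    # Minimal sketch of seed-based connectivity mapping on a mesoscale
    # image stack; every pixel's time course is correlated with the seed.
    import numpy as np

    def seed_correlation_map(stack, seed_row, seed_col):
        """Pearson correlation of each pixel's time course with the seed pixel."""
        frames = stack.reshape(stack.shape[0], -1).astype(float)
        frames -= frames.mean(axis=0)                      # remove temporal mean per pixel
        seed = frames[:, seed_row * stack.shape[2] + seed_col]
        denom = np.linalg.norm(frames, axis=0) * np.linalg.norm(seed)
        corr = frames.T @ seed / np.where(denom == 0, 1.0, denom)
        return corr.reshape(stack.shape[1], stack.shape[2])

    # Random data standing in for a resting-state recording (600 frames, 64x64)
    stack = np.random.rand(600, 64, 64)
    cmap = seed_correlation_map(stack, seed_row=32, seed_col=20)
    print(cmap.shape, cmap[32, 20])   # the seed correlates perfectly with itself
    ```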

  20. A Review on the Bioinformatics Tools for Neuroimaging

    PubMed Central

    MAN, Mei Yen; ONG, Mei Sin; Mohamad, Mohd Saberi; DERIS, Safaai; SULONG, Ghazali; YUNUS, Jasmy; CHE HARUN, Fauzan Khairi

    2015-01-01

    Neuroimaging comprises techniques used to create images of the structure and function of the nervous system in the human brain, and it has become crucial in many scientific fields. Neuroimaging data are attracting growing interest among neuroimaging experts, and a large number of neuroimaging tools is therefore needed. This paper gives an overview of the tools that have been used to image the structure and function of the nervous system. This information can help developers, experts, and users gain insight and a better understanding of the available neuroimaging tools, enabling better decision making when choosing tools for a particular research interest. Sources, links, and descriptions of the application of each tool are provided as well. Lastly, this paper presents the implementation language, system requirements, strengths, and weaknesses of the tools that have been widely used to image the structure and function of the nervous system. PMID:27006633

  1. The application of wavelet denoising in material discrimination system

    NASA Astrophysics Data System (ADS)

    Fu, Kenneth; Ranta, Dale; Guest, Clark; Das, Pankaj

    2010-01-01

    Recently, it has become desirable for cargo inspection imaging systems to provide a material discrimination function. This is done by scanning the cargo container with X-rays at two different energy levels. The ratio of the attenuations of the two energy scans can provide information on the composition of the material. However, with the statistical error from noise, the accuracy of such systems can be low. Because the moving source emits the two X-ray energies alternately, images from the two scans will not be identical. That means the edges of objects in the two images are not perfectly aligned. Moreover, digitization creates blurry-edge artifacts, and different energy X-rays produce different edge spread functions. Those combined effects contribute to a source of false classification, namely the "edge effect." Other false classifications are caused by noise, mainly the Poisson noise associated with photon counting. The Poisson noise in X-ray images can be dealt with using either a Wiener filter or a wavelet shrinkage denoising approach. In this paper, we propose a method that uses the wavelet shrinkage denoising approach to enhance the performance of the material identification system. Test results show that this wavelet-based approach improves performance in object detection and eliminates false positives due to the edge effect.
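
    The wavelet shrinkage step can be sketched as follows, assuming the PyWavelets package and a soft universal threshold estimated from the finest sub-band; the wavelet choice and threshold rule are illustrative assumptions, not the system's actual tuning.

    ```python
    # Minimal sketch of wavelet shrinkage denoising for a radiographic image.
    import numpy as np
    import pywt

    def wavelet_shrinkage(image, wavelet="db4", levels=3):
        coeffs = pywt.wavedec2(image, wavelet, level=levels)
        # Estimate the noise sigma from the finest diagonal detail band
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thresh = sigma * np.sqrt(2.0 * np.log(image.size))   # universal threshold
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)

    # Toy Poisson-noisy image standing in for an X-ray scan
    noisy = np.random.poisson(50.0, size=(256, 256)).astype(float)
    clean = wavelet_shrinkage(noisy)
    print(clean.shape)
    ```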

  2. Functional imaging of the human brain using a modular, fibre-less, high-density diffuse optical tomography system.

    PubMed

    Chitnis, Danial; Cooper, Robert J; Dempsey, Laura; Powell, Samuel; Quaggia, Simone; Highton, David; Elwell, Clare; Hebden, Jeremy C; Everdell, Nicholas L

    2016-10-01

    We present the first three-dimensional, functional images of the human brain to be obtained using a fibre-less, high-density diffuse optical tomography system. Our technology consists of independent, miniaturized, silicone-encapsulated DOT modules that can be placed directly on the scalp. Four of these modules were arranged to provide up to 128 dual-wavelength measurement channels over a scalp area of approximately 60 × 65 mm². Using a series of motor-cortex stimulation experiments, we demonstrate that this system can obtain high-quality, continuous-wave measurements at source-detector separations ranging from 14 to 55 mm in adults, in the presence of hair. We identify robust haemodynamic response functions in 5 out of 5 subjects, and present diffuse optical tomography images that depict functional haemodynamic responses that are well-localized in all three dimensions at both the individual and group levels. This prototype modular system paves the way for a new generation of wearable, wireless, high-density optical neuroimaging technologies.

  3. Noise-based body-wave seismic tomography in an active underground mine.

    NASA Astrophysics Data System (ADS)

    Olivier, G.; Brenguier, F.; Campillo, M.; Lynch, R.; Roux, P.

    2014-12-01

    Over the last decade, ambient noise tomography has become increasingly popular for imaging the earth's upper crust. The seismic noise recorded in the earth's crust is dominated by surface waves emanating from the interaction of the ocean with the solid earth. These surface waves are low frequency in nature (< 1 Hz) and not usable for imaging the smaller structures associated with mining or oil and gas applications. The seismic noise recorded at higher frequencies is typically from anthropogenic sources, which are short lived, spatially unstable and not well suited for constructing seismic Green's functions between sensors with conventional cross-correlation methods. To examine the use of ambient noise tomography for smaller scale applications, continuous data were recorded for 5 months with 18 high frequency seismic sensors in an active underground mine in Sweden located more than 1 km below surface. A wide variety of broadband (10-3000 Hz) seismic noise sources are present in an active underground mine, ranging from drilling, scraping and trucks to ore crushers and ventilation fans. Some of these sources generate favorable seismic noise, while others are peaked in frequency and not usable. In this presentation, I will show that the noise generated by mining activity can be useful if periods of seismic noise are carefully selected. Although the noise sources are not temporally stable and not evenly distributed around the sensor array, good estimates of the seismic Green's functions between sensors can be retrieved over a broad frequency range (20-400 Hz) when a selective stacking scheme is used. For frequencies below 100 Hz, the reconstructed Green's functions show clear body-wave arrivals for almost all of the 153 sensor pairs. The arrival times of these body waves are picked and used to image the local velocity structure. The resulting 3-dimensional image shows a high velocity structure that overlaps with a known ore-body. The material properties of the ore-body differ from those of the host rock and are likely the cause of the observed high velocity structure. For frequencies above 200 Hz, the seismic waves are multiply scattered by the tunnels and excavations and are used to determine the scattering properties of the medium. The results of this study should be useful for future imaging and exploration projects in the mining and oil and gas industries.
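
    A minimal sketch of the cross-correlation and selective stacking idea is given below: noise windows whose spectra are dominated by narrow-band machinery tones are rejected before stacking. The window length, sampling rate, and spectral-flatness criterion are assumptions for illustration, not the scheme used in the study.

    ```python
    # Minimal sketch of Green's function retrieval by noise cross-correlation
    # between two sensors, with a simple selective-stacking rule.
    import numpy as np
    from scipy.signal import correlate

    def selective_stack(trace_a, trace_b, fs, win_s=10.0, flatness_min=0.3):
        n = int(win_s * fs)
        stack = np.zeros(2 * n - 1)
        kept = 0
        for start in range(0, min(len(trace_a), len(trace_b)) - n + 1, n):
            wa = trace_a[start:start + n] - trace_a[start:start + n].mean()
            wb = trace_b[start:start + n] - trace_b[start:start + n].mean()
            spec = np.abs(np.fft.rfft(wa))
            flatness = np.exp(np.mean(np.log(spec + 1e-12))) / (spec.mean() + 1e-12)
            if flatness < flatness_min:        # spectrum too peaked: machinery tone
                continue
            cc = correlate(wa, wb, mode="full", method="fft")
            stack += cc / (wa.std() * wb.std() * n + 1e-12)
            kept += 1
        return stack / max(kept, 1)

    fs = 1000.0                                 # Hz, assumed sampling rate
    a, b = np.random.randn(60_000), np.random.randn(60_000)
    gf = selective_stack(a, b, fs)
    print(gf.shape)                             # correlation lags, 2*n - 1 samples
    ```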

  4. Three-Dimensional Passive-Source Reverse-Time Migration of Converted Waves: The Method

    NASA Astrophysics Data System (ADS)

    Li, Jiahang; Shen, Yang; Zhang, Wei

    2018-02-01

    At seismic discontinuities in the crust and mantle, part of the compressional wave energy converts to shear wave, and vice versa. These converted waves have been widely used in receiver function (RF) studies to image discontinuity structures in the Earth. While generally successful, the conventional RF method has its limitations and is suited mostly to flat or gently dipping structures. Among the efforts to overcome the limitations of the conventional RF method is the development of the wave-theory-based, passive-source reverse-time migration (PS-RTM) for imaging complex seismic discontinuities and scatterers. To date, PS-RTM has been implemented only in 2D in Cartesian coordinates for local problems and thus has limited applicability. In this paper, we introduce a 3D PS-RTM approach in spherical coordinates, which is better suited for regional and global problems. New computational procedures are developed to reduce artifacts and enhance migrated images, including back-propagating the main arrival and the coda containing the converted waves separately, using a modified Helmholtz decomposition operator to separate the P and S modes in the back-propagated wavefields, and applying an imaging condition that maintains a consistent polarity for a given velocity contrast. Our new approach allows us to use migration velocity models with realistic velocity discontinuities, improving the accuracy of the migrated images. We present several synthetic experiments to demonstrate the method, using regional and teleseismic sources. The results show that both regional and teleseismic sources can illuminate complex structures and that this method is well suited for imaging dipping interfaces and sharp lateral changes in discontinuity structures.

  5. PET/CT alignment calibration with a non-radioactive phantom and the intrinsic 176Lu radiation of PET detector

    NASA Astrophysics Data System (ADS)

    Wei, Qingyang; Ma, Tianyu; Wang, Shi; Liu, Yaqiang; Gu, Yu; Dai, Tiantian

    2016-11-01

    Positron emission tomography/computed tomography (PET/CT) is an important tool for clinical studies and pre-clinical research that provides both functional and anatomical images. To achieve high quality co-registered PET/CT images, alignment calibration of the PET and CT scanners is a critical procedure. The existing methods use positron source phantoms imaged by both the PET and CT scanners and then derive the transformation matrix from the reconstructed images of the two modalities. In this paper, a novel PET/CT alignment calibration method using a non-radioactive phantom and the intrinsic 176Lu radiation of the PET detector was developed. Firstly, a multi-tungsten-alloy-sphere phantom without a positron source was designed and imaged by CT and by the PET scanner using the intrinsic 176Lu radiation contained in the LYSO crystals. Secondly, the centroids of the spheres were derived and matched by an automatic program. Lastly, the rotation matrix and the translation vector were calculated by least-squares fitting of the centroid data. The proposed method was employed in an animal PET/CT system (InliView-3000) developed in our lab. Experimental results showed that the proposed method achieves high accuracy and is a feasible replacement for the conventional positron-source-based methods.
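
    The least-squares fitting step can be illustrated with the standard SVD (Kabsch) solution for a rigid transform between matched sphere centroids; the centroid coordinates below are synthetic placeholders rather than data from the InliView-3000 system.

    ```python
    # Minimal sketch of the rigid PET-to-CT alignment step: given matched
    # sphere centroids in both modalities, solve for rotation R and
    # translation t in the least-squares sense (SVD/Kabsch solution).
    import numpy as np

    def fit_rigid_transform(pet_pts, ct_pts):
        """Return R, t minimizing || R @ pet + t - ct ||^2 over matched points."""
        pet_c, ct_c = pet_pts.mean(axis=0), ct_pts.mean(axis=0)
        H = (pet_pts - pet_c).T @ (ct_pts - ct_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = ct_c - R @ pet_c
        return R, t

    ct = np.random.rand(6, 3) * 100.0          # synthetic sphere centroids in CT (mm)
    ang = np.deg2rad(3.0)                      # toy misalignment: 3 deg roll + offset
    true_R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                       [np.sin(ang),  np.cos(ang), 0.0],
                       [0.0,          0.0,         1.0]])
    true_t = np.array([1.5, -2.0, 0.8])
    pet = (ct - true_t) @ true_R               # the same spheres seen by PET
    R, t = fit_rigid_transform(pet, ct)
    print(np.allclose(R, true_R), np.round(t, 3))   # recovers rotation and offset
    ```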

  6. Determination of the effect of source intensity profile on speckle contrast using coherent spatial frequency domain imaging

    PubMed Central

    Rice, Tyler B.; Konecky, Soren D.; Owen, Christopher; Choi, Bernard; Tromberg, Bruce J.

    2012-01-01

    Laser Speckle Imaging (LSI) is a fast, noninvasive technique to image particle dynamics in scattering media such as biological tissue. While LSI measurements are independent of the overall intensity of the laser source, we find that spatial variations in the laser source profile can impact measured flow rates. This occurs due to differences in average photon path length across the profile, and is of significant concern because all lasers have some degree of natural Gaussian profile in addition to artifacts potentially caused by projecting optics. Two in vivo measurements are performed to show that flow rates differ based on location with respect to the beam profile. A quantitative analysis is then performed through a speckle contrast forward model generated within a coherent Spatial Frequency Domain Imaging (cSFDI) formalism. The model predicts remitted speckle contrast as a function of spatial frequency, optical properties, and scattering dynamics. Comparisons with experimental speckle contrast images were made using liquid phantoms with known optical properties for three common beam shapes. cSFDI is found to accurately predict speckle contrast for all beam shapes to within 5% root mean square error. Suggestions for improving beam homogeneity are given, including widening of the natural Gaussian beam, proper diffusing glass spreading, and flat-top shaping using microlens arrays. PMID:22741080
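
    For reference, a spatial speckle contrast map is commonly computed as K = σ/μ over a small sliding window of the raw speckle frame. The sketch below shows one such computation; the window size and uniform-filter approach are assumptions for illustration, not the cSFDI forward model itself.

    ```python
    # Minimal sketch of a spatial speckle contrast map, K = sigma / mean
    # over a sliding window of a raw speckle frame.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(frame, window=7):
        frame = frame.astype(float)
        mean = uniform_filter(frame, size=window)
        mean_sq = uniform_filter(frame ** 2, size=window)
        variance = np.clip(mean_sq - mean ** 2, 0.0, None)
        return np.sqrt(variance) / np.where(mean == 0, 1.0, mean)

    raw = np.random.rand(512, 512)              # stand-in for a raw speckle image
    K = speckle_contrast(raw)
    print(K.mean())
    ```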

  7. A Brief Review on the Use of Functional Near-Infrared Spectroscopy (fNIRS) for Language Imaging Studies in Human Newborns and Adults

    ERIC Educational Resources Information Center

    Quaresima, Valentina; Bisconti, Silvia; Ferrari, Marco

    2012-01-01

    Upon stimulation, real time maps of cortical hemodynamic responses can be obtained by non-invasive functional near-infrared spectroscopy (fNIRS) which measures changes in oxygenated and deoxygenated hemoglobin after positioning multiple sources and detectors over the human scalp. The current commercially available transportable fNIRS systems have…

  8. Time reversal imaging and cross-correlations techniques by normal mode theory

    NASA Astrophysics Data System (ADS)

    Montagner, J.; Fink, M.; Capdeville, Y.; Phung, H.; Larmat, C.

    2007-12-01

    Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics and non-destructive testing, and recently to seismic waves in seismology for earthquake imaging. The increasing power of computers and numerical methods (such as spectral element methods) enables one to simulate more and more accurately the propagation of seismic waves in heterogeneous media and to develop new applications, in particular time reversal in the three-dimensional Earth. Generalizing the scalar approach of Draeger and Fink (1999), the theoretical understanding of the time-reversal method can be addressed for the 3D elastic Earth by using normal mode theory. It is shown how to relate time-reversal methods, on one hand, with auto-correlation of seismograms for source imaging and, on the other hand, with cross-correlation between receivers for structural imaging and retrieving the Green's function. The loss of information will be discussed. In the case of source imaging, automatic location in time and space of earthquakes and unknown sources is obtained by the time-reversal technique. In the case of big earthquakes such as the Sumatra-Andaman earthquake of December 2004, we were able to reconstruct the spatio-temporal history of the rupture. We present here some new applications of these techniques at the global scale, on synthetic tests and on real data.

  9. Integrated Analysis Platform: An Open-Source Information System for High-Throughput Plant Phenotyping

    PubMed Central

    Klukas, Christian; Chen, Dijun; Pape, Jean-Michel

    2014-01-01

    High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantify plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present the Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. Due to the huge amount of data to manage, we utilized a common data structure for efficient storage and organization of both input and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment containing 33 maize (Zea mays ‘Fernandez’) plants, which were grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate with our manually measured data with high accuracy, up to 0.98 and 0.95, respectively. In summary, IAP provides a comprehensive set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. PMID:24760818

  10. Acoustic noise during functional magnetic resonance imaging

    PubMed Central

    Ravicz, Michael E.; Melcher, Jennifer R.; Kiang, Nelson Y.-S.

    2007-01-01

    Functional magnetic resonance imaging (fMRI) enables sites of brain activation to be localized in human subjects. For studies of the auditory system, acoustic noise generated during fMRI can interfere with assessments of this activation by introducing uncontrolled extraneous sounds. As a first step toward reducing the noise during fMRI, this paper describes the temporal and spectral characteristics of the noise present under typical fMRI study conditions for two imagers with different static magnetic field strengths. Peak noise levels were 123 and 138 dB re 20 μPa in a 1.5-tesla (T) and a 3-T imager, respectively. The noise spectrum (calculated over a 10-ms window coinciding with the highest-amplitude noise) showed a prominent maximum at 1 kHz for the 1.5-T imager (115 dB SPL) and at 1.4 kHz for the 3-T imager (131 dB SPL). The frequency content and timing of the most intense noise components indicated that the noise was primarily attributable to the readout gradients in the imaging pulse sequence. The noise persisted above background levels for 300-500 ms after gradient activity ceased, indicating that resonating structures in the imager or noise reverberating in the imager room were also factors. The gradient noise waveform was highly repeatable. In addition, the coolant pump for the imager’s permanent magnet and the room air handling system were sources of ongoing noise lower in both level and frequency than gradient coil noise. Knowledge of the sources and characteristics of the noise enabled the examination of general approaches to noise control that could be applied to reduce the unwanted noise during fMRI sessions. PMID:11051496

  11. Development of integrated semiconductor optical sensors for functional brain imaging

    NASA Astrophysics Data System (ADS)

    Lee, Thomas T.

    Optical imaging of neural activity is a widely accepted technique for imaging brain function in the field of neuroscience research, and has been used to study the cerebral cortex in vivo for over two decades. Maps of brain activity are obtained by monitoring intensity changes in back-scattered light, called Intrinsic Optical Signals (IOS), that correspond to fluctuations in blood oxygenation and volume associated with neural activity. Current imaging systems typically employ bench-top equipment including lamps and CCD cameras to study animals using visible light. Such systems require the use of anesthetized or immobilized subjects with craniotomies, which imposes limitations on the behavioral range and duration of studies. The ultimate goal of this work is to overcome these limitations by developing a single-chip semiconductor sensor using arrays of sources and detectors operating at near-infrared (NIR) wavelengths. A single-chip implementation, combined with wireless telemetry, will eliminate the need for immobilization or anesthesia of subjects and allow in vivo studies of free behavior. NIR light offers additional advantages because it experiences less absorption in animal tissue than visible light, which allows for imaging through superficial tissues. This, in turn, reduces or eliminates the need for traumatic surgery and enables long-term brain-mapping studies in freely-behaving animals. This dissertation concentrates on key engineering challenges of implementing the sensor. This work shows the feasibility of using a GaAs-based array of vertical-cavity surface emitting lasers (VCSELs) and PIN photodiodes for IOS imaging. I begin with in-vivo studies of IOS imaging through the skull in mice, and use these results along with computer simulations to establish minimum performance requirements for light sources and detectors. I also evaluate the performance of a current commercial VCSEL for IOS imaging, and conclude with a proposed prototype sensor.

  12. Synchrotron radiation imaging is a powerful tool to image brain microvasculature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Mengqi; Sun, Danni; Xie, Yuanyuan

    2014-03-15

    Synchrotron radiation (SR) imaging is a powerful experimental tool for micrometer-scale imaging of microcirculation in vivo. This review discusses recent methodological advances and findings from morphological investigations of cerebral vascular networks during several neurovascular pathologies. In particular, it describes recent developments in SR microangiography for real-time assessment of the brain microvasculature under various pathological conditions in small animal models. It also covers studies that employed SR-based phase-contrast imaging to acquire 3D brain images and provide detailed maps of brain vasculature. In addition, a brief introduction of SR technology and current limitations of SR sources are described in this review. In the near future, SR imaging could transform into a common and informative imaging modality to resolve subtle details of cerebrovascular function.

  13. Synchrotron radiation imaging is a powerful tool to image brain microvasculature.

    PubMed

    Zhang, Mengqi; Peng, Guanyun; Sun, Danni; Xie, Yuanyuan; Xia, Jian; Long, Hongyu; Hu, Kai; Xiao, Bo

    2014-03-01

    Synchrotron radiation (SR) imaging is a powerful experimental tool for micrometer-scale imaging of microcirculation in vivo. This review discusses recent methodological advances and findings from morphological investigations of cerebral vascular networks during several neurovascular pathologies. In particular, it describes recent developments in SR microangiography for real-time assessment of the brain microvasculature under various pathological conditions in small animal models. It also covers studies that employed SR-based phase-contrast imaging to acquire 3D brain images and provide detailed maps of brain vasculature. In addition, a brief introduction of SR technology and current limitations of SR sources are described in this review. In the near future, SR imaging could transform into a common and informative imaging modality to resolve subtle details of cerebrovascular function.

  14. Imaging plates calibration to X-rays

    NASA Astrophysics Data System (ADS)

    Curcio, A.; Andreoli, P.; Cipriani, M.; Claps, G.; Consoli, F.; Cristofari, G.; De Angelis, R.; Giulietti, D.; Ingenito, F.; Pacella, D.

    2016-05-01

    The growing interest in Imaging Plates, due to their high sensitivity range and versatility, has led in recent years to detailed characterizations of their response function for different energy ranges and kinds of radiation/particles. A calibration of the BAS-MS, BAS-SR and BAS-TR Imaging Plates has been performed at the ENEA-Frascati labs by exploiting the X-ray fluorescence of different targets (Ca, Cu, Pb, Mo, I, Ta) and the radioactivity of a BaCs source, in order to cover the X-ray range between a few keV and 80 keV.

  15. Innovative design of parabolic reflector light guiding structure

    NASA Astrophysics Data System (ADS)

    Whang, Allen J.; Tso, Chun-Hsien; Chen, Yi-Yung

    2008-02-01

    With the growing emphasis on lasting green architecture, guiding natural light indoors is of increasing importance. The advantages are manifold: a better color rendering index, substantial energy savings from an environmental viewpoint, and benefits to human health. Our aim is to design an innovative structure that converts outdoor sunlight impinging on a large surface into a nearly linear beam source, and then into a nearly point-like source that enters the indoor space and can be used for interior lighting. No opto-electrical conversion is involved; the light itself is guided into the building to perform the illumination as well as the imaging function. Non-imaging optics, well known from its application to solar concentrators, provides structures that fulfill these needs and that can also be used as energy collectors in solar energy devices. Here, we have designed a pair of large and small parabolic reflectors that collect daylight and reduce the beam area from large to small. We then built a light-guide system around this parabolic reflector pair to guide the collected light, improving performance by converting a large surface source into a nearly linear source while retaining a large collection area.

  16. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoding information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms non-source-controlled decoding by up to 5 dB in terms of PSNR for various reconstructed images.
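
    One simple way to picture the LLR feedback step is sketched below: LLRs of bits flagged as known-correct by the JPEG2000 error-resilience checks are reinforced, and flagged-erroneous bits are flipped and attenuated, by an SNR-dependent weighting factor. The linear weight schedule and the specific update rule are illustrative assumptions, not the authors' design.

    ```python
    # Hypothetical sketch of reweighting channel-decoder LLRs using feedback
    # from the source decoder; the weight schedule is an assumption.
    import numpy as np

    def reweight_llrs(llrs, known_correct, known_error, snr_db):
        """Scale LLRs of bits flagged by the source decoder's resilience checks."""
        weight = np.clip(3.0 - 0.4 * snr_db, 1.0, 3.0)   # larger boost at low SNR
        out = llrs.copy()
        out[known_correct] *= weight         # reinforce bits known to be correct
        out[known_error] *= -1.0 / weight    # flip and attenuate bits known to be wrong
        return out

    llrs = np.random.randn(16)
    print(reweight_llrs(llrs, known_correct=[0, 3], known_error=[5], snr_db=2.0))
    ```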

  17. Optimal Matched Filter in the Low-number Count Poisson Noise Regime and Implications for X-Ray Source Detection

    NASA Astrophysics Data System (ADS)

    Ofek, Eran O.; Zackay, Barak

    2018-04-01

    Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
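
    For a known template in a uniform background, the Poisson log-likelihood-ratio score reduces to cross-correlating the counts image with ln(1 + P/B), where P is the template and B the background rate. The sketch below illustrates this statistic with toy values; it is not the MATLAB implementation provided by the authors.

    ```python
    # Minimal sketch of a matched-filter score for a known template in
    # low-count Poisson noise: correlate the counts image with ln(1 + P/B).
    import numpy as np
    from scipy.signal import fftconvolve

    def poisson_match_score(counts, template, background):
        kernel = np.log1p(template / background)
        # Correlation is convolution with the flipped kernel
        return fftconvolve(counts, kernel[::-1, ::-1], mode="same")

    h = np.hanning(9)
    psf = np.outer(h, h)                        # toy point-spread function
    psf /= psf.sum()
    image = np.random.poisson(0.1, size=(128, 128)).astype(float)
    score = poisson_match_score(image, 5.0 * psf, background=0.1)
    print(score.shape)                          # per-pixel detection statistic
    ```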

  18. Luminance-based specular gloss characterization.

    PubMed

    Leloup, Frédéric B; Pointer, Michael R; Dutré, Philip; Hanselaer, Peter

    2011-06-01

    Gloss is a feature of visual appearance that arises from the directionally selective reflection of light incident on a surface. Especially when a distinct reflected image is perceptible, the luminance distribution of the illumination scene above the sample can strongly influence the gloss perception. For this reason, industrial glossmeters do not provide a satisfactory gloss estimation of high-gloss surfaces. In this study, the influence of the conditions of illumination on specular gloss perception was examined through a magnitude estimation experiment in which 10 observers took part. A light booth with two light sources was utilized, with the mirror image of only one source being visible in reflection by the observer. The luminance of both the reflected image and the adjacent sample surface could be independently varied by separate adjustment of the intensity of the two light sources. A psychophysical scaling function was derived, relating the visual gloss estimations to the measured luminance of both the reflected image and the off-specular sample background. The generalization error of the model was estimated through a validation experiment performed by 10 other observers. As a result, a metric including both surface and illumination properties is provided. Based on this metric, improved gloss evaluation methods and instruments could be developed.

  19. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research

    PubMed Central

    Campagnola, Luke; Kratz, Megan B.; Manis, Paul B.

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org. PMID:24523692

  20. The Morava E-theories of finite general linear groups

    NASA Astrophysics Data System (ADS)

    Mattafirri, Sara

    The feasibility of producing an image of the radioactivity distribution within a patient or confined region of space using information carried by the gamma-rays emitted from the source is investigated. The imaging approach makes use of parameters related to gamma-rays that undergo Compton scattering within a detection system; it does not involve the use of pin-holes, and it employs gamma-rays of energy ranging from a few hundred keV to MeVs. The photon energy range and the absence of pin-holes aim to provide a larger pool of radioisotopes and higher efficiency than other emission imaging modalities, such as single photon emission computed tomography and positron emission tomography, making it possible to investigate a larger pool of functions with smaller radioactivity doses. The observables available to produce the image are the gamma-ray position of interaction and the energy deposition during Compton scattering within the detection system. Image reconstruction methodologies such as backprojection and the list-mode maximum likelihood expectation maximization algorithm are characterized and applied to produce images of simulated and experimental sources on the basis of the observed parameters. Given the observables and image reconstruction methodologies, imaging systems based on minimizing the variation of the impulse response with position within the field of view are developed. The approach allows imaging of three-dimensional sources when an imaging system that provides a full 4π view of the object is used, and imaging of two-dimensional sources when a single block-type detector that provides one view of the object is used. Geometrical resolution of a few millimeters is obtained at a few centimeters from the detection system when employing gamma-rays with energies of a few hundred keV and current state-of-the-art semiconductor detectors. At this level of resolution, the detection efficiency is on the order of 10^-3 at a few centimeters from the detector when a single block detector a few centimeters in size is used. The resolution improves significantly with increasing photon energy, and it degrades roughly linearly with increasing distance from the detector; larger detection efficiency can be obtained at the expense of resolution or via targeted configurations of the detector. The results pave the way for image reconstruction of practical gamma-ray emitting sources.
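
    The list-mode MLEM algorithm named above has a compact generic form, sketched below with a toy system matrix standing in for the Compton-camera response; the event and voxel counts are arbitrary placeholders.

    ```python
    # Minimal sketch of a list-mode MLEM update: each detected event
    # contributes one count, and the image is refined iteratively.
    import numpy as np

    rng = np.random.default_rng(0)
    n_events, n_voxels = 500, 100
    system = rng.random((n_events, n_voxels))    # toy P(event | emission in voxel)
    sensitivity = system.sum(axis=0)             # per-voxel detection sensitivity

    image = np.ones(n_voxels)
    for _ in range(20):                          # MLEM iterations
        expected = system @ image                # forward projection per event
        image *= (system.T @ (1.0 / expected)) / sensitivity

    print(image.sum())
    ```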

  1. The effects of aging on the neural correlates of subjective and objective recollection.

    PubMed

    Duarte, Audrey; Henson, Richard N; Graham, Kim S

    2008-09-01

    High-functioning older adults can exhibit normal recollection when measured subjectively, via "remember" judgments, but not when measured objectively, via source judgments, whereas low-functioning older adults exhibit impairments for both measures. A potential explanation for this is that typical subjective and objective tests of recollection necessitate different processing demands, supported by distinct brain regions, and that deficits in these tests are observed according to the degree of age-related changes in these regions. Here, we used event-related functional magnetic resonance imaging to measure the effects of aging on neural correlates of subjective and objective measures of recollection, in young, high-functioning (Old-High) and low-functioning (Old-Low) older adults. Behaviorally, the Old-High group showed intact subjective ("remember" judgments) but impaired objective recollection (for 1 of 2 spatial or temporal sources), whereas the Old-Low group was impaired on both measures. Imaging data showed changes in parietal subjective recollection effects in the Old-Low group and in lateral frontal objective recollection effects in both older adult groups. Our results highlight the importance of examining performance variability in older adults and suggest that differential effects of aging on brain regions are associated with different patterns of performance on tests of subjective and objective recollection.

  2. Functional Imaging of Sleep Vertex Sharp Transients

    PubMed Central

    Stern, John M.; Caporro, Matteo; Haneef, Zulfi; Yeh, Hsiang J.; Buttinelli, Carla; Lenartowicz, Agatha; Mumford, Jeanette A.; Parvizi, Josef; Poldrack, Russell A.

    2011-01-01

    Objective: The vertex sharp transient (VST) is an electroencephalographic (EEG) discharge that is an early marker of non-REM sleep. It has been recognized since the beginning of sleep physiology research, but its source and function remain mostly unexplained. We investigated VST generation using functional MRI (fMRI). Methods: Simultaneous EEG and fMRI were recorded from 7 individuals in drowsiness and light sleep. VST occurrences on EEG were modeled with fMRI using an impulse function convolved with a hemodynamic response function to identify cerebral regions correlating with the VSTs. The resulting statistical image was thresholded at Z > 2.3. Results: Two hundred VSTs were identified. Significantly increased signal was present bilaterally in medial central, lateral precentral, posterior superior temporal, and medial occipital cortex. No regions of decreased signal were present. Conclusion: The regions are consistent with electrophysiologic evidence from animal models and functional imaging of human sleep, but the results are specific to VSTs. The regions principally encompass the primary sensorimotor cortical regions for vision, hearing, and touch. Significance: The results depict a network comprising the presumed VST generator and its associated regions. The associated regions' functional similarity for primary sensation suggests a role for VSTs in sensory experience during sleep. PMID:21310653

  3. The Galex Time Domain Survey. I. Selection And Classification of Over a Thousand Ultraviolet Variable Sources

    NASA Technical Reports Server (NTRS)

    Gezari, S.; Martin, D. C.; Forster, K.; Neill, J. D.; Huber, M.; Heckman, T.; Bianchi, L.; Morrissey, P.; Neff, S. G.; Seibert, M.

    2013-01-01

    We present the selection and classification of over a thousand ultraviolet (UV) variable sources discovered in approximately 40 deg² of GALEX Time Domain Survey (TDS) NUV images observed with a cadence of 2 days and a baseline of observations of approximately 3 years. The GALEX TDS fields were designed to be in spatial and temporal coordination with the Pan-STARRS1 Medium Deep Survey, which provides deep optical imaging and simultaneous optical transient detections via image differencing. We characterize the GALEX photometric errors empirically as a function of mean magnitude, and select sources that vary at the 5 sigma level in at least one epoch. We measure the statistical properties of the UV variability, including the structure function on timescales of days and years. We report classifications for the GALEX TDS sample using a combination of optical host colors and morphology, UV light curve characteristics, and matches to archival X-ray and spectroscopy catalogs. We classify 62% of the sources as active galaxies (358 quasars and 305 active galactic nuclei), and 10% as variable stars (including 37 RR Lyrae, 53 M dwarf flare stars, and 2 cataclysmic variables). We detect a large-amplitude tail in the UV variability distribution for M-dwarf flare stars and RR Lyrae, reaching up to |Δm| = 4.6 mag and 2.9 mag, respectively. The mean amplitude of the structure function for quasars on year timescales is five times larger than observed at optical wavelengths. The remaining unclassified sources include UV-bright extragalactic transients, two of which have been spectroscopically confirmed to be a young core-collapse supernova and a flare from the tidal disruption of a star by a dormant supermassive black hole. We calculate a surface density for variable sources in the UV with NUV < 23 mag and |Δm| > 0.2 mag of approximately 8.0, 7.7, and 1.8 deg⁻² for quasars, active galactic nuclei, and RR Lyrae stars, respectively. We also calculate a surface density rate in the UV for transient sources, using the effective survey time at the cadence appropriate to each class, of approximately 15 and 52 deg⁻² yr⁻¹ for M dwarfs and extragalactic transients, respectively.

  4. Feasibility of generating quantitative composition images in dual energy mammography: a simulation study

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Kim, Ye-seul; Choi, Sunghoon; Lee, Haenghwa; Choi, Seungyeon; Kim, Hee-Joung

    2016-03-01

    Breast cancer is one of the most common malignancies in women. For years, mammography has been used as the gold standard for localizing breast cancer, despite its limitation in determining cancer composition. The purpose of this simulation study is therefore to confirm the feasibility of obtaining tumor composition using dual energy digital mammography. The X-ray sources for dual energy mammography were generated at tube voltages of 26 kVp and 39 kVp for the low- and high-energy beams, respectively. Additionally, energy subtraction and inverse mapping functions were applied to provide compositional images. The resultant images showed that the breast composition obtained by the inverse mapping function with cubic fitting achieved the highest accuracy and least noise. Furthermore, breast density analysis with cubic fitting showed less than 10% error compared to the true values. In conclusion, this study demonstrated the feasibility of creating individual compositional images and the capability of analyzing breast density effectively.

  5. Earthquake source imaging by high-resolution array analysis at regional distances: the 2010 M7 Haiti earthquake as seen by the Venezuela National Seismic Network

    NASA Astrophysics Data System (ADS)

    Meng, L.; Ampuero, J. P.; Rendon, H.

    2010-12-01

    Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular to track the areas of the source that generate the strongest high frequency radiation. The technique has been previously applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using the data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broad-band stations with an East-West oriented geometry, and is located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region, and a great opportunity to explore back projection of the crustal Pn phase at regional distances, which provides unique complementary insights to teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using the teleseismic USArray data reveal few details of the rupture process. To overcome the classical resolution limit we explored the Multiple Signal Classification method (MUSIC), a high-resolution array processing technique based on the signal-noise orthogonality in the eigenspace of the data covariance, which achieves both enhanced resolution and better ability to resolve closely spaced sources. We experiment with various synthetic earthquake scenarios to test the resolution. We find that MUSIC provides at least 3 times higher resolution than beamforming. We also study the inherent bias due to the interference of coherent Green's functions, which leads to a potential quantification of the bias uncertainty of the back projection. Preliminary results from the Venezuela data set show an east-to-west rupture propagation along the fault with sub-Rayleigh rupture speed, consistent with a compact source with two significant asperities, which are confirmed by the source time function obtained from Green's function deconvolution and by other source inversion results. These efforts could lead the Venezuela National Seismic Network to play a prominent role in the timely characterization of the rupture process of large earthquakes in the Caribbean, including future ruptures along the yet unbroken segments of the Enriquillo fault system.
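
    A compact sketch of the MUSIC pseudo-spectrum for a linear array is given below, assuming narrow-band plane-wave steering vectors; the station geometry, frequency, and single-source scenario are illustrative and unrelated to the actual VNSN configuration.

    ```python
    # Minimal sketch of the MUSIC pseudo-spectrum for a linear seismic array,
    # scanning over horizontal slowness with plane-wave steering vectors.
    import numpy as np

    def music_spectrum(snapshots, positions, freq, slowness_grid, n_sources=1):
        """snapshots: (n_sensors, n_windows) complex narrow-band data."""
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
        eigval, eigvec = np.linalg.eigh(R)                        # ascending eigenvalues
        noise_sub = eigvec[:, : snapshots.shape[0] - n_sources]   # noise subspace
        spectrum = []
        for s in slowness_grid:
            a = np.exp(-2j * np.pi * freq * s * positions)        # steering vector
            proj = noise_sub.conj().T @ a
            spectrum.append(1.0 / np.real(proj.conj() @ proj))
        return np.array(spectrum)

    positions = np.arange(22) * 5.0              # 22 stations, 5 km spacing (assumed)
    slowness = np.linspace(-0.5, 0.5, 201)       # s/km scan grid
    freq = 0.5                                   # Hz
    true_a = np.exp(-2j * np.pi * freq * 0.2 * positions)
    signal = true_a[:, None] * np.exp(2j * np.pi * np.random.rand(1, 50))
    noise = 0.1 * (np.random.randn(22, 50) + 1j * np.random.randn(22, 50))
    P = music_spectrum(signal + noise, positions, freq, slowness)
    print(slowness[P.argmax()])                  # peaks near the true slowness, 0.2
    ```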

  6. The experience of mathematical beauty and its neural correlates

    PubMed Central

    Zeki, Semir; Romaya, John Paul; Benincasa, Dionigi M. T.; Atiyah, Michael F.

    2014-01-01

    Many have written of the experience of mathematical beauty as being comparable to that derived from the greatest art. This makes it interesting to learn whether the experience of beauty derived from such a highly intellectual and abstract source as mathematics correlates with activity in the same part of the emotional brain as that derived from more sensory, perceptually based, sources. To determine this, we used functional magnetic resonance imaging (fMRI) to image the activity in the brains of 15 mathematicians when they viewed mathematical formulae which they had individually rated as beautiful, indifferent or ugly. Results showed that the experience of mathematical beauty correlates parametrically with activity in the same part of the emotional brain, namely field A1 of the medial orbito-frontal cortex (mOFC), as the experience of beauty derived from other sources. PMID:24592230

  7. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced in the cost function of the ML approach. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS yields a clearer fused image, while a smaller target GS makes the fused image smoother and thus suppresses noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We obtain the fused image and registration parameters successively by minimizing the cost function using an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances smaller than 300. They also demonstrate that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.

  8. Digital Image Processing Overview For Helmet Mounted Displays

    NASA Astrophysics Data System (ADS)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  9. pyBSM: A Python package for modeling imaging systems

    NASA Astrophysics Data System (ADS)

    LeMaster, Daniel A.; Eismann, Michael T.

    2017-05-01

    There are components that are common to all electro-optical and infrared imaging system performance models. The purpose of the Python Based Sensor Model (pyBSM) is to provide open source access to these functions for other researchers to build upon. Specifically, pyBSM implements much of the capability found in the ERIM Image Based Sensor Model (IBSM) V2.0 along with some improvements. The paper also includes two use-case examples. First, performance of an airborne imaging system is modeled using the General Image Quality Equation (GIQE). The results are then decomposed into factors affecting noise and resolution. Second, pyBSM is paired with openCV to evaluate performance of an algorithm used to detect objects in an image.

  10. Intensity distribution of the x ray source for the AXAF VETA-I mirror test

    NASA Technical Reports Server (NTRS)

    Zhao, Ping; Kellogg, Edwin M.; Schwartz, Daniel A.; Shao, Yibo; Fulton, M. Ann

    1992-01-01

    The X-ray generator for the AXAF VETA-I mirror test is an electron impact X-ray source with various anode materials. The source sizes of the different anodes and their intensity distributions were measured with a pinhole camera before the VETA-I test. The pinhole camera consists of a 30-μm-diameter pinhole for imaging the source and a Microchannel Plate Imaging Detector with 25 μm FWHM spatial resolution for detecting and recording the image. The camera has a magnification factor of 8.79, which enables measuring the detailed spatial structure of the source. The spot size, the intensity distribution, and the flux level of each source were measured with different operating parameters. During the VETA-I test, microscope pictures were taken of each used anode immediately after it was brought out of the source chamber. The source sizes and the intensity distribution structures are clearly shown in the pictures, and they agree with the results from the pinhole camera measurements. This paper presents the results of the above measurements. The results show that under operating conditions characteristic of the VETA-I test, all the source sizes have a FWHM of less than 0.45 mm. For a source of this size at a distance of 528 meters, the angular size seen by VETA is less than 0.17 arcsec, which is small compared to the on-ground VETA angular resolution (0.5 arcsec required, 0.22 arcsec measured). Even so, the results show that the intensity distributions of the sources have complicated structures. These results were crucial for the VETA data analysis and for obtaining the on-ground and predicted in-orbit VETA Point Response Function.

  11. Application of laser-wakefield-based x-ray source to global food security issues

    NASA Astrophysics Data System (ADS)

    Kieffer, J. C.; Fourmaux, S.; Hallin, E.; Arnison, P.; Brereton, N.; Pitre, F.; Dixon, M.; Tran, N.

    2017-05-01

    We present the development of a high throughput phase contrast screening system based on LWFA X-ray sources for plant imaging. We upgraded the INRS laser-betatron beamline, and we illustrate its imaging potential through the development of new tools for addressing issues relevant to global food security. This initiative, led by the Global Institute of Food Security (GIFS) at the University of Saskatchewan, aims to elucidate the part of the function that maps environmental inputs onto specific plant phenotypes. The prospect of correlating phenotypic expression with adaptation to environmental stresses will provide researchers with a new tool to assess breeding programs for crops meant to thrive under climate extremes.

  12. Illumination normalization of face image based on illuminant direction estimation and improved Retinex.

    PubMed

    Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel

    2015-01-01

    Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured intensity and the calculated intensity and the constraint function for an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we clip the values at both ends of the face image histogram, determine the range of gray levels, and stretch that range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face or reflective surface model. Experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
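
    The histogram clipping-and-stretching step can be sketched as follows; the 1% tail fraction and 8-bit display range are assumptions for illustration, not the paper's values.

    ```python
    # Minimal sketch of histogram clipping and stretching: clip the extreme
    # tails of the gray-level histogram and stretch the rest to the display range.
    import numpy as np

    def clip_and_stretch(image, tail=0.01, out_max=255):
        low, high = np.quantile(image, [tail, 1.0 - tail])
        clipped = np.clip(image, low, high)
        stretched = (clipped - low) / max(high - low, 1e-12) * out_max
        return stretched.astype(np.uint8)

    face = (np.random.rand(112, 92) ** 2 * 200).astype(float)   # toy face image
    normalized = clip_and_stretch(face)
    print(normalized.min(), normalized.max())
    ```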

  13. Structured Illumination Diffuse Optical Tomography for Mouse Brain Imaging

    NASA Astrophysics Data System (ADS)

    Reisman, Matthew David

    As advances in functional magnetic resonance imaging (fMRI) have transformed the study of human brain function, they have also widened the divide between standard research techniques used in humans and those used in mice, where high quality images are difficult to obtain using fMRI given the small volume of the mouse brain. Optical imaging techniques have been developed to study mouse brain networks, which are highly valuable given the ability to study brain disease treatments or development in a controlled environment. A planar imaging technique known as optical intrinsic signal (OIS) imaging has been a powerful tool for capturing functional brain hemodynamics in rodents. Recent wide field-of-view implementations of OIS have provided efficient maps of functional connectivity from spontaneous brain activity in mice. However, OIS requires scalp retraction and is limited to imaging a 2-dimensional view of superficial cortical tissues. Diffuse optical tomography (DOT) is a non-invasive, volumetric neuroimaging technique that has been valuable for bedside imaging of patients in the clinic, but previous DOT systems for rodent neuroimaging have been limited by either sparse spatial sampling or by slow speed. My research has been to develop diffuse optical tomography for whole brain mouse neuroimaging by expanding previous techniques to achieve high spatial sampling using multiple camera views for detection and high speed using structured illumination sources. I have shown the feasibility of this method to perform non-invasive functional neuroimaging in mice and its capabilities of imaging the entire volume of the brain. Additionally, the system has been built with a custom, flexible framework to accommodate the expansion to imaging multiple dynamic contrasts in the brain and populations that were previously difficult or impossible to image, such as infant mice and awake mice. I have contributed to preliminary feasibility studies of these more advanced techniques using OIS, which can now be carried out using the structured illumination diffuse optical tomography technique to perform longitudinal, non-invasive studies of the whole volume of the mouse brain.

  14. Expanding Functionality of Commercial Optical Coherence Tomography Systems by Integrating a Custom Endoscope

    PubMed Central

    Welge, Weston A.; Barton, Jennifer K.

    2015-01-01

    Optical coherence tomography (OCT) is a useful imaging modality for detecting and monitoring diseases of the gastrointestinal tract and other tubular structures. The non-destructiveness of OCT enables time-serial studies in animal models. While turnkey commercial research OCT systems are plentiful, researchers often require custom imaging probes. We describe the integration of a custom endoscope with a commercial swept-source OCT system and generalize this description to any imaging probe and OCT system. A numerical dispersion compensation method is also described. Example images demonstrate that OCT can visualize the mouse colon crypt structure and detect adenoma in vivo. PMID:26418811

  15. Multi-photon microscopy with a low-cost and highly efficient Cr:LiCAF laser

    PubMed Central

    Sakadić, Sava; Demirbas, Umit; Mempel, Thorsten R.; Moore, Anna; Ruvinskaya, Svetlana; Boas, David A.; Sennaroglu, Alphan; Kartner, Franz X.; Fujimoto, James G.

    2009-01-01

    Multi-photon microscopy (MPM) is a powerful tool for biomedical imaging, enabling molecular contrast and integrated structural and functional imaging on the cellular and subcellular level. However, the cost and complexity of femtosecond laser sources that are required in MPM are significant hurdles to widespread adoption of this important imaging modality. In this work, we describe femtosecond diode pumped Cr:LiCAF laser technology as a low cost alternative to femtosecond Ti:Sapphire lasers for MPM. Using single mode pump diodes which cost only $150 each, a diode pumped Cr:LiCAF laser generates ~70-fs duration, 1.8-nJ pulses at ~800 nm wavelengths, with a repetition rate of 100 MHz and average output power of 180 mW. Representative examples of MPM imaging in neuroscience, immunology, endocrinology and cancer research using Cr:LiCAF laser technology are presented. These studies demonstrate the potential of this laser source for use in a broad range of MPM applications. PMID:19065223

  16. Smartphone based scalable reverse engineering by digital image correlation

    NASA Astrophysics Data System (ADS)

    Vidvans, Amey; Basu, Saurabh

    2018-03-01

    There is a need for scalable, open source 3D reconstruction systems for reverse engineering, because most commercially available reconstruction systems are capital and resource intensive. To address this, a novel reconstruction technique is proposed. The technique involves digital image correlation based characterization of surface speeds, followed by normalization with respect to angular speed during rigid body rotational motion of the specimen. Proof of concept is demonstrated and validated using simulation and empirical characterization. Towards this, smartphone imaging and inexpensive off-the-shelf components are used, along with parts fabricated additively from polylactic acid polymer on a standard 3D printer. Some sources of error in this reconstruction methodology are discussed. It is seen that high curvatures on the surface suppress accuracy of reconstruction; reasons for this are delineated in the nature of the correlation function. The theoretically achievable resolution during smartphone-based 3D reconstruction by digital image correlation is derived.
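
    The normalization step described above amounts to dividing the DIC-tracked surface speed by the known angular speed of the rotating specimen (r = v/ω for rigid-body rotation). A small illustrative sketch with made-up numbers:

      import numpy as np

      # Hypothetical per-point surface speeds (mm/s) obtained by digital image
      # correlation between consecutive smartphone frames, and an assumed
      # turntable angular speed; values are illustrative only.
      surface_speed_mm_s = np.array([12.5, 14.0, 15.7, 18.1])
      omega_rad_s = 2.0 * np.pi * 0.5          # 0.5 revolutions per second

      radius_mm = surface_speed_mm_s / omega_rad_s   # radial coordinate of each tracked point
      print(radius_mm)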

  17. Spatial resolution of imaging plate with flash X-rays and its utilization for radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaikh, A. M., E-mail: shaikham@barc.gov.in; Romesh, C.; Kolage, T. S.

    2015-06-24

    A flash X-ray source developed using a pulsed electron accelerator with an electron energy range of 400 keV to 1030 keV and a field emission cathode is characterized using X-ray imaging plates. The spatial resolution of the imaging system is measured using an edge spread function fitted to data obtained from a radiograph of a Pb step wedge. A spatial resolution of 150±6 µm is obtained. The X-ray beam size is controlled by the anode-cathode configuration. An optimum source size of ∼13±2 mm diameter covering an area with an intensity of ∼27000 PSL/mm² is obtained on the imaging plate kept at a distance of ∼200 mm from the tip of the anode. The source is used for recording radiographs of objects like a satellite cable cutter, an aero-engine turbine blade and a variety of pyro-devices used in the aerospace industry.
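
    The edge-spread-function procedure mentioned above is commonly realized by fitting an error-function profile across the edge and converting the fitted width to the FWHM of the corresponding Gaussian line spread function. A self-contained sketch on synthetic data (not the measurement code used in the paper):

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      def esf(x, amp, x0, sigma, offset):
          # Error-function edge spread function model
          return offset + 0.5 * amp * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

      x = np.linspace(-0.6, 0.6, 121)                  # position across the edge, mm
      y = esf(x, 1.0, 0.0, 0.064, 0.1) + 0.01 * np.random.randn(x.size)  # synthetic edge profile

      popt, _ = curve_fit(esf, x, y, p0=[1.0, 0.0, 0.05, 0.0])
      fwhm_um = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2]) * 1e3
      print(f"spatial resolution (FWHM): {fwhm_um:.0f} um")   # ~150 um for this synthetic case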

  18. The Capricorn Orogen Passive source Array (COPA) in Western Australia

    NASA Astrophysics Data System (ADS)

    Gessner, K.; Yuan, H.; Murdie, R.; Dentith, M. C.; Johnson, S.; Brett, J.

    2015-12-01

    COPA is the passive source component of a multi-method geophysical program aimed at assessing the mineral deposit potential of the Proterozoic Capricorn Orogen. Previous results from the active source surveys, receiver functions and magnetotelluric studies show reworked orogenic crust in the orogen that contrasts with simpler crust in the neighbouring Archean cratons, suggesting progressive and punctuated collisional processes during the final amalgamation of the Western Australian craton. Previous seismic studies are all based on line deployments or single-station analyses; it is therefore essential to develop 3D seismic images to test whether these observations are representative of the whole orogen. With a careful design that takes advantage of previous passive source surveys, the current long-term and short-term deployments span an area of approximately 500 x 500 km. The 36-month total deployment guarantees enough data for 3D structure imaging using body wave tomography, ambient noise surface wave tomography, and P- and S-wave receiver function Common Conversion Point (CCP) stacking techniques. A successive instrument loan from the ANSIR national instrument pool provided 34 broadband seismometers that have been deployed in the western half of the orogen since March 2014. We expect approximately 40-km lateral resolution near the surface for the techniques we propose, which, due to the low-frequency nature of earthquake waves, will degrade to about 100 km near the base of the cratonic lithosphere, expected at depths between 200 and 250 km. Preliminary results from the first half of the COPA deployment will be presented in the light of the hypotheses that 1) distinct crustal blocks can be detected continuously throughout the orogen (using ambient noise/body wave tomography); 2) distinct lithologies are present in the crust and upper mantle across the orogen (using receiver function CCP images); and 3) crustal and lithosphere deformation along craton margins in general follows the "wedge" tectonic model (e.g., subduction of juvenile blocks under the craton mantle, as represented by craton-ward dipping sutures).

  19. A New Approach for Alpha Radiography by Triple THGEM using Monte Carlo Simulation and Measurement

    NASA Astrophysics Data System (ADS)

    Khezripour, S.; Negarestani, A.; Rezaie, M. R.

    2018-05-01

    In this research, alpha imaging in Self Quenching Streamer (SQS) mode is investigated using a triple Thick Gas Electron Multiplier (THGEM) detector by the Monte Carlo method and experimental data. First, a semi-empirical equation is derived to represent the relation between the SQS voltage and the alpha energy in every hole of the triple THGEM; the accuracy of this equation is tested and confirmed by a high degree of consistency. Second, the images of objects irradiated by an Am-241 alpha source (5.49 MeV) are recorded by a CMOS camera using the triple THGEM detector in the SQS mode. The resolution of the images in this paper is a function of the exposure time. For an alpha source with 150 kBq activity, the optimal exposure interval is about 30 sec; for shorter or longer exposures the images are incomplete or ambiguous, respectively. The overall objective of this work is to facilitate alpha radiography in nuclear imaging through a triple THGEM without any amplifier or complicated electrical equipment.

  20. Superresolution near-field imaging with surface waves

    NASA Astrophysics Data System (ADS)

    Fu, Lei; Liu, Zhaolun; Schuster, Gerard

    2018-02-01

    We present the theory for near-field superresolution imaging with surface waves and time reverse mirrors (TRMs). Theoretical formulae and numerical results show that applying the TRM operation to surface waves in an elastic half-space can achieve superresolution imaging of subwavelength scatterers if they are located less than about 1/2 of the shear wavelength from the source line. We also show that the TRM operation for a single frequency is equivalent to natural migration, which uses the recorded data to approximate the Green's functions for migration, and only costs O(N⁴) algebraic operations for post-stack migration compared to O(N⁶) operations for natural pre-stack migration. Here, we assume the sources and receivers are on an N × N grid and there are N² trial image points on the free surface. Our theoretical predictions of superresolution are validated with tests on synthetic data. The field-data tests suggest that hidden faults at the near surface can be detected with subwavelength imaging of surface waves by using the TRM operation if they are no deeper than about 1/2 the dominant shear wavelength.

  1. Phase contrast tomography of the mouse cochlea at microfocus x-ray sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartels, Matthias; Krenkel, Martin; Hernandez, Victor H.

    2013-08-19

    We present phase contrast x-ray tomography of functional soft tissue within the bony cochlear capsule of mice, carried out at laboratory microfocus sources with well-matched source, detector, geometry, and reconstruction algorithms at spatial resolutions down to 2 μm. Contrast, data quality and resolution enable the visualization of thin membranes and nerve fibers as well as automated segmentation of surrounding bone. By complementing synchrotron radiation imaging techniques, a broad range of biomedical applications becomes possible as demonstrated for optogenetic cochlear implant research.

  2. THz optical design considerations and optimization for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Sung, Shijun; Garritano, James; Bajwa, Neha; Nowroozi, Bryan; Llombart, Nuria; Grundfest, Warren; Taylor, Zachary D.

    2014-09-01

    THz imaging system design will play an important role in making it possible to image targets with arbitrary properties and geometries. This study discusses design considerations and imaging performance optimization techniques for THz quasioptical imaging system optics. Analysis of field and polarization distortion by off-axis parabolic (OAP) mirrors in THz imaging optics shows how distortions propagate through a series of mirrors while guiding the THz beam. While distortions of the beam profile by individual mirrors are not significant, these effects are compounded by a series of mirrors in antisymmetric orientation. It is shown that symmetric orientation of the OAP mirrors effectively cancels this distortion and recovers the original beam profile. Additionally, symmetric orientation can correct for some geometrical off-focusing due to misalignment. We also demonstrate an alternative method to test overall system optics alignment by investigating the imaging performance on a tilted target plane. An asymmetric signal profile as a function of the target plane's tilt angle indicates when one or more imaging components are misaligned and gives a preferred tilt direction. Such analysis can offer additional insight into often elusive source device misalignment in an integrated system. The imaging-plane tilting characteristics are representative of a 3-D modulation transfer function of the imaging system. A symmetric tilted-plane response is preferred to optimize imaging performance.

  3. Design of a web portal for interdisciplinary image retrieval from multiple online image resources.

    PubMed

    Kammerer, F J; Frankewitsch, T; Prokosch, H-U

    2009-01-01

    Images play an important role in medicine. Finding the desired images within the multitude of online image databases is a time-consuming and frustrating process, and existing websites do not meet all the requirements for an ideal learning environment for medical students. This work establishes a new web portal providing a centralized access point to a selected number of online image databases. A back-end system locates images on given websites and extracts relevant metadata. The images are indexed using the UMLS and the MetaMap system provided by the US National Library of Medicine. Specially developed functions allow the creation of individual navigation structures. The front-end system suits the specific needs of medical students. A navigation structure consisting of several medical fields, university curricula and the ICD-10 was created. The images may be accessed via the given navigation structure or using different search functions. Cross-references are provided by the semantic relations of the UMLS. Over 25,000 images were identified and indexed. A pilot evaluation among medical students showed good first results concerning the acceptance of the developed navigation structures and search features. The integration of images from different sources into the UMLS semantic network offers a quick and easy-to-use learning environment.

  4. A simple solution for model comparison in bold imaging: the special case of reward prediction error and reward outcomes.

    PubMed

    Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D

    2013-01-01

    Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
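
    The kind of collinearity described above is easy to quantify: for two regressors generated close together in time, the pairwise correlation and the resulting variance inflation factor can be computed directly (an illustrative sketch with simulated regressors, not data from the study):

      import numpy as np

      rng = np.random.default_rng(1)
      outcome = rng.standard_normal(200)                                   # reward-outcome regressor
      prediction_error = 0.9 * outcome + 0.1 * rng.standard_normal(200)    # nearly collinear regressor

      r = np.corrcoef(outcome, prediction_error)[0, 1]
      vif = 1.0 / (1.0 - r ** 2)     # variance inflation factor for the two-regressor case
      print(f"correlation = {r:.2f}, VIF = {vif:.1f}")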

  5. A projective surgical navigation system for cancer resection

    NASA Astrophysics Data System (ADS)

    Gan, Qi; Shao, Pengfei; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Xu, Ronald

    2016-03-01

    Near infrared (NIR) fluorescence imaging can provide precise and real-time information about tumor location during cancer resection surgery. However, many intraoperative fluorescence imaging systems are based on wearable devices or stand-alone displays, leading to distraction of the surgeons and suboptimal outcomes. To overcome these limitations, we design a projective fluorescence imaging system for surgical navigation. The system consists of an LED excitation light source, a monochromatic CCD camera, a host computer, a mini projector and a CMOS camera. A software program written in C++ calls OpenCV functions to calibrate and correct the fluorescence images captured by the CCD camera under excitation illumination from the LED source. The images are projected back onto the surgical field by the mini projector. Imaging performance of this projective navigation system is characterized in a tumor-simulating phantom, and image-guided surgical resection is demonstrated in an ex vivo chicken tissue model. In all the experiments, the images projected by the projector match well with the locations of fluorescence emission. Our experimental results indicate that the proposed projective navigation system can be a powerful tool for pre-operative surgical planning, intraoperative surgical guidance, and postoperative assessment of surgical outcome. We have integrated the optoelectronic elements into a compact and miniaturized system in preparation for further clinical validation.
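
    The abstract mentions calibrating and correcting the fluorescence images with OpenCV before projecting them back onto the surgical field. One common way to realize such a mapping is a planar homography between camera and projector coordinates; the sketch below (hypothetical point correspondences, not the authors' calibration routine) shows the idea:

      import numpy as np
      import cv2

      # Placeholder calibration correspondences: camera pixels vs projector pixels
      cam_pts  = np.float32([[100, 120], [500, 110], [510, 420], [ 95, 430]])
      proj_pts = np.float32([[ 80, 100], [560,  95], [570, 460], [ 75, 465]])
      H, _ = cv2.findHomography(cam_pts, proj_pts)

      fluorescence = np.zeros((480, 640), np.uint8)   # stand-in for a CCD frame
      fluorescence[200:260, 250:330] = 255            # bright region marking "tumor" fluorescence
      projector_frame = cv2.warpPerspective(fluorescence, H, (800, 600))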

  6. 3D deformable image matching: a hierarchical approach over nested subspaces

    NASA Astrophysics Data System (ADS)

    Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul

    2000-06-01

    This paper presents a fast hierarchical method to perform dense deformable inter-subject matching of 3D MR images of the brain. To recover the complex morphological variations in neuroanatomy, a hierarchy of 3D deformation fields is estimated by minimizing a global energy function over a sequence of nested subspaces. The nested subspaces, generated from a single scaling function, consist of deformation fields constrained at different scales. The highly nonlinear energy function, describing the interactions between the target and the source images, is minimized using a coarse-to-fine continuation strategy over this hierarchy. The resulting deformable matching method shows low sensitivity to local minima and is able to track large nonlinear deformations with moderate computational load. The performance of the approach is assessed both on simulated 3D transformations and on a real database of 3D brain MR images from different individuals. The method has proven efficient in bringing the principal anatomical structures of the brain into correspondence. An application to atlas-based MRI segmentation, by transporting a labeled segmentation map onto patient data, is also presented.

  7. Temperature field determination in slabs, circular plates and spheres with saw tooth heat generating sources

    NASA Astrophysics Data System (ADS)

    Diestra Cruz, Heberth Alexander

    The Green's function integral technique is used to determine the conduction heat transfer temperature field in flat plates, circular plates, and solid spheres with saw-tooth heat generating sources. In all cases the boundary temperature is specified (Dirichlet condition) and the thermal conductivity is constant. The method of images is used to find the Green's function in infinite solids, semi-infinite solids, infinite quadrants, circular plates, and solid spheres. The saw-tooth heat generation source is modeled using the Dirac delta function and the Heaviside step function. The use of Green's functions allows the temperature distribution to be obtained in the form of an integral, which avoids the convergence problems of infinite series. For the infinite solid and the sphere the temperature distribution is three-dimensional, while for the semi-infinite solid, the infinite quadrant and the circular plate the distribution is two-dimensional. The method used in this work is superior to other methods because it obtains elegant analytical or quasi-analytical solutions to complex heat conduction problems with less computational effort and more accuracy than fully numerical methods.
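
    For the semi-infinite solid with zero boundary temperature on the plane z = 0, the image construction referred to above takes a standard textbook form (written here in LaTeX; a generic statement of the method of images, not an equation copied from the thesis):

      G(\mathbf{r},\mathbf{r}') = \frac{1}{4\pi\,|\mathbf{r}-\mathbf{r}'|} - \frac{1}{4\pi\,|\mathbf{r}-\mathbf{r}''|},
      \qquad \mathbf{r}'' = (x',\, y',\, -z'),
      \qquad T(\mathbf{r}) = \frac{1}{k}\int_{V'} G(\mathbf{r},\mathbf{r}')\, g(\mathbf{r}')\, \mathrm{d}V',

    where g is the volumetric heat generation (here the saw-tooth source expressed through Dirac delta and Heaviside functions) and the image term makes G vanish on the boundary plane.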

  8. A dedicated cone-beam CT system for musculoskeletal extremities imaging: design, optimization, and initial performance characterization.

    PubMed

    Zbijewski, W; De Jean, P; Prakash, P; Ding, Y; Stayman, J W; Packard, N; Senn, R; Yang, D; Yorkston, J; Machado, A; Carrino, J A; Siewerdsen, J H

    2011-08-01

    This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ∼55 cm source-to-detector distance; 1.3 magnification; a 20 cm diameter bore (20 × 20 × 20 cm³ field of view); total acquisition arc of ∼240 degrees. The system MTF declines to 50% at ∼1.3 mm⁻¹ and to 10% at ∼2.7 mm⁻¹, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ∼500 projections at less than ∼0.5 kW power, implying ∼6.4 mGy (0.064 mSv) for low-dose protocols and ∼15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10-20 HU contrast resolution). The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography.

  9. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    PubMed Central

    Zbijewski, W.; De Jean, P.; Prakash, P.; Ding, Y.; Stayman, J. W.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Machado, A.; Carrino, J. A.; Siewerdsen, J. H.

    2011-01-01

    Purpose: This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. Methods: The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Results: Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ∼55 cm source-to-detector distance; 1.3 magnification; a 20 cm diameter bore (20 × 20 × 20 cm3 field of view); total acquisition arc of ∼240°. The system MTF declines to 50% at ∼1.3 mm−1 and to 10% at ∼2.7 mm−1, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ∼500 projections at less than ∼0.5 kW power, implying ∼6.4 mGy (0.064 mSv) for low-dose protocols and ∼15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10–20 HU contrast resolution). Conclusions: The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography. 
PMID:21928644

  10. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zbijewski, W.; De Jean, P.; Prakash, P.

    2011-08-15

    Purpose: This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. Methods: The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Results: Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ∼55 cm source-to-detector distance; 1.3 magnification; a 20 cm diameter bore (20 × 20 × 20 cm³ field of view); total acquisition arc of ∼240°. The system MTF declines to 50% at ∼1.3 mm⁻¹ and to 10% at ∼2.7 mm⁻¹, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ∼500 projections at less than ∼0.5 kW power, implying ∼6.4 mGy (0.064 mSv) for low-dose protocols and ∼15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10-20 HU contrast resolution). Conclusions: The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography.

  11. Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.

    PubMed

    Ding, Lei; Yuan, Han

    2013-04-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach; these were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data can accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance.
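
    The noise-level normalization described above can be sketched in a few lines: each modality's data and lead field are divided by an estimate of that modality's sensor noise before being stacked into a single inverse problem (array shapes and noise estimates below are hypothetical):

      import numpy as np

      rng = np.random.default_rng(2)
      eeg, meg = rng.standard_normal((64, 300)), rng.standard_normal((306, 300))        # sensors x time
      L_eeg, L_meg = rng.standard_normal((64, 5000)), rng.standard_normal((306, 5000))   # lead fields

      sigma_eeg = eeg[:, :50].std()    # noise level from an assumed pre-stimulus baseline
      sigma_meg = meg[:, :50].std()

      data = np.vstack([eeg / sigma_eeg, meg / sigma_meg])           # unit-free measurements
      leadfield = np.vstack([L_eeg / sigma_eeg, L_meg / sigma_meg])  # matching forward operator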

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manohar, N; Cho, S; Reynoso, F

    Purpose: To make benchtop x-ray fluorescence computed tomography (XFCT) practical for routine preclinical imaging tasks with gold nanoparticles (GNPs) by deploying, integrating, and characterizing a dedicated high-performance x-ray source and adding simultaneous micro-CT functionality. Methods: Considerable research effort is currently under way to develop a polychromatic benchtop cone-beam XFCT system capable of imaging GNPs by stimulation and detection of gold K-shell x-ray fluorescence (XRF) photons. Recently, an ad hoc high-power x-ray source was incorporated and used to image the biodistribution of GNPs within a mouse, postmortem. In the current work, a dedicated x-ray source system featuring a liquid-cooled tungsten-target x-ray tube (max 160 kVp, ∼3 kW power) was deployed. The source was operated at 125 kVp, 24 mA. The tube’s compact dimensions allowed greater flexibility for optimizing both the irradiation and detection geometries. Incident x-rays were shaped by a conical collimator and filtered by 2 mm of tin. A compact “OEM” cadmium-telluride x-ray detector was implemented for detecting XRF/scatter spectra. Additionally, a flat panel detector was installed to allow simultaneous transmission CT imaging. The performance of the system was characterized by determining the detection limit (10-second acquisition time) for inserts filled with water/GNPs at various concentrations (0 and 0.010–1.0 wt%) and embedded in a small-animal-sized phantom. The phantom was loaded with 0.5, 0.3, and 0.1 wt% inserts and imaged using XFCT and simultaneous micro-CT. Results: An unprecedented detection limit of 0.030 wt% was experimentally demonstrated, with a 33% reduction in acquisition time. The reconstructed XFCT image accurately localized the imaging inserts. Micro-CT imaging did not provide enough contrast to distinguish imaging inserts from the phantom under the current conditions. Conclusion: The system is immediately capable of in vivo preclinical XFCT imaging with GNPs. Micro-CT imaging will require optimization of irradiation parameters to improve contrast. This investigation was supported by NIH/NCI grant R01CA155446.

  13. Determination of the Wave Parameters from the Statistical Characteristics of the Image of a Linear Test Object

    NASA Astrophysics Data System (ADS)

    Weber, V. L.

    2018-03-01

    We statistically analyze the images of the objects of the "light-line" and "half-plane" types which are observed through a randomly irregular air-water interface. The expressions for the correlation function of fluctuations of the image of an object given in the form of a luminous half-plane are found. The possibility of determining the spatial and temporal correlation functions of the slopes of a rough water surface from these relationships is shown. The problem of the probability of intersection of a small arbitrarily oriented line segment by the contour image of a luminous straight line is solved. Using the results of solving this problem, we show the possibility of determining the values of the curvature variances of a rough water surface. A practical method for obtaining an image of a rectilinear luminous object in the light rays reflected from the rough surface is proposed. It is theoretically shown that such an object can be synthesized by temporal accumulation of the image of a point source of light rapidly moving in the horizontal plane with respect to the water surface.

  14. Multimodal breast cancer imaging using coregistered dynamic diffuse optical tomography and digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zimmermann, Bernhard B.; Deng, Bin; Singh, Bhawana; Martino, Mark; Selb, Juliette; Fang, Qianqian; Sajjadi, Amir Y.; Cormier, Jayne; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.; Saksena, Mansi A.; Carp, Stefan A.

    2017-04-01

    Diffuse optical tomography (DOT) is emerging as a noninvasive functional imaging method for breast cancer diagnosis and neoadjuvant chemotherapy monitoring. In particular, the multimodal approach of combining DOT with x-ray digital breast tomosynthesis (DBT) is especially synergistic, as DBT prior information can be used to enhance the DOT reconstruction. DOT, in turn, provides a functional information overlay onto the mammographic images, increasing sensitivity and specificity to cancer pathology. We describe a dynamic DOT apparatus designed for tight integration with commercial DBT scanners and providing a fast (up to 1 Hz) image acquisition rate to enable tracking of hemodynamic changes induced by the mammographic breast compression. The system integrates 96 continuous-wave and 24 frequency-domain source locations as well as 32 continuous-wave and 20 frequency-domain detection locations into low-profile plastic plates that can easily mate to the DBT compression paddle and x-ray detector cover, respectively. We demonstrate system performance using static and dynamic tissue-like phantoms as well as in vivo images acquired from the pool of patients recalled for breast biopsies at the Massachusetts General Hospital Breast Imaging Division.

  15. Event Centroiding Applied to Energy-Resolved Neutron Imaging at LANSCE

    DOE PAGES

    Borges, Nicholas; Losko, Adrian; Vogel, Sven

    2018-02-13

    The energy-dependence of the neutron cross section provides vastly different contrast mechanisms than polychromatic neutron radiography if neutron energies can be selected for imaging applications. In recent years, energy-resolved neutron imaging (ERNI) with epi-thermal neutrons, utilizing neutron absorption resonances for contrast as well as for quantitative density measurements, was pioneered at the Flight Path 5 beam line at LANSCE and continues to be refined. In this work, we present event centroiding, i.e., the determination of the center of gravity of a detection event on an imaging detector, to allow sub-pixel spatial resolution and apply it to the many frames collected for energy-resolved neutron imaging at a pulsed neutron source. While event centroiding has been demonstrated at thermal neutron sources, it has not previously been applied to energy-resolved neutron imaging, where the energy resolution needs to be preserved, and we present a quantification of the achievable resolution as a function of neutron energy. For the 55 μm pixel size of the detector used for this study, we found a resolution improvement from ~80 μm to ~22 μm using pixel centroiding while fully preserving the energy resolution.
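
    The centroiding step itself is an intensity-weighted center of gravity over the small cluster of pixels that a single detection event illuminates; a minimal sketch on a synthetic 55 μm pixel cluster (not the detector's processing code) is:

      import numpy as np

      event = np.array([[0.,  3., 12.,  2.],
                        [5., 40., 90., 10.],
                        [2., 25., 60.,  8.],
                        [0.,  1.,  4.,  0.]])        # synthetic event footprint (counts per pixel)

      rows, cols = np.indices(event.shape)
      total = event.sum()
      centroid_row = (rows * event).sum() / total     # sub-pixel row coordinate
      centroid_col = (cols * event).sum() / total     # sub-pixel column coordinate

      pixel_pitch_um = 55.0
      print(centroid_row * pixel_pitch_um, centroid_col * pixel_pitch_um)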

  16. Event Centroiding Applied to Energy-Resolved Neutron Imaging at LANSCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borges, Nicholas; Losko, Adrian; Vogel, Sven

    The energy-dependence of the neutron cross section provides vastly different contrast mechanisms than polychromatic neutron radiography if neutron energies can be selected for imaging applications. In recent years, energy-resolved neutron imaging (ERNI) with epi-thermal neutrons, utilizing neutron absorption resonances for contrast as well as for quantitative density measurements, was pioneered at the Flight Path 5 beam line at LANSCE and continues to be refined. In this work, we present event centroiding, i.e., the determination of the center of gravity of a detection event on an imaging detector, to allow sub-pixel spatial resolution and apply it to the many frames collected for energy-resolved neutron imaging at a pulsed neutron source. While event centroiding has been demonstrated at thermal neutron sources, it has not previously been applied to energy-resolved neutron imaging, where the energy resolution needs to be preserved, and we present a quantification of the achievable resolution as a function of neutron energy. For the 55 μm pixel size of the detector used for this study, we found a resolution improvement from ~80 μm to ~22 μm using pixel centroiding while fully preserving the energy resolution.

  17. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    NASA Astrophysics Data System (ADS)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

  18. Single pulse two photon fluorescence lifetime imaging (SP-FLIM) with MHz pixel rate.

    PubMed

    Eibl, Matthias; Karpf, Sebastian; Weng, Daniel; Hakert, Hubertus; Pfeiffer, Tom; Kolb, Jan Philip; Huber, Robert

    2017-07-01

    Two-photon-excited fluorescence lifetime imaging microscopy (FLIM) is a chemically specific 3-D sensing modality providing valuable information about the microstructure, composition and function of a sample. However, a more widespread application of this technique is hindered by the need for a sophisticated ultra-short pulse laser source and by speed limitations of current FLIM detection systems. To overcome these limitations, we combined a robust sub-nanosecond fiber laser as the excitation source with high analog bandwidth detection. Due to the long pulse length in our configuration, more fluorescence photons are generated per pulse, which allows us to derive the lifetime from a single excitation pulse only. In this paper, we show high quality FLIM images acquired at a pixel rate of 1 MHz. This approach is a promising candidate for an easy-to-use benchtop FLIM system that makes this technique available to a wider research community.

  19. Software-based measurement of thin filament lengths: an open-source GUI for Distributed Deconvolution analysis of fluorescence images

    PubMed Central

    Gokhin, David S.; Fowler, Velia M.

    2016-01-01

    The periodically arranged thin filaments within the striated myofibrils of skeletal and cardiac muscle have precisely regulated lengths, which can change in response to developmental adaptations, pathophysiological states, and genetic perturbations. We have developed a user-friendly, open-source ImageJ plugin that provides a graphical user interface (GUI) for super-resolution measurement of thin filament lengths by applying Distributed Deconvolution (DDecon) analysis to periodic line scans collected from fluorescence images. In the workflow presented here, we demonstrate thin filament length measurement using a phalloidin-stained cryosection of mouse skeletal muscle. The DDecon plugin is also capable of measuring distances of any periodically localized fluorescent signal from the Z- or M-line, as well as distances between successive Z- or M-lines, providing a broadly applicable tool for quantitative analysis of muscle cytoarchitecture. These functionalities can also be used to analyze periodic fluorescence signals in nonmuscle cells. PMID:27644080

  20. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    PubMed

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. Such interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both the simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than the purely model-based method when estimating activation induced by each task as well as by both tasks.
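
    One simple way to realize such a linear projection (a sketch under assumed regressors, not necessarily the authors' exact formulation) is to remove from one task's time course its component along the other task's time course before component estimation:

      import numpy as np

      rng = np.random.default_rng(3)
      task_a = rng.standard_normal(240)                       # time course of task A
      task_b = 0.6 * task_a + rng.standard_normal(240)        # interdependent time course of task B

      proj = task_a * (task_a @ task_b) / (task_a @ task_a)   # projection of B onto A
      task_b_orth = task_b - proj                             # part of B orthogonal to A
      print(np.corrcoef(task_a, task_b_orth)[0, 1])           # ~0 after projection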

  1. Virtual plane-wave imaging via Marchenko redatuming

    NASA Astrophysics Data System (ADS)

    Meles, Giovanni Angelo; Wapenaar, Kees; Thorbecke, Jan

    2018-04-01

    Marchenko redatuming is a novel scheme used to retrieve up- and down-going Green's functions in an unknown medium. Marchenko equations are based on reciprocity theorems and are derived on the assumption of the existence of functions exhibiting space-time focusing properties once injected in the subsurface. In contrast to interferometry but similarly to standard migration methods, Marchenko redatuming only requires an estimate of the direct wave from the virtual source (or to the virtual receiver), illumination from only one side of the medium, and no physical sources (or receivers) inside the medium. In this contribution we consider a different time-focusing condition within the frame of Marchenko redatuming that leads to the retrieval of virtual plane-wave responses. As a result, it allows multiple-free imaging using only a one-dimensional sampling of the targeted model at a fraction of the computational cost of standard Marchenko schemes. The potential of the new method is demonstrated on 2D synthetic models.

  2. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors’ method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors’ method ranks 24 of 39. According to the index of the maximum shear stretch, the authors’ method is also efficient to describe the discontinuous motion at the lung boundaries. Conclusions: By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors’ method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.

  3. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm.

    PubMed

    Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran

    2015-10-01

    Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. The performances of the authors' method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors' method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors' method ranks 24 of 39. According to the index of the maximum shear stretch, the authors' method is also efficient to describe the discontinuous motion at the lung boundaries. By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors' method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.

  4. AN IMAGE-PLANE ALGORITHM FOR JWST'S NON-REDUNDANT APERTURE MASK DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenbaum, Alexandra Z.; Pueyo, Laurent; Sivaramakrishnan, Anand

    2015-01-10

    The high angular resolution technique of non-redundant masking (NRM) or aperture masking interferometry (AMI) has yielded images of faint protoplanetary companions of nearby stars from the ground. AMI on James Webb Space Telescope (JWST)'s Near Infrared Imager and Slitless Spectrograph (NIRISS) has a lower thermal background than ground-based facilities and does not suffer from atmospheric instability. NIRISS AMI images are likely to have 90%-95% Strehl ratio between 2.77 and 4.8 μm. In this paper we quantify factors that limit the raw point source contrast of JWST NRM. We develop an analytic model of the NRM point spread function which includes different optical path delays (pistons) between mask holes and fit the model parameters with image plane data. It enables a straightforward way to exclude bad pixels, is suited to limited fields of view, and can incorporate effects such as intra-pixel sensitivity variations. We simulate various sources of noise to estimate their effect on the standard deviation of closure phase, σ_CP (a proxy for binary point source contrast). If σ_CP < 10^-4 radians (a contrast ratio of 10 mag), young accreting gas giant planets (e.g., in the nearby Taurus star-forming region) could be imaged with JWST NIRISS. We show the feasibility of using NIRISS' NRM with the sub-Nyquist sampled F277W, which would enable some exoplanet chemistry characterization. In the presence of small piston errors, the dominant sources of closure phase error (depending on pixel sampling and filter bandwidth) are flat field errors and unmodeled variations in intra-pixel sensitivity. The in-flight stability of NIRISS will determine how well these errors can be calibrated by observing a point source. Our results help develop efficient observing strategies for space-based NRM.

  5. Method of remotely characterizing thermal properties of a sample

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S. (Inventor); Heath, D. Michele (Inventor); Welch, Christopher (Inventor); Winfree, William P. (Inventor); Miller, William E. (Inventor)

    1992-01-01

    A sample in a wind tunnel is radiated from a thermal energy source outside of the wind tunnel. A thermal imager system, also located outside of the wind tunnel, reads surface radiations from the sample as a function of time. The produced thermal images are characteristic of the heat transferred from the sample to the flow across the sample. In turn, the measured rates of heat loss of the sample are characteristic of the flow and the sample.

  6. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
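
    As a rough illustration of the decomposition described above (a gradient step on the data fidelity term followed by a proximal step on the regularization term), the sketch below uses a plain proximal-gradient iteration with an l1 penalty, whose proximal operator is soft-thresholding. It is not the paper's regularized dual averaging implementation, and grad_data_fidelity is a stand-in for the expensive wave-equation adjoint gradient.

        # Illustrative decomposition: smooth data-fidelity gradient step + nonsmooth proximal step.
        import numpy as np

        def soft_threshold(x, t):
            """Proximal operator of t * ||x||_1."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def proximal_gradient(x0, grad_data_fidelity, step=1e-3, reg=1e-2, n_iter=50):
            x = x0.copy()
            for _ in range(n_iter):
                x = x - step * grad_data_fidelity(x)   # gradient step on the data fidelity term
                x = soft_threshold(x, step * reg)      # proximal step on the regularization term
            return x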

  7. Optimizing Within-Subject Experimental Designs for jICA of Multi-Channel ERP and fMRI

    PubMed Central

    Mangalathu-Arumana, Jain; Liebenthal, Einat; Beardsley, Scott A.

    2018-01-01

    Joint independent component analysis (jICA) can be applied within subject for fusion of multi-channel event-related potentials (ERP) and functional magnetic resonance imaging (fMRI), to measure brain function at high spatiotemporal resolution (Mangalathu-Arumana et al., 2012). However, the impact of experimental design choices on jICA performance has not been systematically studied. Here, the sensitivity of jICA for recovering neural sources in individual data was evaluated as a function of imaging SNR, number of independent representations of the ERP/fMRI data, relationship between instantiations of the joint ERP/fMRI activity (linear, non-linear, uncoupled), and type of sources (varying parametrically and non-parametrically across representations of the data), using computer simulations. Neural sources were simulated with spatiotemporal and noise attributes derived from experimental data. The best performance, maximizing both cross-modal data fusion and the separation of brain sources, occurred with a moderate number of representations of the ERP/fMRI data (10–30), as in a mixed block/event related experimental design. Importantly, the type of relationship between instantiations of the ERP/fMRI activity, whether linear, non-linear or uncoupled, did not in itself impact jICA performance, and was accurately recovered in the common profiles (i.e., mixing coefficients). Thus, jICA provides an unbiased way to characterize the relationship between ERP and fMRI activity across brain regions, in individual data, rendering it potentially useful for characterizing pathological conditions in which neurovascular coupling is adversely affected. PMID:29410611

  8. LETTER TO THE EDITOR: Combined optical and single photon emission imaging: preliminary results

    NASA Astrophysics Data System (ADS)

    Boschi, Federico; Spinelli, Antonello E.; D'Ambrosio, Daniela; Calderan, Laura; Marengo, Mario; Sbarbati, Andrea

    2009-12-01

    In vivo optical imaging instruments are generally devoted to the acquisition of light coming from fluorescence or bioluminescence processes. Recently, an instrument was conceived with radioisotopic detection capabilities (Kodak in Vivo Multispectral System F) based on the conversion of x-rays by a phosphor screen. The goal of this work is to demonstrate that an optical imager (IVIS 200, Xenogen Corp., Alameda, USA), designed for in vivo acquisitions of small animals in bioluminescent and fluorescent modalities, can even be employed to detect signals due to radioactive tracers. Our system is based on scintillator crystals for the conversion of high-energy rays and a collimator. No hardware modifications are required. Crystals alone permit the acquisition of photons coming from an in vivo 20 g nude mouse injected with a solution of methyl diphosphonate technetium 99 metastable (Tc99m-MDP). With scintillator crystals and collimators, a set of measurements aimed at fully characterizing the system resolution was carried out. More precisely, the system point spread function and modulation transfer function were measured at different source depths. Results show that the system resolution is always better than 1.3 mm when the source depth is less than 10 mm. The resolution of the images obtained with radioactive tracers is comparable with the resolution achievable with dedicated techniques. Moreover, it is possible to detect both optical and nuclear tracers or bi-modal tracers with only one instrument.

  9. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    NASA Astrophysics Data System (ADS)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman Microscopy, CRM can perform three-dimensional mapping of tiny samples and has the advantage of high spatial resolution thanks to the unique pinhole. With the wide application of the instrument, there is a growing requirement for the evaluation of the imaging performance of the system. The point-spread function (PSF) is an important means of evaluating the imaging capability of an optical instrument. Among a variety of PSF measurement methods, the point source method has been widely used because it is easy to perform and the measurement results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of the point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established and the effect of point source size on the full-width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin doped with different sizes of polystyrene microspheres is designed. The PSFs of the CRM with different sizes of microspheres are measured and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
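
    A toy 1-D simulation of the effect discussed above is sketched below: an assumed Gaussian lateral PSF is convolved with a top-hat profile whose width equals the microsphere diameter, and the broadened FWHM is measured. The numbers and the top-hat source model are illustrative assumptions, not values from the paper.

        # Toy model: measured profile = true PSF convolved with a finite-size source profile.
        import numpy as np

        def fwhm(x, y):
            """Full width at half maximum of a unimodal profile sampled on x."""
            above = x[y >= y.max() / 2.0]
            return above.max() - above.min()

        x = np.linspace(-2.0, 2.0, 4001)                  # lateral coordinate (micrometres)
        dx = x[1] - x[0]
        sigma = 0.5 / 2.355                               # assumed 0.5 um diffraction-limited FWHM
        psf = np.exp(-x**2 / (2.0 * sigma**2))

        for diameter in (0.1, 0.5, 1.0):                  # microsphere diameters (micrometres)
            source = (np.abs(x) <= diameter / 2.0).astype(float)
            measured = np.convolve(psf, source, mode="same") * dx
            print(diameter, round(fwhm(x, measured), 3))  # FWHM grows with point-source size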

  10. A New Serial-direction Trail Effect in CCD Images of the Lunar-based Ultraviolet Telescope

    NASA Astrophysics Data System (ADS)

    Wu, C.; Deng, J. S.; Guyonnet, A.; Antilogus, P.; Cao, L.; Cai, H. B.; Meng, X. M.; Han, X. H.; Qiu, Y. L.; Wang, J.; Wang, S.; Wei, J. Y.; Xin, L. P.; Li, G. W.

    2016-10-01

    Unexpected trails have been seen subsequent to relatively bright sources in astronomical images taken with the CCD camera of the Lunar-based Ultraviolet Telescope (LUT) since its first light on the Moon’s surface. The trails can only be found in the serial direction of CCD readout, distinguishing them from the image trails of radiation-damaged space-borne CCDs, which usually appear in the parallel-readout direction. After analyzing the same trail defects following warm pixels (WPs) in dark frames, we found that the relative intensity profile of the LUT CCD trails can be expressed as an exponential function of the distance i (in number of pixels) of the trailing pixel to the original source (or WP), i.e., exp(αi + β). The parameters α and β seem to be independent of the CCD temperature, the intensity of the source (or WP), and its position in the CCD frame. The main trail characteristics show an evolution occurring at an increase rate of ~(7.3 ± 3.6) × 10^-4 in the first two operation years. The trails affect the consistency of the profiles of sources of different brightness, which makes smaller-aperture photometry suffer a larger extra systematic error. The astrometric uncertainty caused by the trails is small enough to be acceptable given the LUT requirements for astrometric accuracy. Based on the empirical profile model, a correction method has been developed for LUT images that works well for restoring the fluxes of astronomical sources that are lost in trailing pixels.
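
    The sketch below illustrates one possible correction of this kind under an assumed flux-conserving forward model in which each pixel loses a fraction of its flux into the following pixels of the serial register according to the exponential trail profile exp(αi + β); it is an illustration, not the authors' implementation.

        # Assumed forward model: m[j] = (1 - P) * t[j] + sum_{i=1..L} p_i * t[j - i],
        # with p_i = exp(alpha*i + beta) and P = sum_i p_i < 1 (flux conservation).
        import numpy as np

        def correct_serial_trails(row, alpha, beta, L=20):
            """Recover trail-free values t from measured values m along the serial direction."""
            p = np.exp(alpha * np.arange(1, L + 1) + beta)
            P = p.sum()                                    # total trailed fraction, assumed < 1
            t = np.zeros(len(row), dtype=float)
            for j in range(len(row)):
                trail = sum(p[i - 1] * t[j - i] for i in range(1, min(L, j) + 1))
                t[j] = (row[j] - trail) / (1.0 - P)
            return t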

  11. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected on respective liquid crystal light valves. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.

  12. Supercontinuum ultra-high resolution line-field OCT; experimental spectrograph comparison and comparison with current clinical OCT systems by the imaging of a human cornea

    NASA Astrophysics Data System (ADS)

    Lawman, Samuel; Romano, Vito; Madden, Peter W.; Mason, Sharon; Williams, Bryan M.; Zheng, Yalin; Shen, Yao-Chun

    2018-03-01

    Ultra-high axial resolution (UHR) was demonstrated early in the development of optical coherence tomography (OCT), but has not yet reached clinical practice. We present the combination of a supercontinuum light source and line-field (LF-) OCT as a technical and economical route to bring UHR-OCT into the clinic and other OCT application areas. We directly compare images of a human donor cornea taken with low- and high-resolution current-generation clinical OCT systems with UHR-LF-OCT. These images highlight the substantial increase in information provided by UHR-OCT. Application to pharmaceutical pellets and the functionality and imaging performance of different imaging spectrograph choices for LF-OCT are also demonstrated.

  13. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms I: Revisiting Cluster-Based Inferences.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K

    2018-02-01

    In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging data as a surrogate for null functional magnetic resonance imaging (fMRI) datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made by using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE corrected inferences made with Gaussian spatial noise approximations are valid.

  14. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is reduced to three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
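
    For concreteness, the sketch below shows the standard truncated-SVD deconvolution of a tissue time-attenuation curve by the arterial input function, in which the truncation threshold plays the role of the regularization strength discussed above. It illustrates the deconvolution step only and is not the paper's cascaded-systems framework; variable names are hypothetical.

        # Truncated-SVD deconvolution: tissue(t) = CBF * (AIF convolved with R)(t); recover k(t) = CBF * R(t).
        import numpy as np

        def tsvd_deconvolve(tissue, aif, dt, lam=0.2):
            n = len(aif)
            A = np.zeros((n, n))
            for i in range(n):                                  # causal (lower-triangular) convolution matrix
                A[i, :i + 1] = aif[i::-1] * dt
            U, s, Vt = np.linalg.svd(A)
            s_inv = np.where(s > lam * s.max(), 1.0 / s, 0.0)   # regularization by truncation
            k = Vt.T @ (s_inv * (U.T @ tissue))                 # flux-scaled residue function
            cbf = k.max()                                       # perfusion (flow) estimate
            return cbf, k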

  15. Nanoscale x-ray imaging of circuit features without wafer etching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Junjing; Hong, Young Pyo; Chen, Si

    Modern integrated circuits (ICs) employ a myriad of materials organized at nanoscale dimensions, and certain critical tolerances must be met for them to function. To understand departures from intended functionality, it is essential to examine ICs as manufactured so as to adjust design rules, ideally in a nondestructive way so that imaged structures can be correlated with electrical performance. Electron microscopes can do this on thin regions or on exposed surfaces, but the required processing alters or even destroys functionality. Microscopy with multi-keV x-rays provides an alternative approach with greater penetration, but the spatial resolution of x-ray imaging lenses has not allowed one to see the required detail in the latest generation of ICs. X-ray ptychography provides a way to obtain images of ICs without lens-imposed resolution limits, with past work delivering 20–40-nm resolution on thinned ICs. We describe a simple model for estimating the required exposure and use it to estimate the future potential for this technique. Here we show that this approach can be used to image circuit detail through an unprocessed 300-μm-thick silicon wafer, with sub-20-nm detail clearly resolved after mechanical polishing to 240-μm thickness was used to eliminate image contrast caused by Si wafer surface scratches. By using continuous x-ray scanning, massively parallel computation, and a new generation of synchrotron light sources, this should enable entire nonetched ICs to be imaged to 10-nm resolution or better while maintaining their ability to function in electrical tests.

  16. Single scan parameterization of space-variant point spread functions in image space via a printed array: the impact for two PET/CT scanners.

    PubMed

    Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J

    2011-05-21

    Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.
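
    A minimal MLEM sketch of image-space resolution modelling is given below to show where a measured blurring kernel enters the reconstruction loop: the image is blurred before forward projection and the adjoint of the blur is applied after back projection. It uses a single spatially invariant Gaussian kernel and a generic dense system matrix P for brevity, whereas the study above measures spatially variant kernels from the printed point-source array; it is not the authors' OP-OSEM code.

        # Image-space resolution modelling inside MLEM (single, spatially invariant kernel).
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def mlem_with_resolution_model(y, P, shape, sigma=1.2, n_iter=20):
            blur = lambda img: gaussian_filter(img, sigma)            # symmetric kernel: self-adjoint
            x = np.ones(shape)
            sens = blur(P.T.dot(np.ones(len(y))).reshape(shape))      # sensitivity image K^T P^T 1
            for _ in range(n_iter):
                fwd = P.dot(blur(x).ravel())                          # resolution-modelled forward projection
                ratio = y / np.maximum(fwd, 1e-12)
                x *= blur(P.T.dot(ratio).reshape(shape)) / np.maximum(sens, 1e-12)
            return x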

  17. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    NASA Astrophysics Data System (ADS)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.

  18. Hi-GAL, the Herschel infrared Galactic Plane Survey: photometric maps and compact source catalogues. First data release for the inner Milky Way: +68° ≥ l ≥ -70°

    NASA Astrophysics Data System (ADS)

    Molinari, S.; Schisano, E.; Elia, D.; Pestalozzi, M.; Traficante, A.; Pezzuto, S.; Swinyard, B. M.; Noriega-Crespo, A.; Bally, J.; Moore, T. J. T.; Plume, R.; Zavagno, A.; di Giorgio, A. M.; Liu, S. J.; Pilbratt, G. L.; Mottram, J. C.; Russeil, D.; Piazzo, L.; Veneziani, M.; Benedettini, M.; Calzoletti, L.; Faustini, F.; Natoli, P.; Piacentini, F.; Merello, M.; Palmese, A.; Del Grande, R.; Polychroni, D.; Rygl, K. L. J.; Polenta, G.; Barlow, M. J.; Bernard, J.-P.; Martin, P. G.; Testi, L.; Ali, B.; André, P.; Beltrán, M. T.; Billot, N.; Carey, S.; Cesaroni, R.; Compiègne, M.; Eden, D.; Fukui, Y.; Garcia-Lario, P.; Hoare, M. G.; Huang, M.; Joncas, G.; Lim, T. L.; Lord, S. D.; Martinavarro-Armengol, S.; Motte, F.; Paladini, R.; Paradis, D.; Peretto, N.; Robitaille, T.; Schilke, P.; Schneider, N.; Schulz, B.; Sibthorpe, B.; Strafella, F.; Thompson, M. A.; Umana, G.; Ward-Thompson, D.; Wyrowski, F.

    2016-07-01

    Aims: We present the first public release of high-quality data products (DR1) from Hi-GAL, the Herschel infrared Galactic Plane Survey. Hi-GAL is the keystone of a suite of continuum Galactic plane surveys from the near-IR to the radio and covers five wavebands at 70, 160, 250, 350 and 500 μm, encompassing the peak of the spectral energy distribution of cold dust for 8 ≲ T ≲ 50 K. This first Hi-GAL data release covers the inner Milky Way in the longitude range 68° ≳ ℓ ≳ -70° in a | b | ≤ 1° latitude strip. Methods: Photometric maps have been produced with the ROMAGAL pipeline, which optimally capitalizes on the excellent sensitivity and stability of the bolometer arrays of the Herschel PACS and SPIRE photometric cameras. It delivers images of exquisite quality and dynamical range, absolutely calibrated with Planck and IRAS, and recovers extended emission at all wavelengths and all spatial scales, from the point-spread function to the size of an entire 2°× 2° "tile" that is the unit observing block of the survey. The compact source catalogues were generated with the CuTEx algorithm, which was specifically developed to optimise source detection and extraction in the extreme conditions of intense and spatially varying background that are found in the Galactic plane in the thermal infrared. Results: Hi-GAL DR1 images are cirrus noise limited and reach the 1σ-rms predicted by the Herschel Time Estimators for parallel-mode observations at 60'' s^-1 scanning speed in relatively low cirrus emission regions. Hi-GAL DR1 images will be accessible through a dedicated web-based image cutout service. The DR1 Compact Source Catalogues are delivered as single-band photometric lists containing, in addition to source position, peak, and integrated flux and source sizes, a variety of parameters useful to assess the quality and reliability of the extracted sources. Caveats and hints to help in this assessment are provided. Flux completeness limits in all bands are determined from extensive synthetic source experiments and greatly depend on the specific line of sight along the Galactic plane because the background strongly varies as a function of Galactic longitude. Hi-GAL DR1 catalogues contain 123210, 308509, 280685, 160972, and 85460 compact sources in the five bands. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. The images and the catalogues are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/591/A149

  19. A Magnifying Glass for Virtual Imaging of Subwavelength Resolution by Transformation Optics.

    PubMed

    Sun, Fei; Guo, Shuwei; Liu, Yichao; He, Sailing

    2018-06-14

    Traditional magnifying glasses can give magnified virtual images with diffraction-limited resolution, that is, detailed information is lost. Here, a novel magnifying glass by transformation optics, referred to as a "superresolution magnifying glass" (SMG), is designed, which can produce magnified virtual images with a predetermined magnification factor and resolve subwavelength details (i.e., light sources with subwavelength distances can be resolved). Based on theoretical calculations and reductions, a metallic plate structure that produces the reduced SMG at microwave frequencies is proposed and realized, and its good performance is verified by both numerical simulations and experimental results. The function of the SMG is to create a superresolution virtual image, unlike traditional superresolution imaging devices that create real images. The proposed SMG will create a new branch of superresolution imaging technology. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Determining X-ray source intensity and confidence bounds in crowded fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu

    We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
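
    A small numerical sketch of this idea follows: a full Poisson likelihood in which the source contributes to both apertures, flat priors, and the background rate marginalized on a grid to give the posterior for the source intensity. The enclosed-fraction and area parameters are hypothetical placeholders; this is not the Chandra Source Catalog implementation.

        # Grid-based Poisson posterior for a source intensity with the background marginalized out.
        import numpy as np
        from scipy.stats import poisson

        def intensity_posterior(n_src, n_bkg, f_src=0.9, f_bkg=0.05, a_src=1.0, a_bkg=10.0):
            """n_src, n_bkg: counts in the source and background apertures.
            f_*: assumed fraction of the source PSF enclosed; a_*: aperture areas."""
            s_grid = np.linspace(0.0, 5.0 * (n_src + 1), 400)          # source intensity (counts)
            b_grid = np.linspace(0.0, 5.0 * (n_bkg + 1) / a_bkg, 400)  # background per unit area
            S, B = np.meshgrid(s_grid, b_grid, indexing="ij")
            like = (poisson.pmf(n_src, f_src * S + a_src * B) *
                    poisson.pmf(n_bkg, f_bkg * S + a_bkg * B))
            post = like.sum(axis=1)                                    # marginalize background (flat prior)
            post /= np.trapz(post, s_grid)                             # normalized posterior p(s | data)
            return s_grid, post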

  1. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users can not only share static HTML documents and lecture notes, but also run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  2. Functional imaging of sleep vertex sharp transients.

    PubMed

    Stern, John M; Caporro, Matteo; Haneef, Zulfi; Yeh, Hsiang J; Buttinelli, Carla; Lenartowicz, Agatha; Mumford, Jeanette A; Parvizi, Josef; Poldrack, Russell A

    2011-07-01

    The vertex sharp transient (VST) is an electroencephalographic (EEG) discharge that is an early marker of non-REM sleep. It has been recognized since the beginning of sleep physiology research, but its source and function remain mostly unexplained. We investigated VST generation using functional MRI (fMRI). Simultaneous EEG and fMRI were recorded from seven individuals in drowsiness and light sleep. VST occurrences on EEG were modeled with fMRI using an impulse function convolved with a hemodynamic response function to identify cerebral regions correlating to the VSTs. A resulting statistical image was thresholded at Z>2.3. Two hundred VSTs were identified. Significantly increased signal was present bilaterally in medial central, lateral precentral, posterior superior temporal, and medial occipital cortex. No regions of decreased signal were present. The regions are consistent with electrophysiologic evidence from animal models and functional imaging of human sleep, but the results are specific to VSTs. The regions principally encompass the primary sensorimotor cortical regions for vision, hearing, and touch. The results depict a network comprising the presumed VST generator and its associated regions. The associated regions' functional similarity for primary sensation suggests a role for VSTs in sensory experience during sleep. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  3. LSST Resources for the Community

    NASA Astrophysics Data System (ADS)

    Jones, R. Lynne

    2011-01-01

    LSST will generate 100 petabytes of images and 20 petabytes of catalogs, covering 18,000-20,000 square degrees of area sampled every few days, throughout a total of ten years of time -- all publicly available and exquisitely calibrated. The primary access to this data will be through Data Access Centers (DACs). DACs will provide access to catalogs of sources (single detections from individual images) and objects (associations of sources from multiple images). Simple user interfaces or direct SQL queries at the DAC can return user-specified portions of data from catalogs or images. More complex manipulations of the data, such as calculating multi-point correlation functions or creating alternative photo-z measurements on terabyte-scale data, can be completed with the DAC's own resources. Even more data-intensive computations requiring access to large numbers of image pixels on petabyte-scale could also be conducted at the DAC, using compute resources allocated in a similar manner to a TAC. DAC resources will be available to all individuals in member countries or institutes and LSST science collaborations. DACs will also assist investigators with requests for allocations at national facilities such as the Petascale Computing Facility, TeraGrid, and Open Science Grid. Using data on this scale requires new approaches to accessibility and analysis which are being developed through interactions with the LSST Science Collaborations. We are producing simulated images (as might be acquired by LSST) based on models of the universe and generating catalogs from these images (as well as from the base model) using the LSST data management framework in a series of data challenges. The resulting images and catalogs are being made available to the science collaborations to verify the algorithms and develop user interfaces. All LSST software is open source and available online, including preliminary catalog formats. We encourage feedback from the community.

  4. The Grism Lens-Amplified Survey from Space (GLASS). VI. Comparing the Mass and Light in MACS J0416.1-2403 Using Frontier Field Imaging and GLASS Spectroscopy

    NASA Astrophysics Data System (ADS)

    Hoag, A.; Huang, K.-H.; Treu, T.; Bradač, M.; Schmidt, K. B.; Wang, X.; Brammer, G. B.; Broussard, A.; Amorin, R.; Castellano, M.; Fontana, A.; Merlin, E.; Schrabback, T.; Trenti, M.; Vulcani, B.

    2016-11-01

    We present a model using both strong and weak gravitational lensing of the galaxy cluster MACS J0416.1-2403, constrained using spectroscopy from the Grism Lens-Amplified Survey from Space (GLASS) and Hubble Frontier Fields (HFF) imaging data. We search for emission lines in known multiply imaged sources in the GLASS spectra, obtaining secure spectroscopic redshifts of 30 multiple images belonging to 15 distinct source galaxies. The GLASS spectra provide the first spectroscopic measurements for five of the source galaxies. The weak lensing signal is acquired from 884 galaxies in the F606W HFF image. By combining the weak lensing constraints with 15 multiple image systems with spectroscopic redshifts and nine multiple image systems with photometric redshifts, we reconstruct the gravitational potential of the cluster on an adaptive grid. The resulting map of total mass density is compared with a map of stellar mass density obtained from the deep Spitzer Frontier Fields imaging data to study the relative distribution of stellar and total mass in the cluster. We find that the projected stellar mass to total mass ratio, f⋆, varies considerably with the stellar surface mass density. The mean projected stellar mass to total mass ratio is ⟨f⋆⟩ = 0.009 ± 0.003 (stat.), but with a systematic error as large as 0.004-0.005, dominated by the choice of the initial mass function. We find agreement with several recent measurements of f⋆ in massive cluster environments. The lensing maps of convergence, shear, and magnification are made available to the broader community in the standard HFF format.

  5. Nanoscale x-ray imaging of circuit features without wafer etching.

    PubMed

    Deng, Junjing; Hong, Young Pyo; Chen, Si; Nashed, Youssef S G; Peterka, Tom; Levi, Anthony J F; Damoulakis, John; Saha, Sayan; Eiles, Travis; Jacobsen, Chris

    2017-03-01

    Modern integrated circuits (ICs) employ a myriad of materials organized at nanoscale dimensions, and certain critical tolerances must be met for them to function. To understand departures from intended functionality, it is essential to examine ICs as manufactured so as to adjust design rules, ideally in a non-destructive way so that imaged structures can be correlated with electrical performance. Electron microscopes can do this on thin regions, or on exposed surfaces, but the required processing alters or even destroys functionality. Microscopy with multi-keV x-rays provides an alternative approach with greater penetration, but the spatial resolution of x-ray imaging lenses has not allowed one to see the required detail in the latest generation of ICs. X-ray ptychography provides a way to obtain images of ICs without lens-imposed resolution limits, with past work delivering 20-40 nm resolution on thinned ICs. We describe a simple model for estimating the required exposure, and use it to estimate the future potential for this technique. Here we show for the first time that this approach can be used to image circuit detail through an unprocessed 300 μm thick silicon wafer, with sub-20 nm detail clearly resolved after mechanical polishing to 240 μm thickness was used to eliminate image contrast caused by Si wafer surface scratches. By using continuous x-ray scanning, massively parallel computation, and a new generation of synchrotron light sources, this should enable entire non-etched ICs to be imaged to 10 nm resolution or better while maintaining their ability to function in electrical tests.

  6. Nanoscale x-ray imaging of circuit features without wafer etching

    DOE PAGES

    Deng, Junjing; Hong, Young Pyo; Chen, Si; ...

    2017-03-24

    Modern integrated circuits (ICs) employ a myriad of materials organized at nanoscale dimensions, and certain critical tolerances must be met for them to function. To understand departures from intended functionality, it is essential to examine ICs as manufactured so as to adjust design rules, ideally in a nondestructive way so that imaged structures can be correlated with electrical performance. Electron microscopes can do this on thin regions or on exposed surfaces, but the required processing alters or even destroys functionality. Microscopy with multi-keV x-rays provides an alternative approach with greater penetration, but the spatial resolution of x-ray imaging lenses has not allowed one to see the required detail in the latest generation of ICs. X-ray ptychography provides a way to obtain images of ICs without lens-imposed resolution limits, with past work delivering 20–40-nm resolution on thinned ICs. We describe a simple model for estimating the required exposure and use it to estimate the future potential for this technique. Here we show that this approach can be used to image circuit detail through an unprocessed 300-μm-thick silicon wafer, with sub-20-nm detail clearly resolved after mechanical polishing to 240-μm thickness was used to eliminate image contrast caused by Si wafer surface scratches. By using continuous x-ray scanning, massively parallel computation, and a new generation of synchrotron light sources, this should enable entire nonetched ICs to be imaged to 10-nm resolution or better while maintaining their ability to function in electrical tests.

  7. Nanoscale x-ray imaging of circuit features without wafer etching

    NASA Astrophysics Data System (ADS)

    Deng, Junjing; Hong, Young Pyo; Chen, Si; Nashed, Youssef S. G.; Peterka, Tom; Levi, Anthony J. F.; Damoulakis, John; Saha, Sayan; Eiles, Travis; Jacobsen, Chris

    2017-03-01

    Modern integrated circuits (ICs) employ a myriad of materials organized at nanoscale dimensions, and certain critical tolerances must be met for them to function. To understand departures from intended functionality, it is essential to examine ICs as manufactured so as to adjust design rules ideally in a nondestructive way so that imaged structures can be correlated with electrical performance. Electron microscopes can do this on thin regions or on exposed surfaces, but the required processing alters or even destroys functionality. Microscopy with multi-keV x rays provides an alternative approach with greater penetration, but the spatial resolution of x-ray imaging lenses has not allowed one to see the required detail in the latest generation of ICs. X-ray ptychography provides a way to obtain images of ICs without lens-imposed resolution limits with past work delivering 20-40-nm resolution on thinned ICs. We describe a simple model for estimating the required exposure and use it to estimate the future potential for this technique. Here we show that this approach can be used to image circuit detail through an unprocessed 300-μm-thick silicon wafer with sub-20-nm detail clearly resolved after mechanical polishing to 240-μm thickness was used to eliminate image contrast caused by Si wafer surface scratches. By using continuous x-ray scanning, massively parallel computation, and a new generation of synchrotron light sources, this should enable entire nonetched ICs to be imaged to 10-nm resolution or better while maintaining their ability to function in electrical tests.

  8. Open Source High Content Analysis Utilizing Automated Fluorescence Lifetime Imaging Microscopy.

    PubMed

    Görlitz, Frederik; Kelly, Douglas J; Warren, Sean C; Alibhai, Dominic; West, Lucien; Kumar, Sunil; Alexandrov, Yuriy; Munro, Ian; Garcia, Edwin; McGinty, James; Talbot, Clifford; Serwa, Remigiusz A; Thinon, Emmanuelle; da Paola, Vincenzo; Murray, Edward J; Stuhmeier, Frank; Neil, Mark A A; Tate, Edward W; Dunsby, Christopher; French, Paul M W

    2017-01-18

    We present an open source high content analysis instrument utilizing automated fluorescence lifetime imaging (FLIM) for assaying protein interactions using Förster resonance energy transfer (FRET) based readouts of fixed or live cells in multiwell plates. This provides a means to screen for cell signaling processes read out using intramolecular FRET biosensors or intermolecular FRET of protein interactions such as oligomerization or heterodimerization, which can be used to identify binding partners. We describe here the functionality of this automated multiwell plate FLIM instrumentation and present exemplar data from our studies of HIV Gag protein oligomerization and a time course of a FRET biosensor in live cells. A detailed description of the practical implementation is then provided with reference to a list of hardware components and a description of the open source data acquisition software written in µManager. The application of FLIMfit, an open source MATLAB-based client for the OMERO platform, to analyze arrays of multiwell plate FLIM data is also presented. The protocols for imaging fixed and live cells are outlined and a demonstration of an automated multiwell plate FLIM experiment using cells expressing fluorescent protein-based FRET constructs is presented. This is complemented by a walk-through of the data analysis for this specific FLIM FRET data set.

  9. Open Source High Content Analysis Utilizing Automated Fluorescence Lifetime Imaging Microscopy

    PubMed Central

    Warren, Sean C.; Alibhai, Dominic; West, Lucien; Kumar, Sunil; Alexandrov, Yuriy; Munro, Ian; Garcia, Edwin; McGinty, James; Talbot, Clifford; Serwa, Remigiusz A.; Thinon, Emmanuelle; da Paola, Vincenzo; Murray, Edward J.; Stuhmeier, Frank; Neil, Mark A. A.; Tate, Edward W.; Dunsby, Christopher; French, Paul M. W.

    2017-01-01

    We present an open source high content analysis instrument utilizing automated fluorescence lifetime imaging (FLIM) for assaying protein interactions using Förster resonance energy transfer (FRET) based readouts of fixed or live cells in multiwell plates. This provides a means to screen for cell signaling processes read out using intramolecular FRET biosensors or intermolecular FRET of protein interactions such as oligomerization or heterodimerization, which can be used to identify binding partners. We describe here the functionality of this automated multiwell plate FLIM instrumentation and present exemplar data from our studies of HIV Gag protein oligomerization and a time course of a FRET biosensor in live cells. A detailed description of the practical implementation is then provided with reference to a list of hardware components and a description of the open source data acquisition software written in µManager. The application of FLIMfit, an open source MATLAB-based client for the OMERO platform, to analyze arrays of multiwell plate FLIM data is also presented. The protocols for imaging fixed and live cells are outlined and a demonstration of an automated multiwell plate FLIM experiment using cells expressing fluorescent protein-based FRET constructs is presented. This is complemented by a walk-through of the data analysis for this specific FLIM FRET data set. PMID:28190060

  10. No scanning depth imaging system based on TOF

    NASA Astrophysics Data System (ADS)

    Sun, Rongchun; Piao, Yan; Wang, Yu; Liu, Shuo

    2016-03-01

    To quickly obtain a 3D model of real-world objects, multi-point ranging is very important. However, traditional measuring methods usually adopt point-by-point or line-by-line measurement, which is slow and inefficient. In this paper, a no-scanning depth imaging system based on TOF (time of flight) is proposed. The system is composed of a light source circuit, a special infrared image sensor module, an image data processor and controller, a data cache circuit, a communication circuit, and so on. According to the working principle of TOF measurement, an image sequence is collected by the high-speed CMOS sensor, the distance information is obtained by identifying the phase difference, and the amplitude image is also calculated. Experiments were conducted, and the results show that the depth imaging system achieves no-scanning depth imaging with good performance.
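
    The sketch below illustrates the phase-to-distance step for a continuous-wave TOF sensor using the common four-bucket demodulation scheme; the scheme and the modulation frequency are assumptions made for illustration, not details taken from the paper.

        # Assumed scheme: A0..A3 are correlation samples at 0, 90, 180 and 270 degrees.
        import numpy as np

        C = 299_792_458.0                                       # speed of light (m/s)

        def tof_depth(A0, A1, A2, A3, f_mod=20e6):
            phase = np.arctan2(A3 - A1, A0 - A2) % (2 * np.pi)  # phase difference in [0, 2*pi)
            depth = C * phase / (4 * np.pi * f_mod)             # distance from half the round-trip delay
            amplitude = 0.5 * np.hypot(A3 - A1, A0 - A2)        # amplitude image
            return depth, amplitude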

  11. Algorithm for Wavefront Sensing Using an Extended Scene

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Green, Joseph; Ohara, Catherine

    2008-01-01

    A recently conceived algorithm for processing image data acquired by a Shack-Hartmann (SH) wavefront sensor is not subject to the restriction, previously applicable in SH wavefront sensing, that the image be formed from a distant star or other equivalent of a point light source. That is to say, the image could be of an extended scene. (One still has the option of using a point source.) The algorithm can be implemented in commercially available software on ordinary computers. The steps of the algorithm are the following: 1. Suppose that the image comprises M sub-images. Determine the x,y Cartesian coordinates of the centers of these sub-images and store them in a 2xM matrix. 2. Within each sub-image, choose an NxN-pixel cell centered at the coordinates determined in step 1. For the ith sub-image, let this cell be denoted as s_i(x,y). Let the cell of another sub-image (preferably near the center of the whole extended-scene image) be designated a reference cell, denoted r(x,y). 3. Calculate the fast Fourier transforms of the sub-sub-images in the central N'xN' portions (where N' < N and both are preferably powers of 2) of r(x,y) and s_i(x,y). 4. Multiply the two transforms to obtain a cross-correlation function C_i(u,v) in the Fourier domain. Then let the phase of C_i(u,v) constitute a phase function, phi(u,v). 5. Fit u and v slopes to phi(u,v) over a small u,v subdomain. 6. Compute the fast Fourier transform, S_i(u,v), of the full NxN cell s_i(x,y). Multiply this transform by the u and v phase slopes obtained in step 5. Then compute the inverse fast Fourier transform of the product. 7. Repeat steps 4 through 6 in an iteration loop, cumulating the u and v slopes, until a maximum iteration number is reached or the change in image shift becomes smaller than a predetermined tolerance. 8. Repeat steps 4 through 7 for the cells of all other sub-images.
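
    A compact sketch of steps 3 through 5 is given below (a hypothetical implementation, not the original software): the cross-correlation phase is computed in the Fourier domain and a plane is fitted to it over a small low-frequency subdomain; the fitted slopes give the shift of the sub-image relative to the reference cell in pixels.

        # Shift estimation from the slopes of the cross-correlation phase (steps 3-5 above).
        import numpy as np

        def phase_slope_shift(ref, sub, keep=4):
            """Shift (row, col) of square cell `sub` relative to `ref`, in pixels."""
            n = ref.shape[0]
            C = np.fft.fft2(ref) * np.conj(np.fft.fft2(sub))   # cross-correlation in the Fourier domain
            phase = np.angle(C)
            f = np.fft.fftfreq(n)                               # spatial frequency (cycles per pixel)
            U, V = np.meshgrid(f, f, indexing="ij")
            mask = (np.abs(U) <= keep / n) & (np.abs(V) <= keep / n)   # small low-frequency subdomain
            A = np.column_stack([U[mask], V[mask]])
            slopes, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)   # fit a phase plane
            return slopes / (2 * np.pi)                         # slope of 2*pi*f*d gives d in pixels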

  12. The WFIRST Galaxy Survey Exposure Time Calculator

    NASA Technical Reports Server (NTRS)

    Hirata, Christopher M.; Gehrels, Neil; Kneib, Jean-Paul; Kruk, Jeffrey; Rhodes, Jason; Wang, Yun; Zoubian, Julien

    2013-01-01

    This document describes the exposure time calculator for the Wide-Field Infrared Survey Telescope (WFIRST) high-latitude survey. The calculator works in both imaging and spectroscopic modes. In addition to the standard ETC functions (e.g. background and SN determination), the calculator integrates over the galaxy population and forecasts the density and redshift distribution of galaxy shapes usable for weak lensing (in imaging mode) and the detected emission lines (in spectroscopic mode). The source code is made available for public use.

  13. s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography

    PubMed Central

    Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai

    2016-01-01

    EEG source imaging enables us to reconstruct current density in the brain from the electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than that of the potential dipole locations, as well as to noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the relevant total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computation compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
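
    The sketch below illustrates how an ℓ1−2 penalty can be handled with DCA: the concave term -||x||_2 is linearized at the current iterate, and the resulting convex ℓ1 subproblem is solved approximately by a few ISTA steps. It omits the vTGV term and is a schematic stand-in for, not a reproduction of, the paper's DCA/ADMM solver; A and y denote a generic lead-field matrix and measurement vector.

        # DCA for min 0.5*||A x - y||^2 + lam*(||x||_1 - ||x||_2), inner subproblems solved by ISTA.
        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def l1_minus_l2_dca(A, y, lam=0.1, outer=10, inner=50):
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2               # ISTA step from the Lipschitz constant
            for _ in range(outer):
                nrm = np.linalg.norm(x)
                v = x / nrm if nrm > 0 else np.zeros_like(x)     # subgradient of ||x||_2 at x
                for _ in range(inner):                           # convex subproblem: l1 + linearized -l2
                    grad = A.T @ (A @ x - y) - lam * v
                    x = soft(x - step * grad, step * lam)
            return x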

  14. Design of system calibration for effective imaging

    NASA Astrophysics Data System (ADS)

    Varaprasad Babu, G.; Rao, K. M. M.

    2006-12-01

    A CCD-based characterization setup comprising a light source, a CCD linear array, electronics for signal conditioning/amplification, and a PC interface has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images produced by the super-resolution technique with multiple overlaps and yaw-rotated images at different view angles. The setup also generates images at different densities to analyze the response of the detector port-wise. The light intensity produced by the source needs to be calibrated over the FOV for proper imaging by the highly sensitive CCD detector. One approach is to design a complex integrating sphere arrangement, which is costly for such applications. Another approach is to provide a suitable intensity feedback correction wherein the current through the lamp is controlled in a closed-loop arrangement; this method is generally used in applications where the light source is a point source. The third method is to control the exposure time inversely to the lamp variations when the lamp intensity cannot be controlled; in this method, the light intensity at the start of each line is sampled and the correction factor is applied to the full line. The fourth method is to provide correction through a look-up table, where the responses of all the detectors are normalized through a digital transfer function. The fifth method is to have a light-line arrangement where light from a single source is delivered through multiple fiber-optic cables arranged in a line; this is generally applicable and economical for low-width cases. In our application, a new method is used wherein an inverse multi-density filter is designed, which provides an effective calibration for the full swath even at low light intensities: the light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes certain novel techniques for the design and implementation of system calibration for effective imaging, to produce better-quality data products especially while handling high-resolution data.
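
    The fourth method mentioned above (normalizing the detector responses through a look-up table) can be illustrated with a simple two-point non-uniformity correction; the use of a dark frame and a uniformly illuminated frame as the calibration points is a simplifying assumption for illustration, not the paper's exact procedure.

        # Two-point per-detector normalization: maps every element onto a common transfer function.
        import numpy as np

        def build_lut(dark, flat):
            """Per-detector gain and offset from a dark frame and a uniform-illumination frame."""
            gain = (flat.mean() - dark.mean()) / np.maximum(flat - dark, 1e-9)
            return gain, dark                                  # offset is the dark response

        def apply_lut(raw, gain, offset):
            """Normalize a raw line/image so all detector elements share the same response."""
            return (raw - offset) * gain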

  15. HST/WFC3: understanding and mitigating radiation damage effects in the CCD detectors

    NASA Astrophysics Data System (ADS)

    Baggett, S. M.; Anderson, J.; Sosey, M.; Gosmeyer, C.; Bourque, M.; Bajaj, V.; Khandrika, H.; Martlin, C.

    2016-07-01

    At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel is a 4096x4096 pixel e2v CCD array. While these detectors continue to perform extremely well after more than 7 years in low-earth orbit, the cumulative effects of radiation damage are becoming increasingly evident. The result is a continual increase of the hotpixel population and the progressive loss in charge-transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low background images. In this report, we summarize the radiation damage effects seen in WFC3/UVIS and the evolution of the CTE losses as a function of time, source brightness, and image-background level. In addition, we discuss the available mitigation options, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low-background images for a relatively small noise penalty. Currently, all WFC3 observers are encouraged to consider post-flash for images with low backgrounds. Finally, a pixel-based CTE correction is available for use after the images have been acquired. Similar to the software in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an observationally-defined model of how much charge is captured and released in order to reconstruct the image. As of Feb 2016, the pixel-based CTE correction is part of the automated WFC3 calibration pipeline. Observers with pre-existing data may request their images from MAST (Mikulski Archive for Space Telescopes) to obtain the improved products.

  16. Hyperspectral Fluorescence and Reflectance Imaging Instrument

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; O'Neal, S. Duane; Lanoue, Mark; Russell, Jeffrey

    2008-01-01

    The system is a single hyperspectral imaging instrument that has the unique capability to acquire both fluorescence and reflectance high-spatial-resolution data that is inherently spatially and spectrally registered. Potential uses of this instrument include plant stress monitoring, counterfeit document detection, biomedical imaging, forensic imaging, and general materials identification. Until now, reflectance and fluorescence spectral imaging have been performed by separate instruments. Neither a reflectance spectral image nor a fluorescence spectral image alone yields as much information about a target surface as does a combination of the two modalities. Before this system was developed, to benefit from this combination, analysts needed to perform time-consuming post-processing efforts to co-register the reflective and fluorescence information. With this instrument, the inherent spatial and spectral registration of the reflectance and fluorescence images minimizes the need for this post-processing step. The main challenge for this technology is to detect the fluorescence signal in the presence of a much stronger reflectance signal. To meet this challenge, the instrument modulates artificial light sources from ultraviolet through the visible to the near-infrared part of the spectrum; in this way, both the reflective and fluorescence signals can be measured through differencing processes to optimize fluorescence and reflectance spectra as needed. The main functional components of the instrument are a hyperspectral imager, an illumination system, and an image-plane scanner. The hyperspectral imager is a one-dimensional (line) imaging spectrometer that includes a spectrally dispersive element and a two-dimensional focal plane detector array. The spectral range of the current imaging spectrometer is between 400 to 1,000 nm, and the wavelength resolution is approximately 3 nm. The illumination system consists of narrowband blue, ultraviolet, and other discrete wavelength light-emitting-diode (LED) sources and white-light LED sources designed to produce consistently spatially stable light. White LEDs provide illumination for the measurement of reflectance spectra, while narrowband blue and UV LEDs are used to excite fluorescence. Each spectral type of LED can be turned on or off depending on the specific remote-sensing process being performed. Uniformity of illumination is achieved by using an array of LEDs and/or an integrating sphere or other diffusing surface. The image plane scanner uses a fore optic with a field of view large enough to provide an entire scan line on the image plane. It builds up a two-dimensional image in pushbroom fashion as the target is scanned across the image plane either by moving the object or moving the fore optic. For fluorescence detection, spectral filtering of a narrowband light illumination source is sometimes necessary to minimize the interference of the source spectrum wings with the fluorescence signal. Spectral filtering is achieved with optical interference filters and absorption glasses. This dual spectral imaging capability will enable the optimization of reflective, fluorescence, and fused datasets as well as a cost-effective design for multispectral imaging solutions. This system has been used in plant stress detection studies and in currency analysis.

  17. ScanImage: flexible software for operating laser scanning microscopes.

    PubMed

    Pologruto, Thomas A; Sabatini, Bernardo L; Svoboda, Karel

    2003-05-17

    Laser scanning microscopy is a powerful tool for analyzing the structure and function of biological specimens. Although numerous commercial laser scanning microscopes exist, some of the more interesting and challenging applications demand custom design. A major impediment to custom design is the difficulty of building custom data acquisition hardware and writing the complex software required to run the laser scanning microscope. We describe a simple, software-based approach to operating a laser scanning microscope without the need for custom data acquisition hardware. Data acquisition and control of laser scanning are achieved through standard data acquisition boards. The entire burden of signal integration and image processing is placed on the CPU of the computer. We quantitate the effectiveness of our data acquisition and signal conditioning algorithm under a variety of conditions. We implement our approach in an open source software package (ScanImage) and describe its functionality. We present ScanImage, software to run a flexible laser scanning microscope that allows easy custom design.

  18. Image formation of volume holographic microscopy using point spread functions

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Oh, Se Baek; Kou, Shan Shan; Lee, Justin; Sheppard, Colin J. R.; Barbastathis, George

    2010-04-01

    We present a theoretical formulation to quantify the imaging properties of volume holographic microscopy (VHM). Volume holograms are formed by exposure of a photosensitive recording material to the interference of two mutually coherent optical fields. Recently, it has been shown that a volume holographic pupil has spatial and spectral sectioning capability for fluorescent samples. Here, we analyze the point spread function (PSF) to assess the imaging behavior of the VHM with a point source and detector. The coherent PSF of the VHM is derived, and the results are compared with those from conventional microscopy, and confocal microscopy with point and slit apertures. According to our analysis, the PSF of the VHM can be controlled in the lateral direction by adjusting the parameters of the VH. Compared with confocal microscopes, the performance of the VHM is comparable or even potentially better, and the VHM is also able to achieve real-time and three-dimensional (3D) imaging due to its multiplexing ability.

  19. Unification of Speaker and Meaning in Language Comprehension: An fMRI Study

    ERIC Educational Resources Information Center

    Tesink, Cathelijne M. J. Y.; Petersson, Karl Magnus; van Berkum, Jos J. A.; van den Brink, Danielle; Buitelaar, Jan K.; Hagoort, Peter

    2009-01-01

    When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information, certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the…

  20. Open source software in a practical approach for post processing of radiologic images.

    PubMed

    Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea

    2015-03-01

    The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs including DICOM viewers and various tools (converters, DICOM header editors, etc.). The programs selected all meet basic requirements such as free availability, stand-alone operation, a graphical user interface, ease of installation and advanced features beyond simple image display. Data import, data export, metadata handling, 2D viewing, 3D viewing, platform support and usability of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score higher than or equal to eight. Among them, five obtained a score of 9 (3D Slicer, MedINRIA, MITK 3M3, VolView and VR Render), while OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.

  1. The electromagnetic interference of mobile phones on the function of a γ-camera.

    PubMed

    Javadi, Hamid; Azizmohammadi, Zahra; Mahmoud Pashazadeh, Ali; Neshandar Asli, Isa; Moazzeni, Taleb; Baharfar, Nastaran; Shafiei, Babak; Nabipour, Iraj; Assadi, Majid

    2014-03-01

    The aim of the present study is to evaluate whether or not the electromagnetic field generated by mobile phones interferes with the function of a SPECT γ-camera during data acquisition. We tested the effects of 7 models of mobile phones on 1 SPECT γ-camera. The mobile phones were tested when making a call, in ringing mode, and in standby mode. The γ-camera function was assessed during data acquisition from a planar source and a point source of Tc with activities of 10 mCi and 3 mCi, respectively. A significant visual decrease in count number was considered to be electromagnetic interference (EMI). The percentage of induced EMI with the γ-camera per mobile phone was in the range of 0% to 100%. EMI was mainly observed in the first seconds of ringing and then diminished in the following frames. Mobile phones are portable sources of electromagnetic radiation, and they can interfere with the function of SPECT γ-cameras, leading to adverse effects on the quality of the acquired images.

  2. Structural and functional human retinal imaging with a fiber-based visible light OCT ophthalmoscope (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Chong, Shau Poh; Bernucci, Marcel T.; Borycki, Dawid; Radhakrishnan, Harsha; Srinivasan, Vivek J.

    2017-02-01

    Visible light is absorbed by intrinsic chromophores such as photopigment, melanin, and hemoglobin, and scattered by subcellular structures, all of which are potential retinal disease biomarkers. Recently, high-resolution quantitative measurement and mapping of hemoglobin concentrations was demonstrated using visible light Optical Coherence Tomography (OCT). Yet, most high-resolution visible light OCT systems adopt free-space, or bulk, optical setups, which could limit clinical applications. Here, the construction of a multi-functional fiber-optic OCT system for human retinal imaging with <2.5 micron axial resolution is described. A detailed noise characterization of two supercontinuum light sources with differing pulse repetition rates is presented. The higher-repetition-rate, lower-noise source is found to enable a sensitivity of 87 dB with 0.1 mW incident power at the cornea and a 98 microsecond exposure time. Using a broadband, asymmetric, fused single-mode fiber coupler designed for visible wavelengths, the sample arm is integrated into an ophthalmoscope platform, rendering it portable and suitable for clinical use. In vivo anatomical, Doppler, and spectroscopic imaging of the human retina is further demonstrated using a single oversampled B-scan. For spectroscopic fitting of oxyhemoglobin (HbO2) and deoxyhemoglobin (Hb) content in the retinal vessels, a noise bias-corrected absorbance spectrum is estimated using a sliding short-time Fourier transform of the complex OCT signal and fit using a model of light absorption and scattering. This yielded the products of path length (L) and molar concentration, L·C_HbO2 and L·C_Hb. Based on these results, we conclude that high-resolution visible light OCT has potential for depth-resolved functional imaging of the eye.

  3. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as CLEAN. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry, with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e. spatially band-limited. Finally, we show through simulations the efficiency of our method, for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.
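
    The block-coordinate forward-backward algorithm mentioned above alternates, for each block of variables, a gradient step on the smooth data-fidelity term with a proximal step on the non-smooth prior. The sketch below shows a single such update with an l1 (sparsity) prior; it is a generic illustration under our own naming, not the authors' joint calibration-and-imaging code.

    ```python
    import numpy as np

    def soft_threshold(x, tau):
        """Proximal operator of the l1 norm (sparsity prior)."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def forward_backward_step(x, grad_f, step, reg):
        """One forward-backward update: gradient step on the data-fidelity term
        followed by the proximal step on the non-smooth regulariser."""
        return soft_threshold(x - step * grad_f(x), step * reg)

    # In a block-coordinate scheme, such updates alternate between the image block
    # and the DDE (calibration) blocks, each with its own prior and step size.
    ```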

  4. Imaging an 80 au radius dust ring around the F5V star HD 157587

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Millar-Blanchaer, Maxwell A.; Wang, Jason J.; Kalas, Paul

    Here, we present H-band near-infrared polarimetric imaging observations of the F5V star HD 157587 obtained with the Gemini Planet Imager (GPI) that reveal the debris disk as a bright ring structure at a separation of ~80–100 au. The new GPI data complement recent Hubble Space Telescope/STIS observations that show the disk extending out to over 500 au. The GPI image displays a strong asymmetry along the projected minor axis as well as a fainter asymmetry along the projected major axis. We associate the minor and major axis asymmetries with polarized forward scattering and a possible stellocentric offset, respectively. To constrain the disk geometry, we fit two separate disk models to the polarized image, each using a different scattering phase function. Both models favor a disk inclination of ~70° and a 1.5 ± 0.6 au stellar offset in the plane of the sky along the projected major axis of the disk. We find that the stellar offset in the disk plane, perpendicular to the projected major axis is degenerate with the form of the scattering phase function and remains poorly constrained. The disk is not recovered in total intensity due in part to strong adaptive optics residuals, but we recover three point sources. Considering the system's proximity to the galactic plane and the point sources' positions relative to the disk, we consider it likely that they are background objects and unrelated to the disk's offset from the star.

  5. The Cardiac Atlas Project--an imaging database for computational modeling and statistical atlases of the heart.

    PubMed

    Fonseca, Carissa G; Backhaus, Michael; Bluemke, David A; Britten, Randall D; Chung, Jae Do; Cowan, Brett R; Dinov, Ivo D; Finn, J Paul; Hunter, Peter J; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Medrano-Gracia, Pau; Shivkumar, Kalyanam; Suinesiaputra, Avan; Tao, Wenchao; Young, Alistair A

    2011-08-15

    Integrative mathematical and statistical models of cardiac anatomy and physiology can play a vital role in understanding cardiac disease phenotype and planning therapeutic strategies. However, the accuracy and predictive power of such models is dependent upon the breadth and depth of noninvasive imaging datasets. The Cardiac Atlas Project (CAP) has established a large-scale database of cardiac imaging examinations and associated clinical data in order to develop a shareable, web-accessible, structural and functional atlas of the normal and pathological heart for clinical, research and educational purposes. A goal of CAP is to facilitate collaborative statistical analysis of regional heart shape and wall motion and characterize cardiac function among and within population groups. Three main open-source software components were developed: (i) a database with web-interface; (ii) a modeling client for 3D + time visualization and parametric description of shape and motion; and (iii) open data formats for semantic characterization of models and annotations. The database was implemented using a three-tier architecture utilizing MySQL, JBoss and Dcm4chee, in compliance with the DICOM standard to provide compatibility with existing clinical networks and devices. Parts of Dcm4chee were extended to access image specific attributes as search parameters. To date, approximately 3000 de-identified cardiac imaging examinations are available in the database. All software components developed by the CAP are open source and are freely available under the Mozilla Public License Version 1.1 (http://www.mozilla.org/MPL/MPL-1.1.txt). The project is accessible at http://www.cardiacatlas.org (contact: a.young@auckland.ac.nz). Supplementary data are available at Bioinformatics online.

  6. Development of a low-cost, 11 µm spectral domain optical coherence tomography surface profilometry prototype

    NASA Astrophysics Data System (ADS)

    Suliali, Nyasha J.; Baricholo, Peter; Neethling, Pieter H.; Rohwer, Erich G.

    2017-06-01

    A spectral-domain Optical Coherence Tomography (OCT) surface profilometry prototype has been developed for the purpose of surface metrology of optical elements. The prototype consists of a light source, spectral interferometer, sample fixture and software currently running on Microsoft® Windows platforms. In this system, a broadband light-emitting diode beam is focused into a Michelson interferometer with a plane mirror as its sample fixture. At the interferometer output, spectral interferograms of broadband sources were measured using a Czerny-Turner mount monochromator with a 2048-element complementary metal oxide semiconductor linear array as the detector. The software performs importation and interpolation of interferometer spectra to pre-condition the data for image computation. One-dimensional axial OCT images were computed by Fourier transformation of the measured spectra. A first reflection surface profilometry (FRSP) algorithm was then formulated to perform imaging of step-function-surfaced samples. The algorithm reconstructs two-dimensional colour-scaled slice images by concatenation of 21 and 13 axial scans to form 10 mm and 3.0 mm slices, respectively. Measured spectral interferograms, computed interference fringe signals and depth reflectivity profiles were comparable to simulations and correlated to displacements of a single reflector linearly translated about the arm null-mismatch point. Surface profile images of a double-step-function-surfaced sample, embedded with inclination and crack detail, were plotted with an axial resolution of 11 μm. The surface shape, defects and misalignment relative to the incident beam were detected to the order of a micron, confirming the high resolution of the developed system compared to electro-mechanical surface profilometry techniques.
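
    The Fourier-transform step at the core of the record above (interpolating the measured spectrum onto a uniform wavenumber grid and transforming it into a depth profile) can be sketched as follows. This is a generic spectral-domain OCT A-scan computation with placeholder names, not the prototype's FRSP software.

    ```python
    import numpy as np

    def a_scan(spectrum, wavelengths):
        """Compute a depth reflectivity profile from one spectral interferogram.

        spectrum:    detector counts per pixel (1D array)
        wavelengths: calibrated wavelength of each pixel, in metres (ascending)
        """
        k = 2 * np.pi / wavelengths                    # convert to wavenumber (descending)
        k_uniform = np.linspace(k.min(), k.max(), k.size)
        s_uniform = np.interp(k_uniform, k[::-1], spectrum[::-1])  # resample on a uniform k grid
        s_uniform = s_uniform - s_uniform.mean()       # suppress the non-interferometric DC term
        depth_profile = np.abs(np.fft.ifft(s_uniform))
        return depth_profile[: k.size // 2]            # keep positive depths only
    ```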

  7. Imaging an 80 au radius dust ring around the F5V star HD 157587

    DOE PAGES

    Millar-Blanchaer, Maxwell A.; Wang, Jason J.; Kalas, Paul; ...

    2016-10-21

    Here, we present H-band near-infrared polarimetric imaging observations of the F5V star HD 157587 obtained with the Gemini Planet Imager (GPI) that reveal the debris disk as a bright ring structure at a separation of ~80–100 au. The new GPI data complement recent Hubble Space Telescope/STIS observations that show the disk extending out to over 500 au. The GPI image displays a strong asymmetry along the projected minor axis as well as a fainter asymmetry along the projected major axis. We associate the minor and major axis asymmetries with polarized forward scattering and a possible stellocentric offset, respectively. To constrain the disk geometry, we fit two separate disk models to the polarized image, each using a different scattering phase function. Both models favor a disk inclination of ~70° and a 1.5 ± 0.6 au stellar offset in the plane of the sky along the projected major axis of the disk. We find that the stellar offset in the disk plane, perpendicular to the projected major axis is degenerate with the form of the scattering phase function and remains poorly constrained. The disk is not recovered in total intensity due in part to strong adaptive optics residuals, but we recover three point sources. Considering the system's proximity to the galactic plane and the point sources' positions relative to the disk, we consider it likely that they are background objects and unrelated to the disk's offset from the star.

  8. A compact neutron scatter camera for field deployment

    DOE PAGES

    Goldsmith, John E. M.; Gerling, Mark D.; Brennan, James S.

    2016-08-23

    Here, we describe a very compact (0.9 m high, 0.4 m diameter, 40 kg) battery-operable neutron scatter camera designed for field deployment. Unlike most other systems, the sixteen liquid-scintillator detection cells are arranged to provide omnidirectional (4π) imaging with sensitivity comparable to a conventional two-plane system. Although designed primarily to operate as a neutron scatter camera for localizing energetic neutron sources, it also functions as a Compton camera for localizing gamma sources. In addition to describing the radionuclide source localization capabilities of this system, we demonstrate how it provides neutron spectra that can distinguish plutonium metal from plutonium oxide sources, in addition to the easier task of distinguishing AmBe from fission sources.

  9. Low Statistics Reconstruction of the Compton Camera Point Spread Function in 3D Prompt-γ Imaging of Ion Beam Therapy

    NASA Astrophysics Data System (ADS)

    Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy

    2013-10-01

    The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma-camera but requires more sophisticated image reconstruction techniques. In this work, we simulate low statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach and an iterative List Mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, that may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However this direction is of less interest in hadrontherapy where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.
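
    The iterative List Mode MLEM algorithm cited above uses the standard multiplicative expectation-maximization update. A compact, binned version of that update is sketched below for orientation; the system matrix, which would encode the Compton-cone projections, and all names are placeholders rather than the prototype's reconstruction code.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """Generic MLEM reconstruction.

        A: system matrix (n_measurements x n_voxels)
        y: measured counts per measurement bin
        """
        x = np.ones(A.shape[1])              # non-negative starting image
        sensitivity = A.sum(axis=0)          # back-projection of a uniform measurement
        for _ in range(n_iter):
            forward = A @ x
            ratio = np.divide(y, forward, out=np.zeros_like(forward), where=forward > 0)
            x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
        return x
    ```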

  10. Imaging of dental material by polarization-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Dichtl, Sabine; Baumgartner, Angela; Hitzenberger, Christoph K.; Moritz, Andreas; Wernisch, Johann; Robl, Barbara; Sattmann, Harald; Leitgeb, Rainer; Sperr, Wolfgang; Fercher, Adolf F.

    1999-05-01

    Partial coherence interferometry (PCI) and optical coherence tomography (OCT) are noninvasive and noncontact techniques for high precision biometry and for obtaining cross-sectional images of biologic structures. OCT was initially introduced to depict the transparent tissue of the eye. It is based on interferometry employing the partial coherence properties of a light source with high spatial coherence but short coherence length to image structures with a resolution of the order of a few microns. Recently this technique has been modified for cross-sectional imaging of dental and periodontal tissues. In vitro and in vivo OCT images have been recorded, which distinguish enamel, cementum and dentin structures and provide detailed structural information on clinical abnormalities. In contrast to conventional OCT, where the magnitude of backscattered light as a function of depth is imaged, polarization-sensitive OCT uses backscattered light to image the magnitude of the birefringence in the sample as a function of depth. First polarization-sensitive OCT recordings show that changes in the mineralization status of enamel or dentin caused by caries or non-caries lesions can result in changes of the polarization state of the light backscattered by dental material. Therefore polarization-sensitive OCT might provide a new diagnostic imaging modality in clinical and research dentistry.

  11. Comparison Study of Three Different Image Reconstruction Algorithms for MAT-MI

    PubMed Central

    Xia, Rongmin; Li, Xu

    2010-01-01

    We report a theoretical study on magnetoacoustic tomography with magnetic induction (MAT-MI). Based on the description of the signal generation mechanism using Green's function, an acoustic dipole model was proposed to describe the acoustic source excited by the Lorentz force. Using Green's function, three kinds of reconstruction algorithms based on different models of the acoustic source (potential energy, vectored acoustic pressure, and divergence of the Lorentz force) were deduced and compared, and corresponding numerical simulations were conducted to compare these three reconstruction algorithms. The computer simulation results indicate that the potential energy method and the vectored pressure method can directly reconstruct the Lorentz force distribution and give a more accurate reconstruction of electrical conductivity. PMID:19846363

  12. System alignment using the Talbot effect

    NASA Astrophysics Data System (ADS)

    Chevallier, Raymond; Le Falher, Eric; Heggarty, Kevin

    1990-08-01

    The Talbot effect is utilized to correct an alignment problem related to a neural network used for image recognition, which required the alignment of a spatial light modulator (SLM) with the input module. A mathematical model which employs the Fresnel diffraction theory is presented to describe the method. The calculation of the diffracted amplitude describes the wavefront sphericity and the original object transmittance function in order to qualify the lateral shift of the Talbot image. Another explanation is set forth in terms of plane-wave illumination in the neural network. Using a Fourier series and by describing planes where all the harmonics are in phase, the reconstruction of Talbot images is explained. The alignment is effective when the lenslet array is aligned on the even Talbot images of the SLM pixels and the incident wave is a plane wave. The alignment is evaluated in terms of source and periodicity errors, tilt of the incident plane waves, and finite object dimensions. The effects of the error sources are concluded to be negligible, the lenslet array is shown to be successfully aligned with the SLM, and other alignment applications are shown to be possible.

  13. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  14. Inflight Calibration of the Lunar Reconnaissance Orbiter Camera Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Humm, D. C.; Robinson, M. S.; Boyd, A. K.; Stelling, R.; Sato, H.; Denevi, B. W.; Braden, S. E.; Bowman-Cisneros, E.; Brylow, S. M.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) has acquired more than 250,000 images of the illuminated lunar surface and over 190,000 observations of space and non-illuminated Moon since 1 January 2010. These images, along with images from the Narrow Angle Camera (NAC) and other Lunar Reconnaissance Orbiter instrument datasets are enabling new discoveries about the morphology, composition, and geologic/geochemical evolution of the Moon. Characterizing the inflight WAC system performance is crucial to scientific and exploration results. Pre-launch calibration of the WAC provided a baseline characterization that was critical for early targeting and analysis. Here we present an analysis of WAC performance from the inflight data. In the course of our analysis we compare and contrast with the pre-launch performance wherever possible and quantify the uncertainty related to various components of the calibration process. We document the absolute and relative radiometric calibration, point spread function, and scattered light sources and provide estimates of sources of uncertainty for spectral reflectance measurements of the Moon across a range of imaging conditions.

  15. Development of a single-shot CCD-based data acquisition system for time-resolved X-ray photoelectron spectroscopy at an X-ray free-electron laser facility

    PubMed Central

    Oura, Masaki; Wagai, Tatsuya; Chainani, Ashish; Miyawaki, Jun; Sato, Hiromi; Matsunami, Masaharu; Eguchi, Ritsuko; Kiss, Takayuki; Yamaguchi, Takashi; Nakatani, Yasuhiro; Togashi, Tadashi; Katayama, Tetsuo; Ogawa, Kanade; Yabashi, Makina; Tanaka, Yoshihito; Kohmura, Yoshiki; Tamasaku, Kenji; Shin, Shik; Ishikawa, Tetsuya

    2014-01-01

    In order to utilize high-brilliance photon sources, such as X-ray free-electron lasers (XFELs), for advanced time-resolved photoelectron spectroscopy (TR-PES), a single-shot CCD-based data acquisition system combined with a high-resolution hemispherical electron energy analyzer has been developed. The system’s design enables it to be controlled by an external trigger signal for single-shot pump–probe-type TR-PES. The basic performance of the system is demonstrated with an offline test, followed by online core-level photoelectron and Auger electron spectroscopy in ‘single-shot image’, ‘shot-to-shot image (image-to-image storage or block storage)’ and ‘shot-to-shot sweep’ modes at soft X-ray undulator beamline BL17SU of SPring-8. In the offline test the typical repetition rate for image-to-image storage mode has been confirmed to be about 15 Hz using a conventional pulse-generator. The function for correcting the shot-to-shot intensity fluctuations of the exciting photon beam, an important requirement for the TR-PES experiments at FEL sources, has been successfully tested at BL17SU by measuring Au 4f photoelectrons with intentionally controlled photon flux. The system has also been applied to hard X-ray PES (HAXPES) in ‘ordinary sweep’ mode as well as shot-to-shot image mode at the 27 m-long undulator beamline BL19LXU of SPring-8 and also at the SACLA XFEL facility. The XFEL-induced Ti 1s core-level spectrum of La-doped SrTiO3 is reported as a function of incident power density. The Ti 1s core-level spectrum obtained at low power density is consistent with the spectrum obtained using the synchrotron source. At high power densities the Ti 1s core-level spectra show space-charge effects which are analysed using a known mean-field model for ultrafast electron packet propagation. The results successfully confirm the capability of the present data acquisition system for carrying out the core-level HAXPES studies of condensed matter induced by the XFEL. PMID:24365935

  16. Prototype of Partial Cutting Tool of Geological Map Images Distributed by Geological Web Map Service

    NASA Astrophysics Data System (ADS)

    Nonogaki, S.; Nemoto, T.

    2014-12-01

    Geological maps and topographical maps play an important role in disaster assessment, resource management, and environmental preservation. This map information has recently been distributed in accordance with Web service standards such as Web Map Service (WMS) and Web Map Tile Service (WMTS). In this study, a partial cutting tool for geological map images distributed by geological WMTS was implemented with Free and Open Source Software. The tool mainly consists of two functions: a display function and a cutting function. The former was implemented using OpenLayers; the latter was implemented using the Geospatial Data Abstraction Library (GDAL). All other small functions were implemented in PHP and Python. As a result, this tool allows not only displaying a WMTS layer in a web browser but also generating a geological map image of the intended area and zoom level. At the moment, available WMTS layers are limited to those distributed by the WMTS for the Seamless Digital Geological Map of Japan. The geological map image can be saved in GeoTIFF and WebGL formats. GeoTIFF is a georeferenced raster format supported by many kinds of Geographical Information Systems. WebGL is useful for confirming the relationship between geology and geography in 3D. In conclusion, the partial cutting tool developed in this study would contribute to creating better conditions for promoting the utilization of geological information. Future work is to increase the number of available WMTS layers and the types of output file format.
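
    For readers unfamiliar with GDAL, the cutting function can be approximated with a few lines of its Python bindings. The sketch below clips a rectangular sub-area out of a georeferenced raster and writes it as GeoTIFF; the file names and bounding box are placeholders, and the actual tool additionally fetches and assembles WMTS tiles (not shown).

    ```python
    from osgeo import gdal

    # projWin is [upper-left x, upper-left y, lower-right x, lower-right y]
    # in the raster's georeferenced coordinate system.
    src = gdal.Open("seamless_geological_map.tif")   # placeholder input mosaic
    gdal.Translate(
        "clipped_area.tif",                          # placeholder output file
        src,
        format="GTiff",
        projWin=[139.60, 35.80, 139.90, 35.60],
    )
    ```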

  17. VizieR Online Data Catalog: Broadband polarisation of radio AGN (O'Sullivan+, 2017)

    NASA Astrophysics Data System (ADS)

    O'Sullivan, S. P.; Purcell, C. R.; Anderson, C. S.; Farnes, J. S.; Sun, X. H.; Gaensler, B. M.

    2017-08-01

    Linear polarisation data as a function of wavelength-squared for 100 extragalactic radio sources, selected to be highly polarised at 1.4GHz. The data presented here were obtained using the Australia Telescope Compact Array (ATCA) over 1.1-3.1GHz (16cm) with 1MHz spectral resolution between 2014 April 19-28. The integrated emission from each source, imaged at 10 MHz intervals, is presented below. See Section 2 for details. (2 data files).

  18. Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.

    2017-12-01

    We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.

  19. A new treatment planning formalism for catheter-based beta sources used in intravascular brachytherapy.

    PubMed

    Patel, N S; Chiu-Tsao, S T; Tsao, H S; Harrison, L B

    2001-01-01

    Intravascular brachytherapy (IVBT) is an emerging modality for the treatment of atherosclerotic lesions in the artery. As part of the refinement in this rapidly evolving modality of treatment, the current simplistic dosimetry approach based on a fixed-point prescription must be challenged by future rigorous dosimetry method employing image-based three-dimensional (3D) treatment planning. The goals of 3D IVBT treatment planning calculations include (1) achieving high accuracy in a slim cylindrical region of interest, (2) accounting for the edge effect around the source ends, and (3) supporting multiple dwell positions. The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable for gamma sources, as well as short beta sources with lengths less than twice the beta particle range. However, for the elongated beta sources and/or seed trains with lengths greater than twice the beta range, a new formalism is required to handle their distinctly different dose characteristics. Specifically, these characteristics consist of (a) flat isodose curves in the central region, (b) steep dose gradient at the source ends, and (c) exponential dose fall-off in the radial direction. In this paper, we present a novel formalism that evolved from TG-60 in maintaining the dose rate as a product of four key quantities. We propose to employ cylindrical coordinates (R, Z, phi), which are more natural and suitable to the slim cylindrical shape of the volume of interest, as opposed to the spherical coordinate system (r, theta, phi) used in the TG-60 formalism. The four quantities used in this formalism include (1) the distribution factor, H(R, Z), (2) the modulation function, M(R, Z), (3) the transverse dose function, h(R), and (4) the reference dose rate at 2 mm along the perpendicular bisector, D(R0=2 mm, Z0=0). The first three are counterparts of the geometry factor, the anisotropy function and the radial dose function in the TG-60 formalism, respectively. The reference dose rate is identical to that recommended by TG-60. The distribution factor is intended to resemble the dose profile due to the spatial distribution of activity in the elongated beta source, and it is a modified Fermi-Dirac function in mathematical form. The utility of this formalism also includes the slow-varying nature of the modulation function, allowing for more accurate treatment planning calculations based on interpolation. The transverse dose function describes the exponential fall-off of the dose in the radial direction, and an exponential or a polynomial can fit it. Simultaneously, the decoupling nature of these dose-related quantities facilitates image-based 3D treatment planning calculations for long beta sources used in IVBT. The new formalism also supports the dosimetry involving multiple dwell positions required for lesions longer than the source length. An example of the utilization of this formalism is illustrated for a 90Y coil source in a carbon dioxide-filled balloon. The pertinent dosimetric parameters were generated and tabulated for future use.
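
    The product form of the proposed formalism can be written down compactly. The sketch below evaluates a dose rate as the product of the four quantities named in the record; the functional forms (a Fermi-Dirac edge profile, a unit modulation function and an exponential radial fall-off) and every numerical constant are illustrative placeholders, not the paper's fitted parameters.

    ```python
    import numpy as np

    D_REF = 1.0                 # reference dose rate at (R0 = 2 mm, Z0 = 0), arbitrary units
    SOURCE_HALF_LENGTH = 20.0   # mm, placeholder half-length of the seed train
    EDGE_WIDTH = 0.8            # mm, placeholder Fermi-Dirac edge parameter
    MU = 1.1                    # 1/mm, placeholder radial fall-off constant

    def H(R, Z):
        """Distribution factor: Fermi-Dirac-like profile along the source axis."""
        return 1.0 / (1.0 + np.exp((np.abs(Z) - SOURCE_HALF_LENGTH) / EDGE_WIDTH))

    def M(R, Z):
        """Modulation function: slowly varying correction (unity in this toy example)."""
        return 1.0

    def h(R, R0=2.0):
        """Transverse dose function: exponential fall-off in the radial direction."""
        return np.exp(-MU * (R - R0))

    def dose_rate(R, Z):
        """Dose rate as the product of the four quantities of the formalism."""
        return D_REF * H(R, Z) * M(R, Z) * h(R)
    ```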

  20. Visualizing Tensions in an Ethnographic Moment: Images and Intersubjectivity.

    PubMed

    Crowder, Jerome W

    2017-01-01

    Images function as sources of data and influence our thinking about fieldwork, representation, and intersubjectivity. In this article, I show how both the ethnographic relationships and the working method of photography lead to a more nuanced understanding of a healing event. I systematically analyze 33 photographs made over a 15-minute period during the preparation and application of a poultice (topical cure) in a rural Andean home. The images chronicle the event, revealing my initial reaction and the decisions I made when tripping the shutter. By unpacking the relationship between ethnographer and subject, I reveal the constant negotiation of positions, assumptions, and expectations that make up intersubjectivity. For transparency, I provide thumbnails of all images, including metadata, so that readers may consider alternative interpretations of the images and event.

  1. Toward morphological thoracic EIT: major signal sources correspond to respective organ locations in CT.

    PubMed

    Ferrario, Damien; Grychtol, Bartłomiej; Adler, Andy; Solà, Josep; Böhm, Stephan H; Bodenstein, Marc

    2012-11-01

    Lung and cardiovascular monitoring applications of electrical impedance tomography (EIT) require localization of relevant functional structures or organs of interest within the reconstructed images. We describe an algorithm for automatic detection of heart and lung regions in a time series of EIT images. Using EIT reconstruction based on anatomical models, candidate regions are identified in the frequency domain and image-based classification techniques applied. The algorithm was validated on a set of simultaneously recorded EIT and CT data in pigs. In all cases, identified regions in EIT images corresponded to those manually segmented in the matched CT image. Results demonstrate the ability of EIT technology to reconstruct relevant impedance changes at their anatomical locations, provided that information about the thoracic boundary shape (and electrode positions) are used for reconstruction.

  2. Towards full waveform ambient noise inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas

    2018-01-01

    In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. Inherent trade-offs between source and structure are quantified using Hessian-vector products.

  3. Invited Article: Mask-modulated lensless imaging with multi-angle illuminations

    NASA Astrophysics Data System (ADS)

    Zhang, Zibang; Zhou, You; Jiang, Shaowei; Guo, Kaikai; Hoshino, Kazunori; Zhong, Jingang; Suo, Jinli; Dai, Qionghai; Zheng, Guoan

    2018-06-01

    The use of multiple diverse measurements can make lensless phase retrieval more robust. Conventional diversity functions include aperture diversity, wavelength diversity, translational diversity, and defocus diversity. Here we discuss a lensless imaging scheme that employs multiple spherical-wave illuminations from a light-emitting diode array as diversity functions. In this scheme, we place a binary mask between the sample and the detector for imposing support constraints for the phase retrieval process. This support constraint enforces the light field to be zero at certain locations and is similar to the aperture constraint in Fourier ptychographic microscopy. We use a self-calibration algorithm to correct the misalignment of the binary mask. The efficacy of the proposed scheme is first demonstrated by simulations where we evaluate the reconstruction quality using mean square error and structural similarity index. The scheme is then experimentally tested by recovering images of a resolution target and biological samples. The proposed scheme may provide new insights for developing compact and large field-of-view lensless imaging platforms. The use of the binary mask can also be combined with other diversity functions for better constraining the phase retrieval solution space. We provide the open-source implementation code for the broad research community.

  4. The Function Biomedical Informatics Research Network Data Repository

    PubMed Central

    Keator, David B.; van Erp, Theo G.M.; Turner, Jessica A.; Glover, Gary H.; Mueller, Bryon A.; Liu, Thomas T.; Voyvodic, James T.; Rasmussen, Jerod; Calhoun, Vince D.; Lee, Hyo Jong; Toga, Arthur W.; McEwen, Sarah; Ford, Judith M.; Mathalon, Daniel H.; Diaz, Michele; O’Leary, Daniel S.; Bockholt, H. Jeremy; Gadde, Syam; Preda, Adrian; Wible, Cynthia G.; Stern, Hal S.; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G.

    2015-01-01

    The Function Biomedical Informatics Research Network (FBIRN) developed methods and tools for conducting multi-scanner functional magnetic resonance imaging (fMRI) studies. Method and tool development were based on two major goals: 1) to assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods, and 2) to provide a distributed network infrastructure and an associated federated database to host and query large, multi-site, fMRI and clinical datasets. In the process of achieving these goals the FBIRN test bed generated several multi-scanner brain imaging data sets to be shared with the wider scientific community via the BIRN Data Repository (BDR). The FBIRN Phase 1 dataset consists of a traveling subject study of 5 healthy subjects, each scanned on 10 different 1.5 to 4 Tesla scanners. The FBIRN Phase 2 and Phase 3 datasets consist of subjects with schizophrenia or schizoaffective disorder along with healthy comparison subjects scanned at multiple sites. In this paper, we provide concise descriptions of FBIRN’s multi-scanner brain imaging data sets and details about the BIRN Data Repository instance of the Human Imaging Database (HID) used to publicly share the data. PMID:26364863

  5. 3D reconstruction of internal structure of animal body using near-infrared light

    NASA Astrophysics Data System (ADS)

    Tran, Trung Nghia; Yamamoto, Kohei; Namita, Takeshi; Kato, Yuji; Shimizu, Koichi

    2014-03-01

    To realize three-dimensional (3D) optical imaging of the internal structure of an animal body, we have developed a new technique to reconstruct CT images from two-dimensional (2D) transillumination images. In transillumination imaging, the image is blurred due to the strong scattering in the tissue. We had previously developed a scattering suppression technique using the point spread function (PSF) for a fluorescent light source in the body. In this study, we newly propose a technique to apply this PSF for a light source to the image of an unknown light-absorbing structure. The effectiveness of the proposed technique was examined in experiments with a model phantom and a mouse. In the phantom experiment, absorbers were placed in a tissue-equivalent medium to simulate the light-absorbing organs in a mouse body. Near-infrared light illuminated one side of the phantom and the image was recorded with a CMOS camera from the other side. Using the proposed techniques, the scattering effect was efficiently suppressed and the absorbing structure could be visualized in the 2D transillumination image. Using the 2D images obtained in many different orientations, we could reconstruct the 3D image. In the mouse experiment, an anesthetized mouse was held in an acrylic cylindrical holder. We could visualize internal organs such as the kidneys through the mouse's abdomen using the proposed technique. The 3D image of the kidneys and a part of the liver was reconstructed. Through these experimental studies, the feasibility of practical 3D imaging of the internal light-absorbing structure of a small animal was verified.
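
    Scattering suppression with a known PSF is commonly carried out as a deconvolution in the Fourier domain. The fragment below is a standard Wiener deconvolution used here only to illustrate the idea; it is not the authors' algorithm, and the regularisation constant and names are placeholders.

    ```python
    import numpy as np

    def wiener_deconvolve(image, psf, noise_to_signal=1e-2):
        """Suppress scattering blur by dividing out the PSF in the Fourier domain.

        image: 2D transillumination image (blurred by scattering)
        psf:   2D point spread function, same shape as the image, centred in the array
        """
        img_f = np.fft.fft2(image)
        psf_f = np.fft.fft2(np.fft.ifftshift(psf))   # move the PSF centre to index [0, 0]
        wiener = np.conj(psf_f) / (np.abs(psf_f) ** 2 + noise_to_signal)
        return np.real(np.fft.ifft2(img_f * wiener))
    ```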

  6. Single-sensor multispeaker listening with acoustic metamaterials

    PubMed Central

    Xie, Yangbo; Tsai, Tsung-Han; Konneker, Adam; Popa, Bogdan-Ioan; Brady, David J.; Cummer, Steven A.

    2015-01-01

    Designing a “cocktail party listener” that functionally mimics the selective perception of a human auditory system has been pursued over the past decades. By exploiting acoustic metamaterials and compressive sensing, we present here a single-sensor listening device that separates simultaneous overlapping sounds from different sources. The device with a compact array of resonant metamaterials is demonstrated to distinguish three overlapping and independent sources with 96.67% correct audio recognition. Segregation of the audio signals is achieved using physical layer encoding without relying on source characteristics. This hardware approach to multichannel source separation can be applied to robust speech recognition and hearing aids and may be extended to other acoustic imaging and sensing applications. PMID:26261314

  7. Time reversal imaging, Inverse problems and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Montagner, J.; Larmat, C. S.; Capdeville, Y.; Kawakatsu, H.; Fink, M.

    2010-12-01

    With the increasing power of computers and numerical techniques (such as spectral element methods), it is possible to address a new class of seismological problems. The propagation of seismic waves in heterogeneous media is simulated more and more accurately and new applications are being developed, in particular time reversal methods and adjoint tomography in the three-dimensional Earth. Since the pioneering work of J. Claerbout, theorized by A. Tarantola, many similarities were found between time-reversal methods, cross-correlation techniques, inverse problems and adjoint tomography. By using normal mode theory, we generalize the scalar approach of Draeger and Fink (1999) and Lobkis and Weaver (2001) to the 3D elastic Earth, in order to theoretically understand time-reversal methods on the global scale. It is shown how to relate time-reversal methods, on one hand, with auto-correlations of seismograms for source imaging and, on the other hand, with cross-correlations between receivers for structural imaging and retrieving the Green function. Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics and non-destructive testing, and to seismic waves in seismology for earthquake imaging. In the case of source imaging, time reversal techniques make possible automatic location in time and space, as well as retrieval of the focal mechanism, of earthquakes or unknown environmental sources. We present here some applications at the global scale of these techniques on synthetic tests and on real data, such as Sumatra-Andaman (Dec. 2004), Haiti (Jan. 2010), as well as glacial earthquakes and seismic hum.

  8. Fault detection in rotating machines with beamforming: Spatial visualization of diagnosis features

    NASA Astrophysics Data System (ADS)

    Cardenas Cabada, E.; Leclere, Q.; Antoni, J.; Hamzaoui, N.

    2017-12-01

    Rotating machine diagnosis is conventionally based on vibration analysis. Sensors are usually placed on the machine to gather information about its components. The recorded signals are then processed through a fault detection algorithm allowing the identification of the failing part. This paper proposes an acoustic-based diagnosis method. A microphone array is used to record the acoustic field radiated by the machine. The main advantage over vibration-based diagnosis is that contact between the sensors and the machine is no longer required. Moreover, the application of acoustic imaging makes possible the identification of the sources of acoustic radiation on the machine surface. The displayed information is then spatially continuous, whereas accelerometers provide it only at discrete points. Beamforming provides the time-varying signals radiated by the machine as a function of space. Any fault detection tool can be applied to the beamforming output. Spectral kurtosis, which highlights the impulsiveness of a signal as a function of frequency, is used in this study. The combination of spectral kurtosis with acoustic imaging makes possible the mapping of impulsiveness as a function of space and frequency. The efficiency of this approach relies on source separation in the spatial and frequency domains. These mappings make possible the localization of impulsive sources. The faulty components of the machine have an impulsive behavior and thus will be highlighted on the mappings. The study presents experimental validations of the method on rotating machines.
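
    Spectral kurtosis itself is straightforward to compute from a short-time Fourier transform of the beamformed signal at a given focus point. The sketch below follows the usual definition (fourth-order spectral moment over the squared second-order moment, minus two); the window length and names are placeholders.

    ```python
    import numpy as np
    from scipy.signal import stft

    def spectral_kurtosis(signal, fs, nperseg=256):
        """Spectral kurtosis as a function of frequency.

        For a stationary Gaussian signal SK is close to 0; impulsive, fault-related
        content appears as large positive values at the affected frequencies.
        """
        f, _, X = stft(signal, fs=fs, nperseg=nperseg)
        power = np.abs(X) ** 2
        sk = np.mean(power ** 2, axis=1) / np.mean(power, axis=1) ** 2 - 2.0
        return f, sk
    ```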

  9. Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.

    PubMed

    Kopp, O; Markert, S; Tornow, R P

    2002-01-01

    To develop and test a procedure to measure and compare the light sensitivity, linearity and step response of electronic cameras. The pixel value (PV) of digitized images as a function of light intensity (I) was measured. The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired. During acquisition, a light source was switched on and off using a fast shutter. The resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. There were only small differences in linearity. The step response depends on the procedure of integration and read out.
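
    The sensitivity and linearity figures described above reduce to a straight-line fit of pixel value against intensity. A minimal sketch, with illustrative input values, is given below; it is not the authors' measurement software.

    ```python
    import numpy as np

    def sensitivity_and_linearity(intensity, pixel_value):
        """Fit PV(I) with a straight line.

        Returns the slope (sensitivity, counts per unit intensity) and the Pearson
        correlation coefficient (values close to 1 indicate good linearity).
        """
        slope, offset = np.polyfit(intensity, pixel_value, 1)
        r = np.corrcoef(intensity, pixel_value)[0, 1]
        return slope, r

    # Usage with illustrative (made-up) measurement points:
    I = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
    PV = np.array([3.0, 52.0, 101.0, 148.0, 199.0, 251.0])
    print(sensitivity_and_linearity(I, PV))
    ```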

  10. Comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Sullivan, Malcolm N.; Chan, Kam Wai Clifford; Boyd, Robert W.

    2010-11-15

    We present a theoretical comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging. We first calculate the signal-to-noise ratio of each process in terms of its controllable experimental conditions. We show that a key distinction is that a thermal ghost image always resides on top of a large background; the fluctuations in this background constitute an intrinsic noise source for thermal ghost imaging. In contrast, there is a negligible intrinsic background to a quantum ghost image. However, for practical reasons involving achievable illumination levels, acquisition times for thermal ghost images are often much shorter than those for quantum ghost images. We provide quantitative predictions for the conditions under which each process provides superior performance. Our conclusion is that each process can provide useful functionality, although under complementary conditions.

  11. Role of HIS/RIS DICOM interfaces in the integration of imaging into the Department of Veterans Affairs healthcare enterprise

    NASA Astrophysics Data System (ADS)

    Kuzmak, Peter M.; Dayhoff, Ruth E.

    1998-07-01

    The U.S. Department of Veterans Affairs is integrating imaging into the healthcare enterprise using the Digital Imaging and Communication in Medicine (DICOM) standard protocols. Image management is directly integrated into the VistA Hospital Information System (HIS) software and clinical database. Radiology images are acquired via DICOM, and are stored directly in the HIS database. Images can be displayed on low-cost clinician's workstations throughout the medical center. High-resolution diagnostic quality multi-monitor VistA workstations with specialized viewing software can be used for reading radiology images. DICOM has played critical roles in the ability to integrate imaging functionality into the Healthcare Enterprise. Because of its openness, it allows the integration of system components from commercial and non-commercial sources to work together to provide functional cost-effective solutions (see Figure 1). Two approaches are used to acquire and handle images within the radiology department. At some VA Medical Centers, DICOM is used to interface a commercial Picture Archiving and Communications System (PACS) to the VistA HIS. At other medical centers, DICOM is used to interface the image producing modalities directly to the image acquisition and display capabilities of VistA itself. Both of these approaches use a small set of DICOM services that has been implemented by VistA to allow patient and study text data to be transmitted to image producing modalities and the commercial PACS, and to enable images and study data to be transferred back.

  12. Seismic reflection imaging, accounting for primary and multiple reflections

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Thorbecke, Jan; Broggini, Filippo; Slob, Evert; Snieder, Roel

    2015-04-01

    Imaging of seismic reflection data is usually based on the assumption that the seismic response consists of primary reflections only. Multiple reflections, i.e. waves that have reflected more than once, are treated as primaries and are imaged at wrong positions. There are two classes of multiple reflections, which we will call surface-related multiples and internal multiples. Surface-related multiples are those multiples that contain at least one reflection at the earth's surface, whereas internal multiples consist of waves that have reflected only at subsurface interfaces. Surface-related multiples are the strongest, but also relatively easy to deal with because the reflecting boundary (the earth's surface) is known. Internal multiples constitute a much more difficult problem for seismic imaging, because the positions and properties of the reflecting interfaces are not known. We are developing reflection imaging methodology which deals with internal multiples. Starting with the Marchenko equation for 1D inverse scattering problems, we derived 3D Marchenko-type equations, which relate reflection data at the surface to Green's functions between virtual sources anywhere in the subsurface and receivers at the surface. Based on these equations, we derived an iterative scheme by which these Green's functions can be retrieved from the reflection data at the surface. This iterative scheme requires an estimate of the direct wave of the Green's functions in a background medium. Note that this is precisely the same information that is also required by standard reflection imaging schemes. However, unlike in standard imaging, our iterative Marchenko scheme retrieves the multiple reflections of the Green's functions from the reflection data at the surface. For this, no knowledge of the positions and properties of the reflecting interfaces is required. Once the full Green's functions are retrieved, reflection imaging can be carried out by which the primaries and multiples are mapped to their correct positions, with correct reflection amplitudes. In the presentation we will illustrate this new methodology with numerical examples and discuss its potential and limitations.

  13. Magneto-acoustic imaging by continuous-wave excitation.

    PubMed

    Shunqi, Zhang; Zhou, Xiaoqing; Tao, Yin; Zhipeng, Liu

    2017-04-01

    The electrical characteristics of tissue yield valuable information for early diagnosis of pathological changes. Magneto-acoustic imaging is a functional approach for imaging electrical conductivity. This study proposes a continuous-wave magneto-acoustic imaging method. A kHz-range continuous signal with an amplitude of several volts is used to excite the magneto-acoustic signal and improve the signal-to-noise ratio. The magneto-acoustic signal amplitude and phase are measured to locate the acoustic source via lock-in detection. An optimisation algorithm incorporating nonlinear equations is used to reconstruct the magneto-acoustic source distribution based on the measured amplitude and phase at various frequencies. Validation simulations and experiments were performed in pork samples, and the experimental and simulation results agreed well. Even with the excitation current reduced to 10 mA, the acoustic signal magnitude reached up to 10⁻⁷ Pa. Experimental reconstruction of the pork tissue showed that the image resolution reached the millimetre level when the excitation signal was in the kHz range. The signal-to-noise ratio of the detected magneto-acoustic signal was improved by more than 25 dB at 5 kHz when compared to classical 1 MHz pulse excitation. The results reported here will aid further research into magneto-acoustic generation mechanisms and internal tissue conductivity imaging.
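
    Recovering the amplitude and phase of a narrow-band magneto-acoustic signal is a classic lock-in measurement. The following is a minimal software sketch of that step, assuming a known excitation frequency; the sampling rate, frequency, and simulated signal are illustrative assumptions, not the authors' instrument settings:

    ```python
    # Software lock-in sketch: demodulate a noisy record at a known reference
    # frequency to recover the signal amplitude and phase.
    import numpy as np

    fs = 200_000.0                      # sampling rate, Hz (assumed)
    f0 = 5_000.0                        # excitation frequency, Hz (kHz range)
    t = np.arange(0, 0.1, 1.0 / fs)

    # Simulated noisy measurement with unknown amplitude and phase.
    rng = np.random.default_rng(0)
    signal = 1e-3 * np.sin(2 * np.pi * f0 * t + 0.4) + 1e-2 * rng.standard_normal(t.size)

    # Multiply by in-phase/quadrature references and low-pass by averaging.
    i_comp = np.mean(signal * np.sin(2 * np.pi * f0 * t))
    q_comp = np.mean(signal * np.cos(2 * np.pi * f0 * t))

    amplitude = 2.0 * np.hypot(i_comp, q_comp)   # recovered magnitude
    phase = np.arctan2(q_comp, i_comp)           # recovered phase, radians
    print(amplitude, phase)
    ```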

  14. Mesh-based phase contrast Fourier transform imaging

    NASA Astrophysics Data System (ADS)

    Tahir, Sajjad; Bashir, Sajid; MacDonald, C. A.; Petruccelli, Jonathan C.

    2017-04-01

    Traditional x-ray radiography is limited by low attenuation contrast in materials of low electron density. Phase contrast imaging offers the potential to improve the contrast between such materials, but due to the requirements on the spatial coherence of the x-ray beam, practical implementation of such systems with tabletop (i.e. non-synchrotron) sources has been limited. One phase imaging technique employs multiple fine-pitched gratings. However, the strict manufacturing tolerances and precise alignment requirements have limited the widespread adoption of grating-based techniques. In this work, we have investigated a recently developed technique that utilizes a single grid of much coarser pitch. Our system consisted of a low-power, 100 μm spot Mo source, a CCD with 22 μm pixel pitch, and either a focused mammography linear grid or a stainless steel woven mesh. Phase is extracted from a single image by windowing and comparing data localized about harmonics of the mesh in the Fourier domain. The effects of varying grid type and period, and of varying the width of the window function used to separate the harmonics, on the diffraction phase contrast and scattering amplitude images were investigated. Using the wire mesh, derivatives of the phase along two orthogonal directions were obtained and combined to form improved phase contrast images.
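
    The single-image phase-extraction step (windowing a mesh harmonic in the Fourier domain and demodulating it) can be sketched compactly. The window size, harmonic location, and peak handling below are illustrative assumptions rather than the parameters used in the paper:

    ```python
    # Sketch of single-shot, mesh-based phase retrieval: isolate one harmonic of
    # the grid in the Fourier domain, shift it to DC, and take the phase of the
    # inverse transform.
    import numpy as np

    def harmonic_phase(image, harmonic_shift, half_width):
        """image: 2D array containing the grid pattern.
        harmonic_shift: (row, col) offset of the harmonic peak from the DC term.
        half_width: half-size of the square Fourier-domain window."""
        F = np.fft.fftshift(np.fft.fft2(image))
        cy, cx = F.shape[0] // 2, F.shape[1] // 2
        hy, hx = cy + harmonic_shift[0], cx + harmonic_shift[1]

        # Window the data localized about the chosen harmonic.
        win = F[hy - half_width:hy + half_width, hx - half_width:hx + half_width]

        # Re-centering the windowed harmonic before the inverse FFT removes the carrier.
        demod = np.fft.ifft2(np.fft.ifftshift(win))
        return np.angle(demod)   # proportional to the phase gradient along one direction
    ```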

  15. Translation-aware semantic segmentation via conditional least-square generative adversarial networks

    NASA Astrophysics Data System (ADS)

    Zhang, Mi; Hu, Xiangyun; Zhao, Like; Pang, Shiyan; Gong, Jinqi; Luo, Min

    2017-10-01

    Semantic segmentation has recently made rapid progress in the fields of remote sensing and computer vision. However, many leading approaches cannot simultaneously translate label maps to plausible source images when only a limited number of training images is available. The core issues are insufficient adversarial information to interpret the inverse process and the lack of a proper objective loss function to overcome the vanishing-gradient problem. We propose the use of conditional least-squares generative adversarial networks (CLS-GAN) to delineate visual objects and solve these problems. We trained the CLS-GAN network for semantic segmentation to discriminate dense prediction information coming either from training images or from the generative network. We show that the optimal objective function of CLS-GAN is a special class of f-divergence and yields a generator that lies on the decision boundary of the discriminator, which reduces the risk of vanishing gradients. We also demonstrate the effectiveness of the proposed architecture at translating images from label maps during the learning process. Experiments on a limited number of high-resolution images, including close-range and remote sensing datasets, indicate that the proposed method leads to improved semantic segmentation accuracy and can simultaneously generate high-quality images from label maps.
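
    The least-squares adversarial objective referred to above penalizes the squared distance of discriminator scores from their target labels, which keeps generator gradients alive even for samples the discriminator rejects confidently. A minimal sketch of those two losses (conditional inputs and network definitions omitted; this is not the authors' implementation):

    ```python
    # Least-squares GAN objectives operating on discriminator scores.
    import numpy as np

    def discriminator_loss(d_real, d_fake):
        # Push scores on real samples toward 1 and on generated samples toward 0.
        return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

    def generator_loss(d_fake):
        # Push the discriminator's scores on generated samples toward 1;
        # the quadratic penalty avoids the vanishing gradients of the log loss.
        return 0.5 * np.mean((d_fake - 1.0) ** 2)
    ```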

  16. Imaging fast electrical activity in the brain with electrical impedance tomography

    PubMed Central

    Aristovich, Kirill Y.; Packham, Brett C.; Koo, Hwan; Santos, Gustavo Sato dos; McEvoy, Andy; Holder, David S.

    2016-01-01

    Imaging of neuronal depolarization in the brain is a major goal in neuroscience, but no technique currently exists that can image neural activity over milliseconds throughout the whole brain. Electrical impedance tomography (EIT) is an emerging medical imaging technique which can produce tomographic images of impedance changes with non-invasive surface electrodes. We report EIT imaging of impedance changes in rat somatosensory cerebral cortex with a resolution of 2 ms and < 200 μm during evoked potentials, using epicortical arrays with 30 electrodes. Images were validated with local field potential recordings and current source-sink density analysis. Our results demonstrate that EIT can image neural activity in a 7 × 5 × 2 mm volume of somatosensory cerebral cortex with less invasiveness and greater resolution and imaging volume than other methods. Modeling indicates that similar resolutions are feasible throughout the entire brain, so this technique, uniquely, has the potential to image functional connectivity of cortical and subcortical structures. PMID:26348559

  17. Hard X-ray imaging spectroscopy of FOXSI microflares

    NASA Astrophysics Data System (ADS)

    Glesener, Lindsay; Krucker, Sam; Christe, Steven; Buitrago-Casas, Juan Camilo; Ishikawa, Shin-nosuke; Foster, Natalie

    2015-04-01

    The ability to investigate particle acceleration and hot thermal plasma in solar flares relies on hard X-ray imaging spectroscopy using bremsstrahlung emission from high-energy electrons. Direct focusing of hard X-rays (HXRs) offers the ability to perform cleaner imaging spectroscopy of this emission than has previously been possible. Using direct focusing, spectra for different sources within the same field of view can be obtained easily, since each detector segment (pixel or strip) measures the energy of each photon interacting within that segment. The Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload has successfully completed two flights, observing microflares each time. Flare images demonstrate an instrument imaging dynamic range far superior to the indirect methods of previous instruments such as the RHESSI spacecraft. In this work, we present imaging spectroscopy of microflares observed by FOXSI in its two flights. Imaging spectroscopy performed on raw FOXSI images reveals the temperature structure of flaring loops, while more advanced techniques such as deconvolution of the point spread function produce even more detailed images.

  18. Volumetric Two-photon Imaging of Neurons Using Stereoscopy (vTwINS)

    PubMed Central

    Song, Alexander; Charles, Adam S.; Koay, Sue Ann; Gauthier, Jeff L.; Thiberge, Stephan Y.; Pillow, Jonathan W.; Tank, David W.

    2017-01-01

    Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large scale recording of neural activity in vivo. Here we introduce volumetric Two-photon Imaging of Neurons using Stereoscopy (vTwINS), a volumetric calcium imaging method that employs an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced “image pairs” in the resulting 2D image, and the separation distance between images is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a novel orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrate vTwINS by imaging neural population activity in mouse primary visual cortex and hippocampus. Our results demonstrate that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame-rate. PMID:28319111
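
    For orientation, a generic orthogonal matching pursuit (OMP) proceeds by greedily selecting the dictionary atom most correlated with the current residual and refitting by least squares; the vTwINS algorithm modifies this idea to also infer 3D source locations. A minimal sketch of the generic version (names and the stopping rule are assumptions):

    ```python
    # Generic orthogonal matching pursuit: y ~ D[:, support] @ coeffs with sparse support.
    import numpy as np

    def omp(D, y, n_atoms):
        """D: dictionary (n_samples x n_candidates); y: signal; n_atoms: sparsity level."""
        residual = y.copy()
        support, coeffs = [], np.array([])
        for _ in range(n_atoms):
            # Atom most correlated with the current residual.
            k = int(np.argmax(np.abs(D.T @ residual)))
            if k not in support:
                support.append(k)
            # Least-squares refit on the selected support, then update the residual.
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coeffs
        return support, coeffs
    ```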

  19. The instrument development status of hyper-spectral imager suite (HISUI)

    NASA Astrophysics Data System (ADS)

    Itoh, Yoshiyuki; Kawashima, Takahiro; Inada, Hitomi; Tanii, Jun; Iwasaki, Akira

    2012-11-01

    The hyperspectral and multispectral mission named HISUI (Hyper-spectral Imager SUIte) is the next Japanese Earth observation project. This project is the follow-up mission to the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Advanced Land Imager (ALDS). HISUI is composed of a hyperspectral radiometer with higher spectral resolution and a multispectral radiometer with higher spatial resolution. A functional evaluation model was developed to confirm the spectral and radiometric performance prior to the flight-model manufacturing phase. This model contains the VNIR and SWIR spectrographs, the VNIR and SWIR detector assemblies with a mechanical cooler for the SWIR, the signal-processing circuit, and an on-board calibration source.

  20. Managers' social support: Facilitators and hindrances for seeking support at work.

    PubMed

    Lundqvist, Daniel; Fogelberg Eriksson, Anna; Ekberg, Kerstin

    2018-01-01

    Previous research has shown that social support is important for health and performance at work, but there is a lack of research regarding managers' social support at work and whether it needs to be improved. OBJECTIVE: To investigate managers' perceptions of work-related social support, and the facilitators and hindrances that influence their seeking of social support at work. Semi-structured interviews were conducted with sixty-two managers in two Swedish organizations. Work-related support, which strengthened their managerial image of being competent, was sought from sources within the workplace. Sensitive and personal support, where there was a risk of jeopardizing their image of being competent, was sought from sources outside the workplace. Access to arenas for support (location of the workplace, meetings, and vocational courses) and the managerial role could facilitate their support-seeking, but could also act as hindrances. Because attending different arenas for support was demanding, the managers refrained from seeking support if the demands were perceived as too high. Different supportive sources are distinguished based on what supportive function they have and in which arenas they are found, in order to preserve the confidence of the closest organization and to maintain the image of being a competent and performing manager.

  1. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  2. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  3. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  4. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  5. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  6. Faint source detection in ISOCAM images

    NASA Astrophysics Data System (ADS)

    Starck, J. L.; Aussel, H.; Elbaz, D.; Fadda, D.; Cesarsky, C.

    1999-08-01

    We present a tool adapted to the detection of faint mid-infrared sources within ISOCAM mosaics. This tool is based on a wavelet analysis which allows us to discriminate sources from cosmic ray impacts at the very limit of the instrument, four orders of magnitude below IRAS. It is called PRETI, for Pattern REcognition Technique for ISOCAM data, because glitches with transient behaviors are isolated in wavelet space, i.e. frequency space, where they present peculiar signatures in the form of patterns that are automatically identified and then reconstructed. We have tested PRETI with Monte-Carlo simulations of synthetic ISOCAM data. These simulations allowed us to define the fraction of remaining false sources due to cosmic rays, the sensitivity and completeness limits, as well as the photometric accuracy as a function of the observation parameters. Although the main scientific applications of this technique have appeared or will appear in separate papers, we present here an application to the ISOCAM-Hubble Deep Field image. This work completes and confirms the results already published (Aussel et al. 1999).
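
    The key idea of isolating transient glitch signatures in wavelet (frequency) space can be illustrated on a single pixel's time history. The wavelet family, decomposition depth, and threshold below are illustrative assumptions, not the published PRETI settings:

    ```python
    # Sketch: flag and suppress glitch-like outliers in the wavelet detail
    # coefficients of a pixel time history, then reconstruct the cleaned signal.
    import numpy as np
    import pywt

    def suppress_glitches(pixel_history, wavelet="db4", level=4, k_sigma=5.0):
        coeffs = pywt.wavedec(pixel_history, wavelet, level=level)
        cleaned = [coeffs[0]]                              # keep the coarse approximation
        for detail in coeffs[1:]:
            sigma = np.median(np.abs(detail)) / 0.6745     # robust noise estimate
            mask = np.abs(detail) > k_sigma * sigma        # glitch-like outliers
            cleaned.append(np.where(mask, 0.0, detail))
        return pywt.waverec(cleaned, wavelet)[: len(pixel_history)]
    ```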

  7. Nerve fiber layer (NFL) degeneration associated with acute q-switched laser exposure in the nonhuman primate

    NASA Astrophysics Data System (ADS)

    Zwick, Harry; Zuclich, Joseph A.; Stuck, Bruce E.; Gagliano, Donald A.; Lund, David J.; Glickman, Randolph D.

    1995-01-01

    We have evaluated acute laser retinal exposure in non-human primates using a Rodenstock scanning laser ophthalmoscope (SLO) equipped with spectral imaging laser sources at 488, 514, 633, and 780 nm. Confocal spectral imaging at each laser wavelength allowed evaluation of the image plane from deep within the retinal vascular layer to the more superficial nerve fiber layer in the presence and absence of the short wavelength absorption of the macular pigment. SLO angiography included both fluorescein and indocyanine green procedures to assess the extent of damage to the sensory retina, the retinal pigment epithelium (RPE), and the choroidal vasculature. All laser exposures in this experiment were from a Q-switched Neodymium laser source at an exposure level sufficient to produce vitreous hemorrhage. Confocal imaging of the nerve fiber layer revealed discrete optic nerve sector defects between the lesion site and the macula (retrograde degeneration) as well as between the lesion site and the optic disk (Wallerian degeneration). In multiple hemorrhagic exposures, lesions placed progressively distant from the macula or overlapping the macula formed bridging scars visible at deep retinal levels. Angiography revealed blood flow disturbance at the retina as well as at the choroidal vascular level. These data suggest that acute parafoveal laser retinal injury can involve both direct full thickness damage to the sensory and non-sensory retina and remote nerve fiber degeneration. Such injury has serious functional implications for both central and peripheral visual function.

  8. Multi-modal diffuse optical techniques for breast cancer neoadjuvant chemotherapy monitoring (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cochran, Jeffrey M.; Busch, David R.; Ban, Han Y.; Kavuri, Venkaiah C.; Schweiger, Martin J.; Arridge, Simon R.; Yodh, Arjun G.

    2017-02-01

    We present high spatial density, multi-modal, parallel-plate Diffuse Optical Tomography (DOT) imaging systems for the purpose of breast tumor detection. One hybrid instrument provides time domain (TD) and continuous wave (CW) DOT at 64 source fiber positions. The TD diffuse optical spectroscopy with PMT detection produces low-resolution images of absolute tissue scattering and absorption, while the spatially dense array of CCD-coupled detector fibers (108 detectors) provides higher-resolution CW images of relative tissue optical properties. Reconstruction of the tissue optical properties, along with total hemoglobin concentration and tissue oxygen saturation, is performed using the TOAST software suite. Comparison of the spatially dense DOT images and MR images allows for a robust validation of DOT against an accepted clinical modality. Additionally, the structural information from co-registered MR images is used as a spatial prior to improve the quality of the functional optical images and provide more accurate quantification of the optical and hemodynamic properties of tumors. We also present an optical-only imaging system that provides frequency domain (FD) DOT at 209 source positions with full CCD detection and incorporates optical fringe projection profilometry to determine the breast boundary. This profilometry serves as a spatial constraint, improving the quality of the DOT reconstructions while retaining the benefits of an optical-only device. We present initial images from both human subjects and phantoms to demonstrate the utility of high spatial density data and multi-modal information in DOT reconstruction with the two systems.

  9. Transthoracic Cardiac Acoustic Radiation Force Impulse Imaging

    NASA Astrophysics Data System (ADS)

    Bradway, David Pierson

    This dissertation investigates the feasibility of a real-time transthoracic Acoustic Radiation Force Impulse (ARFI) imaging system to measure myocardial function non-invasively in a clinical setting. Heart failure is an important cardiovascular disease and contributes to the leading cause of death in developed countries. Patients exhibiting heart failure with a low left ventricular ejection fraction (LVEF) can often be identified by clinicians, but patients with preserved LVEF might go undetected if they do not exhibit other signs and symptoms of heart failure. These cases motivate the development of transthoracic ARFI imaging to aid the early diagnosis of the structural and functional heart abnormalities leading to heart failure. M-Mode ARFI imaging utilizes ultrasonic radiation force to displace tissue several micrometers in the direction of wave propagation. Conventional ultrasound tracks the response of the tissue to the force. This measurement is repeated rapidly at a location through the cardiac cycle, measuring timing and relative changes in myocardial stiffness. ARFI imaging was previously shown capable of measuring myocardial properties and function via invasive open-chest and intracardiac approaches. The prototype imaging system described in this dissertation is capable of rapid acquisition, processing, and display of ARFI images and shear wave elasticity imaging (SWEI) movies. Also presented is a rigorous safety analysis, including finite element method (FEM) simulations of tissue heating, hydrophone intensity and mechanical index (MI) measurements, and thermocouple transducer face heating measurements. For the pulse sequences used in later animal and clinical studies, results from the safety analysis indicate that transthoracic ARFI imaging can be safely applied at rates and levels realizable on the prototype ARFI imaging system. Preliminary data are presented from in vivo trials studying changes in myocardial stiffness occurring under normal and abnormal heart function. Presented is the first use of transthoracic ARFI imaging in a serial study of heart failure in a porcine model. Results demonstrate the ability of transthoracic ARFI to image cyclically varying stiffness changes in healthy and infarcted myocardium under good B-mode imaging conditions at depths in the range of 3-5 cm. Challenging imaging scenarios such as deep regions of interest, vigorous lateral motion, and stable, reverberant clutter are analyzed and discussed. Results are then presented from the first study of the clinical feasibility of transthoracic cardiac ARFI imaging. At the Duke University Medical Center, healthy volunteers and patients having magnetic resonance imaging-confirmed apical infarcts were enrolled for the study. The number of patients who met the inclusion criteria in this preliminary clinical trial was low, but results showed that the limitations seen in animal studies were not overcome by allowing transmit power levels to exceed the FDA mechanical index (MI) limit. The results suggested the primary source of image degradation was clutter rather than lack of radiation force. Additionally, the transthoracic method applied in its present form was not shown capable of tracking propagating ARFI-induced shear waves in the myocardium. Under current instrumentation and processing methods, results of these studies support feasibility for transthoracic ARFI in high-quality B-Mode imaging conditions.
Transthoracic ARFI was not shown to be sensitive to infarct or able to track heart failure in the presence of clutter and signal decorrelation. This work does provide evidence that transthoracic ARFI imaging is a safe, non-invasive tool, but its clinical efficacy as a diagnostic tool will need to be addressed by further development to overcome current challenges and increase robustness to sources of image degradation.

  10. A line-source method for aligning on-board and other pinhole SPECT systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-15

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
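
    The Radon-transform step that recovers each line projection's angle (α) and offset (ρ) from a single pinhole image can be sketched in a few lines. The angle grid, peak picking, and sign conventions below are illustrative assumptions, not the published pipeline:

    ```python
    # Detect the orientation and offset of a single bright line in a projection
    # image by locating the peak of its Radon transform (sinogram).
    import numpy as np
    from skimage.transform import radon

    def line_angle_offset(projection_image):
        angles = np.arange(0.0, 180.0, 0.5)                          # search grid, degrees
        sinogram = radon(projection_image, theta=angles, circle=False)
        row, col = np.unravel_index(np.argmax(sinogram), sinogram.shape)
        alpha = angles[col]                                          # line orientation, degrees
        rho = row - sinogram.shape[0] // 2                           # offset from centre, pixels
        return alpha, rho
    ```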

  11. A line-source method for aligning on-board and other pinhole SPECT systems

    PubMed Central

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-01-01

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC. PMID:24320537

  12. A line-source method for aligning on-board and other pinhole SPECT systems.

    PubMed

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-01

    In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system-to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)-is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.

  13. Distinguishing one from many using super-resolution compressive sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.

    Distinguishing whether a signal corresponds to a single source or to a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence the robustness of super-resolution image recovery, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.
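
    For reference, the l1-regularized recovery underlying this kind of compressive super-resolution can be written as a simple iterative soft-thresholding (ISTA) loop, where the columns of the forward matrix are shifted copies of the assumed PSF. Everything below is an illustrative sketch, not the authors' solver; the l0 or GGMRF penalties mentioned above would replace the shrinkage step:

    ```python
    # ISTA sketch for min_x 0.5*||A x - y||^2 + lam*||x||_1, with x a sparse source vector.
    import numpy as np

    def ista(A, y, lam, n_iter=500):
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth term
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)             # gradient of the data-fit term
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x
    ```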

  14. Distinguishing one from many using super-resolution compressive sensing

    DOE PAGES

    Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.; ...

    2018-05-14

    Distinguishing whether a signal corresponds to a single source or to a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence the robustness of super-resolution image recovery, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.

  15. Phase-sensitive optical coherence tomography-based vibrometry using a highly phase-stable akinetic swept laser source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Applegate, Brian E.; Park, Jesung; Carbajal, Esteban

    Phase-sensitive Optical Coherence Tomography (PhOCT) is an emerging tool for in vivo investigation of the vibratory function of the intact middle and inner ear. PhOCT is able to resolve micron-scale tissue morphology in three dimensions as well as measure picometer-scale motion at each spatial position. Most PhOCT systems to date have relied upon the phase stability offered by spectrometer detection. On the other hand, swept-laser-source-based PhOCT offers a number of advantages, including balanced detection, long imaging depths, and high imaging speeds. Unfortunately, the inherent phase instability of traditional swept laser sources has necessitated complex user-developed hardware/software solutions to restore phase sensitivity. Here we present recent results using a prototype swept laser that overcomes these issues. The akinetic swept laser is electronically tuned and precisely controls sweeps without any mechanical movement, which results in high phase stability. We have developed an optical-fiber-based PhOCT system around the akinetic laser source, which had a 1550 nm center wavelength and a sweep rate of 140 kHz. The stability of the system was measured to be 4.4 pm with a calibrated reflector, thus demonstrating near shot-noise-limited performance. Using this PhOCT system, we have acquired structural and vibratory measurements of the middle ear in a mouse model, post mortem. The quality of the results suggests that the akinetic laser is a superior source for PhOCT, with many advantages that greatly reduce the required complexity of the imaging system.

  16. Worsening respiratory function in mechanically ventilated intensive care patients: feasibility and value of xenon-enhanced dual energy CT.

    PubMed

    Hoegl, Sandra; Meinel, Felix G; Thieme, Sven F; Johnson, Thorsten R C; Eickelberg, Oliver; Zwissler, Bernhard; Nikolaou, Konstantin

    2013-03-01

    To evaluate the feasibility and incremental diagnostic value of xenon-enhanced dual-energy CT in mechanically ventilated intensive care patients with worsening respiratory function. The study was performed in 13 mechanically ventilated patients with severe pulmonary conditions (acute respiratory distress syndrome (ARDS), n=5; status post lung transplantation, n=5; other, n=3) and declining respiratory function. CT scans were performed using a dual-source CT scanner at an expiratory xenon concentration of 30%. Both ventilation images (Xe-DECT) and standard CT images were reconstructed from a single CT scan. Findings were recorded for Xe-DECT and standard CT images separately. Ventilation defects on xenon images were matched to morphological findings on standard CT images and incremental diagnostic information of xenon ventilation images was recorded if present. Mean xenon consumption was 2.95 l per patient. No adverse events occurred under xenon inhalation. In the visual CT analysis, the Xe-DECT ventilation defects matched with pathologic changes in lung parenchyma seen in the standard CT images in all patients. Xe-DECT provided additional diagnostic findings in 4/13 patients. These included preserved ventilation despite early pneumonia (n=1), more confident discrimination between a large bulla and pneumothorax (n=1), detection of an airway-to-pneumothorax fistula (n=1) and exclusion of a suspected airway-to-mediastinum fistula (n=1). In all 4 patients, the additional findings had a substantial impact on patients' management. Xenon-enhanced DECT is safely feasible and can add relevant diagnostic information in mechanically ventilated intensive care patients with worsening respiratory function. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  17. Thermal imager sources of non-uniformities: modeling of static and dynamic contributions during operations

    NASA Astrophysics Data System (ADS)

    Sozzi, B.; Olivieri, M.; Mariani, P.; Giunti, C.; Zatti, S.; Porta, A.

    2014-05-01

    Owing to the rapid growth in cooled-detector sensitivity in recent years, temperature differences of 10-20 mK between adjacent objects can theoretically be discerned in the image, provided the calibration algorithm (non-uniformity correction, NUC) is able to take into account and compensate every spatial noise source. To predict how robust the NUC algorithm is under all working conditions, modeling the flux impinging on the detector becomes a key challenge in controlling and improving the quality of a properly calibrated image in all scene/ambient conditions, including every source of spurious signal. The papers available in the literature deal only with non-uniformity (NU) caused by pixel-to-pixel differences in detector parameters and by the difference between the reflection of the detector cold part and the housing at the operating temperature. These models do not explain the effects on the NUC results due to vignetting, dynamic sources outside and inside the FOV, or reflected contributions from hot spots inside the housing (for example, a thermal reference far from the optical path). We propose a mathematical model in which: 1) the detector and the system (opto-mechanical configuration and scene) are considered separately and represented by two independent transfer functions; 2) on every pixel of the array, the amount of photonic signal coming from the different spurious sources is considered in order to evaluate the effect on residual spatial noise due to dynamic operative conditions. This article also contains simulation results showing how this model can be used to predict the amount of spatial noise.
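
    For context, the calibration that such residual-noise modeling evaluates is often a per-pixel gain/offset (two-point) NUC computed from two uniform blackbody references. The following is a sketch of that standard correction only, not the authors' transfer-function model; variable names and units are assumptions:

    ```python
    # Two-point non-uniformity correction: per-pixel gain and offset from two
    # uniform reference frames acquired at known temperatures.
    import numpy as np

    def two_point_nuc(frame, ref_cold, ref_hot, t_cold, t_hot):
        """frame, ref_cold, ref_hot: 2D detector frames; t_cold, t_hot: reference temperatures."""
        gain = (t_hot - t_cold) / (ref_hot - ref_cold)   # per-pixel responsivity
        offset = t_cold - gain * ref_cold                # per-pixel offset
        return gain * frame + offset                     # corrected frame in temperature units
    ```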

  18. Critical bounds on noise and SNR for robust estimation of real-time brain activity from functional near infra-red spectroscopy.

    PubMed

    Aqil, Muhammad; Jeong, Myung Yung

    2018-04-24

    The robust characterization of real-time brain activity carries potential for many applications. However, the contamination of measured signals by various instrumental, environmental, and physiological sources of noise introduces a substantial amount of signal variance and, consequently, challenges real-time estimation of contributions from underlying neuronal sources. Functional near infra-red spectroscopy (fNIRS) is an emerging imaging modality whose real-time potential is yet to be fully explored. The objectives of the current study are to (i) validate a time-dependent linear model of hemodynamic responses in fNIRS, and (ii) test the robustness of this approach against measurement noise (instrumental and physiological) and mis-specification of the hemodynamic response basis functions (amplitude, latency, and duration). We propose a linear hemodynamic model with time-varying parameters, which are estimated (adapted and tracked) using a dynamic recursive least squares algorithm. Owing to the linear nature of the activation model, the problem of achieving robust convergence to an accurate estimation of the model parameters is recast as a problem of parameter error stability around the origin. We show that robust convergence of the proposed method is guaranteed in the presence of an acceptable degree of model misspecification, and we derive an upper bound on the noise below which reliable parameters can still be inferred. We also derive a lower bound on the signal-to-noise ratio above which reliable parameters can still be inferred from a channel/voxel. Whilst here applied to fNIRS, the proposed methodology is applicable to other hemodynamics-based imaging technologies such as functional magnetic resonance imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
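
    The adaptive estimator described above is, at its core, a recursive least squares (RLS) update with a forgetting factor applied to a linear regression whose coefficients drift in time. A minimal sketch of that generic update (regressor construction, forgetting factor, and initialization are illustrative assumptions, not the authors' algorithm):

    ```python
    # Recursive least squares with exponential forgetting for tracking
    # time-varying coefficients w[t] in y[t] ~ x[t] @ w[t].
    import numpy as np

    def rls_track(X, y, lam=0.99, delta=100.0):
        """X: (n_samples, n_params) regressors; y: (n_samples,) measurements."""
        n_params = X.shape[1]
        w = np.zeros(n_params)                  # current parameter estimates
        P = delta * np.eye(n_params)            # inverse correlation matrix
        history = np.zeros_like(X)
        for t in range(X.shape[0]):
            x = X[t]
            k = P @ x / (lam + x @ P @ x)       # gain vector
            e = y[t] - w @ x                    # a priori prediction error
            w = w + k * e                       # parameter update
            P = (P - np.outer(k, x @ P)) / lam  # covariance update with forgetting
            history[t] = w
        return history
    ```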

  19. Source biases in midlatitude magnetotelluric transfer functions due to Pc3-4 geomagnetic pulsations

    NASA Astrophysics Data System (ADS)

    Murphy, Benjamin S.; Egbert, Gary D.

    2018-01-01

    The magnetotelluric (MT) method for imaging the electrical conductivity structure of the Earth is based on the assumption that source magnetic fields can be considered quasi-uniform, such that the spatial scale of the inducing source is much larger than the intrinsic length scale of the electromagnetic induction process (the skin depth). Here, we show using EarthScope MT data that short spatial scale source magnetic fields from geomagnetic pulsations (Pc's) can violate this fundamental assumption. Over resistive regions of the Earth, the skin depth can be comparable to the short meridional range of Pc3-4 disturbances that are generated by geomagnetic field-line resonances (FLRs). In such cases, Pc's can introduce narrow-band bias in MT transfer function estimates at FLR eigenperiods (~10-100 s). Although it appears unlikely that these biases will be a significant problem for data inversions, further study is necessary to understand the conditions under which they may distort inverse solutions.
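
    A back-of-the-envelope check of the skin-depth argument is straightforward: over resistive crust, the electromagnetic skin depth at Pc3-4 periods grows to on the order of a hundred kilometres or more, approaching the meridional scale of field-line-resonance sources. The resistivity values below are illustrative assumptions:

    ```python
    # MT skin depth, delta = sqrt(2*rho/(mu0*omega)) ~= 0.503*sqrt(rho*T) km,
    # evaluated over the Pc3-4 period range for conductive vs. resistive ground.
    import numpy as np

    def skin_depth_km(resistivity_ohm_m, period_s):
        return 0.503 * np.sqrt(resistivity_ohm_m * period_s)

    for rho in (10.0, 1000.0):          # illustrative resistivities, ohm-m
        for period in (10.0, 100.0):    # Pc3-4 period range, s
            print(rho, period, round(skin_depth_km(rho, period), 1), "km")
    ```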

  20. On the origin of the soft X-ray background [in cosmological observations]

    NASA Technical Reports Server (NTRS)

    Wang, Q. D.; Mccray, Richard

    1993-01-01

    The angular autocorrelation function and spectrum of the soft X-ray background are studied below a discrete source detection limit, using two deep images from the ROSAT X-ray satellite. The average spectral shape of pointlike sources, which account for 40 to 60 percent of the background intensity, is determined by using the autocorrelation function. The background spectrum, in the 0.5-0.9 keV band (M band), is decomposed into a pointlike source component characterized by a power law and a diffuse component represented by a two-temperature plasma. These pointlike sources cannot contribute more than 60 percent of the X-ray background intensity in the M band without exceeding the total observed flux in the R7 band. Spectral analysis has shown that the local soft diffuse component, although dominating the background intensity at energies not greater than 0.3 keV, contributes only a small fraction of the M band background intensity. The diffuse component may represent an important constituent of the interstellar or intergalactic medium.
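
    The angular autocorrelation estimate used in this kind of analysis follows the standard Wiener-Khinchin route: transform the mean-subtracted image, take its power spectrum, and transform back. A minimal sketch (masking of detected sources and exposure weighting, which a real analysis needs, are omitted):

    ```python
    # Normalized autocorrelation of a 2D image via FFTs (zero lag at the centre).
    import numpy as np

    def autocorrelation(image):
        fluctuation = image - image.mean()
        power = np.abs(np.fft.fft2(fluctuation)) ** 2
        acf = np.fft.ifft2(power).real / fluctuation.size
        return np.fft.fftshift(acf) / fluctuation.var()
    ```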

  1. A new, open-source, multi-modality digital breast phantom

    NASA Astrophysics Data System (ADS)

    Graff, Christian G.

    2016-03-01

    An anthropomorphic digital breast phantom has been developed with the goal of generating random voxelized breast models that capture the anatomic variability observed in vivo. This is a new phantom and is not based on existing digital breast phantoms or segmentation of patient images. It has been designed at the outset to be modality agnostic (i.e., suitable for use in modeling x-ray based imaging systems, magnetic resonance imaging, and potentially other imaging systems) and open source so that users may freely modify the phantom to suit a particular study. In this work we describe the modeling techniques that have been developed, the capabilities and novel features of this phantom, and study simulated images produced from it. Starting from a base quadric, a series of deformations are performed to create a breast with a particular volume and shape. Initial glandular compartments are generated using a Voronoi technique and a ductal tree structure with terminal duct lobular units is grown from the nipple into each compartment. An additional step involving the creation of fat and glandular lobules using a Perlin noise function is performed to create more realistic glandular/fat tissue interfaces and generate a Cooper's ligament network. A vascular tree is grown from the chest muscle into the breast tissue. Breast compression is performed using a neo-Hookean elasticity model. We show simulated mammographic and T1-weighted MRI images and study properties of these images.
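
    The initial Voronoi step can be illustrated on a small voxel grid: every voxel is labelled with its nearest random seed, producing the starting glandular compartments that the phantom then refines with ductal trees, lobules, and ligaments. Grid size and seed count below are illustrative assumptions:

    ```python
    # Nearest-seed (Voronoi) labelling of a voxel grid into initial compartments.
    import numpy as np

    rng = np.random.default_rng(0)
    shape = (64, 64, 64)                                   # small demo grid
    seeds = rng.uniform(0, 1, size=(12, 3)) * np.array(shape)

    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)
    dist = np.linalg.norm(grid[..., None, :] - seeds, axis=-1)   # distance to every seed
    compartments = np.argmin(dist, axis=-1)                      # label of nearest seed
    print(np.bincount(compartments.ravel()))                     # voxels per compartment
    ```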

  2. Multichannel optical brain imaging to separate cerebral vascular, tissue metabolic, and neuronal effects of cocaine

    NASA Astrophysics Data System (ADS)

    Ren, Hugang; Luo, Zhongchi; Yuan, Zhijia; Pan, Yingtian; Du, Congwu

    2012-02-01

    Characterization of cerebral hemodynamic and oxygenation metabolic changes, as well as neuronal function, is of great importance to the study of brain functions and relevant brain disorders such as drug addiction. Compared with other neuroimaging modalities, optical imaging techniques have the potential for high spatiotemporal resolution and dissection of the changes in cerebral blood flow (CBF), blood volume (CBV), and hemoglobin oxygenation, as well as intracellular Ca²⁺ ([Ca²⁺]i), which serve as markers of vascular function, tissue metabolism, and neuronal activity, respectively. Recently, we developed a multiwavelength imaging system and integrated it into a surgical microscope. Three LEDs at λ1=530 nm, λ2=570 nm and λ3=630 nm were used for exciting [Ca²⁺]i fluorescence labeled by Rhod2 (AM) and for sensing total hemoglobin (i.e., CBV) and deoxygenated hemoglobin, whereas one laser diode (LD) at λ4=830 nm was used for laser speckle imaging to form a CBF map of the brain. These light sources were time-shared for illumination of the brain and synchronized with the exposure of the CCD camera to acquire multichannel images of the brain. Our animal studies indicated that this optical approach enabled simultaneous mapping of cocaine-induced changes in CBF, CBV, oxygenated and deoxygenated hemoglobin, as well as [Ca²⁺]i in the cortical brain. Its high spatiotemporal resolution (30 μm, 10 Hz) and large field of view (4×5 mm²) make it an advanced neuroimaging tool for studies of brain function.
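
    The CBF channel mentioned above relies on laser speckle contrast imaging, in which the local contrast K = σ/μ of the raw speckle pattern drops where flow is faster. A minimal sketch of the contrast computation (the window size is an illustrative assumption):

    ```python
    # Laser speckle contrast map: local standard deviation over local mean.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(raw_speckle, window=7):
        img = raw_speckle.astype(float)
        mean = uniform_filter(img, size=window)
        mean_sq = uniform_filter(img ** 2, size=window)
        variance = np.clip(mean_sq - mean ** 2, 0.0, None)
        return np.sqrt(variance) / (mean + 1e-12)    # contrast K; lower K = faster flow
    ```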

  3. Blind source separation in retinal videos

    NASA Astrophysics Data System (ADS)

    Barriga, Eduardo S.; Truitt, Paul W.; Pattichis, Marios S.; Tüso, Dan; Kwon, Young H.; Kardon, Randy H.; Soliz, Peter

    2003-05-01

    An optical imaging device of retina function (OID-RF) has been developed to measure changes in blood oxygen saturation due to neural activity resulting from visual stimulation of the photoreceptors in the human retina. The video data that are collected represent a mixture of the functional signal in response to the retinal activation and other signals from undetermined physiological activity. Measured changes in reflectance in response to the visual stimulus are on the order of 0.1% to 1.0% of the total reflected intensity level, which makes the functional signal difficult to detect by standard methods since it is masked by the other signals that are present. In this paper, we apply principal component analysis (PCA), blind source separation (BSS) using Extended Spatial Decorrelation (ESD), and independent component analysis (ICA) using the Fast-ICA algorithm to extract the functional signal from the retinal videos. The results revealed that the functional signal in a stimulated retina can be detected through the application of some of these techniques.
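
    As an illustration of the Fast-ICA step, a video can be reshaped into a frames-by-pixels matrix and unmixed into temporal sources and spatial maps. The component count and data layout below are assumptions for illustration, not the study's processing chain:

    ```python
    # Unmix a (frames x height x width) video into independent components with FastICA.
    import numpy as np
    from sklearn.decomposition import FastICA

    def unmix(video, n_components=5):
        n_frames, h, w = video.shape
        X = video.reshape(n_frames, h * w)              # observations x pixels
        ica = FastICA(n_components=n_components, random_state=0)
        time_courses = ica.fit_transform(X)             # (n_frames, n_components)
        spatial_maps = ica.mixing_.T.reshape(n_components, h, w)
        return time_courses, spatial_maps
    ```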

  4. SU-F-T-61: Treatment Planning Observations for the CivaSheet Directional Brachytherapy Device Using VariSeed 9.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivard, MJ; Rothley, DJ

    2016-06-15

    Purpose: The VariSeed 9.0 brachytherapy TPS has recently become available and has new features such as the ability to rotate a brachytherapy source away from normal to the imaging plane. Consequently, a dosimetric analysis was performed for a directional brachytherapy source (CivaSheet) with tests of this functionality, and experiences from clinical treatment planning were documented. These observations contribute to safe, practical, and accurate use of such new software features. Methods: Several tests were established to evaluate the new rotational feature, specific to the CivaSheet, for the first patients treated using this new brachytherapy device. These included suitability of imaging slice-thickness and in-plane resolution, window/level adjustments for brachytherapy source visualization, commissioning the source physical length for performing rotations, and using different planar and 3D window views to identify source orientation. Additional CivaSheet-specific tests were performed to determine the dosimetric influence on target coverage: changing the source tilt angle, source positioning in the treatment plan based on the CivaSheet rectangular array of CivaDots, and influence of prescription depth on the necessary treatment margin for adequate target coverage. Results: Higher imaging resolution produced better accuracy for source orientation and positioning, with sub-millimeter CT slice-thickness and in-plane resolution preferred. Source rotation was possible only in sagittal or coronal views. The process for validating source orientation required iteratively altering rotations then checking them in the 3D view, which was cumbersome given the absence of quantitative plan documentation to indicate orientation. Given the small Pd-103 source size, the influence of source tilt within 30° was negligible for depths <1.0 cm. Influence of source position was important when the source was positioned in/out of the adjacent source plane, causing changes of 15%, 7%, and 3% at depths of 0.5, 0.7, and 1.0 cm. Conclusion: The new TPS rotational feature worked well, but several issues were identified to improve the treatment planning process. Research supported in part by CivaTech Oncology, Inc. for Dr. Rivard.

  5. Simultaneous head tissue conductivity and EEG source location estimation.

    PubMed

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Simultaneous head tissue conductivity and EEG source location estimation

    PubMed Central

    Acar, Can E.; Makeig, Scott

    2015-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm2-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm2-scale accurate 3-D functional cortical imaging modality. PMID:26302675

  7. A SmallSat Approach for Global Imaging Spectroscopy of the Earth SYSTEM Enabled by Advanced Technology

    NASA Astrophysics Data System (ADS)

    Green, R. O.; Asner, G. P.; Thompson, D. R.; Mouroulis, P.; Eastwood, M. L.; Chien, S.

    2017-12-01

    Global coverage imaging spectroscopy in the solar reflected energy portion of the spectrum has been identified by the Earth Decadal Survey as an important measurement that enables a diverse set of new and time-critical science objectives/targets for the Earth system. These science objectives include biodiversity; ecosystem function; ecosystem biogeochemistry; initialization and constraint of global ecosystem models; fire fuel, combustion, burn severity, and recovery; surface mineralogy, geochemistry, geologic processes, soils, and hazards; global mineral dust source composition; cryospheric albedo, energy balance, and melting; coastal and inland water habitats; coral reefs; point source gas emission; cloud thermodynamic phase; urban system properties; and more. Traceability of these science objectives to spectroscopic measurement in the visible to short wavelength infrared portion of the spectrum is summarized. New approaches to acquiring these global imaging spectroscopy measurements, including satellite constellations, are presented, drawing from recent advances in optical design, detector technology, instrument architecture, thermal control, on-board processing, data storage, and downlink.

  8. Preclinical Magnetic Resonance Imaging and Systems Biology in Cancer Research

    PubMed Central

    Albanese, Chris; Rodriguez, Olga C.; VanMeter, John; Fricke, Stanley T.; Rood, Brian R.; Lee, YiChien; Wang, Sean S.; Madhavan, Subha; Gusev, Yuriy; Petricoin, Emanuel F.; Wang, Yue

    2014-01-01

    Biologically accurate mouse models of human cancer have become important tools for the study of human disease. The anatomical location of various target organs, such as brain, pancreas, and prostate, makes determination of disease status difficult. Imaging modalities, such as magnetic resonance imaging, can greatly enhance diagnosis, and longitudinal imaging of tumor progression is an important source of experimental data. Even in models where the tumors arise in areas that permit visual determination of tumorigenesis, longitudinal anatomical and functional imaging can enhance the scope of studies by facilitating the assessment of biological alterations (such as changes in angiogenesis, metabolism, and cellular invasion) as well as tissue perfusion and diffusion. One of the challenges in preclinical imaging is the development of infrastructural platforms required for integrating in vivo imaging and therapeutic response data with ex vivo pathological and molecular data using a more systems-based multiscale modeling approach. Further challenges exist in integrating these data for computational modeling to better understand the pathobiology of cancer and to better effect its cure. We review the current applications of preclinical imaging and discuss the implications of applying functional imaging to visualize cancer progression and treatment. Finally, we provide new data from an ongoing preclinical drug study demonstrating how multiscale modeling can lead to a more comprehensive understanding of cancer biology and therapy. PMID:23219428

  9. Free LittleDog!: Towards Completely Untethered Operation of the LittleDog Quadruped

    DTIC Science & Technology

    2007-08-01

    helpful Intel Open Source Computer Vision (OpenCV) library [4] wherever possible rather than reimplementing many of the standard algorithms, however... correspondences between image points and world points, and feeding these to a camera calibration function, such as that provided by OpenCV, allows one to solve... OpenCV calibration function to that used for intrinsic calibration solves for T_board→camera_i. The position of the camera...

  10. Applications of two-photon fluorescence microscopy in deep-tissue imaging

    NASA Astrophysics Data System (ADS)

    Dong, Chen-Yuan; Yu, Betty; Hsu, Lily L.; Kaplan, Peter D.; Blankschstein, D.; Langer, Robert; So, Peter T. C.

    2000-07-01

    Based on the non-linear excitation of fluorescent molecules, two-photon fluorescence microscopy has become a significant new tool for biological imaging. The point-like excitation characteristic of this technique enhances image quality by virtually eliminating off-focal fluorescence. Furthermore, sample photodamage is greatly reduced because fluorescence excitation is limited to the focal region. For deep tissue imaging, two-photon microscopy has the additional benefit of greatly improved depth penetration. Since the near-infrared laser sources used in two-photon microscopy scatter less than their UV/blue-green counterparts, in-depth imaging of highly scattering specimens can be greatly improved. In this work, we will present data characterizing both the imaging performance (point spread functions) of this technology and images of tissue samples (skin) acquired with it. In particular, we will demonstrate how blind deconvolution can be used to further improve two-photon image quality and how this technique can be used to study mechanisms of chemically enhanced transdermal drug delivery.

  11. Anatomical guidance for functional near-infrared spectroscopy: AtlasViewer tutorial

    PubMed Central

    Aasted, Christopher M.; Yücel, Meryem A.; Cooper, Robert J.; Dubb, Jay; Tsuzuki, Daisuke; Becerra, Lino; Petkov, Mike P.; Borsook, David; Dan, Ippeita; Boas, David A.

    2015-01-01

    Abstract. Functional near-infrared spectroscopy (fNIRS) is an optical imaging method that is used to noninvasively measure cerebral hemoglobin concentration changes induced by brain activation. Using structural guidance in fNIRS research enhances interpretation of results and facilitates making comparisons between studies. AtlasViewer is an open-source software package we have developed that incorporates multiple spatial registration tools to enable structural guidance in the interpretation of fNIRS studies. We introduce the reader to the layout of the AtlasViewer graphical user interface, the folder structure, and user files required in the creation of fNIRS probes containing sources and detectors registered to desired locations on the head, evaluating probe fabrication error and intersubject probe placement variability, and different procedures for estimating measurement sensitivity to different brain regions as well as image reconstruction performance. Further, we detail how AtlasViewer provides a generic head atlas for guiding interpretation of fNIRS results, but also permits users to provide subject-specific head anatomies to interpret their results. We anticipate that AtlasViewer will be a valuable tool in improving the anatomical interpretation of fNIRS studies. PMID:26157991

  12. Regression Models for Identifying Noise Sources in Magnetic Resonance Images

    PubMed Central

    Zhu, Hongtu; Li, Yimei; Ibrahim, Joseph G.; Shi, Xiaoyan; An, Hongyu; Chen, Yashen; Gao, Wei; Lin, Weili; Rowe, Daniel B.; Peterson, Bradley S.

    2009-01-01

    Stochastic noise, susceptibility artifacts, magnetic field and radiofrequency inhomogeneities, and other noise components in magnetic resonance images (MRIs) can introduce serious bias into any measurements made with those images. We formally introduce three regression models including a Rician regression model and two associated normal models to characterize stochastic noise in various magnetic resonance imaging modalities, including diffusion-weighted imaging (DWI) and functional MRI (fMRI). Estimation algorithms are introduced to maximize the likelihood function of the three regression models. We also develop a diagnostic procedure for systematically exploring MR images to identify noise components other than simple stochastic noise, and to detect discrepancies between the fitted regression models and MRI data. The diagnostic procedure includes goodness-of-fit statistics, measures of influence, and tools for graphical display. The goodness-of-fit statistics can assess the key assumptions of the three regression models, whereas measures of influence can isolate outliers caused by certain noise components, including motion artifacts. The tools for graphical display permit graphical visualization of the values for the goodness-of-fit statistic and influence measures. Finally, we conduct simulation studies to evaluate performance of these methods, and we analyze a real dataset to illustrate how our diagnostic procedure localizes subtle image artifacts by detecting intravoxel variability that is not captured by the regression models. PMID:19890478
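
    A minimal sketch of maximum-likelihood fitting of a Rician model to magnitude MR intensities, in the spirit of the regression models described above; the design matrix, simulated data and starting values are illustrative assumptions, not the paper's models or estimation algorithms.

      import numpy as np
      from scipy.special import i0e
      from scipy.optimize import minimize

      def rician_negloglik(params, y, X):
          # params = [regression coefficients..., log(sigma)]; y are magnitude intensities
          beta, log_sigma = params[:-1], params[-1]
          sigma2 = np.exp(2.0 * log_sigma)
          nu = np.maximum(X @ beta, 1e-12)          # noise-free signal predicted by the regression
          z = y * nu / sigma2
          log_i0 = np.log(i0e(z)) + z               # stable log I0 via the exponentially scaled Bessel function
          ll = np.log(y / sigma2) - (y**2 + nu**2) / (2.0 * sigma2) + log_i0
          return -np.sum(ll)

      # simulate Rician data as the magnitude of a complex Gaussian signal
      rng = np.random.default_rng(0)
      X = np.column_stack([np.ones(500), rng.uniform(0.0, 1.0, 500)])
      nu_true, sigma = X @ np.array([20.0, 10.0]), 5.0
      y = np.abs(nu_true + sigma * (rng.standard_normal(500) + 1j * rng.standard_normal(500)))

      fit = minimize(rician_negloglik, x0=np.array([10.0, 5.0, np.log(3.0)]),
                     args=(y, X), method="Nelder-Mead")
      print(fit.x)   # estimates of the two coefficients and log(sigma)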

  13. Performance Evaluation of 18F Radioluminescence Microscopy Using Computational Simulation

    PubMed Central

    Wang, Qian; Sengupta, Debanti; Kim, Tae Jin; Pratx, Guillem

    2017-01-01

    Purpose Radioluminescence microscopy can visualize the distribution of beta-emitting radiotracers in live single cells with high resolution. Here, we perform a computational simulation of 18F positron imaging using this modality to better understand how radioluminescence signals are formed and to assist in optimizing the experimental setup and image processing. Methods First, the transport of charged particles through the cell and scintillator and the resulting scintillation is modeled using the GEANT4 Monte-Carlo simulation. Then, the propagation of the scintillation light through the microscope is modeled by a convolution with a depth-dependent point-spread function, which models the microscope response. Finally, the physical measurement of the scintillation light using an electron-multiplying charge-coupled device (EMCCD) camera is modeled using a stochastic numerical photosensor model, which accounts for various sources of noise. The simulated output of the EMCCD camera is further processed using our ORBIT image reconstruction methodology to evaluate the endpoint images. Results The EMCCD camera model was validated against experimentally acquired images and the simulated noise, as measured by the standard deviation of a blank image, was found to be accurate within 2% of the actual detection. Furthermore, point-source simulations found that a reconstructed spatial resolution of 18.5 μm can be achieved near the scintillator. As the source is moved away from the scintillator, spatial resolution degrades at a rate of 3.5 μm per μm distance. These results agree well with the experimentally measured spatial resolution of 30–40 μm (live cells). The simulation also shows that the system sensitivity is 26.5%, which is also consistent with our previous experiments. Finally, an image of a simulated sparse set of single cells is visually similar to the measured cell image. Conclusions Our simulation methodology agrees with experimental measurements taken with radioluminescence microscopy. This in silico approach can be used to guide further instrumentation developments and to provide a framework for improving image reconstruction. PMID:28273348

  14. Sex, acceleration, brain imaging, and rhesus monkeys: Converging evidence for an evolutionary bias for looming auditory motion

    NASA Astrophysics Data System (ADS)

    Neuhoff, John G.

    2003-04-01

    Increasing acoustic intensity is a primary cue to looming auditory motion. Perceptual overestimation of increasing intensity could provide an evolutionary selective advantage by specifying that an approaching sound source is closer than actual, thus affording advanced warning and more time than expected to prepare for the arrival of the source. Here, multiple lines of converging evidence for this evolutionary hypothesis are presented. First, it is shown that intensity change specifying accelerating source approach changes in loudness more than equivalent intensity change specifying decelerating source approach. Second, consistent with evolutionary hunter-gatherer theories of sex-specific spatial abilities, it is shown that females have a significantly larger bias for rising intensity than males. Third, using functional magnetic resonance imaging in conjunction with approaching and receding auditory motion, it is shown that approaching sources preferentially activate a specific neural network responsible for attention allocation, motor planning, and translating perception into action. Finally, it is shown that rhesus monkeys also exhibit a rising intensity bias by orienting longer to looming tones than to receding tones. Together these results illustrate an adaptive perceptual bias that has evolved because it provides a selective advantage in processing looming acoustic sources. [Work supported by NSF and CDC.]

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    CorAL is a software library designed to aid in the analysis of femtoscopic data. Femtoscopic data are a class of measured quantities used in heavy-ion collisions to characterize particle-emitting source sizes. The most common type of these data is two-particle correlations induced by the Hanbury Brown/Twiss (HBT) effect, but they can also include correlations induced by final-state interactions between pairs of emitted particles in a heavy-ion collision. Because heavy-ion collisions are complex many-particle systems, modeling them typically requires hydrodynamical models or hybrid techniques. Using the CRAB module, CorAL can turn the output from these models into something that can be directly compared to experimental data. CorAL can also take the raw experimentally measured correlation functions and image them by inverting the Koonin-Pratt equation to extract the space-time emission profile of the particle-emitting source. This source function can be further analyzed or directly compared to theoretical calculations.

  16. Prestack reverse time migration for tilted transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Jang, Seonghyung; Hien, Doan Huy

    2013-04-01

    With growing interest in unconventional resource plays, anisotropy is naturally considered an important factor for improving seismic image quality. Although prestack depth migration of seismic reflection data is currently one of the most powerful tools for imaging complex geological structures, it can introduce migration errors if anisotropy is neglected. Asymptotic analysis of wave propagation in transversely isotropic (TI) media yields a dispersion relation for the coupled P- and SV-wave modes that can be converted to a fourth-order scalar partial differential equation (PDE). By setting the shear-wave velocity to zero, this fourth-order PDE, called the acoustic wave equation for TI media, can be reduced to a coupled system of second-order PDEs, which we solve with the finite difference method (FDM). The resulting P-wavefield simulation is kinematically similar to an elastic, anisotropic wavefield simulation. We develop a prestack depth migration algorithm for tilted transversely isotropic (TTI) media using the reverse time migration method (RTM). RTM images the subsurface by taking the inner product of the source wavefield extrapolated forward in time and the receiver wavefield extrapolated backward in time. We show the subsurface image in TTI media using the inner product of partial derivative wavefields with respect to the physical parameters and the observation data. Since the partial derivative wavefields with respect to the physical parameters require extremely large computing time, we instead implemented the imaging condition as the zero-lag crosscorrelation of a virtual source and the back-propagated wavefield. The virtual source is calculated directly by solving the anisotropic acoustic wave equation, while the back-propagated wavefield is calculated by using the shot gather as the source function in the anisotropic acoustic wave equation. In a numerical test on a simple geological model including a syncline and an anticline, prestack depth migration using TTI-RTM in weakly anisotropic media produces a subsurface image similar to the true geological model used to generate the shot gathers.
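
    A schematic sketch of the zero-lag crosscorrelation imaging condition mentioned above: at each time step the (virtual) source wavefield and the back-propagated receiver wavefield are multiplied sample by sample and accumulated into the image. The wavefield snapshots here are placeholder arrays; in practice they would come from an anisotropic acoustic finite-difference propagator.

      import numpy as np

      def rtm_image(source_snapshots, receiver_snapshots):
          # both inputs have shape (nt, nz, nx): wavefield snapshots at each time step
          image = np.zeros(source_snapshots.shape[1:])
          for s, r in zip(source_snapshots, receiver_snapshots):
              image += s * r          # zero-lag correlation, summed over time
          return image

      # toy call with random "wavefields", just to show the shapes involved
      nt, nz, nx = 100, 50, 60
      img = rtm_image(np.random.rand(nt, nz, nx), np.random.rand(nt, nz, nx))
      print(img.shape)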

  17. Towards the Experimental Assessment of the DQE in SPECT Scanners

    NASA Astrophysics Data System (ADS)

    Fountos, G. P.; Michail, C. M.

    2017-11-01

    The purpose of this work was to introduce the Detective Quantum Efficiency (DQE) in single photon emission computed tomography (SPECT) systems using a flood source. A Tc-99m-based flood source (Eγ = 140 keV), consisting of a radiopharmaceutical solution of dithiothreitol (DTT, 10^-3 M)/Tc-99m(III)-DMSA, 40 mCi/40 ml, bound to the grains of an Agfa MammoRay HDR Medical X-ray film, was prepared in the laboratory. The source was placed between two PMMA blocks and images were obtained using the brain tomographic acquisition protocol (DatScan-brain). The Modulation Transfer Function (MTF) was evaluated using the Iterative 2D algorithm. All imaging experiments were performed on a Siemens e-Cam gamma camera. The Normalized Noise Power Spectra (NNPS) were obtained from the sagittal views of the source. The highest MTF values were obtained for the Flash Iterative 2D algorithm with 24 iterations and 20 subsets. The noise levels of the SPECT reconstructed images, in terms of the NNPS, were found to increase as the number of iterations increases. The behavior of the DQE was influenced by both the MTF and the NNPS. As the number of iterations was increased, higher MTF values were obtained, however with a parallel increase in image noise, as depicted by the NNPS results. DQE values, which were influenced by both the MTF and the NNPS, were found to be higher when the number of iterations results in resolution saturation. The method presented here is novel and easy to implement, requires materials commonly found in clinical practice, and can be useful in the quality control of SPECT scanners.
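
    For reference, a minimal sketch of assembling a frequency-dependent DQE from a measured MTF and NNPS using the common radiographic definition DQE(f) = MTF(f)^2 / (q NNPS(f)), with q the incident photon fluence; the curves and fluence below are illustrative assumptions, and the exact estimator used in this SPECT study may differ in detail.

      import numpy as np

      def dqe(mtf, nnps, fluence):
          # DQE(f) = MTF(f)^2 / (q * NNPS(f)), the standard radiographic form
          mtf = np.asarray(mtf, dtype=float)
          nnps = np.asarray(nnps, dtype=float)
          return mtf**2 / (fluence * nnps)

      f = np.linspace(0.0, 1.0, 6)           # spatial frequency, cycles/mm
      mtf = np.exp(-3.0 * f)                 # assumed MTF curve
      nnps = 1e-5 * (1.0 + 0.5 * f)          # assumed NNPS, mm^2
      print(dqe(mtf, nnps, fluence=1e5))     # assumed fluence, photons/mm^2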

  18. OnEarth: An Open Source Solution for Efficiently Serving High-Resolution Mapped Image Products

    NASA Astrophysics Data System (ADS)

    Thompson, C. K.; Plesea, L.; Hall, J. R.; Roberts, J. T.; Cechini, M. F.; Schmaltz, J. E.; Alarcon, C.; Huang, T.; McGann, J. M.; Chang, G.; Boller, R. A.; Ilavajhala, S.; Murphy, K. J.; Bingham, A. W.

    2013-12-01

    This presentation introduces OnEarth, a server-side software package originally developed at the Jet Propulsion Laboratory (JPL) that facilitates network-based, minimum-latency geolocated image access independent of image size or spatial resolution. The key component in this package is the Meta Raster Format (MRF), a specialized raster file extension to the Geospatial Data Abstraction Library (GDAL) consisting of an internal indexed pyramid of image tiles. Imagery to be served is converted to the MRF format and made accessible online via an expandable set of server modules handling requests in several common protocols, including the Open Geospatial Consortium (OGC) compliant Web Map Tile Service (WMTS) as well as Tiled WMS and Keyhole Markup Language (KML). OnEarth has recently transitioned to open source status and is maintained and actively developed as part of GIBS (Global Imagery Browse Services), a collaborative project between JPL and Goddard Space Flight Center (GSFC). The primary function of GIBS is to enhance and streamline the data discovery process and to support near real-time (NRT) applications via the expeditious ingestion and serving of full-resolution imagery representing science products from across the NASA Earth Science spectrum. Open source software solutions are leveraged where possible in order to utilize existing available technologies, reduce development time, and enlist wider community participation. We will discuss some of the factors and decision points in transitioning OnEarth to a suitable open source paradigm, including repository and licensing choices, institutional hurdles, and perceived benefits. We will also provide examples illustrating how OnEarth is integrated within GIBS and other applications.

  19. Remote measurement methods for 3-D modeling purposes using BAE Systems' Software

    NASA Astrophysics Data System (ADS)

    Walker, Stewart; Pietrzak, Arleta

    2015-06-01

    Efficient, accurate data collection from imagery is the key to an economical generation of useful geospatial products. Incremental developments of traditional geospatial data collection and the arrival of new image data sources cause new software packages to be created and existing ones to be adjusted to enable such data to be processed. In the past, BAE Systems' digital photogrammetric workstation, SOCET SET®, met fin de siècle expectations in data processing and feature extraction. Its successor, SOCET GXP®, addresses today's photogrammetric requirements and new data sources. SOCET GXP is an advanced workstation for mapping and photogrammetric tasks, with automated functionality for triangulation, Digital Elevation Model (DEM) extraction, orthorectification and mosaicking, feature extraction and creation of 3-D models with texturing. BAE Systems continues to add sensor models to accommodate new image sources, in response to customer demand. New capabilities added in the latest version of SOCET GXP facilitate modeling, visualization and analysis of 3-D features.

  20. Spectral Indices of Faint Radio Sources

    NASA Astrophysics Data System (ADS)

    Gim, Hansung B.; Hales, Christopher A.; Momjian, Emmanuel; Yun, Min Su

    2015-01-01

    The significant improvement in bandwidth and the resultant sensitivity offered by the Karl G. Jansky Very Large Array (VLA) allows us to explore the faint radio source population. Through the study of the radio continuum we can explore the spectral indices of these radio sources. Robust radio spectral indices are needed for accurate k-corrections, for example in the study of the radio-far-infrared (FIR) correlation. We present an analysis of measuring spectral indices using two different approaches. In the first, we use the standard wideband imaging algorithm in the data reduction package CASA. In the second, we use a traditional approach of imaging narrower bandwidths to derive the spectral indices. For these, we simulated data to match the observing parameter space of the CHILES Con Pol survey (Hales et al. 2014). We investigate the accuracy and precision of spectral index measurements as a function of signal-to-noise ratio, and explore the requirements to reliably probe possible evolution of the radio-FIR correlation in CHILES Con Pol.
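
    A minimal sketch of the two-point spectral index alpha (with the convention S proportional to nu^alpha) and its uncertainty from flux densities measured in two sub-bands, using simple error propagation; the flux densities, errors and frequencies below are illustrative values, not CHILES Con Pol measurements.

      import numpy as np

      def spectral_index(s1, e1, nu1, s2, e2, nu2):
          # alpha = ln(S1/S2) / ln(nu1/nu2); sigma_alpha from first-order error propagation
          alpha = np.log(s1 / s2) / np.log(nu1 / nu2)
          sigma = np.sqrt((e1 / s1) ** 2 + (e2 / s2) ** 2) / abs(np.log(nu1 / nu2))
          return alpha, sigma

      alpha, sigma = spectral_index(s1=120e-6, e1=10e-6, nu1=1.9e9,
                                    s2=150e-6, e2=10e-6, nu2=1.1e9)
      print(f"alpha = {alpha:.2f} +/- {sigma:.2f}")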

  1. SimVascular: An Open Source Pipeline for Cardiovascular Simulation.

    PubMed

    Updegrove, Adam; Wilson, Nathan M; Merkow, Jameson; Lan, Hongzhi; Marsden, Alison L; Shadden, Shawn C

    2017-03-01

    Patient-specific cardiovascular simulation has become a paradigm in cardiovascular research and is emerging as a powerful tool in basic, translational and clinical research. In this paper we discuss the recent development of a fully open-source SimVascular software package, which provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation and analysis. This package serves as a research tool for cardiovascular modeling and simulation, and has contributed to numerous advances in personalized medicine, surgical planning and medical device design. The SimVascular software has recently been refactored and expanded to enhance functionality, usability, efficiency and accuracy of image-based patient-specific modeling tools. Moreover, SimVascular previously required several licensed components that hindered new user adoption and code management and our recent developments have replaced these commercial components to create a fully open source pipeline. These developments foster advances in cardiovascular modeling research, increased collaboration, standardization of methods, and a growing developer community.

  2. A novel phoswich imaging detector for simultaneous beta and coincidence-gamma imaging of plant leaves.

    PubMed

    Wu, Heyu; Tai, Yuan-Chuan

    2011-09-07

    To meet the growing demand for functional imaging technology for use in studying plant biology, we are developing a novel technique that permits simultaneous imaging of escaped positrons and coincidence gammas from annihilation of positrons within an intact leaf. The multi-modality imaging system will include two planar detectors: one is a typical PET detector array and the other is a phoswich imaging detector that detects both beta and gamma events. The novel phoswich detector is made of a plastic scintillator, a lutetium oxyorthosilicate (LSO) array, and a position-sensitive photomultiplier tube (PS-PMT). The plastic scintillator serves as a beta detector, while the LSO array serves as a gamma detector and light guide that couples scintillation light from the plastic detector to the PMT. In our prototype, the PMT signal was fed into the Siemens QuickSilver electronics to achieve shaping and waveform sampling. Pulse-shape discrimination based on the detectors' decay times (2.1 ns for plastic and 40 ns for LSO) was used to differentiate beta and gamma events using the common PMT signals. Using our prototype phoswich detector, we simultaneously measured a beta image and gamma events (in singles mode). The beta image showed a resolution of 1.6 mm full-width-at-half-maximum using F-18 line sources. Because this shows promise for plant-scale imaging, our future plans include development of a fully functional simultaneous beta-and-coincidence-gamma imager with sub-millimeter resolution imaging capability for both modalities.
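
    A toy sketch of decay-time-based pulse-shape discrimination of the kind described above, using idealized exponential pulses and an arbitrary tail gate; the sampling interval and gate position are assumptions, not the settings used with the QuickSilver electronics.

      import numpy as np

      dt = 0.5                                  # ns per sample (assumed)
      t = np.arange(0.0, 200.0, dt)

      def pulse(tau, amp=1.0):
          # idealized single-exponential scintillation pulse with decay time tau (ns)
          return amp * np.exp(-t / tau)

      def tail_fraction(waveform, gate_start_ns=10.0):
          # fraction of the total charge arriving after the tail gate
          gate = t >= gate_start_ns
          return waveform[gate].sum() / waveform.sum()

      plastic, lso = pulse(2.1), pulse(40.0)
      # small tail fraction -> beta in plastic; large tail fraction -> gamma in LSO
      print(round(tail_fraction(plastic), 3), round(tail_fraction(lso), 3))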

  3. Processing challenges in the XMM-Newton slew survey

    NASA Astrophysics Data System (ADS)

    Saxton, Richard D.; Altieri, Bruno; Read, Andrew M.; Freyberg, Michael J.; Esquej, M. P.; Bermejo, Diego

    2005-08-01

    The great collecting area of the mirrors coupled with the high quantum efficiency of the EPIC detectors has made XMM-Newton the most sensitive X-ray observatory flown to date. This is particularly evident during slew exposures which, while giving only 15 seconds of on-source time, actually constitute a 2-10 keV survey ten times deeper than current "all-sky" catalogues. Here we report on progress towards making a catalogue of slew detections constructed from the full, 0.2-12 keV energy band and discuss the challenges associated with processing the slew data. The fast (90 degrees per hour) slew speed results in images which are smeared, by different amounts depending on the readout mode, effectively changing the form of the point spread function. The extremely low background in slew images changes the optimum source searching criteria such that searching a single image using the full energy band is seen to be more sensitive than splitting the data into discrete energy bands. False detections due to optical loading by bright stars, the wings of the PSF in very bright sources and single-frame detector flashes are considered and techniques for identifying and removing these spurious sources from the final catalogue are outlined. Finally, the attitude reconstruction of the satellite during the slewing maneuver is complex. We discuss the implications of this for the positional accuracy of the catalogue.

  4. Using high resolution computed tomography to visualize the three dimensional structure and function of plant vasculature.

    PubMed

    McElrone, Andrew J; Choat, Brendan; Parkinson, Dilworth Y; MacDowell, Alastair A; Brodersen, Craig R

    2013-04-05

    High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D) (e.g. Brodersen et al. 2010; 2011; 2012a,b). HRCT imaging is based on the same principles as medical CT systems, but a high intensity synchrotron x-ray source results in higher spatial resolution and decreased image acquisition time. Here, we demonstrate in detail how synchrotron-based HRCT (performed at the Advanced Light Source-LBNL Berkeley, CA, USA) in combination with Avizo software (VSG Inc., Burlington, MA, USA) is being used to explore plant xylem in excised tissue and living plants. This new imaging tool allows users to move beyond traditional static, 2D light or electron micrographs and study samples using virtual serial sections in any plane. An infinite number of slices in any orientation can be made on the same sample, a feature that is physically impossible using traditional microscopy methods. Results demonstrate that HRCT can be applied to both herbaceous and woody plant species, and a range of plant organs (i.e. leaves, petioles, stems, trunks, roots). Figures presented here help demonstrate both a range of representative plant vascular anatomy and the type of detail extracted from HRCT datasets, including scans for coast redwood (Sequoia sempervirens), walnut (Juglans spp.), oak (Quercus spp.), and maple (Acer spp.) tree saplings to sunflowers (Helianthus annuus), grapevines (Vitis spp.), and ferns (Pteridium aquilinum and Woodwardia fimbriata). Excised and dried samples from woody species are easiest to scan and typically yield the best images. However, recent improvements (i.e. more rapid scans and sample stabilization) have made it possible to use this visualization technique on green tissues (e.g. petioles) and in living plants. On occasion some shrinkage of hydrated green plant tissues will cause images to blur and methods to avoid these issues are described. These recent advances with HRCT provide promising new insights into plant vascular function.

  5. Using High Resolution Computed Tomography to Visualize the Three Dimensional Structure and Function of Plant Vasculature

    PubMed Central

    McElrone, Andrew J.; Choat, Brendan; Parkinson, Dilworth Y.; MacDowell, Alastair A.; Brodersen, Craig R.

    2013-01-01

    High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D) (e.g. Brodersen et al. 2010; 2011; 2012a,b). HRCT imaging is based on the same principles as medical CT systems, but a high intensity synchrotron x-ray source results in higher spatial resolution and decreased image acquisition time. Here, we demonstrate in detail how synchrotron-based HRCT (performed at the Advanced Light Source-LBNL Berkeley, CA, USA) in combination with Avizo software (VSG Inc., Burlington, MA, USA) is being used to explore plant xylem in excised tissue and living plants. This new imaging tool allows users to move beyond traditional static, 2D light or electron micrographs and study samples using virtual serial sections in any plane. An infinite number of slices in any orientation can be made on the same sample, a feature that is physically impossible using traditional microscopy methods. Results demonstrate that HRCT can be applied to both herbaceous and woody plant species, and a range of plant organs (i.e. leaves, petioles, stems, trunks, roots). Figures presented here help demonstrate both a range of representative plant vascular anatomy and the type of detail extracted from HRCT datasets, including scans for coast redwood (Sequoia sempervirens), walnut (Juglans spp.), oak (Quercus spp.), and maple (Acer spp.) tree saplings to sunflowers (Helianthus annuus), grapevines (Vitis spp.), and ferns (Pteridium aquilinum and Woodwardia fimbriata). Excised and dried samples from woody species are easiest to scan and typically yield the best images. However, recent improvements (i.e. more rapid scans and sample stabilization) have made it possible to use this visualization technique on green tissues (e.g. petioles) and in living plants. On occasion some shrinkage of hydrated green plant tissues will cause images to blur and methods to avoid these issues are described. These recent advances with HRCT provide promising new insights into plant vascular function. PMID:23609036

  6. A review of biomedical multiphoton microscopy and its laser sources

    NASA Astrophysics Data System (ADS)

    Lefort, Claire

    2017-10-01

    Multiphoton microscopy (MPM) has been the subject of major development efforts for about 25 years for imaging biological specimens at the micron scale and has been presented as an elegant alternative to classical fluorescence methods such as confocal microscopy. In this topical review, the main interests and technical requirements of MPM are addressed with a focus on the crucial role of the excitation source in optimizing multiphoton processes. Then, an overview of the different sources successfully demonstrated in the literature for MPM is presented, and their physical parameters are inventoried. A classification of these sources according to their ability to optimize multiphoton processes is proposed, following a protocol found in the literature. Starting from these considerations, a suggestion of a possible identikit of the ideal laser source for MPM concludes this topical review. Dedicated to Martin.

  7. Multispectral breast imaging using a ten-wavelength, 64 x 64 source/detector channels silicon photodiode-based diffuse optical tomography system.

    PubMed

    Li, Changqing; Zhao, Hongzhi; Anderson, Bonnie; Jiang, Huabei

    2006-03-01

    We describe a compact diffuse optical tomography system specifically designed for breast imaging. The system consists of 64 silicon photodiode detectors, 64 excitation points, and 10 diode lasers in the near-infrared region, allowing multispectral, three-dimensional optical imaging of breast tissue. We also detail the system performance and optimization through a calibration procedure. The system is evaluated using tissue-like phantom experiments and an in vivo clinical experiment. Quantitative two-dimensional (2D) and three-dimensional (3D) images of absorption and reduced scattering coefficients are obtained from these experiments. The ten-wavelength spectra of the extracted reduced scattering coefficient enable quantitative morphological images to be reconstructed with this system. From the in vivo clinical experiment, functional images including deoxyhemoglobin, oxyhemoglobin, and water concentration are recovered and tumors are detected with correct size and position compared with mammography.

  8. Multi-Beam Approach for Accelerating Alignment and Calibration of HyspIRI-Like Imaging Spectrometers

    NASA Technical Reports Server (NTRS)

    Eastwood, Michael L.; Green, Robert O.; Mouroulis, Pantazis; Hochberg, Eric B.; Hein, Randall C.; Kroll, Linley A.; Geier, Sven; Coles, James B.; Meehan, Riley

    2012-01-01

    A paper describes an optical stimulus that produces more consistent results and can be automated for unattended, routine generation of the data analysis products needed by the integration and testing team assembling a high-fidelity imaging spectrometer system. One key attribute of the system is an arrangement of pick-off mirrors that provides multiple input beams (five in this implementation) to simultaneously provide stimulus light to several field angles along the field of view of the sensor under test, allowing one data set to contain all the information that previously required five data sets to be collected separately. This stimulus can also be fed by quickly reconfigured sources that ultimately provide three data set types that previously would have been collected separately using three different setups: the Spectral Response Function (SRF), the Cross-track Response Function (CRF), and the Along-track Response Function (ARF), respectively. This method also lends itself to expansion of the number of field points if less interpolation across the field of view is desirable. An absolute minimum of three is required at the beginning stages of imaging spectrometer alignment.

  9. MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images.

    PubMed

    Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained from using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.

  10. MEG Source Imaging Method using Fast L1 Minimum-norm and its Applications to Signals with Brain Noise and Human Resting-state Source Amplitude Images

    PubMed Central

    Huang, Ming-Xiong; Huang, Charles W.; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L.; Baker, Dewleen G.; Song, Tao; Harrington, Deborah L.; Theilmann, Rebecca J.; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M.; Edgar, J. Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T.; Drake, Angela; Lee, Roland R.

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained from using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. PMID:24055704

  11. Coherence switching of a vertical-cavity semiconductor-laser for multimode biomedical imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Knitter, Sebastian; Liu, Changgeng; Redding, Brandon; Khokha, Mustafa Kezar; Choma, Michael Andrew

    2017-02-01

    Speckle formation is a limiting factor when using coherent sources for imaging and sensing, but can provide useful information about the motion of an object. Illumination sources with tunable spatial coherence are therefore desirable as they can offer both speckled and speckle-free images. Efficient methods of coherence switching have been achieved with a solid-state degenerate laser, and here we demonstrate a semiconductor-based degenerate laser system that can be switched between a large number of mutually incoherent spatial modes and few-mode operation. Our system is designed around a semiconductor gain element, and overcomes barriers presented by previous low spatial coherence lasers. The gain medium is an electrically-pumped vertical external cavity surface emitting laser (VECSEL) with a large active area. The use of a degenerate external cavity enables either distributing the laser emission over a large (~1000) number of mutually incoherent spatial modes or concentrating emission to few modes by using a pinhole in the Fourier plane of the self-imaging cavity. To demonstrate the unique potential of spatial coherence switching for multimodal biomedical imaging, we use both low and high spatial coherence light generated by our VECSEL-based degenerate laser for imaging embryo heart function in Xenopus, an important animal model of heart disease. The low-coherence illumination is used for high-speed (100 frames per second) speckle-free imaging of dynamic heart structure, while the high-coherence emission is used for laser speckle contrast imaging of the blood flow.

  12. Deconvolution of the PSF of a seismic lens

    NASA Astrophysics Data System (ADS)

    Yu, Jianhua; Wang, Yue; Schuster, Gerard T.

    2002-12-01

    We show that if seismic data d are related to the migration image by m_mig = L^T d, then m_mig is a blurred version of the actual reflectivity distribution m, i.e., m_mig = (L^T L) m. Here L is the acoustic forward modeling operator under the Born approximation, where d = L m. The blurring operator (L^T L), or point spread function, distorts the image because of defects in the seismic lens, i.e., small source-receiver recording aperture and irregular/coarse geophone-source spacing. These distortions can be partly suppressed by applying the deblurring operator (L^T L)^-1 to the migration image to get m = (L^T L)^-1 m_mig. This deblurred image is known as a least squares migration (LSM) image if (L^T L)^-1 L^T is applied to the data d using a conjugate gradient method, and is known as a migration deconvolved (MD) image if (L^T L)^-1 is directly applied to the migration image m_mig in (k_x, k_y, z) space. The MD algorithm is an order of magnitude faster than LSM, but it employs more restrictive assumptions. We also show that deblurring can be used to filter out coherent noise in the data such as multiple reflections. The procedure is to, e.g., decompose the forward modeling operator into primary and multiple reflection operators, d = (L_prim + L_multi) m, invert for m, and find the primary reflection data by d_prim = L_prim m. This method is named least squares migration filtering (LSMF). The above three algorithms (LSM, MD and LSMF) might be useful for attacking problems in optical imaging.
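
    A small one-dimensional numerical sketch of the LSM idea, with a short convolution kernel standing in for the Born modeling operator L and SciPy's conjugate-gradient solver applied to the normal equations; it is a toy under these assumptions, not the authors' implementation.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      n = 200
      kernel = np.array([0.25, 0.5, 1.0, 0.5, 0.25])   # toy "seismic lens" blur

      def L_mv(m):     # forward modeling: d = L m
          return np.convolve(m, kernel, mode="same")

      def LT_mv(d):    # adjoint (migration): m_mig = L^T d; correlation equals convolution for a symmetric kernel
          return np.convolve(d, kernel[::-1], mode="same")

      m_true = np.zeros(n)
      m_true[60], m_true[140] = 1.0, -0.7              # sparse reflectivity
      d = L_mv(m_true)                                 # "recorded" data
      m_mig = LT_mv(d)                                 # blurred migration image

      # LSM: solve the normal equations (L^T L) m = m_mig with conjugate gradients
      normal_op = LinearOperator((n, n), matvec=lambda x: LT_mv(L_mv(x)))
      m_lsm, info = cg(normal_op, m_mig, maxiter=200)
      print(info, np.linalg.norm(m_lsm - m_true) / np.linalg.norm(m_true))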

  13. A Flexible Method for Producing F.E.M. Analysis of Bone Using Open-Source Software

    NASA Technical Reports Server (NTRS)

    Boppana, Abhishektha; Sefcik, Ryan; Meyers, Jerry G.; Lewandowski, Beth E.

    2016-01-01

    This project, performed in support of the NASA GRC Space Academy summer program, sought to develop an open-source workflow methodology that segmented medical image data, created a 3D model from the segmented data, and prepared the model for finite-element analysis. In an initial step, a technological survey evaluated the performance of various existing open-source software packages that claim to perform these tasks. However, the survey concluded that no single package exhibited the wide array of functionality required for the potential NASA application in the area of bone, muscle and biofluidic studies. As a result, a series of Python scripts was developed to bridge the shortcomings of the available open-source tools. The VTK library provided the quickest and most effective means of segmenting regions of interest from the medical images; it allowed for the export of a 3D model by using the marching cubes algorithm to build a surface mesh. Developing the model domain from this extracted information required the surface mesh to be processed in the open-source software packages Blender and Gmsh. The Preview program of the FEBio suite proved sufficient for volume-filling the model with an unstructured mesh and preparing boundary specifications for finite-element analysis. To fully enable FEM modeling, an in-house Python script allowed assignment of material properties on an element-by-element basis by performing a weighted interpolation of the voxel intensities of the parent medical image, correlated to published relationships between image intensity and material properties such as ash density. A graphical user interface combined the Python scripts and other software into a user-friendly interface. This work using Python scripts provides a potential alternative to expensive commercial software and inadequate, limited open-source freeware programs for the creation of 3D computational models. More work will be needed to validate this approach in creating finite-element models.
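
    A minimal sketch, assuming hypothetical file names and an assumed iso-value, of the kind of VTK marching-cubes pipeline described above for turning a medical image volume into a surface mesh ready for downstream Blender/Gmsh/FEBio processing.

      import vtk

      reader = vtk.vtkNIFTIImageReader()
      reader.SetFileName("ct_scan.nii")               # placeholder input volume

      iso = vtk.vtkMarchingCubes()                    # extract an isosurface from the volume
      iso.SetInputConnection(reader.GetOutputPort())
      iso.SetValue(0, 400.0)                          # assumed bone-like iso-value
      iso.Update()

      smoother = vtk.vtkWindowedSincPolyDataFilter()  # light smoothing before export
      smoother.SetInputConnection(iso.GetOutputPort())
      smoother.SetNumberOfIterations(15)
      smoother.Update()

      writer = vtk.vtkSTLWriter()                     # export the surface mesh for later meshing
      writer.SetInputConnection(smoother.GetOutputPort())
      writer.SetFileName("bone_surface.stl")
      writer.Write()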

  14. Evaluation of image quality in terahertz pulsed imaging using test objects.

    PubMed

    Fitzgerald, A J; Berry, E; Miles, R E; Zinovev, N N; Smith, M A; Chamberlain, J M

    2002-11-07

    As with other imaging modalities, the performance of terahertz (THz) imaging systems is limited by factors of spatial resolution, contrast and noise. The purpose of this paper is to introduce test objects and image analysis methods to evaluate and compare THz image quality in a quantitative and objective way, so that alternative terahertz imaging system configurations and acquisition techniques can be compared, and the range of image parameters can be assessed. Two test objects were designed and manufactured, one to determine the modulation transfer functions (MTF) and the other to derive image signal-to-noise ratio (SNR) at a range of contrasts. As expected, the higher THz frequencies had larger MTFs, and better spatial resolution as determined by the spatial frequency at which the MTF dropped below the 20% threshold. Image SNR was compared for time-domain and frequency-domain image parameters, and time-delay-based images consistently demonstrated higher SNR than intensity-based parameters such as relative transmittance because the latter are more strongly affected by the sources of noise in the THz system, such as laser fluctuations and detector shot noise.
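
    A short sketch of deriving an MTF from a measured line spread function and reading off the spatial frequency at the 20% threshold used above; the Gaussian LSF and sampling interval are assumed toy values.

      import numpy as np

      def mtf_from_lsf(lsf, dx):
          # MTF = |FFT of the line spread function|, normalized to unity at zero frequency
          spectrum = np.abs(np.fft.rfft(lsf))
          freqs = np.fft.rfftfreq(len(lsf), d=dx)
          return freqs, spectrum / spectrum[0]

      def frequency_at_threshold(freqs, mtf, threshold=0.2):
          # first spatial frequency at which the MTF drops below the threshold
          below = np.where(mtf < threshold)[0]
          return freqs[below[0]] if below.size else freqs[-1]

      x = np.arange(-5.0, 5.0, 0.1)                    # mm, assumed sampling
      lsf = np.exp(-x**2 / (2.0 * 0.4**2))             # toy Gaussian LSF
      f, mtf = mtf_from_lsf(lsf, dx=0.1)
      print(frequency_at_threshold(f, mtf))            # cycles/mm at the 20% point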

  15. Muscle-Tendon-Enthesis Unit.

    PubMed

    Tadros, Anthony S; Huang, Brady K; Pathria, Mini N

    2018-07-01

    Injuries to the muscle-tendon-enthesis unit are common and a significant source of pain and loss of function. This article focuses on the important anatomical and biomechanical considerations for each component of the muscle-tendon-enthesis unit. We review normal and pathologic conditions affecting this unit, illustrating the imaging appearance of common disorders on magnetic resonance imaging and ultrasound. Knowledge of the anatomy and biomechanics of these structures is crucial for the radiologist to make accurate diagnoses and provide clinically relevant assessments.

  16. Ultrahigh-speed non-invasive widefield angiography

    NASA Astrophysics Data System (ADS)

    Blatter, Cedric; Klein, Thomas; Grajciar, Branislav; Schmoll, Tilman; Wieser, Wolfgang; Andre, Raphael; Huber, Robert; Leitgeb, Rainer A.

    2012-07-01

    Retinal and choroidal vascular imaging provides an important diagnostic benefit for ocular diseases such as age-related macular degeneration. The current gold standard for vessel visualization is fluorescence angiography. We present a potential non-invasive alternative for imaging blood vessels based on functional Fourier domain optical coherence tomography (OCT). For OCT to compete with the field of view and resolution of angiography while keeping motion artifacts to a minimum, ultrahigh-speed imaging has to be introduced. We employ Fourier domain mode-locked swept-source technology that offers high-quality imaging at an A-scan rate of up to 1.68 MHz. We present a retinal angiogram over ~48 deg acquired in a few seconds in a single recording, without the need for image stitching. OCT at 1060 nm allows for high penetration into the choroid and efficient separate characterization of the retinal and choroidal vascularization.

  17. Time-frequency analysis of functional optical mammographic images

    NASA Astrophysics Data System (ADS)

    Barbour, Randall L.; Graber, Harry L.; Schmitz, Christoph H.; Tarantini, Frank; Khoury, Georges; Naar, David J.; Panetta, Thomas F.; Lewis, Theophilus; Pei, Yaling

    2003-07-01

    We have introduced working technology that provides for time-series imaging of the hemoglobin signal in large tissue structures. In this study we have explored our ability to detect aberrant time-frequency responses of breast vasculature for subjects with Stage II breast cancer at rest and in response to simple provocations. The hypothesis being explored is that time-series imaging will be sensitive to the known structural and functional malformations of the tumor vasculature. Mammographic studies were conducted using an adjustable hemispheric measuring head containing 21 source and 21 detector locations (441 source-detector pairs). Simultaneous dual-wavelength studies were performed at 760 and 830 nm at a framing rate of ~2.7 Hz. Optical measures were performed on women lying prone with the breast hanging in a pendant position. Two classes of measures were performed: (1) a 20-minute baseline measure wherein the subject was at rest; (2) provocation studies wherein the subject was asked to perform simple breathing maneuvers. Collected data were analyzed to identify the time-frequency structure and central tendencies of the detector responses and those of the image time series. Imaging data were generated using the Normalized Difference Method (Pei et al., Appl. Opt. 40, 5755-5769, 2001). The results obtained clearly document three classes of anomalies when compared to the normal contralateral breast. 1) Breast tumors exhibit an altered oxygen supply/demand balance in response to an oxidative challenge (breath hold). 2) The vasomotor response of the tumor vasculature is mainly depressed and exhibits an altered modulation. 3) The affected area of the breast wherein the altered vasomotor signature is seen extends well beyond the limits of the tumor itself.

  18. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction.

    PubMed

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2015-01-07

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners: the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage in performing crystal-layer-dependent resolution modeling. The contrast improvement by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [(11)C]AFM rats imaged on the HRRT and [(11)C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstruction than in other methods.

  19. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction

    PubMed Central

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2016-01-01

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners: the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage in performing crystal-layer-dependent resolution modeling. The contrast improvement by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstruction than in other methods. PMID:25490063

  20. The Effect of Magnetic Field on Positron Range and Spatial Resolution in an Integrated Whole-Body Time-Of-Flight PET/MRI System.

    PubMed

    Huang, Shih-Ying; Savic, Dragana; Yang, Jaewon; Shrestha, Uttam; Seo, Youngho

    2014-11-01

    Simultaneous imaging systems combining positron emission tomography (PET) and magnetic resonance imaging (MRI) have been actively investigated. A PET/MR imaging system (GE Healthcare) comprising a time-of-flight (TOF) PET system utilizing silicon photomultipliers (SiPMs) and a 3-tesla (3T) MRI was recently installed at our institution. The small-ring (60 cm diameter) TOF PET subsystem of this PET/MRI system can generate images with higher spatial resolution compared with conventional PET systems. We have examined theoretically and experimentally the effect of uniform magnetic fields on the spatial resolution for high-energy positron emitters. Positron emitters including 18F, 124I, and 68Ga were simulated in water using the Geant4 Monte Carlo toolkit in the presence of a uniform magnetic field (0, 3, and 7 Tesla). The positron annihilation position was tracked to determine the 3D spatial distribution of the 511-keV gamma-ray emission. The full-width at tenth maximum (FWTM) of the positron point spread function (PSF) was determined. Experimentally, 18F and 68Ga line source phantoms in air and water were imaged with an investigational PET/MRI system and a PET/CT system to investigate the effect of the magnetic field on the spatial resolution of PET. The full-width at half maximum (FWHM) of the line spread function (LSF) from the line source was taken as the system spatial resolution. Simulations and experimental results show that the in-plane spatial resolution was slightly improved at field strengths as low as 3 Tesla, especially when resolving signal from high-energy positron emitters at the air-tissue boundary.

  1. Ambient seismic noise interferometry in Hawai'i reveals long-range observability of volcanic tremor

    USGS Publications Warehouse

    Ballmer, Silke; Wolfe, Cecily; Okubo, Paul G.; Haney, Matt; Thurber, Clifford H.

    2013-01-01

    The use of seismic noise interferometry to retrieve Green's functions and the analysis of volcanic tremor are both useful in studying volcano dynamics. Whereas seismic noise interferometry allows long-range extraction of interpretable signals from a relatively weak noise wavefield, the characterization of volcanic tremor often requires a dense seismic array close to the source. We here show that standard processing of seismic noise interferometry yields volcanic tremor signals observable over large distances exceeding 50 km. Our study comprises 2.5 yr of data from the U.S. Geological Survey Hawaiian Volcano Observatory short period seismic network. Examining more than 700 station pairs, we find anomalous and temporally coherent signals that obscure the Green's functions. The time windows and frequency bands of these anomalous signals correspond well with the characteristics of previously studied volcanic tremor sources at Pu'u 'Ō'ō and Halema'uma'u craters. We use the derived noise cross-correlation functions to perform a grid-search for source location, confirming that these signals are surface waves originating from the known tremor sources. A grid-search with only distant stations verifies that useful tremor signals can indeed be recovered far from the source. Our results suggest that the specific data processing in seismic noise interferometry—typically used for Green's function retrieval—can aid in the study of both the wavefield and source location of volcanic tremor over large distances. In view of using the derived Green's functions to image heterogeneity and study temporal velocity changes at volcanic regions, however, our results illustrate how care should be taken when contamination by tremor may be present.
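
    The two ingredients described above, noise cross-correlation and a grid search for the tremor source, can be sketched generically as follows. This is a hedged illustration assuming a constant surface-wave velocity and 2D station/grid coordinates; it is not the authors' processing code, and all names are hypothetical.

    ```python
    import numpy as np

    def cross_correlate(a, b):
        """Cross-correlation of two equal-length traces via FFT, lags -(n-1)..+(n-1)."""
        n = len(a)
        nfft = 2 * n - 1
        cc = np.fft.irfft(np.fft.rfft(a, nfft) * np.conj(np.fft.rfft(b, nfft)), nfft)
        return np.roll(cc, n - 1)

    def grid_search_source(cc_funcs, pairs, stations, grid, velocity, dt):
        """Stack |CCF| amplitude at the predicted differential traveltime of each
        trial source location; the best location maximizes the stacked power."""
        n = cc_funcs[0].size
        zero_lag = (n - 1) // 2
        power = np.zeros(len(grid))
        for g, (gx, gy) in enumerate(grid):
            for cc, (i, j) in zip(cc_funcs, pairs):
                ti = np.hypot(gx - stations[i][0], gy - stations[i][1]) / velocity
                tj = np.hypot(gx - stations[j][0], gy - stations[j][1]) / velocity
                lag = int(round((ti - tj) / dt)) + zero_lag
                if 0 <= lag < n:
                    power[g] += abs(cc[lag])
        return grid[int(np.argmax(power))]
    ```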

  2. Time-reversal in geophysics: the key for imaging a seismic source, generating a virtual source or imaging with no source (Invited)

    NASA Astrophysics Data System (ADS)

    Tourin, A.; Fink, M.

    2010-12-01

    The concept of time-reversal (TR) focusing was introduced in acoustics by Mathias Fink in the early nineties: a pulsed wave is sent from a source, propagates in an unknown medium, and is captured at a transducer array termed a “Time Reversal Mirror (TRM)”. Then the waveforms received at each transducer are flipped in time and sent back, resulting in a wave converging at the original source regardless of the complexity of the propagation medium. TRMs have now been implemented in a variety of physical scenarios from GHz microwaves to MHz ultrasonics and to hundreds of Hz in ocean acoustics. Common to this broad range of scales is a remarkable robustness exemplified by observations that the more complex the medium (random or chaotic), the sharper the focus. A TRM acts as an antenna that uses complex environments to appear wider than it is, resulting, for a broadband pulse, in a refocusing quality that does not depend on the TRM aperture. We show that the time-reversal concept is also at the heart of very active research fields in seismology and applied geophysics: imaging of seismic sources, passive imaging based on noise correlations, seismic interferometry, and monitoring of CO2 storage using the virtual source method. All these methods can indeed be viewed in a unified framework as an application of the so-called time-reversal cavity approach. That approach uses the fact that a wave field can be predicted at any location inside a volume (without source) from the knowledge of both the field and its normal derivative on the surrounding surface S, which for acoustic scalar waves is mathematically expressed in the Helmholtz-Kirchhoff (HK) integral. Thus in the first step of an ideal TR process, the field coming from a point-like source as well as its normal derivative should be measured on S. In a second step, the initial source is removed and monopole and dipole sources reemit the time reversal of the components measured in the first step. Instead of directly computing the resulting HK integral along S, physical arguments can be used to straightforwardly predict that the time-reversed field in the cavity can be written as the difference of advanced and retarded Green’s functions centred on the initial source position. This result is in some way disappointing because it means that reversing a field using a closed TRM is not enough to realize a perfect time-reversal experiment. In practical applications, the converging wave is always followed by a diverging one (see figure). However, we will show that this result is of great importance since it furnishes the basis for imaging methods in media with no active source. We will focus especially on the virtual source method, showing that it can be used for implementing the DORT method (Decomposition of the time reversal operator) in a passive way. The passive DORT method could be interesting for monitoring changes in a complex scattering medium, for example in the context of CO2 storage. We conclude with time-reversal imaging applied to the giant Sumatra earthquake.

  3. BOLDSync: a MATLAB-based toolbox for synchronized stimulus presentation in functional MRI.

    PubMed

    Joshi, Jitesh; Saharan, Sumiti; Mandal, Pravat K

    2014-02-15

    Precise and synchronized presentation of paradigm stimuli in functional magnetic resonance imaging (fMRI) is central to obtaining accurate information about brain regions involved in a specific task. In this manuscript, we present a new MATLAB-based toolbox, BOLDSync, for synchronized stimulus presentation in fMRI. BOLDSync provides a user-friendly platform for the design and presentation of visual, audio, as well as multimodal audio-visual (AV) stimuli in functional imaging experiments. We present simulation experiments that demonstrate the millisecond synchronization accuracy of BOLDSync, and also illustrate the functionalities of BOLDSync through application to an AV fMRI study. BOLDSync gains an advantage over other available proprietary and open-source toolboxes by offering a user-friendly and accessible interface that affords both precision in stimulus presentation and versatility across various types of stimulus designs and system setups. BOLDSync is a reliable, efficient, and versatile solution for synchronized stimulus presentation in fMRI studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. An electronic pan/tilt/zoom camera system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steve; Martin, H. Lee

    1991-01-01

    A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the distorted image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan, tilt, and rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
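
    A hedged sketch of the kind of dewarping transformation described above, assuming an equidistant fisheye projection (the actual lens model and hardware implementation are not specified in the abstract); every name and parameter here is illustrative.

    ```python
    import numpy as np

    def fisheye_lookup(pan, tilt, zoom, out_shape, fisheye_radius, fisheye_center):
        """For each pixel of a virtual perspective view (pan/tilt in radians,
        zoom = focal length in pixels), return the (u, v) sample positions in an
        equidistant fisheye image covering a hemispherical FOV."""
        h, w = out_shape
        xx, yy = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
        # ray direction of each output pixel in camera coordinates (z = optical axis)
        rays = np.stack([xx, yy, np.full_like(xx, float(zoom))], axis=-1)
        rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
        # rotate rays by pan (about y) then tilt (about x) to aim the virtual camera
        cp, sp, ct, st = np.cos(pan), np.sin(pan), np.cos(tilt), np.sin(tilt)
        R = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]) @ \
            np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
        rays = rays @ R.T
        # equidistant fisheye model: image radius proportional to angle from axis
        theta = np.arccos(np.clip(rays[..., 2], -1, 1))
        phi = np.arctan2(rays[..., 1], rays[..., 0])
        r = fisheye_radius * theta / (np.pi / 2)
        u = fisheye_center[0] + r * np.cos(phi)
        v = fisheye_center[1] + r * np.sin(phi)
        return u, v   # feed to e.g. scipy.ndimage.map_coordinates for resampling
    ```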

  5. Ultrafast Synthetic Transmit Aperture Imaging Using Hadamard-Encoded Virtual Sources With Overlapping Sub-Apertures.

    PubMed

    Ping Gong; Pengfei Song; Shigao Chen

    2017-06-01

    The development of ultrafast ultrasound imaging offers great opportunities to improve imaging technologies such as shear wave elastography and ultrafast Doppler imaging. In ultrafast imaging, there are tradeoffs among image signal-to-noise ratio (SNR), resolution, and post-compounded frame rate. Various approaches have been proposed to address this tradeoff, such as multiplane wave imaging or attempts to implement synthetic transmit aperture imaging. In this paper, we propose an ultrafast synthetic transmit aperture (USTA) imaging technique using Hadamard-encoded virtual sources with overlapping sub-apertures to enhance both image SNR and resolution without sacrificing frame rate. This method includes three steps: 1) create virtual sources using sub-apertures; 2) encode virtual sources using a Hadamard matrix; and 3) add short time intervals (a few microseconds) between transmissions of different virtual sources to allow overlapping sub-apertures. The USTA was tested experimentally with a point target, a B-mode phantom, and in vivo human kidney micro-vessel imaging. Compared with standard coherent diverging wave compounding at the same frame rate, improvements in image SNR, lateral resolution (+33%, with B-mode phantom imaging), and contrast ratio (+3.8 dB, with in vivo human kidney micro-vessel imaging) have been achieved. The f-number of the virtual sources, the number of virtual sources used, and the number of elements used in each sub-aperture can be flexibly adjusted to enhance resolution and SNR. This allows very flexible optimization of USTA for different applications.
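
    Step 2 of the method (Hadamard encoding of the virtual-source transmissions) and its matching decode can be sketched as below. This is a generic illustration assuming a power-of-two number of virtual sources and an ideal linear system; it is not the authors' implementation, and the function names are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    def hadamard_encode(source_waveforms):
        """Encode N virtual-source firings (N a power of two) into N simultaneous
        transmissions: row k of the output is sum_i H[k, i] * waveform_i."""
        n = len(source_waveforms)
        H = hadamard(n)                      # entries are +1 / -1
        return H @ np.asarray(source_waveforms)

    def hadamard_decode(received):
        """Recover the per-virtual-source receive data: since H.T @ H = N * I,
        applying H.T / N to the encoded acquisitions separates the sources."""
        n = received.shape[0]
        H = hadamard(n)
        return (H.T @ received) / n
    ```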

  6. Imaging episodic memory: implications for cognitive theories and phenomena.

    PubMed

    Nyberg, L

    1999-01-01

    Functional neuroimaging studies are beginning to identify neuroanatomical correlates of various cognitive functions. This paper presents results relevant to several theories and phenomena of episodic memory, including component processes of episodic retrieval, encoding specificity, inhibition, item versus source memory, encoding-retrieval overlap, and the picture-superiority effect. Overall, by revealing specific activation patterns, the results provide support for existing theoretical views and they add some unique information which may be important to consider in future attempts to develop cognitive theories of episodic memory.

  7. Ponderomotive phase plate for transmission electron microscopes

    DOEpatents

    Reed, Bryan W [Livermore, CA

    2012-07-10

    A ponderomotive phase plate system and method for controllably producing highly tunable phase contrast transfer functions in a transmission electron microscope (TEM) for high resolution and biological phase contrast imaging. The system and method includes a laser source and a beam transport system to produce a focused laser crossover as a phase plate, so that a ponderomotive potential of the focused laser crossover produces a scattering-angle-dependent phase shift in the electrons of the post-sample electron beam corresponding to a desired phase contrast transfer function.

  8. Rotating Modulation Imager for the Orphan Source Search Problem

    DTIC Science & Technology

    2008-01-01

    black mask. If the photon hits an open element it is transmitted and the function M(x) = 1. If the photon hits a closed mask element it is not ... photon enters the top mask pair in the third slit, but passes through the second slit on the bottom mask. With a single black mask this is physically ... modulation efficiency changes as a function of mask thickness for both tungsten and lead masks. The black line shows how the field of view changes with

  9. Towards Observational Astronomy of Jets in Active Galaxies from General Relativistic Magnetohydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Anantua, Richard; Blandford, Roger; McKinney, Jonathan; Tchekhovskoy, Alexander

    2016-01-01

    We carry out the process of "observing" simulations of active galactic nuclei (AGN) with relativistic jets (hereafter called jet/accretion disk/black hole (JAB) systems) from ray tracing between image plane and source to convolving the resulting images with a point spread function. Images are generated at arbitrary observer angle relative to the black hole spin axis by implementing spatial and temporal interpolation of conserved magnetohydrodynamic flow quantities from a time series of output datablocks from fully general relativistic 3D simulations. We also describe the evolution of simulations of JAB systems' dynamical and kinematic variables, e.g., velocity shear and momentum density, respectively, and the variation of these variables with respect to observer polar and azimuthal angles. We produce, at frequencies from radio to optical, fixed observer time intensity and polarization maps using various plasma physics motivated prescriptions for the emissivity function of physical quantities from the simulation output, and analyze the corresponding light curves. Our hypothesis is that this approach reproduces observed features of JAB systems such as superluminal bulk flow projections and quasi-periodic oscillations in the light curves more closely than extant stylized analytical models, e.g., cannonball bulk flows. Moreover, our development of user-friendly, versatile C++ routines for processing images of state-of-the-art simulations of JAB systems may afford greater flexibility for observing a wide range of sources from high power BL-Lacs to low power quasars (possibly with the same simulation) without requiring years of observation using multiple telescopes. Advantages of observing simulations instead of observing astrophysical sources directly include: the absence of a diffraction limit, panoramic views of the same object and the ability to freely track features. Light travel time effects become significant for high Lorentz factor and small angles between observer direction and incident light rays; this regime is relevant for the study of AGN blazars in JAB simulations.
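
    The final "observing" step mentioned above, convolving a simulated intensity map with a point spread function, can be sketched generically as follows. The authors' routines are in C++; this Python sketch with a simple Gaussian PSF is an assumption made purely for illustration.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def convolve_with_gaussian_psf(image, fwhm_px):
        """Blur a simulated intensity map with an isotropic Gaussian PSF of the
        given FWHM (in pixels), standing in for the instrument beam."""
        sigma = fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        half = int(np.ceil(4 * sigma))
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        psf /= psf.sum()
        return fftconvolve(image, psf, mode="same")
    ```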

  10. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants.

    PubMed

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas

    2017-01-01

    Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.

  11. Multiple image x-radiography for functional lung imaging

    NASA Astrophysics Data System (ADS)

    Aulakh, G. K.; Mann, A.; Belev, G.; Wiebe, S.; Kuebler, W. M.; Singh, B.; Chapman, D.

    2018-01-01

    Detection and visualization of lung tissue structures is impaired by predominance of air. However, by using synchrotron x-rays, refraction of x-rays at the interface of tissue and air can be utilized to generate contrast which may in turn enable quantification of lung optical properties. We utilized multiple image radiography, a variant of diffraction enhanced imaging, at the Canadian light source to quantify changes in unique x-ray optical properties of lungs, namely attenuation, refraction and ultra small-angle scatter (USAXS or width) contrast ratios as a function of lung orientation in free-breathing or respiratory-gated mice before and after intra-nasal bacterial endotoxin (lipopolysaccharide) instillation. The lung ultra small-angle scatter and attenuation contrast ratios were significantly higher 9 h post lipopolysaccharide instillation compared to saline treatment whereas the refraction contrast decreased in magnitude. In ventilated mice, end-expiratory pressures result in an increase in ultra small-angle scatter contrast ratio when compared to end-inspiratory pressures. There were no detectable changes in lung attenuation or refraction contrast ratio with change in lung pressure alone. In effect, multiple image radiography can be applied towards following optical properties of lung air-tissue barrier over time during pathologies such as acute lung injury.

  12. Gnuastro: GNU Astronomy Utilities

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Mohammad

    2018-01-01

    Gnuastro (GNU Astronomy Utilities) manipulates and analyzes astronomical data. It is an official GNU package comprising a large collection of programs and C/C++ library functions. Command-line programs perform arithmetic operations on images, convert FITS images to common types like JPG or PDF, convolve an image with a given kernel or match kernels, perform cosmological calculations, crop parts of large images (possibly in multiple files), manipulate FITS extensions and keywords, and perform statistical operations. In addition, it contains programs to make catalogs from detection maps, add noise, make mock profiles with a variety of radial functions using Monte Carlo integration for their centers, match catalogs, and detect objects in an image, among many other operations. The command-line programs share the same basic command-line user interface for the comfort of both users and developers. Gnuastro is written to comply fully with the GNU coding standards and integrates well with all Unix-like operating systems. This enables astronomers to expect a fully familiar experience in the source code, building, installing, and command-line user interaction that they have seen in all the other GNU software that they use. Gnuastro's extensive library is included for users who want to build their own unique programs.

  13. A multi-satellite analysis of the direct radiative effects of absorbing aerosols above clouds

    NASA Astrophysics Data System (ADS)

    Chang, Y. Y.; Christopher, S. A.

    2015-12-01

    Radiative effects of absorbing aerosols above liquid water clouds in the southeast Atlantic as a function of fire sources are investigated using A-Train data coupled with the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (Suomi NPP). Both the VIIRS Active Fire product and the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Anomalies product (MYD14) are used to identify the biomass burning fire origin in southern Africa. The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) is used to assess the aerosol type, aerosol altitude, and cloud altitude. We use back trajectory information, wind data, and the Fire Locating and Modeling of Burning Emissions (FLAMBE) product to infer the transport of aerosols from the fire source to the CALIOP swath in the southeast Atlantic during austral winter.

  14. Polyplanar optical display

    NASA Astrophysics Data System (ADS)

    Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard

    1997-07-01

    The polyplanar optical display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid-state laser as its optical source. In order to produce real-time video, the laser light is being modulated by a digital light processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the electronic interfacing to the DLP chip, the opto-mechanical design and viewing angle characteristics.

  15. Laser-driven polyplanar optic display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veligdan, J.T.; Biscardi, C.; Brewster, C.

    1998-01-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the DLP chip, the optomechanical design and viewing angle characteristics.

  16. Laser-driven polyplanar optic display

    NASA Astrophysics Data System (ADS)

    Veligdan, James T.; Beiser, Leo; Biscardi, Cyrus; Brewster, Calvin; DeSanto, Leonard

    1998-05-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, we discuss the DLP™ chip, the opto-mechanical design and viewing angle characteristics.

  17. 3T MRI and 128-slice dual-source CT cisternography images of the cranial nerves a brief pictorial review for clinicians.

    PubMed

    Roldan-Valadez, Ernesto; Martinez-Anda, Jaime J; Corona-Cedillo, Roberto

    2014-01-01

    There is a broad community of health sciences professionals interested in the anatomy of the cranial nerves (CNs): specialists in neurology, neurosurgery, radiology, otolaryngology, ophthalmology, maxillofacial surgery, radiation oncology, and emergency medicine, as well as other related fields. Advances in neuroimaging using high-resolution images from computed tomography (CT) and magnetic resonance (MR) have made highly-detailed visualization of brain structures possible, allowing normal findings to be routinely assessed and nervous system pathology to be detected. In this article we present an integrated perspective of the normal anatomy of the CNs established by radiologists and neurosurgeons in order to provide a practical imaging review, which combines 128-slice dual-source multiplanar images from CT cisternography and 3T MR curved reconstructed images. The information about the CNs includes their origin, course (with emphasis on the cisternal segments and location of the orifices at the skull base transmitting them), function, and a brief listing of the most common pathologies affecting them. The scope of the article is clinical anatomy; readers will find specialized texts presenting detailed information about particular topics. Our aim in this article is to provide a helpful reference for understanding the complex anatomy of the cranial nerves. Copyright © 2013 Wiley Periodicals, Inc.

  18. An open source, wireless capable miniature microscope system

    NASA Astrophysics Data System (ADS)

    Liberti, William A., III; Perkins, L. Nathan; Leman, Daniel P.; Gardner, Timothy J.

    2017-08-01

    Objective. Fluorescence imaging through head-mounted microscopes in freely behaving animals is becoming a standard method to study neural circuit function. Flexible, open-source designs are needed to spur evolution of the method. Approach. We describe a miniature microscope for single-photon fluorescence imaging in freely behaving animals. The device is made from 3D printed parts and off-the-shelf components. These microscopes weigh less than 1.8 g, can be configured to image a variety of fluorophores, and can be used wirelessly or in conjunction with active commutators. Microscope control software, based in Swift for macOS, provides low-latency image processing capabilities for closed-loop or brain-machine interface (BMI) experiments. Main results. Miniature microscopes were deployed in the songbird premotor region HVC (used as a proper name) in singing zebra finches. Individual neurons yield temporally precise patterns of calcium activity that are consistent over repeated renditions of song. Several cells were tracked over timescales of weeks and months, providing an opportunity to study learning-related changes in HVC. Significance. 3D printed miniature microscopes, composed entirely of consumer-grade components, are a cost-effective, modular option for head-mounted imaging. These easily constructed and customizable tools provide access to cell-type specific neural ensembles over timescales of weeks.

  19. Development of a digital astronomical intensity interferometer: laboratory results with thermal light

    NASA Astrophysics Data System (ADS)

    Matthews, Nolan; Kieda, David; LeBohec, Stephan

    2018-06-01

    We present measurements of the second-order spatial coherence function of thermal light sources using Hanbury-Brown and Twiss interferometry with a digital correlator. We demonstrate that intensity fluctuations between orthogonal polarizations, or at detector separations greater than the spatial coherence length of the source, are uncorrelated but can be used to reduce systematic noise. The work performed here can readily be applied to existing and future Imaging Air-Cherenkov Telescopes used as star light collectors for stellar intensity interferometry to measure spatial properties of astronomical objects.
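
    A minimal sketch of the digital-correlator measurement described above: estimating the normalized second-order correlation g2(tau) from two digitized intensity streams. The implementation details (sampling, normalization, naming) are assumptions, not the authors' instrument code.

    ```python
    import numpy as np

    def g2(i1, i2, max_lag):
        """Normalized second-order correlation g2(tau) = <I1(t) I2(t+tau)> / (<I1><I2>)
        for integer lags -max_lag..+max_lag between two intensity time series."""
        i1 = np.asarray(i1, float)
        i2 = np.asarray(i2, float)
        norm = i1.mean() * i2.mean()
        lags = np.arange(-max_lag, max_lag + 1)
        out = np.empty(lags.size)
        for k, lag in enumerate(lags):
            if lag >= 0:
                out[k] = np.mean(i1[:len(i1) - lag] * i2[lag:])
            else:
                out[k] = np.mean(i1[-lag:] * i2[:len(i2) + lag])
        return lags, out / norm

    # The zero-lag excess g2(0) - 1 carries the spatial-coherence information;
    # correlating orthogonal polarizations gives an uncorrelated reference that
    # can be subtracted to reduce systematic noise, as described above.
    ```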

  20. Time Delay and Accretion Disk Size Measurements in the Lensed Quasar SBS 0909+532 from Multiwavelength Microlensing Analysis

    DTIC Science & Technology

    2013-09-01

    of the cosmic microwave background dipole velocity onto the lens plane, as done by Kochanek (2004). We compare the simulated light curves to the ... observer, the background source, the foreground lens galaxy, and its stars cause uncorrelated variations in the source magnification as a function of ... hereafter SBS 0909; αJ2000 = 09h13m01.s05, δJ2000 = +52d59m28.s83) is a doubly-imaged quasar lens system in which the background quasar has redshift

  1. Class of near-perfect coded apertures

    NASA Technical Reports Server (NTRS)

    Cannon, T. M.; Fenimore, E. E.

    1977-01-01

    Coded aperture imaging of gamma ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for either one of two reasons. First, the encoding/decoding method produces artifacts, which even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by use of a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method.
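
    A hedged sketch of the correlation-type decoding discussed above, using the balanced decoding array commonly paired with a URA. The exact decoding array and boundary handling used by the authors are not specified in the abstract, so both are assumptions here.

    ```python
    import numpy as np
    from scipy.signal import correlate2d

    def ura_decode(detector_image, aperture):
        """Correlation decoding for a coded-aperture image. `aperture` is the
        binary (0/1) mask pattern; the balanced decoding array G = 2*A - 1 makes
        the cyclic URA correlation approach a delta-function system response."""
        G = 2.0 * np.asarray(aperture, dtype=float) - 1.0
        return correlate2d(detector_image, G, mode="same", boundary="wrap")
    ```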

  2. Calibration of BAS-TR image plate response to high energy (3-300 MeV) carbon ions

    NASA Astrophysics Data System (ADS)

    Doria, D.; Kar, S.; Ahmed, H.; Alejo, A.; Fernandez, J.; Cerchez, M.; Gray, R. J.; Hanton, F.; MacLellan, D. A.; McKenna, P.; Najmudin, Z.; Neely, D.; Romagnani, L.; Ruiz, J. A.; Sarri, G.; Scullion, C.; Streeter, M.; Swantusch, M.; Willi, O.; Zepf, M.; Borghesi, M.

    2015-12-01

    The paper presents the calibration of Fuji BAS-TR image plate (IP) response to high energy carbon ions of different charge states by employing an intense laser-driven ion source, which allowed access to carbon energies up to 270 MeV. The calibration method consists of employing a Thomson parabola spectrometer to separate and spectrally resolve different ion species, and a slotted CR-39 solid state detector overlayed onto an image plate for an absolute calibration of the IP signal. An empirical response function was obtained which can be reasonably extrapolated to higher ion energies. The experimental data also show that the IP response is independent of ion charge states.

  3. Calibration of BAS-TR image plate response to high energy (3-300 MeV) carbon ions.

    PubMed

    Doria, D; Kar, S; Ahmed, H; Alejo, A; Fernandez, J; Cerchez, M; Gray, R J; Hanton, F; MacLellan, D A; McKenna, P; Najmudin, Z; Neely, D; Romagnani, L; Ruiz, J A; Sarri, G; Scullion, C; Streeter, M; Swantusch, M; Willi, O; Zepf, M; Borghesi, M

    2015-12-01

    The paper presents the calibration of Fuji BAS-TR image plate (IP) response to high energy carbon ions of different charge states by employing an intense laser-driven ion source, which allowed access to carbon energies up to 270 MeV. The calibration method consists of employing a Thomson parabola spectrometer to separate and spectrally resolve different ion species, and a slotted CR-39 solid state detector overlayed onto an image plate for an absolute calibration of the IP signal. An empirical response function was obtained which can be reasonably extrapolated to higher ion energies. The experimental data also show that the IP response is independent of ion charge states.

  4. An iterative method for near-field Fresnel region polychromatic phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2017-07-01

    We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.

  5. Calibration of scintillation-light filters for neutron time-of-flight spectrometers at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayre, D. B., E-mail: sayre4@llnl.gov; Barbosa, F.; Caggiano, J. A.

    Sixty-four neutral density filters constructed of metal plates with 88 apertures of varying diameter have been radiographed with a soft x-ray source and CCD camera at National Security Technologies, Livermore. An analysis of the radiographs fits the radial dependence of the apertures’ image intensities to sigmoid functions, which can describe the rapidly decreasing intensity towards the apertures’ edges. The fitted image intensities determine the relative attenuation value of each filter. Absolute attenuation values of several imaged filters, measured in situ during calibration experiments, normalize the relative quantities which are now used in analyses of neutron spectrometer data at the National Ignition Facility.

  6. Calibration of scintillation-light filters for neutron time-of-flight spectrometers at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayre, D. B.; Barbosa, F.; Caggiano, J. A.

    Sixty-four neutral density filters constructed of metal plates with 88 apertures of varying diameter have been radiographed with a soft x-ray source and CCD camera at National Security Technologies, Livermore. An analysis of the radiographs fits the radial dependence of the apertures’ image intensities to sigmoid functions, which can describe the rapidly decreasing intensity towards the apertures’ edges. Here, the fitted image intensities determine the relative attenuation value of each filter. Absolute attenuation values of several imaged filters, measured in situ during calibration experiments, normalize the relative quantities which are now used in analyses of neutron spectrometer data at the National Ignition Facility.

  7. Calibration of scintillation-light filters for neutron time-of-flight spectrometers at the National Ignition Facility

    DOE PAGES

    Sayre, D. B.; Barbosa, F.; Caggiano, J. A.; ...

    2016-07-26

    Sixty-four neutral density filters constructed of metal plates with 88 apertures of varying diameter have been radiographed with a soft x-ray source and CCD camera at National Security Technologies, Livermore. An analysis of the radiographs fits the radial dependence of the apertures’ image intensities to sigmoid functions, which can describe the rapidly decreasing intensity towards the apertures’ edges. Here, the fitted image intensities determine the relative attenuation value of each filter. Absolute attenuation values of several imaged filters, measured in situ during calibration experiments, normalize the relative quantities which are now used in analyses of neutron spectrometer data at the National Ignition Facility.

  8. Calibration of scintillation-light filters for neutron time-of-flight spectrometers at the National Ignition Facility.

    PubMed

    Sayre, D B; Barbosa, F; Caggiano, J A; DiPuccio, V N; Eckart, M J; Grim, G P; Hartouni, E P; Hatarik, R; Weber, F A

    2016-11-01

    Sixty-four neutral density filters constructed of metal plates with 88 apertures of varying diameter have been radiographed with a soft x-ray source and CCD camera at National Security Technologies, Livermore. An analysis of the radiographs fits the radial dependence of the apertures' image intensities to sigmoid functions, which can describe the rapidly decreasing intensity towards the apertures' edges. The fitted image intensities determine the relative attenuation value of each filter. Absolute attenuation values of several imaged filters, measured in situ during calibration experiments, normalize the relative quantities which are now used in analyses of neutron spectrometer data at the National Ignition Facility.
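
    The sigmoid fitting step described in these records can be sketched as below; the specific logistic form, initial guesses, and function names are assumptions for illustration, not the analysis code used at the facility.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(r, i0, r_edge, width, floor):
        """Plateau intensity i0 falling to `floor` across the aperture edge."""
        return floor + (i0 - floor) / (1.0 + np.exp((r - r_edge) / width))

    def fit_aperture_profile(radius, intensity):
        """Fit the radial dependence of one aperture's image intensity."""
        p0 = [intensity.max(), np.median(radius), 0.1 * radius.ptp(), intensity.min()]
        popt, _ = curve_fit(sigmoid, radius, intensity, p0=p0)
        return dict(zip(["i0", "r_edge", "width", "floor"], popt))

    # The ratio of fitted plateau intensities (filtered / unfiltered) would give the
    # relative attenuation of each filter, later normalized by in-situ measurements.
    ```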

  9. Functional magnetic resonance imaging of awake monkeys: some approaches for improving imaging quality

    PubMed Central

    Chen, Gang; Wang, Feng; Dillenburger, Barbara C.; Friedman, Robert M.; Chen, Li M.; Gore, John C.; Avison, Malcolm J.; Roe, Anna W.

    2011-01-01

    Functional magnetic resonance imaging (fMRI), at high magnetic field strength can suffer from serious degradation of image quality because of motion and physiological noise, as well as spatial distortions and signal losses due to susceptibility effects. Overcoming such limitations is essential for sensitive detection and reliable interpretation of fMRI data. These issues are particularly problematic in studies of awake animals. As part of our initial efforts to study functional brain activations in awake, behaving monkeys using fMRI at 4.7T, we have developed acquisition and analysis procedures to improve image quality with encouraging results. We evaluated the influence of two main variables on image quality. First, we show how important the level of behavioral training is for obtaining good data stability and high temporal signal-to-noise ratios. In initial sessions, our typical scan session lasted 1.5 hours, partitioned into short (<10 minutes) runs. During reward periods and breaks between runs, the monkey exhibited movements resulting in considerable image misregistrations. After a few months of extensive behavioral training, we were able to increase the length of individual runs and the total length of each session. The monkey learned to wait until the end of a block for fluid reward, resulting in longer periods of continuous acquisition. Each additional 60 training sessions extended the duration of each session by 60 minutes, culminating, after about 140 training sessions, in sessions that last about four hours. As a result, the average translational movement decreased from over 500 μm to less than 80 μm, a displacement close to that observed in anesthetized monkeys scanned in a 7 T horizontal scanner. Another major source of distortion at high fields arises from susceptibility variations. To reduce such artifacts, we used segmented gradient-echo echo-planar imaging (EPI) sequences. Increasing the number of segments significantly decreased susceptibility artifacts and image distortion. Comparisons of images from functional runs using four segments with those using a single-shot EPI sequence revealed a roughly two-fold improvement in functional signal-to-noise-ratio and 50% decrease in distortion. These methods enabled reliable detection of neural activation and permitted blood-oxygenation-level-dependent (BOLD) based mapping of early visual areas in monkeys using a volume coil. In summary, both extensive behavioral training of monkeys and application of segmented gradient-echo EPI sequence improved signal-to-noise and image quality. Understanding the effects these factors have is important for the application of high field imaging methods to the detection of sub-millimeter functional structures in the awake monkey brain. PMID:22055855

  10. Acoustic noise and functional magnetic resonance imaging: current strategies and future prospects.

    PubMed

    Amaro, Edson; Williams, Steve C R; Shergill, Sukhi S; Fu, Cynthia H Y; MacSweeney, Mairead; Picchioni, Marco M; Brammer, Michael J; McGuire, Philip K

    2002-11-01

    Functional magnetic resonance imaging (fMRI) has become the method of choice for studying the neural correlates of cognitive tasks. Nevertheless, the scanner produces acoustic noise during the image acquisition process, which is a problem in the study of auditory pathway and language generally. The scanner acoustic noise not only produces activation in brain regions involved in auditory processing, but also interferes with the stimulus presentation. Several strategies can be used to address this problem, including modifications of hardware and software. Although reduction of the source of the acoustic noise would be ideal, substantial hardware modifications to the current base of installed MRI systems would be required. Therefore, the most common strategy employed to minimize the problem involves software modifications. In this work we consider three main types of acquisitions: compressed, partially silent, and silent. For each implementation, paradigms using block and event-related designs are assessed. We also provide new data, using a silent event-related (SER) design, which demonstrate higher blood oxygen level-dependent (BOLD) response to a simple auditory cue when compared to a conventional image acquisition. Copyright 2002 Wiley-Liss, Inc.

  11. OpenNFT: An open-source Python/Matlab framework for real-time fMRI neurofeedback training based on activity, connectivity and multivariate pattern analysis.

    PubMed

    Koush, Yury; Ashburner, John; Prilepin, Evgeny; Sladky, Ronald; Zeidman, Peter; Bibikov, Sergei; Scharnowski, Frank; Nikonorov, Artem; De Ville, Dimitri Van

    2017-08-01

    Neurofeedback based on real-time functional magnetic resonance imaging (rt-fMRI) is a novel and rapidly developing research field. It allows for training of voluntary control over localized brain activity and connectivity and has demonstrated promising clinical applications. Because of the rapid technical developments of MRI techniques and the availability of high-performance computing, new methodological advances in rt-fMRI neurofeedback become possible. Here we outline the core components of a novel open-source neurofeedback framework, termed Open NeuroFeedback Training (OpenNFT), which efficiently integrates these new developments. This framework is implemented using Python and Matlab source code to allow for diverse functionality, high modularity, and rapid extendibility of the software depending on the user's needs. In addition, it provides an easy interface to the functionality of Statistical Parametric Mapping (SPM) that is also open-source and one of the most widely used fMRI data analysis software. We demonstrate the functionality of our new framework by describing case studies that include neurofeedback protocols based on brain activity levels, effective connectivity models, and pattern classification approaches. This open-source initiative provides a suitable framework to actively engage in the development of novel neurofeedback approaches, so that local methodological developments can be easily made accessible to a wider range of users. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Dynamic granularity of imaging systems

    DOE PAGES

    Geissel, Matthias; Smith, Ian C.; Shores, Jonathon E.; ...

    2015-11-04

    Imaging systems that include a specific source, imaging concept, geometry, and detector have unique properties such as signal-to-noise ratio, dynamic range, spatial resolution, distortions, and contrast. Some of these properties are inherently connected, particularly dynamic range and spatial resolution. It must be emphasized that spatial resolution is not a single number but must be seen in the context of dynamic range and consequently is better described by a function or distribution. We introduce the “dynamic granularity” G_dyn as a standardized, objective relation between a detector’s spatial resolution (granularity) and dynamic range for complex imaging systems in a given environment rather than the widely found characterization of detectors such as cameras or films by themselves. We found that this relation can partly be explained through consideration of the signal’s photon statistics, background noise, and detector sensitivity, but a comprehensive description including some unpredictable data such as dust, damages, or an unknown spectral distribution will ultimately have to be based on measurements. Measured dynamic granularities can be objectively used to assess the limits of an imaging system’s performance including all contributing noise sources and to qualify the influence of alternative components within an imaging system. Our article explains the construction criteria to formulate a dynamic granularity and compares measured dynamic granularities for different detectors used in the X-ray backlighting scheme employed at Sandia’s Z-Backlighter facility.

  13. In vivo three-photon imaging of deep cerebellum

    NASA Astrophysics Data System (ADS)

    Wang, Mengran; Wang, Tianyu; Wu, Chunyan; Li, Bo; Ouzounov, Dimitre G.; Sinefeld, David; Guru, Akash; Nam, Hyung-Song; Capecchi, Mario R.; Warden, Melissa R.; Xu, Chris

    2018-02-01

    We demonstrate three-photon microscopy (3PM) of mouse cerebellum at 1 mm depth by imaging both blood vessels and neurons. We compared 3PM and 2PM in the mouse cerebellum for imaging green (using excitation sources at 1300 nm and 920 nm, respectively) and red fluorescence (using excitation sources at 1680 nm and 1064 nm, respectively). 3PM enabled deeper imaging than 2PM because the use of a longer excitation wavelength reduces scattering in biological tissue and the higher-order nonlinear excitation provides better 3D localization. To illustrate these two advantages quantitatively, we measured the signal decay as well as the signal-to-background ratio (SBR) as a function of depth. We performed 2-photon imaging from the brain surface all the way down to the depth where the SBR reaches 1, while at the same depth, 3PM still has an SBR above 30. The segmented decay curve shows that the mouse cerebellum has different effective attenuation lengths at different depths, indicating heterogeneous tissue properties for this brain region. We compared the third harmonic generation (THG) signal, which is used to visualize myelinated fibers, with the decay curve. We found that the regions with shorter effective attenuation lengths correspond to the regions with more fibers. Our results indicate that the widespread, non-uniformly distributed myelinated fibers add heterogeneity to the mouse cerebellum, which poses additional challenges in deep imaging of this brain region.
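
    A minimal sketch of how an effective attenuation length can be extracted from a segment of the signal-decay curve described above, assuming the usual n-photon scaling of the excitation signal with depth; the exact fitting procedure used by the authors is not given in the abstract.

    ```python
    import numpy as np

    def effective_attenuation_length(depth_um, signal, n_photon=3):
        """Fit ln(signal) vs depth over one depth segment; for n-photon excitation
        S(z) ~ exp(-n * z / l_e), so the fitted slope gives l_e = -n / slope."""
        slope, _ = np.polyfit(depth_um, np.log(signal), 1)
        return -n_photon / slope

    # Splitting the depth axis into segments and fitting each one separately
    # reproduces a segmented decay curve, with shorter l_e expected in
    # fiber-rich regions.
    ```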

  14. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    PubMed

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult. Second, most pipelines are implemented in special hardware, resulting in limited flexibility of the processing steps implemented on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
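
    SUPRA's own sources should be consulted for its actual pipeline; as a generic, hedged sketch of the last stages such a software pipeline covers, the following converts beamformed RF lines to a log-compressed B-mode image. All names and the dynamic-range choice are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def rf_to_bmode(beamformed_rf, dynamic_range_db=60.0):
        """Envelope-detect beamformed RF lines (axial samples x scanlines) and
        log-compress to a B-mode image scaled to the range [0, 1]."""
        envelope = np.abs(hilbert(beamformed_rf, axis=0))
        envelope /= envelope.max()
        bmode_db = 20.0 * np.log10(np.maximum(envelope, 1e-12))
        return np.clip((bmode_db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)
    ```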

  15. Helioviewer.org: An Open-source Tool for Visualizing Solar Data

    NASA Astrophysics Data System (ADS)

    Hughitt, V. Keith; Ireland, J.; Schmiedel, P.; Dimitoglou, G.; Mueller, D.; Fleck, B.

    2009-05-01

    As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that simple tools exist for navigating these data quickly with a minimal amount of effort. By combining heterogeneous solar physics data types such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. Currently, Helioviewer enables users to browse the entire SOHO data archive, updated hourly, as well as feature/event data from eight different catalogs, including active region, flare, coronal mass ejection, and type II radio burst data. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc.), Helioviewer will offer a number of externally available application programming interfaces (APIs) to enable easy third-party use, adoption, and extension. Future functionality will include: support for additional data sources including TRACE, SDO, and STEREO; dynamic movie generation; a navigable timeline of recorded solar events; social annotation; and basic client-side image processing.

  16. Super-contrast photoacoustic resonance imaging

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Zhang, Ruochong; Feng, Xiaohua; Liu, Siyu; Zheng, Yuanjin

    2018-02-01

    In this paper, a new imaging modality, named photoacoustic resonance imaging (PARI), is proposed and experimentally demonstrated. Distinct from the conventional wideband PA signal induced by a single nanosecond laser pulse, the proposed PARI method utilizes a multi-burst modulated laser source to induce a PA resonant signal with enhanced signal strength and narrower bandwidth. Moreover, imaging contrast can be clearly improved over conventional single-pulse laser based PA imaging by selecting the optimum modulation frequency of the laser source, which originates from physical properties of different materials beyond the optical absorption coefficient. Specifically, the imaging steps are as follows: 1: Perform conventional PA imaging by modulating the laser source as a short pulse to identify the location of the target and the background. 2: Shine the modulated laser beam on the background and target respectively to characterize their individual resonance frequencies by sweeping the modulation frequency of the CW laser source. 3: Select the resonance frequency of the target as the modulation frequency of the laser source, perform imaging, and get the first PARI image. Then choose the resonance frequency of the background as the modulation frequency of the laser source, perform imaging, and get the second PARI image. 4: Subtract the first PARI image from the second PARI image to obtain the contrast-enhanced PARI result over the conventional PA imaging of step 1. Experimental validation on phantoms has been performed to show the merits of the proposed PARI method with much improved image contrast.
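
    Steps 2-4 above (resonance selection and image subtraction) can be sketched as follows; the order of subtraction follows the wording of the abstract, and all names are illustrative assumptions rather than the authors' code.

    ```python
    import numpy as np

    def pick_resonance(sweep_freqs_hz, pa_response):
        """Step 2: the resonance frequency is the modulation frequency that gives
        the strongest PA resonant response during the sweep."""
        return sweep_freqs_hz[int(np.argmax(pa_response))]

    def pari_contrast_image(image_target_res, image_background_res):
        """Steps 3-4: difference of the two PARI images (the first, target-resonance
        image subtracted from the second, background-resonance image, per the
        abstract's wording)."""
        return image_background_res - image_target_res
    ```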

  17. Reanalysis of global terrestrial vegetation trends from MODIS products: Browning or greening?

    Treesearch

    Yulong Zhang; Conghe Song; Lawrence E. Band; Ge Sun; Junxiang Li

    2017-01-01

    Accurately monitoring global vegetation dynamics with modern remote sensing is critical for understanding the functions and processes of the biosphere and its interactions with the planetary climate. The MODerate resolution Imaging Spectroradiometer (MODIS) vegetation index (VI) product has been a primary data source for this purpose. To date, the MODIS team had released...

  18. Utility of an image-based canopy reflectance modeling tool for remote estimation and LAI and leaf chlorophyll content at regional scales

    USDA-ARS?s Scientific Manuscript database

    Radiance data recorded by remote sensors function as a unique source for monitoring the terrestrial biosphere and vegetation dynamics at a range of spatial and temporal scales. A key challenge is to relate the remote sensing signal to critical variables describing land surface vegetation canopies su...

  19. STRUCTURAL AND FUNCTIONAL CHARACTERIZATION OF BENIGN FLECK RETINA USING MULTIMODAL IMAGING.

    PubMed

    Neriyanuri, Srividya; Rao, Chetan; Raman, Rajiv

    2017-01-01

    To report structural and functional features in a case series of benign fleck retina using multimodal imaging. Four cases with benign fleck retina underwent complete ophthalmic examination that included detailed history, visual acuity and refractive error testing, FM-100 hue test, dilated fundus evaluation, full field electroretinogram, fundus photography with autofluorescence, fundus fluorescein angiography, and swept-source optical coherence tomography. The age of the cases ranged from 19 years to 35 years (3 males and 1 female). Parental consanguinity was reported in two cases. All of them were visually asymptomatic with best-corrected visual acuity of 20/20 (moderate astigmatism) in both eyes. Low color discrimination was seen in two cases. Fundus photography showed pisciform flecks which were compactly placed on the posterior pole and were discrete, diverging towards the periphery. Lesions were seen as smaller dots within 1500 microns of the fovea and were hyperfluorescent on autofluorescence. Palisading retinal pigment epithelium defects were seen in the posterior pole on fundus fluorescein angiography imaging; irregular hyperfluorescence was also noted. One case had reduced cone responses on full field electroretinogram; the other three cases had normal electroretinograms. On optical coherence tomography, the level of the lesions varied from the retinal pigment epithelium and inner segment to the outer segment, extending to the external limiting membrane. Functional and structural deficits in benign fleck retina were picked up using multimodal imaging.

  20. The Function Biomedical Informatics Research Network Data Repository.

    PubMed

    Keator, David B; van Erp, Theo G M; Turner, Jessica A; Glover, Gary H; Mueller, Bryon A; Liu, Thomas T; Voyvodic, James T; Rasmussen, Jerod; Calhoun, Vince D; Lee, Hyo Jong; Toga, Arthur W; McEwen, Sarah; Ford, Judith M; Mathalon, Daniel H; Diaz, Michele; O'Leary, Daniel S; Jeremy Bockholt, H; Gadde, Syam; Preda, Adrian; Wible, Cynthia G; Stern, Hal S; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G

    2016-01-01

    The Function Biomedical Informatics Research Network (FBIRN) developed methods and tools for conducting multi-scanner functional magnetic resonance imaging (fMRI) studies. Method and tool development were based on two major goals: 1) to assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods, and 2) to provide a distributed network infrastructure and an associated federated database to host and query large, multi-site, fMRI and clinical data sets. In the process of achieving these goals the FBIRN test bed generated several multi-scanner brain imaging data sets to be shared with the wider scientific community via the BIRN Data Repository (BDR). The FBIRN Phase 1 data set consists of a traveling subject study of 5 healthy subjects, each scanned on 10 different 1.5 to 4 T scanners. The FBIRN Phase 2 and Phase 3 data sets consist of subjects with schizophrenia or schizoaffective disorder along with healthy comparison subjects scanned at multiple sites. In this paper, we provide concise descriptions of FBIRN's multi-scanner brain imaging data sets and details about the BIRN Data Repository instance of the Human Imaging Database (HID) used to publicly share the data. Copyright © 2015 Elsevier Inc. All rights reserved.
