Using pyramids to define local thresholds for blob detection.
Shneier, M
1983-03-01
A method of detecting blobs in images is described. The method involves building a succession of lower resolution images and looking for spots in these images. A spot in a low resolution image corresponds to a distinguished compact region in a known position in the original image. Further, it is possible to calculate thresholds in the low resolution image, using very simple methods, and to apply those thresholds to the region of the original image corresponding to the spot. Examples are shown in which variations of the technique are applied to several images.
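A rough NumPy sketch of the idea (not the paper's exact procedure; the pyramid depth, the 99th-percentile spot criterion and the threshold rule are illustrative assumptions):

import numpy as np

def build_pyramid(image, levels=4):
    """Successively halve resolution by 2x2 block averaging."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2   # crop to even size
        reduced = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(reduced)
    return pyramid

def local_threshold_blobs(image, level=3, spot_percentile=99):
    """Detect bright spots at a coarse level, then threshold only the matching
    full-resolution regions with a locally derived threshold."""
    pyramid = build_pyramid(image, levels=level + 1)
    coarse = pyramid[level]
    scale = 2 ** level
    spots = np.argwhere(coarse > np.percentile(coarse, spot_percentile))
    mask = np.zeros_like(image, dtype=bool)
    for r, c in spots:
        region = image[r * scale:(r + 1) * scale, c * scale:(c + 1) * scale]
        # simple local threshold: midpoint between the region mean and the coarse spot value
        thr = 0.5 * (region.mean() + coarse[r, c])
        mask[r * scale:(r + 1) * scale, c * scale:(c + 1) * scale] = region > thr
    return mask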
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goffin, Mark A., E-mail: mark.a.goffin@gmail.com; Buchan, Andrew G.; Dargaville, Steven
2015-01-15
A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: • Wavelet angular discretisation used to solve transport equation. • Adaptive method developed for the wavelet discretisation. • Anisotropic angular resolution demonstrated through the adaptive method. • Adaptive method provides improvements in computational efficiency.
Automated Segmentation of High-Resolution Photospheric Images of Active Regions
NASA Astrophysics Data System (ADS)
Yang, Meng; Tian, Yu; Rao, Changhui
2018-02-01
Due to the development of ground-based, large-aperture solar telescopes with adaptive optics (AO) resulting in increasing resolving ability, more accurate sunspot identifications and characterizations are required. In this article, we have developed a set of automated segmentation methods for high-resolution solar photospheric images. Firstly, a local-intensity-clustering level-set method is applied to roughly separate solar granulation and sunspots. Then reinitialization-free level-set evolution is adopted to adjust the boundaries of the photospheric patch; an adaptive intensity threshold is used to discriminate between umbra and penumbra; light bridges are selected according to their regional properties from candidates produced by morphological operations. The proposed method is applied to the solar high-resolution TiO 705.7-nm images taken by the 151-element AO system and Ground-Layer Adaptive Optics prototype system at the 1-m New Vacuum Solar Telescope of the Yunnan Observatory. Experimental results show that the method achieves satisfactory robustness and efficiency with low computational cost on high-resolution images. The method could also be applied to full-disk images, and the calculated sunspot areas correlate well with the data given by the National Oceanic and Atmospheric Administration (NOAA).
Image enhancement in positron emission mammography
NASA Astrophysics Data System (ADS)
Slavine, Nikolai V.; Seiler, Stephen; McColl, Roderick W.; Lenkinski, Robert E.
2017-02-01
Purpose: To evaluate an efficient iterative deconvolution method (RSEMD) for improving the quantitative accuracy of breast images previously reconstructed by a commercial positron emission mammography (PEM) scanner. Materials and Methods: The RSEMD method was tested on breast phantom data and clinical PEM imaging data. Data acquisition was performed on a commercial Naviscan Flex Solo II PEM camera. This method was applied to patient breast images previously reconstructed with Naviscan software (MLEM) to determine improvements in resolution, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Results: In all of the patients' breast studies the post-processed images proved to have higher resolution and lower noise as compared with images reconstructed by conventional methods. In general, the values of SNR reached a plateau at around 6 iterations with an improvement factor of about 2 for post-processed Flex Solo II PEM images. Improvements in image resolution after the application of RSEMD have also been demonstrated. Conclusions: A rapidly converging, iterative deconvolution algorithm with a novel resolution subsets-based approach (RSEMD) that operates on patient DICOM images has been used for quantitative improvement in breast imaging. The RSEMD method can be applied to clinical PEM images to improve image quality to diagnostically acceptable levels and will be crucial in order to facilitate diagnosis of tumor progression at the earliest stages. The RSEMD method can be considered as an extended Richardson-Lucy algorithm with multiple resolution levels (resolution subsets).
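The RSEMD algorithm itself is not reproduced here; purely as a hedged illustration of the Richardson-Lucy family it extends, a minimal NumPy sketch with an assumed Gaussian point-spread function might look like this:

import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(blurred, sigma=2.0, n_iter=6, eps=1e-12):
    """Plain Richardson-Lucy deconvolution with a Gaussian PSF (illustrative only)."""
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(n_iter):
        reblurred = gaussian_filter(estimate, sigma)        # forward model K*estimate
        ratio = blurred / np.maximum(reblurred, eps)
        # the Gaussian PSF is symmetric, so correlation with K equals convolution with K
        estimate *= gaussian_filter(ratio, sigma)
    return estimate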
Resolution experiments using the white light speckle method.
Conley, E; Cloud, G
1991-03-01
Noncoherent light speckle methods have been successfully applied to gauge the motion of glaciers and buildings. Resolution of the optical method was limited by the aberrating turbulent atmosphere through which the images were collected. Sensitivity limitations regarding this particular application of speckle interferometry are discussed and analyzed. Resolution limit experiments that were incidental to glacier flow studies are related to the basic theory of astronomical imaging. Optical resolution of the ice flow measurement technique is shown to be in substantial agreement with the sensitivity predictions of astronomy theory.
NASA Astrophysics Data System (ADS)
Saleh, Sarah S.; Lotfy, Hayam M.; Hassan, Nagiba Y.; Salem, Hesham
2014-11-01
This work presents a comparative study of a novel progressive spectrophotometric resolution technique, the amplitude center method (ACM), versus the well-established successive spectrophotometric resolution techniques: successive derivative subtraction (SDS), successive derivative of ratio spectra (SDR) and mean centering of ratio spectra (MCR). All the proposed spectrophotometric techniques consist of several consecutive steps utilizing ratio and/or derivative spectra. The novel amplitude center method (ACM) can be used for the determination of ternary mixtures using a single divisor, where the concentrations of the components are determined through progressive manipulation performed on the same ratio spectrum. These methods were applied for the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of a pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the official BP methods, showing no significant difference with respect to accuracy and precision.
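As a hedged toy illustration of the building block these techniques share (division by a divisor spectrum followed by mean centering), assuming absorbance vectors sampled on a common wavelength grid; this is not the full ACM/SDS/SDR/MCR workflow:

import numpy as np

def mean_center(y):
    return y - y.mean()

def ratio_then_mean_center(mixture, divisor, eps=1e-12):
    """Divide a mixture spectrum by a divisor spectrum, then mean-center the ratio.
    Dividing turns the divisor component's contribution into a constant, and
    mean centering removes that constant, leaving the remaining components."""
    ratio = mixture / np.maximum(divisor, eps)
    return mean_center(ratio)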
Applying high resolution remote sensing image and DEM to falling boulder hazard assessment
NASA Astrophysics Data System (ADS)
Huang, Changqing; Shi, Wenzhong; Ng, K. C.
2005-10-01
Boulder fall hazard assessment generally requires obtaining information about the boulders. The conventional approach of extensive mapping and surveying fieldwork is time-consuming, laborious and dangerous. This paper therefore proposes an image-processing method to extract boulders and assess boulder fall hazard from high resolution remote sensing images. The method can replace the conventional approach and extract boulder information with high accuracy, including boulder size, shape, height, and the slope and aspect of its position. With this boulder information, the assessment, prevention and mitigation of boulder fall hazards can be supported.
Applied Use Value of Scientific Information for Management of Ecosystem Services
NASA Astrophysics Data System (ADS)
Raunikar, R. P.; Forney, W.; Bernknopf, R.; Mishra, S.
2012-12-01
The U.S. Geological Survey has developed and applied methods for quantifying the value of scientific information (VOI) that are based on the applied use value of the information. In particular, the applied use value of U.S. Geological Survey information often includes efficient management of ecosystem services. The economic nature of U.S. Geological Survey scientific information is largely equivalent to that of any information, but we focus application of our VOI quantification methods on the information products provided freely to the public by the U.S. Geological Survey. We describe VOI economics in general and illustrate by referring to previous studies that use the evolving applied use value methods, including examples of the siting of landfills in Louden County, the mineral exploration efficiencies of finer-resolution geologic maps in Canada, and improved agricultural production and groundwater protection in Eastern Iowa made possible with Landsat moderate-resolution satellite imagery. Finally, we describe the adaptation of the applied use value method to the case of streamgage information used to improve the efficiency of water markets in New Mexico.
Multiframe super resolution reconstruction method based on light field angular images
NASA Astrophysics Data System (ADS)
Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao
2017-12-01
The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.
Afzali, Maryam; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid
2015-09-30
Diffusion weighted imaging (DWI) is a non-invasive method for investigating the brain white matter structure and can be used to evaluate fiber bundles. However, due to practical constraints, DWI data acquired in clinics are low resolution. This paper proposes a method for interpolation of orientation distribution functions (ODFs). To this end, fuzzy clustering is applied to segment ODFs based on the principal diffusion directions (PDDs). Next, a cluster is modeled by a tensor so that an ODF is represented by a mixture of tensors. For interpolation, each tensor is rotated separately. The method is applied on the synthetic and real DWI data of control and epileptic subjects. Both experiments illustrate capability of the method in increasing spatial resolution of the data in the ODF field properly. The real dataset show that the method is capable of reliable identification of differences between temporal lobe epilepsy (TLE) patients and normal subjects. The method is compared to existing methods. Comparison studies show that the proposed method generates smaller angular errors relative to the existing methods. Another advantage of the method is that it does not require an iterative algorithm to find the tensors. The proposed method is appropriate for increasing resolution in the ODF field and can be applied to clinical data to improve evaluation of white matter fibers in the brain. Copyright © 2015 Elsevier B.V. All rights reserved.
Super-resolution reconstruction of hyperspectral images.
Akgun, Toygar; Altunbasak, Yucel; Mersereau, Russell M
2005-11-01
Hyperspectral images are used for aerial and space imagery applications, including target detection, tracking, agricultural, and natural resource exploration. Unfortunately, atmospheric scattering, secondary illumination, changing viewing angles, and sensor noise degrade the quality of these images. Improving their resolution has a high payoff, but applying super-resolution techniques separately to every spectral band is problematic for two main reasons. First, the number of spectral bands can be in the hundreds, which increases the computational load excessively. Second, considering the bands separately does not make use of the information that is present across them. Furthermore, separate band super-resolution does not make use of the inherent low dimensionality of the spectral data, which can effectively be used to improve the robustness against noise. In this paper, we introduce a novel super-resolution method for hyperspectral images. An integral part of our work is to model the hyperspectral image acquisition process. We propose a model that enables us to represent the hyperspectral observations from different wavelengths as weighted linear combinations of a small number of basis image planes. Then, a method for applying super resolution to hyperspectral images using this model is presented. The method fuses information from multiple observations and spectral bands to improve spatial resolution and reconstruct the spectrum of the observed scene as a combination of a small number of spectral basis functions.
NASA Astrophysics Data System (ADS)
Clergeau, Jean-François; Ferraton, Matthieu; Guérard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick; Daullé, Thibault
2017-01-01
1D or 2D neutron position sensitive detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows one to define a calibration-independent measure of position resolution. We then apply this measure to quantify the position-resolving power of different algorithms treating these individual discriminator signals, which can be implemented in firmware. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the position resolution over best-wire algorithms, which are the standard way of treating these signals.
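A simplified Monte Carlo sketch of such an information-theoretic resolution measure (NumPy; the Gaussian position blur stands in for the real detector plus signal-treatment chain, which the paper models in far more detail):

import numpy as np

def mutual_information(labels, positions, bins=64):
    """Estimate I(spot label; measured position) from samples, in bits."""
    joint, _, _ = np.histogram2d(labels, positions, bins=[2, bins])
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))

def resolution_curve(distances, blur_sigma=1.0, n=100000, rng=None):
    """Mutual information between spot identity and the blurred measured position,
    as a function of spot separation (a perfect channel would transmit 1 bit)."""
    rng = rng or np.random.default_rng(0)
    curve = []
    for d in distances:
        label = rng.integers(0, 2, size=n)                 # which spot the neutron hit
        true_pos = np.where(label == 0, -d / 2, d / 2)
        measured = true_pos + rng.normal(0, blur_sigma, size=n)
        curve.append(mutual_information(label, measured))
    return np.array(curve)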
Saleh, Sarah S; Lotfy, Hayam M; Hassan, Nagiba Y; Salem, Hesham
2014-11-11
This work presents a comparative study of a novel progressive spectrophotometric resolution technique, the amplitude center method (ACM), versus the well-established successive spectrophotometric resolution techniques: successive derivative subtraction (SDS), successive derivative of ratio spectra (SDR) and mean centering of ratio spectra (MCR). All the proposed spectrophotometric techniques consist of several consecutive steps utilizing ratio and/or derivative spectra. The novel amplitude center method (ACM) can be used for the determination of ternary mixtures using a single divisor, where the concentrations of the components are determined through progressive manipulation performed on the same ratio spectrum. These methods were applied for the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of a pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the official BP methods, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.
ELM: an Algorithm to Estimate the Alpha Abundance from Low-resolution Spectra
NASA Astrophysics Data System (ADS)
Bu, Yude; Zhao, Gang; Pan, Jingchang; Bharat Kumar, Yerra
2016-01-01
We have investigated a novel methodology using the extreme learning machine (ELM) algorithm to determine the α abundance of stars. Applying two methods based on the ELM algorithm—ELM+spectra and ELM+Lick indices—to the stellar spectra from the ELODIE database, we measured the α abundance with a precision better than 0.065 dex. By applying these two methods to the spectra with different signal-to-noise ratios (S/Ns) and different resolutions, we found that ELM+spectra is more robust against degraded resolution and ELM+Lick indices is more robust against variation in S/N. To further validate the performance of ELM, we applied ELM+spectra and ELM+Lick indices to SDSS spectra and estimated α abundances with a precision around 0.10 dex, which is comparable to the results given by the SEGUE Stellar Parameter Pipeline. We further applied ELM to the spectra of stars in Galactic globular clusters (M15, M13, M71) and open clusters (NGC 2420, M67, NGC 6791), and results show good agreement with previous studies (within 1σ). A comparison of the ELM with other widely used methods including support vector machine, Gaussian process regression, artificial neural networks, and linear least-squares regression shows that ELM is efficient with computational resources and more accurate than other methods.
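The ELM idea is simple enough to sketch in a few lines of NumPy (a generic regressor, not the authors' tuned pipeline; the hidden-layer size, tanh activation and the example variable names are assumptions):

import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine: random hidden layer + linear least squares."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                       # random hidden-layer features
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # output weights in one shot
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# e.g. (hypothetical arrays of spectra and alpha abundances):
# alpha_pred = ELMRegressor().fit(spectra_train, alpha_train).predict(spectra_test)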
Single image super-resolution via an iterative reproducing kernel Hilbert space method.
Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu
2016-11-01
Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from solely one low-resolution image without using a training data set. We solve the problem from image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.
A. M. S. Smith; N. A. Drake; M. J. Wooster; A. T. Hudak; Z. A. Holden; C. J. Gibbons
2007-01-01
Accurate production of regional burned area maps is necessary to reduce uncertainty in emission estimates from African savannah fires. Numerous methods have been developed that map burned and unburned surfaces. These methods are typically applied to coarse spatial resolution (1 km) data to produce regional estimates of the area burned, while higher spatial resolution...
Ground-based measurements of ionospheric dynamics
NASA Astrophysics Data System (ADS)
Kouba, Daniel; Chum, Jaroslav
2018-05-01
Different methods are used to research and monitor the ionospheric dynamics using ground measurements: Digisonde Drift Measurements (DDM) and Continuous Doppler Sounding (CDS). For the first time, we present comparison between both methods on specific examples. Both methods provide information about the vertical drift velocity component. The DDM provides more information about the drift velocity vector and detected reflection points. However, the method is limited by the relatively low time resolution. In contrast, the strength of CDS is its high time resolution. The discussed methods can be used for real-time monitoring of medium scale travelling ionospheric disturbances. We conclude that it is advantageous to use both methods simultaneously if possible. The CDS is then applied for the disturbance detection and analysis, and the DDM is applied for the reflection height control.
Digital Signal Processing Based on a Clustering Algorithm for Ir/Au TES Microcalorimeter
NASA Astrophysics Data System (ADS)
Zen, N.; Kunieda, Y.; Takahashi, H.; Hiramoto, K.; Nakazawa, M.; Fukuda, D.; Ukibe, M.; Ohkubo, M.
2006-02-01
In recent years, cryogenic microcalorimeters using their superconducting transition edge have been under development for possible application to astronomical X-ray observations. To improve the energy resolution of superconducting transition edge sensors (TES), several correction methods have been developed. Among them, a clustering method based on digital signal processing has recently been proposed. In this paper, we applied the clustering method to an Ir/Au bilayer TES. This method resulted in almost a 10% improvement in the energy resolution. Conversely, from the point of view of imaging X-ray spectroscopy, we applied the clustering method to pixellated Ir/Au-TES devices. We will thus show how a clustering method which sorts signals by their shapes is also useful for position identification.
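The paper's specific clustering algorithm is not given here; as a generic, hedged sketch of sorting digitized TES pulses by shape before calibrating each group separately (scikit-learn k-means; the pulses array of baseline-subtracted waveforms is an assumed input):

import numpy as np
from sklearn.cluster import KMeans

def cluster_pulse_shapes(pulses, n_clusters=4):
    """Group pulses by normalized shape so each group can be calibrated separately."""
    areas = pulses.sum(axis=1, keepdims=True)
    shapes = pulses / np.maximum(areas, 1e-12)       # remove amplitude, keep pulse shape
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(shapes)
    return labels, areas.ravel()

# Energies can then be estimated per cluster, e.g. with a cluster-specific gain:
# energy[i] = gain[labels[i]] * areas[i]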
Towards the Optimal Pixel Size of dem for Automatic Mapping of Landslide Areas
NASA Astrophysics Data System (ADS)
Pawłuszek, K.; Borkowski, A.; Tarolli, P.
2017-05-01
Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies have demonstrated that choosing the finest DEM resolution is not always the best solution, and various DEM resolutions can be applicable for diverse landslide applications. Thus, this study aims to assess the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely a feed forward neural network (FFNN) and maximum likelihood classification (ML), was applied in this study. This also allowed the impact of the classification method on the selection of DEM resolution to be determined. Landslide-affected areas were mapped based on four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated by applying a landslide inventory map and computing a confusion matrix. The results of this study suggest that the finest DEM scale is not always the best fit, although working at 1 m DEM resolution on the micro-topography scale can show different results. The best performance was found using the 5 m DEM resolution for FFNN and the 1 m DEM resolution for ML classification.
Soft X-ray astronomy using grazing incidence optics
NASA Technical Reports Server (NTRS)
Davis, John M.
1989-01-01
The instrumental background of X-ray astronomy, with an emphasis on high resolution imagery, is outlined. Optical and system performance, in terms of resolution, are compared, and methods for improving the latter in finite-length instruments are described. The method of analysis of broadband images to obtain diagnostic information is described and is applied to the analysis of coronal structures.
NASA Astrophysics Data System (ADS)
Guerriero, Merilisa; Capozzoli, Luigi; De Martino, Gregory; Perciante, Felice; Gueguen, Erwan; Rizzo, Enzo
2017-04-01
Geophysical methods are commonly applied to characterize karst caves. Several geophysical methods are used, such as electrical resistivity tomography (ERT), gravimetric prospecting (G), ground penetrating radar (GPR) and seismic methods (S), in order to provide information on cave geometry and subsurface geological structure. In some complex karst systems, each geophysical method can only give partial information if used in the normal way, due to low resolution for deep targets. In order to reduce uncertainty and avoid misinterpretations based on the normal use of the electrical resistivity tomography method, a new ERT approach has been applied in the karst cave Castello di Lepre (Marsico Nuovo, Basilicata region, Italy), located in the Meso-Cenozoic carbonate substratum of the Monti della Maddalena ridge (Southern Apennines). In detail, a cross-ERT acquisition system was applied in order to improve the resolution of the electrical resistivity distribution in the geological structure surrounding the karst cave. The cross-ERT system provides a more uniform model resolution vertically, increasing the resolution of the surface resistivity imaging. The usual cross-ERT is made by setting electrodes in two or more boreholes in order to acquire the resistivity data distribution. In this work the cross-ERT was made between electrodes located on the surface and along the karst cave, in order to obtain a high resolution image of the electrical resistivity distribution between the cave and the surface topography. Finally, the acquired cross-ERT is potentially well-suited for imaging fracture zones, since electrical current flow in fractured rock is primarily electrolytic via the secondary porosity associated with the fractures.
High resolution SAW elastography for ex-vivo porcine skin specimen
NASA Astrophysics Data System (ADS)
Zhou, Kanheng; Feng, Kairui; Wang, Mingkai; Jamera, Tanatswa; Li, Chunhui; Huang, Zhihong
2018-02-01
Surface acoustic wave (SAW) elastography has been proven to be a non-invasive, non-destructive method for accurately characterizing tissue elastic properties. The current SAW elastography technique tracks the generated surface acoustic wave impulse point by point at locations a few millimeters apart, so the reconstructed elastogram has low lateral resolution. To improve the lateral resolution of current SAW elastography, a new method was proposed in this research. An M-B scan mode, high spatial resolution phase sensitive optical coherence tomography (PhS-OCT) system was employed to track the ultrasonically induced SAW impulse. An ex-vivo porcine skin specimen was tested using this proposed method. A 2D fast Fourier transform based algorithm was applied to process the acquired data for estimating the surface acoustic wave dispersion curve and its corresponding penetration depth. Then, the ex-vivo porcine skin elastogram was established by relating the surface acoustic wave dispersion curve to its corresponding penetration depth. The result from the proposed method shows higher lateral resolution than that from the current SAW elastography technique, and the approximated skin elastogram could also distinguish the different layers in the skin specimen, i.e. epidermis, dermis and fat layers. This proposed SAW elastography technique may have large potential to be widely applied in clinical use for skin disease diagnosis and treatment monitoring.
Two-Point Turbulence Closure Applied to Variable Resolution Modeling
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Rubinstein, Robert
2011-01-01
Variable resolution methods have become frontline CFD tools, but in order to take full advantage of this promising new technology, more formal theoretical development is desirable. Two general classes of variable resolution methods can be identified: hybrid or zonal methods in which RANS and LES models are solved in different flow regions, and bridging or seamless models which interpolate smoothly between RANS and LES. This paper considers the formulation of bridging methods using methods of two-point closure theory. The fundamental problem is to derive a subgrid two-equation model. We compare and reconcile two different approaches to this goal: the Partially Integrated Transport Model, and the Partially Averaged Navier-Stokes method.
Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses
NASA Astrophysics Data System (ADS)
Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong
2017-04-01
Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive, and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial post-processing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Due to the need of fitting only a single regression model for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km-horizontal resolution and 1 h-temporal resolution. The precipitation forecast used in this study is obtained from a limited area model ensemble prediction system also operated by ZAMG. The so called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The performed SAMOS approach statistically combines the in-house developed high resolution analysis and ensemble prediction system. The station-based validation of 6 hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared to bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The applied statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
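A heavily simplified NumPy sketch of the standardized-anomaly step: real SAMOS fits a full predictive distribution (with the ensemble spread as a second predictor), whereas this toy version regresses only the anomaly of the ensemble mean; all array names are assumptions.

import numpy as np

def standardized_anomaly(x, clim_mean, clim_std, eps=1e-6):
    """Subtract the site-specific climatological mean, divide by its standard deviation."""
    return (x - clim_mean) / np.maximum(clim_std, eps)

def fit_samos(fcst_mean, obs, clim_mean, clim_std):
    """Fit one linear model in standardized-anomaly space for the whole domain."""
    f_anom = standardized_anomaly(fcst_mean, clim_mean, clim_std).ravel()
    o_anom = standardized_anomaly(obs, clim_mean, clim_std).ravel()
    A = np.column_stack([np.ones_like(f_anom), f_anom])
    coeffs, *_ = np.linalg.lstsq(A, o_anom, rcond=None)
    return coeffs

def predict_samos(coeffs, fcst_mean, clim_mean, clim_std):
    """Calibrated forecast at any grid point, back-transformed to physical units."""
    f_anom = standardized_anomaly(fcst_mean, clim_mean, clim_std)
    o_anom_hat = coeffs[0] + coeffs[1] * f_anom
    return o_anom_hat * clim_std + clim_mean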
High-Resolution Wind Measurements for Offshore Wind Energy Development
NASA Technical Reports Server (NTRS)
Nghiem, Son V.; Neumann, Gregory
2011-01-01
A mathematical transform, called the Rosette Transform, together with a new method, called the Dense Sampling Method, have been developed. The Rosette Transform is invented to apply to both the mean part and the fluctuating part of a targeted radar signature using the Dense Sampling Method to construct the data in a high-resolution grid at 1-km posting for wind measurements over water surfaces such as oceans or lakes.
NASA Astrophysics Data System (ADS)
Petrou, Zisis I.; Xian, Yang; Tian, YingLi
2018-04-01
Estimation of sea ice motion at fine scales is important for a number of regional and local level applications, including modeling of sea ice distribution, ocean-atmosphere and climate dynamics, as well as safe navigation and sea operations. In this study, we propose an optical flow and super-resolution approach to accurately estimate motion from remote sensing images at a higher spatial resolution than the original data. First, an external example learning-based super-resolution method is applied on the original images to generate higher resolution versions. Then, an optical flow approach is applied on the higher resolution images, identifying sparse correspondences and interpolating them to extract a dense motion vector field with continuous values and subpixel accuracies. Our proposed approach is successfully evaluated on passive microwave, optical, and Synthetic Aperture Radar data, proving appropriate for multi-sensor applications and different spatial resolutions. The approach estimates motion with similar or higher accuracy than the original data, while increasing the spatial resolution by up to eight times. In addition, the adopted optical flow component outperforms a state-of-the-art pattern matching method. Overall, the proposed approach results in accurate motion vectors with unprecedented spatial resolutions of up to 1.5 km for passive microwave data covering the entire Arctic and 20 m for radar data, and proves promising for numerous scientific and operational applications.
ELM: AN ALGORITHM TO ESTIMATE THE ALPHA ABUNDANCE FROM LOW-RESOLUTION SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bu, Yude; Zhao, Gang; Kumar, Yerra Bharat
We have investigated a novel methodology using the extreme learning machine (ELM) algorithm to determine the α abundance of stars. Applying two methods based on the ELM algorithm—ELM+spectra and ELM+Lick indices—to the stellar spectra from the ELODIE database, we measured the α abundance with a precision better than 0.065 dex. By applying these two methods to the spectra with different signal-to-noise ratios (S/Ns) and different resolutions, we found that ELM+spectra is more robust against degraded resolution and ELM+Lick indices is more robust against variation in S/N. To further validate the performance of ELM, we applied ELM+spectra and ELM+Lick indices to SDSS spectra and estimated α abundances with a precision around 0.10 dex, which is comparable to the results given by the SEGUE Stellar Parameter Pipeline. We further applied ELM to the spectra of stars in Galactic globular clusters (M15, M13, M71) and open clusters (NGC 2420, M67, NGC 6791), and results show good agreement with previous studies (within 1σ). A comparison of the ELM with other widely used methods including support vector machine, Gaussian process regression, artificial neural networks, and linear least-squares regression shows that ELM is efficient with computational resources and more accurate than other methods.
NASA Astrophysics Data System (ADS)
Wu, Bo; Liu, Wai Chung; Grumpe, Arne; Wöhler, Christian
2018-06-01
Lunar Digital Elevation Models (DEMs) are important for successful lunar landing and exploration missions. Lunar DEMs are typically generated by photogrammetry or laser altimetry approaches. Photogrammetric methods require multiple stereo images of the region of interest and may not be applicable in cases where stereo coverage is not available. In contrast, reflectance-based shape reconstruction techniques, such as shape from shading (SfS) and shape and albedo from shading (SAfS), use monocular images to generate DEMs with pixel-level resolution. We present a novel hierarchical SAfS method that refines a lower-resolution DEM to pixel-level resolution given a monocular image with a known light source. We also estimate the corresponding pixel-wise albedo map in the process and use it to regularize the pixel-level shape reconstruction constrained by the low-resolution DEM. In this study, a Lunar-Lambertian reflectance model is applied to estimate the albedo map. Experiments were carried out using monocular images from the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC), with spatial resolution of 0.5-1.5 m per pixel, constrained by the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM), with spatial resolution of 60 m. The results indicate that local details are well recovered by the proposed algorithm with plausible albedo estimation. The low-frequency topographic consistency depends on the quality of the low-resolution DEM and the resolution difference between the image and the low-resolution DEM.
Nonnegative constraint quadratic program technique to enhance the resolution of γ spectra
NASA Astrophysics Data System (ADS)
Li, Jinglun; Xiao, Wuyun; Ai, Xianyun; Chen, Ye
2018-04-01
Two concepts, the nonnegative least squares problem (NNLS) and the linear complementarity problem (LCP), are introduced for resolution enhancement of γ spectra. The respective algorithms, the active set method and the primal-dual interior point method, are applied to solve the above two problems. Mathematically, the nonnegative constraint results in the sparsity of the optimal solution of the deconvolution, and it is this sparsity that enhances the resolution. Finally, a comparison of peak position accuracy and computation time is made between these two methods and the boosted L-R and Gold methods.
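A minimal sketch of the NNLS route using SciPy's active-set solver (the Gaussian response matrix and FWHM are illustrative assumptions; the LCP formulation solved with an interior point method is not shown):

import numpy as np
from scipy.optimize import nnls

def gaussian_response_matrix(n_channels, fwhm):
    """Detector response: each true-energy channel spreads into a Gaussian peak."""
    sigma = fwhm / 2.355
    idx = np.arange(n_channels)
    R = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
    return R / R.sum(axis=0, keepdims=True)

def enhance_resolution(measured_spectrum, fwhm=6.0):
    """Solve R x = y with x >= 0; the nonnegativity constraint yields a sparse,
    resolution-enhanced estimate of the underlying spectrum."""
    R = gaussian_response_matrix(len(measured_spectrum), fwhm)
    x, _residual_norm = nnls(R, measured_spectrum)
    return x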
Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system
NASA Astrophysics Data System (ADS)
Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi
2010-05-01
Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed the example-based super-resolution method to enhance an image through pixel-based texton substitution to reduce the computational cost. In this method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The result showed that the fine detail of the low-resolution video can be reproduced compared with bicubic interpolation and the required bandwidth could be reduced to about 1/5 in a video camera. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with the processed image using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman’s patch-based super-resolution method. Compared with that of the Freeman’s patch-based super-resolution method, the computational time of our method was reduced to almost 1/10.
NASA Astrophysics Data System (ADS)
Bai, Rui; Tiejian, Li; Huang, Yuefei; Jiaye, Li; Wang, Guangqian; Yin, Dongqin
2015-12-01
The increasing resolution of Digital Elevation Models (DEMs) and the development of drainage network extraction algorithms make it possible to develop high-resolution drainage networks for large river basins. These vector networks contain massive numbers of river reaches with associated geographical features, including topological connections and topographical parameters. These features create challenges for efficient map display and data management. Of particular interest are the requirements of data management for multi-scale hydrological simulations using multi-resolution river networks. In this paper, a hierarchical pyramid method is proposed, which generates coarsened vector drainage networks from the originals iteratively. The method is based on the Horton-Strahler's (H-S) order schema. At each coarsening step, the river reaches with the lowest H-S order are pruned, and their related sub-basins are merged. At the same time, the topological connections and topographical parameters of each coarsened drainage network are inherited from the former level using formulas that are presented in this study. The method was applied to the original drainage networks of a watershed in the Huangfuchuan River basin extracted from a 1-m-resolution airborne LiDAR DEM and applied to the full Yangtze River basin in China, which was extracted from a 30-m-resolution ASTER GDEM. In addition, a map-display and parameter-query web service was published for the Mississippi River basin, and its data were extracted from the 30-m-resolution ASTER GDEM. The results presented in this study indicate that the developed method can effectively manage and display massive amounts of drainage network data and can facilitate multi-scale hydrological simulations.
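A bare-bones sketch of one Horton-Strahler coarsening step on a reach tree (plain Python; the dict-of-upstream-reaches representation is an assumption, and the inheritance formulas for topographical parameters described in the paper are omitted):

def strahler_order(children, reach):
    """Horton-Strahler order of a reach in a tree {reach: [upstream reaches]}."""
    ups = children.get(reach, [])
    if not ups:
        return 1
    orders = [strahler_order(children, u) for u in ups]
    top = max(orders)
    return top + 1 if orders.count(top) > 1 else top

def all_reaches(children, outlet):
    """All reaches reachable upstream of the outlet."""
    stack, seen = [outlet], []
    while stack:
        r = stack.pop()
        seen.append(r)
        stack.extend(children.get(r, []))
    return seen

def prune_lowest_order(children, outlet):
    """One coarsening step: drop every reach whose order equals the network minimum
    (the headwater reaches), keeping the remaining topology intact.
    Recomputing orders per reach is O(n^2); acceptable for a sketch."""
    orders = {r: strahler_order(children, r) for r in all_reaches(children, outlet)}
    cutoff = min(orders.values())
    pruned = {}
    for reach, ups in children.items():
        if orders.get(reach, cutoff) > cutoff:
            pruned[reach] = [u for u in ups if orders[u] > cutoff]
    return pruned

# Tiny example (hypothetical reach names):
# children = {"outlet": ["A", "B"], "A": ["A1", "A2"]}   # A1, A2, B are headwater reaches
# coarser = prune_lowest_order(children, "outlet")        # -> {"outlet": ["A"], "A": []}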
A high resolution InSAR topographic reconstruction research in urban area based on TerraSAR-X data
NASA Astrophysics Data System (ADS)
Qu, Feifei; Qin, Zhang; Zhao, Chaoying; Zhu, Wu
2011-10-01
To address the problems of difficult phase unwrapping and phase noise in InSAR DEM reconstruction, especially for high-resolution TerraSAR-X data, this paper improves the height reconstruction algorithm using a "remove-restore" approach based on an external coarse DEM and multi-interferogram processing, and proposes a height calibration method based on CR+GPS data. Several measures have been taken for urban high-resolution DEM reconstruction with TerraSAR-X data. SAR interferometric pairs with long spatial and short temporal baselines are selected for DEM generation. The external low-resolution, low-accuracy DEM is used in the "remove-restore" concept to ease phase unwrapping. Stochastic errors, including atmospheric effects and phase noise, are suppressed by weighted averaging of DEM phases. Six TerraSAR-X scenes are used to create a twelve-meter resolution DEM over Xi'an, China with the newly proposed method. Heights at discrete GPS benchmarks are used to calibrate the result, and an RMS of 3.29 m is achieved by comparison with a 1:50000 DEM.
NASA Astrophysics Data System (ADS)
Costa-Surós, M.; Calbó, J.; González, J. A.; Long, C. N.
2013-06-01
The cloud vertical distribution, and especially the cloud base height, which is linked to cloud type, is an important characteristic for describing the impact of clouds in a changing climate. In this work several methods to estimate the cloud vertical structure (CVS) from atmospheric sounding profiles are compared, considering the number and position of cloud layers, against a ground-based system taken as a reference: the Active Remote Sensing of Clouds (ARSCL). All methods establish some conditions on the relative humidity, and differ in the use of other variables, the thresholds applied, or the vertical resolution of the profile. In this study these methods are applied to 125 radiosonde profiles acquired at the ARM Southern Great Plains site during all seasons of the year 2009 and endorsed by GOES images to confirm that the cloudiness conditions are homogeneous enough across their trajectory. The overall agreement for the methods ranges between 44% and 88%; four methods produce total agreements around 85%. Further tests and improvements are applied to one of these methods. In addition, we attempt to make this method suitable for low-resolution vertical profiles, which could be useful in atmospheric modeling. The total agreement, even when using low-resolution profiles, can be improved up to 91% if the thresholds for a moist layer to become a cloud layer are modified to minimize false negatives with the current data set, thus improving the overall agreement.
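A much-simplified NumPy sketch of the common core of such methods (a single fixed relative-humidity threshold and minimum thickness are assumptions; the compared methods use more elaborate, often height-dependent criteria):

import numpy as np

def cloud_layers_from_sounding(height_m, rh_percent, rh_cloud=95.0, min_thickness=300.0):
    """Very simplified CVS estimate: contiguous runs of levels whose RH exceeds a
    threshold are candidate cloud layers; layers thinner than min_thickness are dropped."""
    is_moist = rh_percent >= rh_cloud
    layers, start = [], None
    for i, moist in enumerate(is_moist):
        if moist and start is None:
            start = i
        elif not moist and start is not None:
            if height_m[i - 1] - height_m[start] >= min_thickness:
                layers.append((height_m[start], height_m[i - 1]))   # (base, top)
            start = None
    if start is not None and height_m[-1] - height_m[start] >= min_thickness:
        layers.append((height_m[start], height_m[-1]))
    return layers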
Computational methods for constructing protein structure models from 3D electron microscopy maps.
Esquivel-Rodríguez, Juan; Kihara, Daisuke
2013-10-01
Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. Copyright © 2013 Elsevier Inc. All rights reserved.
Detecting breast microcalcifications using super-resolution ultrasound imaging: a clinical study
NASA Astrophysics Data System (ADS)
Huang, Lianjie; Labyed, Yassin; Hanson, Kenneth; Sandoval, Daniel; Pohl, Jennifer; Williamson, Michael
2013-03-01
Imaging breast microcalcifications is crucial for early detection and diagnosis of breast cancer. It is challenging for current clinical ultrasound to image breast microcalcifications. However, new imaging techniques using data acquired with a synthetic-aperture ultrasound system have the potential to significantly improve ultrasound imaging. We recently developed a super-resolution ultrasound imaging method termed the phase-coherent multiple-signal classification (PC-MUSIC). This signal subspace method accounts for the phase response of transducer elements to improve image resolution. In this paper, we investigate the clinical feasibility of our super-resolution ultrasound imaging method for detecting breast microcalcifications. We use our custom-built, real-time synthetic-aperture ultrasound system to acquire breast ultrasound data for 40 patients whose mammograms show the presence of breast microcalcifications. We apply our super-resolution ultrasound imaging method to the patient data, and produce clear images of breast calcifications. Our super-resolution ultrasound PC-MUSIC imaging with synthetic-aperture ultrasound data can provide a new imaging modality for detecting breast microcalcifications in clinic without using ionizing radiation.
Higashiura, Akifumi; Ohta, Kazunori; Masaki, Mika; Sato, Masaru; Inaka, Koji; Tanaka, Hiroaki; Nakagawa, Atsushi
2013-11-01
Recently, many technical improvements in macromolecular X-ray crystallography have increased the number of structures deposited in the Protein Data Bank and improved the resolution limit of protein structures. Almost all high-resolution structures have been determined using a synchrotron radiation source in conjunction with cryocooling techniques, which are required in order to minimize radiation damage. However, optimization of cryoprotectant conditions is a time-consuming and difficult step. To overcome this problem, the high-pressure cryocooling method was developed (Kim et al., 2005) and successfully applied to many protein-structure analyses. In this report, using the high-pressure cryocooling method, the X-ray crystal structure of bovine H-protein was determined at 0.86 Å resolution. Structural comparisons between high- and ambient-pressure cryocooled crystals at ultra-high resolution illustrate the versatility of this technique. This is the first ultra-high-resolution X-ray structure obtained using the high-pressure cryocooling method.
Kok, H P; de Greef, M; Bel, A; Crezee, J
2009-08-01
In regional hyperthermia, optimization is useful to obtain adequate applicator settings. A speed-up of the previously published method for high resolution temperature based optimization is proposed. Element grouping as described in literature uses selected voxel sets instead of single voxels to reduce computation time. Elements which achieve their maximum heating potential for approximately the same phase/amplitude setting are grouped. To form groups, eigenvalues and eigenvectors of precomputed temperature matrices are used. At high resolution temperature matrices are unknown and temperatures are estimated using low resolution (1 cm) computations and the high resolution (2 mm) temperature distribution computed for low resolution optimized settings using zooming. This technique can be applied to estimate an upper bound for high resolution eigenvalues. The heating potential of elements was estimated using these upper bounds. Correlations between elements were estimated with low resolution eigenvalues and eigenvectors, since high resolution eigenvectors remain unknown. Four different grouping criteria were applied. Constraints were set to the average group temperatures. Element grouping was applied for five patients and optimal settings for the AMC-8 system were determined. Without element grouping the average computation times for five and ten runs were 7.1 and 14.4 h, respectively. Strict grouping criteria were necessary to prevent an unacceptable exceeding of the normal tissue constraints (up to approximately 2 degrees C), caused by constraining average instead of maximum temperatures. When strict criteria were applied, speed-up factors of 1.8-2.1 and 2.6-3.5 were achieved for five and ten runs, respectively, depending on the grouping criteria. When many runs are performed, the speed-up factor will converge to 4.3-8.5, which is the average reduction factor of the constraints and depends on the grouping criteria. Tumor temperatures were comparable. Maximum exceeding of the constraint in a hot spot was 0.24-0.34 degree C; average maximum exceeding over all five patients was 0.09-0.21 degree C, which is acceptable. High resolution temperature based optimization using element grouping can achieve a speed-up factor of 4-8, without large deviations from the conventional method.
NASA Technical Reports Server (NTRS)
Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.
1992-01-01
The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.
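For reference, the central quantities of this formal error analysis can be written in a few lines of NumPy (a generic linear constrained retrieval; K is the weighting-function matrix, S_e the measurement-error covariance and R the constraint matrix, all assumed inputs):

import numpy as np

def constrained_retrieval_diagnostics(K, S_e, R):
    """Gain matrix, averaging kernel, and retrieval noise covariance for a
    linear constrained (optimal-estimation / Tikhonov style) retrieval."""
    S_e_inv = np.linalg.inv(S_e)
    G = np.linalg.solve(K.T @ S_e_inv @ K + R, K.T @ S_e_inv)   # gain matrix
    A = G @ K                                                    # averaging kernel
    S_noise = G @ S_e @ G.T                                      # measurement-error sensitivity
    return G, A, S_noise

# Row i of A shows how the retrieval at level i smooths the true profile; its width is a
# direct measure of vertical resolution, and (I - A) quantifies the bias toward the a priori.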
NASA Astrophysics Data System (ADS)
Crawford, Ben; Grimmond, Sue; Kent, Christoph; Gabey, Andrew; Ward, Helen; Sun, Ting; Morrison, William
2017-04-01
Remotely sensed data from satellites have potential to enable high-resolution, automated calculation of urban surface energy balance terms and inform decisions about urban adaptations to environmental change. However, aerodynamic resistance methods to estimate sensible heat flux (QH) in cities using satellite-derived observations of surface temperature are difficult in part due to spatial and temporal variability of the thermal aerodynamic resistance term (rah). In this work, we extend an empirical function to estimate rah using observational data from several cities with a broad range of surface vegetation land cover properties. We then use this function to calculate spatially and temporally variable rah in London based on high-resolution (100 m) land cover datasets and in situ meteorological observations. In order to calculate high-resolution QH based on satellite-observed land surface temperatures, we also develop and employ novel methods to i) apply source area-weighted averaging of surface and meteorological variables across the study spatial domain, ii) calculate spatially variable, high-resolution meteorological variables (wind speed, friction velocity, and Obukhov length), iii) incorporate spatially interpolated urban air temperatures from a distributed sensor network, and iv) apply a modified Monte Carlo approach to assess uncertainties with our results, methods, and input variables. Modeled QH using the aerodynamic resistance method is then compared to in situ observations in central London from a unique network of scintillometers and eddy-covariance measurements.
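The bulk formula at the core of the aerodynamic resistance method is short enough to state directly (a sketch only; the hard part, the empirical vegetation-dependent function for rah and the source-area weighting, is not reproduced here):

def sensible_heat_flux(T_surface_K, T_air_K, r_ah, rho=1.2, cp=1005.0):
    """Bulk aerodynamic estimate: QH = rho * cp * (Ts - Ta) / r_ah, in W m-2.
    rho = air density [kg m-3], cp = specific heat of air [J kg-1 K-1],
    r_ah = thermal aerodynamic resistance [s m-1]."""
    return rho * cp * (T_surface_K - T_air_K) / r_ah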
Lohse, Christian; Bassett, Danielle S; Lim, Kelvin O; Carlson, Jean M
2014-10-01
Human brain anatomy and function display a combination of modular and hierarchical organization, suggesting the importance of both cohesive structures and variable resolutions in the facilitation of healthy cognitive processes. However, tools to simultaneously probe these features of brain architecture require further development. We propose and apply a set of methods to extract cohesive structures in network representations of brain connectivity using multi-resolution techniques. We employ a combination of soft thresholding, windowed thresholding, and resolution in community detection, that enable us to identify and isolate structures associated with different weights. One such mesoscale structure is bipartivity, which quantifies the extent to which the brain is divided into two partitions with high connectivity between partitions and low connectivity within partitions. A second, complementary mesoscale structure is modularity, which quantifies the extent to which the brain is divided into multiple communities with strong connectivity within each community and weak connectivity between communities. Our methods lead to multi-resolution curves of these network diagnostics over a range of spatial, geometric, and structural scales. For statistical comparison, we contrast our results with those obtained for several benchmark null models. Our work demonstrates that multi-resolution diagnostic curves capture complex organizational profiles in weighted graphs. We apply these methods to the identification of resolution-specific characteristics of healthy weighted graph architecture and altered connectivity profiles in psychiatric disease.
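A hedged NumPy sketch of two ingredients named here, windowed thresholding of a weighted adjacency matrix and a resolution-parameterized modularity, evaluated for a partition that would come from a separate community-detection step (not shown):

import numpy as np

def window_threshold(W, low, high):
    """Windowed thresholding: keep only edge weights inside [low, high]."""
    M = np.where((W >= low) & (W <= high), W, 0.0)
    np.fill_diagonal(M, 0.0)
    return M

def modularity(W, labels, gamma=1.0):
    """Weighted Newman-Girvan modularity with resolution parameter gamma.
    labels: integer community assignment per node (NumPy array)."""
    k = W.sum(axis=1)
    two_m = W.sum()
    same = labels[:, None] == labels[None, :]
    return (W - gamma * np.outer(k, k) / two_m)[same].sum() / two_m

def multiresolution_curve(W, labels, gammas):
    """Diagnostic curve: modularity of a fixed partition across resolutions."""
    return np.array([modularity(W, labels, g) for g in gammas])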
A new MUSIC electromagnetic imaging method with enhanced resolution for small inclusions
NASA Astrophysics Data System (ADS)
Zhong, Yu; Chen, Xudong
2008-11-01
This paper investigates the influence of test dipole on the resolution of the multiple signal classification (MUSIC) imaging method applied to the electromagnetic inverse scattering problem of determining the locations of a collection of small objects embedded in a known background medium. Based on the analysis of the induced electric dipoles in eigenstates, an algorithm is proposed to determine the test dipole that generates a pseudo-spectrum with enhanced resolution. The amplitudes in three directions of the optimal test dipole are not necessarily in phase, i.e., the optimal test dipole may not correspond to a physical direction in the real three-dimensional space. In addition, the proposed test-dipole-searching algorithm is able to deal with some special scenarios, due to the shapes and materials of objects, to which the standard MUSIC doesn't apply.
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. Good visual quality and precise localization of the tracked target are highly desirable in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence; this is done by cropping several frames or all frames. The second step is tracking on the super-resolved images. Super-resolution is a technique for obtaining high-resolution images from low-resolution images. In this research a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on the HSV color histogram, which copes with conditions in which the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and in good lighting conditions.
A direct method for unfolding the resolution function from measurements of neutron induced reactions
NASA Astrophysics Data System (ADS)
Žugec, P.; Colonna, N.; Sabate-Gilarte, M.; Vlachoudis, V.; Massimi, C.; Lerendegui-Marco, J.; Stamatopoulos, A.; Bacak, M.; Warren, S. G.; n TOF Collaboration
2017-12-01
The paper explores the numerical stability and the computational efficiency of a direct method for unfolding the resolution function from the measurements of neutron induced reactions. A detailed resolution function formalism is laid out, followed by an overview of challenges present in a practical implementation of the method. A special matrix storage scheme is developed in order both to facilitate the memory management of the resolution function matrix and to increase the computational efficiency of the matrix multiplication and decomposition procedures. Due to its admirable computational properties, a Cholesky decomposition is at the heart of the unfolding procedure. With the smallest but necessary modification of the matrix to be decomposed, the method is successfully applied to systems of size 10⁵ × 10⁵. However, the amplification of the uncertainties during the direct inversion procedures limits the applicability of the method to high-precision measurements of neutron induced reactions.
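The core of the unfolding step can be illustrated with a short sketch. The matrix size, the regularisation constant, and the use of normal equations below are assumptions for demonstration only; the paper's own storage scheme and matrix modification are not reproduced.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def unfold(R, y, eps=1e-8):
    """Recover the true spectrum x from y = R x via Cholesky on the normal equations."""
    A = R.T @ R
    A[np.diag_indices_from(A)] += eps           # small modification keeping A positive definite
    c, low = cho_factor(A)
    return cho_solve((c, low), R.T @ y)

# Hypothetical example: a Gaussian smearing matrix applied to a two-peak spectrum.
n = 200
x = np.arange(n)
R = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)
R /= R.sum(axis=1, keepdims=True)
truth = np.exp(-0.5 * ((x - 60) / 2.0) ** 2) + 0.5 * np.exp(-0.5 * ((x - 130) / 2.0) ** 2)
y = R @ truth
print(np.max(np.abs(unfold(R, y) - truth)))     # small residual for noise-free data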
NASA Technical Reports Server (NTRS)
Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)
2001-01-01
We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
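A minimal sketch of the solver idea follows, using a stand-in stiffness matrix and load vector rather than a tissue model, and SciPy on a CPU rather than the FPGA implementation discussed above.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

n = 1000
K = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")  # stand-in stiffness matrix
f = np.ones(n)                                                                    # stand-in load vector

M_inv = sp.diags(1.0 / K.diagonal())                  # Jacobi preconditioner: inverse of the diagonal
precond = LinearOperator(K.shape, matvec=M_inv.dot)

u, info = cg(K, f, M=precond, maxiter=5000)           # preconditioned conjugate gradient
print(info, np.linalg.norm(K @ u - f))                # info == 0 indicates convergence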
Umehara, Kensuke; Ota, Junko; Ishida, Takayuki
2017-10-18
In this study, the super-resolution convolutional neural network (SRCNN) scheme, which is the emerging deep-learning-based super-resolution method for enhancing image resolution in chest CT images, was applied and evaluated using the post-processing approach. For evaluation, 89 chest CT cases were sampled from The Cancer Imaging Archive. The 89 CT cases were divided randomly into 45 training cases and 44 external test cases. The SRCNN was trained using the training dataset. With the trained SRCNN, a high-resolution image was reconstructed from a low-resolution image, which was down-sampled from an original test image. For quantitative evaluation, two image quality metrics were measured and compared to those of the conventional linear interpolation methods. The image restoration quality of the SRCNN scheme was significantly higher than that of the linear interpolation methods (p < 0.001 or p < 0.05). The high-resolution image reconstructed by the SRCNN scheme was highly restored and comparable to the original reference image, in particular, for a ×2 magnification. These results indicate that the SRCNN scheme significantly outperforms the linear interpolation methods for enhancing image resolution in chest CT images. The results also suggest that SRCNN may become a potential solution for generating high-resolution CT images from standard CT images.
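The SRCNN architecture itself is compact; the sketch below (PyTorch, assuming the commonly used 9-1-5 layer configuration, not the authors' trained weights) shows the three-layer structure applied to a bicubically up-sampled low-resolution slice.

import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction and representation
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        # x is an up-sampled low-resolution image; the network predicts the high-resolution image
        return self.body(x)

model = SRCNN()
lr_upsampled = torch.randn(1, 1, 128, 128)   # placeholder CT slice
print(model(lr_upsampled).shape)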
Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing
2015-01-05
This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion.
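One way to read the constraint described above is as a regional energy-matching step; the sketch below is an illustrative interpretation only (block size, temperature field, and scaling rule are assumptions, not the authors' algorithm).

import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def fuse(vis, ir_temp_kelvin, block=16):
    """Scale the VIS image block by block to the regional radiance sigma*T^4 of the IR image."""
    fused = vis.astype(float).copy()
    for i in range(0, vis.shape[0], block):
        for j in range(0, vis.shape[1], block):
            v = fused[i:i + block, j:j + block]
            target = SIGMA * np.mean(ir_temp_kelvin[i:i + block, j:j + block] ** 4)
            v *= target / max(v.mean(), 1e-12)   # impose the regional energy level of the IR image
    return fused

vis = np.random.rand(128, 128)                   # placeholder high-resolution VIS image
ir = 290.0 + 10.0 * np.random.rand(128, 128)     # placeholder IR brightness temperature (K)
print(fuse(vis, ir).shape)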
Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing
2015-01-01
This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion. PMID:25569749
Xiang, Suyun; Wang, Wei; Xia, Jia; Xiang, Bingren; Ouyang, Pingkai
2009-09-01
The stochastic resonance algorithm is applied to the trace analysis of alkyl halides and alkyl benzenes in water samples. Compared to the case of a single signal, the optimization of system parameters for a multicomponent sample is more complex. In this article, the resolution of adjacent chromatographic peaks is incorporated into the optimization of the parameters for the first time. With the optimized parameters, the algorithm gave an ideal output with good resolution as well as an enhanced signal-to-noise ratio. Using the enhanced signals, the method extended the limit of detection and exhibited good linearity, which ensures accurate determination of the multiple components.
High-resolution two dimensional advective transport
Smith, P.E.; Larock, B.E.
1989-01-01
The paper describes a two-dimensional high-resolution scheme for advective transport that is based on an Eulerian-Lagrangian method with a flux limiter. The scheme is applied to the problem of pure advection of a rotated Gaussian hill and is shown to preserve the monotonicity property of the governing conservation law.
Image processing enhancement of high-resolution TEM micrographs of nanometer-size metal particles
NASA Technical Reports Server (NTRS)
Artal, P.; Avalos-Borja, M.; Soria, F.; Poppa, H.; Heinemann, K.
1989-01-01
The high-resolution TEM detectability of lattice fringes from metal particles supported on substrates is impeded by the substrate itself. Singular value decomposition (SVD) and Fourier filtering (FFT) methods were applied to standard high resolution micrographs to enhance lattice resolution from particles as well as from crystalline substrates. SVD produced good results for one direction of fringes, and it can be implemented as a real-time process. Fourier methods are independent of azimuthal directions and allow separation of particle lattice planes from those pertaining to the substrate, which makes it feasible to detect possible substrate distortions produced by the supported particle. This method, on the other hand, is more elaborate, requires more computer time than SVD and is, therefore, less likely to be used in real-time image processing applications.
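Both enhancement routes can be sketched in a few lines; the rank, the annulus radii, and the test image below are placeholders, not the values used in the study.

import numpy as np

def svd_filter(img, rank=5):
    """Truncated SVD reconstruction, which favours fringes along one direction."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def fourier_bandpass(img, r_min, r_max):
    """Keep only an annulus of spatial frequencies around the expected lattice spacing."""
    F = np.fft.fftshift(np.fft.fft2(img))
    ny, nx = img.shape
    y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    r = np.hypot(x, y)
    F[(r < r_min) | (r > r_max)] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

img = np.random.rand(256, 256)                   # placeholder micrograph
print(svd_filter(img).shape, fourier_bandpass(img, 20, 60).shape)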
NASA Astrophysics Data System (ADS)
Hasegawa, Hideyuki
2017-07-01
The range spatial resolution is an important factor determining the image quality in ultrasonic imaging. The range spatial resolution in ultrasonic imaging depends on the ultrasonic pulse length, which is determined by the mechanical response of the piezoelectric element in an ultrasonic probe. To improve the range spatial resolution without replacing the transducer element, in the present study, methods based on maximum likelihood (ML) estimation and multiple signal classification (MUSIC) were proposed. The proposed methods were applied to echo signals received by individual transducer elements in an ultrasonic probe. The basic experimental results showed that the axial full width at half maximum of the echo from a string phantom was improved from 0.21 mm (conventional method) to 0.086 mm (ML) and 0.094 mm (MUSIC).
An Airborne Conflict Resolution Approach Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Mondoloni, Stephane; Conway, Sheila
2001-01-01
An airborne conflict resolution approach is presented that is capable of providing flight plans forecast to be conflict-free with both area and traffic hazards. This approach is capable of meeting constraints on the flight plan such as required times of arrival (RTA) at a fix. The conflict resolution algorithm is based upon a genetic algorithm, and can thus seek conflict-free flight plans meeting broader flight planning objectives such as minimum time, fuel or total cost. The method has been applied to conflicts occurring 6 to 25 minutes in the future in climb, cruise and descent phases of flight. The conflict resolution approach separates the detection, trajectory generation and flight rules function from the resolution algorithm. The method is capable of supporting pilot-constructed resolutions, cooperative and non-cooperative maneuvers, and also providing conflict resolution on trajectories forecast by an onboard FMC.
A super-resolution ultrasound method for brain vascular mapping
O'Reilly, Meaghan A.; Hynynen, Kullervo
2013-01-01
Purpose: High-resolution vascular imaging has not been achieved in the brain due to limitations of current clinical imaging modalities. The authors present a method for transcranial ultrasound imaging of single micrometer-size bubbles within a tube phantom. Methods: Emissions from single bubbles within a tube phantom were mapped through an ex vivo human skull using a sparse hemispherical receiver array and a passive beamforming algorithm. Noninvasive phase and amplitude correction techniques were applied to compensate for the aberrating effects of the skull bone. The positions of the individual bubbles were estimated beyond the diffraction limit of ultrasound to produce a super-resolution image of the tube phantom, which was compared with microcomputed tomography (micro-CT). Results: The resulting super-resolution ultrasound image is comparable to results obtained via the micro-CT for small tissue specimen imaging. Conclusions: This method provides superior resolution to deep-tissue contrast ultrasound and has the potential to be extended to provide complete vascular network imaging in the brain. PMID:24320408
Synergetic Use of Sentinel-1 and Sentinel-2 Data for Soil Moisture Mapping at 100 m Resolution.
Gao, Qi; Zribi, Mehrez; Escorihuela, Maria Jose; Baghdadi, Nicolas
2017-08-26
The recent deployment of ESA's Sentinel operational satellites has established a new paradigm for remote sensing applications. In this context, Sentinel-1 radar images have made it possible to retrieve surface soil moisture with a high spatial and temporal resolution. This paper presents two methodologies for the retrieval of soil moisture from remotely-sensed SAR images, with a spatial resolution of 100 m. These algorithms are based on the interpretation of Sentinel-1 data recorded in the VV polarization, which is combined with Sentinel-2 optical data for the analysis of vegetation effects over a site in Urgell (Catalunya, Spain). The first algorithm has already been applied to observations in West Africa by Zribi et al., 2008, using low spatial resolution ERS scatterometer data, and is based on a change detection approach. In the present study, this approach is applied to Sentinel-1 data and optimizes the inversion process by taking advantage of the high repeat frequency of the Sentinel observations. The second algorithm relies on a new method, based on the difference between backscattered Sentinel-1 radar signals observed on two consecutive days, expressed as a function of the NDVI optical index. Both methods are applied to almost 1.5 years of satellite data (July 2015-November 2016), and are validated using field data acquired at a study site. This leads to an RMS error in volumetric moisture of approximately 0.087 m³/m³ and 0.059 m³/m³ for the first and second methods, respectively. No site calibrations are needed with these techniques, and they can be applied to any vegetation-covered area for which time series of SAR data have been recorded.
Synergetic Use of Sentinel-1 and Sentinel-2 Data for Soil Moisture Mapping at 100 m Resolution
Gao, Qi; Zribi, Mehrez
2017-01-01
The recent deployment of ESA’s Sentinel operational satellites has established a new paradigm for remote sensing applications. In this context, Sentinel-1 radar images have made it possible to retrieve surface soil moisture with a high spatial and temporal resolution. This paper presents two methodologies for the retrieval of soil moisture from remotely-sensed SAR images, with a spatial resolution of 100 m. These algorithms are based on the interpretation of Sentinel-1 data recorded in the VV polarization, which is combined with Sentinel-2 optical data for the analysis of vegetation effects over a site in Urgell (Catalunya, Spain). The first algorithm has already been applied to observations in West Africa by Zribi et al., 2008, using low spatial resolution ERS scatterometer data, and is based on a change detection approach. In the present study, this approach is applied to Sentinel-1 data and optimizes the inversion process by taking advantage of the high repeat frequency of the Sentinel observations. The second algorithm relies on a new method, based on the difference between backscattered Sentinel-1 radar signals observed on two consecutive days, expressed as a function of the NDVI optical index. Both methods are applied to almost 1.5 years of satellite data (July 2015–November 2016), and are validated using field data acquired at a study site. This leads to an RMS error in volumetric moisture of approximately 0.087 m3/m3 and 0.059 m3/m3 for the first and second methods, respectively. No site calibrations are needed with these techniques, and they can be applied to any vegetation-covered area for which time series of SAR data have been recorded. PMID:28846601
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
NASA Astrophysics Data System (ADS)
Liebel, L.; Körner, M.
2016-06-01
In optical remote sensing, spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations—e.g., segmentation or feature extraction—can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques, such as convolutional neural networks (CNN), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable using conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset, in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10m, and a high radiometric resolution and thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior compared to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as conventional interpolation methods.
XPS Study of Oxide/GaAs and SiO2/Si Interfaces
NASA Technical Reports Server (NTRS)
Grunthaner, F. J.; Grunthaner, P. J.; Vasquez, R. P.; Lewis, B. F.; Maserjian, J.; Madhukar, A.
1982-01-01
Concepts developed in study of SiO2/Si interface applied to analysis of native oxide/GaAs interface. High-resolution X-ray photoelectron spectroscopy (XPS) has been combined with precise chemical-profiling technique and resolution-enhancement methods to study stoichiometry of transitional layer. Results are presented in report now available.
Podshivalov, L; Fischer, A; Bar-Yoseph, P Z
2011-04-01
This paper describes a new alternative for individualized mechanical analysis of bone trabecular structure. This new method closes the gap between the classic homogenization approach that is applied to macro-scale models and the modern micro-finite element method that is applied directly to micro-scale high-resolution models. The method is based on multiresolution geometrical modeling that generates intermediate structural levels. A new method for estimating multiscale material properties has also been developed to facilitate reliable and efficient mechanical analysis. What makes this method unique is that it enables direct and interactive analysis of the model at every intermediate level. Such flexibility is of principal importance in the analysis of trabecular porous structure. The method enables physicians to zoom-in dynamically and focus on the volume of interest (VOI), thus paving the way for a large class of investigations into the mechanical behavior of bone structure. This is one of the very few methods in the field of computational bio-mechanics that applies mechanical analysis adaptively on large-scale high resolution models. The proposed computational multiscale FE method can serve as an infrastructure for a future comprehensive computerized system for diagnosis of bone structures. The aim of such a system is to assist physicians in diagnosis, prognosis, drug treatment simulation and monitoring. Such a system can provide a better understanding of the disease, and hence benefit patients by providing better and more individualized treatment and high quality healthcare. In this paper, we demonstrate the feasibility of our method on a high-resolution model of vertebra L3. Copyright © 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Costa-Surós, M.; Calbó, J.; González, J. A.; Long, C. N.
2014-08-01
The cloud vertical distribution and especially the cloud base height, which is linked to cloud type, are important characteristics in order to describe the impact of clouds on climate. In this work, several methods for estimating the cloud vertical structure (CVS) based on atmospheric sounding profiles are compared, considering the number and position of cloud layers, with a ground-based system that is taken as a reference: the Active Remote Sensing of Clouds (ARSCL). All methods establish some conditions on the relative humidity, and differ in the use of other variables, the thresholds applied, or the vertical resolution of the profile. In this study, these methods are applied to 193 radiosonde profiles acquired at the Atmospheric Radiation Measurement (ARM) Southern Great Plains site during all seasons of the year 2009 and endorsed by Geostationary Operational Environmental Satellite (GOES) images, to confirm that the cloudiness conditions are homogeneous enough across their trajectory. The perfect agreement (i.e., when the whole CVS is estimated correctly) for the methods ranges between 26 and 64%; the methods show additional approximate agreement (i.e., when at least one cloud layer is assessed correctly) from 15 to 41%. Further tests and improvements are applied to one of these methods. In addition, we attempt to make this method suitable for low-resolution vertical profiles, like those from the outputs of reanalysis methods or from the World Meteorological Organization's (WMO) Global Telecommunication System. The perfect agreement, even when using low-resolution profiles, can be improved by up to 67% (plus 25% of the approximate agreement) if the thresholds for a moist layer to become a cloud layer are modified to minimize false negatives with the current data set, thus improving overall agreement.
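The common ingredient of the compared methods, a relative-humidity threshold applied level by level, can be sketched as follows (the threshold and minimum thickness are placeholders, not the tuned values of any particular method).

import numpy as np

def cloud_layers(height_m, rh_percent, rh_threshold=95.0, min_thickness_m=100.0):
    """Merge contiguous levels whose relative humidity exceeds the threshold into cloud layers."""
    wet = rh_percent >= rh_threshold
    layers, start = [], None
    for k, flag in enumerate(wet):
        if flag and start is None:
            start = k
        if (not flag or k == len(wet) - 1) and start is not None:
            top = k if flag else k - 1
            if height_m[top] - height_m[start] >= min_thickness_m:
                layers.append((height_m[start], height_m[top]))   # (base, top) of one cloud layer
            start = None
    return layers

z = np.arange(0, 12000, 50.0)                        # placeholder sounding levels (m)
rh = 60 + 40 * np.exp(-0.5 * ((z - 3000) / 400) ** 2)  # placeholder humidity profile (%)
print(cloud_layers(z, rh))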
NASA Astrophysics Data System (ADS)
Costa-Surós, M.; Calbó, J.; González, J. A.; Long, C. N.
2014-04-01
The cloud vertical distribution and especially the cloud base height, which is linked to cloud type, are important characteristics in order to describe the impact of clouds on climate. In this work several methods to estimate the cloud vertical structure (CVS) based on atmospheric sounding profiles are compared, considering the number and position of cloud layers, with a ground-based system that is taken as a reference: the Active Remote Sensing of Clouds (ARSCL). All methods establish some conditions on the relative humidity, and differ in the use of other variables, the thresholds applied, or the vertical resolution of the profile. In this study these methods are applied to 193 radiosonde profiles acquired at the ARM Southern Great Plains site during all seasons of the year 2009 and endorsed by GOES images, to confirm that the cloudiness conditions are homogeneous enough across their trajectory. The perfect agreement (i.e., when the whole CVS is correctly estimated) for the methods ranges between 26 and 64%; the methods show additional approximate agreement (i.e., when at least one cloud layer is correctly assessed) from 15 to 41%. Further tests and improvements are applied to one of these methods. In addition, we attempt to make this method suitable for low-resolution vertical profiles, like those from the outputs of reanalysis methods or from the WMO's Global Telecommunication System. The perfect agreement, even when using low-resolution profiles, can be improved by up to 67% (plus 25% of approximate agreement) if the thresholds for a moist layer to become a cloud layer are modified to minimize false negatives with the current data set, thus improving overall agreement.
In-line three-dimensional holography of nanocrystalline objects at atomic resolution
Chen, F.-R.; Van Dyck, D.; Kisielowski, C.
2016-01-01
Resolution and sensitivity of the latest generation aberration-corrected transmission electron microscopes allow the vast majority of single atoms to be imaged with sub-Ångstrom resolution and their locations determined in an image plane with a precision that exceeds the 1.9-pm wavelength of 300 kV electrons. Such unprecedented performance allows expansion of electron microscopic investigations with atomic resolution into the third dimension. Here we report a general tomographic method to recover the three-dimensional shape of a crystalline particle from high-resolution images of a single projection without the need for sample rotation. The method is compatible with low dose rate electron microscopy, which improves on signal quality, while minimizing electron beam-induced structure modifications even for small particles or surfaces. We apply it to germanium, gold and magnesium oxide particles, and achieve a depth resolution of 1–2 Å, which is smaller than inter-atomic distances. PMID:26887849
Nagy, Szilvia; Pipek, János
2015-12-21
In wavelet-based electronic structure calculations, introducing a new, finer resolution level is usually an expensive task, which is why a two-level approximation with a very fine starting resolution level is often used. This process results in large matrices to work with and a large number of coefficients to be stored. In our previous work we developed an adaptively refined solution scheme that determines the indices where the refined basis functions are to be included, and later a method for predicting the next, finer resolution coefficients in a very economical way. In the present contribution, we determine whether the method can be applied for predicting not only the first, but also the other, higher resolution level coefficients. The energy expectation values of the predicted wave functions are also studied, as well as the scaling behaviour of the coefficients in the fine resolution limit.
Significant Scales in Community Structure
NASA Astrophysics Data System (ADS)
Traag, V. A.; Krings, G.; van Dooren, P.
2013-10-01
Many complex networks show signs of modular structure, uncovered by community detection. Although many methods succeed in revealing various partitions, it remains difficult to detect at what scale some partition is significant. This problem shows foremost in multi-resolution methods. We here introduce an efficient method for scanning for resolutions in one such method. Additionally, we introduce the notion of "significance" of a partition, based on subgraph probabilities. Significance is independent of the exact method used, so could also be applied in other methods, and can be interpreted as the gain in encoding a graph by making use of a partition. Using significance, we can determine "good" resolution parameters, which we demonstrate on benchmark networks. Moreover, optimizing significance itself also shows excellent performance. We demonstrate our method on voting data from the European Parliament. Our analysis suggests the European Parliament has become increasingly ideologically divided and that nationality plays no role.
2010-10-01
Studenski et al., "Acquisition and Processing Methods for a Bedside Cardiac SPECT Imaging System," IEEE Transactions on Nuclear Science, vol. 57, no. 1, February 2010.
A comparative verification of high resolution precipitation forecasts using model output statistics
NASA Astrophysics Data System (ADS)
van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees
2017-04-01
Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution meso-scale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery with similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues that point verification gives rise to. In this study a verification strategy based on model output statistics is applied that aims to address both double penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, like a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution) and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the 3 post-processed models, but larger differences for individual lead times. Besides, the Fractions Skill Score is computed using the 3 deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite the realism of Harmonie precipitation forecasts, they only perform similarly or somewhat better than precipitation forecasts from the 2 lower resolution models, at least in the Netherlands.
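As a rough illustration of the post-processing idea, the sketch below fits an ordinary logistic regression on synthetic area-mean and area-max precipitation predictors and scores it with the Brier score; the extended logistic regression used in the study additionally includes the threshold itself as a predictor, which is not reproduced here.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(1)
n = 2000
area_mean = rng.gamma(2.0, 1.0, n)            # placeholder: mean forecast precipitation in a box around a station
area_max = area_mean * (1 + rng.random(n))    # placeholder: max forecast precipitation in that box
X = np.column_stack([np.sqrt(area_mean), np.sqrt(area_max)])
obs_exceeds = (area_mean + rng.normal(0, 1, n) > 2.5).astype(int)   # synthetic "observed exceedance" event

model = LogisticRegression().fit(X, obs_exceeds)
p = model.predict_proba(X)[:, 1]              # probability forecast of exceeding the threshold
print("Brier score:", brier_score_loss(obs_exceeds, p))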
Principle and Reconstruction Algorithm for Atomic-Resolution Holography
NASA Astrophysics Data System (ADS)
Matsushita, Tomohiro; Muro, Takayuki; Matsui, Fumihiko; Happo, Naohisa; Hosokawa, Shinya; Ohoyama, Kenji; Sato-Tomita, Ayana; Sasaki, Yuji C.; Hayashi, Kouichi
2018-06-01
Atomic-resolution holography makes it possible to obtain the three-dimensional (3D) structure around a target atomic site. Translational symmetry of the atomic arrangement of the sample is not necessary, and the 3D atomic image can be measured when the local structure of the target atomic site is oriented. Therefore, 3D local atomic structures such as dopants and adsorbates are observable. Here, atomic-resolution holography comprising photoelectron holography, X-ray fluorescence holography, neutron holography, and their inverse modes is treated. Although the measurement methods are different, they can be handled with a unified theory. The algorithm for reconstructing 3D atomic images from holograms plays an important role. Although Fourier transform-based methods have been proposed, they require multiple-energy holograms. In addition, they cannot be directly applied to photoelectron holography because of the phase shift problem. We have developed fitting-based methods for reconstruction from single-energy and photoelectron holograms. The developed methods are applicable to all types of atomic-resolution holography.
Restoration of multichannel microwave radiometric images
NASA Technical Reports Server (NTRS)
Chin, R. T.; Yeh, C. L.; Olson, W. S.
1983-01-01
A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Some of its properties and limitations are also presented. The selection of appropriate constraints was emphasized in a practical application. Multichannel microwave images, each having different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.
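The alternating-projection structure of the Gerchberg-Papoulis iteration can be sketched as follows (the passband mask and the non-negativity constraint are illustrative choices, not the constraints selected in the study).

import numpy as np

def gp_restore(measured, passband_mask, n_iter=50):
    """Alternately enforce the measured in-band spectrum and a spatial-domain constraint."""
    F_meas = np.fft.fft2(measured)
    img = measured.copy()
    for _ in range(n_iter):
        F = np.fft.fft2(img)
        F[passband_mask] = F_meas[passband_mask]        # projection 1: keep the measured spectrum in-band
        img = np.real(np.fft.ifft2(F))
        img[img < 0] = 0                                # projection 2: non-negativity in the spatial domain
    return img

img_lowres = np.random.rand(64, 64)                     # placeholder diffraction-limited image
ky = np.fft.fftfreq(64)[:, None]
kx = np.fft.fftfreq(64)[None, :]
mask = np.hypot(kx, ky) < 0.15                          # assumed instrument passband
print(gp_restore(img_lowres, mask).shape)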
Beam deviation method as a diagnostic tool for the plasma focus.
Schmidt, H; Rückle, B
1978-04-15
The application of an optical method for density measurements in cylindrical plasmas is described. The angular deviation of a probing light beam sent through a plasma is proportional to the maximum of the density in the plasma column. The deviation does not depend on the plasma dimensions; however, it is influenced to a certain degree by the density profile. The method is successfully applied to the investigation of a dense plasma focus with a time resolution of 2 nsec and a spatial resolution (in the axial direction) of 2 mm.
Salas, Desirée; Le Gall, Antoine; Fiche, Jean-Bernard; Valeri, Alessandro; Ke, Yonggang; Bron, Patrick; Bellot, Gaetan
2017-01-01
Superresolution light microscopy allows the imaging of labeled supramolecular assemblies at a resolution surpassing the classical diffraction limit. A serious limitation of the superresolution approach is sample heterogeneity and the stochastic character of the labeling procedure. To increase the reproducibility and the resolution of the superresolution results, we apply multivariate statistical analysis methods and 3D reconstruction approaches originally developed for cryogenic electron microscopy of single particles. These methods allow for the reference-free 3D reconstruction of nanomolecular structures from two-dimensional superresolution projection images. Since these 2D projection images all show the structure in high-resolution directions of the optical microscope, the resulting 3D reconstructions have the best possible isotropic resolution in all directions. PMID:28811371
NASA Astrophysics Data System (ADS)
Kjærgaard, Thomas
2017-01-01
The divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation (DEC-RI-MP2) theory method introduced in Baudin et al. [J. Chem. Phys. 144, 054102 (2016)] is significantly improved by introducing the Laplace transform of the orbital energy denominator in order to construct the double amplitudes directly in the local basis. Furthermore, this paper introduces the auxiliary reduction procedure, which reduces the set of the auxiliary functions employed in the individual fragments. The resulting Laplace transformed divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation method is applied to the insulin molecule where we obtain a factor 9.5 speedup compared to the DEC-RI-MP2 method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie
Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wave number domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wave number domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data input to the method indicate significant improvements are provided in both image quality and resolution.
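A single extrapolation interval of such a split-step propagator can be sketched in one dimension as follows; the frequencies, speeds, and step size are placeholders, and the sign conventions are one common choice rather than the patented implementation.

import numpy as np

def split_step(wavefield, freq_hz, dz, c_ref, c_local, dx):
    """One extrapolation interval: reference-medium phase shift, then a phase screen."""
    nx = wavefield.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    kz = np.sqrt(np.maximum((2 * np.pi * freq_hz / c_ref) ** 2 - kx ** 2, 0.0))  # evanescent parts get zero phase here
    # step 1: propagation in the reference medium (frequency-wavenumber domain)
    field = np.fft.ifft(np.fft.fft(wavefield) * np.exp(1j * kz * dz))
    # step 2: phase screen for lateral speed variations (frequency-space domain)
    screen = np.exp(1j * 2 * np.pi * freq_hz * dz * (1.0 / c_local - 1.0 / c_ref))
    return field * screen

u0 = np.exp(-((np.arange(256) - 128) ** 2) / 50.0).astype(complex)   # placeholder wavefield at one frequency
c_local = np.full(256, 1540.0)
c_local[100:150] = 1480.0                                            # tissue-like heterogeneity (m/s)
print(split_step(u0, 3e6, 1e-3, 1500.0, c_local, 0.2e-3).shape)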
Data compression techniques applied to high resolution high frame rate video technology
NASA Technical Reports Server (NTRS)
Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.
1989-01-01
An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for implementation of video data compression in a high-speed imaging system. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.
Super Resolution and Interference Suppression Technique applied to SHARAD Radar Data
NASA Astrophysics Data System (ADS)
Raguso, M. C.; Mastrogiuseppe, M.; Seu, R.; Piazzo, L.
2017-12-01
We will present a super resolution and interference suppression technique applied to the data acquired by the SHAllow RADar (SHARAD) on board NASA's 2005 Mars Reconnaissance Orbiter (MRO) mission, currently operating around Mars [1]. The algorithms improve the range resolution roughly by a factor of 3 and the signal-to-noise ratio (SNR) by several decibels. Range compression algorithms usually adopt conventional Fourier transform techniques, which are limited in resolution by the transmitted signal bandwidth, analogous to the Rayleigh criterion in optics. In this work, we investigate a super resolution method based on autoregressive models and linear prediction techniques [2]. Starting from the estimation of the linear prediction coefficients from the spectral data, the algorithm performs radar bandwidth extrapolation (BWE), thereby improving the range resolution of the pulse-compressed coherent radar data. Moreover, electromagnetic interferences (EMIs) are detected and the spectrum is interpolated in order to reconstruct an interference-free spectrum, thereby improving the SNR. The algorithm can be applied to the single complex look image after synthetic aperture (SAR) processing. We apply the proposed algorithm to simulated as well as to real radar data. We will demonstrate the effective enhancement of vertical resolution with respect to the classical spectral estimator. We will show that the imaging of the subsurface layered structures observed in radargrams is improved, allowing additional insights for the scientific community in the interpretation of the SHARAD radar data, which will help to further our understanding of the formation and evolution of known geological features on Mars. References: [1] Seu et al., Science, 2007, 317, 1715-1718. [2] K.M. Cuomo, "A Bandwidth Extrapolation Technique for Improved Range Resolution of Coherent Radar Data", Project Report CJP-60, Revision 1, MIT Lincoln Laboratory (4 Dec. 1992).
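The bandwidth-extrapolation step can be sketched with a simple least-squares linear predictor; the model order, band length, and number of extrapolated samples below are illustrative, not the SHARAD processing parameters.

import numpy as np

def bwe(spectrum, order=8, n_extra=64):
    """Fit forward-prediction coefficients to the in-band spectrum, then extrapolate past the band edge."""
    N = len(spectrum)
    rows = [spectrum[n - order:n][::-1] for n in range(order, N)]   # s[n] ~ sum_k a[k] * s[n-1-k]
    a, *_ = np.linalg.lstsq(np.array(rows), spectrum[order:N], rcond=None)
    out = list(spectrum)
    for _ in range(n_extra):
        out.append(np.dot(a, np.array(out[-order:][::-1])))          # forward extrapolation
    return np.array(out)

# Hypothetical in-band complex spectrum of two point targets
band = np.exp(2j * np.pi * 0.07 * np.arange(128)) + 0.5 * np.exp(2j * np.pi * 0.11 * np.arange(128))
extended = bwe(band)
print(len(band), len(extended))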
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Andrew W; Leung, Lai R; Sridhar, V
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
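The bias-correction step that distinguishes BCSD can be sketched as an empirical quantile mapping; the gamma-distributed climatologies below are synthetic stand-ins for the model and observed fields.

import numpy as np

def quantile_map(simulated, sim_train, obs_train):
    """Replace each simulated value with the observed value of equal non-exceedance probability."""
    probs = np.searchsorted(np.sort(sim_train), simulated) / float(len(sim_train))
    probs = np.clip(probs, 0.01, 0.99)            # avoid the extreme tails of the empirical CDF
    return np.quantile(obs_train, probs)          # map onto the observed climatology

rng = np.random.default_rng(7)
obs_train = rng.gamma(3.0, 20.0, 240)             # placeholder observed monthly precipitation (mm)
sim_train = rng.gamma(3.0, 15.0, 240)             # placeholder biased model climatology
future_sim = rng.gamma(3.5, 15.0, 240)            # placeholder future scenario output
print(quantile_map(future_sim, sim_train, obs_train).mean())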
Application of wavefield compressive sensing in surface wave tomography
NASA Astrophysics Data System (ADS)
Zhan, Zhongwen; Li, Qingyang; Huang, Jianping
2018-06-01
Dense arrays allow sampling of seismic wavefield without significant aliasing, and surface wave tomography has benefitted from exploiting wavefield coherence among neighbouring stations. However, explicit or implicit assumptions about wavefield, irregular station spacing and noise still limit the applicability and resolution of current surface wave methods. Here, we propose to apply the theory of compressive sensing (CS) to seek a sparse representation of the surface wavefield using a plane-wave basis. Then we reconstruct the continuous surface wavefield on a dense regular grid before applying any tomographic methods. Synthetic tests demonstrate that wavefield CS improves robustness and resolution of Helmholtz tomography and wavefield gradiometry, especially when traditional approaches have difficulties due to sub-Nyquist sampling or complexities in wavefield.
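A one-dimensional, single-frequency sketch of the reconstruction idea follows; the station geometry, wavenumber grid, and use of a Lasso solver (rather than any specific CS solver from the study) are assumptions for illustration.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
x_sta = np.sort(rng.uniform(0.0, 50.0, 40))               # irregular station positions (km)
k_true = 2 * np.pi / 25.0                                  # wavenumber of the assumed surface wave (rad/km)
field = np.cos(k_true * x_sta) + 0.05 * rng.standard_normal(40)

k_grid = np.linspace(0.01, 1.0, 400)                       # candidate wavenumbers (plane-wave dictionary)
Phi = np.hstack([np.cos(np.outer(x_sta, k_grid)), np.sin(np.outer(x_sta, k_grid))])

model = Lasso(alpha=0.05, max_iter=100000).fit(Phi, field)   # sparse representation of the sampled wavefield
x_dense = np.linspace(0.0, 50.0, 501)                      # dense regular grid for reconstruction
Phi_dense = np.hstack([np.cos(np.outer(x_dense, k_grid)), np.sin(np.outer(x_dense, k_grid))])
print(model.predict(Phi_dense).shape)                      # reconstructed wavefield on the regular grid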
Yamamoto, Kyosuke; Togami, Takashi; Yamaguchi, Norio
2017-11-06
Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture, in cooperation with image processing technologies, for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis.
Togami, Takashi; Yamaguchi, Norio
2017-01-01
Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture—in cooperation with image processing technologies—for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis. PMID:29113104
Downscaling Thermal Infrared Radiance for Subpixel Land Surface Temperature Retrieval
Liu, Desheng; Pu, Ruiliang
2008-01-01
Land surface temperature (LST) retrieved from satellite thermal sensors often consists of mixed temperature components. Retrieving subpixel LST is therefore needed in various environmental and ecological studies. In this paper, we developed two methods for downscaling coarse resolution thermal infrared (TIR) radiance for the purpose of subpixel temperature retrieval. The first method was developed on the basis of a scale-invariant physical model on TIR radiance. The second method was based on a statistical relationship between TIR radiance and land cover fraction at high spatial resolution. The two methods were applied to downscale simulated 990-m ASTER TIR data to 90-m resolution. When validated against the original 90-m ASTER TIR data, the results revealed that both downscaling methods were successful in capturing the general patterns of the original data and resolving considerable spatial details. Further quantitative assessments indicated a strong agreement between the true values and the estimated values by both methods. PMID:27879844
Downscaling Thermal Infrared Radiance for Subpixel Land Surface Temperature Retrieval.
Liu, Desheng; Pu, Ruiliang
2008-04-06
Land surface temperature (LST) retrieved from satellite thermal sensors often consists of mixed temperature components. Retrieving subpixel LST is therefore needed in various environmental and ecological studies. In this paper, we developed two methods for downscaling coarse resolution thermal infrared (TIR) radiance for the purpose of subpixel temperature retrieval. The first method was developed on the basis of a scale-invariant physical model on TIR radiance. The second method was based on a statistical relationship between TIR radiance and land cover fraction at high spatial resolution. The two methods were applied to downscale simulated 990-m ASTER TIR data to 90-m resolution. When validated against the original 90-m ASTER TIR data, the results revealed that both downscaling methods were successful in capturing the general patterns of the original data and resolving considerable spatial details. Further quantitative assessments indicated a strong agreement between the true values and the estimated values by both methods.
High-resolution extraction of particle size via Fourier Ptychography
NASA Astrophysics Data System (ADS)
Li, Shengfu; Zhao, Yu; Chen, Guanghua; Luo, Zhenxiong; Ye, Yan
2017-11-01
This paper proposes a method which can extract particle size information with a resolution beyond λ/NA. This is achieved by applying Fourier ptychographic (FP) ideas to the present problem. In a typical FP imaging platform, a 2D LED array is used as the light source for angle-varied illumination, and a series of low-resolution images is taken by a full sequential scan of the array of LEDs. Here, we demonstrate that the particle size information can be extracted by turning on each single LED along a circle. The simulated results show that the proposed method can reduce the total number of images without loss of reliability in the results.
Wu, Yicong; Chandris, Panagiotis; Winter, Peter W.; Kim, Edward Y.; Jaumouillé, Valentin; Kumar, Abhishek; Guo, Min; Leung, Jacqueline M.; Smith, Corey; Rey-Suarez, Ivan; Liu, Huafeng; Waterman, Clare M.; Ramamurthi, Kumaran S.; La Riviere, Patrick J.; Shroff, Hari
2016-01-01
Most fluorescence microscopes are inefficient, collecting only a small fraction of the emitted light at any instant. Besides wasting valuable signal, this inefficiency also reduces spatial resolution and causes imaging volumes to exhibit significant resolution anisotropy. We describe microscopic and computational techniques that address these problems by simultaneously capturing and subsequently fusing and deconvolving multiple specimen views. Unlike previous methods that serially capture multiple views, our approach improves spatial resolution without introducing any additional illumination dose or compromising temporal resolution relative to conventional imaging. When applying our methods to single-view wide-field or dual-view light-sheet microscopy, we achieve a twofold improvement in volumetric resolution (~235 nm × 235 nm × 340 nm) as demonstrated on a variety of samples including microtubules in Toxoplasma gondii, SpoVM in sporulating Bacillus subtilis, and multiple protein distributions and organelles in eukaryotic cells. In every case, spatial resolution is improved with no drawback by harnessing previously unused fluorescence. PMID:27761486
Low-count PET image restoration using sparse representation
NASA Astrophysics Data System (ADS)
Li, Tao; Jiang, Changhui; Gao, Juan; Yang, Yongfeng; Liang, Dong; Liu, Xin; Zheng, Hairong; Hu, Zhanli
2018-04-01
In the field of positron emission tomography (PET), reconstructed images are often blurry and contain noise. These problems are primarily caused by the low resolution of projection data. Solving this problem by improving hardware is an expensive solution, and therefore, we attempted to develop a solution based on optimizing several related algorithms in both the reconstruction and image post-processing domains. As sparse technology is widely used, sparse prediction is increasingly applied to solve this problem. In this paper, we propose a new sparse method to process low-resolution PET images. Two dictionaries (D1 for low-resolution PET images and D2 for high-resolution PET images) are learned from a group of real PET image data sets. Among these two dictionaries, D1 is used to obtain a sparse representation for each patch of the input PET image. Then, a high-resolution PET image is generated from this sparse representation using D2. Experimental results indicate that the proposed method exhibits a stable and superior ability to enhance image resolution and recover image details. Quantitatively, this method achieves better performance than traditional methods. This proposed strategy is a new and efficient approach for improving the quality of PET images.
NASA Astrophysics Data System (ADS)
Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza
2017-03-01
Linear-array-based photoacoustic computed tomography is a popular methodology for deep and high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as the minimum variance (MV) can improve the resolution at the focal point by eliminating the side-lobes. Moreover, invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm which uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over the existing reconstruction methods.
Markov-random-field-based super-resolution mapping for identification of urban trees in VHR images
NASA Astrophysics Data System (ADS)
Ardila, Juan P.; Tolpekin, Valentyn A.; Bijker, Wietske; Stein, Alfred
2011-11-01
Identification of tree crowns from remote sensing requires detailed spectral information and submeter spatial resolution imagery. Traditional pixel-based classification techniques do not fully exploit the spatial and spectral characteristics of remote sensing datasets. We propose a contextual and probabilistic method for detection of tree crowns in urban areas using a Markov random field based super resolution mapping (SRM) approach in very high resolution images. Our method defines an objective energy function in terms of the conditional probabilities of panchromatic and multispectral images and it locally optimizes the labeling of tree crown pixels. Energy and model parameter values are estimated from multiple implementations of SRM in tuning areas and the method is applied in QuickBird images to produce a 0.6 m tree crown map in a city of The Netherlands. The SRM output shows an identification rate of 66% and commission and omission errors in small trees and shrub areas. The method outperforms tree crown identification results obtained with maximum likelihood, support vector machines and SRM at nominal resolution (2.4 m) approaches.
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Hegazy, Maha A.; Mowaka, Shereen; Mohamed, Ekram Hany
2015-04-01
This work represents a comparative study of two smart spectrophotometric techniques, namely successive resolution and progressive resolution, for the simultaneous determination of ternary mixtures of Amlodipine (AML), Hydrochlorothiazide (HCT) and Valsartan (VAL) without prior separation steps. These techniques consist of several consecutive steps utilizing zero and/or ratio and/or derivative spectra. By applying successive spectrum subtraction coupled with the constant multiplication method, the proposed drugs were obtained in their zero order absorption spectra and determined at their maxima of 237.6 nm, 270.5 nm and 250 nm for AML, HCT and VAL, respectively; by applying successive derivative subtraction they were obtained in their first derivative spectra and determined at P230.8-246, P261.4-278.2 and P233.7-246.8 for AML, HCT and VAL, respectively. In the progressive resolution technique, the concentrations of the components were determined progressively, either from the same zero order absorption spectrum using absorbance subtraction coupled with the absorptivity factor method, or from the same ratio spectrum using only one divisor via the amplitude modulation method. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of a pharmaceutical formulation containing the cited drugs. Moreover, a comparative study between the spectrum addition technique, a novel enrichment technique, and a well-established one, namely the spiking technique, was adopted for the analysis of pharmaceutical formulations containing a low concentration of AML. The methods were validated as per ICH guidelines, where accuracy, precision and specificity were found to be within their acceptable limits. The results obtained from the proposed methods were statistically compared with those of the reported method, and no significant difference was observed.
Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun
2015-11-04
There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance (¹H NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results in the metabolomics datasets.
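MCR-ALS itself is a simple alternating least-squares factorization of the data matrix into non-negative concentration and spectral profiles; the cluster-aided reliability classification described above is layered on top of repeated runs of it. The NumPy sketch below shows only the core MCR-ALS loop, with random initialization and non-negativity enforced by clipping; initialization and constraints in the actual study may differ.

```python
import numpy as np

def mcr_als(X, n_components, n_iter=100, seed=0):
    """Minimal MCR-ALS: factor X (samples x variables) into non-negative
    concentration profiles C (samples x k) and spectra S (variables x k)
    so that X ~= C @ S.T."""
    rng = np.random.default_rng(seed)
    n_samples, n_vars = X.shape
    S = rng.random((n_vars, n_components))          # initial spectral estimates
    for _ in range(n_iter):
        # Solve for concentrations given spectra, then clip to non-negative.
        C = np.linalg.lstsq(S, X.T, rcond=None)[0].T
        C = np.clip(C, 0, None)
        # Solve for spectra given concentrations, then clip to non-negative.
        S = np.linalg.lstsq(C, X, rcond=None)[0].T
        S = np.clip(S, 0, None)
    return C, S
```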
Reddy, S G; Cochran, B J; Worth, L L; Knutson, V P; Haddox, M K
1994-04-01
A high-resolution isoelectric focusing vertical slab gel method that can resolve proteins differing by a single charge was developed and applied to the study of the multiple isoelectric forms of ornithine decarboxylase. Separation of proteins at this high level of resolution was achieved by increasing the ampholyte concentration in the gels to 6%. Various lots of ampholytes, from the same or different commercial sources, differed significantly in their protein binding capacity. Ampholytes bound to proteins interfered both with the electrophoretic transfer of proteins from the gel to immunoblotting membranes and with the ability of antibodies to interact with proteins on the immunoblotting membranes. Increasing the amount of protein loaded into a gel lane also decreased the efficiency of the electrophoretic transfer and immunodetection. To overcome these problems, gel washing and gel electrophoretic transfer protocols for disrupting the ampholyte-protein binding and enabling a quantitative electrophoretic transfer of proteins were developed. Two gel washing procedures, with either thiocyanate or borate buffers, and a two-step electrophoretic transfer method are described. The choice of method to optimally disrupt the ampholyte-protein binding was found to vary with each lot of ampholytes employed.
Ocean wavenumber estimation from wave-resolving time series imagery
Plant, N.G.; Holland, K.T.; Haller, M.C.
2008-01-01
We review several approaches that have been used to estimate ocean surface gravity wavenumbers from wave-resolving remotely sensed image sequences. Two fundamentally different approaches that utilize these data exist. A power spectral density approach identifies wavenumbers where image intensity variance is maximized. Alternatively, a cross-spectral correlation approach identifies wavenumbers where intensity coherence is maximized. We develop a solution to the latter approach based on a tomographic analysis that utilizes a nonlinear inverse method. The solution is tolerant to noise and other forms of sampling deficiency and can be applied to arbitrary sampling patterns, as well as to full-frame imagery. The solution includes error predictions that can be used for data retrieval quality control and for evaluating sample designs. A quantitative analysis of the intrinsic resolution of the method indicates that the cross-spectral correlation fitting improves resolution by a factor of about ten compared to the power spectral density fitting approach. The resolution analysis also provides a rule of thumb for nearshore bathymetry retrievals: short-scale cross-shore patterns may be resolved if they are about ten times longer than the average water depth over the pattern. This guidance can be applied to sample design to constrain both the sensor array (image resolution) and the analysis array (tomographic resolution). © 2008 IEEE.
NASA Astrophysics Data System (ADS)
Bai, Ting; Sun, Kaimin; Deng, Shiquan; Chen, Yan
2018-03-01
High resolution image change detection is one of the key technologies of remote sensing application, and is of great significance for resource survey, environmental monitoring, precision agriculture, military mapping and battlefield environment detection. In this paper, for high-resolution satellite imagery, Random Forest (RF), Support Vector Machine (SVM), Deep Belief Network (DBN), and Adaboost models were established to verify the applicability of different machine learning methods to change detection. In order to compare the detection accuracy of the four machine learning methods, we applied them to two high-resolution images. The results show that SVM has higher overall accuracy with small sample sizes than RF, Adaboost, and DBN for both binary and from-to change detection. With an increase in the number of samples, RF has higher overall accuracy than Adaboost, SVM and DBN.
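A minimal version of this kind of comparison can be run with scikit-learn once per-pixel features from the two dates have been stacked and labeled. The sketch below compares RF and SVM overall accuracy only; all variable names and hyperparameters are illustrative assumptions rather than the settings used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def compare_change_classifiers(X_train, y_train, X_test, y_test):
    """Compare RF and SVM overall accuracy for change detection.

    X_* : stacked per-pixel features from the two image dates,
          e.g. np.hstack([bands_t1, bands_t2]), shape (n_pixels, n_features).
    y_* : 0 = no change, 1 = change (or from-to class labels).
    """
    models = {
        "RF": RandomForestClassifier(n_estimators=200, random_state=0),
        "SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),
    }
    scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        scores[name] = accuracy_score(y_test, model.predict(X_test))
    return scores
```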
Han, Lei; Shi, Lu; Yang, Yiling; Song, Dalei
2014-01-01
Geostationary meteorological satellite infrared (IR) channel data contain important spectral information for meteorological research and applications, but their spatial resolution is relatively low. The objective of this study is to obtain higher-resolution IR images. One common method of increasing resolution fuses the IR data with high-resolution visible (VIS) channel data. However, most existing image fusion methods focus only on visual performance, and often fail to take into account the thermal physical properties of the IR images. As a result, spectral distortion occurs frequently. To tackle this problem, we propose a thermal physical properties-based correction method for fusing geostationary meteorological satellite IR and VIS images. In our two-step process, the high-resolution structural features of the VIS image are first extracted and incorporated into the IR image using a regular multi-resolution fusion approach, such as multiwavelet analysis. This step significantly increases the visual details in the IR image, but fake thermal information may be included. Next, the Stefan-Boltzmann Law is applied to correct the distortion, to retain or recover the thermal infrared nature of the fused image. The results of both the qualitative and quantitative evaluation demonstrate that the proposed physical correction method both improves the spatial resolution and preserves the infrared thermal properties. PMID:24919017
Han, Lei; Shi, Lu; Yang, Yiling; Song, Dalei
2014-06-10
Geostationary meteorological satellite infrared (IR) channel data contain important spectral information for meteorological research and applications, but their spatial resolution is relatively low. The objective of this study is to obtain higher-resolution IR images. One common method of increasing resolution fuses the IR data with high-resolution visible (VIS) channel data. However, most existing image fusion methods focus only on visual performance, and often fail to take into account the thermal physical properties of the IR images. As a result, spectral distortion occurs frequently. To tackle this problem, we propose a thermal physical properties-based correction method for fusing geostationary meteorological satellite IR and VIS images. In our two-step process, the high-resolution structural features of the VIS image are first extracted and incorporated into the IR image using a regular multi-resolution fusion approach, such as multiwavelet analysis. This step significantly increases the visual details in the IR image, but fake thermal information may be included. Next, the Stefan-Boltzmann Law is applied to correct the distortion, to retain or recover the thermal infrared nature of the fused image. The results of both the qualitative and quantitative evaluation demonstrate that the proposed physical correction method both improves the spatial resolution and preserves the infrared thermal properties.
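The physical correction step rests on the Stefan-Boltzmann law (radiative flux proportional to T⁴). One way to illustrate the idea is to require that the mean flux of the fused high-resolution image over each original IR footprint matches the flux of that IR pixel. The NumPy sketch below implements this simplified consistency constraint; it is not the authors' exact correction procedure, and it assumes the fused grid is an exact integer multiple of the IR grid.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def stefan_boltzmann_correction(fused_T, original_T, scale):
    """Correct a VIS/IR-fused brightness-temperature image so that the mean
    radiative flux (sigma * T^4) over each original IR footprint matches the
    original IR pixel, preserving the thermal content.

    fused_T    : high-resolution fused brightness temperature (K)
    original_T : low-resolution IR brightness temperature (K)
    scale      : integer resolution ratio (fused pixels per IR pixel per axis)
    """
    flux_fused = SIGMA * fused_T ** 4
    flux_orig = SIGMA * original_T ** 4

    # Mean fused flux inside each coarse IR footprint.
    h, w = original_T.shape
    block_mean = flux_fused.reshape(h, scale, w, scale).mean(axis=(1, 3))

    # Multiplicative correction, replicated back to the fine grid.
    ratio = flux_orig / (block_mean + 1e-12)
    ratio_fine = np.kron(ratio, np.ones((scale, scale)))

    corrected_flux = flux_fused * ratio_fine
    return (corrected_flux / SIGMA) ** 0.25
```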
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogura, Toshihiko, E-mail: t-ogura@aist.go.jp
Highlights: • We developed a high-sensitivity frequency transmission electric-field (FTE) system. • The output signal was highly enhanced by applying voltage to a metal layer on SiN. • The spatial resolution of the new FTE method is 41 nm. • The new FTE system enables observation of intact bacteria and viruses in water. - Abstract: The high-resolution structural analysis of biological specimens by scanning electron microscopy (SEM) presents several advantages. Until now, wet bacterial specimens have been examined using atmospheric sample holders. However, images of unstained specimens in water using these holders exhibit very poor contrast and heavy radiation damage. Recently, we developed the frequency transmission electric-field (FTE) method, which facilitates the SEM observation of biological specimens in water without radiation damage. However, the signal detection system presents low sensitivity. Therefore, a high EB current is required to generate clear images, reducing spatial resolution and inducing thermal damage to the samples. Here, a high-sensitivity detection system is developed for the FTE method, which enhances the output signal amplitude a hundredfold. The detection signal was highly enhanced when voltage was applied to the metal layer on the silicon nitride thin film. This enhancement reduced the EB current and improved the spatial resolution as well as the signal-to-noise ratio. The spatial resolution of the high-sensitivity FTE system is 41 nm, which is considerably higher than that of the previous FTE system. The new FTE system can easily be utilised to examine various unstained biological specimens in water, such as living bacteria and viruses.
NASA Astrophysics Data System (ADS)
Rao, V. S. R.; Biswas, Margaret; Mukhopadhyay, Chaitali; Balaji, P. V.
1989-03-01
The CCEM method (Contact Criteria and Energy Minimisation) has been developed and applied to study protein-carbohydrate interactions. The method uses available X-ray data even on the native protein at low resolution (above 2.4 Å) to generate realistic models of a variety of proteins with various ligands. The two examples discussed in this paper are arabinose-binding protein (ABP) and pea lectin. The X-ray crystal structure data reported on the ABP-β-L-arabinose complex at 2.8, 2.4 and 1.7 Å resolution differ drastically in predicting the nature of the interactions between the protein and ligand. It is shown that, using the data at 2.4 Å resolution, the CCEM method generates complexes which are as good as the higher (1.7 Å) resolution data. The CCEM method predicts some of the important hydrogen bonds between the ligand and the protein which are missing in the interpretation of the X-ray data at 2.4 Å resolution. The theoretically predicted hydrogen bonds are in good agreement with those reported at 1.7 Å resolution. Pea lectin has been solved only in the native form at 3 Å resolution. Application of the CCEM method also enables us to generate complexes of pea lectin with methyl-α-D-glucopyranoside and methyl-2,3-dimethyl-α-D-glucopyranoside which explain well the available experimental data in solution.
A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis
NASA Technical Reports Server (NTRS)
Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method will be given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record will be given. The results indicate that low-frequency components, totally missed by Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolution.
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
Correlative Stochastic Optical Reconstruction Microscopy and Electron Microscopy
Kim, Doory; Deerinck, Thomas J.; Sigal, Yaron M.; Babcock, Hazen P.; Ellisman, Mark H.; Zhuang, Xiaowei
2015-01-01
Correlative fluorescence light microscopy and electron microscopy allows the imaging of spatial distributions of specific biomolecules in the context of cellular ultrastructure. Recent development of super-resolution fluorescence microscopy allows the location of molecules to be determined with nanometer-scale spatial resolution. However, correlative super-resolution fluorescence microscopy and electron microscopy (EM) still remains challenging because the optimal specimen preparation and imaging conditions for super-resolution fluorescence microscopy and EM are often not compatible. Here, we have developed several experiment protocols for correlative stochastic optical reconstruction microscopy (STORM) and EM methods, both for un-embedded samples by applying EM-specific sample preparations after STORM imaging and for embedded and sectioned samples by optimizing the fluorescence under EM fixation, staining and embedding conditions. We demonstrated these methods using a variety of cellular targets. PMID:25874453
A method to improve the range resolution in stepped frequency continuous wave radar
NASA Astrophysics Data System (ADS)
Kaczmarek, Paweł
2018-04-01
In this paper one of the high range resolution methods, Aperture Sampling (AS), was analysed. Unlike MUSIC-based techniques, it proved to be very efficient in terms of achieving an unambiguous synthetic range profile for ultra-wideband stepped frequency continuous wave radar. Assuming that the minimal distance required to separate two targets in depth (distance) corresponds to the -3 dB width of the received echo, AS provided a 30.8% improvement in range resolution in the analysed scenario when compared to the results of applying the IFFT. The output data are far superior, in terms of both improved range resolution and reduced side-lobe level, to the Inverse Fourier Transform typically used in this area. Furthermore, it does not require prior knowledge or an estimate of the number of targets to be detected in a given scan.
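The IFFT baseline the abstract compares against is straightforward to reproduce: a stepped-frequency echo is a phasor whose phase advances by 4πfR/c across the frequency steps, and its zero-padded IFFT is the synthetic range profile. The NumPy sketch below simulates a single point target under these idealized assumptions; the frequency step, number of steps and target range are arbitrary illustration values, and the Aperture Sampling method itself is not reproduced here.

```python
import numpy as np

def sfcw_range_profile(echo, df, n_fft=None):
    """Form a synthetic range profile from stepped-frequency CW echoes by IFFT.

    echo : complex received phasors, one per frequency step (length N).
    df   : frequency step size in Hz.
    Returns (ranges in m, profile magnitude); range resolution ~ c / (2*N*df).
    """
    c = 3e8
    n_fft = n_fft or 8 * len(echo)          # zero-padding interpolates the profile
    profile = np.abs(np.fft.ifft(echo, n_fft))
    r_max = c / (2 * df)                    # unambiguous range
    ranges = np.arange(n_fft) * r_max / n_fft
    return ranges, profile

# Simulated point target at 12 m, 128 steps of 10 MHz starting at 1 GHz:
freqs = 1e9 + 10e6 * np.arange(128)
echo = np.exp(-1j * 4 * np.pi * freqs * 12.0 / 3e8)
ranges, profile = sfcw_range_profile(echo, 10e6)
print(round(ranges[np.argmax(profile)], 2))   # ~12.0 m
```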
WE-G-18A-06: Sinogram Restoration in Helical Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, K; Riviere, P La
2014-06-15
Purpose: To extend CT sinogram restoration, which has been shown in 2D to reduce noise and to correct for geometric effects and other degradations at a low computational cost, from 2D to a 3D helical cone-beam geometry. Methods: A method for calculating sinogram degradation coefficients for a helical cone-beam geometry was proposed. These values were used to perform penalized-likelihood sinogram restoration on simulated data that were generated from the FORBILD thorax phantom. Sinogram restorations were performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods were used to obtain reconstructions. Resolution-variance trade-offs were investigated for several locations within the reconstructions for the purpose of comparing sinogram restoration to no restoration. In order to compare potential differences, reconstructions were performed using different groups of neighbors in the penalty, two analytical reconstruction methods (Katsevich and single-slice rebinning), and differing helical pitches. Results: The resolution-variance properties of reconstructions restored using sinogram restoration with a Huber penalty outperformed those of reconstructions with no restoration. However, the use of a quadratic sinogram restoration penalty did not lead to an improvement over performing no restoration at the outer regions of the phantom. Application of the Huber penalty to neighbors both within a view and across views did not perform as well as only applying the penalty to neighbors within a view. General improvements in resolution-variance properties using sinogram restoration with the Huber penalty were not dependent on the reconstruction method used or the magnitude of the helical pitch. Conclusion: Sinogram restoration for noise and degradation effects for helical cone-beam CT is feasible and should be able to be applied to clinical data. When applied with the edge-preserving Huber penalty, sinogram restoration leads to an improvement in resolution-variance trade-offs.
Land use change detection based on multi-date imagery from different satellite sensor systems
NASA Technical Reports Server (NTRS)
Stow, Douglas A.; Collins, Doretta; Mckinsey, David
1990-01-01
An empirical study is conducted to assess the accuracy of land use change detection using satellite image data acquired ten years apart by sensors with differing spatial resolutions. The primary goals of the investigation were to (1) compare standard change detection methods applied to image data of varying spatial resolution, (2) assess whether to transform the raster grid of the higher resolution image data to that of the lower resolution raster grid or vice versa in the registration process, and (3) determine whether Landsat/Thematic Mapper or SPOT/High Resolution Visible multispectral data provide more accurate detection of land use changes when registered to historical Landsat/MSS data. It is concluded that image ratioing of multisensor, multidate satellite data produced higher change detection accuracies than did principal components analysis, and that it is useful as a land use change enhancement method.
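Image ratioing itself is a simple per-pixel operation: the two co-registered dates are ratioed (often in log space so that increases and decreases are treated symmetrically), and pixels far from the scene-wide mean are flagged as changed. The NumPy sketch below shows this generic enhancement; the two-standard-deviation threshold is an illustrative assumption, not a value from the study.

```python
import numpy as np

def ratio_change_map(band_t1, band_t2, k=2.0, eps=1e-6):
    """Image-ratio change enhancement for co-registered multi-date bands.

    Pixels whose log-ratio deviates from the scene mean by more than k
    standard deviations are flagged as changed.
    """
    log_ratio = np.log((band_t2 + eps) / (band_t1 + eps))
    mu, sigma = log_ratio.mean(), log_ratio.std()
    return np.abs(log_ratio - mu) > k * sigma
```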
NASA Astrophysics Data System (ADS)
Hong, Liang
2013-10-01
The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric details can be observed in high resolution remote sensing images, and ground objects display rich texture, structure, shape and hierarchical semantic characteristics, with more landscape elements represented by small groups of pixels. In recent years, the object-based remote sensing analysis methodology has been widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground-object semantics and the over-segmentation regions are defined within the conditional random field framework; (4) hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remotely sensed image data (GeoEye) are used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies that it is suitable for the classification of high resolution remote sensing images.
Development and calibration of a new gamma camera detector using large square Photomultiplier Tubes
NASA Astrophysics Data System (ADS)
Zeraatkar, N.; Sajedi, S.; Teimourian Fard, B.; Kaviani, S.; Akbarzadeh, A.; Farahani, M. H.; Sarkar, S.; Ay, M. R.
2017-09-01
Large-area scintillation detectors applied in gamma cameras as well as Single Photon Emission Computed Tomography (SPECT) systems have a major role in in-vivo functional imaging. Most gamma detectors utilize a hexagonal arrangement of Photomultiplier Tubes (PMTs). In this work we applied large square-shaped PMTs with a row/column arrangement for positioning. The use of large square PMTs reduces dead zones on the detector surface. However, the conventional center-of-gravity method for positioning may not produce acceptable results. Hence, the digital correlated signal enhancement (CSE) algorithm was optimized to obtain better linearity and spatial resolution in the developed detector. The performance of the developed detector was evaluated based on the NEMA-NU1-2007 standard. The acquired images using this method showed acceptable uniformity and linearity compared to three commercial gamma cameras. The intrinsic and extrinsic spatial resolutions with a low-energy high-resolution (LEHR) collimator at 10 cm from the surface of the detector were 3.7 mm and 7.5 mm, respectively. The energy resolution of the camera was measured to be 9.5%. The performance evaluation demonstrated that the developed detector maintains image quality with a reduced number of PMTs relative to the detection area.
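The center-of-gravity (Anger) positioning that the CSE algorithm improves upon is simply a signal-weighted average of the PMT coordinates. The NumPy sketch below shows this conventional baseline for a single event; it is not the optimized CSE algorithm used in the work, and the array shapes are illustrative assumptions.

```python
import numpy as np

def center_of_gravity_position(pmt_signals, pmt_x, pmt_y):
    """Estimate the scintillation position from PMT signals by Anger
    (center-of-gravity) logic.

    pmt_signals : array (n_pmts,) of integrated PMT charges for one event
    pmt_x, pmt_y: arrays (n_pmts,) of PMT center coordinates (mm)
    """
    total = pmt_signals.sum()
    x = (pmt_signals * pmt_x).sum() / total
    y = (pmt_signals * pmt_y).sum() / total
    return x, y
```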
Kinetic resolution of racemic mixtures in gel media
NASA Astrophysics Data System (ADS)
Petrova, Rositza Iordanova
The goal of this research was to investigate the effect of chiral gels on chiral crystal nucleation and growth and to assess the gels' potential as media for kinetic separation of racemic mixtures. The morphologies of asparagine monohydrate and sodium bromate crystals grown in different gel media were examined in order to discern the effect of gel structure and density on the relative growth rates of those materials. Different crystal habits were observed when the gel chemical composition, density and solute concentration were varied. These studies showed that the physical properties of the gel, such as gel density and pore size, as well as its chemical composition, affect the crystal habit. The method of kinetic resolution in gel media was first applied to sodium chlorate, which is achiral in solution but crystallizes in a chiral space group. Crystallization in agarose gels yielded an enantiomorphic bias, the direction and magnitude of which could be affected by changing the temperature or by the addition of an achiral cosolvent. Aqueous gels at 6°C produced crystalline mixtures enriched with the d-enantiomorph, while crystallization under MeOH diffusion favored l-crystals. Optimized conditions yielded an e.e. of 53% of the l-enantiomorph. The method was next applied to the organic molecular crystals of asparagine monohydrate and threonine. Asparagine monohydrate grown in aqueous agarose and iota-carrageenan gels produced crystal mixtures enriched with the D-enantiomer. The degree of resolution was higher when the total amount of asparagine crystallized was low. The success of the resolution depends strongly on the concentrations of the solute and the gelling substance. Growth from agarose gels yielded an e.e. of 44% under optimized conditions. The same method was applied to the resolution of Thr, albeit with modest success. In an effort to improve the resolution of asparagine monohydrate, agarose was synthetically modified by esterifying its side chains with homochiral asparagyl groups and used as a kinetic resolution medium. Crystallization from L-Asn-agarose favored the L-enantiomer (28% e.e.), while D-Asn-agarose favored the D-enantiomer (40% e.e.). The degree of resolution was sensitive to the concentrations of the gel and the total amount of crystallized asparagine, but the medium was no better than pure agarose.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderton, Christopher R.; Chu, Rosalie K.; Tolic, Nikola
The ability to visualize biochemical interactions between microbial communities using MALDI MSI has provided tremendous insights into a variety of biological fields. Matrix application using a sieve proved to be incredibly useful, but it had many limitations, including uneven matrix coverage and restrictions on the types of matrices one could employ. Recently, there has been a concerted effort to improve matrix application for studying agar-plated microbial cultures, much of which utilized automated matrix sprayers. Here, we describe the usefulness of using a robotic sprayer for matrix application. The robotic sprayer has two-dimensional control over where matrix is applied and a heated capillary that allows for rapid drying of the applied matrix. This method provided a significant increase in MALDI sensitivity over the sieve method, as demonstrated by FT-ICR MS analysis, facilitating the acquisition of higher lateral resolution MS images of Bacillus subtilis than previously reported. This method also allowed different matrices to be applied to the culture surfaces.
NASA Astrophysics Data System (ADS)
Ota, Junko; Umehara, Kensuke; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki
2017-02-01
As the capability of high-resolution displays grows, high-resolution images are often required in Computed Tomography (CT). However, acquiring high-resolution images takes a higher radiation dose and a longer scanning time. In this study, we applied the Sparse-coding-based Super-Resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared an over-complete dictionary that learned the mapping between low- and high-resolution patches and sought a sparse representation of each patch of the low-resolution input. These coefficients were used to generate the high-resolution output. For evaluation, 44 CT cases were used as the test dataset. We up-sampled images by a factor of 2 or 4 and compared the image quality of the ScSR scheme with bilinear and bicubic interpolations, the traditional interpolation schemes. We also compared the image quality obtained with three learning datasets: a total of 45 CT images, 91 non-medical images, and 93 chest radiographs were used for dictionary preparation, respectively. Image quality was evaluated by measuring the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The differences in PSNR and SSIM between the ScSR method and the interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated sharp high-resolution images, whereas the conventional interpolation methods generated over-smoothed images. Comparing the three training datasets, there was no significant difference between the CT, CXR and non-medical datasets. These results suggest that ScSR provides a robust approach for up-sampling CT images and yields substantially higher image quality for the enlarged images.
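The per-patch reconstruction in sparse-coding SR can be illustrated with a pair of coupled dictionaries: the LR patch is sparse-coded over the low-resolution dictionary and the same coefficients are applied to the high-resolution dictionary. A minimal sketch using scikit-learn's orthogonal matching pursuit is shown below; the dictionary shapes, sparsity level and variable names are assumptions for illustration, and dictionary training itself is not shown.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def scsr_patch(lr_patch, D_low, D_high, n_nonzero=3):
    """Reconstruct one HR patch from an LR patch via coupled-dictionary sparse coding.

    D_low, D_high : coupled dictionaries with atoms as columns,
                    shapes (lr_dim, n_atoms) and (hr_dim, n_atoms).
    lr_patch      : flattened LR patch features, shape (lr_dim,).
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D_low, lr_patch)     # sparse code of the LR patch over the LR dictionary
    alpha = omp.coef_
    return D_high @ alpha        # apply the same sparse code to the HR dictionary
```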
In-line three-dimensional holography of nanocrystalline objects at atomic resolution
Chen, F. -R.; Van Dyck, D.; Kisielowski, C.
2016-02-18
We report that the resolution and sensitivity of the latest generation of aberration-corrected transmission electron microscopes allow the vast majority of single atoms to be imaged with sub-Ångstrom resolution and their locations determined in an image plane with a precision that exceeds the 1.9-pm wavelength of 300 kV electrons. Such unprecedented performance allows expansion of electron microscopic investigations with atomic resolution into the third dimension. Here we show a general tomographic method to recover the three-dimensional shape of a crystalline particle from high-resolution images of a single projection without the need for sample rotation. The method is compatible with low-dose-rate electron microscopy, which improves signal quality while minimizing electron beam-induced structure modifications even for small particles or surfaces. Lastly, we apply it to germanium, gold and magnesium oxide particles, and achieve a depth resolution of 1–2 Å, which is smaller than inter-atomic distances.
iCLIP: Protein–RNA interactions at nucleotide resolution
Huppertz, Ina; Attig, Jan; D’Ambrogio, Andrea; Easton, Laura E.; Sibley, Christopher R.; Sugimoto, Yoichiro; Tajnik, Mojca; König, Julian; Ule, Jernej
2014-01-01
RNA-binding proteins (RBPs) are key players in the post-transcriptional regulation of gene expression. Precise knowledge about their binding sites is therefore critical to unravel their molecular function and to understand their role in development and disease. Individual-nucleotide resolution UV crosslinking and immunoprecipitation (iCLIP) identifies protein–RNA crosslink sites on a genome-wide scale. The high resolution and specificity of this method are achieved by an intramolecular cDNA circularization step that enables analysis of cDNAs that are truncated at the protein–RNA crosslink sites. Here, we describe the improved iCLIP protocol and discuss critical optimization and control experiments that are required when applying the method to new RBPs. PMID:24184352
NASA Astrophysics Data System (ADS)
Cristea, Nicoleta C.; Breckheimer, Ian; Raleigh, Mark S.; HilleRisLambers, Janneke; Lundquist, Jessica D.
2017-08-01
Reliable maps of snow-covered areas at scales of meters to tens of meters, with daily temporal resolution, are essential to understanding snow heterogeneity, melt runoff, energy exchange, and ecological processes. Here we develop a parsimonious downscaling routine that can be applied to fractional snow covered area (fSCA) products from satellite platforms such as the Moderate Resolution Imaging Spectroradiometer (MODIS) that provide daily ~500 m data, to derive higher-resolution snow presence/absence grids. The method uses a composite index combining both the topographic position index (TPI), to represent accumulation effects, and the diurnal anisotropic heat (DAH, sun exposure) index, to represent ablation effects. The procedure is evaluated and calibrated using airborne-derived high-resolution data sets across the Tuolumne watershed, CA, using 11 scenes in 2014 to downscale to 30 m resolution. The average matching F score was 0.83. We then tested our method's transferability in time and space by comparing against the Tuolumne watershed in water years 2013 and 2015, and over an entirely different site, Mt. Rainier, WA, in 2009 and 2011, to assess applicability to other topographic and climatic conditions. For application to sites without validation data, we recommend equal weights for the TPI and DAH indices and close TPI neighborhoods (60 and 27 m for downscaling to 30 and 3 m, respectively), which worked well in both our study areas. The method is less effective in forested areas, which still require site-specific treatment. We demonstrate that the procedure can even be applied to downscale to 3 m resolution, a very fine scale relevant to alpine ecohydrology research.
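The topographic position index used in the composite downscaling index is just the difference between a cell's elevation and the mean elevation of its neighborhood (positive on ridges and exposed sites, negative in hollows where snow accumulates). A NumPy/SciPy sketch is given below; the window size is expressed in pixels and is an illustrative assumption, not the 60 m and 27 m neighborhoods tuned in the study, and the DAH index is not shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def topographic_position_index(dem, window=9):
    """TPI: elevation minus the mean elevation in a square neighborhood.

    dem    : 2D array of elevations
    window : neighborhood size in pixels
    """
    neighborhood_mean = uniform_filter(dem.astype(float), size=window)
    return dem - neighborhood_mean
```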
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pau, G. S. H.; Bisht, G.; Riley, W. J.
Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO₂, CH₄) exchanges with the atmosphere range from the molecular scale (pore-scale O₂ consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as the "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface–subsurface isothermal simulations were performed for summer months (June–September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998–2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (> 10³) with very small relative approximation error (< 0.1%) for 2 validation years not used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with mechanistic physical process representation.
Pau, G. S. H.; Bisht, G.; Riley, W. J.
2014-09-17
Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO₂, CH₄) exchanges with the atmosphere range from the molecular scale (pore-scale O₂ consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as the "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface–subsurface isothermal simulations were performed for summer months (June–September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998–2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (> 10³) with very small relative approximation error (< 0.1%) for 2 validation years not used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with mechanistic physical process representation.
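At the core of the proper orthogonal decomposition mapping method is a POD basis built from fine-resolution snapshots, onto which a coarse-resolution solution is projected to recover the fine field. The NumPy sketch below shows only that generic ingredient (SVD-based basis construction and a least-squares projection); the mapping operator A_coarse, the energy threshold, and all names are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def build_pod_basis(snapshots, energy=0.999):
    """Build a POD basis from fine-resolution snapshots.

    snapshots : array (n_dof, n_snapshots), each column a fine-scale solution.
    Returns the truncated basis Phi (n_dof, r) capturing the given energy fraction.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r]

def pod_reconstruct(Phi, A_coarse, coarse_solution):
    """Reconstruct a fine-resolution field from a coarse solution.

    A_coarse maps POD coefficients to the coarse-grid representation
    (e.g. a restriction of Phi); coefficients are found by least squares.
    """
    coeffs, *_ = np.linalg.lstsq(A_coarse, coarse_solution, rcond=None)
    return Phi @ coeffs
```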
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.
2012-03-01
Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance from bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg, Germany. The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods with a competitive, faster speed on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. The efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.
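The multi-scale vessel probability map described above can be approximated with the Hessian of a Gaussian-smoothed image: at each scale the eigenvalues are computed and the scale-normalized response of the dominant eigenvalue is kept. The scikit-image sketch below is a simplified stand-in for the paper's map, assuming dark vessels on a brighter background; the scale set and normalization are illustrative choices.

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def vessel_probability_map(image, scales=(1, 2, 3, 4)):
    """Multi-scale line response from Hessian eigenvalues of a Gaussian-filtered
    image; the per-pixel maximum over scales is kept and normalized to [0, 1]."""
    response = np.zeros(image.shape, dtype=float)
    for sigma in scales:
        H = hessian_matrix(image, sigma=sigma, order="rc")
        eigvals = hessian_matrix_eigvals(H)     # shape (2, rows, cols), descending
        # Dark vessels on a brighter background give a large positive
        # cross-sectional second derivative: keep the largest eigenvalue,
        # scale-normalized by sigma**2.
        response = np.maximum(response, sigma ** 2 * np.clip(eigvals[0], 0, None))
    return response / (response.max() + 1e-12)
```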
Salomon, M; Conklin, J W; Kozaczuk, J; Berberian, J E; Keiser, G M; Silbergleit, A S; Worden, P; Santiago, D I
2011-12-01
In this paper, we present a method to measure the frequency and the frequency change rate of a digital signal. This method consists of three consecutive algorithms: frequency interpolation, phase differencing, and a third algorithm specifically designed and tested by the authors. The succession of these three algorithms allowed a 5 parts in 10¹⁰ resolution in frequency determination. The algorithm developed by the authors can be applied to a sampled scalar signal for which a model linking the harmonics of its main frequency to the underlying physical phenomenon is available. This method was developed in the framework of the Gravity Probe B (GP-B) mission. It was applied to the high frequency (HF) component of GP-B's superconducting quantum interference device signal, whose main frequency f_z is close to the spin frequency of the gyroscopes used in the experiment. A 30 nHz resolution in signal frequency and a 0.1 pHz/s resolution in its decay rate were achieved from a succession of 1.86 s-long stretches of signal sampled at 2200 Hz. This paper describes the underlying theory of the frequency measurement method as well as its application to GP-B's HF science signal.
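The first two stages, frequency interpolation and phase differencing, have common textbook forms: a coarse estimate from parabolic interpolation of the FFT magnitude peak, followed by refinement from the slope of the residual phase after demodulating at that estimate. The NumPy sketch below illustrates both steps under simplifying assumptions (a Hann-windowed FFT and a complex analytic input for the phase step); it is not the GP-B processing chain, and the third, mission-specific algorithm is not reproduced.

```python
import numpy as np

def interpolated_fft_frequency(x, fs):
    """Coarse frequency estimate refined by parabolic interpolation of the
    windowed FFT magnitude peak (the 'frequency interpolation' step)."""
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    k = int(np.argmax(mag[1:-1])) + 1            # peak bin away from the edges
    delta = 0.5 * (mag[k - 1] - mag[k + 1]) / (mag[k - 1] - 2 * mag[k] + mag[k + 1])
    return (k + delta) * fs / len(x)

def phase_difference_frequency(x, fs, f0):
    """Refine f0 by phase differencing: demodulate a complex (analytic) signal
    at f0 and fit the slope of the residual phase, which equals 2*pi*(f - f0)."""
    t = np.arange(len(x)) / fs
    residual_phase = np.unwrap(np.angle(x * np.exp(-2j * np.pi * f0 * t)))
    slope = np.polyfit(t, residual_phase, 1)[0]
    return f0 + slope / (2.0 * np.pi)
```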
First Human Brain Imaging by the jPET-D4 Prototype With a Pre-Computed System Matrix
NASA Astrophysics Data System (ADS)
Yamaya, Taiga; Yoshida, Eiji; Obi, Takashi; Ito, Hiroshi; Yoshikawa, Kyosan; Murayama, Hideo
2008-10-01
The jPET-D4 is a novel brain PET scanner which aims to achieve not only high spatial resolution but also high scanner sensitivity by using 4-layer depth-of-interaction (DOI) information. The dimensions of the system matrix for the jPET-D4 are 3.3 billion (lines-of-response) × 5 million (image elements) when a standard field-of-view (FOV) of 25 cm diameter is sampled with (1.5 mm)³ voxels. The size of the system matrix is estimated as 117 petabytes (PB) with an accuracy of 8 bytes per element. An on-the-fly calculation is usually used to deal with such a huge system matrix. However, we cannot avoid extension of the calculation time when we improve the accuracy of system modeling. In this work, we implemented an alternative approach based on pre-calculation of the system matrix. A histogram-based 3D OS-EM algorithm was implemented on a desktop workstation with 32 GB of memory installed. The 117 PB system matrix was compressed under the limited amount of computer memory by (1) eliminating zero elements, (2) applying the DOI compression (DOIC) method and (3) applying rotational symmetry and an axial shift property of the crystal arrangement. Spanning, which degrades axial resolution, was not applied. The system modeling and the DOIC method, which had been validated in 2D image reconstruction, were expanded into a 3D implementation. In particular, a new system model including the DOIC transformation was introduced to suppress the resolution loss caused by the DOIC method. Experimental results showed that the jPET-D4 has almost uniform spatial resolution of better than 3 mm over the FOV. Finally, the first human brain images were obtained with the jPET-D4.
Geographically weighted regression based methods for merging satellite and gauge precipitation
NASA Astrophysics Data System (ADS)
Chao, Lijun; Zhang, Ke; Li, Zhijia; Zhu, Yuelong; Wang, Jingfeng; Yu, Zhongbo
2018-03-01
Real-time precipitation data with high spatiotemporal resolutions are crucial for accurate hydrological forecasting. To improve the spatial resolution and quality of satellite precipitation, a three-step satellite and gauge precipitation merging method was formulated in this study: (1) bilinear interpolation is first applied to downscale coarser satellite precipitation to a finer resolution (PS); (2) the (mixed) geographically weighted regression methods coupled with a weighting function are then used to estimate biases of PS as functions of gauge observations (PO) and PS; and (3) biases of PS are finally corrected to produce a merged precipitation product. Based on the above framework, eight algorithms, a combination of two geographically weighted regression methods and four weighting functions, are developed to merge CMORPH (CPC MORPHing technique) precipitation with station observations on a daily scale in the Ziwuhe Basin of China. The geographical variables (elevation, slope, aspect, surface roughness, and distance to the coastline) and a meteorological variable (wind speed) were used for merging precipitation to avoid the artificial spatial autocorrelation resulting from traditional interpolation methods. The results show that the combination of the MGWR and BI-square function (MGWR-BI) has the best performance (R = 0.863 and RMSE = 7.273 mm/day) among the eight algorithms. The MGWR-BI algorithm was then applied to produce hourly merged precipitation product. Compared to the original CMORPH product (R = 0.208 and RMSE = 1.208 mm/hr), the quality of the merged data is significantly higher (R = 0.724 and RMSE = 0.706 mm/hr). The developed merging method not only improves the spatial resolution and quality of the satellite product but also is easy to implement, which is valuable for hydrological modeling and other applications.
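The geographically weighted regression at the heart of the merging framework fits a separate weighted least-squares regression at every prediction location, with weights from a kernel such as the bi-square function w = (1 - (d/b)²)² for d < b. The NumPy sketch below estimates the bias at a single location; the covariates, bandwidth and naming are illustrative assumptions, and the mixed-GWR variant used in the study is not shown.

```python
import numpy as np

def gwr_bisquare_predict(coords, X, y, coords0, x0, bandwidth):
    """Bias estimate at one location by geographically weighted regression
    with a bi-square kernel: w = (1 - (d/b)^2)^2 for d < b, else 0.

    coords : (n, 2) gauge locations, X : (n, p) covariates, y : (n,) target
    coords0, x0 : prediction location and its covariates; bandwidth b is in
    the same distance units as coords.
    """
    d = np.linalg.norm(coords - coords0, axis=1)
    w = np.where(d < bandwidth, (1.0 - (d / bandwidth) ** 2) ** 2, 0.0)
    Xa = np.column_stack([np.ones(len(y)), X])            # add an intercept
    W = np.diag(w)
    beta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)   # weighted least squares
    return float(np.concatenate([[1.0], np.atleast_1d(x0)]) @ beta)
```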
NASA Astrophysics Data System (ADS)
Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia
2016-02-01
A novel approach for the resolution and quantitation of a severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated utilizing different spectrophotometric-assisted multivariate calibration methods. The applied methods use different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method, continuous wavelet transform coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The utilized methods did not require any preliminary separation step or chemical pretreatment. The validity of the methods was evaluated by an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with those of the official and reported methods, where no significant difference was observed regarding either accuracy or precision.
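Of the chemometric models compared, PLS is the most standard and can be reproduced directly with scikit-learn once the calibration absorbance matrix and the known concentrations are available. The sketch below shows only that baseline fit and prediction; the number of latent variables and all names are illustrative assumptions, and the CRACLS and CWT pre-processing steps are not included.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_pls_calibration(spectra, concentrations, n_components=4):
    """Fit a PLS calibration model for a multi-component mixture.

    spectra        : (n_mixtures, n_wavelengths) absorbance matrix
    concentrations : (n_mixtures, n_analytes) known concentrations
    """
    pls = PLSRegression(n_components=n_components)
    pls.fit(spectra, concentrations)
    return pls

# Usage on a validation mixture (x_new is one absorbance spectrum):
# model = fit_pls_calibration(X_train, Y_train)
# predicted = model.predict(x_new.reshape(1, -1))
```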
Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E
2015-01-07
Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners, the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from using LOR-PDF was verified statistically by replicate reconstructions. In addition, [¹¹C]AFM rats imaged on the HRRT and [¹¹C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstruction than in other methods.
Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E
2016-01-01
Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners, the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from using LOR-PDF was verified statistically by replicate reconstructions. In addition, [¹¹C]AFM rats imaged on the HRRT and [¹¹C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstruction than in other methods. PMID:25490063
Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.
Choi, Jae-Seok; Kim, Munchurl
2017-03-01
Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between peak signal-to-noise ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, applied only one simple yet coarse linear mapping to each patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to the local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experimental results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural nets (SRCNN). Compared with the previous SI method, which is limited to a scale factor of 2, GLM-SI shows superior performance with an average PSNR gain of 0.79 dB, and can be used for scale factors of 3 or higher.
High Resolution Melting (HRM) applied to wine authenticity.
Pereira, Leonor; Gomes, Sónia; Castro, Cláudia; Eiras-Dias, José Eduardo; Brazão, João; Graça, António; Fernandes, José R; Martins-Lopes, Paula
2017-02-01
Wine authenticity methods are in increasing demand mainly in Denomination of Origin designations. The DNA-based methodologies are a reliable means of tracking food/wine varietal composition. The main aim of this work was the study of High Resolution Melting (HRM) application as a screening method for must and wine authenticity. Three sample types (leaf, must and wine) were used to validate the three developed HRM assays (Vv1-705bp; Vv2-375bp; and Vv3-119bp). The Vv1 HRM assay was only successful when applied to leaf and must samples. The Vv2 HRM assay successfully amplified all sample types, allowing genotype discrimination based on melting temperature values. The smallest amplicon, Vv3, produced a coincident melting curve shape in all sample types (leaf and wine) with corresponding genotypes. This study presents sensitive, rapid and efficient HRM assays applied for the first time to wine samples suitable for wine authenticity purposes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
Windowed time-reversal music technique for super-resolution ultrasound imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie; Labyed, Yassin
Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements.
An assessment of two methods for identifying undocumented levees using remotely sensed data
Czuba, Christiana R.; Williams, Byron K.; Westman, Jack; LeClaire, Keith
2015-01-01
Many undocumented and commonly unmaintained levees exist in the landscape complicating flood forecasting, risk management, and emergency response. This report describes a pilot study completed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers to assess two methods to identify undocumented levees by using remotely sensed, high-resolution topographic data. For the first method, the U.S. Army Corps of Engineers examined hillshades computed from a digital elevation model that was derived from light detection and ranging (lidar) to visually identify potential levees and then used detailed site visits to assess the validity of the identifications. For the second method, the U.S. Geological Survey applied a wavelet transform to a lidar-derived digital elevation model to identify potential levees. The hillshade method was applied to Delano, Minnesota, and the wavelet-transform method was applied to Delano and Springfield, Minnesota. Both methods were successful in identifying levees but also identified other features that required interpretation to differentiate from levees such as constructed barriers, high banks, and bluffs. Both methods are complementary to each other, and a potential conjunctive method for testing in the future includes (1) use of the wavelet-transform method to rapidly identify slope-break features in high-resolution topographic data, (2) further examination of topographic data using hillshades and aerial photographs to classify features and map potential levees, and (3) a verification check of each identified potential levee with local officials and field visits.
Atomic-resolution transmission electron microscopy of electron beam–sensitive crystalline materials
NASA Astrophysics Data System (ADS)
Zhang, Daliang; Zhu, Yihan; Liu, Lingmei; Ying, Xiangrong; Hsiung, Chia-En; Sougrat, Rachid; Li, Kun; Han, Yu
2018-02-01
High-resolution imaging of electron beam–sensitive materials is one of the most difficult applications of transmission electron microscopy (TEM). The challenges are manifold, including the acquisition of images with extremely low beam doses, the time-constrained search for crystal zone axes, the precise image alignment, and the accurate determination of the defocus value. We develop a suite of methods to fulfill these requirements and acquire atomic-resolution TEM images of several metal organic frameworks that are generally recognized as highly sensitive to electron beams. The high image resolution allows us to identify individual metal atomic columns, various types of surface termination, and benzene rings in the organic linkers. We also apply our methods to other electron beam–sensitive materials, including the organic-inorganic hybrid perovskite CH3NH3PbBr3.
Evaluation of field methods for vertical high resolution aquifer characterization
NASA Astrophysics Data System (ADS)
Vienken, T.; Tinter, M.; Rogiers, B.; Leven, C.; Dietrich, P.
2012-12-01
The delineation and characterization of subsurface (hydro)-stratigraphic structures is one of the challenging tasks of hydrogeological site investigations. The knowledge about the spatial distribution of soil specific properties and hydraulic conductivity (K) is the prerequisite for understanding flow and fluid transport processes. This is especially true for heterogeneous unconsolidated sedimentary deposits with a complex sedimentary architecture. One commonly used approach to investigate and characterize sediment heterogeneity is soil sampling and lab analyses, e.g. grain size distribution. Tests conducted on 108 samples show that calculation of K based on grain size distribution is not suitable for high resolution aquifer characterization of highly heterogeneous sediments due to sampling effects and large differences of calculated K values between applied formulas (Vienken & Dietrich 2011). Therefore, extensive tests were conducted at two test sites under different geological conditions to evaluate the performance of innovative Direct Push (DP) based approaches for the vertical high resolution determination of K. Different DP based sensor probes for the in-situ subsurface characterization based on electrical, hydraulic, and textural soil properties were used to obtain high resolution vertical profiles. The applied DP based tools proved to be a suitable and efficient alternative to traditional approaches. Despite resolution differences, all of the applied methods captured the main aquifer structure. Correlation of the DP based K estimates and proxies with DP based slug tests shows that it is possible to describe the aquifer hydraulic structure on less than a meter scale by combining DP slug test data and continuous DP measurements. Even though correlations are site-specific and appropriate DP tools must be chosen, DP is a reliable and efficient alternative for characterizing even strongly heterogeneous sites with complex structured sedimentary aquifers (Vienken et al. 2012). References: Vienken, T., Leven, C., and Dietrich, P. 2012. Use of CPT and other direct push methods for (hydro-) stratigraphic aquifer characterization — a field study. Canadian Geotechnical Journal, 49(2): 197-206. Vienken, T., and Dietrich, P. 2011. Field evaluation of methods for determining hydraulic conductivity from grain size data. Journal of Hydrology, 400(1-2): 58-71.
Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun
2017-01-01
To solve the problem of inaccuracy when estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit agricultural crop visual interpretation. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important to estimate the PSF of the HR image by using multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, the novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher-quality reconstructed images than those produced by the blind SR method and the bicubic interpolation method. PMID:28208837
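The paper's POCS reconstruction with an estimated PSF is not reproduced here; the sketch below is a simplified iterative back-projection-style consistency loop that captures the basic super-resolution idea of correcting an HR estimate until its blurred, downsampled version agrees with the LR observation. The Gaussian PSF, scale factor, step size, and function name are assumptions.

```python
# Simplified SR consistency loop in the spirit of POCS: an HR estimate is
# repeatedly corrected so that, after blurring with an assumed PSF and
# downsampling, it matches the observed LR image. The Gaussian PSF and the
# back-projection step stand in for the paper's estimated PSF and projections.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def sr_consistency(lr, scale=2, sigma=1.0, n_iter=30, step=1.0):
    hr = zoom(lr, scale, order=3)                    # initial bicubic-like upscale
    for _ in range(n_iter):
        simulated_lr = gaussian_filter(hr, sigma)[::scale, ::scale]
        residual = lr - simulated_lr
        hr += step * zoom(residual, scale, order=1)  # back-project the error
    return hr

lr = np.random.rand(32, 32)                          # placeholder LR image
hr = sr_consistency(lr)
print(hr.shape)
```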
Determination of carotid disease with the application of STFT and CWT methods.
Hardalaç, Firat; Yildirim, Hanefi; Serhatlioğlu, Selami
2007-06-01
In this study, Doppler signals were recorded from the output of carotid arteries of 40 subjects and transferred to a personal computer (PC) by using a 16-bit sound card. Doppler difference frequencies were recorded from each of the subjects, and then analyzed by using short-time Fourier transform (STFT) and continuous wavelet transform (CWT) methods to obtain their sonograms. These sonograms were then used to determine the relationships of the applied methods with medical conditions. The sonograms obtained by the CWT method gave better spectral resolution than those from the STFT method. The sonograms of the CWT method offer a clearer envelope and better imaging, so that the measurement of blood flow and brain pressure can be made more accurately. Simultaneously, receiver operating characteristic (ROC) analysis was conducted for this study and the estimation performance of the spectral resolution for the STFT and CWT was obtained. The STFT showed an 80.45% success rate for the spectral resolution, while the CWT showed an 89.90% success rate.
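For readers unfamiliar with the two transforms, the following sketch computes an STFT and a CWT "sonogram" of a synthetic Doppler-like chirp using SciPy and PyWavelets; the sampling rate, window length, and wavelet are arbitrary illustrative values, not the study's acquisition parameters.

```python
# Illustrative comparison of STFT and CWT time-frequency analysis on a
# synthetic Doppler-like signal; window/wavelet choices are assumptions.
import numpy as np
from scipy.signal import stft
import pywt

fs = 10_000                      # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# chirp-like "Doppler difference" signal: frequency rises then falls
freq = 500 + 300 * np.sin(2 * np.pi * 1.0 * t)
signal = np.sin(2 * np.pi * np.cumsum(freq) / fs)

# Short-time Fourier transform: fixed window, fixed resolution trade-off
f_stft, t_stft, Z = stft(signal, fs=fs, nperseg=256)
sono_stft = np.abs(Z)

# Continuous wavelet transform: multi-resolution in time and frequency
scales = np.arange(1, 128)
coefs, freqs_cwt = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
sono_cwt = np.abs(coefs)

print(sono_stft.shape, sono_cwt.shape)
```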
Molecular Phylogeny of the Animal Kingdom.
ERIC Educational Resources Information Center
Field, Katharine G.; And Others
1988-01-01
A rapid sequencing method for ribosomal RNA was applied to the resolution of evolutionary relationships among Metazoa. Describes the four groups (chordates, echinoderms, arthropods, and eucoelomate protostomes) that radiated from the coelomates. (TW)
Non-invasive imaging methods applied to neo- and paleontological cephalopod research
NASA Astrophysics Data System (ADS)
Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.
2013-11-01
Several non-invasive methods are common practice in natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-) biology. Different methods will be compared in terms of time necessary to acquire the data, amount of data, accuracy/resolution, minimum-maximum size of objects that can be studied, of the degree of post-processing needed and availability. Main application of the methods is seen in morphometry and volumetry of cephalopod shells in order to improve our understanding of diversity and disparity, functional morphology and biology of extinct and extant cephalopods.
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Tawakkol, Shereen M.; Fahmy, Nesma M.; Shehata, Mostafa A.
2015-02-01
Simultaneous determination of mixtures of lidocaine hydrochloride (LH) and flucortolone pivalate (FCP) in the presence of chlorquinaldol (CQ), without prior separation steps, was carried out using either successive or progressive resolution techniques. Depending on the concentration of CQ, the extent of overlapping changed, so CQ could be eliminated from the mixture to obtain the binary mixture of LH and FCP using the ratio subtraction method for partially overlapped spectra, or constant value via amplitude difference followed by ratio subtraction, or constant center followed by spectrum subtraction for severely overlapped spectra. Successive ratio subtraction coupled with extended ratio subtraction, constant multiplication, derivative subtraction coupled with constant multiplication, or spectrum subtraction can be applied for the analysis of partially overlapped spectra. On the other hand, severely overlapped spectra can be analyzed by constant center and the novel methods, namely differential dual wavelength (D1 DWL) for CQ and ratio difference and differential derivative ratio (D1 DR) for FCP, while LH was determined by applying constant value via amplitude difference followed by successive ratio subtraction, and successive derivative subtraction. The spectra of the cited drugs can be resolved and their concentrations determined progressively from the same ratio spectrum using the amplitude modulation method. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures, and the methods were successfully applied for the analysis of pharmaceutical formulations containing the cited drugs with no interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with those of the official or reported methods using Student's t-test, F-test, and one-way ANOVA, showing no significant difference with respect to accuracy and precision.
Sharpening method of satellite thermal image based on the geographical statistical model
NASA Astrophysics Data System (ADS)
Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng
2016-04-01
To improve the effectiveness of thermal sharpening in mountainous regions, while paying closer attention to the laws of land surface energy balance, a thermal sharpening method based on a geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of the land surface energy budget and thermal infrared electromagnetic radiation transmission; high-spatial-resolution (57 m) raster layers were then generated for these variables through spatial simulation or by using other raster data as proxies. Based on this, the locally adapted statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables; thus, the high-resolution (57-m) BT image was obtained. This method produced a sharpening result with low error and good visual effect. The method can avoid the blind choice of explanatory variables and remove the dependence on synchronous imagery at visible and near-infrared bands. The influences of the explanatory variable combination, sampling method, and residual error correction on the sharpening results were analyzed in detail, and their influence mechanisms are reported herein.
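A minimal sketch of the scale-invariant regression idea: fit BT against explanatory variables at coarse resolution, then predict BT on the fine grid. A scikit-learn gradient-boosted regressor is used here only as a stand-in for the multivariate adaptive regression splines model of the paper, and all variable names and array shapes are hypothetical.

```python
# Sketch of statistical sharpening: fit BT ~ explanatory variables at coarse
# resolution, then predict BT at fine resolution. A gradient-boosted regressor
# stands in for the paper's MARS model; shapes and variables are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def sharpen_bt(bt_coarse, X_coarse, X_fine):
    """bt_coarse: (n_coarse,) coarse-pixel brightness temperatures.
    X_coarse: (n_coarse, k) explanatory variables aggregated to coarse pixels.
    X_fine:   (n_fine, k)   same variables on the fine grid."""
    model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
    model.fit(X_coarse, bt_coarse)
    bt_fine = model.predict(X_fine)
    # residuals at coarse scale can be inspected or redistributed as a bias check
    residual = bt_coarse - model.predict(X_coarse)
    return bt_fine, residual

# toy example with 2 explanatory variables (e.g. elevation, a vegetation proxy)
rng = np.random.default_rng(0)
Xc = rng.normal(size=(500, 2))
bt_c = 290 + 3 * Xc[:, 0] - 2 * Xc[:, 1] + rng.normal(scale=0.5, size=500)
Xf = rng.normal(size=(5000, 2))
bt_f, res = sharpen_bt(bt_c, Xc, Xf)
print(bt_f.mean(), res.std())
```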
NASA Astrophysics Data System (ADS)
Hansen, Rebecca L.; Lee, Young Jin
2017-09-01
Metabolomics experiments require chemical identifications, often through MS/MS analysis. In mass spectrometry imaging (MSI), this necessitates running several serial tissue sections or using a multiplex data acquisition method. We have previously developed a multiplex MSI method to obtain MS and MS/MS data in a single experiment to acquire more chemical information in less data acquisition time. In this method, each raster step is composed of several spiral steps and each spiral step is used for a separate scan event (e.g., MS or MS/MS). One main limitation of this method is the loss of spatial resolution as the number of spiral steps increases, limiting its applicability for high-spatial resolution MSI. In this work, we demonstrate multiplex MS imaging is possible without sacrificing spatial resolution by the use of overlapping spiral steps, instead of spatially separated spiral steps as used in the previous work. Significant amounts of matrix and analytes are still left after multiple spectral acquisitions, especially with nanoparticle matrices, so that high quality MS and MS/MS data can be obtained on virtually the same tissue spot. This method was then applied to visualize metabolites and acquire their MS/MS spectra in maize leaf cross-sections at 10 μm spatial resolution.
Mondal, Nagendra Nath
2009-01-01
This study presents Monte Carlo Simulation (MCS) results for the detection efficiencies, spatial resolutions and resolving powers of time-of-flight (TOF) PET detector systems. Cerium-activated Lutetium Oxyorthosilicate (Lu2SiO5:Ce, in short LSO), Barium Fluoride (BaF2) and BriLanCe 380 (Cerium-doped Lanthanum tri-Bromide, in short LaBr3) scintillation crystals are studied in view of their good time and energy resolutions and shorter decay times. The results of MCS based on GEANT show that the spatial resolution, detection efficiency and resolving power of LSO are better than those of BaF2 and LaBr3, although it possesses inferior time and energy resolutions. Instead of the conventional position reconstruction method, a newly established image reconstruction method (described in previous work) is applied to produce high-quality images. Validation is an important step to ensure that this imaging method fulfills its intended purposes; it is performed by reconstructing images of two tumors in a brain phantom. PMID:20098551
Garcia-Sucerquia, J; Alvarez-Palacio, D C; Kreuzer, H J
2008-09-10
We report the observation of the Talbot self-imaging effect in high resolution digital in-line holographic microscopy (DIHM) and its application to structural characterization of periodic samples. Holograms of self-assembled monolayers of micron-sized polystyrene spheres are reconstructed at different image planes. The point-source method of DIHM and the consequent high lateral resolution allows the true image (object) plane to be identified. The Talbot effect is then exploited to improve the evaluation of the pitch of the assembly and to examine defects in its periodicity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latychevskaia, Tatiana, E-mail: tatiana@physik.uzh.ch; Fink, Hans-Werner; Chushkin, Yuriy
Coherent diffraction imaging is a high-resolution imaging technique whose potential can be greatly enhanced by applying the extrapolation method presented here. We demonstrate the enhancement in resolution of a non-periodical object reconstructed from an experimental X-ray diffraction record which contains about 10% missing information, including the pixels in the center of the diffraction pattern. A diffraction pattern is extrapolated beyond the detector area and as a result, the object is reconstructed at an enhanced resolution and better agreement with experimental amplitudes is achieved. The optimal parameters for the iterative routine and the limits of the extrapolation procedure are discussed.
Resolution enhancement in digital holography by self-extrapolation of holograms.
Latychevskaia, Tatiana; Fink, Hans-Werner
2013-03-25
It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.
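A conceptual sketch of the pad-then-iterate idea, implemented here as a Gerchberg-style extrapolation: the measured hologram values are re-imposed inside the recorded window on every iteration while a Fourier band-limit constraint gradually fills in the padded surroundings. The band-limit constraint stands in for the wavefront propagation step of the actual method, and all parameters are illustrative.

```python
# Gerchberg-style extrapolation sketch: keep the measured hologram fixed
# inside the recorded window and alternate with a Fourier band-limit
# constraint so the padded surroundings are gradually filled in. This is a
# simplified stand-in for the propagation-based iteration of the paper.
import numpy as np

def self_extrapolate(hologram, pad=128, band_frac=0.25, n_iter=200):
    h, w = hologram.shape
    big = np.zeros((h + 2 * pad, w + 2 * pad))
    big[pad:pad + h, pad:pad + w] = hologram
    known = np.zeros_like(big, dtype=bool)
    known[pad:pad + h, pad:pad + w] = True

    # circular low-pass "band limit" acting as the object-domain constraint
    fy = np.fft.fftfreq(big.shape[0])[:, None]
    fx = np.fft.fftfreq(big.shape[1])[None, :]
    band = (fy ** 2 + fx ** 2) < band_frac ** 2

    est = big.copy()
    for _ in range(n_iter):
        F = np.fft.fft2(est)
        est = np.real(np.fft.ifft2(F * band))   # enforce band limit
        est[known] = big[known]                 # re-impose measured data
    return est

holo = np.random.rand(64, 64)                   # placeholder "hologram"
extended = self_extrapolate(holo)
print(extended.shape)
```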
Abushareeda, Wadha; Lyris, Emmanouil; Kraiem, Suhail; Wahaibi, Aisha Al; Alyazidi, Sameera; Dbes, Najib; Lommen, Arjen; Nielen, Michel; Horvatovich, Peter L; Alsayrafi, Mohammed; Georgakopoulos, Costas
2017-09-15
This paper presents the development and validation of a high-resolution full scan (FS) electron impact ionization (EI) gas chromatography coupled to quadrupole Time-of-Flight mass spectrometry (GC/QTOF) platform for screening anabolic androgenic steroids (AAS) in human urine samples. The World Antidoping Agency (WADA) enlists AAS as prohibited doping agents in sports, and our method has been developed to comply with the qualitative specifications of WADA to be applied for the detection of sports antidoping prohibited substances, mainly for AAS. The method also comprises of the quantitative analysis of the WADA's Athlete Biological Passport (ABP) endogenous steroidal parameters. The applied preparation of urine samples includes enzymatic hydrolysis for the cleavage of the Phase II glucuronide conjugates, generic liquid-liquid extraction and trimethylsilyl (TMS) derivatization steps. Tandem mass spectrometry (MS/MS) acquisition was applied on few selected ions to enhance the specificity and sensitivity of GC/TOF signal of few compounds. The full scan high resolution acquisition of analytical signal, for known and unknown TMS derivatives of AAS provides the antidoping system with a new analytical tool for the detection designer drugs and novel metabolites, which prolongs the AAS detection, after electronic data files' reprocessing. The current method is complementary to the respective liquid chromatography coupled to mass spectrometry (LC/MS) methodology widely used to detect prohibited molecules in sport, which cannot be efficiently ionized with atmospheric pressure ionization interface. Copyright © 2017 Elsevier B.V. All rights reserved.
Chen, Shaoxia; McMullan, Greg; Faruqi, Abdul R; Murshudov, Garib N; Short, Judith M; Scheres, Sjors H W; Henderson, Richard
2013-12-01
Three-dimensional (3D) structure determination by single particle electron cryomicroscopy (cryoEM) involves the calculation of an initial 3D model, followed by extensive iterative improvement of the orientation determination of the individual particle images and the resulting 3D map. Because there is much more noise than signal at high resolution in the images, this creates the possibility of noise reinforcement in the 3D map, which can give a false impression of the resolution attained. The balance between signal and noise in the final map at its limiting resolution depends on the image processing procedure and is not easily predicted. There is a growing awareness in the cryoEM community of how to avoid such over-fitting and over-estimation of resolution. Equally, there has been a reluctance to use the two principal methods of avoidance because they give lower resolution estimates, which some people believe are too pessimistic. Here we describe a simple test that is compatible with any image processing protocol. The test allows measurement of the amount of signal and the amount of noise from overfitting that is present in the final 3D map. We have applied the method to two different sets of cryoEM images of the enzyme beta-galactosidase using several image processing packages. Our procedure involves substituting the Fourier components of the initial particle image stack beyond a chosen resolution by either the Fourier components from an adjacent area of background, or by simple randomisation of the phases of the particle structure factors. This substituted noise thus has the same spectral power distribution as the original data. Comparison of the Fourier Shell Correlation (FSC) plots from the 3D map obtained using the experimental data with that from the same data with high-resolution noise (HR-noise) substituted allows an unambiguous measurement of the amount of overfitting and an accompanying resolution assessment. A simple formula can be used to calculate an unbiased FSC from the two curves, even when a substantial amount of overfitting is present. The approach is software independent. The user is therefore completely free to use any established method or novel combination of methods, provided the HR-noise test is carried out in parallel. Applying this procedure to cryoEM images of beta-galactosidase shows how overfitting varies greatly depending on the procedure, but in the best case shows no overfitting and a resolution of ~6 Å. © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
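A sketch of the two ingredients described above, simplified to a single 2D image: (i) randomising Fourier phases beyond a chosen resolution shell while keeping amplitudes, so the substituted noise has the same spectral power as the data, and (ii) the correction commonly written as FSC_true = (FSC_t - FSC_n)/(1 - FSC_n), applied beyond the substitution shell. Shell definitions and the 2D setting are simplifications of the 3D procedure.

```python
# High-resolution phase randomisation for a single 2D particle image, plus
# the corrected-FSC formula as commonly quoted for HR-noise substitution.
import numpy as np

def randomise_phases_beyond(img, cutoff_frac, rng=None):
    """Randomise Fourier phases at spatial frequencies above cutoff_frac
    (fraction of Nyquist) while keeping amplitudes, i.e. the same spectral
    power distribution as the original data."""
    rng = np.random.default_rng(rng)
    F = np.fft.fftshift(np.fft.fft2(img))
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2) / (min(ny, nx) / 2)
    mask = r > cutoff_frac
    phases = rng.uniform(0, 2 * np.pi, size=F.shape)
    F[mask] = np.abs(F[mask]) * np.exp(1j * phases[mask])
    # taking the real part is a simplification; a Hermitian-symmetric phase
    # assignment would keep the image exactly real
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

def corrected_fsc(fsc_true_data, fsc_noise_sub):
    """Unbiased FSC beyond the substitution shell:
    FSC_true = (FSC_t - FSC_n) / (1 - FSC_n)."""
    return (fsc_true_data - fsc_noise_sub) / (1.0 - fsc_noise_sub)

img = np.random.rand(128, 128)
img_hr_noise = randomise_phases_beyond(img, cutoff_frac=0.5, rng=0)
print(corrected_fsc(np.array([0.9, 0.5]), np.array([0.2, 0.1])))
```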
MUSIC electromagnetic imaging with enhanced resolution for small inclusions
NASA Astrophysics Data System (ADS)
Chen, Xudong; Zhong, Yu
2009-01-01
This paper investigates the influence of the test dipole on the resolution of the multiple signal classification (MUSIC) imaging method applied to the electromagnetic inverse scattering problem of determining the locations of a collection of small objects embedded in a known background medium. Based on the analysis of the induced electric dipoles in eigenstates, an algorithm is proposed to determine the test dipole that generates a pseudo-spectrum with enhanced resolution. The amplitudes in three directions of the optimal test dipole are not necessarily in phase, i.e., the optimal test dipole may not correspond to a physical direction in the real three-dimensional space. In addition, the proposed test-dipole-searching algorithm is able to deal with some special scenarios, due to the shapes and materials of objects, to which the standard MUSIC does not apply.
iCLIP: protein-RNA interactions at nucleotide resolution.
Huppertz, Ina; Attig, Jan; D'Ambrogio, Andrea; Easton, Laura E; Sibley, Christopher R; Sugimoto, Yoichiro; Tajnik, Mojca; König, Julian; Ule, Jernej
2014-02-01
RNA-binding proteins (RBPs) are key players in the post-transcriptional regulation of gene expression. Precise knowledge about their binding sites is therefore critical to unravel their molecular function and to understand their role in development and disease. Individual-nucleotide resolution UV crosslinking and immunoprecipitation (iCLIP) identifies protein-RNA crosslink sites on a genome-wide scale. The high resolution and specificity of this method are achieved by an intramolecular cDNA circularization step that enables analysis of cDNAs that truncate at the protein-RNA crosslink sites. Here, we describe the improved iCLIP protocol and discuss critical optimization and control experiments that are required when applying the method to new RBPs. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Estimation of wind regime from combination of RCM and NWP data in the Gulf of Riga (Baltic Sea)
NASA Astrophysics Data System (ADS)
Sile, T.; Sennikovs, J.; Bethers, U.
2012-04-01
The Gulf of Riga is a semi-enclosed gulf located in the eastern part of the Baltic Sea. Reliable wind climate data are crucial for the development of wind energy. The objective of this study is to create high resolution wind parameter datasets for the Gulf of Riga using climate and numerical weather prediction (NWP) models as an alternative to methods that rely on observations, with the expectation of benefiting from a comparison of different approaches. The models used for the estimation of the wind regime are an ensemble of Regional Climate Models (RCM, ENSEMBLES, 23 runs are considered) and high resolution NWP data. Future projections provided by RCMs are of interest; however, their spatial resolution is unsatisfactory. We describe a method of spatial refinement of RCM data using NWP data to resolve small-scale features. We apply the method of RCM bias correction (Sennikovs and Bethers, 2009), previously used for temperature and precipitation, to wind data and use NWP data instead of observations. The refinement function is calculated using the contemporary climate (1981-2010) and later applied to RCM near-future (2021-2050) projections to produce a dataset with the same resolution as the NWP data. This method corrects for RCM biases that were shown to be present in the initial analysis, and an inter-model statistical analysis was carried out to estimate uncertainty. Using the datasets produced by this method, the current and future projections of wind speed and wind energy density are calculated. Acknowledgments: This research is part of the GORWIND (The Gulf of Riga as a Resource for Wind Energy) project (EU34711). The ENSEMBLES data used in this work were funded by the EU FP6 Integrated Project ENSEMBLES (Contract number 505539), whose support is gratefully acknowledged.
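The cited bias-correction method is not reproduced here; the sketch below uses generic quantile mapping as a stand-in for the refinement function: a transfer function between RCM and NWP wind speeds is fitted over the contemporary period and then applied to future RCM output. The distributions and period labels are synthetic placeholders.

```python
# Quantile-mapping sketch of "refining" coarse RCM wind speeds toward a
# high-resolution NWP climatology: the mapping is fitted on the contemporary
# period and then applied to future RCM output. Generic stand-in for the
# bias-correction method cited in the abstract.
import numpy as np

def fit_quantile_map(rcm_hist, nwp_hist, n_q=101):
    q = np.linspace(0, 1, n_q)
    return np.quantile(rcm_hist, q), np.quantile(nwp_hist, q)

def apply_quantile_map(rcm_future, rcm_q, nwp_q):
    # map each future RCM value through the historical RCM->NWP quantile relation
    return np.interp(rcm_future, rcm_q, nwp_q)

rng = np.random.default_rng(1)
rcm_hist = rng.weibull(2.0, 10_000) * 7.0      # coarse-model winds, 1981-2010
nwp_hist = rng.weibull(2.2, 10_000) * 8.0      # high-res NWP winds, same period
rcm_future = rng.weibull(2.0, 10_000) * 7.5    # RCM winds, 2021-2050

rcm_q, nwp_q = fit_quantile_map(rcm_hist, nwp_hist)
wind_refined = apply_quantile_map(rcm_future, rcm_q, nwp_q)
print(wind_refined.mean())
```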
Liao, Congyu; Chen, Ying; Cao, Xiaozhi; Chen, Song; He, Hongjian; Mani, Merry; Jacob, Mathews; Magnotta, Vincent; Zhong, Jianhui
2017-03-01
To propose a novel reconstruction method using parallel imaging with a low rank constraint to accelerate high resolution multishot spiral diffusion imaging. The undersampled high resolution diffusion data were reconstructed based on a low rank (LR) constraint using similarities between the data of different interleaves from a multishot spiral acquisition. Self-navigated phase compensation using the low resolution phase data in the center of k-space was applied to correct shot-to-shot phase variations induced by motion artifacts. The low rank reconstruction was combined with sensitivity encoding (SENSE) for further acceleration. The efficiency of the proposed joint reconstruction framework, dubbed LR-SENSE, was evaluated through error quantifications and compared with the ℓ1-regularized compressed sensing method and the conventional iterative SENSE method using the same datasets. It was shown that, at the same acceleration factor, the proposed LR-SENSE method had the smallest normalized sum-of-squares errors among all the compared methods in all diffusion weighted images and DTI-derived index maps, when evaluated with different acceleration factors (R = 2, 3, 4) and for all the acquired diffusion directions. Robust high-resolution diffusion-weighted images can be efficiently reconstructed from highly undersampled multishot spiral data with the proposed LR-SENSE method. Magn Reson Med 77:1359-1366, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Adaptive multi-resolution Modularity for detecting communities in networks
NASA Astrophysics Data System (ADS)
Chen, Shi; Wang, Zhi-Zhong; Bao, Mei-Hua; Tang, Liang; Zhou, Ji; Xiang, Ju; Li, Jian-Ming; Yi, Chen-He
2018-02-01
Community structure is a common topological property of complex networks, which has attracted much attention from various fields. Optimizing quality functions for community structures, such as Modularity, is a popular strategy for community detection. Here, we introduce a general definition of Modularity, from which several classical (multi-resolution) Modularities can be derived, and then propose a kind of adaptive (multi-resolution) Modularity that can combine the advantages of the different Modularities. By applying the Modularity to various synthetic and real-world networks, we study the behaviors of the methods, showing the validity and advantages of the multi-resolution Modularity in community detection. The adaptive Modularity, as a kind of multi-resolution method, can naturally solve the first-type limit of Modularity and detect communities at different scales; it can quicken the disconnecting of communities and delay the breakup of communities in heterogeneous networks; and thus it is expected to generate stable community structures in networks more effectively and to have stronger tolerance against the second-type limit of Modularity.
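A minimal sketch of the standard multi-resolution form of Modularity, Q(gamma) = sum_c [ e_c/m - gamma*(d_c/(2m))^2 ], evaluated for a fixed partition; the paper's generalised and adaptive definitions are more elaborate. The graph, partition, and gamma values below are illustrative.

```python
# Multi-resolution Modularity Q(gamma) = sum_c [ e_c/m - gamma*(d_c/(2m))^2 ]
# for a given partition of an undirected graph; gamma < 1 favours larger
# communities, gamma > 1 smaller ones. Standard multi-resolution form only,
# not the paper's generalised definition.
import networkx as nx

def multires_modularity(G, communities, gamma=1.0):
    m = G.number_of_edges()
    Q = 0.0
    for com in communities:
        com = set(com)
        # intra-community edges (each counted once) and total community degree
        e_c = sum(1 for u, v in G.edges(com) if u in com and v in com)
        d_c = sum(G.degree(n) for n in com)
        Q += e_c / m - gamma * (d_c / (2 * m)) ** 2
    return Q

G = nx.karate_club_graph()
parts = [{n for n, d in G.nodes(data=True) if d["club"] == "Mr. Hi"},
         {n for n, d in G.nodes(data=True) if d["club"] == "Officer"}]
for gamma in (0.5, 1.0, 2.0):
    print(gamma, round(multires_modularity(G, parts, gamma), 3))
```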
Artifacts in Digital Coincidence Timing
Moses, W. W.; Peng, Q.
2014-01-01
Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time to digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and selecting the method that minimizes the width of the distribution (i.e., the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the “optimal” method. The purpose of this note is to demonstrate the origin of this artifact and suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization. PMID:25321885
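A toy Monte Carlo illustrating how a nonlinear transfer function can make a coincidence timing distribution look artificially narrow: a deliberately flat region of the (invented) transfer function piles events onto identical output codes, shrinking the measured FWHM below the true spread. The transfer function and parameters are invented for illustration and are not the case analysed in the note.

```python
# Toy Monte Carlo: a strongly nonlinear TDC transfer function (flat over part
# of the clock period) piles events onto identical output codes, so the
# measured coincidence peak looks narrower (smaller FWHM) than the true spread.
import numpy as np

rng = np.random.default_rng(2)
T = 1.0                                    # clock period (arbitrary units)
n = 200_000
t0 = rng.uniform(0, 1000 * T, n)           # detector-1 arrival times
dt_true = rng.normal(0.0, 0.05 * T, n)     # true coincidence time differences
t1, t2 = t0, t0 + dt_true

def digitize(t, nonlinear):
    phase = np.mod(t, T)
    if nonlinear:
        # flat ("dead") regions at both ends of the period collapse codes
        phase = T * np.clip((phase / T - 0.25) / 0.5, 0.0, 1.0)
    return np.floor(t / T) * T + phase

def fwhm(d, bins=401, lim=0.5):
    hist, edges = np.histogram(d, bins=bins, range=(-lim, lim))
    above = np.where(hist >= hist.max() / 2.0)[0]
    return edges[above[-1] + 1] - edges[above[0]]

for nl in (False, True):
    d_meas = digitize(t1, nl) - digitize(t2, nl)
    label = "nonlinear TDC" if nl else "ideal TDC    "
    print(label, "measured FWHM:", round(fwhm(d_meas), 4))
```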
Resolution in forensic microbial genotyping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velsko, S P
2005-08-30
Resolution is a key parameter for differentiating among the large number of strain typing methods that could be applied to pathogens involved in bioterror events or biocrimes. In this report we develop a first-principles analysis of strain typing resolution using a simple mathematical model to provide a basis for the rational design of microbial typing systems for forensic applications. We derive two figures of merit that describe the resolving power and phylogenetic depth of a strain typing system. Rough estimates of these figures-of-merit for MLVA, MLST, IS element, AFLP, hybridization microarrays, and other bacterial typing methods are derived from mutation rate data reported in the literature. We also discuss the general problem of how to construct a "universal" practical typing system that has the highest possible resolution short of whole-genome sequencing, and that is applicable with minimal modification to a wide range of pathogens.
MR-based source localization for MR-guided HDR brachytherapy
NASA Astrophysics Data System (ADS)
Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.
2018-04-01
For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
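A minimal sketch of the phase-correlation step, assuming a simulated artifact template and an acquired 2D image patch; a parabolic peak fit stands in for the paper's subpixel localization operation, and the template and shifts are synthetic.

```python
# Phase-correlation sketch: estimate the (dy, dx) shift of a simulated HDR
# source artifact template within an acquired MR image. The subpixel step is
# a simple parabolic peak fit, standing in for the paper's method.
import numpy as np

def phase_correlate(image, template):
    F1 = np.fft.fft2(image)
    F2 = np.fft.fft2(template)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    # parabolic subpixel refinement along each axis
    def refine(axis_vals):
        c0, c1, c2 = axis_vals
        denom = c0 - 2 * c1 + c2
        return 0.0 if denom == 0 else 0.5 * (c0 - c2) / denom

    dy = peak[0] + refine(corr[[peak[0] - 1, peak[0], (peak[0] + 1) % corr.shape[0]], peak[1]])
    dx = peak[1] + refine(corr[peak[0], [peak[1] - 1, peak[1], (peak[1] + 1) % corr.shape[1]]])
    # shifts larger than half the image wrap around
    dy = dy - corr.shape[0] if dy > corr.shape[0] / 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] / 2 else dx
    return dy, dx

template = np.zeros((64, 64)); template[30:34, 30:34] = 1.0   # simulated artifact
image = np.roll(np.roll(template, 7, axis=0), -5, axis=1)     # "acquired" image
print(phase_correlate(image, template))                       # approx (7, -5)
```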
NASA Astrophysics Data System (ADS)
Hegazy, Maha A.; Abdelwahab, Nada S.; Fayed, Ahmed S.
2015-04-01
A novel method was developed for spectral resolution and determination of a five-component mixture comprising the Vitamin B complex (B1, B6, B12 and Benfotiamine) along with the commonly co-formulated Diclofenac. The method is simple, sensitive and precise, and could efficiently determine the five components by a complementary application of two different techniques. The first is a univariate second-derivative method that was successfully applied for the determination of Vitamin B12. The second is Multivariate Curve Resolution using the Alternating Least Squares method (MCR-ALS), by which an efficient resolution and quantitation of the quaternary, spectrally overlapped Vitamin B1, Vitamin B6, Benfotiamine and Diclofenac sodium were achieved. The effect of different constraints was studied, and the correlations between the true spectra and the estimated spectral profiles were found to be 0.9998, 0.9983, 0.9993 and 0.9933 for B1, B6, Benfotiamine and Diclofenac, respectively. All components were successfully determined in tablets and capsules, and the results were compared to HPLC methods, showing no statistically significant difference.
NASA Astrophysics Data System (ADS)
Apostol, A. I.; Pantelica, A.; Sima, O.; Fugaru, V.
2016-09-01
Non-destructive methods were applied to determine the isotopic composition and the time elapsed since last chemical purification of nine uranium samples. The applied methods are based on measuring gamma and X radiations of uranium samples by high resolution low energy gamma spectrometric system with planar high purity germanium detector and low background gamma spectrometric system with coaxial high purity germanium detector. The "Multigroup γ-ray Analysis Method for Uranium" (MGAU) code was used for the precise determination of samples' isotopic composition. The age of the samples was determined from the isotopic ratio 214Bi/234U. This ratio was calculated from the analyzed spectra of each uranium sample, using relative detection efficiency. Special attention is paid to the coincidence summing corrections that have to be taken into account when performing this type of analysis. In addition, an alternative approach for the age determination using full energy peak efficiencies obtained by Monte Carlo simulations with the GESPECOR code is described.
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-08-18
Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.
KINETIC ENERGY FROM SUPERNOVA FEEDBACK IN HIGH-RESOLUTION GALAXY SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, Christine M.; Bryan, Greg L.; Ostriker, Jeremiah P.
We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a Cartesian grid by directly altering grid cells' mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (∼10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 10⁹ M⊙ dwarf halo. We find that in high-density media (≳50 cm⁻³) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.
Feng, Xiao-Liang; He, Yun-biao; Liang, Yi-Zeng; Wang, Yu-Lin; Huang, Lan-Fang; Xie, Jian-Wei
2013-01-01
Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction with the water distillation method, the volatile components in Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, resolving the two-dimensional original data into mass spectra and chromatograms. 68 of 87 separated constituents in the total ion chromatogram of the volatile components were identified and quantified, accounting for about 87.03% of the total content. The common peaks in leaf were then extracted with the orthogonal projection resolution method. Among the components determined, 52 coexisted in the studied samples, although the relative content of each component differed to some extent. The results showed a fair consistency in their GC-MS fingerprints. This was the first application of the orthogonal projection method to compare different plant parts of Agrimonia eupatoria, and it reduced the burden of qualitative analysis as well as its subjectivity. The obtained results proved the combined approach to be powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria. PMID:24286016
NASA Astrophysics Data System (ADS)
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method of applying a shaped-function signal to increase the dynamic range of a light emitting diode (LED) multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. Auxiliary light in a higher-sensitivity camera region is introduced to increase the A/D quantization levels that fall within the linear response zone of the ADC and to improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, while the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire the images under the active light irradiation. The least squares method is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination was taken as an example. Experiments have shown that the gray-scale resolution and the accuracy of information of the images acquired by the proposed method were both significantly improved. The optimized method opens up avenues for the hyperspectral imaging of biological tissue.
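A sketch of the least-squares separation step under stated assumptions: each pixel's intensity across a frame sequence is modeled as a linear combination of the shaped-function (active) and constant (auxiliary) drive waveforms, and the active-light coefficient recovers the desired image. Waveforms, frame counts, and noise levels are illustrative.

```python
# Least-squares separation sketch: per-pixel intensity over a frame sequence
# is modeled as a*active_waveform + b*auxiliary_waveform, and coefficient "a"
# recovers the image under the shaped active light. All values are synthetic.
import numpy as np

n_frames, h, w = 64, 32, 32
t = np.arange(n_frames)
active = 0.5 + 0.5 * np.sin(2 * np.pi * t / n_frames)   # shaped-function drive
auxiliary = np.ones(n_frames)                            # constant-intensity drive

rng = np.random.default_rng(4)
a_true = rng.random((h, w))          # scene response to active light
b_true = 0.3 * np.ones((h, w))       # response to auxiliary light
frames = (active[:, None, None] * a_true
          + auxiliary[:, None, None] * b_true
          + rng.normal(0, 0.01, (n_frames, h, w)))

# design matrix: [active, auxiliary] columns; solve all pixels in one call
A = np.column_stack([active, auxiliary])
coef, *_ = np.linalg.lstsq(A, frames.reshape(n_frames, -1), rcond=None)
a_est = coef[0].reshape(h, w)
print(float(np.abs(a_est - a_true).max()))
```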
Veatch, Sarah L.; Machta, Benjamin B.; Shelby, Sarah A.; Chiang, Ethan N.; Holowka, David A.; Baird, Barbara A.
2012-01-01
We present an analytical method using correlation functions to quantify clustering in super-resolution fluorescence localization images and electron microscopy images of static surfaces in two dimensions. We use this method to quantify how over-counting of labeled molecules contributes to apparent self-clustering and to calculate the effective lateral resolution of an image. This treatment applies to distributions of proteins and lipids in cell membranes, where there is significant interest in using electron microscopy and super-resolution fluorescence localization techniques to probe membrane heterogeneity. When images are quantified using pair auto-correlation functions, the magnitude of apparent clustering arising from over-counting varies inversely with the surface density of labeled molecules and does not depend on the number of times an average molecule is counted. In contrast, we demonstrate that over-counting does not give rise to apparent co-clustering in double label experiments when pair cross-correlation functions are measured. We apply our analytical method to quantify the distribution of the IgE receptor (FcεRI) on the plasma membranes of chemically fixed RBL-2H3 mast cells from images acquired using stochastic optical reconstruction microscopy (STORM/dSTORM) and scanning electron microscopy (SEM). We find that apparent clustering of FcεRI-bound IgE is dominated by over-counting labels on individual complexes when IgE is directly conjugated to organic fluorophores. We verify this observation by measuring pair cross-correlation functions between two distinguishably labeled pools of IgE-FcεRI on the cell surface using both imaging methods. After correcting for over-counting, we observe weak but significant self-clustering of IgE-FcεRI in fluorescence localization measurements, and no residual self-clustering as detected with SEM. We also apply this method to quantify IgE-FcεRI redistribution after deliberate clustering by crosslinking with two distinct trivalent ligands of defined architectures, and we evaluate contributions from both over-counting of labels and redistribution of proteins. PMID:22384026
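A sketch of a radially averaged pair auto-correlation g(r) computed from a 2D image of localised points via FFTs, normalised by the window autocorrelation so that a random distribution gives g(r) near 1; binning, edge handling, and the cross-correlation case are simplified relative to the published analysis.

```python
# Pair auto-correlation g(r) for a 2D image of localised points, computed
# with FFTs and normalised by the window autocorrelation. Binning and edge
# handling are simplified relative to the published method.
import numpy as np

def pair_autocorrelation(img, mask=None, r_max=50):
    if mask is None:
        mask = np.ones_like(img)
    density = img.sum() / mask.sum()
    # autocorrelations via the Wiener-Khinchin theorem
    ac_img = np.real(np.fft.ifft2(np.abs(np.fft.fft2(img)) ** 2))
    ac_win = np.real(np.fft.ifft2(np.abs(np.fft.fft2(mask)) ** 2))
    g2d = np.fft.fftshift(ac_img) / (np.fft.fftshift(ac_win) * density ** 2 + 1e-12)

    # radial average around zero lag
    cy, cx = np.array(g2d.shape) // 2
    y, x = np.indices(g2d.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    g_r = np.array([g2d[r == k].mean() for k in range(1, r_max)])
    return np.arange(1, r_max), g_r

rng = np.random.default_rng(3)
img = (rng.random((256, 256)) < 0.01).astype(float)   # random (unclustered) points
r, g = pair_autocorrelation(img)
print(round(g[5:].mean(), 2))                          # near 1 for a random distribution
```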
Brun, E; Grandl, S; Sztrókay-Gaul, A; Barbone, G; Mittone, A; Gasilov, S; Bravin, A; Coan, P
2014-11-01
Phase contrast computed tomography has emerged as an imaging method, which is able to outperform present day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. The authors demonstrate that applying the watershed viscous transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques, will represent a valuable multistep procedure to be used in future medical diagnostic applications.
40 CFR 1065.275 - N2O measurement devices.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for interpretation of infrared spectra. For example, EPA Test Method 320 is considered a valid method... and length to achieve adequate resolution of the N2O peak for analysis. Examples of acceptable columns....550(b) that would otherwise apply. For example, you may perform a span gas measurement before and...
40 CFR 1065.275 - N2O measurement devices.
Code of Federal Regulations, 2011 CFR
2011-07-01
... for interpretation of infrared spectra. For example, EPA Test Method 320 is considered a valid method... and length to achieve adequate resolution of the N2O peak for analysis. Examples of acceptable columns....550(b) that would otherwise apply. For example, you may perform a span gas measurement before and...
Fast image interpolation via random forests.
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
2015-10-01
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring low computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computational time.
Chen, Dawei; Zhang, Yiping; Miao, Hong; Zhao, Yunfeng; Wu, Yongning
2015-11-11
A novel dispersive micro solid phase extraction (DMSPE) method based on a polymer cation exchange material (PCX) was applied to the simultaneous determination of the 30 triazine herbicides in drinking water with ultrahigh-performance liquid chromatography-high-resolution mass spectrometric detection. Drinking water samples were acidified with formic acid, and then triazines were adsorbed by the PCX sorbent. Subsequently, the analytes were eluted with ammonium hydroxide/acetonitrile. The chromatographic separation was performed on an HSS T3 column using water (4 mM ammonium formate and 0.1% formic acid) and acetonitrile (0.1% formic acid) as the mobile phase. The method achieved LODs of 0.2-30.0 ng/L for the 30 triazines, with recoveries in the range of 70.5-112.1%, and the precision of the method was better than 12.7%. These results indicated that the proposed method had the advantages of convenience and high efficiency when applied to the analysis of the 30 triazines in drinking water.
Elzanfaly, Eman S; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A
2015-12-05
A comparative study was established between two signal processing techniques showing the theoretical algorithm for each method and making a comparison between them to indicate the advantages and limitations. The methods under study are Numerical Differentiation (ND) and Continuous Wavelet Transform (CWT). These methods were studied as spectrophotometric resolution tools for simultaneous analysis of binary and ternary mixtures. To present the comparison, the two methods were applied for the resolution of Bisoprolol (BIS) and Hydrochlorothiazide (HCT) in their binary mixture and for the analysis of Amlodipine (AML), Aliskiren (ALI) and Hydrochlorothiazide (HCT) as an example for ternary mixtures. By comparing the results in laboratory prepared mixtures, it was proven that CWT technique is more efficient and advantageous in analysis of mixtures with severe overlapped spectra than ND. The CWT was applied for quantitative determination of the drugs in their pharmaceutical formulations and validated according to the ICH guidelines where accuracy, precision, repeatability and robustness were found to be within the acceptable limit. Copyright © 2015 Elsevier B.V. All rights reserved.
Automated Solar Flare Detection and Feature Extraction in High-Resolution and Full-Disk Hα Images
NASA Astrophysics Data System (ADS)
Yang, Meng; Tian, Yu; Liu, Yangyi; Rao, Changhui
2018-05-01
In this article, an automated solar flare detection method applied to both full-disk and local high-resolution Hα images is proposed. An adaptive gray threshold and an area threshold are used to segment the flare region. Features of each detected flare event are extracted, e.g. the start, peak, and end time, the importance class, and the brightness class. Experimental results have verified that the proposed method can obtain more stable and accurate segmentation results than previous works on full-disk images from Big Bear Solar Observatory (BBSO) and Kanzelhöhe Observatory for Solar and Environmental Research (KSO), and satisfactory segmentation results on high-resolution images from the Goode Solar Telescope (GST). Moreover, the extracted flare features correlate well with the data given by KSO. The method may enable more sophisticated statistical analyses of Hα solar flares.
How to model supernovae in simulations of star and galaxy formation
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.; Wetzel, Andrew; Kereš, Dušan; Faucher-Giguère, Claude-André; Quataert, Eliot; Boylan-Kolchin, Michael; Murray, Norman; Hayward, Christopher C.; El-Badry, Kareem
2018-06-01
We study the implementation of mechanical feedback from supernovae (SNe) and stellar mass loss in galaxy simulations, within the Feedback In Realistic Environments (FIRE) project. We present the FIRE-2 algorithm for coupling mechanical feedback, which can be applied to any hydrodynamics method (e.g. fixed-grid, moving-mesh, and mesh-less methods), and black hole as well as stellar feedback. This algorithm ensures manifest conservation of mass, energy, and momentum, and avoids imprinting `preferred directions' on the ejecta. We show that it is critical to incorporate both momentum and thermal energy of mechanical ejecta in a self-consistent manner, accounting for SNe cooling radii when they are not resolved. Using idealized simulations of single SN explosions, we show that the FIRE-2 algorithm, independent of resolution, reproduces converged solutions in both energy and momentum. In contrast, common `fully thermal' (energy-dump) or `fully kinetic' (particle-kicking) schemes in the literature depend strongly on resolution: when applied at mass resolution ≳100 M⊙, they diverge by orders of magnitude from the converged solution. In galaxy-formation simulations, this divergence leads to orders-of-magnitude differences in galaxy properties, unless those models are adjusted in a resolution-dependent way. We show that all models that individually time-resolve SNe converge to the FIRE-2 solution at sufficiently high resolution (<100 M⊙). However, in both idealized single-SN simulations and cosmological galaxy-formation simulations, the FIRE-2 algorithm converges much faster than other sub-grid models without re-tuning parameters.
Imaging the cell surface and its organization down to the level of single molecules.
Klenerman, David; Shevchuk, Andrew; Novak, Pavel; Korchev, Yuri E; Davis, Simon J
2013-02-05
Determining the organization of key molecules on the surface of live cells in two dimensions, and how this changes during biological processes such as signalling, is a major challenge in cell biology and requires methods with nanoscale spatial resolution and high temporal resolution. Here, we review biophysical tools, based on scanning ion conductance microscopy and single-molecule fluorescence and the combination of both of these methods, which have recently been developed to address these issues. We then give examples of how these methods have been applied to provide new insights into cell membrane organization and function, and discuss some of the issues that will need to be addressed to further exploit these methods in the future.
NASA Astrophysics Data System (ADS)
Ma, Manyou; Rohling, Robert; Lampe, Lutz
2017-03-01
Synthetic transmit aperture beamforming is an increasingly used method to improve resolution in biomedical ultrasound imaging. Synthetic aperture sequential beamforming (SASB) is an implementation of this concept which features a relatively low computation complexity. Moreover, it can be implemented in a dual-stage architecture, where the first stage only applies simple single receive-focused delay-and-sum (srDAS) operations, while the second, more complex stage is performed either locally or remotely using more powerful processing. However, like traditional DAS-based beamforming methods, SASB is susceptible to inaccurate speed-of-sound (SOS) information. In this paper, we show how SOS estimation can be implemented using the srDAS beamformed image, and integrated into the dual-stage implementation of SASB, in an effort to obtain high resolution images with relatively low-cost hardware. Our approach builds on an existing per-channel radio frequency data-based direct estimation method, and applies an iterative refinement of the estimate. We use this estimate for SOS compensation, without the need to repeat the first stage beamforming. The proposed and previous methods are tested on both simulation and experimental studies. The accuracy of our SOS estimation method is on average 0.38% in simulation studies and 0.55% in phantom experiments, when the underlying SOS in the media is within the range 1450-1620 m/s. Using the estimated SOS, the beamforming lateral resolution of SASB is improved on average 52.6% in simulation studies and 50.0% in phantom experiments.
NASA Astrophysics Data System (ADS)
Welle, Paul D.; Mauter, Meagan S.
2017-09-01
This work introduces a generalizable approach for estimating the field-scale agricultural yield losses due to soil salinization. When integrated with regional data on crop yields and prices, this model provides high-resolution estimates for revenue losses over large agricultural regions. These methods account for the uncertainty inherent in model inputs derived from satellites, experimental field data, and interpreted model results. We apply this method to estimate the effect of soil salinity on agricultural outputs in California, performing the analysis with both high-resolution (i.e. field-scale) and low-resolution (i.e. county-scale) data sources to highlight the importance of spatial resolution in agricultural analysis. We estimate that soil salinity reduced agricultural revenues by $3.7 billion ($1.7-7.0 billion) in 2014, amounting to 8.0 million tons of lost production relative to soil salinities below the crop-specific thresholds. When using low-resolution data sources, we find that the costs of salinization are underestimated by a factor of three. These results highlight the need for high-resolution data in agro-environmental assessment as well as the challenges associated with their integration.
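For readers who want to see the shape of such a yield-loss calculation, here is a minimal sketch built on the widely used threshold-slope (Maas-Hoffman type) salinity response, which is one way to encode crop-specific thresholds; the threshold, slope, and per-field economics are placeholder assumptions, not the paper's calibrated model.

```python
def relative_yield(ec_e, threshold_ds_m=2.0, slope_pct_per_ds_m=9.5):
    """Relative yield (0-1) for root-zone salinity ec_e in dS/m (placeholder crop parameters)."""
    loss = max(0.0, ec_e - threshold_ds_m) * slope_pct_per_ds_m / 100.0
    return max(0.0, 1.0 - loss)

def field_revenue_loss(ec_e, area_ha, potential_yield_t_ha, price_per_t, **kw):
    """Revenue lost on one field relative to its salinity-free potential (assumed inputs)."""
    return area_ha * potential_yield_t_ha * price_per_t * (1.0 - relative_yield(ec_e, **kw))
```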
Fast high resolution reconstruction in multi-slice and multi-view cMRI
NASA Astrophysics Data System (ADS)
Velasco Toledo, Nelson; Romero Castro, Eduardo
2015-01-01
Cardiac magnetic resonance imaging (cMRI) is a useful tool in diagnosis, prognosis and research since it functionally tracks the heart structure. Although useful, this imaging technique is limited in spatial resolution because the heart is a constantly moving organ; other uncontrolled conditions, such as patient movements and volumetric changes during the apnea periods when data are acquired, further limit the time available to capture high-quality information. This paper presents a very fast and simple strategy to reconstruct high-resolution 3D images from a set of low-resolution series of 2D images. The strategy is based on an information reallocation algorithm which uses the DICOM header to relocate voxel intensities in a regular grid. An interpolation method is then applied to fill empty places with estimated data; the interpolation resamples the low-resolution information to estimate the missing information. As a final step, a Gaussian filter denoises the result. A reconstructed image evaluation is performed using a super-resolution reconstructed image as reference. The evaluation reveals that the method maintains the general heart structure with a small loss of detailed information (edge sharpening and blurring); some artifacts related to input information quality are detected. The proposed method requires little time and few computational resources.
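A minimal sketch of the described pipeline is given below, assuming scattered voxel world coordinates (as would be derived from the DICOM headers), a regular target grid, linear interpolation to fill empty cells, and a final Gaussian filter; the grid size, interpolation order, and smoothing width are illustrative choices rather than the authors' settings.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

def reconstruct_volume(world_xyz, intensities, grid_shape=(128, 128, 128), sigma=0.7):
    """world_xyz: (n, 3) voxel positions from the DICOM headers; intensities: (n,)."""
    mins, maxs = world_xyz.min(axis=0), world_xyz.max(axis=0)
    axes = [np.linspace(mins[d], maxs[d], grid_shape[d]) for d in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    # Reallocation + interpolation: scattered samples onto the regular grid.
    vol = griddata(world_xyz, intensities, (gx, gy, gz), method="linear")
    vol = np.nan_to_num(vol)                     # voxels outside the convex hull stay 0
    return gaussian_filter(vol, sigma=sigma)     # final denoising step
```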
NASA Astrophysics Data System (ADS)
Quiers, M.; Perrette, Y.; Etienne, D.; Develle, A. L.; Jacq, K.
2017-12-01
The use of organic proxies is increasing in paleoenvironmental reconstructions from natural archives. Major advances have been achieved through the development of new, highly informative molecular proxies, usually linked to specific compounds. While studies focused on targeted compounds offer a high degree of information, advances on bulk organic matter remain limited. However, this bulk fraction is the main contributor to the carbon cycle and has been shown to drive the transfer and recording of many mineral and organic compounds. The development of targeted proxies needs complementary information on bulk organic matter to understand biases linked to controlling factors or analytical methods, and to provide a robust interpretation. Fluorescence methods have often been employed to characterize and quantify organic matter. However, these techniques are mainly developed for liquid samples, inducing loss of material and resolution when working on natural archives (either stalagmites or sediments). High-resolution solid-phase fluorescence (SPF) was developed on speleothems. This method now allows the quality and quantity of organic matter to be analysed, provided procedures to constrain the optical density are adopted. In fact, a calibration method using liquid-phase fluorescence (LPF) was developed for speleothems, allowing organic carbon to be quantified at high resolution. We report here an application of such SPF/LPF measurements to lake sediments. In order to avoid sediment matrix effects on the fluorescence signal, a calibration using LPF measurements was performed. First results using this method provided a high-resolution record of organic matter quality for different organic matter compounds (humic-like, protein-like and chlorophyll-like compounds) along the sediment core. High-resolution organic matter fluxes are then obtained by applying pragmatic chemometric models (non-linear models, partial least squares models) to the high-resolution fluorescence data. The SPF method can be considered a promising tool for high-resolution records of organic matter quality and quantity. Potential applications of this method will be discussed (lake ecosystem dynamics, changes in trophic levels).
3D super-resolution imaging with blinking quantum dots
Wang, Yong; Fruhwirth, Gilbert; Cai, En; Ng, Tony; Selvin, Paul R.
2013-01-01
Quantum dots are promising candidates for single molecule imaging due to their exceptional photophysical properties, including their intense brightness and resistance to photobleaching. They are also notorious for their blinking. Here we report a novel way to take advantage of quantum dot blinking to develop an imaging technique in three-dimensions with nanometric resolution. We first applied this method to simulated images of quantum dots, and then to quantum dots immobilized on microspheres. We achieved imaging resolutions (FWHM) of 8–17 nm in the x-y plane and 58 nm (on coverslip) or 81 nm (deep in solution) in the z-direction, approximately 3–7 times better than what has been achieved previously with quantum dots. This approach was applied to resolve the 3D distribution of epidermal growth factor receptor (EGFR) molecules at, and inside of, the plasma membrane of resting basal breast cancer cells. PMID:24093439
Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification
NASA Astrophysics Data System (ADS)
Gao, G.; Zhang, M.; Gu, Y.
2017-05-01
Multi-temporal remote sensing image classification is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of image information. With spatial resolution improvement, the "pepper and salt" effect appears and classification results are affected when pixelwise classification algorithms are applied to high-resolution satellite images, in which the spatial relationship among pixels is ignored. For classifying multi-temporal high-resolution images with limited labelled samples, spectral drift and the "pepper and salt" problem, an object-based manifold alignment method is proposed. Firstly, the multi-temporal multispectral images are segmented into superpixels by simple linear iterative clustering (SLIC). Secondly, features obtained from the superpixels are formed into vectors. Thirdly, a majority-voting manifold alignment method aimed at the high-resolution problem is proposed, mapping the vector data to an alignment space. Finally, all the data in the alignment space are classified using the KNN method. Multi-temporal images from different areas and from the same area are both considered in this paper. In the experiments, 2 groups of multi-temporal HR images collected by the China GF1 and GF2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "pepper and salt" problem.
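The object-based part of the pipeline can be sketched as follows (the manifold alignment and majority-voting steps are omitted for brevity); the superpixel count, the mean/standard-deviation features, and the neighbour count are assumptions for illustration, not the authors' choices.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

def superpixel_features(image, n_segments=1000):
    """Cut a multispectral image into SLIC superpixels and describe each one."""
    segments = slic(image, n_segments=n_segments, compactness=10)
    feats = []
    for label in np.unique(segments):
        pixels = image[segments == label]             # (n_pixels, n_bands)
        feats.append(np.hstack([pixels.mean(axis=0), pixels.std(axis=0)]))
    return segments, np.array(feats)

def classify_objects(train_feats, train_labels, test_feats, k=5):
    """KNN classification of per-object feature vectors."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
    return knn.predict(test_feats)
```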
Weighted least squares phase unwrapping based on the wavelet transform
NASA Astrophysics Data System (ADS)
Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia
2007-01-01
The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. This method usually leads to a large sparse linear equation system. The Gauss-Seidel relaxation iterative method is usually used to solve this large linear system. However, this method is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm for improving the convergence rate, but it needs an additional weight restriction operator, which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition can be obtained. Fast convergence in the separate coarse resolution levels speeds up the overall system convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method.
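For orientation, the sketch below shows the building blocks the paper starts from: the Poisson-type right-hand side formed from wrapped phase differences, and one Gauss-Seidel relaxation sweep of the five-point system (unweighted case, simplified boundary handling); the wavelet decomposition that accelerates convergence is not reproduced here.

```python
import numpy as np

def wrap(x):
    """Wrap values into (-pi, pi]."""
    return (x + np.pi) % (2 * np.pi) - np.pi

def poisson_rhs(psi):
    """Divergence of the wrapped phase differences (right-hand side rho)."""
    dx = wrap(np.diff(psi, axis=1, append=psi[:, -1:]))
    dy = wrap(np.diff(psi, axis=0, append=psi[-1:, :]))
    return (dx - np.roll(dx, 1, axis=1)) + (dy - np.roll(dy, 1, axis=0))

def gauss_seidel_sweep(phi, rho):
    """One in-place Gauss-Seidel sweep of the five-point Laplacian system."""
    ny, nx = phi.shape
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            phi[i, j] = 0.25 * (phi[i + 1, j] + phi[i - 1, j] +
                                phi[i, j + 1] + phi[i, j - 1] - rho[i, j])
    return phi
```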
A self-trained classification technique for producing 30 m percent-water maps from Landsat data
Rover, Jennifer R.; Wylie, Bruce K.; Ji, Lei
2010-01-01
Small bodies of water can be mapped with moderate-resolution satellite data using methods where water is mapped as subpixel fractions using field measurements or high-resolution images as training datasets. A new method, developed from a regression-tree technique, uses a 30 m Landsat image for training the regression tree that, in turn, is applied to the same image to map subpixel water. The self-trained method was evaluated by comparing the percent-water map with three other maps generated from established percent-water mapping methods: (1) a regression-tree model trained with a 5 m SPOT 5 image, (2) a regression-tree model based on endmembers and (3) a linear unmixing classification technique. The results suggest that subpixel water fractions can be accurately estimated when high-resolution satellite data or intensively interpreted training datasets are not available, which increases our ability to map small water bodies or small changes in lake size at a regional scale.
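A minimal sketch of the self-training idea follows, under the assumption that confidently pure water and land pixels can be labelled from a simple water index computed on the same image; the index, its cut-offs, the band ordering, and the tree depth are illustrative, not the authors' regression-tree setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def self_trained_water_fraction(bands):
    """bands: (n_pixels, n_bands) reflectances from the same Landsat scene."""
    green, nir = bands[:, 1], bands[:, 3]          # assumed band ordering
    ndwi = (green - nir) / (green + nir + 1e-9)    # simple water index
    water = ndwi > 0.3                             # confidently pure water
    land = ndwi < -0.1                             # confidently pure land
    X = np.vstack([bands[water], bands[land]])
    y = np.hstack([np.ones(water.sum()), np.zeros(land.sum())])
    tree = DecisionTreeRegressor(max_depth=8).fit(X, y)
    return np.clip(tree.predict(bands), 0.0, 1.0)  # per-pixel water fraction
```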
Automated structure refinement of macromolecular assemblies from cryo-EM maps using Rosetta.
Wang, Ray Yu-Ruei; Song, Yifan; Barad, Benjamin A; Cheng, Yifan; Fraser, James S; DiMaio, Frank
2016-09-26
Cryo-EM has revealed the structures of many challenging yet exciting macromolecular assemblies at near-atomic resolution (3-4.5Å), providing biological phenomena with molecular descriptions. However, at these resolutions, accurately positioning individual atoms remains challenging and error-prone. Manually refining thousands of amino acids - typical in a macromolecular assembly - is tedious and time-consuming. We present an automated method that can improve the atomic details in models that are manually built in near-atomic-resolution cryo-EM maps. Applying the method to three systems recently solved by cryo-EM, we are able to improve model geometry while maintaining the fit-to-density. Backbone placement errors are automatically detected and corrected, and the refinement shows a large radius of convergence. The results demonstrate that the method is amenable to structures with symmetry, of very large size, and containing RNA as well as covalently bound ligands. The method should streamline the cryo-EM structure determination process, providing accurate and unbiased atomic structure interpretation of such maps.
Li, Jiansen; Song, Ying; Zhu, Zhen; Zhao, Jun
2017-05-01
The dual-dictionary learning (Dual-DL) method utilizes both a low-resolution dictionary and a high-resolution dictionary, which are co-trained for sparse coding and image updating, respectively. It can effectively exploit a priori knowledge regarding the typical structures, specific features, and local details of the training-set images. This prior knowledge helps to improve the reconstruction quality greatly. The method has been successfully applied in magnetic resonance (MR) image reconstruction. However, it relies heavily on the training sets, and the dictionaries are fixed and non-adaptive. In this research, we improve Dual-DL by using self-adaptive dictionaries. The low- and high-resolution dictionaries are updated correspondingly along with the image updating stage to ensure their self-adaptivity. The updated dictionaries incorporate both the prior information of the training sets and the test image directly, and both dictionaries feature improved adaptability. Experimental results demonstrate that the proposed method can efficiently and significantly improve the quality and robustness of MR image reconstruction.
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2014-01-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021
Super-Resolution Enhancement From Multiple Overlapping Images: A Fractional Area Technique
NASA Astrophysics Data System (ADS)
Michaels, Joshua A.
With the availability of large quantities of relatively low-resolution data from several decades of spaceborne imaging, methods of creating an accurate, higher-resolution image from multiple lower-resolution images (i.e. super-resolution) have been developed almost since such imagery has existed. The fractional-area super-resolution technique developed in this thesis has never before been documented. Satellite orbits, like Landsat's, have a quantifiable variation, which means each image is not centered on the exact same spot more than once, and the overlapping information from these multiple images may be used for super-resolution enhancement. By splitting a single initial pixel into many smaller, desired pixels, a relationship can be created between them using the ratio of the area within the initial pixel. The ideal goal for this technique is to obtain smaller pixels with exact values and no error, yielding a better potential result than methods that yield interpolated pixel values with consequent loss of spatial resolution. A Fortran 95 program was developed to perform all calculations associated with the fractional-area super-resolution technique. The fractional areas are calculated using traditional trigonometry and coordinate geometry, and the Linear Algebra Package (LAPACK; Anderson et al., 1999) is used to solve for the higher-resolution pixel values. In order to demonstrate proof of concept, a synthetic dataset was created using the intrinsic Fortran random number generator and Adobe Illustrator CS4 (for geometry). To test the real-life application, digital pictures were taken of a large US geological map under fluorescent lighting with a Sony DSC-S600 point-and-shoot camera on a tripod. While the fractional-area super-resolution technique works in perfect synthetic conditions, it did not produce a reasonable or consistent solution in the digital photograph enhancement test. The prohibitive amount of processing time (up to 60 days for a relatively small enhancement area) severely limits the practical usefulness of fractional-area super-resolution. Fractional-area super-resolution is very sensitive to relative input image co-registration, which must be accurate to a sub-pixel degree. However, if input conditions permit, the technique could be applied as a "pinpoint" super-resolution method, restricted to very small areas with very good input image co-registration.
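The core linear-system formulation can be sketched as follows, with NumPy's least-squares routine standing in for the LAPACK solver mentioned above and a toy box-overlap geometry standing in for the orbit-derived pixel footprints.

```python
import numpy as np

def overlap_area(coarse_box, fine_box):
    """Overlap area of two axis-aligned boxes given as (x0, y0, x1, y1)."""
    (x0, y0, x1, y1), (u0, v0, u1, v1) = coarse_box, fine_box
    return max(0.0, min(x1, u1) - max(x0, u0)) * max(0.0, min(y1, v1) - max(y0, v0))

def solve_fine_pixels(coarse_values, coarse_boxes, fine_boxes):
    """Each coarse value is an area-weighted average of the fine pixels it covers."""
    A = np.array([[overlap_area(c, f) for f in fine_boxes] for c in coarse_boxes])
    A /= A.sum(axis=1, keepdims=True)              # rows become area fractions
    x, *_ = np.linalg.lstsq(A, np.asarray(coarse_values, dtype=float), rcond=None)
    return x                                       # estimated fine-pixel values
```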
Wang, Qian; Liu, Zhen; Ziegler, Sibylle I; Shi, Kuangyu
2015-07-07
Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by (18)F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [(18)F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
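As a rough illustration of the classification step (not the authors' implementation), the sketch below trains an RBF-kernel support vector machine on per-pixel features with simulation-derived labels and then picks, within each measured track cluster, the pixel with the highest decision score as the primary fired pixel; the feature construction itself is assumed rather than specified here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_primary_pixel_classifier(features, is_primary):
    """features: (n_pixels, n_features) topological/energy features;
    is_primary: 0/1 labels taken from the Geant4 ground truth."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    return clf.fit(features, is_primary)

def pick_primary(clf, cluster_features):
    """Within one measured track cluster, keep the highest-scoring pixel."""
    scores = clf.decision_function(cluster_features)
    return int(np.argmax(scores))
```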
NASA Astrophysics Data System (ADS)
Zakhnini, Abdelhamid; Kulenkampff, Johannes; Sauerzapf, Sophie; Pietrzyk, Uwe; Lippmann-Pipke, Johanna
2013-08-01
Understanding conservative fluid flow and reactive tracer transport in soils and rock formations requires quantitative transport visualization methods in 3D+t. After a decade of research and development we established GeoPET as a non-destructive method with unrivalled sensitivity and selectivity, with due spatial and temporal resolution, by applying Positron Emission Tomography (PET), a nuclear medicine imaging method, to dense rock material. Requirements for reaching the physical limit of image resolution of nearly 1 mm are (a) a high-resolution PET camera, like our ClearPET scanner (Raytest), and (b) appropriate correction methods for scatter and attenuation of 511 keV photons in the dense geological material. The latter are far more significant in dense geological material than in human and small-animal body tissue (water). Here we present data from Monte Carlo simulations (MCS) reflecting selected GeoPET experiments. The MCS consider all involved nuclear physical processes of the measurement with the ClearPET system and allow us to quantify the sensitivity of the method and the scatter fractions in geological media as a function of material (quartz, Opalinus clay and anhydrite compared to water), PET isotope (18F, 58Co and 124I), and geometric system parameters. The synthetic data sets obtained by MCS are the basis for detailed performance assessment studies allowing for image quality improvements. A scatter correction method is applied exemplarily by subtracting projections of simulated scattered coincidences from experimental data sets prior to image reconstruction with an iterative reconstruction process.
Reconstruction of full high-resolution HSQC using signal split in aliased spectra.
Foroozandeh, Mohammadali; Jeannerat, Damien
2015-11-01
Resolution enhancement is a long-sought goal in NMR spectroscopy. In conventional multidimensional NMR experiments, such as the (1)H-(13)C HSQC, the resolution in the indirect dimensions is typically 100 times lower than in 1D spectra because it is limited by the experimental time. Reducing the spectral window can significantly increase the resolution, but at the cost of ambiguities in frequencies as a result of spectral aliasing. Fortunately, this information is not completely lost and can be retrieved using methods in which chemical shifts are encoded in the aliased spectra and decoded after processing to reconstruct a high-resolution (1)H-(13)C HSQC spectrum with full spectral width and a resolution similar to that of 1D spectra. We applied a new reconstruction method, RHUMBA (reconstruction of high-resolution using multiplet built on aliased spectra), to spectra obtained from the differential evolution for non-ambiguous aliasing HSQC and the new AMNA (additional modulation for non-ambiguous aliasing) HSQC experiments. The reconstructed spectra significantly facilitate both manual and automated spectral analyses and structure elucidation based on heteronuclear 2D experiments. The resolution is enhanced by two orders of magnitude without the usual complications due to spectral aliasing. Copyright © 2015 John Wiley & Sons, Ltd.
Live CLEM imaging to analyze nuclear structures at high resolution.
Haraguchi, Tokuko; Osakada, Hiroko; Koujin, Takako
2015-01-01
Fluorescence microscopy (FM) and electron microscopy (EM) are powerful tools for observing molecular components in cells. FM can provide temporal information about cellular proteins and structures in living cells. EM provides nanometer resolution images of cellular structures in fixed cells. We have combined FM and EM to develop a new method of correlative light and electron microscopy (CLEM), called "Live CLEM." In this method, the dynamic behavior of specific molecules of interest is first observed in living cells using fluorescence microscopy (FM) and then cellular structures in the same cell are observed using electron microscopy (EM). Following image acquisition, FM and EM images are compared to enable the fluorescent images to be correlated with the high-resolution images of cellular structures obtained using EM. As this method enables analysis of dynamic events involving specific molecules of interest in the context of specific cellular structures at high resolution, it is useful for the study of nuclear structures including nuclear bodies. Here we describe Live CLEM that can be applied to the study of nuclear structures in mammalian cells.
Multiview boosting digital pathology analysis of prostate cancer.
Kwak, Jin Tae; Hewitt, Stephen M
2017-04-01
Various digital pathology tools have been developed to aid in analyzing tissues and improving cancer pathology. The multi-resolution nature of cancer pathology, however, has not been fully analyzed and utilized. Here, we develop an automated, cooperative, and multi-resolution method for improving prostate cancer diagnosis. Digitized tissue specimen images are obtained from 5 tissue microarrays (TMAs). The TMAs include 70 benign and 135 cancer samples (TMA1), 74 benign and 89 cancer samples (TMA2), 70 benign and 115 cancer samples (TMA3), 79 benign and 82 cancer samples (TMA4), and 72 benign and 86 cancer samples (TMA5). The tissue specimen images are segmented using intensity- and texture-based features. Using the segmentation results, a number of morphological features from lumens and epithelial nuclei are computed to characterize tissues at different resolutions. Applying a multiview boosting algorithm, tissue characteristics obtained from differing resolutions are cooperatively combined to achieve accurate cancer detection. In segmenting prostate tissues, the multiview boosting method achieved ≥0.97 AUC using TMA1. For detecting cancers, the multiview boosting method achieved an AUC of 0.98 (95% CI: 0.97-0.99) as trained on TMA2 and tested on TMA3, TMA4, and TMA5. The proposed method was superior to single-view approaches that utilize features from a single resolution or merge features from all the resolutions. Moreover, the performance of the proposed method was insensitive to the choice of the training dataset. Trained on TMA3, TMA4, and TMA5, the proposed method obtained an AUC of 0.97 (95% CI: 0.96-0.98), 0.98 (95% CI: 0.96-0.99), and 0.97 (95% CI: 0.96-0.98), respectively. The multiview boosting method is capable of integrating information from multiple resolutions in an effective and efficient fashion and identifying cancers with high accuracy. The multiview boosting method holds great potential for improving digital pathology tools and research. Copyright © 2017 Elsevier B.V. All rights reserved.
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
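For reference, here is a minimal sketch of the five-point finite-difference scheme for Poisson's equation on a uniform grid (grid spacing h, homogeneous Dirichlet boundary, and a simple Jacobi-style iteration instead of the error-control machinery described in the abstract).

```python
import numpy as np

def solve_poisson(f, h, n_iter=5000):
    """Five-point scheme for laplacian(u) = f with u = 0 on the boundary."""
    u = np.zeros_like(f, dtype=float)
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] - h * h * f[1:-1, 1:-1])
    return u
```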
Nanoscale Chemical Imaging of Zeolites Using Atom Probe Tomography.
Weckhuysen, Bert Marc; Schmidt, Joel; Peng, Linqing; Poplawsky, Jonathan
2018-05-02
Understanding structure-composition-property relationships in zeolite-based materials is critical to engineering improved solid catalysts. However, this can be difficult to realize as even single zeolite crystals can exhibit heterogeneities spanning several orders of magnitude, with consequences for e.g. reactivity, diffusion as well as stability. Great progress has been made in characterizing these porous solids using tomographic techniques, though each method has an ultimate spatial resolution limitation. Atom Probe Tomography (APT) is the only technique so far capable of producing 3-D compositional reconstructions with sub-nm-scale resolution, and has only recently been applied to zeolite-based catalysts. Herein, we discuss the use of APT to study zeolites, including the critical aspects of sample preparation, data collection, assignment of mass spectral peaks including the predominant CO peak, the limitations of spatial resolution for the recovery of crystallographic information, and proper data analysis. All sections are illustrated with examples from recent literature, as well as previously unpublished data and analyses to demonstrate practical strategies to overcome potential pitfalls in applying APT to zeolites, thereby highlighting new insights gained from the APT method. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Assessing and monitoring of urban vegetation using multiple endmember spectral mixture analysis
NASA Astrophysics Data System (ADS)
Zoran, M. A.; Savastru, R. S.; Savastru, D. M.
2013-08-01
In recent years, urban vegetation with significant health, biological and economic value has experienced dramatic changes due to urbanization and human activities in the metropolitan area of Bucharest in Romania. We investigated the utility of remote sensing approaches based on multiple endmember spectral mixture analysis (MESMA) applied to IKONOS and Landsat TM/ETM satellite data for estimating the fractional cover of urban/periurban forest, parks, and agricultural vegetation areas. Because the spectral heterogeneity of the same physical features of urban vegetation increases with image resolution, traditional spectral-information-based statistical methods may not be adequate for classifying land cover dynamics from high-resolution imagery such as IKONOS. We therefore used a hierarchical tree classification method for classification and MESMA for assessing vegetation land cover dynamics based on the available IKONOS high-resolution imagery of Bucharest. This study employs thirty-two endmembers and six hundred and sixty spectral models to identify all surface features (vegetation, water, soil, impervious surfaces) and shade in the Bucharest area. The mean RMS error for the selected vegetation land cover classes ranges from 0.0027 to 0.018. The Pearson correlation between the fraction outputs from MESMA and reference data from IKONOS 1 m panchromatic resolution data for urban/periurban vegetation ranged from 0.7048 to 0.8287. The framework in this study can be applied to other urban vegetation areas in Romania.
Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao
2015-08-20
The limitations of satellite data acquisition mean that there is a lack of satellite data with high spatial and temporal resolutions for environmental process monitoring. In this study, we address this problem by applying the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Spatial and Temporal Data Fusion Approach (STDFA) to combine Huanjing satellite charge coupled device (HJ CCD), Gaofen satellite no. 1 wide field of view camera (GF-1 WFV) and Moderate Resolution Imaging Spectroradiometer (MODIS) data to generate daily high spatial resolution synthetic data for land surface process monitoring. Actual HJ CCD and GF-1 WFV data were used to evaluate the precision of the synthetic images using the correlation analysis method. Our method was tested and validated for two study areas in Xinjiang Province, China. The results show that both the ESTARFM and STDFA can be applied to combine HJ CCD and MODIS reflectance data, and GF-1 WFV and MODIS reflectance data, to generate synthetic HJ CCD data and synthetic GF-1 WFV data that closely match actual data with correlation coefficients (r) greater than 0.8989 and 0.8643, respectively. Synthetic red- and near infrared (NIR)-band data generated by ESTARFM are more suitable for the calculation of Normalized Different Vegetation Index (NDVI) than the data generated by STDFA.
NASA Astrophysics Data System (ADS)
Umehara, Kensuke; Ota, Junko; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki
2017-02-01
Single-image super-resolution (SR) methods can generate a high-resolution (HR) image from a low-resolution (LR) image by enhancing image resolution. In medical imaging, HR images are expected to provide a more accurate diagnosis with the practical application of HR displays. In recent years, the super-resolution convolutional neural network (SRCNN), one of the state-of-the-art deep learning based SR methods, has been proposed in computer vision. In this study, we applied and evaluated the SRCNN scheme to improve the image quality of magnified images in chest radiographs. For evaluation, a total of 247 chest X-rays were sampled from the JSRT database. The 247 chest X-rays were divided into 93 training cases without nodules and 152 test cases with lung nodules. The SRCNN was trained using the training dataset. With the trained SRCNN, the HR image was reconstructed from the LR one. We compared the image quality of the SRCNN and conventional image interpolation methods: nearest neighbor, bilinear and bicubic interpolation. For quantitative evaluation, we measured two image quality metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In the SRCNN scheme, PSNR and SSIM were significantly higher than those of the three interpolation methods (p<0.001). Visual assessment confirmed that the SRCNN produced much sharper edges than conventional interpolation methods without any obvious artifacts. These preliminary results indicate that the SRCNN scheme significantly outperforms conventional interpolation algorithms for enhancing image resolution and that its use can yield substantial improvement of the image quality of magnified images in chest radiographs.
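The two quality metrics used above are standard and can be computed, for example, with scikit-image; the 8-bit data range below is an assumption.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(reference, reconstructed):
    """PSNR and SSIM between a reference image and its magnified reconstruction."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=255)
    ssim = structural_similarity(reference, reconstructed, data_range=255)
    return psnr, ssim
```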
High spatial resolution compressed sensing (HSPARSE) functional MRI.
Fang, Zhongnan; Van Le, Nguyen; Choy, ManKin; Lee, Jin Hyung
2016-08-01
To propose a novel compressed sensing (CS) high spatial resolution functional MRI (fMRI) method and to demonstrate the advantages and limitations of using CS for high spatial resolution fMRI. A randomly undersampled variable-density spiral trajectory enabling an acceleration factor of 5.3 was designed with a balanced steady state free precession sequence to achieve high spatial resolution data acquisition. A modified k-t SPARSE method was then implemented and applied with a strategy to optimize regularization parameters for consistent, high-quality CS reconstruction. The proposed method improves spatial resolution six-fold, with 12 to 47% contrast-to-noise ratio (CNR) and 33 to 117% F-value improvements, while maintaining the same temporal resolution. It also achieves high sensitivity of 69 to 99% compared with the original ground truth, a small false positive rate of less than 0.05, and low hemodynamic response function distortion across a wide range of CNRs. The proposed method is robust to physiological noise and enables detection of layer-specific activities in vivo, which cannot be resolved using the highest spatial resolution Nyquist acquisition. The proposed method enables high spatial resolution fMRI that can resolve layer-specific brain activity and demonstrates the significant improvement that CS can bring to high spatial resolution fMRI. Magn Reson Med 76:440-455, 2016. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
Interpolation of diffusion weighted imaging datasets.
Dyrby, Tim B; Lundell, Henrik; Burke, Mark W; Reislev, Nina L; Paulson, Olaf B; Ptito, Maurice; Siebner, Hartwig R
2014-12-01
Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. In clinical settings, limited scan time compromises the ability to achieve high image resolution for finer anatomical details and sufficient signal-to-noise ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal to the voxel size showed that conventional higher-order interpolation methods improved the geometrical representation of white-matter tracts with reduced partial volume effect (PVE), except at tract boundaries. Simulations and interpolation of ex-vivo monkey brain DWI datasets revealed that conventional interpolation methods fail to disentangle fine anatomical details if the PVE is too pronounced in the original data. For validation, we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical resolution and more anatomical details in complex regions such as tract boundaries and cortical layers, which are normally only visualized at higher image resolutions. Similar results were found with a typical clinical human DWI dataset. However, a possible bias in quantitative values imposed by the interpolation method used should be considered. The results indicate that conventional interpolation methods can be successfully applied to DWI datasets for mining anatomical details that are normally seen only at higher resolutions, which will aid tractography and microstructural mapping of tissue compartments. Copyright © 2014. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Cowley, Garret S.; Niemann, Jeffrey D.; Green, Timothy R.; Seyfried, Mark S.; Jones, Andrew S.; Grazaitis, Peter J.
2017-02-01
Soil moisture can be estimated at coarse resolutions (>1 km) using satellite remote sensing, but that resolution is poorly suited for many applications. The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales coarse-resolution soil moisture using fine-resolution topographic, vegetation, and soil data to produce fine-resolution (10-30 m) estimates of soil moisture. The EMT+VS model performs well at catchments with low topographic relief (≤124 m), but it has not been applied to regions with larger ranges of elevation. Large relief can produce substantial variations in precipitation and potential evapotranspiration (PET), which might affect the fine-resolution patterns of soil moisture. In this research, simple methods to downscale temporal average precipitation and PET are developed and included in the EMT+VS model, and the effects of spatial variations in these variables on the surface soil moisture estimates are investigated. The methods are tested against ground truth data at the 239 km2 Reynolds Creek watershed in southern Idaho, which has 1145 m of relief. The precipitation and PET downscaling methods are able to capture the main features in the spatial patterns of both variables. The space-time Nash-Sutcliffe coefficients of efficiency of the fine-resolution soil moisture estimates improve from 0.33 to 0.36 and 0.41 when the precipitation and PET downscaling methods are included, respectively. PET downscaling provides a larger improvement in the soil moisture estimates than precipitation downscaling likely because the PET pattern is more persistent through time, and thus more predictable, than the precipitation pattern.
Zhong, Suyu; He, Yong; Gong, Gaolang
2015-05-01
Using diffusion MRI, a number of studies have investigated the properties of whole-brain white matter (WM) networks with differing network construction methods (node/edge definition). However, how the construction methods affect individual differences of WM networks and, particularly, if distinct methods can provide convergent or divergent patterns of individual differences remain largely unknown. Here, we applied 10 frequently used methods to construct whole-brain WM networks in a healthy young adult population (57 subjects), which involves two node definitions (low-resolution and high-resolution) and five edge definitions (binary, FA weighted, fiber-density weighted, length-corrected fiber-density weighted, and connectivity-probability weighted). For these WM networks, individual differences were systematically analyzed in three network aspects: (1) a spatial pattern of WM connections, (2) a spatial pattern of nodal efficiency, and (3) network global and local efficiencies. Intriguingly, we found that some of the network construction methods converged in terms of individual difference patterns, but diverged with other methods. Furthermore, the convergence/divergence between methods differed among network properties that were adopted to assess individual differences. Particularly, high-resolution WM networks with differing edge definitions showed convergent individual differences in the spatial pattern of both WM connections and nodal efficiency. For the network global and local efficiencies, low-resolution and high-resolution WM networks for most edge definitions consistently exhibited a highly convergent pattern in individual differences. Finally, the test-retest analysis revealed a decent temporal reproducibility for the patterns of between-method convergence/divergence. Together, the results of the present study demonstrated a measure-dependent effect of network construction methods on the individual difference of WM network properties. © 2015 Wiley Periodicals, Inc.
Non-invasive imaging methods applied to neo- and paleo-ontological cephalopod research
NASA Astrophysics Data System (ADS)
Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.
2014-05-01
Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-)biology. Different methods are compared in terms of the time necessary to acquire the data, the amount of data, accuracy/resolution, the minimum/maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in the morphometry and volumetry of cephalopod shells. In particular, we present a method for precise buoyancy calculation. To this end, cephalopod shells were scanned together with different reference bodies, an approach developed in the medical sciences. It is necessary to know the volume of the reference bodies, which should have absorption properties similar to those of the object of interest. Exact volumes can be obtained from surface scanning. Depending on the dimensions of the study object, different computed tomography techniques were applied.
Hegazy, Maha A; Abdelwahab, Nada S; Fayed, Ahmed S
2015-04-05
A novel method was developed for spectral resolution and further determination of a five-component mixture including the Vitamin B complex (B1, B6, B12 and Benfotiamine) along with the commonly co-formulated Diclofenac. The method is simple, sensitive and precise, and could efficiently determine the five components by a complementary application of two different techniques. The first is a univariate second-derivative method that was successfully applied for the determination of Vitamin B12. The second is Multivariate Curve Resolution using the Alternating Least Squares method (MCR-ALS), by which efficient resolution and quantitation of the spectrally overlapped quaternary mixture of Vitamin B1, Vitamin B6, Benfotiamine and Diclofenac sodium were achieved. The effect of different constraints was studied, and the correlations between the true spectra and the estimated spectral profiles were found to be 0.9998, 0.9983, 0.9993 and 0.9933 for B1, B6, Benfotiamine and Diclofenac, respectively. All components were successfully determined in tablets and capsules, and the results, compared with those of HPLC methods, showed no statistically significant difference. Copyright © 2015 Elsevier B.V. All rights reserved.
Zhang, Weihong; Howell, Steven C; Wright, David W; Heindel, Andrew; Qiu, Xiangyun; Chen, Jianhan; Curtis, Joseph E
2017-05-01
We describe a general method to use Monte Carlo simulation followed by torsion-angle molecular dynamics simulations to create ensembles of structures to model a wide variety of soft-matter biological systems. Our particular emphasis is focused on modeling low-resolution small-angle scattering and reflectivity structural data. We provide examples of this method applied to HIV-1 Gag protein and derived fragment proteins, TraI protein, linear B-DNA, a nucleosome core particle, and a glycosylated monoclonal antibody. This procedure will enable a large community of researchers to model low-resolution experimental data with greater accuracy by using robust physics based simulation and sampling methods which are a significant improvement over traditional methods used to interpret such data. Published by Elsevier Inc.
DEM Based Modeling: Grid or TIN? The Answer Depends
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Moreno, H. A.
2015-12-01
The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki
2008-08-01
Integration of ultrasonic measurement and numerical simulation is a possible way to break through limitations of existing methods for obtaining complete information on hemodynamics. We herein propose Ultrasonic-Measurement-Integrated (UMI) simulation, in which feedback signals based on the optimal estimation of errors in the velocity vector determined by measured and computed Doppler velocities at feedback points are added to the governing equations. With an eye towards practical implementation of UMI simulation with real measurement data, its efficiency for three-dimensional unsteady blood flow analysis and a method for treating low time resolution of ultrasonic measurement were investigated by a numerical experiment dealing with complicated blood flow in an aneurysm. Even when simplified boundary conditions were applied, the UMI simulation reduced the errors of velocity and pressure to 31% and 53% in the feedback domain which covered the aneurysm, respectively. Local maximum wall shear stress was estimated, showing both the proper position and the value with 1% deviance. A properly designed intermittent feedback applied only at the time when measurement data were obtained had the same computational accuracy as feedback applied at every computational time step. Hence, this feedback method is a possible solution to overcome the insufficient time resolution of ultrasonic measurement.
Henley, W Hampton; He, Yan; Mellors, J Scott; Batz, Nicholas G; Ramsey, J Michael; Jorgenson, James W
2017-11-10
Ultra-high voltage capillary electrophoresis with high electric field strength has been applied to the separation of the charge variants, drug conjugates, and disulfide isomers of monoclonal antibodies. Samples composed of many closely related species are difficult to resolve and quantify using traditional analytical instrumentation. High-performance instrumentation can often save considerable time and effort otherwise spent on extensive method development. Ideally, the resolution obtained for a given CE buffer system scales with the square root of the applied voltage. Currently available commercial CE instrumentation is limited to an applied voltage of approximately 30 kV and a maximum electric field strength of 1 kV/cm due to design limitations. The instrumentation described here is capable of safely applying potentials of at least 120 kV with electric field strengths over 2000 V/cm, potentially doubling the resolution of the best conventional CE buffer/capillary systems while decreasing analysis time in some applications. Separations of these complex mixtures using this new instrumentation demonstrate the potential of ultra-high voltage CE to identify the presence of previously unresolved components and to reduce analysis time for complex mixtures of antibody variants and drug conjugates. Copyright © 2017 Elsevier B.V. All rights reserved.
Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.
2016-07-07
This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
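As a generic illustration of applying a fixed, manually chosen regularization parameter (rather than an automatic rule) to an ill-posed reconstruction, the sketch below performs plain Tikhonov filtering of a linear forward model; the NAH propagator itself is not constructed here, and the matrix G stands in for it as an assumption.

```python
import numpy as np

def tikhonov_solve(G, p_hologram, alpha):
    """Regularized inverse of p = G q using a fixed parameter alpha."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    filt = s / (s ** 2 + alpha ** 2)          # Tikhonov filter factors
    return Vt.T @ (filt * (U.T @ p_hologram))
```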
Multispectral high-resolution hologram generation using orthographic projection images
NASA Astrophysics Data System (ADS)
Muniraj, I.; Guo, C.; Sheridan, J. T.
2016-08-01
We present a new method for synthesizing a digital hologram of three-dimensional (3D) real-world objects from multiple orthographic projection images (OPIs). High-resolution multiple perspectives of the 3D objects (i.e., a two-dimensional elemental image array) are captured under incoherent white light using the synthetic aperture integral imaging (SAII) technique, and their OPIs are obtained. The reference beam is then multiplied with the corresponding OPI and integrated to form a Fourier hologram. Finally, a modified phase retrieval algorithm (GS/HIO) is applied to reconstruct the hologram. The principle is validated experimentally and the results support the feasibility of the proposed method.
Big Data is a powerful tool for environmental improvements in the construction business
NASA Astrophysics Data System (ADS)
Konikov, Aleksandr; Konikov, Gregory
2017-10-01
The work investigates the possibility of applying the Big Data method as a tool to implement environmental improvements in the construction business. The method is recognized as effective in analyzing large volumes of heterogeneous data. It is noted that all preconditions exist for this method to be successfully used for the resolution of environmental issues in the construction business. It is shown that the principal Big Data techniques (cluster analysis, crowdsourcing, data mixing and integration) can be applied in the sphere in question. It is concluded that Big Data is a truly powerful tool for implementing environmental improvements in the construction business.
Aishima, Jun; Russel, Daniel S; Guibas, Leonidas J; Adams, Paul D; Brunger, Axel T
2005-10-01
Automatic fitting methods that build molecules into electron-density maps usually fail below 3.5 Å resolution. As a first step towards addressing this problem, an algorithm has been developed using an approximation of the medial axis to simplify an electron-density isosurface. This approximation captures the central axis of the isosurface with a graph which is then matched against a graph of the molecular model. One of the first applications of the medial axis to X-ray crystallography is presented here. When applied to ligand fitting, the method performs at least as well as methods based on selecting peaks in electron-density maps. Generalization of the method to recognition of common features across multiple contour levels could lead to powerful automatic fitting methods that perform well even at low resolution.
NASA Astrophysics Data System (ADS)
Ding, Chenliang; Wei, Jingsong; Xiao, Mufei
2018-05-01
We herein propose far-field super-resolution imaging with metal thin films based on the temperature-dependent electron-phonon collision frequency effect. In the proposed method, neither fluorescence labeling nor any special properties are required for the samples. The 100 nm lands and 200 nm grooves on Blu-ray disk substrates were clearly resolved and imaged with a laser scanning microscope at a wavelength of 405 nm. The spot size was approximately 0.80 μm, and an imaging resolution of 1/8 of the laser spot size was experimentally obtained. This work can be applied to the far-field super-resolution imaging of samples with neither fluorescence labeling nor any special properties.
Fundamental techniques for resolution enhancement of average subsampled images
NASA Astrophysics Data System (ADS)
Shen, Day-Fann; Chiu, Chui-Wen
2012-07-01
Although single-image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists in the LR version, through the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, used with an interpolation algorithm to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods, including bilinear, bi-cubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple to implement and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to averaged, subsampled one-dimensional signals.
Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R
2018-02-01
High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode-activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust in atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.
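A rough sketch of the wavefront-orientation idea is given below, assuming activation times (AT) on a regular electrode grid with missing electrodes marked NaN; the gradient-based direction estimate and the fixed neighbour pairs are simplifications of the published method, and the stairstep upsampling step is omitted.

```python
import numpy as np

def wavefront_orientation_interpolate(at):
    """Fill missing activation times (NaN) on a rectangular electrode grid.

    For each missing electrode, the local AT gradient (the propagation
    direction, orthogonal to the wavefront) is estimated, and the
    interpolated value is the mean AT of the pair of linearly adjacent
    electrodes that straddle the electrode along that direction.
    """
    filled = at.copy()
    gy, gx = np.gradient(np.nan_to_num(at, nan=np.nanmean(at)))
    # axis/diagonal electrode pairs that straddle a grid point
    pairs = [((0, 1), (0, -1)), ((1, 0), (-1, 0)),
             ((1, 1), (-1, -1)), ((1, -1), (-1, 1))]
    rows, cols = at.shape
    for r, c in zip(*np.where(np.isnan(at))):
        direction = np.array([gy[r, c], gx[r, c]])
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue
        direction /= norm
        # choose the straddling pair best aligned with the propagation direction
        best = max(pairs, key=lambda p: abs(np.dot(direction, p[0])) / np.linalg.norm(p[0]))
        vals = []
        for dr, dc in best:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and not np.isnan(at[rr, cc]):
                vals.append(at[rr, cc])
        if len(vals) == 2:
            filled[r, c] = np.mean(vals)  # mean AT of the linearly adjacent pair
    return filled
```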
Three-dimensional imaging of sediment cores: a multi-scale approach
NASA Astrophysics Data System (ADS)
Deprez, Maxim; Van Daele, Maarten; Boone, Marijn; Anselmetti, Flavio; Cnudde, Veerle
2017-04-01
Downscaling is a method used in building-material research, where several imaging methods are applied to obtain information on the petrological and petrophysical properties of materials from a centimetre to a sub-micrometre scale (De Boever et al., 2015). However, to reach better resolutions, the sample size is necessarily adjusted as well. If, for instance, X-ray micro computed tomography (µCT) is applied on the material, the resolution can increase as the sample size decreases. In sedimentological research, X-ray computed tomography (CT) is a commonly used technique (Cnudde & Boone, 2013). The ability to visualise materials with different X-ray attenuations reveals structures in sediment cores that cannot be seen with the bare eye. This results in discoveries of sedimentary structures that can lead to a reconstruction of parts of the depositional history in a sedimentary basin (Van Daele et al., 2014). Up to now, most of the CT data used for this kind of research are acquired with a medical CT scanner, of which the highest obtainable resolution is about 250 µm (Cnudde et al., 2006). As the size of most sediment grains is smaller than 250 µm, a lot of information, concerning sediment fabric, grain-size and shape, is not obtained when using medical CT. Therefore, downscaling could be a useful method in sedimentological research. After identifying a region of interest within the sediment core with medical CT, a subsample of several millimetres diameter can be taken and imaged with µCT, allowing images with a resolution of a few micrometres. The subsampling process, however, needs to be considered thoroughly. As the goal is to image the structure and fabric of the sediments, deformation of the sediments during subsampling should be avoided as much as possible. After acquiring the CT data, image processing and analysis are performed in order to retrieve shape and orientation parameters of single grains, mud clasts and organic material. This single-grain data can then be combined for a physical layer of sediments to collect data on the sediment fabric within the subsample. Additionally, it can be upscaled further to help reconstructing the depositional history of the sedimentary basin. As a proof of principle, a workflow was developed on an oriented sediment core retrieved from Lake Lucerne, Switzerland. After identifying a megaturbidite with medical CT, a part of that deposit was subsampled using a U-channel with a cross section of 2 by 2 cm, to perform a high-resolution µCT scan. The resulting 3D images with a spatial resolution of 15.2 µm enable us to attribute absolute flow directions to sand layers from different pulses within the turbidite. Yet, the limits of this method have not been explored fully, as applying different sampling methods can lead to higher resolutions and, therefore, more revelations on smaller-grained sediments. References: Cnudde, V., Masschaele, B., Dierick, M., Vlassenbroeck, J., Van Hoorebeke, L., Jacobs, P. (2006). Recent progress in X-ray CT as a geoscience tool. Applied Geochemistry, 21(5), 826-832. Cnudde, V., Boone, M. (2013). High-resolution X-ray computed tomography in geosciences: a review of the current technology and applications. Earth-science reviews, 123, 1-17. De Boever, W., Derluyn, H., Van Loo, D., Van Hoorebeke, L., Cnudde, V. (2015). Data-fusion of high resolution X-ray CT, SEM and EDS for 3D and pseudo-3D chemical and structural characterization of sandstone. Micron, 74, 15-21. Van Daele, M., Cnudde, V., Duyck, P., Pino, M. (2014). 
Multidirectional, synchronously triggered seismo-turbidites and debrites revealed by X-ray computed tomography. Sedimentology, 61, 861-880.
NASA Astrophysics Data System (ADS)
Chybicki, Andrzej; Łubniewski, Zbigniew
2017-09-01
Satellite imaging systems have known limitations regarding their spatial and temporal resolution. Approaches based on subpixel mapping of the Earth's environment, which rely on combining data retrieved from sensors of higher temporal and lower spatial resolution with data characterized by lower temporal but higher spatial resolution, are of considerable interest. The paper presents the downscaling of land surface temperature (LST) derived from low-resolution imagery acquired by the Advanced Very High Resolution Radiometer (AVHRR), using an inverse technique. The effective emissivity derived from another data source is used as a quantity describing the thermal properties of the terrain at higher resolution, and allows the downscaling of low spatial resolution LST images. The authors propose an optimized downscaling method formulated as an inverse problem and show that the proposed approach yields better results than other downscaling methods. The proposed method aims to estimate high spatial resolution LST data by minimizing the global error of the downscaling. In particular, for the investigated region of the Gulf of Gdansk, the RMSE between the AVHRR image downscaled by the proposed method and the Landsat 8 LST reference image was 2.255°C, with a correlation coefficient R equal to 0.828 and Bias = 0.557°C. For comparison, the PBIM method gave RMSE = 2.832°C, R = 0.775 and Bias = 0.997°C for the same satellite scene. It has also been shown that the results remain good at the local scale and can be used for areas much smaller than the entire satellite imagery scene, depicting diverse biophysical conditions. Specifically, for the analyzed set of small sub-datasets of the whole scene, the RMSE between the downscaled and reference image was smaller, by approx. 0.53°C on average, when applying the proposed method than when using the PBIM method.
High frequency resolution terahertz time-domain spectroscopy
NASA Astrophysics Data System (ADS)
Sangala, Bagvanth Reddy
2013-12-01
A new method for high frequency resolution terahertz time-domain spectroscopy is developed based on the characteristic matrix method. This method is useful for studying planar samples or stacks of planar samples. The terahertz radiation was generated by optical rectification in a ZnTe crystal and detected by another ZnTe crystal via the electro-optic sampling method. In this new characteristic matrix based method, the spectra of the sample and reference waveforms are modeled using characteristic matrices. We applied this new method to measure the optical constants of air. The terahertz transmission through the layered systems air-Teflon-air-Quartz-air and Nitrogen gas-Teflon-Nitrogen gas-Quartz-Nitrogen gas was modeled by the characteristic matrix method. A transmission coefficient derived from these models was optimized to fit the experimental transmission coefficient and thereby extract the optical constants of air. The optimization of an error function involving the experimental and theoretical complex transmission coefficients was performed using the patternsearch algorithm of MATLAB. Since this method takes account of the echo waveforms due to reflections in the layered samples, it allows analysis of longer time-domain waveforms, giving rise to very high frequency resolution in the frequency domain. We have presented the high frequency resolution terahertz time-domain spectroscopy of air and compared the results with literature values. We have also fitted the complex susceptibility of air to Lorentzian and Gaussian functions to extract the linewidths.
The spatial resolving power of earth resources satellites: A review
NASA Technical Reports Server (NTRS)
Townshend, J. R. G.
1980-01-01
The significance of spatial resolving power on the utility of current and future Earth resources satellites is critically discussed and the relative merits of different approaches in defining and estimating spatial resolution are outlined. It is shown that choice of a particular measure of spatial resolution depends strongly on the particular needs of the user. Several experiments have simulated the capabilities of future satellite systems by degradation of aircraft images. Surprisingly, many of these indicated that improvements in resolution may lead to a reduction in the classification accuracy of land cover types using computer assisted methods. However, where the frequency of boundary pixels is high, the converse relationship is found. Use of imagery dependent upon visual interpretation is likely to benefit more consistently from higher resolutions. Extraction of information from images will depend upon several other factors apart from spatial resolving power: these include characteristics of the terrain being sensed, the image processing methods that are applied as well as certain sensor characteristics.
Propane spectral resolution enhancement by the maximum entropy method
NASA Technical Reports Server (NTRS)
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18 data sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
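As an illustration of the Burg recursion that underlies the MEM estimate (a textbook implementation, not the code used in the study), the following Python sketch computes an autoregressive spectrum from a short record; the toy signal and model order are arbitrary.

```python
import numpy as np

def burg_psd(x, order, nfft=4096):
    """Power spectral density estimate via Burg's maximum entropy method."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    f = x.copy()              # forward prediction errors
    b = x.copy()              # backward prediction errors
    a = np.array([1.0])       # AR polynomial coefficients, a[0] = 1
    e = np.dot(x, x) / n      # prediction error power

    for m in range(order):
        ef = f[m + 1:]
        eb = b[m:-1]
        k = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))  # reflection coefficient
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        e *= (1.0 - k * k)
        f_prev = f.copy()
        f[m + 1:] = f_prev[m + 1:] + k * b[m:-1]
        b[m + 1:] = b[m:-1] + k * f_prev[m + 1:]

    # AR spectrum: e / |A(e^{j omega})|^2 on nfft frequency bins
    freqs = np.fft.rfftfreq(nfft)
    psd = e / np.abs(np.fft.rfft(a, nfft)) ** 2
    return freqs, psd

# Toy comparison on a short noisy sinusoid record
rng = np.random.default_rng(1)
t = np.arange(256)
sig = np.cos(2 * np.pi * 0.123 * t) + 0.1 * rng.normal(size=t.size)
freqs, psd = burg_psd(sig, order=20)
print(freqs[np.argmax(psd)])   # close to the true frequency of 0.123
```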
High-resolution mapping of transcription factor binding sites on native chromatin
Kasinathan, Sivakanthan; Orsi, Guillermo A.; Zentner, Gabriel E.; Ahmad, Kami; Henikoff, Steven
2014-01-01
Sequence-specific DNA-binding proteins including transcription factors (TFs) are key determinants of gene regulation and chromatin architecture. Formaldehyde cross-linking and sonication followed by Chromatin ImmunoPrecipitation (X-ChIP) is widely used for profiling of TF binding, but is limited by low resolution and poor specificity and sensitivity. We present a simple protocol that starts with micrococcal nuclease-digested uncross-linked chromatin and is followed by affinity purification of TFs and paired-end sequencing. The resulting ORGANIC (Occupied Regions of Genomes from Affinity-purified Naturally Isolated Chromatin) profiles of Saccharomyces cerevisiae Abf1 and Reb1 provide highly accurate base-pair resolution maps that are not biased toward accessible chromatin, and do not require input normalization. We also demonstrate the high specificity of our method when applied to larger genomes by profiling Drosophila melanogaster GAGA Factor and Pipsqueak. Our results suggest that ORGANIC profiling is a widely applicable high-resolution method for sensitive and specific profiling of direct protein-DNA interactions. PMID:24336359
16 nm-resolution lithography using ultra-small-gap bowtie apertures
NASA Astrophysics Data System (ADS)
Chen, Yang; Qin, Jin; Chen, Jianfeng; Zhang, Liang; Ma, Chengfu; Chu, Jiaru; Xu, Xianfan; Wang, Liang
2017-02-01
Photolithography has long been a critical technology for nanoscale manufacturing, especially in the semiconductor industry. However, the diffractive nature of light has limited the continuous advance of optical lithography resolution. To overcome this obstacle, near-field scanning optical lithography (NSOL) is an alternative low-cost technique, whose resolution is determined by the near-field localization that can be achieved. Here, we apply the newly-developed backside milling method to fabricate bowtie apertures with a sub-15 nm gap, which can substantially improve the resolution of NSOL. A highly confined electric near field is produced by localized surface plasmon excitation and nanofocusing of the closely-tapered gap. We show contact lithography results with a record 16 nm resolution (FWHM). This photolithography scheme promises potential applications in data storage, high-speed computation, energy harvesting, and other nanotechnology areas.
Peptide Peak Detection for Low Resolution MALDI-TOF Mass Spectrometry.
Yao, Jingwen; Utsunomiya, Shin-Ichi; Kajihara, Shigeki; Tabata, Tsuyoshi; Aoshima, Ken; Oda, Yoshiya; Tanaka, Koichi
2014-01-01
A new peak detection method has been developed for rapid selection of peptide and fragment ion peaks for protein identification using tandem mass spectrometry. The algorithm applies a classification of the peak intensities present in a defined mass range to determine the noise level. A threshold is then set to select ion peaks according to the determined noise level in each mass range. This algorithm was initially designed for peak detection in low resolution peptide mass spectra, such as matrix-assisted laser desorption/ionization Time-of-Flight (MALDI-TOF) mass spectra, but it can also be applied to other types of mass spectra. The method has been shown to achieve a good ratio of real ion peaks to noise even for poorly fragmented peptide spectra. Peak lists generated by this method produce improved protein scores in database search results, and the reliability of the protein identifications is increased by finding more peptide identifications. This software tool is freely available at the Mass++ home page (http://www.first-ms3d.jp/english/achievement/software/).
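The windowed noise-threshold idea can be sketched as follows; the window width, the median-based noise estimate, and the SNR factor are placeholders rather than the classification scheme actually used by the Mass++ tool.

```python
import numpy as np

def detect_peaks(mz, intensity, window_da=100.0, snr=3.0):
    """Pick ion peaks by estimating a noise level per mass window.

    Within each window the bulk of intensities is treated as noise (a crude
    stand-in for the intensity-classification step in the paper), and only
    local maxima exceeding snr * noise_level are kept.
    """
    mz = np.asarray(mz)
    intensity = np.asarray(intensity)
    keep = np.zeros(mz.size, dtype=bool)
    # local maxima: strictly greater than both neighbours
    local_max = np.r_[False, (intensity[1:-1] > intensity[:-2]) &
                             (intensity[1:-1] > intensity[2:]), False]
    for lo in np.arange(mz.min(), mz.max(), window_da):
        in_win = (mz >= lo) & (mz < lo + window_da)
        if not np.any(in_win):
            continue
        noise = np.median(intensity[in_win])          # noise level for this window
        keep |= in_win & local_max & (intensity > snr * noise)
    return np.where(keep)[0]
```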
Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao
2017-01-01
To address the problem of image texture feature extraction, a direction measure statistic that is based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on the fusion of the direction measure and the gray level co-occurrence matrix (GLCM), is proposed in this paper. The method applies the GLCM to extract the texture feature value of an image and integrates the weight factor introduced by the direction measure to obtain the final texture feature of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and gray level co-occurrence matrix fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition, and that the accuracy of classification based on this method was significantly improved. PMID:28640181
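For readers wanting to experiment, a minimal GLCM feature extractor in Python might look like the sketch below; the direction weights standing in for the paper's direction-measure factor are hypothetical, and the function names follow recent scikit-image releases (older releases spell them greycomatrix/greycoprops).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, weights=(0.25, 0.25, 0.25, 0.25)):
    """Texture features from a GLCM over four directions.

    `weights` stands in for the direction-measure weight factor of the paper
    (a hypothetical, user-supplied tuple); the weighted mean over the 0, 45,
    90 and 135 degree co-occurrence matrices gives the final feature value.
    """
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(patch, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {}
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        per_angle = graycoprops(glcm, prop)[0]           # shape (4,): one value per angle
        feats[prop] = float(np.dot(per_angle, weights))  # direction-weighted fusion
    return feats

# Example on a random 8-bit patch
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_features(patch))
```

The resulting feature dictionary per patch would then be fed to an SVM classifier, as in the experiments described above.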
Estimating intercellular surface tension by laser-induced cell fusion.
Fujita, Masashi; Onami, Shuichi
2011-12-01
Intercellular surface tension is a key variable in understanding cellular mechanics. However, conventional methods are not well suited for measuring the absolute magnitude of intercellular surface tension because these methods require determination of the effective viscosity of the whole cell, a quantity that is difficult to measure. In this study, we present a novel method for estimating the intercellular surface tension at single-cell resolution. This method exploits the cytoplasmic flow that accompanies laser-induced cell fusion when the pressure difference between cells is large. Because the cytoplasmic viscosity can be measured using well-established technology, this method can be used to estimate the absolute magnitudes of tension. We applied this method to two-cell-stage embryos of the nematode Caenorhabditis elegans and estimated the intercellular surface tension to be in the 30-90 µN m⁻¹ range. Our estimate was in close agreement with cell-medium surface tensions measured at single-cell resolution.
Hu, Shao-Qiang; Lü, Wen-Juan; Ma, Yan-Hua; Hu, Qin; Dong, Li-Jun; Chen, Xing-Guo
2013-01-01
Based on an investigation of the effect of microemulsion charge on chiral separation, a new chiral separation method with MEEKC employing a neutral microemulsion was established. The method used a microemulsion containing 3.0% (w/v) of the neutral surfactant Tween 20 and 0.8% (w/v, 30 mM) dibutyl l-tartrate in 40 mM sodium tetraborate buffer to separate the enantiomers of β-blockers. The effect of the major parameters on the chiral separation was investigated. The applied voltage had little effect on the resolution, but the chiral separation could be improved by suppressing the EOF. Relatively good enantioseparation was obtained for nine racemic β-blockers after appropriate concentrations of tetradecyl trimethyl ammonium bromide were added to the microemulsion to suppress the EOF. These results were explained based on an analysis of the separation mechanism of the method and the deduced separation equations. The resolution equation of the method was further elucidated. It was found that the fourth term in the resolution equation, an additional term compared to the conventional resolution equation for column chromatography, represents the ratio of the relative movement distance between the analyte and the microemulsion droplets to the effective capillary length. It can be regarded as a correction for the effective capillary length. These findings are significant for the development of the theory of MEEKC and of new chiral MEEKC methods. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Resonating periodic waveguides as ultraresolution sensors in biomedicine
NASA Astrophysics Data System (ADS)
Wawro, Debra D.; Priambodo, Purnomo; Magnusson, Robert
2004-10-01
Optical sensor technology based on subwavelength periodic waveguides is applied for tag-free, high-resolution biomedical and chemical detection. Measured resonance wavelength shifts of 6.4 nm for chemically attached Bovine Serum Albumin agree well with theory for a sensor tested in air. Reflection peak efficiencies of 90% are measured, and do not degrade upon biolayer attachment. Phase detection methods are investigated to enhance sensor sensitivity and resolution. Direct measurement of the resonant phase response is reported for the first time using ellipsometric measurement techniques.
Lucky Imaging: Improved Localization Accuracy for Single Molecule Imaging
Cronin, Bríd; de Wet, Ben; Wallace, Mark I.
2009-01-01
We apply the astronomical data-analysis technique, Lucky imaging, to improve resolution in single molecule fluorescence microscopy. We show that by selectively discarding data points from individual single-molecule trajectories, imaging resolution can be improved by a factor of 1.6 for individual fluorophores and up to 5.6 for more complex images. The method is illustrated using images of fluorescent dye molecules and quantum dots, and the in vivo imaging of fluorescently labeled linker for activation of T cells. PMID:19348772
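A bare-bones version of the frame-selection step, with image variance as a stand-in sharpness score (the paper's actual selection criterion differs), could look like this:

```python
import numpy as np

def lucky_average(frames, keep_fraction=0.2):
    """Lucky-imaging style selection: keep only the sharpest frames and average.

    frames : (n_frames, h, w) array of co-registered single-molecule images.
    Sharpness is scored by image variance, a simple proxy used here only for
    illustration.
    """
    frames = np.asarray(frames, dtype=float)
    scores = frames.var(axis=(1, 2))                 # per-frame sharpness proxy
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = np.argsort(scores)[-n_keep:]              # indices of the sharpest frames
    return frames[best].mean(axis=0)

# Usage sketch: discard 80% of the frames and average the remainder
rng = np.random.default_rng(2)
stack = rng.normal(size=(100, 32, 32))
sharp_image = lucky_average(stack, keep_fraction=0.2)
```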
Macro-actor execution on multilevel data-driven architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaudiot, J.L.; Najjar, W.
1988-12-31
The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower resolution interpretation, we describe a multi-level resolution approach and analyze the requirements for its actual hardware and software integration.
Downscaling of Seasonal Landsat-8 and MODIS Land Surface Temperature (LST) in Kolkata, India
NASA Astrophysics Data System (ADS)
Garg, R. D.; Guha, S.; Mondal, A.; Lakshmi, V.; Kundu, S.
2017-12-01
The quality of life of urban people is affected by the urban heat environment. Urban heat studies can be carried out using remotely sensed thermal infrared imagery for retrieving Land Surface Temperature (LST). Currently, high spatial resolution (<200 m) thermal images are limited and their temporal resolution is low (e.g., 17 days for Landsat-8). Coarse spatial resolution (1000 m) and high temporal resolution (daily) thermal images of MODIS (Moderate Resolution Imaging Spectroradiometer) are frequently available. The present study downscales the spatially coarser thermal image to a fine-resolution thermal image using a regression-based downscaling technique. This method is based on the relationship between LST and vegetation indices (e.g., the Normalized Difference Vegetation Index, NDVI) over a heterogeneous landscape. The Kolkata metropolitan city, which experiences a tropical wet-and-dry type of climate, has been selected for the study. This study applied seasonal open-source satellite images, viz. Landsat-8 and Terra MODIS. The Landsat-8 images are aggregated at 960 m resolution and downscaled to 480, 240, 120 and 60 m. The optical and thermal resolutions are 30 m and 60 m for Landsat-8, and 250 m and 1000 m for MODIS, respectively. Homogeneous land cover areas showed better accuracy than heterogeneous land cover areas. The downscaling method plays a crucial role when the spatial resolution of the thermal band is insufficient for advanced study. Key words: Land Surface Temperature (LST), Downscale, MODIS, Landsat, Kolkata
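A minimal sketch of the regression-based sharpening step is shown below, in the spirit of TsHARP-style methods; the linear LST-NDVI model, the residual handling, and the array-shape assumptions are illustrative simplifications rather than the exact procedure of the study.

```python
import numpy as np

def downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine, scale):
    """Regression-based LST sharpening (TsHARP-style sketch).

    A linear LST-NDVI relationship is fitted at the coarse scale, applied to
    the fine-resolution NDVI, and the coarse-scale residual is added back so
    coarse-pixel means are preserved. `scale` is the resolution ratio
    (e.g. 960 m / 60 m = 16); ndvi_fine must be scale x scale times larger
    than the coarse grids.
    """
    a, b = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    residual = lst_coarse - (a * ndvi_coarse + b)                # model error per coarse pixel
    residual_fine = np.kron(residual, np.ones((scale, scale)))   # replicate to the fine grid
    return a * ndvi_fine + b + residual_fine
```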
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Arafa, Reham M.; Abbas, Samah S.; Amer, Sawsan M.
2016-01-01
Spectral resolution of cefquinome sulfate (CFQ) in the presence of its degradation products was studied. Three selective, accurate and rapid spectrophotometric methods were developed for the determination of CFQ in the presence of either its hydrolytic, oxidative or photo-degradation products. The proposed ratio difference, derivative ratio and mean centering are ratio-manipulating spectrophotometric methods that were satisfactorily applied for the selective determination of CFQ within a linear range of 5.0-40.0 μg mL⁻¹. Concentration Residuals Augmented Classical Least Squares was applied and evaluated for the determination of the cited drug in the presence of all its degradation products. Traditional Partial Least Squares regression was also applied and benchmarked against the proposed advanced multivariate calibration. Twenty-five experimentally designed synthetic mixtures of three factors at five levels were used to calibrate and validate the multivariate models. Advanced chemometrics succeeded in the quantitative and qualitative analysis of CFQ along with its hydrolytic, oxidative and photo-degradation products. The proposed methods were applied successfully to the analysis of different pharmaceutical formulations. The developed methods are simple and cost-effective compared with the manufacturer's RP-HPLC method.
Regional forest cover estimation via remote sensing: the calibration center concept
Louis R. Iverson; Elizabeth A. Cook; Robin L. Graham; Robin L. Graham
1994-01-01
A method for combining Landsat Thematic Mapper (TM), Advanced Very High Resolution Radiometer (AVHRR) imagery, and other biogeographic data to estimate forest cover over large regions is applied and evaluated at two locations. In this method, TM data are used to classify a small area (calibration center) into forest/nonforest; the resulting forest cover map is then...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokurei, S; Department of Radiology, Yamaguchi University Hospital, Ube, Yamaguchi; Morishita, J
2015-06-15
Purpose: To develop a method for improving the sharpness of images reproduced on liquid-crystal displays (LCDs) by compensating for the degradation of the modulation transfer function (MTF) of the LCD. Methods: The inherent MTF of a color LCD (display MTF) was measured using a commercially available color digital camera. The frequency responses necessary to compensate for the resolution property of the LCD were calculated from the inverses of the display MTFs in both the horizontal and vertical directions. In addition, the inverses of the display MTFs were combined with the response of the human eye. The finite impulse response (FIR) filters were computed by taking the inverse Fourier transform of the frequency responses, and the effects of the FIR filtering on both the resolution and noise properties of the displayed images were verified by measuring the MTF and Wiener spectrum (WS), respectively. The FIR filtering was then applied to the representation of digital bone and chest radiographs. Results: The FIR filtering improved the MTF values by up to almost 1.0 or greater over the frequency range of interest, while it minimally increased the WS values. Combining the inverses of the display MTFs with the response of the human eye led to further refinement of the MTF. Our method was successfully and beneficially applied to the image interpretation of bone radiographs. The resolution enhancement of chest radiographs, which include more scattered radiation than bone radiographs, was easily perceived by incorporating the response of the human eye. In addition, no artifacts were observed on the processed images. Conclusion: Our proposed method to compensate for the degradation of the resolution properties of LCDs has the potential to improve the observer performance of radiologists when reading digital radiographs. This work was supported in part by a grant from EIZO Corporation.
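The core of the compensation step, building an FIR filter from the inverse of a measured MTF, can be sketched as follows; the tap count, the gain clipping, the window, and the sample MTF are illustrative choices rather than the parameters of the study, and the eye-response weighting is omitted.

```python
import numpy as np

def compensation_fir(display_mtf, n_taps=15, max_gain=4.0):
    """Build a 1-D FIR filter that compensates a measured display MTF.

    display_mtf : MTF values sampled on np.fft.rfftfreq(n_taps) (0 ... Nyquist)
    The target frequency response is the clipped inverse of the MTF; the
    impulse response is its inverse Fourier transform, centred and windowed.
    """
    inverse = np.clip(1.0 / np.asarray(display_mtf, dtype=float), 0.0, max_gain)
    taps = np.fft.irfft(inverse, n_taps)         # symmetric (zero-phase) impulse response
    taps = np.roll(taps, n_taps // 2)            # centre it to make a causal linear-phase filter
    return taps * np.hamming(n_taps)             # mild windowing to limit ringing

# Hypothetical measured MTF falling off toward Nyquist
freqs = np.fft.rfftfreq(15)
mtf = np.exp(-3.0 * freqs)
fir = compensation_fir(mtf)
```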
NASA Technical Reports Server (NTRS)
Kim, H.; Swain, P. H.
1991-01-01
A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information from multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller and more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon is apparent.
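Dempster's rule itself is easy to state in code; the sketch below combines two point-valued mass functions (the paper works with interval-valued probabilities, which this simplification does not capture), with hypothetical land-cover classes as the frame of discernment.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment.

    m1, m2 : dicts mapping frozenset hypotheses to masses that sum to 1.
    Returns the orthogonal sum (Dempster's rule), renormalised by 1 - conflict.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb     # mass assigned to contradictory hypotheses
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two sources giving evidence about hypothetical land-cover classes
spectral = {frozenset({"forest"}): 0.6, frozenset({"forest", "water"}): 0.4}
terrain  = {frozenset({"forest"}): 0.3, frozenset({"water"}): 0.2,
            frozenset({"forest", "water"}): 0.5}
print(dempster_combine(spectral, terrain))
```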
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2015-01-01
Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Usually, diagnosis is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of subsequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse-coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-resolution output of high quality. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than that of other state-of-the-art schemes.
Dioxins in beef samples from Mexico using a low resolution GC/MS screening method.
Naccha, Lidia; Alanis, Guadalupe; Torres, Anabel; Abad, Esteban; Ábalos, Manuela; Rivera, Josep; Heyer, Lorenzo; Morales, Alberto; Waksman, Noemí
2010-01-01
Dioxins in beef were quantified by high resolution gas chromatography coupled to low-resolution mass spectrometry (GC/LRMS). The analyses were performed according to the minimum requirements described in the USEPA 1613 method with some minor modifications. Levels found in the samples were in the range 1.02-8.04 pg WHO-TEQ PCDDs/PCDFs g⁻¹ fat. For comparison purposes, the maximum level allowed by the European Union is 3 pg WHO-TEQ PCDDs/PCDFs g⁻¹ fat, and some of these samples surpassed the above-mentioned limit and can be considered contaminated food. The results confirm that a preliminary screening of dioxins in beef can be performed by GC/LRMS. As far as we know, this is the first report of dioxins in beef in Mexico. After the appropriate tests, the applied methodology could be considered as an alternative screening method for the analysis of PCDD/Fs in other food products.
Zhao, Ming-liang; Liu, Guo-long; Sui, Jian-feng; Ruan, Huai-zhen; Xiong, Ying
2007-05-01
To develop a simple but reliable intracellular labelling method for high-resolution visualization of the fine structure of single neurons in brain slices with a thickness of 500 µm. Biocytin was introduced into neurons in 500 µm-thick brain slices during blind whole-cell recording. After being processed for histochemistry using the avidin-biotin-complex method, stained slices were mounted in glycerol on special glass slides. Labelled cells were digitally photomicrographed every 30 µm and reconstructed with Adobe Photoshop software. After histochemistry, only limited background staining was produced. The resolution was so high that fine structure, including branching, termination of individual axons and even spines of neurons, could be identified in exquisite detail with an optical microscope. With the help of the software, the neurons of interest could be reconstructed from a stack of photomicrographs. The modified method provides an easy and reliable approach to revealing the detailed morphological properties of single neurons in 500 µm-thick brain slices. Without requiring special equipment, it is suited to be broadly applied.
NASA Astrophysics Data System (ADS)
Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong
2017-10-01
Focusing and imaging through scattering media have been proven possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as the optimization progresses, causing the improvement factor to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor for the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca; Haider, Masoom A.; Jaffray, David A.
Purpose: A previously proposed method to reduce the radiation dose to the patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method to maintain the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. The arterial input function (AIF) with high temporal resolution can be generated with a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of time-concentration curves (tissue curves), first, a region-of-interest is segmented into squares composed of 3 × 3 pixels in size. Subsequently, the PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. Kinetic analyses using the modified Tofts' model and the singular value decomposition method were then carried out for each of the down-sampling schemes with intervals from 2 to 15 s. The results were compared with analyses done with the measured data at high temporal resolution (i.e., the original scanning frequency) as the reference. Results: The patients' AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy in the quantitative histogram parameters of volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low frequency scanning for DCE-CT study to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.
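The PCA filtering step, reduced to its essentials, keeps only the leading principal components of the tissue curves; the sketch below uses a plain SVD truncation with five components, whereas the paper additionally applies a fraction-of-residual-information criterion to choose that number.

```python
import numpy as np

def pca_filter_curves(curves, n_components=5):
    """Denoise time-concentration curves by truncating principal components.

    curves : (n_curves, n_timepoints) array, one tissue curve per pixel/square.
    The curves are projected onto the first n_components principal components
    and reconstructed, discarding the low-variance components that mostly
    carry noise.
    """
    mean = curves.mean(axis=0)
    centred = curves - mean
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    s[n_components:] = 0.0                   # keep only the leading components
    return U @ np.diag(s) @ Vt + mean

# Toy example: many noisy copies of an underlying uptake curve
rng = np.random.default_rng(3)
t = np.linspace(0, 120, 60)
truth = 1 - np.exp(-t / 30.0)
curves = truth + 0.05 * rng.normal(size=(200, t.size))
filtered = pca_filter_curves(curves, n_components=5)
```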
NASA Astrophysics Data System (ADS)
Pawłowicz, Joanna A.
2017-10-01
The TLS method (Terrestrial Laser Scanning) may replace traditional building survey methods, e.g. those requiring the use of measuring tapes or range finders. This technology allows digital data to be collected in the form of a point cloud, which can be used to create a 3D model of a building. In addition, it allows data to be collected with remarkable precision, which makes it possible to reproduce all architectural features of a building. These data are applied in reverse engineering to create a 3D model of an object existing in physical space. This study presents the results of research carried out using a point cloud to recreate the architectural features of a historical building with the application of reverse engineering. The research was conducted on a two-storey residential building with a basement and an attic. A veranda featuring a complicated wooden structure protrudes from the building's façade. The measurements were taken at medium and the highest resolution using a ScanStation C10 laser scanner by Leica. The data obtained were processed using specialist software, which allowed for the application of reverse engineering, especially for reproducing the sculpted details of the veranda. Following digitization, all redundant data were removed from the point cloud and the cloud was subjected to modelling. For testing purposes, a selected part of the veranda was modelled by means of two methods: surface matching and Triangulated Irregular Network (TIN). Both modelling methods were applied to the data collected at medium and the highest resolution. Creating a model based on data obtained at medium resolution, by means of either the surface matching or the TIN method, does not allow for a precise recreation of architectural details. The study presents certain sculpted elements recreated from the highest resolution data with a superimposed TIN, juxtaposed against a digital image. The resulting model is very precise. Creating good models requires highly accurate field data. It is important to properly choose the distance between the measuring station and the measured object in order to ensure that the angles of incidence (horizontal and vertical) of the laser beam are as straight as possible. The model created from medium resolution data offers very poor quality of details, i.e. only the bigger, basic elements of each detail are clearly visible, while the smaller ones are blurred. This is why, in order to obtain data sufficient to reproduce architectural details, laser scanning should be performed at the highest resolution. In addition, modelling by means of the surface matching method should be avoided - a better idea is to use the TIN method. In addition to providing a realistic-looking visualization, the method has one more important advantage - it is 4 times faster than the surface matching method.
NASA Astrophysics Data System (ADS)
Samson, Arnaud; Thibaudeau, Christian; Bouchard, Jonathan; Gaudin, Émilie; Paulin, Caroline; Lecomte, Roger; Fontaine, Réjean
2018-05-01
A fully automated time alignment method based on a positron timing probe was developed to correct the channel-to-channel coincidence time dispersion of the LabPET II avalanche photodiode-based positron emission tomography (PET) scanners. The timing probe was designed to directly detect positrons and generate an absolute time reference. The probe-to-channel coincidences are recorded and processed using firmware embedded in the scanner hardware to compute the time differences between detector channels. The time corrections are then applied in real-time to each event in every channel during PET data acquisition to align all coincidence time spectra, thus enhancing the scanner time resolution. When applied to the mouse version of the LabPET II scanner, the calibration of 6144 channels was performed in less than 15 min and showed a 47% improvement in the overall time resolution of the scanner, decreasing from 7 ns to 3.7 ns full width at half maximum (FWHM).
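In essence, the correction amounts to estimating one offset per channel from the probe coincidences and subtracting it from subsequent event timestamps; a simplified sketch (the real computation runs in the scanner firmware) is given below.

```python
import numpy as np

def channel_time_offsets(probe_times, channel_times, channel_ids, n_channels):
    """Estimate a per-channel timing correction from probe coincidences.

    probe_times / channel_times are matched coincidence timestamps, and
    channel_ids says which detector channel produced each event. The
    correction is the mean time difference per channel, referenced to the
    global mean so the overall time scale is unchanged; subtracting it
    aligns the coincidence time spectra.
    """
    diffs = np.asarray(channel_times) - np.asarray(probe_times)
    channel_ids = np.asarray(channel_ids)
    offsets = np.zeros(n_channels)
    for ch in range(n_channels):
        sel = channel_ids == ch
        if np.any(sel):
            offsets[ch] = diffs[sel].mean()
    return offsets - offsets.mean()

# Applying the correction event by event during acquisition:
# corrected_time = raw_time - offsets[channel_id]
```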
Varma, Gopal; Clough, Rachel E; Acher, Peter; Sénégas, Julien; Dahnke, Hannes; Keevil, Stephen F; Schaeffter, Tobias
2011-05-01
In magnetic resonance imaging, implantable devices are usually visualized with a negative contrast. Recently, positive contrast techniques have been proposed, such as susceptibility gradient mapping (SGM). However, SGM reduces the spatial resolution making positive visualization of small structures difficult. Here, a development of SGM using the original resolution (SUMO) is presented. For this, a filter is applied in k-space and the signal amplitude is analyzed in the image domain to determine quantitatively the susceptibility gradient for each pixel. It is shown in simulations and experiments that SUMO results in a better visualization of small structures in comparison to SGM. SUMO is applied to patient datasets for visualization of stent and prostate brachytherapy seeds. In addition, SUMO also provides quantitative information about the number of prostate brachytherapy seeds. The method might be extended to application for visualization of other interventional devices, and, like SGM, it might also be used to visualize magnetically labelled cells. Copyright © 2010 Wiley-Liss, Inc.
Image interpolation used in three-dimensional range data compression.
Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian
2016-05-20
Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
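The interpolation-based size reduction described here can be mimicked with standard tools; the sketch below uses scipy.ndimage.zoom with cubic splines on a synthetic range image, which stands in for the fringe-encoded images of the paper.

```python
import numpy as np
from scipy import ndimage

def compress_decompress(range_image, factor=4):
    """Shrink an encoded range image and later scale it back up.

    A stand-in for the paper's pipeline: the fringe-encoded range image is
    reduced by `factor` with spline interpolation to cut the stored size,
    and interpolated back to the original resolution before decoding the
    3D range data.
    """
    low_res = ndimage.zoom(range_image, 1.0 / factor, order=3)   # store/transmit this
    restored = ndimage.zoom(low_res, factor, order=3)            # recover on demand
    return low_res, restored

rng = np.random.default_rng(4)
img = ndimage.gaussian_filter(rng.normal(size=(256, 256)), 8)    # smooth synthetic "range image"
low, back = compress_decompress(img, factor=4)
print(low.shape, back.shape, np.abs(back - img).mean())
```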
Optimized computational imaging methods for small-target sensing in lens-free holographic microscopy
NASA Astrophysics Data System (ADS)
Xiong, Zhen; Engle, Isaiah; Garan, Jacob; Melzer, Jeffrey E.; McLeod, Euan
2018-02-01
Lens-free holographic microscopy is a promising diagnostic approach because it is cost-effective, compact, and suitable for point-of-care applications, while providing high resolution together with an ultra-large field-of-view. It has been applied to biomedical sensing, where larger targets like eukaryotic cells, bacteria, or viruses can be directly imaged without labels, and smaller targets like proteins or DNA strands can be detected via scattering labels like micro- or nano-spheres. Automated image processing routines can count objects and infer target concentrations. In these sensing applications, sensitivity and specificity are critically affected by image resolution and signal-to-noise ratio (SNR). Pixel super-resolution approaches have been shown to boost resolution and SNR by synthesizing a high-resolution image from multiple, partially redundant, low-resolution images. However, there are several computational methods that can be used to synthesize the high-resolution image, and previously, it has been unclear which methods work best for the particular case of small-particle sensing. Here, we quantify the SNR achieved in small-particle sensing using a regularized gradient-descent optimization method, where the regularization is based on cardinal-neighbor differences, Bayer-pattern noise reduction, or sparsity in the image. In particular, we find that gradient-descent with sparsity-based regularization works best for small-particle sensing. These computational approaches were evaluated on images acquired using a lens-free microscope that we assembled from an off-the-shelf LED array and color image sensor. Compared to other lens-free imaging systems, our hardware integration, calibration, and sample preparation are particularly simple. We believe our results will help to enable the best performance in lens-free holographic sensing.
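As a toy version of gradient-descent pixel super-resolution with a sparsity penalty, the sketch below assumes a shift-plus-box-average forward model and integer shifts on the high-resolution grid, and applies soft-thresholding after each gradient step; these modelling choices are illustrative and differ from the authors' imaging model.

```python
import numpy as np

def sparse_pixel_sr(low_res_stack, shifts, factor, n_iter=200, step=0.5, lam=1e-3):
    """Pixel super-resolution by gradient descent with an L1 (sparsity) penalty.

    low_res_stack : (n, h, w) low-resolution frames
    shifts        : per-frame integer shifts, expressed in high-res pixels
    The data term is least squares under a shift + box-average forward model,
    and soft-thresholding after each step enforces sparsity (ISTA-style).
    """
    n, h, w = low_res_stack.shape
    hr = np.zeros((h * factor, w * factor))

    def down(x, dy, dx):          # forward model: shift, then average factor x factor blocks
        x = np.roll(x, (-dy, -dx), axis=(0, 1))
        return x.reshape(h, factor, w, factor).mean(axis=(1, 3))

    def up(r, dy, dx):            # adjoint of the forward model
        x = np.repeat(np.repeat(r, factor, axis=0), factor, axis=1) / factor**2
        return np.roll(x, (dy, dx), axis=(0, 1))

    for _ in range(n_iter):
        grad = np.zeros_like(hr)
        for frame, (dy, dx) in zip(low_res_stack, shifts):
            grad += up(down(hr, dy, dx) - frame, dy, dx)
        hr -= step * grad / n
        hr = np.sign(hr) * np.maximum(np.abs(hr) - step * lam, 0.0)  # soft threshold
    return hr
```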
Sinogram restoration in computed tomography with an edge-preserving penalty
Little, Kevin J.; La Rivière, Patrick J.
2015-01-01
Purpose: With the goal of producing a less computationally intensive alternative to fully iterative penalized-likelihood image reconstruction, our group has explored the use of penalized-likelihood sinogram restoration for transmission tomography. Previously, we have exclusively used a quadratic penalty in our restoration objective function. However, a quadratic penalty does not excel at preserving edges while reducing noise. Here, we derive a restoration update equation for nonquadratic penalties. Additionally, we perform a feasibility study to extend our sinogram restoration method to a helical cone-beam geometry and clinical data. Methods: A restoration update equation for nonquadratic penalties is derived using separable parabolic surrogates (SPS). A method for calculating sinogram degradation coefficients for a helical cone-beam geometry is proposed. Using simulated data, sinogram restorations are performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods are used to obtain reconstructions, and resolution-noise trade-offs are investigated. For the fan-beam geometry, a comparison is made to image-domain SPS reconstruction using the Huber penalty. The effects of varying object size and contrast are also investigated. For the helical cone-beam geometry, we investigate the effect of helical pitch (axial movement/rotation). Huber-penalty sinogram restoration is performed on 3D clinical data, and the reconstructed images are compared to those generated with no restoration. Results: We find that by applying the edge-preserving Huber penalty to our sinogram restoration methods, the reconstructed image has a better resolution-noise relationship than an image produced using a quadratic penalty in the sinogram restoration. However, we find that this relatively straightforward approach to edge preservation in the sinogram domain is affected by the physical size of imaged objects in addition to the contrast across the edge. This presents some disadvantages of this method relative to image-domain edge-preserving methods, although the computational burden of the sinogram-domain approach is much lower. For a helical cone-beam geometry, we found applying sinogram restoration in 3D was reasonable and that pitch did not make a significant difference in the general effect of sinogram restoration. The application of Huber-penalty sinogram restoration to clinical data resulted in a reconstruction with less noise while retaining resolution. Conclusions: Sinogram restoration with the Huber penalty is able to provide better resolution-noise performance than restoration with a quadratic penalty. Additionally, sinogram restoration with the Huber penalty is feasible for helical cone-beam CT and can be applied to clinical data. PMID:25735286
Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.
Jung, Hyung-Sup; Park, Sung-Whan
2014-12-18
Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion: a PAN image has higher spatial resolution, while a TIR image has lower spatial resolution. In this study, we propose an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between spatial detail and thermal information. We compared the fused images created from different scaling factors and then tested the performance of the proposed method in urban and rural test areas. The test results show that the proposed method merges the spatial resolution of the PAN image and the temperature information of the TIR image efficiently. The proposed method may be applied to detect lava flows from volcanic activity, radioactive exposure at nuclear power plants, and surface temperature change with respect to land-use change.
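As a concrete illustration of scaling-factor-controlled fusion, the following Python sketch injects high-pass PAN detail into an upsampled TIR band with a tunable weight. It is a generic high-pass-injection stand-in under my own assumptions (a box-filter low-pass, toy arrays, an arbitrary scale factor), not the exact formulation or the optimal factor from the paper.

```python
import numpy as np

def fuse_pan_tir(tir_upsampled, pan, scale_factor):
    """Inject PAN spatial detail into a TIR band with a tunable scaling factor.

    The high-pass component of the PAN image (PAN minus a local mean) is added
    to the upsampled TIR band, weighted by scale_factor, which controls the
    trade-off between spatial detail and fidelity of the thermal values.
    """
    k = np.ones((5, 5)) / 25.0                      # simple 5x5 box filter as low-pass
    pad = np.pad(pan, 2, mode='edge')
    low = sum(pad[i:i + pan.shape[0], j:j + pan.shape[1]] * k[i, j]
              for i in range(5) for j in range(5))
    return tir_upsampled + scale_factor * (pan - low)

# Toy arrays standing in for a 15 m PAN frame and a TIR band resampled to the PAN grid
rng = np.random.default_rng(3)
pan = rng.uniform(0.0, 1.0, size=(120, 120))
tir = rng.uniform(280.0, 300.0, size=(120, 120))    # brightness temperatures, K
fused = fuse_pan_tir(tir, pan, scale_factor=5.0)
print(fused.shape)
```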
Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.
Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner
2011-09-26
Coherent diffraction imaging (CDI) for visualizing objects at atomic resolution has been recognized as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of numerical phase retrieval from experimental diffraction patterns, a fact which has stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures which propagate the complex-valued wave between the object and detector planes, applying constraints in both the object and the detector plane. While the detector-plane constraint employed in most phase retrieval methods requires the amplitude of the complex wave to equal the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method achieves a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America
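For orientation, the conventional iterative scheme the abstract contrasts itself with (modulus constraint in the detector plane, support constraint in the object plane) can be sketched in a few lines. This is a minimal error-reduction loop on a synthetic object; it does not reproduce the paper's holography-inspired Fourier-domain constraint, and all array sizes and the support mask are invented for the example.

```python
import numpy as np

def error_reduction(measured_intensity, support, n_iter=200, seed=0):
    """Minimal error-reduction phase retrieval sketch (conventional scheme).

    Detector-plane constraint: amplitude set to sqrt(measured intensity).
    Object-plane constraint: the field is zero outside a known support mask.
    """
    rng = np.random.default_rng(seed)
    amplitude = np.sqrt(measured_intensity)
    field = amplitude * np.exp(1j * rng.uniform(0, 2 * np.pi, amplitude.shape))
    for _ in range(n_iter):
        obj = np.fft.ifft2(field)                          # back to the object plane
        obj = np.where(support, obj, 0.0)                  # support constraint
        field = np.fft.fft2(obj)                           # forward to the detector plane
        field = amplitude * np.exp(1j * np.angle(field))   # modulus constraint
    return obj

# Tiny synthetic test: a square object and its simulated diffraction intensity
obj_true = np.zeros((64, 64)); obj_true[24:40, 24:40] = 1.0
intensity = np.abs(np.fft.fft2(obj_true))**2
support = obj_true > 0
recon = error_reduction(intensity, support)
```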
3D resolved mapping of optical aberrations in thick tissues
Zeng, Jun; Mahou, Pierre; Schanne-Klein, Marie-Claire; Beaurepaire, Emmanuel; Débarre, Delphine
2012-01-01
We demonstrate a simple method for mapping optical aberrations with 3D resolution within thick samples. The method relies on the local measurement of the variation in image quality with externally applied aberrations. We discuss the accuracy of the method as a function of the signal strength and of the aberration amplitude and we derive the achievable resolution for the resulting measurements. We then report on measured 3D aberration maps in human skin biopsies and mouse brain slices. From these data, we analyse the consequences of tissue structure and refractive index distribution on aberrations and imaging depth in normal and cleared tissue samples. The aberration maps allow the estimation of the typical aplanetism region size over which aberrations can be uniformly corrected. This method and data pave the way towards efficient correction strategies for tissue imaging applications. PMID:22876353
2003-11-01
In contrast to linear time-frequency transforms such as the short-time Fourier transform, the Wigner-Ville distribution (WVD) is a bilinear distribution. Results are reported for nine time-frequency distributions (TFDs), including the Wigner-Ville, Born-Jordan, and Choi-Williams distributions and other bilinear TFDs, together with reassignment methods, which can be applied to any time-frequency distribution.
2016 Summer Series - Ophir Frieder - Searching Harsh Environments
2016-07-12
Analysis of selective data that fits our investigative tools may lead to erroneous or limited conclusions. The universe consists of multiple states, and our recording of them adds complexity. By finding methods to increase the robustness of our digital data collection and applying likely-relationship search methods that can handle all the data, we will increase the resolution of our conclusions. Frieder will present methods to increase our ability to capture and search digital data.
Functional imaging with low-resolution brain electromagnetic tomography (LORETA): a review.
Pascual-Marqui, R D; Esslen, M; Kochi, K; Lehmann, D
2002-01-01
This paper reviews several recent publications that have successfully used the functional brain imaging method known as LORETA. Emphasis is placed on the electrophysiological and neuroanatomical basis of the method, on the localization properties of the method, and on the validation of the method in real experimental human data. Papers that criticize LORETA are briefly discussed. LORETA publications in the 1994-1997 period based localization inference on images of raw electric neuronal activity. In 1998, a series of papers appeared that based localization inference on the statistical parametric mapping methodology applied to high-time resolution LORETA images. Starting in 1999, quantitative neuroanatomy was added to the methodology, based on the digitized Talairach atlas provided by the Brain Imaging Centre, Montreal Neurological Institute. The combination of these methodological developments has placed LORETA at a level that compares favorably to the more classical functional imaging methods, such as PET and fMRI.
NASA Technical Reports Server (NTRS)
Bourke, M.; Balme, M.; Beyer, R. A.; Williams, K. K.
2004-01-01
Methods traditionally used to estimate the relative height of surface features on Mars include photoclinometry, shadow length, and stereography. The MOLA data set enables a more accurate assessment of the surface topography of Mars; however, many small-scale aeolian bedforms remain below the sample resolution of the MOLA data set. In response to this, a number of research teams have adopted and refined existing methods and applied them to high-resolution (2-6 m/pixel) narrow-angle MOC satellite images. Collectively, the methods provide data on a range of morphometric parameters, many not previously available for dunes on Mars, including dune height, width, length, surface area, volume, and longitudinal and cross profiles. These data will facilitate a more accurate analysis of aeolian bedforms on Mars. In this paper we undertake a comparative analysis of methods used to determine the height of aeolian dunes and ripples.
Brain Volume Estimation Enhancement by Morphological Image Processing Tools.
Zeinali, R; Keshtkar, A; Zamani, A; Gharehaghaji, N
2017-12-01
Volume estimation of the brain is important for many neurological applications, such as measuring brain growth and changes in the brains of normal and abnormal patients; accurate brain volume measurement is therefore very important. Magnetic resonance imaging (MRI) is the method of choice for volume quantification due to its excellent image resolution and between-tissue contrast. The stereology method is a good method for estimating volume, but it requires the segmentation of enough MRI slices at good resolution. In this study, we sought to enhance the stereology method so that brain volume can be estimated using fewer MRI slices at lower resolution. A program for calculating volume using the stereology method was developed, and morphological dilation was then applied to enhance the stereology estimate. For evaluation, we used T1-weighted MR images of a digital phantom from BrainWeb, which provides ground truth. The volumes of 20 normal brains extracted from BrainWeb were calculated, and the volumes of white matter, gray matter and cerebrospinal fluid with given dimensions were estimated correctly. Volume calculations with the stereology method were made in three cases, and the root mean square error (RMSE) was measured in each: Case I with T=5, d=5; Case II with T=10, d=10; and Case III with T=20, d=20 (T = slice thickness, d = resolution, the stereology parameters). Comparing the results of the two methods shows that the RMSE values for the proposed method are smaller than those for the plain stereology method. Using the morphological dilation operation thus enhances the stereology volume estimation method; with fewer MRI slices and fewer test points, the proposed method performs much better than the stereology method alone.
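Stereological volume estimation by point counting (the Cavalieri approach) is compact enough to sketch. The Python below is a generic illustration under my own assumptions (1 mm in-plane pixels, a regular point grid), not the authors' program; it shows how the slice thickness T and the grid spacing d enter the estimate.

```python
import numpy as np

def cavalieri_volume(masks, slice_thickness_mm, grid_spacing_mm):
    """Point-counting (Cavalieri) volume estimate, a generic stereology sketch.

    masks: list of 2D boolean arrays (one per MRI slice, True = tissue of interest).
    A regular point grid is overlaid on each slice; the volume estimate is
    slice thickness * area-per-point * number of grid points landing on tissue.
    Pixels are assumed to be 1 mm in-plane for this toy example.
    """
    step = int(round(grid_spacing_mm))          # grid spacing in pixels
    area_per_point = grid_spacing_mm ** 2       # mm^2 represented by each grid point
    hits = sum(int(m[::step, ::step].sum()) for m in masks)
    return slice_thickness_mm * area_per_point * hits  # mm^3

# Toy example: a 40 mm cube sampled on slices 5 mm apart with a 5 mm point grid
cube = [np.ones((40, 40), dtype=bool) for _ in range(8)]
print(cavalieri_volume(cube, slice_thickness_mm=5.0, grid_spacing_mm=5.0))  # ~64000 mm^3
```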
Vendelbo, S B; Kooyman, P J; Creemer, J F; Morana, B; Mele, L; Dona, P; Nelissen, B J; Helveg, S
2013-10-01
In situ high-resolution transmission electron microscopy (TEM) of solids under reactive gas conditions can be facilitated by microelectromechanical system devices called nanoreactors. These nanoreactors are windowed cells containing nanoliter volumes of gas at ambient pressures and elevated temperatures. However, due to the high spatial confinement of the reaction environment, traditional methods for measuring process parameters, such as the local temperature, are difficult to apply. To address this issue, we devise an electron energy loss spectroscopy (EELS) method that probes the local temperature of the reaction volume under inspection by the electron beam. The local gas density, as measured using quantitative EELS, is combined with the inherent relation between gas density and temperature, as described by the ideal gas law, to obtain the local temperature. Using this method we determined the temperature gradient in a nanoreactor in situ, while the average, global temperature was monitored by a traditional measurement of the electrical resistivity of the heater. The local gas temperatures had a maximum of 56 °C deviation from the global heater values under the applied conditions. The local temperatures, obtained with the proposed method, are in good agreement with predictions from an analytical model. Copyright © 2013 Elsevier B.V. All rights reserved.
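The density-to-temperature step rests on the ideal gas law at constant pressure: if the locally measured gas density drops, the local temperature must rise in inverse proportion. A minimal sketch with toy numbers of my own (not measured values from the paper):

```python
def local_gas_temperature(rho_local, rho_ref, T_ref_kelvin):
    """Local temperature from measured gas density via the ideal gas law.

    At constant pressure, pV = NkT implies density is inversely proportional
    to temperature, so T_local = T_ref * (rho_ref / rho_local). Densities can
    be any consistent relative measure, e.g. gas atoms per unit area from
    quantitative EELS divided by the known gas path length.
    """
    return T_ref_kelvin * (rho_ref / rho_local)

# Hypothetical numbers: density at a hot spot is 10% lower than at a
# reference region known to sit at 500 K
print(local_gas_temperature(rho_local=0.9, rho_ref=1.0, T_ref_kelvin=500.0))  # ~556 K
```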
Gao, Yuan; Zhang, Haijun; Zou, Lili; Wu, Ping; Yu, Zhengkun; Lu, Xianbo; Chen, Jiping
2016-04-05
Analysis of short-chain chlorinated paraffins (SCCPs) is extremely difficult because of their complex compositions, with thousands of isomers and homologues. A novel analytical method, deuterodechlorination combined with high resolution gas chromatography-high resolution mass spectrometry (HRGC-HRMS), was developed. A protocol was applied for the deuterodechlorination of SCCPs with LiAlD4, and the formed deuterated n-alkanes of different alkane chains can be distinguished readily from each other on the basis of their retention time and fragment mass ([M](+)) by HRGC-HRMS. An internal standard quantification of individual SCCP congeners was achieved, in which branched C10-CPs and branched C12-CPs were used as the extraction and reaction internal standards, respectively. The concentrations determined by this method deviated from the target SCCP concentrations by a maximum factor of 1.26, and the relative standard deviations for quantification of total SCCPs were within 10%. This method was applied to determine the congener compositions of SCCPs in commercial chlorinated paraffins and in environmental and biota samples after method validation. Low-chlorinated SCCP congeners (Cl1-4) were found to account for 32.4%-62.4% of the total SCCPs. The present method provides an attractive perspective for further studies on the toxicological and environmental characteristics of SCCPs.
Super-resolution method for face recognition using nonlinear mappings on coherent features.
Huang, Hua; He, Huiting
2011-01-01
The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates with nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. A nonlinear mapping between HR/LR features can then be built by radial basis functions (RBFs), with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately from the trained RBF model, and face identity can be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for single LR images in terms of both recognition rate and robustness to facial variations of pose and expression.
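The CCA-plus-RBF pipeline can be sketched with standard Python libraries. The snippet below uses randomly generated stand-ins for the PCA features, invented dimensions, and off-the-shelf components (scikit-learn CCA, SciPy's RBFInterpolator, a 1-NN classifier), so it only illustrates the flow described above (coherent subspace, LR-to-HR nonlinear regression, NN matching), not the paper's actual models or databases.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import KNeighborsClassifier
from scipy.interpolate import RBFInterpolator

# Hypothetical stand-ins for PCA features of high-/low-resolution face images
rng = np.random.default_rng(0)
n_train, d_hr, d_lr = 200, 50, 20
hr_feats = rng.normal(size=(n_train, d_hr))
lr_feats = hr_feats[:, :d_lr] + 0.1 * rng.normal(size=(n_train, d_lr))  # degraded view
labels = rng.integers(0, 40, size=n_train)                              # toy identities

# 1) Coherent subspaces via canonical correlation analysis (LR on the X side)
cca = CCA(n_components=10)
lr_c, hr_c = cca.fit_transform(lr_feats, hr_feats)

# 2) Nonlinear LR -> HR mapping in the coherent space with radial basis functions
rbf = RBFInterpolator(lr_c, hr_c, kernel='thin_plate_spline', smoothing=1e-3)

# 3) Recognition: super-resolve a probe's coherent features, then 1-NN matching
probe_lr = lr_feats[:5] + 0.05 * rng.normal(size=(5, d_lr))
probe_sr = rbf(cca.transform(probe_lr))
nn = KNeighborsClassifier(n_neighbors=1).fit(hr_c, labels)
print(nn.predict(probe_sr))
```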
Farhoud, Murtada H; Wessels, Hans J C T; Wevers, Ron A; van Engelen, Baziel G; van den Heuvel, Lambert P; Smeitink, Jan A
2005-01-01
In 2D-based comparative proteomics of scarce samples, such as limited patient material, established methods for prefractionation and subsequent use of different narrow range IPG strips to increase overall resolution are difficult to apply. Also, a high number of samples, a prerequisite for drawing meaningful conclusions when pathological and control samples are considered, will increase the associated amount of work almost exponentially. Here, we introduce a novel, effective, and economic method designed to obtain maximum 2D resolution while maintaining the high throughput necessary to perform large-scale comparative proteomics studies. The method is based on connecting different IPG strips serially head-to-tail so that a complete line of different IPG strips with sequential pH regions can be focused in the same experiment. We show that when 3 IPG strips (covering together the pH range of 3-11) are connected head-to-tail an optimal resolution is achieved along the whole pH range. Sample consumption, time required, and associated costs are reduced by almost 70%, and the workload is reduced significantly.
NASA Astrophysics Data System (ADS)
Chan, Y. C.; Shih, N. C.; Hsieh, Y. C.
2016-12-01
Geologic maps have provided fundamental information for many scientific and engineering applications in human societies. Geologic maps directly influence the reliability of research results or the robustness of engineering projects. In the past, geologic maps were mainly produced by field geologists through direct field investigations and 2D topographic maps. However, the quality of traditional geologic maps was significantly compromised by field conditions, particularly when the map area is covered by heavy forest canopies. Recent developments in airborne LiDAR technology may virtually remove trees or buildings, thus providing a useful data set for improving geological mapping. Because high-quality topographic information still needs to be interpreted in terms of geology, there are many fundamental questions regarding how best to apply the data set for high-resolution geological mapping. In this study, we aim to test the quality and reliability of high-resolution geologic maps produced by recent technological methods through an example from the fold-and-thrust belt in northern Taiwan. We performed the geological mapping by applying the LiDAR-derived DEM, self-developed program tools, and many layers of relevant information in interactive 3D environments. Our mapping results indicate that the proposed methods considerably improve the quality and consistency of the geologic maps. The study also shows that, in order to obtain consistent mapping results, future high-resolution geologic maps should be produced in interactive 3D environments on the basis of existing geologic maps.
NASA Astrophysics Data System (ADS)
Erskine, David J.; Edelstein, J.; Sirk, M.; Wishnow, E.; Ishikawa, Y.; McDonald, E.; Shourt, W. V.
2014-07-01
High resolution broad-band spectroscopy at near-infrared wavelengths has been performed using externally dispersed interferometry (EDI) at the Hale telescope at Mt. Palomar. The EDI technique uses a field-widened Michelson interferometer in series with a dispersive spectrograph, and is able to recover a spectrum with a resolution 4 to 10 times higher than the existing grating spectrograph. This method increases the resolution well beyond the classical limits enforced by the slit width and the detector pixel Nyquist limit and, in principle, decreases the effect of pupil variation on the instrument line-shape function. The EDI technique permits arbitrarily higher resolution measurements using the higher throughput, lower weight, size, and expense of a lower resolution spectrograph. Observations of many stars were performed with the TEDI interferometer mounted within the central hole of the 200 inch primary mirror. Light from the interferometer was then dispersed by the TripleSpec near-infrared echelle spectrograph. Continuous spectra between 950 and 2450 nm with a resolution as high as ~27,000 were recovered from data taken with TripleSpec at a native resolution of ~2,700. Aspects of data analysis for interferometric spectral reconstruction are described. This technique has applications in improving measurements of high-resolution stellar template spectra, critical for precision Doppler velocimetry using conventional spectroscopic methods. A new interferometer to be applied for this purpose at visible wavelengths is under construction.
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves a frequency resolution up to 2^10 times higher than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
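The zoom-in idea can be illustrated by evaluating a direct DFT on a dense local frequency grid around a coarse fFT peak. The Python below is a simplified single-zoom illustration of the principle, not the published ilFT algorithm; the signal and grid parameters are invented.

```python
import numpy as np

def local_dft(signal, dt, f_center, f_span, n_freqs=1001):
    """Evaluate the DFT of a sampled signal on a dense local frequency grid.

    Instead of the fFT's fixed grid (spacing 1/(N*dt)), the spectrum is
    computed directly on an arbitrarily fine grid around a frequency of
    interest; repeating this around the running peak estimate refines the
    frequency far beyond the fFT bin spacing.
    """
    n = np.arange(len(signal))
    freqs = np.linspace(f_center - f_span / 2, f_center + f_span / 2, n_freqs)
    kernel = np.exp(-2j * np.pi * np.outer(freqs, n) * dt)
    return freqs, kernel @ signal

# Example: a 123.456 Hz tone in a 1 s record -> the fFT bin spacing is 1 Hz
fs, f_true = 1000.0, 123.456
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * f_true * t)
f_coarse = float(np.argmax(np.abs(np.fft.rfft(x))))       # nearest 1 Hz bin
freqs, spec = local_dft(x, 1 / fs, f_coarse, f_span=2.0)   # zoom to 0.002 Hz spacing
print(freqs[np.argmax(np.abs(spec))])                      # ~123.456 Hz
```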
Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images
NASA Technical Reports Server (NTRS)
Sams, Clarence F.
2016-01-01
The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.
Zhang, Zeng-yan; Ji, Te; Zhu, Zhi-yong; Zhao, Hong-wei; Chen, Min; Xiao, Ti-qiao; Guo, Zhi
2015-01-01
Terahertz radiation is electromagnetic radiation in the range between millimeter waves and the far infrared. Due to its low energy and non-ionizing character, THz pulse imaging has emerged as a novel tool in many fields, such as materials science, chemistry, biological medicine, and food safety. Limited spatial resolution is a significant restricting factor of terahertz imaging technology. Near-field imaging methods have been proposed to improve the spatial resolution of terahertz systems: submillimeter-scale spatial resolution can be achieved if the source size is smaller than the wavelength of the incoming radiation and the source is very close to the sample. However, many changes are needed to the traditional terahertz time-domain spectroscopy system, and it is complex to analyze the sample's physical parameters from the terahertz signal. In this article, a method of inserting a pinhole upstream of the sample is proposed for the first time to improve the spatial resolution of a traditional terahertz time-domain spectroscopy system. The spatial resolution of the system was measured by the knife-edge method, with the moving-stage distance between 10% and 90% of the maximum signal defined as the spatial resolution of the system. The imaging spatial resolution of the traditional terahertz time-domain spectroscopy system improved dramatically after a pinhole with a diameter of 0.5 mm or 2 mm was inserted upstream of the sample. Experimental results show that the spatial resolution improved from 1.276 mm to 0.774 mm, an improvement of about 39%. Through this simple method, the spatial resolution of the traditional terahertz time-domain spectroscopy system was increased from the millimeter scale to the submillimeter scale. A pinhole with a diameter of 1 mm on a polyethylene plate was taken as the sample for a terahertz imaging study, and the traditional and pinhole-inserted terahertz time-domain spectroscopy systems were applied in the imaging experiment respectively. Relative THz-power loss imaging of the samples was used in this article; this method generally delivers the best signal-to-noise ratio in loss images, and dispersion effects are cancelled. The terahertz imaging results show that the sample's boundary was more distinct after inserting the pinhole in front of the sample, confirming that inserting a pinhole in front of the sample can improve the imaging spatial resolution effectively. A theoretical analysis of the method is also given; it indicates that the smaller the pinhole size, the longer the spatial coherence length of the system and the better its spatial resolution, although the terahertz signal is reduced accordingly. All the experimental results and theoretical analyses indicate that the method of inserting a pinhole in front of the sample can improve the spatial resolution of a traditional terahertz time-domain spectroscopy system effectively, and it will further expand the applications of terahertz imaging technology.
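The 10%-90% knife-edge criterion used above is easy to make concrete. The Python sketch below measures the 10%-90% transition distance of a synthetic edge scan; the edge profile and its width are invented for illustration, not data from the paper.

```python
import numpy as np

def knife_edge_resolution(positions_mm, signal):
    """Spatial resolution from a knife-edge scan as the 10%-90% distance.

    positions_mm: knife-edge (stage) positions; signal: transmitted THz power.
    The edge response is normalized and the distance between the positions
    where it crosses 10% and 90% of its full swing is reported, matching the
    definition in the abstract. A monotonically rising edge response is assumed.
    """
    s = (np.asarray(signal) - np.min(signal)) / (np.max(signal) - np.min(signal))
    order = np.argsort(positions_mm)
    p, s = np.asarray(positions_mm)[order], s[order]
    x10 = np.interp(0.1, s, p)
    x90 = np.interp(0.9, s, p)
    return abs(x90 - x10)

# Toy edge scan: a smooth edge response with a ~1.2 mm 10-90% width
pos = np.linspace(-3, 3, 121)
resp = 0.5 * (1 + np.tanh(pos / 0.55))
print(knife_edge_resolution(pos, resp))
```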
NASA Astrophysics Data System (ADS)
Labzovskii, Lev D.; Papayannis, Alexandros; Binietoglou, Ioannis; Banks, Robert F.; Baldasano, Jose M.; Toanca, Florica; Tzanis, Chris G.; Christodoulakis, John
2018-02-01
Accurate continuous measurements of relative humidity (RH) vertical profiles in the lower troposphere have become a significant scientific challenge. In recent years a synergy of various ground-based remote sensing instruments has been successfully used for RH vertical profiling, which has resulted in improved spatial resolution and, in some cases, improved measurement accuracy. Some studies have also suggested the use of high-resolution model simulations as input datasets for RH vertical profiling techniques. In this paper we apply two synergetic methods for RH profiling: the synergy of lidar with a microwave radiometer, and with high-resolution atmospheric modeling. The two methods are employed for RH retrieval between 100 and 6000 m with increased spatial resolution, based on datasets from the HygrA-CD (Hygroscopic Aerosols to Cloud Droplets) campaign conducted in Athens, Greece from May to June 2014. RH profiles from the synergetic methods are then compared with those retrieved using single instruments or as simulated by high-resolution models. Our proposed technique for RH profiling improves the statistical agreement with reference radiosoundings by 27% when the lidar-radiometer approach is used (in comparison with radiometer measurements) and by 15% when the lidar-model approach is used (in comparison with WRF-model simulations). The mean uncertainty of RH due to temperature bias in RH profiling was ~4.34% for the lidar-radiometer method and ~1.22% for the lidar-model method. However, the maximum uncertainty in RH retrievals due to temperature bias shows that the lidar-model method is more reliable at heights greater than 2000 m. Overall, our results demonstrate the capability of both combined methods for daytime measurements at heights between 100 and 6000 m when lidar-radiometer or lidar-WRF combined datasets are available.
Demonstration Of Ultra HI-FI (UHF) Methods
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2004-01-01
Computational aero-acoustics (CAA) requires efficient, high-resolution simulation tools. Most current techniques utilize finite-difference approaches because high order accuracy is considered too difficult or expensive to achieve with finite volume or finite element methods. However, a novel finite volume approach (Ultra HI-FI or UHF) which utilizes Hermite fluxes is presented which can achieve both arbitrary accuracy and fidelity in space and time. The technique can be applied to unstructured grids with some loss of fidelity or with multi-block structured grids for maximum efficiency and resolution. In either paradigm, it is possible to resolve ultra-short waves (less than 2 PPW). This is demonstrated here by solving the 4th CAA workshop Category 1 Problem 1.
Cortical dipole imaging using truncated total least squares considering transfer matrix error.
Hori, Junichi; Takeuchi, Kosuke
2013-01-01
Cortical dipole imaging has been proposed as a method to visualize electroencephalogram in high spatial resolution. We investigated the inverse technique of cortical dipole imaging using a truncated total least squares (TTLS). The TTLS is a regularization technique to reduce the influence from both the measurement noise and the transfer matrix error caused by the head model distortion. The estimation of the regularization parameter was also investigated based on L-curve. The computer simulation suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed the TTLS provided the high spatial resolution of cortical dipole imaging.
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of a high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In this research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery acquired with satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, a panchromatic band was simulated from the RGB data based on a linear combination of the spectral channels. Next, for the simulated bands and the multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
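The two steps described above (simulating a panchromatic band from RGB, then sharpening the multispectral bands with it) can be sketched as follows. The channel weights, toy arrays, and the simple Brovey-style component substitution are my own stand-ins; the paper uses the Gram-Schmidt method, which is a more elaborate substitution scheme not reproduced here.

```python
import numpy as np

def simulate_pan(rgb, weights=(0.3, 0.5, 0.2)):
    """Simulate a panchromatic band as a linear combination of RGB channels.

    rgb: float array of shape (H, W, 3); the weights are illustrative
    placeholders, not the coefficients used in the paper.
    """
    w = np.asarray(weights, dtype=float)
    return np.tensordot(rgb, w / w.sum(), axes=([2], [0]))

def brovey_sharpen(ms_upsampled, pan, eps=1e-6):
    """Simple component-substitution sharpening (Brovey-style stand-in).

    ms_upsampled: multispectral bands resampled to the pan grid, shape (H, W, B).
    Each band is rescaled by pan / intensity so that spatial detail from the
    simulated pan band is injected while relative band ratios are preserved.
    """
    intensity = ms_upsampled.mean(axis=2)
    gain = pan / (intensity + eps)
    return ms_upsampled * gain[..., None]

# Toy arrays standing in for a UAV RGB frame and co-registered Landsat 8 bands
rng = np.random.default_rng(1)
rgb = rng.uniform(0.0, 1.0, size=(64, 64, 3))
ms = rng.uniform(0.0, 1.0, size=(64, 64, 4))   # already resampled to the RGB grid
sharp = brovey_sharpen(ms, simulate_pan(rgb))
print(sharp.shape)
```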
Superresolved digital in-line holographic microscopy for high-resolution lensless biological imaging
NASA Astrophysics Data System (ADS)
Micó, Vicente; Zalevsky, Zeev
2010-07-01
Digital in-line holographic microscopy (DIHM) is a modern approach capable of achieving micron-range lateral and depth resolutions in three-dimensional imaging. DIHM in combination with numerical image reconstruction uses an extremely simplified setup while retaining the advantages provided by holography, with enhanced capabilities derived from algorithmic digital processing. We introduce superresolved DIHM based on time and angular multiplexing of the sample spatial frequency information, yielding the generation of a synthetic aperture (SA). The SA expands the cutoff frequency of the imaging system, allowing submicron resolutions in both the transversal and axial directions. The proposed approach can be applied when imaging essentially transparent (low-concentration dilutions) and static (slow dynamics) samples. Validation of the method for both a synthetic object (U.S. Air Force resolution test) to quantify the resolution improvement and a biological specimen (sperm cells biosample) is reported, showing the generation of high synthetic numerical aperture values working without lenses.
Emmerich, F; Thielemann, C
2016-05-20
Multilayers of silicon oxide/silicon nitride/silicon oxide (ONO) are known for their good electret properties due to deep energy traps near the material interfaces, facilitating charge storage. However, measurement of the space charge distribution in such multilayers is a challenge for conventional methods if layer thickness dimensions shrink below 1 μm. In this paper, we propose an atomic force microscope based method to determine charge distributions in ONO layers with spatial resolution below 100 nm. By applying Kelvin probe force microscopy (KPFM) on freshly cleaved, corona-charged multilayers, the surface potential is measured directly along the z-axis and across the interfaces. This new method gives insights into charge distribution and charge movement in inorganic electrets with a high spatial resolution.
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.
Jamalian, Azadeh; Sneekes, Evert-Jan; Wienk, Hans; Dekker, Lennard J. M.; Ruttink, Paul J. A.; Ursem, Mario; Luider, Theo M.; Burgers, Peter C.
2014-01-01
Here we describe a new method to identify calcium-binding sites in proteins using high-resolution liquid chromatography-mass spectrometry in concert with calcium-directed collision-induced dissociations. Our method does not require any modifications to the liquid chromatography-mass spectrometry apparatus, uses standard digestion protocols, and can be applied to existing high-resolution MS data files. In contrast to NMR, our method is applicable to very small amounts of complex protein mixtures (femtomole level). Calcium-bound peptides can be identified using three criteria: (1) the calculated exact mass of the calcium containing peptide; (2) specific dissociations of the calcium-containing peptide from threonine and serine residues; and (3) the very similar retention times of the calcium-containing peptide and the free peptide. PMID:25023127
NASA Astrophysics Data System (ADS)
Corucci, Linda; Masini, Andrea; Cococcioni, Marco
2011-01-01
This paper addresses bathymetry estimation from high resolution multispectral satellite images by proposing an accurate supervised method based on a neuro-fuzzy approach. The method is applied to two Quickbird images of the same area, acquired in different years and meteorological conditions, and is validated using truth data. Performance is studied in different realistic situations of in situ data availability. The method achieves a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean STD of 45 cm is obtained. The effect of both meteorological conditions and training set size reduction on the overall performance is also investigated.
NASA Astrophysics Data System (ADS)
Gholoum, M.; Bruce, D.; Hazeam, S. Al
2012-07-01
A coral reef ecosystem, one of the most complex marine environmental systems on the planet, is biologically diverse and immense. It plays an important role in maintaining vast biological diversity for future generations and functions as an essential spawning, nursery, breeding and feeding ground for many kinds of marine species. In addition, coral reef ecosystems provide valuable benefits such as fisheries, ecological goods and services, and recreational activities to many communities. However, this valuable resource is highly threatened by a number of environmental changes and anthropogenic impacts that can lead to reduced coral growth and production, mass coral mortality and loss of coral diversity. With the growth of these threats to coral reef ecosystems, there is a strong management need for mapping and monitoring of coral reef ecosystems. Remote sensing technology can be a valuable tool for mapping and monitoring these ecosystems. However, the diversity and complexity of coral reef ecosystems, the resolution capabilities of satellite sensors and the low reflectivity of shallow water increase the difficulty of identifying and classifying their features. This paper reviews the methods used in mapping and monitoring coral reef ecosystems. In addition, this paper proposes improved methods for mapping and monitoring coral reef ecosystems based on image fusion techniques. These image fusion techniques will be applied to satellite images exhibiting high spatial and low to medium spectral resolution together with images exhibiting low spatial and high spectral resolution. Furthermore, a new method will be developed to fuse hyperspectral imagery with multispectral imagery. The fused image will have a large number of spectral bands and all pairs of corresponding spatial objects, which will potentially help to classify the image data accurately. Accuracy assessment using ground truth will be performed for the selected methods to determine the quality of the information derived from image classification. The research will be applied to Kuwait's southern coral reefs: Kubbar and Um Al-Maradim.
NASA Astrophysics Data System (ADS)
Li, Bao Qiong; Wang, Xue; Li Xu, Min; Zhai, Hong Lin; Chen, Jing; Liu, Jin Jin
2018-01-01
Fluorescence spectroscopy with an excitation-emission matrix (EEM) is a fast and inexpensive technique and has been applied to the detection of a very wide range of analytes. However, serious scattering and overlapping signals hinder the applications of EEM spectra. In this contribution, the multi-resolution capability of Tchebichef moments was investigated in depth and applied to the analysis of two EEM data sets (data set 1 consisted of valine-tyrosine-valine, tryptophan-glycine and phenylalanine, and data set 2 included vitamin B1, vitamin B2 and vitamin B6) for the first time. By means of the Tchebichef moments with different orders, the different information in the EEM spectra can be represented. It is owing to this multi-resolution capability that the overlapping problem was solved, and the information of chemicals and scatterings were separated. The obtained results demonstrated that the Tchebichef moment method is very effective, which provides a promising tool for the analysis of EEM spectra. It is expected that the applications of Tchebichef moment method could be developed and extended in complex systems such as biological fluids, food, environment and others to deal with the practical problems (overlapped peaks, unknown interferences, baseline drifts, and so on) with other spectra.
García, M D Gil; Culzoni, M J; De Zan, M M; Valverde, R Santiago; Galera, M Martínez; Goicoechea, H C
2008-02-01
A new powerful algorithm, unfolded partial least squares followed by residual bilinearization (U-PLS/RBL), was applied for the first time to second-order liquid chromatography with diode array detection (LC-DAD) data and compared with a well-established method, multivariate curve resolution-alternating least squares (MCR-ALS), for the simultaneous determination of eight tetracyclines (tetracycline, oxytetracycline, meclocycline, minocycline, metacycline, chlortetracycline, demeclocycline and doxycycline) in wastewaters. Tetracyclines were pre-concentrated using Oasis Max C18 cartridges and then separated on a Thermo Aquasil C18 (150 mm x 4.6 mm, 5 microm) column. The whole method was validated using Milli-Q water samples, and both univariate and multivariate analytical figures of merit were obtained. Additionally, two data pre-treatments were applied (baseline correction and piecewise direct standardization), which made it possible to correct the effect of breakthrough and to reduce the total interferences retained after pre-concentration of wastewaters. The results showed that the eight tetracycline antibiotics can be successfully determined in wastewaters, with the drawbacks due to matrix interferences adequately handled and overcome by using U-PLS/RBL.
Ptychographic imaging with partially coherent plasma EUV sources
NASA Astrophysics Data System (ADS)
Bußmann, Jan; Odstrčil, Michal; Teramoto, Yusuke; Juschkin, Larissa
2017-12-01
We report on high-resolution lens-less imaging experiments based on ptychographic scanning coherent diffractive imaging (CDI) method employing compact plasma sources developed for extreme ultraviolet (EUV) lithography applications. Two kinds of discharge sources were used in our experiments: a hollow-cathode-triggered pinch plasma source operated with oxygen and for the first time a laser-assisted discharge EUV source with a liquid tin target. Ptychographic reconstructions of different samples were achieved by applying constraint relaxation to the algorithm. Our ptychography algorithms can handle low spatial coherence and broadband illumination as well as compensate for the residual background due to plasma radiation in the visible spectral range. Image resolution down to 100 nm is demonstrated even for sparse objects, and it is limited presently by the sample structure contrast and the available coherent photon flux. We could extract material properties by the reconstruction of the complex exit-wave field, gaining additional information compared to electron microscopy or CDI with longer-wavelength high harmonic laser sources. Our results show that compact plasma-based EUV light sources of only partial spatial and temporal coherence can be effectively used for lens-less imaging applications. The reported methods may be applied in combination with reflectometry and scatterometry for high-resolution EUV metrology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Huiqiang; Wu, Xizeng, E-mail: xwu@uabmc.edu, E-mail: tqxiao@sinap.ac.cn; Xiao, Tiqiao, E-mail: xwu@uabmc.edu, E-mail: tqxiao@sinap.ac.cn
Purpose: Propagation-based phase-contrast CT (PPCT) utilizes highly sensitive phase-contrast technology applied to x-ray microtomography. Performing phase retrieval on the acquired angular projections can enhance image contrast and enable quantitative imaging. In this work, the authors demonstrate the validity and advantages of a novel technique for high-resolution PPCT by using the generalized phase-attenuation duality (PAD) method of phase retrieval. Methods: A high-resolution angular projection data set of a fish head specimen was acquired with a monochromatic 60-keV x-ray beam. In one approach, the projection data were directly used for tomographic reconstruction. In two other approaches, the projection data were preprocessed by phase retrieval based on either the linearized PAD method or the generalized PAD method. The reconstructed images from all three approaches were then compared in terms of tissue contrast-to-noise ratio and spatial resolution. Results: The authors' experimental results demonstrated the validity of the PPCT technique based on the generalized PAD-based method. In addition, the results show that the authors' technique is superior to the direct PPCT technique as well as the linearized PAD-based PPCT technique in terms of their relative capabilities for tissue discrimination and characterization. Conclusions: This novel PPCT technique demonstrates great potential for biomedical imaging, especially for applications that require high spatial resolution and limited radiation exposure.
Chen, Xuehui; Sun, Yunxiang; An, Xiongbo; Ming, Dengming
2011-10-14
Normal mode analysis of large biomolecular complexes at atomic resolution remains challenging in computational structural biology due to the large amounts of memory and central processing unit time required. In this paper, we present a method called the virtual interface substructure synthesis method, or VISSM, to calculate approximate normal modes of large biomolecular complexes at atomic resolution. VISSM introduces the subunit interfaces as independent substructures that join contacting molecules so as to keep the integrity of the system. Compared with other approximate methods, VISSM delivers atomic modes with no need of a coarse-graining-then-projection procedure. The method was examined for 54 protein complexes against conventional all-atom normal mode analysis using the CHARMM simulation program, and the overlap of the first 100 low-frequency modes is greater than 0.7 for 49 complexes, indicating its accuracy and reliability. We then applied VISSM to the satellite panicum mosaic virus (SPMV, 78,300 atoms) and to F-actin filament structures of up to 39-mer, 228,813 atoms, and found that VISSM calculations capture functionally important conformational changes accessible to these structures at atomic resolution. Our results support the idea that the dynamics of a large biomolecular complex might be understood based on the motions of its component subunits and the way in which subunits bind one another. © 2011 American Institute of Physics
Accurately determining log and bark volumes of saw logs using high-resolution laser scan data
R. Edward Thomas; Neal D. Bennett
2014-01-01
Accurately determining the volume of logs and bark is crucial to estimating the total expected value recovery from a log. Knowing the correct size and volume of a log helps to determine which processing method, if any, should be used on a given log. However, applying volume estimation methods consistently can be difficult. Errors in log measurement and oddly shaped...
CNV detection method optimized for high-resolution arrayCGH by normality test.
Ahn, Jaegyoon; Yoon, Youngmi; Park, Chihyun; Park, Sanghyun
2012-04-01
High-resolution arrayCGH platform makes it possible to detect small gains and losses which previously could not be measured. However, current CNV detection tools fitted to early low-resolution data are not applicable to larger high-resolution data. When CNV detection tools are applied to high-resolution data, they suffer from high false-positives, which increases validation cost. Existing CNV detection tools also require optimal parameter values. In most cases, obtaining these values is a difficult task. This study developed a CNV detection algorithm that is optimized for high-resolution arrayCGH data. This tool operates up to 1500 times faster than existing tools on a high-resolution arrayCGH of whole human chromosomes which has 42 million probes whose average length is 50 bases, while preserving false positive/negative rates. The algorithm also uses a normality test, thereby removing the need for optimal parameters. To our knowledge, this is the first formulation for CNV detecting problems that results in a near-linear empirical overall complexity for real high-resolution data. Copyright © 2012 Elsevier Ltd. All rights reserved.
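The idea of replacing hand-tuned thresholds with a statistical normality test can be illustrated on synthetic log-ratio data. The Python sketch below flags windows whose probes, pooled with an assumed copy-neutral reference stretch, fail a D'Agostino-Pearson normality test; all window sizes, thresholds, and the synthetic gain are invented for the example, and this is not the published algorithm.

```python
import numpy as np
from scipy import stats

def flag_cnv_windows(log_ratios, window=50, alpha=1e-3, n_ref=200):
    """Flag candidate CNV windows with a normality test (illustrative sketch only).

    The first n_ref probes are taken as a copy-neutral reference (an assumption
    for this toy example). Each window is pooled with the reference and tested
    for normality; a gained or lost window makes the pooled values bimodal, so
    the test rejects, while copy-neutral windows pass.
    """
    reference = log_ratios[:n_ref]
    hits = []
    for start in range(n_ref, len(log_ratios) - window + 1, window):
        pooled = np.concatenate([reference, log_ratios[start:start + window]])
        if stats.normaltest(pooled).pvalue < alpha:
            hits.append((start, start + window))
    return hits

# Synthetic chromosome: 10,000 probes with one duplicated segment
rng = np.random.default_rng(2)
lr = rng.normal(0.0, 0.15, size=10_000)
lr[4_000:4_300] += 0.58            # ~log2(3/2), a single-copy gain
print(flag_cnv_windows(lr))
```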
Solliec, Morgan; Roy-Lachapelle, Audrey; Sauvé, Sébastien
2015-12-30
Swine manure can contain a wide range of veterinary antibiotics, which could enter the environment via manure spreading on agricultural fields. A suspect and non-target screening method was applied to swine manure samples to attempt to identify veterinary antibiotics and pharmaceutical compounds for a future targeted analysis method. A combination of suspect and non-target screening method was developed to identify various veterinary antibiotic families using liquid chromatography coupled with high-resolution mass spectrometry (LC/HRMS). The sample preparation was based on the physicochemical parameters of antibiotics for the wide scope extraction of polar compounds prior to LC/HRMS analysis. The amount of data produced was processed by applying restrictive thresholds and filters to significantly reduce the number of compounds found and eliminate matrix components. The suspect and non-target screening was applied on swine manure samples and revealed the presence of seven common veterinary antibiotics and some of their relative metabolites, including tetracyclines, β-lactams, sulfonamides and lincosamides. However, one steroid and one analgesic were also identified. The occurrence of the identified compounds was validated by comparing their retention times, isotopic abundance patterns and fragmentation patterns with certified standards. This identification method could be very useful as an initial step to screen for and identify emerging contaminants such as veterinary antibiotics and pharmaceuticals in environmental and biological matrices prior to quantification. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Cook, L. M.; Samaras, C.; McGinnis, S. A.
2017-12-01
Intensity-duration-frequency (IDF) curves are a common input to urban drainage design and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, or the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period for updating. The first goal is to determine whether uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results for the 6-hour, 10-year return level adjusted with the simple change factor method, using four climate model simulations at two different spatial resolutions, show that uncertainty is highest in the estimation of the GEV parameters. The second goal is to determine whether complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
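Method (1) above, the simple change factor, is the easiest to make concrete: the observed return level is scaled by the ratio of the model's future to historical return level for the same duration and return period. The Python sketch below uses invented rainfall intensities purely for illustration.

```python
import numpy as np

def change_factor_return_levels(observed_levels, model_hist_levels, model_future_levels):
    """Simple change factor applied to observed return levels.

    For each duration/return period, the observed return level is scaled by the
    ratio of the climate model's future to historical return level, so the model
    supplies only the relative change while the observations set the baseline.
    """
    obs = np.asarray(observed_levels, dtype=float)
    cf = np.asarray(model_future_levels, dtype=float) / np.asarray(model_hist_levels, dtype=float)
    return obs * cf

# Invented numbers (mm/hr) for the 1-, 6-, and 24-hour, 10-year return levels
observed = [55.0, 18.0, 6.0]
model_hist = [48.0, 16.5, 5.6]
model_future = [54.0, 18.2, 6.0]
print(change_factor_return_levels(observed, model_hist, model_future))
```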
NASA Astrophysics Data System (ADS)
Franch, B.; Skakun, S.; Vermote, E.; Roger, J. C.
2017-12-01
Surface albedo is an essential parameter not only for developing climate models but also for most energy balance studies. While climate models are usually applied at coarse resolution, energy balance studies, which are mainly focused on agricultural applications, require high spatial resolution. The albedo, estimated through the angular integration of the BRDF, requires appropriate angular sampling of the surface. However, Sentinel-2A sampling characteristics, with nearly constant observation geometry and low illumination variation, prevent the direct derivation of a surface albedo product. In this work, we apply an algorithm developed to derive Landsat surface albedo to Sentinel-2A. It is based on the BRDF parameters estimated from the MODerate Resolution Imaging Spectroradiometer (MODIS) CMG surface reflectance product (M{O,Y}D09) using the VJB method (Vermote et al., 2009). Sentinel-2A unsupervised classification images are used to disaggregate the BRDF parameters to the Sentinel-2 spatial resolution. We test the results over five different sites of the US SURFRAD network and plot the results against albedo field measurements. Additionally, we also test this methodology using Landsat-8 images.
Ehler, Martin; Dobrosotskaya, Julia; Cunningham, Denise; Wong, Wai T.; Chew, Emily Y.; Czaja, Wojtek; Bonner, Robert F.
2015-01-01
We introduce and describe a novel non-invasive in-vivo method for mapping local rod rhodopsin distribution in the human retina over a 30-degree field. Our approach is based on analyzing the brightening of detected lipofuscin autofluorescence within small pixel clusters in registered imaging sequences taken with a commercial 488nm confocal scanning laser ophthalmoscope (cSLO) over a 1 minute period. We modeled the kinetics of rhodopsin bleaching by applying variational optimization techniques from applied mathematics. The physical model and the numerical analysis with its implementation are outlined in detail. This new technique enables the creation of spatial maps of the retinal rhodopsin and retinal pigment epithelium (RPE) bisretinoid distribution with an ≈ 50μm resolution. PMID:26196397
An operational approach to high resolution agro-ecological zoning in West-Africa.
Le Page, Y; Vasconcelos, Maria; Palminha, A; Melo, I Q; Pereira, J M C
2017-01-01
The objective of this work is to develop a simple methodology for high resolution crop suitability analysis under current and future climate, easily applicable and useful in Least Developed Countries. The approach addresses both regional planning in the context of climate change projections and pre-emptive short-term rural extension interventions based on same-year agricultural season forecasts, while implemented with off-the-shelf resources. The developed tools are applied operationally in a case-study developed in three regions of Guinea-Bissau and the obtained results, as well as the advantages and limitations of methods applied, are discussed. In this paper we show how a simple approach can easily generate information on climate vulnerability and how it can be operationally used in rural extension services.
Domingo-Almenara, Xavier; Perera, Alexandre; Brezmes, Jesus
2016-11-25
Gas chromatography-mass spectrometry (GC-MS) produces large and complex datasets characterized by co-eluting compounds, often at trace levels, and by a distinct compound ion-redundancy resulting from the extensive fragmentation caused by electron impact ionization. Compounds in GC-MS can be resolved by taking advantage of the multivariate nature of GC-MS data through multivariate resolution methods. However, multivariate methods have to be applied to small regions of the chromatogram, and therefore chromatograms are segmented prior to the application of the algorithms. The automation of this segmentation process is a challenging task, as it implies separating informative data from noise in the chromatogram. This study demonstrates the capabilities of independent component analysis-orthogonal signal deconvolution (ICA-OSD) and multivariate curve resolution-alternating least squares (MCR-ALS) with an overlapping moving window implementation that avoids the typical hard chromatographic segmentation. Also, after being resolved, compounds are aligned across samples by an automated alignment algorithm. We evaluated the proposed methods through a quantitative analysis of GC-qTOF MS data from 25 serum samples. The quantitative performance of both moving-window ICA-OSD and MCR-ALS-based implementations was compared with the quantification of 33 compounds by the XCMS package. Results showed that most of the R2 coefficients of determination exhibited a high correlation (R2 > 0.90) in both the ICA-OSD and MCR-ALS moving-window-based approaches. Copyright © 2016 Elsevier B.V. All rights reserved.
High resolution laboratory grating-based x-ray phase-contrast CT
NASA Astrophysics Data System (ADS)
Viermetz, Manuel P.; Birnbacher, Lorenz J. B.; Fehringer, Andreas; Willner, Marian; Noel, Peter B.; Pfeiffer, Franz; Herzen, Julia
2017-03-01
Grating-based phase-contrast computed tomography (gbPC-CT) is a promising imaging method for soft tissue contrast without the need for any contrast agent. The focus of this study is an increase in spatial resolution without loss in sensitivity, to allow visualization of pathologies comparable to the convincing results obtained at the synchrotron. To improve the effective pixel size, a super-resolution reconstruction based on subpixel shifts, involving a deconvolution of the image, is applied to differential phase-contrast data. In our study we could achieve an effective pixel size of 28 μm without any drawback in terms of sensitivity or the ability to measure quantitative data.
BLIPPED (BLIpped Pure Phase EncoDing) high resolution MRI with low amplitude gradients
NASA Astrophysics Data System (ADS)
Xiao, Dan; Balcom, Bruce J.
2017-12-01
MRI image resolution is proportional to the maximum k-space value, i.e. the temporal integral of the magnetic field gradient. High resolution imaging usually requires high gradient amplitudes and/or long spatial encoding times. Special gradient hardware is often required for high amplitudes and fast switching. We propose a high resolution imaging sequence that employs low amplitude gradients. This method was inspired by the previously proposed PEPI (π Echo Planar Imaging) sequence, which replaced EPI gradient reversals with multiple RF refocusing pulses. It has been shown that when the refocusing RF pulse is of high quality, i.e. sufficiently close to 180°, the magnetization phase introduced by the spatial encoding magnetic field gradient can be preserved and transferred to the following echo signal without phase rewinding. This phase encoding scheme requires blipped gradients that are identical for each echo, with low and constant amplitude, providing opportunities for high resolution imaging. We now extend the sequence to 3D pure phase encoding with low amplitude gradients. The method is compared with the Hybrid-SESPI (Spin Echo Single Point Imaging) technique to demonstrate the advantages in terms of low gradient duty cycle, compensation of concomitant magnetic field effects and minimal echo spacing, which lead to superior image quality and high resolution. The 3D imaging method was then applied with a parallel plate resonator RF probe, achieving a nominal spatial resolution of 17 μm in one dimension in the 3D image, requiring a maximum gradient amplitude of only 5.8 Gauss/cm.
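The opening statement of this abstract, that resolution follows from the temporal integral of the gradient, can be checked with a back-of-the-envelope calculation: the nominal resolution is 1/(2*k_max), with k_max proportional to the gradient-time integral accumulated over all encoding blips. In the sketch below the accumulated encoding time (~12 ms) is my own assumption chosen to reproduce the quoted 17 μm at 5.8 Gauss/cm; the paper's actual timing is not given in the abstract.

```python
GAMMA_BAR = 42.577e6   # 1H gyromagnetic ratio / (2*pi), in Hz per tesla

def nominal_resolution(grad_T_per_m, total_encode_time_s):
    """Nominal 1D resolution from the accumulated k-space extent.

    k_max = gamma_bar * G * t (cycles/m, constant-amplitude gradient integrated
    over the total encoding time) and dx = 1 / (2 * k_max). In a blipped
    pure-phase-encoding scheme the encoding time is spread over many identical
    low-amplitude blips, one per refocused echo.
    """
    k_max = GAMMA_BAR * grad_T_per_m * total_encode_time_s
    return 1.0 / (2.0 * k_max)

# 5.8 Gauss/cm = 0.058 T/m; ~12 ms of accumulated blip time gives ~17 um
print(nominal_resolution(0.058, 12e-3) * 1e6, "um")
```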
NASA Astrophysics Data System (ADS)
Silvestri, Ludovico; Rudinskiy, Nikita; Paciscopi, Marco; Müllenbroich, Marie Caroline; Costantini, Irene; Sacconi, Leonardo; Frasconi, Paolo; Hyman, Bradley T.; Pavone, Francesco S.
2016-03-01
Mapping neuronal activity patterns across the whole brain with cellular resolution is a challenging task for state-of-the-art imaging methods. Indeed, despite a number of technological efforts, quantitative cellular-resolution activation maps of the whole brain have not yet been obtained. Many techniques are limited by coarse resolution or by a narrow field of view. High-throughput imaging methods, such as light sheet microscopy, can be used to image large specimens with high resolution and in reasonable times. However, the bottleneck is then moved from image acquisition to image analysis, since many terabytes of data have to be processed to extract meaningful information. Here, we present a full experimental pipeline to quantify neuronal activity in the entire mouse brain with cellular resolution, based on a combination of genetics, optics and computer science. We used a transgenic mouse strain (Arc-dVenus mouse) in which neurons that have been active in the last hours before brain fixation are fluorescently labelled. Samples were cleared with CLARITY and imaged with a custom-made confocal light sheet microscope. To automatically localize fluorescent cells in the large images produced, we used a novel computational approach called semantic deconvolution. The combined approach presented here allows quantification of the number of Arc-expressing neurons throughout the whole mouse brain. When applied to cohorts of mice subjected to different stimuli and/or environmental conditions, this method helps find correlations in activity between different neuronal populations, opening the possibility of inferring a form of brain-wide 'functional connectivity' with cellular resolution.
Toward 10 meV electron energy-loss spectroscopy resolution for plasmonics.
Bellido, Edson P; Rossouw, David; Botton, Gianluigi A
2014-06-01
Energy resolution is one of the most important parameters in electron energy-loss spectroscopy. This is especially true for the measurement of surface plasmon resonances, where high energy resolution is crucial for resolving individual resonance peaks, in particular close to the zero-loss peak. In this work, we improve the energy resolution of electron energy-loss spectra of surface plasmon resonances, acquired with a monochromated beam in a scanning transmission electron microscope, by use of the Richardson-Lucy deconvolution algorithm. We test the performance of the algorithm on a simulated spectrum and then apply it to experimental energy-loss spectra of a lithographically patterned silver nanorod. By reducing the point spread function of the spectrum, we are able to identify low-energy surface plasmon peaks, resolve more localized features, and obtain higher contrast in surface plasmon energy-filtered maps. Thanks to the combination of a monochromated beam and the Richardson-Lucy algorithm, we improve the effective resolution down to 30 meV, with evidence of success down to 10 meV resolution for losses below 1 eV. We also propose, implement, and test two methods to limit the number of iterations in the algorithm. The first method is based on noise measurement and analysis, while in the second we monitor the change of slope in the deconvolved spectrum.
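For reference, the core Richardson-Lucy iteration relied on here can be sketched in a few lines of Python; the fixed iteration count is a placeholder for the two stopping criteria (noise analysis and slope monitoring) proposed in the paper, and using the normalized zero-loss peak as the point spread function is only an assumption for illustration.

```python
import numpy as np

def richardson_lucy(spectrum, psf, n_iter=50):
    """Minimal 1-D Richardson-Lucy deconvolution sketch.

    spectrum: measured energy-loss spectrum (non-negative counts).
    psf: measured point spread function, e.g. a normalized zero-loss peak.
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(spectrum, spectrum.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = spectrum / np.maximum(blurred, 1e-12)   # avoid division by zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```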
Precise Point Positioning with Partial Ambiguity Fixing.
Li, Pan; Zhang, Xiaohong
2015-06-10
Reliable and rapid ambiguity resolution (AR) is the key to fast precise point positioning (PPP). We propose a modified partial ambiguity resolution (PAR) method, in which elevation and standard-deviation criteria are first used to remove low-precision ambiguity estimates from AR. Subsequently, the success rate and ratio test are used together in an iterative process to increase the chance of finding a subset of decorrelated ambiguities that can be fixed with high confidence. The proposed PAR method can be applied to obtain an ambiguity-fixed solution when full ambiguity resolution (FAR) fails. We validate this method using data from 450 stations during DOY 021 to 027, 2012. Results demonstrate that the proposed PAR method can significantly shorten the time to first fix (TTFF) and increase the fixing rate. Compared with FAR, the average TTFF for PAR is reduced by 14.9% for static PPP and 15.1% for kinematic PPP. In addition, using the PAR method, the average fixing rate increases from 83.5% to 98.2% for static PPP and from 80.1% to 95.2% for kinematic PPP. Kinematic PPP accuracy with PAR can also be significantly improved, compared to that with FAR, due to the higher fixing rate.
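A minimal sketch of a PAR-style screening loop is given below, assuming float ambiguities, their standard deviations and satellite elevations as inputs; the thresholds are illustrative, the bootstrapping success rate ignores correlations between ambiguities, and simple rounding stands in for the integer least-squares (LAMBDA) search and ratio test used in practice.

```python
import numpy as np
from scipy.stats import norm

def partial_ambiguity_fix(float_amb, std, elev,
                          elev_min=15.0, std_max=0.25, p_min=0.999):
    """Minimal sketch of a partial ambiguity resolution (PAR) screening loop.

    float_amb : float ambiguity estimates (cycles)
    std       : their standard deviations (cycles)
    elev      : satellite elevation angles (degrees)
    """
    keep = np.where((elev >= elev_min) & (std <= std_max))[0]   # screening step
    order = keep[np.argsort(std[keep])]                         # most precise first
    for n in range(len(order), 0, -1):
        subset = order[:n]
        # diagonal-approximation bootstrapping success rate (correlations ignored)
        p_success = np.prod(2.0 * norm.cdf(0.5 / std[subset]) - 1.0)
        if p_success >= p_min:
            return subset, np.round(float_amb[subset])           # stand-in for LAMBDA
    return None, None   # no subset passes: keep the float solution
```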
Precise Point Positioning with Partial Ambiguity Fixing
Li, Pan; Zhang, Xiaohong
2015-01-01
Reliable and rapid ambiguity resolution (AR) is the key to fast precise point positioning (PPP). We propose a modified partial ambiguity resolution (PAR) method, in which elevation and standard-deviation criteria are first used to remove low-precision ambiguity estimates from AR. Subsequently, the success rate and ratio test are used together in an iterative process to increase the chance of finding a subset of decorrelated ambiguities that can be fixed with high confidence. The proposed PAR method can be applied to obtain an ambiguity-fixed solution when full ambiguity resolution (FAR) fails. We validate this method using data from 450 stations during DOY 021 to 027, 2012. Results demonstrate that the proposed PAR method can significantly shorten the time to first fix (TTFF) and increase the fixing rate. Compared with FAR, the average TTFF for PAR is reduced by 14.9% for static PPP and 15.1% for kinematic PPP. In addition, using the PAR method, the average fixing rate increases from 83.5% to 98.2% for static PPP and from 80.1% to 95.2% for kinematic PPP. Kinematic PPP accuracy with PAR can also be significantly improved, compared to that with FAR, due to the higher fixing rate. PMID:26067196
Cartography of asteroids and comet nuclei from low resolution data
NASA Technical Reports Server (NTRS)
Stooke, Philip J.
1992-01-01
High resolution images of non-spherical objects, such as Viking images of Phobos and the anticipated Galileo images of Gaspra, lend themselves to conventional planetary cartographic procedures: control network analysis, stereophotogrammetry, image mosaicking in 2D or 3D, and airbrush mapping. There remains the problem of a suitable map projection for bodies which are extremely elongated or irregular in shape. Many bodies will soon be seen at lower resolution (5-30 pixels across the disk) in images from speckle interferometry, the Hubble Space Telescope, ground-based radar, distant spacecraft encounters, and closer images degraded by smear. Different data with similar effective resolutions are available from stellar occultations, radar or lightcurve convex hulls, lightcurve modeling of albedo variations, and cometary jet modeling. At such low resolution, conventional methods of shape determination will be less useful or will fail altogether, leaving limb and terminator topography as the principal sources of topographic information. A method for shape determination based on limb and terminator topography was developed. It has been applied to the nucleus of Comet Halley and the Jovian satellite Amalthea. The Amalthea results are described to give an example of the cartographic possibilities and problems of anticipated data sets.
A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light
Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning
2017-01-01
Since the Microsoft Kinect was released, depth information has been used in many fields because of its low cost and easy availability. However, the Kinect and Kinect-like RGB-D sensors show limited performance in applications that place high demands on the accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect and two infrared cameras located on either side of the laser projector, to obtain higher spatial resolution depth information. We apply the block-matching algorithm to estimate the disparity. To improve the spatial resolution, we reduce the size of the matching blocks, but smaller matching blocks yield lower matching precision. To address this problem, we combine two matching modes (binocular mode and monocular mode) in the disparity estimation process. Experimental results show that our method can obtain higher spatial resolution depth without loss of range-image quality, compared with the Kinect. Furthermore, our algorithm is implemented on a low-cost hardware platform, and the system supports a resolution of 1280 × 960 at up to 60 frames per second for depth image sequences. PMID:28397759
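The binocular half of the disparity estimation can be illustrated with a naive block-matching sketch over a rectified image pair; the block size and disparity range are arbitrary choices here, and the monocular (projector-pattern) matching mode and the mode-combination logic described in the paper are not reproduced.

```python
import numpy as np

def block_matching_disparity(left, right, block=9, max_disp=64):
    """Naive block-matching disparity sketch (SAD cost, left-to-right search).

    left, right: rectified grayscale images as 2-D float arrays.
    """
    h, w = left.shape
    half = block // 2
    disparity = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp)
            ]
            disparity[y, x] = int(np.argmin(costs))   # disparity with minimum SAD cost
    return disparity
```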
Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan
2017-01-01
In this paper, an irregular displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide semiconductor-based imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the LR images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the globally optimal solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples, including sperm and platelets, whose dimensions are on the order of a few micrometers. The obtained results demonstrate a spatial resolution of 1.55 µm over a field of view of ≈30 mm². PMID:29657866
NASA Astrophysics Data System (ADS)
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2013-10-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose (18F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.
Bowen, Spencer L; Byars, Larry G; Michel, Christian J; Chonde, Daniel B; Catana, Ciprian
2013-10-21
Kinetic parameters estimated from dynamic (18)F-fluorodeoxyglucose ((18)F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting (18)F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.
Kassler, Alexander; Pittenauer, Ernst; Doerr, Nicole; Allmaier, Guenter
2014-01-15
For the qualification and quantification of antioxidants (aromatic amines and sterically hindered phenols), most of which are applied as lubricant additives, two ultrahigh-performance liquid chromatography (UHPLC) electrospray ionization mass spectrometric methods, operating in positive and negative ion mode, have been developed for lubricant design and engineering, allowing, for example, the study of lubricant degradation. Based on the different chemical properties of the two groups of antioxidants, two methods offering fast separation (10 min) without prior derivatization were developed. To meet these requirements, UHPLC was coupled with an LTQ Orbitrap hybrid tandem mass spectrometer with positive and negative ion electrospray ionization for simultaneous detection of spectra from UHPLC-high-resolution (HR)-MS (full scan mode) and UHPLC-low-resolution linear ion trap MS(2) (LITMS(2)), which we term UHPLC/HRMS-LITMS(2). All 20 analytes investigated could be qualified by the UHPLC/HRMS-LITMS(2) approach, consisting of simultaneous UHPLC/HRMS (elemental composition) and UHPLC/LITMS(2) (diagnostic product ions), according to EC guidelines. Quantification was based on the UHPLC/LITMS(2) approach owing to its increased sensitivity and selectivity compared with UHPLC/HRMS. Absolute quantification was only feasible for seven analytes with well-specified purity of reference standards, whereas relative quantification was obtainable for another nine antioxidants. All showed good standard deviations and repeatability. The combined methods allow qualitative and quantitative determination of a wide variety of antioxidants, including aminic/phenolic compounds applied in lubricant engineering. These data show that the developed methods will be versatile tools for further research on the identification and characterization of the thermo-oxidative degradation products of antioxidants in lubricants. Copyright © 2013 John Wiley & Sons, Ltd.
Lao, Yexing; Yang, Cuiping; Zou, Wei; Gan, Manquan; Chen, Ping; Su, Weiwei
2012-05-01
The cryptand Kryptofix 2.2.2 is used extensively as a phase-transfer reagent in the preparation of [18F]fluoride-labelled radiopharmaceuticals. However, it has considerable acute toxicity. The aim of this study was to develop and validate a method for rapid (within 1 min), specific and sensitive quantification of Kryptofix 2.2.2 at trace levels. Chromatographic separations were carried out by rapid-resolution liquid chromatography (Agilent ZORBAX SB-C18 rapid-resolution column, 2.1 × 30 mm, 3.5 μm). Tandem mass spectra were acquired using a triple quadrupole mass spectrometer equipped with an electrospray ionization interface. Quantitative mass spectrometric analysis was conducted in positive ion mode and multiple reaction monitoring mode for the m/z 377.3 → 114.1 transition for Kryptofix 2.2.2. The external standard method was used for quantification. The method met the precision and efficiency requirements for PET radiopharmaceuticals, providing satisfactory results for specificity, matrix effect, stability, linearity (0.5-100 ng/ml, r(2)=0.9975), precision (coefficient of variation < 5%), accuracy (relative error < ± 3%), sensitivity (lower limit of quantification=0.5 ng) and detection time (<1 min). Fluorodeoxyglucose (n=6) was analysed, and the Kryptofix 2.2.2 content was found to be well below the maximum permissible levels approved by the US Food and Drug Administration. The developed method has a short analysis time (<1 min) and high sensitivity (lower limit of quantification=0.5 ng/ml) and can be successfully applied to rapid quantification of Kryptofix 2.2.2 at trace levels in fluorodeoxyglucose. This method could also be applied to other [18F]fluorine-labelled radiopharmaceuticals that use Kryptofix 2.2.2 as a phase-transfer reagent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Y; Mutic, S; Du, D
Purpose: To evaluate the feasibility of using the weighted hybrid iterative spiral k-space encoded estimation (WHISKEE) technique to improve the spatial resolution of tracking images for onboard MR image guided radiation therapy (MR-IGRT). Methods: MR tracking images of the abdomen and pelvis were acquired from healthy volunteers using the ViewRay onboard MR-IGRT system (ViewRay Inc., Oakwood Village, OH) at a spatial resolution of 2.0 mm × 2.0 mm × 5.0 mm. The tracking MR images were acquired using the TrueFISP sequence. The temporal resolution had to be traded off to 2 frames per second (FPS) to achieve the 2.0 mm in-plane spatial resolution. All MR images were imported into MATLAB. K-space data were synthesized through the Fourier transform of the MR images. A mask was created to select the k-space points that corresponded to the undersampled spiral k-space trajectory with an acceleration (or undersampling) factor of 3. The mask was applied to the fully sampled k-space data to synthesize the undersampled k-space data. The WHISKEE method was applied to the synthesized undersampled k-space data to reconstruct tracking MR images at 6 FPS. As a comparison, the undersampled k-space data were also reconstructed using the zero-padding technique. The reconstructed images were compared to the original image. The relative reconstruction error was evaluated as the percentage of the norm of the difference image over the norm of the original image. Results: Compared to the zero-padding technique, the WHISKEE method was able to reconstruct MR images with better image quality. It significantly reduced the relative reconstruction error from 39.5% to 3.1% for the pelvis image and from 41.5% to 4.6% for the abdomen image at an acceleration factor of 3. Conclusion: We demonstrated that it is possible to use the WHISKEE method to expedite MR image acquisition for onboard MR-IGRT systems and achieve good spatial and temporal resolution simultaneously. Y. Hu and O. Green receive travel reimbursement from ViewRay. S. Mutic has consulting and research agreements with ViewRay. Q. Zeng, R. Nana, J.L. Patrick, S. Shvartsman and J.F. Dempsey are ViewRay employees.
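The retrospective undersampling experiment described in the Methods can be sketched with a few NumPy lines; the image, the boolean trajectory mask and the zero-filled reconstruction below are illustrative stand-ins, and the WHISKEE reconstruction itself is not reproduced.

```python
import numpy as np

def synth_undersampled_recon(image, mask):
    """Sketch of retrospective k-space undersampling and zero-filled reconstruction.

    image: 2-D magnitude image (stand-in for a fully sampled tracking frame).
    mask : boolean array of the same shape, True where the (e.g. spiral)
           trajectory samples k-space.
    """
    kspace = np.fft.fftshift(np.fft.fft2(image))       # synthesize k-space data
    kspace_us = np.where(mask, kspace, 0.0)             # keep only sampled points
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_us)))
    err = np.linalg.norm(recon - image) / np.linalg.norm(image) * 100.0
    return recon, err                                    # image and % error
```

The returned percentage corresponds to the relative reconstruction error metric used above (norm of the difference image over the norm of the original image).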
Trade-off studies of a hyperspectral infrared sounder on a geostationary satellite.
Wang, Fang; Li, Jun; Schmit, Timothy J; Ackerman, Steven A
2007-01-10
Trade-off studies on spectral coverage, signal-to-noise ratio (SNR), and spectral resolution for a hyperspectral infrared (IR) sounder on a geostationary satellite are summarized. The data density method is applied for the vertical resolution analysis, and the rms error between true and retrieved profiles is used to represent the retrieval accuracy. The effects of spectral coverage, SNR, and spectral resolution on vertical resolution and retrieval accuracy are investigated. The advantages of IR and microwave sounder synergy are also demonstrated. When focusing on instrument performance and data processing, the results from this study show that the preferred spectral coverage combines long-wave infrared (LWIR) with the shorter middle-wave IR (SMidW). Using the appropriate spectral coverage, a hyperspectral IR sounder with appropriate SNR can achieve the required science performance (1 km vertical resolution, 1 K temperature, and 10% relative humidity retrieval accuracy). The synergy of microwave and IR sounders can improve the vertical resolution and retrieval accuracy compared to either instrument alone.
NASA Astrophysics Data System (ADS)
Atencia, A.; Llasat, M. C.; Garrote, L.; Mediero, L.
2010-10-01
The performance of distributed hydrological models depends on the resolution, both spatial and temporal, of the rainfall surface data introduced. The estimation of quantitative precipitation from meteorological radar or satellite can improve hydrological model results, thanks to an indirect estimation at higher spatial and temporal resolution. In this work, composite radar data from a network of three C-band radars, with 6-min temporal and 2 × 2 km² spatial resolution, provided by the Catalan Meteorological Service, are used to feed the RIBS distributed hydrological model. A Window Probability Matching Method (gauge-adjustment method) is applied to four cases of heavy rainfall to correct the underestimation of observed rainfall by both the convective and stratiform Z/R relations used over Catalonia. Once the rainfall field has been adequately obtained, an advection correction based on cross-correlation between two consecutive images is introduced to obtain several time resolutions from 1 min to 30 min. Each resolution is treated as an independent event, resulting in a probable range of input rainfall data. This ensemble of rainfall data is used, together with other sources of uncertainty such as the initial basin state or the accuracy of discharge measurements, to calibrate the RIBS model using a probabilistic methodology. A sensitivity analysis of the time resolutions was performed by comparing the various results with observed values from stream-flow measurement stations.
Research and Development Services: Methods Development
1982-07-23
At an applied potential of -1.15 volts, the minimum detectable amount was 500 ng, which was not very sensitive. From Hammett linear free energy... Equation 1, the value of N was optimized by using two columns. The other factors which can influence resolution are the capacity factor, k, and the
A comparison of different interpolation methods for wind data in Central Asia
NASA Astrophysics Data System (ADS)
Reinhardt, Katja; Samimi, Cyrus
2017-04-01
For the assessment of global climate change and its consequences, the results of computer-based climate models are of central importance. The quality of these results and the validity of the derived forecasts are strongly determined by the quality of the underlying climate data. However, in many parts of the world, high-resolution data are not available. This is particularly true for many regions in Central Asia, where the network of climatological stations is often sparse. Given this insufficient data basis, the use of statistical methods to improve the resolution of existing climate data is of crucial importance; only then can a sound basis be provided for a well-founded analysis of past climate changes as well as for a reliable forecast of future climate developments in the region. The study presented here compares different interpolation methods for the wind components u and v for a region in Central Asia with pronounced topography. The aim of the study is to find out whether there is an optimal interpolation method that can equally be applied to all pressure levels, or whether different interpolation methods have to be applied for each pressure level. The European reanalysis data ERA-Interim for the years 1989-2015 are used as input data for the pressure levels of 850 hPa, 500 hPa and 200 hPa. To improve the input data, two different interpolation approaches were applied: on the one hand, pure interpolation methods such as inverse distance weighting and ordinary kriging; on the other hand, machine learning algorithms, generalized additive models and regression kriging, which consider additional influencing factors, e.g. geopotential and topography. As a result it can be concluded that regression kriging provides the best results for all pressure levels, followed by support vector machines, neural networks and ordinary kriging. Inverse distance weighting showed the worst results.
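As an example of the "pure" interpolation baseline mentioned above, a minimal inverse-distance-weighting routine for a single wind component might look as follows; the power parameter is an assumption, and the kriging and machine-learning variants evaluated in the study would replace these weights with model-based ones.

```python
import numpy as np

def idw_interpolate(xy_obs, values, xy_target, power=2.0):
    """Minimal inverse-distance-weighting sketch for scattered wind components.

    xy_obs    : (n, 2) station (or grid-point) coordinates
    values    : (n,) observed u or v component at those points
    xy_target : (m, 2) locations to interpolate to
    """
    d = np.linalg.norm(xy_target[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-9) ** power      # inverse-distance weights
    return (w @ values) / w.sum(axis=1)          # weighted average per target point
```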
NASA Astrophysics Data System (ADS)
Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei
2017-07-01
In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with a radial basis function (RBF) kernel was then applied for classification. Geometrical features were extracted with the EAPs method from the two major independent components. Three morphological attributes were calculated for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and the commonly used LIBSVM library for support vector machines. WorldView-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, about 2% higher than the EAPs with principal component analysis (PCA) method, and about 6% higher than APs applied to the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIBSVM libraries are well suited to multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be larger than that of LIBSVM. This study should be helpful for the classification of high-resolution multispectral satellite remote sensing images.
Improved spatial resolution of luminescence images acquired with a silicon line scanning camera
NASA Astrophysics Data System (ADS)
Teal, Anthony; Mitchell, Bernhard; Juhl, Mattias K.
2018-04-01
Luminescence imaging is currently being used to provide spatially resolved defect information in high-volume silicon solar cell production. One option for obtaining the high throughput required for on-the-fly detection is the use of silicon line scan cameras. However, when using a silicon-based camera, the spatial resolution is reduced as a result of weakly absorbed light scattering within the camera's chip. This paper addresses this issue by applying deconvolution with a measured point spread function. The paper extends the methods for determining the point spread function of a silicon area camera to a line scan camera with charge transfer. The improvement in resolution is quantified in the Fourier domain and in the spatial domain on an image of a multicrystalline silicon brick. It is found that light spreading beyond the active sensor area is significant in line scan sensors, but can be corrected for through normalization of the point spread function. The application of this method improves the raw data, allowing effective detection of spatially resolved defects in manufacturing.
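A frequency-domain deconvolution with a measured PSF can be sketched as a simple Wiener filter; the noise-to-signal constant and the PSF padding are assumptions, and the paper's exact deconvolution procedure and its PSF normalization beyond the active sensor area are not reproduced.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Sketch of frequency-domain deconvolution with a measured PSF.

    image: luminescence image (2-D array).
    psf  : measured point spread function, no larger than the image.
    nsr  : assumed noise-to-signal ratio regularizing the filter.
    """
    pad = np.zeros_like(image, dtype=float)
    pad[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()     # normalize to unit sum
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)             # Wiener filter
    return np.real(np.fft.ifft2(F))
```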
Assessing the Assessment Methods: Climate Change and Hydrologic Impacts
NASA Astrophysics Data System (ADS)
Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.
2014-12-01
The Bureau of Reclamation, the U.S. Army Corps of Engineers, and other water management agencies have an interest in developing reliable, science-based methods for incorporating climate change information into longer-term water resources planning. Such assessments must quantify projections of future climate and hydrology, typically relying on some form of spatial downscaling and bias correction to produce watershed-scale weather information that subsequently drives hydrology and other water resource management analyses (e.g., water demands, water quality, and environmental habitat). Water agencies continue to face challenging method decisions in these endeavors: (1) which downscaling method should be applied, and at what resolution; (2) what observational dataset should be used to drive downscaling and hydrologic analysis; (3) what hydrologic model(s) should be used, and how should these models be configured and calibrated? There is a critical need to understand the ramifications of these method decisions, as they affect the signal and uncertainties produced by climate change assessments and, thus, adaptation planning. This presentation summarizes results from a three-year effort to identify strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic conditions. Methods were evaluated from two perspectives: historical fidelity, and the tendency to modulate a global climate model's climate change signal. On downscaling, four methods were applied at multiple resolutions: statistically, using Bias Correction Spatial Disaggregation, Bias Correction Constructed Analogs, and Asynchronous Regression; dynamically, using the Weather Research and Forecasting model. Downscaling results were then used to drive hydrologic analyses over the contiguous U.S. using multiple models (VIC, CLM, PRMS), with added focus placed on case study basins within the Colorado Headwaters. The presentation will identify which types of climate changes are expressed robustly across methods versus those that are sensitive to method choice; which method choices seem relatively more important; and where strategic investments in research and development can substantially improve guidance on climate change provided to water managers.
Spatial resolution of the electrical conductance of ionic fluids using a Green-Kubo method.
Jones, R E; Ward, D K; Templeton, J A
2014-11-14
We present a Green-Kubo method to spatially resolve transport coefficients in compositionally heterogeneous mixtures. We develop the underlying theory based on well-known results from mixture theory, Irving-Kirkwood field estimation, and linear response theory. Then, using standard molecular dynamics techniques, we apply the methodology to representative systems. With a homogeneous salt water system, where the expectation of the distribution of conductivity is clear, we demonstrate the sensitivities of the method to system size, and other physical and algorithmic parameters. Then we present a simple model of an electrochemical double layer where we explore the resolution limit of the method. In this system, we observe significant anisotropy in the wall-normal vs. transverse ionic conductances, as well as near wall effects. Finally, we discuss extensions and applications to more realistic systems such as batteries where detailed understanding of the transport properties in the vicinity of the electrodes is of technological importance.
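For context, a bare-bones Green-Kubo estimate of the total ionic conductivity from a molecular dynamics charge-current time series is sketched below, using the standard relation sigma = (3 V kB T)^-1 times the time integral of the current autocorrelation; the spatial partitioning into local conductances developed in the paper is not reproduced, and SI units are assumed.

```python
import numpy as np

def green_kubo_conductivity(J, dt, volume, temperature, kB=1.380649e-23):
    """Sketch of a Green-Kubo estimate of ionic conductivity.

    J: (n_steps, 3) total charge-current time series from an MD run (SI units).
    dt, volume, temperature: timestep, simulation-cell volume, temperature (SI).
    """
    n = J.shape[0]
    acf = np.zeros(n)
    for t in range(n):                      # <J(0).J(t)>, brute-force time average
        acf[t] = np.mean(np.sum(J[: n - t] * J[t:], axis=1))
    integral = dt * acf.sum()               # simple rectangle-rule time integration
    return integral / (3.0 * volume * kB * temperature)
```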
Yehia, Ali M; Arafa, Reham M; Abbas, Samah S; Amer, Sawsan M
2016-01-15
Spectral resolution of cefquinome sulfate (CFQ) in the presence of its degradation products was studied. Three selective, accurate and rapid spectrophotometric methods were developed for the determination of CFQ in the presence of either its hydrolytic, oxidative or photo-degradation products. The proposed ratio difference, derivative ratio and mean centering methods are ratio-manipulating spectrophotometric methods that were satisfactorily applied for the selective determination of CFQ within a linear range of 5.0-40.0 μg mL(-1). Concentration Residuals Augmented Classical Least Squares was applied and evaluated for the determination of the cited drug in the presence of all of its degradation products. Traditional Partial Least Squares regression was also applied and benchmarked against the proposed advanced multivariate calibration. Twenty-five experimentally designed synthetic mixtures, with three factors at five levels, were used to calibrate and validate the multivariate models. The advanced chemometric methods succeeded in the quantitative and qualitative analysis of CFQ along with its hydrolytic, oxidative and photo-degradation products. The proposed methods were applied successfully to the analysis of different pharmaceutical formulations. The developed methods are simple and cost-effective compared with the manufacturer's RP-HPLC method. Copyright © 2015 Elsevier B.V. All rights reserved.
Hussaini, Zahra; Lin, Pin Ann; Natarajan, Bharath; Zhu, Wenhui; Sharma, Renu
2018-03-01
For many reaction processes, such as catalysis, phase transformations, nanomaterial synthesis etc., nanoscale observations at high spatial (sub-nanometer) and temporal (millisecond) resolution are required to characterize and comprehend the underlying factors that favor one reaction over another. The combination of such spatial and temporal resolution (up to 600 µs), while rich in information, produces a large number of snapshots, each of which must be analyzed to obtain the structural (and thereby chemical) information. Here we present a methodology for automated quantitative measurement of real-time atomic position fluctuations in a nanoparticle. We leverage a combination of several image processing algorithms to precisely identify the positions of the atomic columns in each image. A geometric model is then used to measure the time-evolution of distances and angles between neighboring atomic columns to identify different phases and quantify local structural fluctuations. We apply this technique to determine the atomic-level fluctuations in the relative fractions of metal and metal-carbide phases in a cobalt catalyst nanoparticle during single-walled carbon nanotube (SWCNT) growth. These measurements provided a means to obtain the number of carbon atoms incorporated into and released from the catalyst particle, thereby helping resolve carbon reaction pathways during SWCNT growth. Further we demonstrate the use of this technique to measure the reaction kinetics of iron oxide reduction. Apart from reducing the data analysis time, the statistical approach allows us to measure atomic distances with sub-pixel resolution. We show that this method can be applied universally to measure atomic positions with a precision of 0.01 nm from any set of atomic-resolution video images. With the advent of high time-resolution direct detection cameras, we anticipate such methods will be essential in addressing the metrology problem of quantifying large datasets of time-resolved images in future. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Holz, Robert E.; Ackerman, Steve; Antonelli, Paolo; Nagle, Fred; McGill, Matthew; Hlavka, Dennis L.; Hart, William D.
2005-01-01
This paper presents a comparison of cloud-top altitude retrieval methods applied to S-HIS (Scanning High Resolution Interferometer Sounder) measurements. Included in this comparison is an improvement to the traditional CO2 Slicing method. The new method, CO2 Sorting, determines the optimal channel pairs to which CO2 Slicing is applied. Measurements from collocated samples of the Cloud Physics Lidar (CPL) and MODIS Airborne Simulator (MAS) instruments assist in the comparison. For optically thick clouds, good correlation between the S-HIS and lidar cloud-top retrievals is found. For tenuous ice clouds there can be large differences between lidar (CPL) and S-HIS retrieved cloud tops. It is found that CO2 Sorting significantly reduces the cloud height biases for optically thin clouds (total optical depths less than 1.0). For geometrically thick but optically thin cirrus clouds, large differences between the S-HIS infrared cloud-top retrievals and the CPL-detected cloud top were found. For these cases the cloud height retrieved by S-HIS correlated closely with the level at which the CPL-integrated cloud optical depth reached approximately 1.0.
Applying high-resolution melting (HRM) technology to identify five commonly used Artemisia species.
Song, Ming; Li, Jingjian; Xiong, Chao; Liu, Hexia; Liang, Junsong
2016-10-04
Many members of the genus Artemisia are important for medicinal purposes, with multiple pharmacological properties. Often, the herbal plants sold on the market are in processed forms, so they are difficult to authenticate. Routine testing and identification of these herbal materials should be performed to ensure that the raw materials used in pharmaceutical products are suitable for their intended use. In this study, five commonly used Artemisia species, including Artemisia argyi, Artemisia annua, Artemisia lavandulaefolia, Artemisia indica, and Artemisia atrovirens, were analyzed using high resolution melting (HRM) analysis based on the internal transcribed spacer 2 (ITS2) sequences. The melting profiles of the ITS2 amplicons of the five closely related herbal species are clearly separated, so they can be differentiated by the HRM method. The method was further applied to authenticate commercial products in powdered form. The HRM curves of all the commercial samples tested are similar to those of the botanical species as labeled. These congeneric medicinal products were also clearly separated using a neighbor-joining (NJ) tree. Therefore, the HRM method could provide an efficient and reliable authentication system for distinguishing these commonly used Artemisia herbal products on the market and offer a technical reference for quality control of medicines in the drug supply chain.
NASA Astrophysics Data System (ADS)
Zhou, Yu; Walker, Richard T.; Elliott, John R.; Parsons, Barry
2016-04-01
Fault dips are usually measured from outcrops in the field or inferred through geodetic or seismological modeling. Here we apply the classic structural geology approach of calculating dip from a fault's 3-D surface trace using recent, high-resolution topography. A test study applied to the 2010 El Mayor-Cucapah earthquake shows very good agreement between our results and those previously determined from field measurements. To obtain a reliable estimate, a fault segment ≥120 m long with a topographic variation ≥15 m is suggested. We then applied this method to the 2013 Balochistan earthquake, obtaining dips similar to previous estimates. Our dip estimates show a switch from north-dipping to south-dipping at the southern end of the main trace, which appears to be a response to local extension within a stepover. We suggest that this previously unidentified geometrical complexity may act as the endpoint of earthquake ruptures at the southern end of the Hoshab fault.
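The underlying geometric step can be illustrated by fitting a plane to points sampled along the mapped 3-D fault trace and taking the angle between that plane and the horizontal; this sketch assumes simple (x, y, z) input points with some topographic relief and does not enforce the segment-length or relief thresholds quoted above.

```python
import numpy as np

def dip_from_trace(points):
    """Sketch of estimating fault dip from a 3-D surface trace.

    points: (n, 3) array of (x, y, z) positions sampled along the fault trace
    over high-resolution topography (points must not be collinear).
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                                   # smallest-variance direction
    # dip = angle between the fitted plane and horizontal
    dip = np.degrees(np.arccos(abs(normal[2]) / np.linalg.norm(normal)))
    return dip
```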
High Efficiency Multi-shot Interleaved Spiral-In/Out Acquisition for High Resolution BOLD fMRI
Jung, Youngkyoo; Samsonov, Alexey A.; Liu, Thomas T.; Buracas, Giedrius T.
2012-01-01
Growing demand for high spatial resolution BOLD functional MRI faces a challenge of the spatial resolution vs. coverage or temporal resolution tradeoff, which can be addressed by methods that afford increased acquisition efficiency. Spiral acquisition trajectories have been shown to be superior to currently prevalent echo-planar imaging in terms of acquisition efficiency, and high spatial resolution can be achieved by employing multiple-shot spiral acquisition. The interleaved spiral in-out trajectory is preferred over spiral-in due to increased BOLD signal CNR and higher acquisition efficiency than that of spiral-out or non-interleaved spiral in/out trajectories (1), but to date applicability of the multi-shot interleaved spiral in-out for high spatial resolution imaging has not been studied. Herein we propose multi-shot interleaved spiral in-out acquisition and investigate its applicability for high spatial resolution BOLD fMRI. Images reconstructed from interleaved spiral-in and -out trajectories possess artifacts caused by differences in T2* decay, off-resonance and k-space errors associated with the two trajectories. We analyze the associated errors and demonstrate that application of conjugate phase reconstruction and spectral filtering can substantially mitigate these image artifacts. After applying these processing steps, the multishot interleaved spiral in-out pulse sequence yields high BOLD CNR images at in-plane resolution below 1x1 mm while preserving acceptable temporal resolution (4 s) and brain coverage (15 slices of 2 mm thickness). Moreover, this method yields sufficient BOLD CNR at 1.5 mm isotropic resolution for detection of activation in hippocampus associated with cognitive tasks (Stern memory task). The multi-shot interleaved spiral in-out acquisition is a promising technique for high spatial resolution BOLD fMRI applications. PMID:23023395
NASA Astrophysics Data System (ADS)
Aviv, O.; Lipshtat, A.
2018-05-01
On-Site Inspection (OSI) activities under the Comprehensive Nuclear-Test-Ban Treaty (CTBT) allow for limitations to be placed on measurement equipment, so certain detectors require modification to be operated in a restricted mode. The accuracy and reliability of results obtained by a restricted device may be impaired. We present here a method for limiting data acquisition during an OSI. The limitations are applied to a high-resolution high-purity germanium detector system, in which the vast majority of the acquired data that is not relevant to the inspection is filtered out. The limited spectrum is displayed to the user and allows analysis using standard gamma spectrometry procedures. The proposed method can be incorporated into commercial gamma-ray spectrometers, including both stationary and mobile systems. By applying this procedure to more than 1000 spectra representing various scenarios, we show that partial data are sufficient for reaching reliable conclusions. A comprehensive survey of potential false-positive identifications of various radionuclides is presented as well. It is evident from the results that the analysis of a limited spectrum is practically identical to that of a standard spectrum in terms of detection and quantification of OSI-relevant radionuclides. A future limited system can be developed making use of the principles outlined by the suggested method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, E., E-mail: emmanuel.brun@esrf.fr; Grandl, S.; Sztrókay-Gaul, A.
Purpose: Phase contrast computed tomography has emerged as an imaging method that is able to outperform present-day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. Methods: The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer-based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. Results: A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. Conclusions: The authors demonstrate that applying the viscous watershed transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer-based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques will represent a valuable multistep procedure to be used in future medical diagnostic applications.
NASA Astrophysics Data System (ADS)
Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin
2017-11-01
Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on high-frequency-resolution analysis of the stator current. Compared with the discrete Fourier transform, the parametric spectrum estimation technique has higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to the large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem, solved through singular value decomposition, is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor, the results of which indicate that, not only parametric spectrum estimation technique.
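The amplitude-and-phase estimation step can be sketched as an ordinary least squares fit at the frequencies already identified; the min-norm frequency estimation itself is not reproduced, and the function below simply assumes a sampled current window, a sampling rate and a list of candidate frequencies (NumPy's lstsq is an SVD-based solver).

```python
import numpy as np

def fit_components(signal, fs, freqs):
    """Sketch of least-squares amplitude/phase estimation at known frequencies.

    signal: sampled stator-current window; fs: sampling rate in Hz;
    freqs : frequencies (Hz) identified beforehand, e.g. by a subspace method.
    Fits signal ~= sum_k a_k cos(2*pi*f_k*t) + b_k sin(2*pi*f_k*t).
    """
    t = np.arange(len(signal)) / fs
    A = np.column_stack([f(2 * np.pi * fk * t)
                         for fk in freqs for f in (np.cos, np.sin)])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    a, b = coef[0::2], coef[1::2]
    amplitudes = np.hypot(a, b)
    phases = np.arctan2(-b, a)      # so that a*cos + b*sin = A*cos(w*t + phi)
    return amplitudes, phases
```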
Effects of daily, high spatial resolution a priori profiles of satellite-derived NOx emissions
NASA Astrophysics Data System (ADS)
Laughner, J.; Zare, A.; Cohen, R. C.
2016-12-01
The current generation of space-borne NO2 column observations provides a powerful means of constraining NOx emissions, owing to the spatial resolution and global coverage afforded by the Ozone Monitoring Instrument (OMI). The finer resolution available from next-generation instruments such as TROPOMI, and the capabilities of the geosynchronous platforms TEMPO, Sentinel-4, and GEMS, will extend these capabilities, but we must apply lessons learned from the current generation of retrieval algorithms to make the best use of these instruments. Here, we focus on the effect of the resolution of the a priori NO2 profiles used in the retrieval algorithms. We show that for an OMI retrieval, using daily high-resolution a priori profiles results in changes in the retrieved VCDs of up to 40% when compared to a retrieval using monthly average profiles at the same resolution. Further, comparing a retrieval with daily high-spatial-resolution a priori profiles to a more standard one, we show that the derived emissions increase by 100% when using the optimized retrieval.
Robust video super-resolution with registration efficiency adaptation
NASA Astrophysics Data System (ADS)
Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen
2010-07-01
Super-resolution (SR) is a technique for constructing a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of conventional super-resolution techniques, when applied to video sequences, relies strongly on the efficiency of the motion alignment achieved by image registration. Unfortunately, such efficiency is limited by the motion complexity in the video and the capability of the adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially varying registration efficiency. The reliability of each reference pixel is measured by the corresponding registration error and incorporated into the optimization objective function of the SR reconstruction. This makes the SR reconstruction highly immune to registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism to assign weights according to registration errors. The proposed super-resolution scheme has been tested with various video sequences, and experimental results clearly demonstrate the effectiveness of the proposed method.
xMDFF: molecular dynamics flexible fitting of low-resolution X-ray structures.
McGreevy, Ryan; Singharoy, Abhishek; Li, Qufei; Zhang, Jingfen; Xu, Dong; Perozo, Eduardo; Schulten, Klaus
2014-09-01
X-ray crystallography remains the most dominant method for solving atomic structures. However, for relatively large systems, the availability of only medium-to-low-resolution diffraction data often limits the determination of all-atom details. A new molecular dynamics flexible fitting (MDFF)-based approach, xMDFF, for determining structures from such low-resolution crystallographic data is reported. xMDFF employs a real-space refinement scheme that flexibly fits atomic models into an iteratively updating electron-density map. It addresses significant large-scale deformations of the initial model to fit the low-resolution density, as tested with synthetic low-resolution maps of D-ribose-binding protein. xMDFF has been successfully applied to re-refine six low-resolution protein structures of varying sizes that had already been submitted to the Protein Data Bank. Finally, via systematic refinement of a series of data from 3.6 to 7 Å resolution, xMDFF refinements together with electrophysiology experiments were used to validate the first all-atom structure of the voltage-sensing protein Ci-VSP.
Method of LSD profile asymmetry for estimating the center of mass velocities of pulsating stars
NASA Astrophysics Data System (ADS)
Britavskiy, Nikolay; Pancino, Elena; Romano, Donatella; Tsymbal, Vadim
2015-08-01
We present a radial velocity analysis for 20 solar neighborhood RR Lyrae stars and 3 Population II Cepheids. High-resolution spectra were observed with either TNG/SARG or VLT/UVES over varying phases. To estimate the center-of-mass (barycentric) velocities of the program stars, we utilized two independent methods. First, the 'classic' method was employed, which is based on RR Lyrae radial velocity curve templates. Second, we provide a new method that uses absorption-line profile asymmetry to determine both the pulsation and barycentric velocities, even with a small number of high-resolution spectra and in cases where the phase of the observations is uncertain. This new method is based on a Least Squares Deconvolution (LSD) of the line profiles in order to analyze the line asymmetry that occurs in the spectra of pulsating stars. By applying this method to our sample stars we attain accurate measurements (±1 km/s) of the pulsation component of the radial velocity. This allows determination of the barycentric velocity to within 5 km/s even with a small number of high-resolution spectra. A detailed investigation of LSD profile asymmetry shows the variable nature of the projection factor at different pulsation phases, which should be taken into account in detailed spectroscopic analyses of pulsating stars.
Method of LSD profile asymmetry for estimating the center of mass velocities of pulsating stars
NASA Astrophysics Data System (ADS)
Britavskiy, N.; Pancino, E.; Tsymbal, V.; Romano, D.; Cacciari, C.; Clementini, C.
2016-05-01
We present a radial velocity analysis for 20 solar neighborhood RR Lyrae stars and 3 Population II Cepheids. High-resolution spectra were observed with either TNG/SARG or VLT/UVES over varying phases. To estimate the center-of-mass (barycentric) velocities of the program stars, we utilized two independent methods. First, the 'classic' method was employed, which is based on RR Lyrae radial velocity curve templates. Second, we provide a new method that uses absorption-line profile asymmetry to determine both the pulsation and barycentric velocities, even with a small number of high-resolution spectra and in cases where the phase of the observations is uncertain. This new method is based on a least squares deconvolution (LSD) of the line profiles in order to analyze the line asymmetry that occurs in the spectra of pulsating stars. By applying this method to our sample stars we attain accurate measurements (±2 km s^-1) of the pulsation component of the radial velocity. This allows determination of the barycentric velocity to within 5 km s^-1 even with a small number of high-resolution spectra. A detailed investigation of LSD profile asymmetry shows the variable nature of the projection factor at different pulsation phases, which should be taken into account in detailed spectroscopic analyses of pulsating stars.
Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; ...
2016-11-25
Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. We specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Two-dimensional PMF can then effectively perform three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular-level resolution on other bulk aerosol components commonly observed by the AMS.
NASA Astrophysics Data System (ADS)
Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.
2016-11-01
We present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.
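The binning step that produces the PMF input matrix can be sketched as follows, assuming each sample's chromatogram is available as a scans-by-m/z array; the bin width is left as a free parameter, and the retention-time shift correction described above is omitted.

```python
import numpy as np

def bin_chromatograms(runs, bin_width):
    """Sketch of building a PMF input matrix from binned chromatograms.

    runs: list of 2-D arrays, one per sample, shaped (n_scans, n_mz); all runs
    are assumed to share the same scan count and m/z axis. Each chromatogram is
    cut into evenly spaced retention-time bins of `bin_width` scans, the mass
    spectra within a bin are summed, and the (bin, m/z) block is flattened into
    one row per sample.
    """
    rows = []
    for X in runs:
        n_bins = X.shape[0] // bin_width
        binned = X[: n_bins * bin_width].reshape(n_bins, bin_width, X.shape[1]).sum(axis=1)
        rows.append(binned.ravel())          # (bins * m/z channels,)
    return np.vstack(rows)                   # samples x (bin, m/z) matrix for PMF
```

Each row of the returned matrix is one sample and each column is a (retention-time bin, m/z) pair, which is what lets a standard two-dimensional PMF effectively factor the three-dimensional TAG data.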
A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results
NASA Technical Reports Server (NTRS)
Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert
2014-01-01
This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m × 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed from independent measurements to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information. Namely, the six-degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The results of the comparisons show that the super-resolution method can construct high quality DEMs and allows for identifying hazards like rocks and craters in accordance with ALHAT requirements.
A super-resolution algorithm for enhancement of flash lidar data: flight test results
NASA Astrophysics Data System (ADS)
Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert
2013-03-01
This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m × 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed from independent measurements to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information. Namely, the six-degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The results of the comparisons show that the super-resolution method can construct high quality DEMs and allows for identifying hazards like rocks and craters in accordance with ALHAT requirements.
The High-Resolution Wave-Propagation Method Applied to Meso- and Micro-Scale Flows
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.
2012-01-01
The high-resolution wave-propagation method for computing nonhydrostatic atmospheric flows on meso- and micro-scales is described. The design and implementation of the Riemann solver used for computing the Godunov fluxes is discussed in detail. The method uses a flux-based wave decomposition in which the flux differences are written directly as a linear combination of the right eigenvectors of the hyperbolic system. The two advantages of the technique are: 1) the need for an explicit definition of the Roe matrix is eliminated, and 2) the inclusion of the source term due to gravity does not result in discretization errors. The resulting flow solver is conservative and able to resolve regions of large gradients without introducing dispersion errors. The methodology is validated against exact analytical solutions and benchmark cases for non-hydrostatic atmospheric flows.
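As a textbook-style illustration of the flux-based wave decomposition (a sketch for a 1-D constant-coefficient linear system q_t + A q_x = 0 with periodic boundaries, not the authors' atmospheric solver): the flux difference at each cell interface is expanded in the right eigenvectors of A, and the resulting waves are upwinded according to the sign of their speeds.

```python
import numpy as np

def wave_propagation_step(Q, A, dx, dt):
    """One first-order update for q_t + A q_x = 0 using flux-based wave decomposition.
    Q : (n_cells, n_vars) cell averages; periodic boundaries for brevity."""
    lam, R = np.linalg.eig(A)                    # wave speeds and right eigenvectors
    Rinv = np.linalg.inv(R)
    dF = (Q - np.roll(Q, 1, axis=0)) @ A.T       # flux differences f_i - f_{i-1}
    beta = dF @ Rinv.T                           # expansion coefficients per wave
    waves = beta[:, :, None] * R.T[None, :, :]   # (n_cells, n_waves, n_vars)
    Am = waves[:, lam < 0, :].sum(axis=1)        # left-going fluctuations
    Ap = waves[:, lam > 0, :].sum(axis=1)        # right-going fluctuations
    # cell i receives right-going waves from its left interface
    # and left-going waves from its right interface
    return Q - dt / dx * (Ap + np.roll(Am, -1, axis=0))
```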
Automatic Mrf-Based Registration of High Resolution Satellite Video Data
NASA Astrophysics Data System (ADS)
Platias, C.; Vakalopoulou, M.; Karantzalos, K.
2016-06-01
In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration formulated as a Markov Random Field (MRF) model, while efficient linear programming is employed to reach the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding the computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first one converging in a few minutes and the second in a few seconds. Regarding the registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.
Yehia, Ali M; Mohamed, Heba M
2016-01-05
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be directly used without any preliminary separation step and were successfully applied for pharmaceutical formulation analysis, showing no excipients' interference. Copyright © 2015 Elsevier B.V. All rights reserved.
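The CRACLS, MCR-ALS, and PCA-ANN models above are more elaborate than can be shown briefly; as a grounding illustration only, here is a minimal sketch of the plain classical least squares (CLS) backbone that such multivariate calibrations build on. Matrix names and shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def cls_calibrate(conc_cal, spectra_cal):
    """Classical least squares: spectra ~ conc @ K.
    conc_cal    : (n_samples, n_components) known concentrations
    spectra_cal : (n_samples, n_wavelengths) measured calibration spectra
    Returns the pure-component sensitivity matrix K."""
    K, *_ = np.linalg.lstsq(conc_cal, spectra_cal, rcond=None)
    return K

def cls_predict(K, spectra_unknown):
    """Estimate concentrations of unknown mixtures from their spectra."""
    C_T, *_ = np.linalg.lstsq(K.T, spectra_unknown.T, rcond=None)
    return C_T.T
```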
NASA Technical Reports Server (NTRS)
Engquist, B. E. (Editor); Osher, S. (Editor); Somerville, R. C. J. (Editor)
1985-01-01
Papers are presented on such topics as the use of semi-Lagrangian advective schemes in meteorological modeling; computation with high-resolution upwind schemes for hyperbolic equations; dynamics of flame propagation in a turbulent field; a modified finite element method for solving the incompressible Navier-Stokes equations; computational fusion magnetohydrodynamics; and a nonoscillatory shock capturing scheme using flux-limited dissipation. Consideration is also given to the use of spectral techniques in numerical weather prediction; numerical methods for the incorporation of mountains in atmospheric models; techniques for the numerical simulation of large-scale eddies in geophysical fluid dynamics; high-resolution TVD schemes using flux limiters; upwind-difference methods for aerodynamic problems governed by the Euler equations; and an MHD model of the earth's magnetosphere.
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
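A structural sketch of the source-encoding idea described above: in each iteration, the measured data sets and the sources are combined with a random ±1 encoding vector, and a stochastic gradient step is taken against the encoded misfit. The misfit_grad callback stands in for the expensive adjoint-state wave solve and, like the other names and the fixed step size, is an assumption rather than the authors' implementation.

```python
import numpy as np

def source_encoded_reconstruct(misfit_grad, sources, data, m0,
                               n_iters=100, step=1e-2, seed=0):
    """Source-encoded stochastic gradient loop (sketch).

    misfit_grad(m, s_enc, d_enc) -> gradient of ||predicted(m, s_enc) - d_enc||^2,
    an assumed user-supplied routine wrapping the wave solver and its adjoint.
    sources : list of source terms; data : list of matching measured data sets.
    """
    rng = np.random.default_rng(seed)
    m = np.array(m0, dtype=float)
    for _ in range(n_iters):
        w = rng.choice([-1.0, 1.0], size=len(sources))     # random encoding vector
        s_enc = sum(wi * s for wi, s in zip(w, sources))   # one encoded super-source
        d_enc = sum(wi * d for wi, d in zip(w, data))      # matching encoded data
        m -= step * misfit_grad(m, s_enc, d_enc)           # stochastic gradient step
    return m
```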
Khomri, Bilal; Christodoulidis, Argyrios; Djerou, Leila; Babahenini, Mohamed Chaouki; Cheriet, Farida
2018-05-01
Retinal vessel segmentation plays an important role in the diagnosis of eye diseases and is considered as one of the most challenging tasks in computer-aided diagnosis (CAD) systems. The main goal of this study was to propose a method for blood-vessel segmentation that could deal with the problem of detecting vessels of varying diameters in high- and low-resolution fundus images. We proposed to use the particle swarm optimization (PSO) algorithm to improve the multiscale line detection (MSLD) method. The PSO algorithm was applied to find the best arrangement of scales in the MSLD method and to handle the problem of multiscale response recombination. The performance of the proposed method was evaluated on two low-resolution (DRIVE and STARE) and one high-resolution fundus (HRF) image datasets. The data include healthy (H) and diabetic retinopathy (DR) cases. The proposed approach improved the sensitivity rate against the MSLD by 4.7% for the DRIVE dataset and by 1.8% for the STARE dataset. For the high-resolution dataset, the proposed approach achieved 87.09% sensitivity rate, whereas the MSLD method achieves 82.58% sensitivity rate at the same specificity level. When only the smallest vessels were considered, the proposed approach improved the sensitivity rate by 11.02% and by 4.42% for the healthy and the diabetic cases, respectively. Integrating the proposed method in a comprehensive CAD system for DR screening would allow the reduction of false positives due to missed small vessels, misclassified as red lesions. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
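The abstract does not give implementation details of the PSO stage, so the following is only a generic particle swarm optimizer of the kind that could search for a scale arrangement; the cost function, bounds, and parameter values are placeholders, not those of the study.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic PSO.  `cost` maps a parameter vector (e.g. an arrangement of MSLD
    scales) to a scalar; `bounds` is an (n_dims, 2) array of lower/upper limits."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()                # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()
```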
NASA Astrophysics Data System (ADS)
Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei
2017-02-01
Endoscopic DOT has the potential to be applied to cancer-related imaging in tubular organs. Although DOT has a relatively large tissue penetration depth, endoscopic DOT is limited by the narrow space of the internal tubular tissue, resulting in a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under the limited measurement conditions. To improve the resolution, a new FOCUSS algorithm along with the image reconstruction algorithm based on the effective detection range (EDR) is developed. This algorithm is based on the region of interest (ROI) to reduce the dimensions of the matrix. The shrinking method cuts down the computational burden. To reduce the computational complexity, a double conjugate gradient method is used in the matrix inversion. For a typical inner size and optical properties of the cervix-like tubular tissue, reconstructed images from the simulation data demonstrate that the proposed method achieves image quality equivalent to that obtained from the method based on EDR when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary of the model. The quantitative ratios of the reconstructed absorption and reduced scattering coefficients can be up to 70% and 80%, respectively, at 5 mm depth. Furthermore, two close targets with different depths can be separated from each other. The proposed method will be useful to the development of endoscopic DOT technologies in tubular organs.
Long-term optical imaging of intrinsic signals in anesthetized and awake monkeys
NASA Astrophysics Data System (ADS)
Roe, Anna W.
2007-04-01
Some exciting new efforts to use intrinsic signal optical imaging methods for long-term studies in anesthetized and awake monkeys are reviewed. The development of such methodologies opens the door for studying behavioral states such as attention, motivation, memory, emotion, and other higher-order cognitive functions. Long-term imaging is also ideal for studying changes in the brain that accompany development, plasticity, and learning. Although intrinsic imaging lacks the temporal resolution offered by dyes, it is a high spatial resolution imaging method that does not require application of any external agents to the brain. The bulk of procedures described here have been developed in the monkey but can be applied to the study of surface structures in any in vivo preparation.
Fundamental Characteristics of Bioprint on Calcium Alginate Gel
NASA Astrophysics Data System (ADS)
Umezu, Shinjiro; Hatta, Tatsuru; Ohmori, Hitoshi
2013-05-01
The goal of this study is to fabricate precision three-dimensional (3D) biodevices, such as microfluidic devices and artificial organs, utilizing digital fabrication. Digital fabrication is a fabrication method utilizing inkjet technologies. Electrostatic inkjet is one of these inkjet technologies. The electrostatic inkjet method has two merits: high printing resolution and the ability to eject highly viscous liquid. These characteristics are suitable for printing biomaterials precisely. We are now applying it to bioprinting. In this paper, the electrostatic inkjet method is applied to the fabrication of 3D biodevices that have cavities like blood vessels. When an aqueous solution of sodium alginate is printed onto an aqueous solution of calcium chloride, calcium alginate is produced. 3D biodevices are fabricated by piling up the calcium alginate.
Text Line Detection from Rectangle Traffic Panels of Natural Scene
NASA Astrophysics Data System (ADS)
Wang, Shiyuan; Huang, Linlin; Hu, Jian
2018-01-01
Traffic sign detection and recognition is very important for Intelligent Transportation. Among traffic signs, traffic panels contain rich information. However, due to low resolution and blur in the rectangular traffic panel, it is difficult to extract the characters and symbols. In this paper, we propose a coarse-to-fine method to detect the Chinese characters on traffic panels in natural scenes. First, given a traffic panel, color quantization is applied to extract candidate regions of Chinese characters. Second, a multi-stage filter based on learning is applied to discard the non-character regions. Third, we aggregate the characters into text lines by a distance metric learning method. Experimental results on real traffic images from Baidu Street View demonstrate the effectiveness of the proposed method.
Multi-resolution analysis for ear recognition using wavelet features
NASA Astrophysics Data System (ADS)
Shoaib, M.; Basit, A.; Faye, I.
2016-11-01
Security is very important, and in order to avoid any physical contact, identification of humans while they are moving is necessary. Ear biometrics is one of the methods by which a person can be identified using surveillance cameras. Various techniques have been proposed to improve ear-based recognition systems. In this work, a feature extraction method for human ear recognition based on wavelet transforms is proposed. The proposed features are the approximation coefficients and specific details of level two after applying various types of wavelet transforms. Different wavelet transforms are applied to find the most suitable wavelet. Minimum Euclidean distance is used as the matching criterion. Results achieved by the proposed method are promising and can be used in a real-time ear recognition system.
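A minimal sketch of the pipeline described above, assuming the PyWavelets package: the level-2 approximation coefficients serve as the feature vector (all images are assumed to have the same size), and the gallery entry at minimum Euclidean distance gives the identity. Function names are placeholders.

```python
import numpy as np
import pywt

def wavelet_features(image, wavelet="db2", level=2):
    """Flatten the level-2 approximation coefficients into a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    return coeffs[0].ravel()          # approximation at the coarsest level

def identify(probe, gallery_features, gallery_labels, wavelet="db2"):
    """Return the label of the gallery entry with minimum Euclidean distance."""
    f = wavelet_features(probe, wavelet)
    d = [np.linalg.norm(f - g) for g in gallery_features]
    return gallery_labels[int(np.argmin(d))]
```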
NASA Astrophysics Data System (ADS)
Osten, W.; Pedrini, G.; Weidmann, P.; Gadow, R.
2015-08-01
A minimally invasive but high-resolution method for residual stress analysis of ceramic coatings made by thermal spray coating, using a pulsed laser for flexible hole drilling, is described. The residual stresses are retrieved by applying the measured surface data to a model-based reconstruction procedure. While the 3D deformations and the profile of the machined area are measured with digital holography, the residual stresses are calculated by FE analysis. To improve the sensitivity of the method, an SLM is applied to control the distribution and the shape of the holes. The paper presents the complete measurement and reconstruction procedure and discusses the advantages and challenges of the new technology.
Soil moisture downscaling using a simple thermal based proxy
NASA Astrophysics Data System (ADS)
Peng, Jian; Loew, Alexander; Niesel, Jonathan
2016-04-01
Microwave remote sensing has been largely applied to retrieve soil moisture (SM) from active and passive sensors. The obvious advantage of microwave sensors is that SM can be obtained regardless of atmospheric conditions. However, existing global SM products only provide observations at coarse spatial resolutions, which often hamper their applications in regional hydrological studies. Therefore, various downscaling methods have been proposed to enhance the spatial resolution of satellite soil moisture products. The aim of this study is to investigate the validity and robustness of a simple Vegetation Temperature Condition Index (VTCI) downscaling scheme over different climates and regions. Both polar orbiting (MODIS) and geostationary (MSG SEVIRI) satellite data are used to improve the spatial resolution of the European Space Agency's Water Cycle Multi-mission Observation Strategy and Climate Change Initiative (ESA CCI) soil moisture, which is a merged product based on both active and passive microwave observations. The results from direct validation against soil moisture in-situ measurements, spatial pattern comparison, as well as seasonal and land use analyses show that the downscaling method can significantly improve the spatial details of CCI soil moisture while maintaining the accuracy of CCI soil moisture. The application of the scheme with different satellite platforms and over different regions further demonstrates the robustness and effectiveness of the proposed method. Therefore, the VTCI downscaling method has the potential to facilitate relevant hydrological applications that require high spatial and temporal resolution soil moisture.
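The abstract does not spell out the VTCI formulation, so the sketch below uses one common form as an assumption: VTCI is computed from warm and cold edges of the LST-NDVI scatter (estimated here crudely per NDVI bin), and each coarse soil-moisture value is redistributed over its fine pixels in proportion to the fine-scale VTCI. All function names and the redistribution rule are illustrative, not the authors' scheme.

```python
import numpy as np

def vtci(lst, ndvi, n_bins=20):
    """VTCI = (LSTmax - LST) / (LSTmax - LSTmin), with warm/cold edges taken
    as the max/min LST within each NDVI bin (a simple edge-fitting assumption)."""
    edges = np.linspace(np.nanmin(ndvi), np.nanmax(ndvi), n_bins + 1)
    idx = np.clip(np.digitize(ndvi, edges) - 1, 0, n_bins - 1)
    lst_max = np.array([lst[idx == b].max() if np.any(idx == b) else np.nan
                        for b in range(n_bins)])
    lst_min = np.array([lst[idx == b].min() if np.any(idx == b) else np.nan
                        for b in range(n_bins)])
    return (lst_max[idx] - lst) / (lst_max[idx] - lst_min[idx])

def downscale_sm(sm_coarse, vtci_fine, ratio):
    """Redistribute each coarse soil-moisture value over its ratio x ratio block
    of fine pixels in proportion to the fine-scale VTCI."""
    ny, nx = sm_coarse.shape
    sm_fine = np.empty((ny * ratio, nx * ratio))
    for i in range(ny):
        for j in range(nx):
            block = vtci_fine[i*ratio:(i+1)*ratio, j*ratio:(j+1)*ratio]
            sm_fine[i*ratio:(i+1)*ratio, j*ratio:(j+1)*ratio] = (
                sm_coarse[i, j] * block / np.nanmean(block))
    return sm_fine
```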
Thermal stability control system of photo-elastic interferometer in the PEM-FTs
NASA Astrophysics Data System (ADS)
Zhang, M. J.; Jing, N.; Li, K. W.; Wang, Z. B.
2018-01-01
A drifting model for the resonant frequency and retardation amplitude of a photo-elastic modulator (PEM) in the photo-elastic modulated Fourier transform spectrometer (PEM-FTs) is presented. A multi-parameter broadband-matching driving control method is proposed to improve the thermal stability of the PEM interferometer. Automatic frequency modulation of the driving signal, based on digital phase-locked technology, is used to track the PEM's changing resonant frequency. Simultaneously, the maximum optical path difference of a laser's interferogram is measured to adjust the amplitude of the PEM's driving signal so that the spectral resolution is stable. In the experiment, the multi-parameter broadband-matching control method is applied to the driving control system of the PEM-FTs. Control of the resonant frequency and retardation amplitude stabilizes the maximum optical path difference to approximately 236 μm and results in a spectral resolution of 42 cm-1. This corresponds to a relative error smaller than 2.16% (4.28 standard deviation). The experiment shows that the method can effectively stabilize the spectral resolution of the PEM-FTs.
Towards sub-nanometer real-space observation of spin and orbital magnetism at the Fe/MgO interface
Thersleff, Thomas; Muto, Shunsuke; Werwiński, Mirosław; Spiegelberg, Jakob; Kvashnin, Yaroslav; Hjӧrvarsson, Björgvin; Eriksson, Olle; Rusz, Ján; Leifer, Klaus
2017-01-01
While the performance of magnetic tunnel junctions based on metal/oxide interfaces is determined by hybridization, charge transfer, and magnetic properties at the interface, there are currently only limited experimental techniques with sufficient spatial resolution to directly observe these effects simultaneously in real-space. In this letter, we demonstrate an experimental method based on Electron Magnetic Circular Dichroism (EMCD) that will allow researchers to simultaneously map magnetic transitions and valency in real-space over interfacial cross-sections with sub-nanometer spatial resolution. We apply this method to an Fe/MgO bilayer system, observing a significant enhancement in the orbital to spin moment ratio that is strongly localized to the interfacial region. Through the use of first-principles calculations, multivariate statistical analysis, and Electron Energy-Loss Spectroscopy (EELS), we explore the extent to which this enhancement can be attributed to emergent magnetism due to structural confinement at the interface. We conclude that this method has the potential to directly visualize spin and orbital moments at buried interfaces in magnetic systems with unprecedented spatial resolution. PMID:28338011
Towards sub-nanometer real-space observation of spin and orbital magnetism at the Fe/MgO interface
NASA Astrophysics Data System (ADS)
Thersleff, Thomas; Muto, Shunsuke; Werwiński, Mirosław; Spiegelberg, Jakob; Kvashnin, Yaroslav; Hjӧrvarsson, Björgvin; Eriksson, Olle; Rusz, Ján; Leifer, Klaus
2017-03-01
While the performance of magnetic tunnel junctions based on metal/oxide interfaces is determined by hybridization, charge transfer, and magnetic properties at the interface, there are currently only limited experimental techniques with sufficient spatial resolution to directly observe these effects simultaneously in real-space. In this letter, we demonstrate an experimental method based on Electron Magnetic Circular Dichroism (EMCD) that will allow researchers to simultaneously map magnetic transitions and valency in real-space over interfacial cross-sections with sub-nanometer spatial resolution. We apply this method to an Fe/MgO bilayer system, observing a significant enhancement in the orbital to spin moment ratio that is strongly localized to the interfacial region. Through the use of first-principles calculations, multivariate statistical analysis, and Electron Energy-Loss Spectroscopy (EELS), we explore the extent to which this enhancement can be attributed to emergent magnetism due to structural confinement at the interface. We conclude that this method has the potential to directly visualize spin and orbital moments at buried interfaces in magnetic systems with unprecedented spatial resolution.
HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models
NASA Astrophysics Data System (ADS)
Melsen, Lieke A.; Teuling, Adriaan J.; Torfs, Paul J. J. F.; Uijlenhoet, Remko; Mizukami, Naoki; Clark, Martyn P.
2016-03-01
A meta-analysis on 192 peer-reviewed articles reporting on applications of the variable infiltration capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.
HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models
NASA Astrophysics Data System (ADS)
Melsen, L. A.; Teuling, A. J.; Torfs, P. J. J. F.; Uijlenhoet, R.; Mizukami, N.; Clark, M. P.
2015-12-01
A meta-analysis on 192 peer-reviewed articles reporting applications of the Variable Infiltration Capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 4 2011-01-01 2011-01-01 false Does this part apply in the case of a workout, resolution, or settlement of obligations? 340.8 Section 340.8 Banks and Banking FEDERAL DEPOSIT INSURANCE... INSURANCE CORPORATION § 340.8 Does this part apply in the case of a workout, resolution, or settlement of...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Does this part apply in the case of a workout, resolution, or settlement of obligations? 340.8 Section 340.8 Banks and Banking FEDERAL DEPOSIT INSURANCE... INSURANCE CORPORATION § 340.8 Does this part apply in the case of a workout, resolution, or settlement of...
Estimation of modal parameters using bilinear joint time frequency distributions
NASA Astrophysics Data System (ADS)
Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.
2007-07-01
In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. The Smoothed Pseudo Wigner-Ville distribution, which is a member of the Cohen's class of distributions, is used to decouple vibration modes completely in order to study each mode separately. This distribution reduces the cross-terms that are troublesome in the Wigner-Ville distribution, while retaining the resolution. The method was applied to highly damped systems, and the results were superior to those obtained via other conventional methods.
Vegetation Continuous Fields--Transitioning from MODIS to VIIRS
NASA Astrophysics Data System (ADS)
DiMiceli, C.; Townshend, J. R.; Sohlberg, R. A.; Kim, D. H.; Kelly, M.
2015-12-01
Measurements of fractional vegetation cover are critical for accurate and consistent monitoring of global deforestation rates. They also provide important parameters for land surface, climate and carbon models and vital background data for research into fire, hydrological and ecosystem processes. MODIS Vegetation Continuous Fields (VCF) products provide four complementary layers of fractional cover: tree cover, non-tree vegetation, bare ground, and surface water. MODIS VCF products are currently produced globally and annually at 250m resolution for 2000 to the present. Additionally, annual VCF products at 1/20° resolution derived from AVHRR and MODIS Long-Term Data Records are in development to provide Earth System Data Records of fractional vegetation cover for 1982 to the present. In order to provide continuity of these valuable products, we are extending the VCF algorithms to create Suomi NPP/VIIRS VCF products. This presentation will highlight the first VIIRS fractional cover product: global percent tree cover at 1 km resolution. To create this product, phenological and physiological metrics were derived from each complete year of VIIRS 8-day surface reflectance products. A supervised regression tree method was applied to the metrics, using training derived from Landsat data supplemented by high-resolution data from Ikonos, RapidEye and QuickBird. The regression tree model was then applied globally to produce fractional tree cover. In our presentation we will detail our methods for creating the VIIRS VCF product. We will compare the new VIIRS VCF product to our current MODIS VCF products and demonstrate continuity between instruments. Finally, we will outline future VIIRS VCF development plans.
a New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Li, J.; Wan, Y.; Gao, X.
2012-07-01
With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise, densely distributed point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other fields. In this paper, a new approach using a point-cloud segmentation method based on vectors of neighboring points and a surface-fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least-squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighboring points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least-squares algorithm. Then a series of parallel sections obtained from the temporal series of fitted tunnel surfaces was compared to analyze the deformation. Finally, the results of the approach in the z direction were compared with those of a fiber-optic displacement sensor, and the results in the x and y directions were compared with TS measurements; the comparison showed accuracy errors in the x, y and z directions of about 1.5 mm, 2 mm and 1 mm, respectively. Therefore the new approach using high-resolution TLS can meet the demand of subway tunnel deformation monitoring.
Marais, Willem J; Holz, Robert E; Hu, Yu Hen; Kuehn, Ralph E; Eloranta, Edwin E; Willett, Rebecca M
2016-10-10
Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that are effective in non-uniform scenes. The novelty is twofold. First, the inferences of the backscatter and extinction are applied to images, whereas current lidar algorithms only use the information content of single profiles. Hence, the latent spatial and temporal information in noisy images are utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided where the proposed algorithm is applied on HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.
Retrieval of Cloud Properties for Partially Cloud-Filled Pixels During CRYSTAL-FACE
NASA Astrophysics Data System (ADS)
Nguyen, L.; Minnis, P.; Smith, W. L.; Khaiyer, M. M.; Heck, P. W.; Sun-Mack, S.; Uttal, T.; Comstock, J.
2003-12-01
Partially cloud-filled pixels can be a significant problem for remote sensing of cloud properties. Generally, the optical depth and effective particle sizes are often too small or too large, respectively, when derived from radiances that are assumed to be overcast but contain radiation from both clear and cloudy areas within the satellite imager field of view. This study presents a method for reducing the impact of such partially cloud-filled pixels by estimating the cloud fraction within each pixel using higher resolution visible (VIS, 0.65 µm) imager data. Although the nominal resolutions for most channels on the Geostationary Operational Environmental Satellite (GOES) imager and the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra are 4 and 1 km, respectively, both instruments also take VIS channel data at 1 km and 0.25 km, respectively. Thus, it may be possible to obtain an improved estimate of cloud fraction within the lower resolution pixels by using the information contained in the higher resolution VIS data. GOES and MODIS multi-spectral data, taken during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment (CRYSTAL-FACE), are analyzed with the algorithm used for the Atmospheric Radiation Measurement Program (ARM) and the Clouds and Earth's Radiant Energy System (CERES) to derive cloud amount, temperature, height, phase, effective particle size, optical depth, and water path. Normally, the algorithm assumes that each pixel is either entirely clear or cloudy. In this study, a threshold method is applied to the higher resolution VIS data to estimate the partial cloud fraction within each low-resolution pixel. The cloud properties are then derived from the observed low-resolution radiances using the cloud cover estimate to properly extract the radiances due only to the cloudy part of the scene. This approach is applied to both GOES and MODIS data to estimate the improvement in the retrievals for each resolution. Results are compared with the radar reflectivity techniques employed by the NOAA ETL MMCR and the PARSL 94 GHz radars located at the CRYSTAL-FACE Eastern & Western Ground Sites, respectively. This technique is most likely to yield improvements for low and midlevel layer clouds that have little thermal variability in cloud height.
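A compact sketch of the two-step idea: estimate the sub-pixel cloud fraction by thresholding the co-located high-resolution VIS reflectances, then extract the cloudy-part radiance under the standard linear (area-weighted) mixing assumption. The threshold value, the clear-sky radiance estimate, and the function names are placeholders.

```python
import numpy as np

def cloud_fraction(vis_highres, threshold):
    """Fraction of high-resolution VIS sub-pixels exceeding a cloud threshold,
    within one low-resolution imager pixel."""
    return float(np.mean(vis_highres > threshold))

def cloudy_radiance(r_obs, r_clear, f):
    """Radiance of the cloudy part of the scene, assuming the observed
    low-resolution radiance is an area-weighted mixture:
        r_obs = (1 - f) * r_clear + f * r_cloud"""
    if f <= 0.0:
        raise ValueError("pixel is clear; no cloudy radiance to extract")
    return (r_obs - (1.0 - f) * r_clear) / f
```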
Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution
Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin
2016-01-01
The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied for numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often low resolution with noise. Such visual data cannot be directly delivered to the advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM and visual perception. PMID:26927114
McShane, Ryan R.; Driscoll, Katelyn P.; Sando, Roy
2017-09-27
Many approaches have been developed for measuring or estimating actual evapotranspiration (ETa), and research over many years has led to the development of remote sensing methods that are reliably reproducible and effective in estimating ETa. Several remote sensing methods can be used to estimate ETa at the high spatial resolution of agricultural fields and the large extent of river basins. More complex remote sensing methods apply an analytical approach to ETa estimation using physically based models of varied complexity that require a combination of ground-based and remote sensing data, and are grounded in the theory behind the surface energy balance model. This report, funded through cooperation with the International Joint Commission, provides an overview of selected remote sensing methods used for estimating water consumed through ETa and focuses on Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) and Operational Simplified Surface Energy Balance (SSEBop), two energy balance models for estimating ETa that are currently applied successfully in the United States. The METRIC model can produce maps of ETa at high spatial resolution (30 meters using Landsat data) for specific areas smaller than several hundred square kilometers in extent, an improvement in practice over methods used more generally at larger scales. Many studies validating METRIC estimates of ETa against measurements from lysimeters have shown model accuracies on daily to seasonal time scales ranging from 85 to 95 percent. The METRIC model is accurate, but the greater complexity of METRIC results in greater data requirements, and the internalized calibration of METRIC leads to greater skill required for implementation. In contrast, SSEBop is a simpler model, having reduced data requirements and greater ease of implementation without a substantial loss of accuracy in estimating ETa. The SSEBop model has been used to produce maps of ETa over very large extents (the conterminous United States) using lower spatial resolution (1 kilometer) Moderate Resolution Imaging Spectroradiometer (MODIS) data. Model accuracies ranging from 80 to 95 percent on daily to annual time scales have been shown in numerous studies that validated ETa estimates from SSEBop against eddy covariance measurements. The METRIC and SSEBop models can incorporate low and high spatial resolution data from MODIS and Landsat, but the high spatiotemporal resolution of ETa estimates using Landsat data over large extents takes immense computing power. Cloud computing is providing an opportunity for processing an increasing amount of geospatial “big data” in a decreasing period of time. For example, Google Earth Engine™ has been used to implement METRIC with automated calibration for regional-scale estimates of ETa using Landsat data. The U.S. Geological Survey also is using Google Earth Engine™ to implement SSEBop for estimating ETa in the United States at a continental scale using Landsat data.
New developments in super-resolution for GaoFen-4
NASA Astrophysics Data System (ADS)
Li, Feng; Fu, Jie; Xin, Lei; Liu, Yuhong; Liu, Zhijia
2017-10-01
In this paper, the application of super resolution (SR, restoring a high spatial resolution image from a series of low resolution images of the same scene) techniques to remote sensing images from GaoFen(GF)-4, which is the most advanced geostationary-orbit earth observing satellite in China, is investigated and tested. SR has been a hot research area for decades, but one of the barriers to applying SR in the remote sensing community is the time slot between the acquisitions of the low resolution (LR) images. In general, the longer the time slot, the less reliable the reconstruction. GF-4 has the unique advantage of capturing a sequence of LR images of the same region in minutes, i.e. working as a staring camera from the point of view of SR. This is the first experiment applying super resolution to a sequence of low resolution images captured by GF-4 within a short time period. In this paper, we use Maximum a Posteriori (MAP) estimation to solve the ill-conditioned problem of SR. Both the wavelet transform and the curvelet transform are used to set up a sparse prior for remote sensing images. By combining several images of both the BeiJing and DunHuang regions captured by GF-4, our method can improve spatial resolution both visually and numerically. Experimental tests show that much detail that cannot be observed in the captured LR images can be seen in the super-resolved high resolution (HR) images. To help the evaluation, Google Earth imagery can also be referenced. Moreover, our experimental tests also show that the higher the temporal resolution, the better the HR images can be resolved. The study illustrates that the application of SR to geostationary-orbit based earth observation data is very feasible and worthwhile, and it holds potential for application to all other geostationary-orbit based earth observing systems.
NASA Astrophysics Data System (ADS)
Jin, Zhenyu; Lin, Jing; Liu, Zhong
2008-07-01
By studying the classical testing techniques (such as the Shack-Hartmann wave-front sensor) adopted for testing the aberrations of ground-based astronomical optical telescopes, we put forward two testing methods based on high-resolution image reconstruction technology. One is based on the averaged short-exposure OTF and the other is based on the Speckle Interferometric OTF by Antoine Labeyrie. Research by J. Ohtsubo, F. Roddier, Richard Barakat and J.-Y. Zhang indicated that the SITF statistical results are affected by the telescope optical aberrations, which means the SITF statistics are a function of the optical system aberration and the atmospheric Fried parameter (seeing). Telescope diffraction-limited information can be obtained through two statistical treatments of abundant speckle images: by the first method, we can extract low frequency information such as the full width at half maximum (FWHM) of the telescope PSF to estimate the optical quality; by the second method, we can get a more precise description of the telescope PSF with high frequency information. We will apply the two testing methods to the 2.4 m optical telescope of the GMG Observatory in China to validate their repeatability and correctness, and compare the testing results with those obtained by the Shack-Hartmann wave-front sensor. This part will be described in detail in our paper.
Estimation of Subpixel Snow-Covered Area by Nonparametric Regression Splines
NASA Astrophysics Data System (ADS)
Kuter, S.; Akyürek, Z.; Weber, G.-W.
2016-10-01
Measurement of the areal extent of snow cover with high accuracy plays an important role in hydrological and climate modeling. Remotely sensed data acquired by earth-observing satellites offer great advantages for timely monitoring of snow cover. However, the main obstacle is the tradeoff between the temporal and spatial resolution of satellite imagery. Soft or subpixel classification of low or moderate resolution satellite images is a preferred technique to overcome this problem. The most frequently employed snow cover fraction methods applied to Moderate Resolution Imaging Spectroradiometer (MODIS) data have evolved from spectral unmixing and empirical Normalized Difference Snow Index (NDSI) methods to the latest machine learning-based artificial neural networks (ANNs). This study demonstrates the implementation of subpixel snow-covered area estimation based on the state-of-the-art nonparametric spline regression method, namely, Multivariate Adaptive Regression Splines (MARS). MARS models were trained by using MODIS top of atmosphere reflectance values of bands 1-7 as predictor variables. Reference percentage snow cover maps were generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also employed to estimate the percentage snow-covered area on the same data set. The results indicated that the developed MARS model performed better than the ANN.
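As an illustration only (the study's training data, settings, and software are not reproduced here), a MARS regression of fractional snow cover on MODIS band 1-7 reflectances might be set up as follows; the py-earth package and all variable names are assumptions.

```python
import numpy as np
from pyearth import Earth   # py-earth: an assumed, third-party MARS implementation

def train_snow_fraction_model(modis_refl, snow_fraction):
    """Fit a MARS model mapping MODIS band 1-7 reflectances (n_samples, 7)
    to reference fractional snow cover (n_samples,) derived from Landsat."""
    model = Earth(max_degree=2)
    model.fit(modis_refl, snow_fraction)
    return model

def predict_snow_fraction(model, modis_refl):
    """Predict and clip to the physically meaningful [0, 1] range."""
    return np.clip(model.predict(modis_refl), 0.0, 1.0)
```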
Threshold matrix for digital halftoning by genetic algorithm optimization
NASA Astrophysics Data System (ADS)
Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero
1998-10-01
Digital halftoning is used in both low and high resolution high quality printing technologies. Our method is designed mainly for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this, the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of a genetic algorithm with a search method based on local backtracking was developed, together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can also be applied in other halftoning application areas, including high resolution printing technology.
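The genetic-algorithm optimization itself is beyond a short sketch, but applying a finished threshold matrix is simple: tile it over the image and compare pixel by pixel. The sketch below assumes gray levels and thresholds normalized to [0, 1]; names are illustrative.

```python
import numpy as np

def halftone(image, threshold_matrix):
    """Binary halftone by tiling a threshold (dither) matrix over the image.
    image            : 2-D array of gray levels in [0, 1]
    threshold_matrix : 2-D array of thresholds in [0, 1], e.g. GA-optimized
    Returns a binary array with 1 where the gray level exceeds the local threshold."""
    th, tw = threshold_matrix.shape
    h, w = image.shape
    reps = (int(np.ceil(h / th)), int(np.ceil(w / tw)))
    tiled = np.tile(threshold_matrix, reps)[:h, :w]
    return (image > tiled).astype(np.uint8)
```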
Hyper-resolution monitoring of urban flooding with social media and crowdsourcing data
NASA Astrophysics Data System (ADS)
Wang, Ruo-Qian; Mao, Huina; Wang, Yuan; Rae, Chris; Shaw, Wesley
2018-02-01
Hyper-resolution datasets for urban flooding are rare. This problem prevents detailed flooding risk analysis, urban flooding control, and the validation of hyper-resolution numerical models. We employed social media and crowdsourcing data to address this issue. Natural Language Processing and Computer Vision techniques are applied to the data collected from Twitter and MyCoast (a crowdsourcing app). We found that these big-data-based flood monitoring approaches can complement the existing means of flood data collection. The extracted information is validated against precipitation data and road closure reports to examine the data quality. The two data collection approaches are compared and the two data mining methods are discussed. A series of suggestions is given to improve the data collection strategy.
Advancement of X-Ray Microscopy Technology and its Application to Metal Solidification Studies
NASA Technical Reports Server (NTRS)
Kaukler, William F.; Curreri, Peter A.
1996-01-01
The technique of x-ray projection microscopy is being used to view, in real time, the structures and dynamics of the solid-liquid interface during solidification. By employing a hard x-ray source with sub-micron dimensions, resolutions of 2 micrometers can be obtained with magnifications of over 800 X. Specimen growth conditions need to be optimized and the best imaging technologies applied to maintain x-ray image resolution, contrast and sensitivity. It turns out that no single imaging technology offers the best solution and traditional methods like radiographic film cannot be used due to specimen motion (solidification). In addition, a special furnace design is required to permit controlled growth conditions and still offer maximum resolution and image contrast.
Applying simulation model to uniform field space charge distribution measurements by the PEA method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Salama, M.M.A.
1996-12-31
Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by the deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has been proposed recently in which the deconvolution is eliminated. However, the surface charge cannot be represented well by this method because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is therefore proposed. This paper presents attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation for the paper, the charge distribution obtained from the simulation model is compared to that obtained by the direct method with a set of simulated signals.
Design and construction of an Offner spectrometer based on geometrical analysis of ring fields.
Kim, Seo Hyun; Kong, Hong Jin; Lee, Jong Ung; Lee, Jun Ho; Lee, Jai Hoon
2014-08-01
A method to obtain an aberration-corrected Offner spectrometer without ray obstruction is proposed. A new, more efficient spectrometer optics design is suggested in order to increase its spectral resolution. The derivation of a new ring equation to eliminate ray obstruction is based on geometrical analysis of the ring fields for various numerical apertures. The analytical design applying this equation was demonstrated using the optical design software Code V in order to manufacture a spectrometer working at wavelengths of 900-1700 nm. The simulation results show that the new concept offers an analytical initial design requiring the least calculation time. The simulated spectrometer exhibited a modulation transfer function over 80% at Nyquist frequency, root-mean-square spot diameters under 8.6 μm, and a spectral resolution of 3.2 nm. The final design of a high resolution Offner spectrometer and its realization were demonstrated based on the simulation results. The equation and analytical design procedure shown here can be applied to most Offner systems regardless of the wavelength range.
NASA Technical Reports Server (NTRS)
Pagnutti, Mary; Holekamp, Kara; Ryan, Robert E.; Vaughan, Ronand; Russell, Jeff; Prados, Don; Stanley, Thomas
2005-01-01
Remotely sensed ground reflectance is the foundation of any interoperability or change detection technique. Satellite intercomparisons and accurate vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), require the generation of accurate reflectance maps (NDVI is used to describe or infer a wide variety of biophysical parameters and is defined in terms of near-infrared (NIR) and red band reflectances). Accurate reflectance-map generation from satellite imagery relies on the removal of solar and satellite geometry and of atmospheric effects and is generally referred to as atmospheric correction. Atmospheric correction of remotely sensed imagery to ground reflectance has been widely applied to a few systems only. The ability to obtain atmospherically corrected imagery and products from various satellites is essential to enable widescale use of remotely sensed, multitemporal imagery for a variety of applications. An atmospheric correction approach derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that can be applied to high-spatial-resolution satellite imagery under many conditions was evaluated to demonstrate a reliable, effective reflectance map generation method. Additional information is included in the original extended abstract.
Tomographic iterative reconstruction of a passive scalar in a 3D turbulent flow
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Kylling, Arve; Cassiani, Massimo; Solveig Dinger, Anne; Stebel, Kerstin; Schmidbauer, Norbert; Stohl, Andreas
2017-04-01
Turbulence in stable planetary boundary layers, often encountered at high latitudes, influences the exchange fluxes of heat, momentum, water vapor and greenhouse gases between the Earth's surface and the atmosphere. In climate and meteorological models, such effects of turbulence need to be parameterized, ultimately based on experimental data. A novel experimental approach is being developed within the COMTESSA project in order to study turbulence statistics at high resolution. Using controlled tracer releases, high-resolution camera images and estimates of the background radiation, different tomographic algorithms can be applied in order to obtain time series of 3D representations of the scalar dispersion. In this preliminary work, using synthetic data, we investigate different reconstruction algorithms with emphasis on algebraic methods. We study the dependence of the reconstruction quality on the discretization resolution and the geometry of the experimental device in both 2-D and 3-D cases. We assess the computational aspects of the iterative algorithms, focusing on the phenomenon of semi-convergence and applying a variety of stopping rules. We discuss different strategies for error reduction and regularization of the ill-posed problem.
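As a hedged illustration of the algebraic methods and the semi-convergence/stopping-rule issue mentioned above (not necessarily the algorithms used in the project), a basic Kaczmarz/ART solver for the tomographic system Ax = b with a discrepancy-principle stop might look like this; the non-negativity clip reflects the physical constraint that tracer concentration cannot be negative.

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=0.25, noise_level=None):
    """Kaczmarz/ART iterations for A x = b, stopping early to exploit
    semi-convergence on noisy tomographic data."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms[i] == 0.0:
                continue
            r = b[i] - A[i] @ x                  # residual of row i
            x += relax * r / row_norms[i] * A[i]
        x = np.clip(x, 0.0, None)                # tracer concentration is non-negative
        if noise_level is not None and np.linalg.norm(A @ x - b) <= noise_level:
            break                                # stop before noise is amplified
    return x
```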
USDA-ARS?s Scientific Manuscript database
Observations of land surface temperature (LST) are crucial for the monitoring of surface energy fluxes from satellite. Methods that require high temporal resolution LST observations (e.g., from geostationary orbit) can be difficult to apply globally because several geostationary sensors are required...
Nanoscale imaging of clinical specimens using pathology-optimized expansion microscopy
Zhao, Yongxin; Bucur, Octavian; Irshad, Humayun; Chen, Fei; Weins, Astrid; Stancu, Andreea L.; Oh, Eun-Young; DiStasio, Marcello; Torous, Vanda; Glass, Benjamin; Stillman, Isaac E.; Schnitt, Stuart J.; Beck, Andrew H.; Boyden, Edward S.
2017-01-01
Expansion microscopy (ExM), a method for improving the resolution of light microscopy by physically expanding the specimen, has not been applied to clinical tissue samples. Here we report a clinically optimized form of ExM that supports nanoscale imaging of human tissue specimens that have been fixed with formalin, embedded in paraffin, stained with hematoxylin and eosin (H&E), and/or fresh frozen. The method, which we call expansion pathology (ExPath), converts clinical samples into an ExM-compatible state, then applies an ExM protocol with protein anchoring and mechanical homogenization steps optimized for clinical samples. ExPath enables ~70 nm resolution imaging of diverse biomolecules in intact tissues using conventional diffraction-limited microscopes, and standard antibody and fluorescent DNA in situ hybridization reagents. We use ExPath for optical diagnosis of kidney minimal-change disease, which previously required electron microscopy (EM), and demonstrate high-fidelity computational discrimination between early breast neoplastic lesions that to date have challenged human judgment. ExPath may enable the routine use of nanoscale imaging in pathology and clinical research. PMID:28714966
Nanoscale imaging of clinical specimens using pathology-optimized expansion microscopy.
Zhao, Yongxin; Bucur, Octavian; Irshad, Humayun; Chen, Fei; Weins, Astrid; Stancu, Andreea L; Oh, Eun-Young; DiStasio, Marcello; Torous, Vanda; Glass, Benjamin; Stillman, Isaac E; Schnitt, Stuart J; Beck, Andrew H; Boyden, Edward S
2017-08-01
Expansion microscopy (ExM), a method for improving the resolution of light microscopy by physically expanding a specimen, has not been applied to clinical tissue samples. Here we report a clinically optimized form of ExM that supports nanoscale imaging of human tissue specimens that have been fixed with formalin, embedded in paraffin, stained with hematoxylin and eosin, and/or fresh frozen. The method, which we call expansion pathology (ExPath), converts clinical samples into an ExM-compatible state, then applies an ExM protocol with protein anchoring and mechanical homogenization steps optimized for clinical samples. ExPath enables ∼70-nm-resolution imaging of diverse biomolecules in intact tissues using conventional diffraction-limited microscopes and standard antibody and fluorescent DNA in situ hybridization reagents. We use ExPath for optical diagnosis of kidney minimal-change disease, a process that previously required electron microscopy, and we demonstrate high-fidelity computational discrimination between early breast neoplastic lesions for which pathologists often disagree in classification. ExPath may enable the routine use of nanoscale imaging in pathology and clinical research.
Gang, Yadong; Zhou, Hongfu; Jia, Yao; Liu, Ling; Liu, Xiuli; Rao, Gong; Li, Longhui; Wang, Xiaojun; Lv, Xiaohua; Xiong, Hanqing; Yang, Zhongqin; Luo, Qingming; Gong, Hui; Zeng, Shaoqun
2017-01-01
Resin embedding has been widely applied to fixing biological tissues for sectioning and imaging, but has long been regarded as incompatible with green fluorescent protein (GFP) labeled samples because it reduces fluorescence. Recently, it has been reported that resin-embedded GFP-labeled brain tissue can be imaged with high resolution. Here we describe an optimized protocol for resin embedding and chemical reactivation of fluorescent-protein-labeled mouse brain; we used mice as the experimental model, but the protocol should be applicable to other species. This method involves whole-brain embedding and chemical reactivation of the fluorescent signal in resin-embedded tissue. The whole-brain embedding process takes a total of 7 days. The duration of chemical reactivation is ~2 min for penetrating 4 μm below the surface of the resin-embedded brain. This protocol provides an efficient way to prepare fluorescent-protein-labeled samples for high-resolution optical imaging, and such samples have been demonstrated to be compatible with various optical micro-imaging methods. Fine structures labeled with GFP across a whole brain can be detected. PMID:28352214
GPUs benchmarking in subpixel image registration algorithm
NASA Astrophysics Data System (ADS)
Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier
2015-05-01
Image registration techniques are used across different scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is cross correlation, taking the location of the highest value of the correlation image. The shift resolution is then given in whole pixels, which may not be sufficient for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Starting from the original images, subpixel shifts can be applied by multiplying the discrete Fourier transform by a linear phase with different slopes. This is a time-consuming method because each candidate shift requires a new calculation. The algorithm, however, is highly parallelizable and very suitable for high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images, obtaining a first estimate by FFT-based correlation and then refining it to subpixel precision using the technique described above; we consider this a 'brute force' method. We present a benchmark of the algorithm consisting of a first approach at pixel resolution followed by subpixel refinement, decreasing the shift step in every loop to achieve high resolution in few steps. The program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
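As a rough illustration of the phase-ramp idea described above, the following NumPy sketch estimates the shift to pixel precision with FFT cross correlation and then refines it by brute-force scanning of subpixel offsets applied as linear phases in the Fourier domain. The step size, search window and synthetic images are illustrative assumptions, not the benchmarked implementation.

```python
import numpy as np

def cross_correlate(a, b):
    """FFT-based circular cross correlation of two equally sized images."""
    return np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real

def apply_shift(img, dy, dx):
    """Shift an image by (dy, dx) pixels via a linear phase in Fourier space."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * phase).real

def register_subpixel(ref, mov, step=0.1, window=1.0):
    """Return the (dy, dx) offset that best re-aligns `mov` with `ref`."""
    # 1) integer-pixel estimate from the correlation peak
    cc = cross_correlate(ref, mov)
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    dy0 = peak[0] - ref.shape[0] if peak[0] > ref.shape[0] // 2 else peak[0]
    dx0 = peak[1] - ref.shape[1] if peak[1] > ref.shape[1] // 2 else peak[1]
    # 2) brute-force subpixel refinement around the integer estimate
    best, best_score = (dy0, dx0), -np.inf
    for dy in np.arange(dy0 - window, dy0 + window + step, step):
        for dx in np.arange(dx0 - window, dx0 + window + step, step):
            score = np.sum(ref * apply_shift(mov, dy, dx))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# usage sketch: recover a known synthetic shift
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
mov = apply_shift(ref, 2.3, -1.7)
print(register_subpixel(ref, mov))   # roughly (-2.3, 1.7), the offset that undoes the shift
```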
A Simple Downscaling Algorithm for Remotely Sensed Land Surface Temperature
NASA Astrophysics Data System (ADS)
Sandholt, I.; Nielsen, C.; Stisen, S.
2009-05-01
The method is illustrated using a combination of MODIS NDVI data with a spatial resolution of 250 m and 3 km Meteosat Second Generation SEVIRI LST data. Geostationary Earth observation data carry a large potential for assessment of surface state variables. Not least, the European Meteosat Second Generation platform with its SEVIRI sensor is well suited for studies of the dynamics of land surfaces due to its high temporal frequency (15 minutes), its red and near-infrared (NIR) channels that provide vegetation indices, and its two split-window channels in the thermal infrared for assessment of Land Surface Temperature (LST). For some applications the spatial resolution of geostationary data is too coarse. Due to the low spatial resolution of 4.8 km at nadir for the SEVIRI sensor, a means of providing sub-pixel information is sought. By combining and properly scaling two types of satellite images, namely data from the MODIS sensor onboard the polar-orbiting platforms TERRA and AQUA and the coarse-resolution MSG-SEVIRI data, we exploit the best of both worlds. The vegetation index/surface temperature space has been used in a vast number of studies for assessment of air temperature, soil moisture, dryness indices and evapotranspiration, and for studies of land-use change. In this paper, we present an improved method to derive a finer-resolution Land Surface Temperature (LST). A new, deterministic scaling method has been applied and is compared to existing deterministic downscaling methods based on LST and NDVI. We also compare our results with in situ measurements of LST from the Dahra test site in West Africa.
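For readers unfamiliar with LST/NDVI sharpening, the sketch below shows one common deterministic scheme (a TsHARP-style linear regression fitted at the coarse scale, applied at the fine scale, with coarse residuals re-injected). It is an assumed, generic baseline for illustration, not the improved method introduced in this abstract, and the block ratio and synthetic fields are made up.

```python
import numpy as np

def downscale_lst(lst_coarse, ndvi_fine, block):
    """Sharpen coarse LST with fine NDVI via a coarse-scale linear fit.

    lst_coarse : 2D array of coarse-resolution LST
    ndvi_fine  : 2D array of fine-resolution NDVI; each coarse cell covers a
                 block x block patch of fine pixels
    """
    ny, nx = lst_coarse.shape
    ndvi_coarse = ndvi_fine.reshape(ny, block, nx, block).mean(axis=(1, 3))

    # 1) fit LST = a + b * NDVI at the coarse scale
    b, a = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)

    # 2) apply the regression at the fine scale
    lst_fine = a + b * ndvi_fine

    # 3) re-inject coarse-scale residuals so coarse-cell means are preserved
    resid = lst_coarse - (a + b * ndvi_coarse)
    return lst_fine + np.kron(resid, np.ones((block, block)))

# usage sketch: 12 x 12 fine pixels per coarse cell (roughly the 250 m to 3 km ratio)
rng = np.random.default_rng(1)
ndvi = rng.uniform(0.1, 0.8, size=(120, 120))
lst = 320.0 - 25.0 * ndvi.reshape(10, 12, 10, 12).mean(axis=(1, 3))
print(downscale_lst(lst, ndvi, block=12).shape)   # (120, 120)
```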
Kapetanakis, Myron; Zhou, Wu; Oxley, Mark P.; ...
2015-09-25
Photon-based spectroscopies have played a central role in exploring the electronic properties of crystalline solids and thin films. They are a powerful tool for probing the electronic properties of nanostructures, but they are limited by lack of spatial resolution. On the other hand, electron-based spectroscopies, e.g., electron energy loss spectroscopy (EELS), are now capable of subangstrom spatial resolution. Core-loss EELS, a spatially resolved analog of x-ray absorption, has been used extensively in the study of inhomogeneous complex systems. In this paper, we demonstrate that low-loss EELS in an aberration-corrected scanning transmission electron microscope, which probes low-energy excitations, combined with a theoretical framework for simulating and analyzing the spectra, is a powerful tool to probe low-energy electron excitations with atomic-scale resolution. The theoretical component of the method combines density functional theory–based calculations of the excitations with dynamical scattering theory for the electron beam. We apply the method to monolayer graphene in order to demonstrate that atomic-scale contrast is inherent in low-loss EELS even in a perfectly periodic structure. The method is a complement to optical spectroscopy as it probes transitions entailing momentum transfer. The theoretical analysis identifies the spatial and orbital origins of excitations, holding the promise of ultimately becoming a powerful probe of the structure and electronic properties of individual point and extended defects in both crystals and inhomogeneous complex nanostructures. The method can be extended to probe magnetic and vibrational properties with atomic resolution.
NASA Astrophysics Data System (ADS)
Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus
2018-04-01
Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
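To make the two preprocessing ingredients concrete, here is a NumPy/SciPy sketch of (i) a Whittaker smoother used as a baseline estimator and (ii) a brute-force zero/first-order phase correction driven by a simple negativity penalty. It is a toy, sequential illustration of the components named above, not the authors' simultaneous Pareto-optimized scheme; the penalty, grid and smoothing parameter are assumptions.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker_baseline(y, lam=1e7):
    """Whittaker smoother with a second-order penalty: min ||y - z||^2 + lam ||D2 z||^2."""
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n), format="csc")
    A = sparse.identity(n, format="csc") + lam * (D.T @ D)
    return spsolve(A, np.asarray(y, dtype=float))

def phase_correct(spectrum, n_grid=73):
    """Grid-search zero/first-order phase correction of a complex spectrum.

    The score simply penalizes negative intensities of the corrected real
    part, a crude stand-in for the richer objective discussed above.
    """
    n = len(spectrum)
    x = np.linspace(-0.5, 0.5, n)
    best, best_score = (0.0, 0.0), np.inf
    for phi0 in np.linspace(-np.pi, np.pi, n_grid):
        for phi1 in np.linspace(-np.pi, np.pi, n_grid):
            real = (spectrum * np.exp(1j * (phi0 + phi1 * x))).real
            score = np.sum(np.minimum(real, 0.0) ** 2)
            if score < best_score:
                best, best_score = (phi0, phi1), score
    phi0, phi1 = best
    corrected = (spectrum * np.exp(1j * (phi0 + phi1 * x))).real
    return corrected - whittaker_baseline(corrected)   # baseline-corrected real spectrum

# usage sketch: a synthetic Lorentzian line with a deliberate phase error
freq = np.linspace(-50, 50, 4096)
line = 1.0 / (1.0 + 1j * (freq - 5.0))
distorted = line * np.exp(1j * (0.7 + 1.2 * np.linspace(-0.5, 0.5, 4096)))
clean = phase_correct(distorted)
print(clean.max())
```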
Linear mixing model applied to coarse resolution satellite data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1992-01-01
A linear mixing model typically applied to high-resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to the NOAA Advanced Very High Resolution Radiometer coarse-resolution satellite data. The reflective portion extracted from the middle-IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of applying unmixing techniques to coarse-resolution data for global studies.
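A minimal sketch of the unmixing step, estimating per-pixel endmember fractions by non-negative least squares with an (approximate) sum-to-one constraint, is shown below. The endmember spectra, band set and weighting trick are illustrative assumptions rather than the values used in the study.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, sum_weight=1e3):
    """Estimate non-negative, approximately sum-to-one endmember fractions.

    pixel      : (n_bands,) reflectance vector
    endmembers : (n_bands, n_endmembers) matrix of endmember spectra
    The sum-to-one constraint is enforced approximately by appending a
    heavily weighted row of ones, a common trick for constrained NNLS.
    """
    A = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, sum_weight)
    fractions, _ = nnls(A, b)
    return fractions

# usage sketch: three hypothetical endmembers (vegetation, soil, shade)
# observed in three bands (red, NIR, reflective mid-IR)
E = np.array([[0.05, 0.20, 0.30],
              [0.45, 0.30, 0.05],
              [0.10, 0.25, 0.02]])
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]
print(unmix(pixel, E))   # roughly [0.6, 0.3, 0.1]
```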
Banville, Frederic A; Moreau, Julien; Sarkar, Mitradeep; Besbes, Mondher; Canva, Michael; Charette, Paul G
2018-04-16
Surface plasmon resonance imaging (SPRI) is an optical near-field method used for mapping the spatial distribution of chemical/physical perturbations above a metal surface without exogenous labeling. Currently, the majority of SPRI systems are used in microarray biosensing, requiring only modest spatial resolution. There is increasing interest in applying SPRI for label-free near-field imaging of biological cells to study cell/surface interactions. However, the required resolution (sub-µm) greatly exceeds what current systems can deliver. Indeed, the attenuation length of surface plasmon polaritons (SPP) severely limits resolution along one axis, typically to tens of µm. Strategies to date for improving spatial resolution result in a commensurate deterioration in other imaging parameters. Unlike the smooth metal surfaces used in SPRI that support purely propagating surface modes, nanostructured metal surfaces support "hybrid" SPP modes that share attributes from both propagating and localized modes. We show that these hybrid modes are especially well-suited to high-resolution imaging and demonstrate how the nanostructure geometry can be designed to achieve sub-µm resolution while mitigating the imaging parameter trade-off according to an application-specific optimum.
Compressed sensing reconstruction of cardiac cine MRI using golden angle spiral trajectories
NASA Astrophysics Data System (ADS)
Tolouee, Azar; Alirezaie, Javad; Babyn, Paul
2015-11-01
In dynamic cardiac cine Magnetic Resonance Imaging (MRI), the spatiotemporal resolution is limited by the low imaging speed. Compressed sensing (CS) theory has been applied to improve the imaging speed and thus the spatiotemporal resolution. The purpose of this paper is to improve the CS reconstruction of undersampled data by exploiting spatiotemporal sparsity and efficient spiral trajectories. We extend the k-t sparse algorithm to spiral trajectories to achieve high spatiotemporal resolution in cardiac cine imaging. We exploit the spatiotemporal sparsity of cardiac cine MRI by applying a 2D + time wavelet-Fourier transform. For efficient coverage of k-space, we use a modified version of multi-shot (interleaved) spiral trajectories. In order to reduce incoherent aliasing artifacts, we use a different random undersampling pattern for each temporal frame. Finally, we use the nonuniform fast Fourier transform (NUFFT) algorithm to reconstruct the image from the non-uniformly acquired samples. The proposed approach was tested on simulated and cardiac cine MRI data. Results show that higher acceleration factors with improved image quality can be obtained with the proposed approach in comparison to the existing state-of-the-art method. The flexibility of the introduced method should allow it to be used not only for the challenging case of cardiac imaging, but also in other cases where the patient moves or breathes during acquisition.
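The flavour of such a reconstruction can be conveyed with a heavily simplified sketch: Cartesian Fourier undersampling instead of spiral trajectories with NUFFT, and sparsity enforced in x-f space (FFT along time) instead of the 2D + time wavelet-Fourier transform, iterated with soft-thresholding and a data-consistency step. All parameters and the synthetic phantom are assumptions for illustration only.

```python
import numpy as np

def kt_sparse_recon(kspace, mask, n_iters=50, thresh=0.5):
    """Toy k-t sparse reconstruction by iterative soft-thresholding.

    kspace : (nt, ny, nx) undersampled k-space frames (zeros where unsampled)
    mask   : (nt, ny, nx) boolean sampling mask, different for each frame
    """
    x = np.zeros_like(kspace)                       # image-domain estimate
    for _ in range(n_iters):
        # data consistency: put the measured k-space samples back
        resid = mask * (np.fft.fft2(x, axes=(-2, -1)) - kspace)
        x = x - np.fft.ifft2(resid, axes=(-2, -1))
        # soft-threshold small coefficients in x-f space
        xf = np.fft.fft(x, axis=0)
        mag = np.abs(xf)
        xf = np.where(mag > 0, xf / np.maximum(mag, 1e-12), 0) * np.maximum(mag - thresh, 0.0)
        x = np.fft.ifft(xf, axis=0)
    return x

# usage sketch: a moving bright block, ~30% of k-space sampled per frame
rng = np.random.default_rng(7)
nt, n = 16, 64
truth = np.zeros((nt, n, n), dtype=complex)
for t in range(nt):
    truth[t, 20 + t:30 + t, 24:40] = 1.0
mask = rng.random((nt, n, n)) < 0.3
kspace = mask * np.fft.fft2(truth, axes=(-2, -1))
recon = kt_sparse_recon(kspace, mask)
zero_filled = np.fft.ifft2(kspace, axes=(-2, -1))
print(np.abs(recon - truth).mean(), np.abs(zero_filled - truth).mean())
```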
A patch-based convolutional neural network for remote sensing image classification.
Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di
2017-11-01
Availability of accurate land cover information over large areas is essential to global environmental sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, the low accuracy of existing per-pixel classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to the lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top-of-atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed new system has outperformed a pixel-based neural network, a pixel-based CNN and a patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Xin; Wen, Zongyong; Zhu, Zhaorong; Xia, Qiang; Shun, Lan
2016-06-01
Image classification still has a long way to go, even though it has been studied for almost half a century. Researchers have obtained many results in the image classification domain, but there is still a long distance between theory and practice. However, new methods from the artificial intelligence domain can be absorbed into the image classification domain, each drawing on the strengths of the other to offset its weaknesses, which will open up new prospects. Networks often play the role of a high-level language, as seen in artificial intelligence and statistics, because networks are used to build complex models from simple components. In recent years, Bayesian networks, a class of probabilistic networks, have become a powerful data mining technique for handling uncertainty in complex domains. In this paper, we apply Tree Augmented Naive Bayesian networks (TAN) to texture classification of high-resolution remote sensing images and propose a new method to construct the network topology structure in terms of training accuracy based on the training samples. In 2013, the Chinese government started the first national geographical information census project, which mainly interprets geographical information based on high-resolution remote sensing images. Therefore, this paper applies Bayesian networks to remote sensing image classification, in order to improve image interpretation in the first national geographical information census project. In the experiment, we chose remote sensing images of Beijing. Experimental results demonstrate that TAN outperforms the Naive Bayesian Classifier (NBC) and the Maximum Likelihood Classification method (MLC) in overall classification accuracy. In addition, the proposed method can reduce the workload of field workers and improve work efficiency. Although it is time consuming, it will be an attractive and effective method for assisting office-based image interpretation.
An edge-directed interpolation method for fetal spine MR images.
Yu, Shaode; Zhang, Rui; Wu, Shibin; Hu, Jiani; Xie, Yaoqin
2013-10-10
Fetal spinal magnetic resonance imaging (MRI) is a prenatal routine for proper assessment of fetal development, especially when spinal malformations are suspected but ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution. High-resolution MR images can directly enhance readability and improve diagnostic accuracy. Image interpolation for higher resolution is required in clinical situations, yet many methods fail to preserve edge structures. Edges carry important structural information about objects in visual scenes, helping doctors to detect suspicious findings, classify malformations and make a correct diagnosis. Effective interpolation with well-preserved edge structures is still challenging. In this paper, we propose an edge-directed interpolation (EDI) method and apply it to a group of fetal spine MR images to evaluate its feasibility and performance. This method takes edge information from a Canny edge detector to guide further pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated into high-resolution (HR) images by the targeted factor using the bilinear method. Then edge information from the LR and HR images is put into a twofold strategy to sharpen or soften edge structures. Finally an HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performance is evaluated with six metrics, and subjective analysis of visual quality is based on regions of interest (ROI). All five EDI methods are able to generate HR images with enriched details. In the quantitative analysis, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structure similarity index (SSIM), feature similarity index (FSIM) and mutual information (MI), with seconds-level time consumption (TC). Visual analysis of the ROIs shows that the proposed method maintains better consistency of edge structures with the original images. The proposed method classifies edge orientations into four categories and preserves structures well. It generates convincing HR images with fine details and is suitable for real-time situations. The iterative curvature-based interpolation (ICBI) method may result in crisper edges, while the other three methods are sensitive to noise and artifacts.
NASA Astrophysics Data System (ADS)
Pestana, S. J.; Halverson, G. H.; Barker, M.; Cooley, S.
2016-12-01
Increased demand for agricultural products and limited water supplies in Guanacaste, Costa Rica have encouraged the improvement of water management practices to increase resource use efficiency. Remotely sensed evapotranspiration (ET) data can contribute by providing insights into variables like crop health and water loss, as well as better informing the use of various irrigation techniques. EARTH University currently collects data in the region that are limited to costly and time-intensive in situ observations and will greatly benefit from the expanded spatial and temporal resolution of remote sensing measurements from the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS). In this project, Moderate Resolution Imaging Spectroradiometer (MODIS) Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) data, with a resolution of 5 km per pixel, were used to demonstrate to our partners at EARTH University the application of remotely sensed ET measurements. An experimental design was developed to provide a method of applying future ECOSTRESS data, at the higher resolution of 70 m per pixel, to research in managing and implementing sustainable farm practices. Our investigation of the diurnal cycle of land surface temperature, net radiation, and evapotranspiration will advance the model science for ECOSTRESS, which will be launched in 2018 and installed on the International Space Station.
Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.
Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H
2014-03-17
We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes [1, 2], while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) [3, 4] and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
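A compact multiview Richardson-Lucy update, the core operation the paper builds on, can be sketched directly: the running estimate is repeatedly multiplied by each view's RL correction factor so that the best-resolved information in every view is retained. The anisotropic test PSFs and iteration count below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def multiview_rl(images, psfs, n_iters=50, eps=1e-8):
    """Joint Richardson-Lucy deconvolution of several views of one object.

    images : list of 2D arrays, each a blurred view of the same object
    psfs   : list of 2D point-spread functions, one per view (each sums to 1)
    """
    estimate = np.full_like(images[0], np.mean(images[0]), dtype=float)
    for _ in range(n_iters):
        for img, psf in zip(images, psfs):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = img / (blurred + eps)
            estimate *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
    return estimate

# usage sketch: two synthetic views blurred along orthogonal directions
rng = np.random.default_rng(2)
truth = np.zeros((64, 64))
truth[20:26, 30:50] = 1.0
truth[40, 10] = 5.0
g = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
g /= g.sum()
psf_h = g[None, :]                 # blur along x
psf_v = g[:, None]                 # blur along y
views = [fftconvolve(truth, p, mode="same") for p in (psf_h, psf_v)]
est = multiview_rl(views, [psf_h, psf_v])
print(np.abs(est - truth).mean(), np.abs(views[0] - truth).mean())
```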
Improvement of spatial resolution in a Timepix based CdTe photon counting detector using ToT method
NASA Astrophysics Data System (ADS)
Park, Kyeongjin; Lee, Daehee; Lim, Kyung Taek; Kim, Giyoon; Chang, Hojong; Yi, Yun; Cho, Gyuseong
2018-05-01
Photon counting detectors (PCDs) have been recognized as potential candidates in X-ray radiography and computed tomography due to their many advantages over conventional energy-integrating detectors. In particular, a PCD-based X-ray system shows an improved contrast-to-noise ratio and reduced radiation exposure dose, and, more importantly, exhibits a capability for material decomposition with energy binning. Some applications require very high resolution, which translates into smaller pixel sizes. Unfortunately, small pixels may suffer from energy spectral distortion (degraded energy resolution) due to charge sharing effects (CSEs). In this work, we propose a method for correcting CSEs by measuring the point of interaction of an incident X-ray photon with the time-over-threshold (ToT) method. Moreover, we also show that it is possible to obtain an X-ray image with a reduced effective pixel size by using the concept of virtual pixels at a given pixel size. To verify the proposed method, modulation transfer function (MTF) and signal-to-noise ratio (SNR) measurements were carried out with the Timepix chip combined with a CdTe pixel sensor. The X-ray test condition was set at 80 kVp with 5 μA, and a tungsten edge phantom and a lead line phantom were used for the measurements. Enhanced spatial resolution was achieved by applying the proposed method when compared to the conventional photon counting method. In the experimental results, the spatial frequency at 0.3 MTF increased from 6.3 lp/mm (conventional counting method) to 8.3 lp/mm (proposed method). On the other hand, the SNR decreased from 33.08 to 26.85 dB due to the four virtual pixels.
High-resolution in vivo Wistar rodent brain atlas based on T1 weighted image
NASA Astrophysics Data System (ADS)
Huang, Su; Lu, Zhongkang; Huang, Weimin; Seramani, Sankar; Ramasamy, Boominathan; Sekar, Sakthivel; Guan, Cuntai; Bhakoo, Kishore
2016-03-01
Image-based atlases of the rat brain have a significant impact on pre-clinical research. In this project we acquired T1-weighted images of Wistar rodent brains at a fine 59 μm isotropic resolution for generation of the atlas template image. By applying post-processing procedures using a semi-automatic brain extraction method, we delineated the brain tissues from the source data. Furthermore, we applied a symmetric group-wise normalization method to generate an optimized T1 template of the rodent brain, and then aligned our template to the Waxholm Space. In addition, we defined several simple and explicit landmarks to relate our template to the well-known Paxinos stereotaxic reference system. Anchoring at the origin of the Waxholm Space, we applied a piecewise linear transformation to map the voxels of the template into Paxinos' stereotaxic coordinate system to facilitate the labelling task. We also cross-referenced our data with both published rodent brain atlases and image atlases available online, methodically labelling the template to produce a Wistar brain atlas identifying more than 130 structures. Particular attention was paid to the cortex and cerebellum, as these areas encompass the most researched aspects of brain function. Moreover, we adopted the structure hierarchy and naming nomenclature common to various atlases, so that the names and hierarchy presented in the atlas are readily recognised for easy use. It is believed the atlas will be a useful tool in rodent brain functional and pharmaceutical studies.
NASA Astrophysics Data System (ADS)
Schlögel, R.; Marchesini, I.; Alvioli, M.; Reichenbach, P.; Rossi, M.; Malet, J.-P.
2018-01-01
We perform landslide susceptibility zonation with slope units using three digital elevation models (DEMs) of varying spatial resolution of the Ubaye Valley (South French Alps). In so doing, we applied a recently developed algorithm automating slope unit delineation, given a number of parameters, in order to optimize simultaneously the partitioning of the terrain and the performance of a logistic regression susceptibility model. The method allowed us to obtain optimal slope units for each available DEM spatial resolution. For each resolution, we studied the susceptibility model performance by analyzing in detail the relevance of the conditioning variables. The analysis is based on landslide morphology data, considering either the whole landslide or only the source area outline as inputs. The procedure allowed us to select the most useful information, in terms of DEM spatial resolution, thematic variables and landslide inventory, in order to obtain the most reliable slope unit-based landslide susceptibility assessment.
Resolution improvement in positron emission tomography using anatomical Magnetic Resonance Imaging.
Chu, Yong; Su, Min-Ying; Mandelkern, Mark; Nalcioglu, Orhan
2006-08-01
An ideal imaging system should provide information with high sensitivity and high spatial and temporal resolution. Unfortunately, it is not possible to satisfy all of these desired features in a single modality. In this paper, we discuss methods to improve the spatial resolution in positron emission tomography (PET) using a priori information from Magnetic Resonance Imaging (MRI). Our approach uses an image restoration algorithm based on the maximization of mutual information (MMI), which has found significant success for optimizing multimodal image registration. The MMI criterion is used to estimate the parameters in the Sharpness-Constrained Wiener filter. The generated filter is then applied to restore PET images of a realistic digital brain phantom. The resulting restored images show improved resolution and better signal-to-noise ratio compared to the interpolated PET images. We conclude that a Sharpness-Constrained Wiener filter having parameters optimized from an MMI criterion may be useful for restoring spatial resolution in PET based on a priori information from correlated MRI.
High-resolution near real-time drought monitoring in South Asia
NASA Astrophysics Data System (ADS)
Aadhar, Saran; Mishra, Vimal
2017-10-01
Droughts in South Asia affect food and water security and pose challenges for millions of people. For policy-making, planning, and management of water resources at sub-basin or administrative levels, high-resolution datasets of precipitation and air temperature are required in near-real time. We develop a high-resolution (0.05°) bias-corrected precipitation and temperature dataset that can be used to monitor near-real-time drought conditions over South Asia. Moreover, the dataset can be used to monitor climatic extremes (heat and cold waves, dry and wet anomalies) in South Asia. A distribution mapping method was applied to correct bias in precipitation and air temperature, and it performed well compared to an alternative bias correction method based on linear scaling. The bias-corrected precipitation and temperature data were used to estimate the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI) to assess historical and current drought conditions in South Asia. We evaluated drought severity and extent against satellite-based Normalized Difference Vegetation Index (NDVI) anomalies and the satellite-driven Drought Severity Index (DSI) at 0.05°. The bias-corrected high-resolution data effectively capture observed drought conditions, as shown by the satellite-based drought estimates. The high-resolution near-real-time dataset can provide valuable information for decision-making at district and sub-basin levels.
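A generic sketch of the two numerical ingredients, distribution (quantile) mapping for bias correction and the gamma-based SPI transform, is given below. It is a simplified stand-in (no SPEI, no treatment of zero-precipitation months) and the synthetic data and window length are assumptions.

```python
import numpy as np
from scipy import stats

def quantile_map(values, obs_ref, model_ref):
    """Empirical distribution mapping: map each value through the observed
    distribution at the quantile it occupies in the model reference."""
    q = np.searchsorted(np.sort(model_ref), values) / float(len(model_ref))
    return np.quantile(obs_ref, np.clip(q, 0.01, 0.99))

def spi(precip_monthly, window=3):
    """Standardized Precipitation Index from a monthly precipitation series."""
    accum = np.convolve(precip_monthly, np.ones(window), mode="valid")
    shape, loc, scale = stats.gamma.fit(accum, floc=0)
    cdf = stats.gamma.cdf(accum, shape, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-4, 1 - 1e-4))

# usage sketch with synthetic monthly rainfall (mm)
rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 60.0, size=360)       # "observed" reference series
raw = rng.gamma(2.0, 45.0, size=360)       # biased series to be corrected
corrected = quantile_map(raw, obs, raw)
print(spi(corrected)[:5])
```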
NASA Astrophysics Data System (ADS)
García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin
2014-10-01
Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has a second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefacted zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overload. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
A user's guide to localization-based super-resolution fluorescence imaging.
Dempsey, Graham T
2013-01-01
Advances in far-field fluorescence microscopy over the past decade have led to the development of super-resolution imaging techniques that provide more than an order of magnitude improvement in spatial resolution compared to conventional light microscopy. One such approach, called Stochastic Optical Reconstruction Microscopy (STORM) uses the sequential, nanometer-scale localization of individual fluorophores to reconstruct a high-resolution image of a structure of interest. This is an attractive method for biological investigation at the nanoscale due to its relative simplicity, both conceptually and practically in the laboratory. Like most research tools, however, the devil is in the details. The aim of this chapter is to serve as a guide for applying STORM to the study of biological samples. This chapter will discuss considerations for choosing a photoswitchable fluorescent probe, preparing a sample, selecting hardware for data acquisition, and collecting and analyzing data for image reconstruction. Copyright © 2013 Elsevier Inc. All rights reserved.
Monitoring Cyanobacteria Bloom in Taihu Lake by High-Resolution Geostationary Satellite GF4
NASA Astrophysics Data System (ADS)
Liu, J.
2018-04-01
GF4 PMS, China's high-resolution geosynchronous-orbit remote-sensing satellite, was successfully launched on December 29, 2015. Its high spatial resolution and high temporal resolution allow GF4 PMS to play a very important role in water environment monitoring, especially in the dynamic monitoring of lake and reservoir cyanobacteria blooms. As GF4 PMS has only recently been launched, there is still relatively little related research, and its practical performance in the extraction of cyanobacteria blooms remains to be further tested. Therefore, in this study, the method and effectiveness of applying GF4 PMS to cyanobacteria bloom monitoring were studied in Taihu Lake. It turned out that GF4 PMS can be applied to the dynamic monitoring of the distribution of cyanobacteria blooms in Taihu Lake, thereby revealing the temporal and spatial variation of cyanobacteria bloom distribution.
Multi-dimensional super-resolution imaging enables surface hydrophobicity mapping
NASA Astrophysics Data System (ADS)
Bongiovanni, Marie N.; Godet, Julien; Horrocks, Mathew H.; Tosatto, Laura; Carr, Alexander R.; Wirthensohn, David C.; Ranasinghe, Rohan T.; Lee, Ji-Eun; Ponjavic, Aleks; Fritz, Joelle V.; Dobson, Christopher M.; Klenerman, David; Lee, Steven F.
2016-12-01
Super-resolution microscopy allows biological systems to be studied at the nanoscale, but has been restricted to providing only positional information. Here, we show that it is possible to perform multi-dimensional super-resolution imaging to determine both the position and the environmental properties of single-molecule fluorescent emitters. The method presented here exploits the solvatochromic and fluorogenic properties of nile red to extract both the emission spectrum and the position of each dye molecule simultaneously, enabling mapping of the hydrophobicity of biological structures. We validated this by studying synthetic lipid vesicles of known composition. We then applied the method to super-resolve both the hydrophobicity of amyloid aggregates implicated in neurodegenerative diseases and the hydrophobic changes in mammalian cell membranes. Our technique is easily implemented by inserting a transmission diffraction grating into the optical path of a localization-based super-resolution microscope, enabling all the information to be extracted simultaneously from a single image plane.
NASA Astrophysics Data System (ADS)
Ji, X.; Shen, C.
2017-12-01
Flood inundation presents substantial societal hazards and also changes biogeochemistry in systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term, watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocities both demand prohibitively small time steps, even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. The model applies a semi-implicit, semi-Lagrangian (SISL) scheme to solve the dynamic wave equations and, with the assistance of the multi-mesh method, adaptively uses the dynamic wave equation only in areas of deep inundation. Therefore, the model achieves a balance between accuracy and computational cost.
A draft map of the mouse pluripotent stem cell spatial proteome
Christoforou, Andy; Mulvey, Claire M.; Breckels, Lisa M.; Geladaki, Aikaterini; Hurrell, Tracey; Hayward, Penelope C.; Naake, Thomas; Gatto, Laurent; Viner, Rosa; Arias, Alfonso Martinez; Lilley, Kathryn S.
2016-01-01
Knowledge of the subcellular distribution of proteins is vital for understanding cellular mechanisms. Capturing the subcellular proteome in a single experiment has proven challenging, with studies focusing on specific compartments or assigning proteins to subcellular niches with low resolution and/or accuracy. Here we introduce hyperLOPIT, a method that couples extensive fractionation, quantitative high-resolution accurate mass spectrometry with multivariate data analysis. We apply hyperLOPIT to a pluripotent stem cell population whose subcellular proteome has not been extensively studied. We provide localization data on over 5,000 proteins with unprecedented spatial resolution to reveal the organization of organelles, sub-organellar compartments, protein complexes, functional networks and steady-state dynamics of proteins and unexpected subcellular locations. The method paves the way for characterizing the impact of post-transcriptional and post-translational modification on protein location and studies involving proteome-level locational changes on cellular perturbation. An interactive open-source resource is presented that enables exploration of these data. PMID:26754106
Alcaráz, Mirta R; Vera-Candioti, Luciana; Culzoni, María J; Goicoechea, Héctor C
2014-04-01
This paper presents the development of a capillary electrophoresis method with diode array detection coupled to multivariate curve resolution-alternating least squares (MCR-ALS) to carry out the resolution and quantitation of a mixture of six quinolones in the presence of several unexpected components. Overlap of the time profiles of analytes and water-matrix interferences was mathematically resolved by data modeling with the well-known MCR-ALS algorithm. With the aim of overcoming the drawback caused by two compounds with similar spectra, a special strategy was implemented to model the complete electropherogram instead of dividing the data into regions as usually done in previous works. The method was first applied to quantitate analytes in standard mixtures randomly prepared in ultrapure water. Then, tap water samples spiked with several interferences were analyzed. Recoveries between 76.7% and 125% and limits of detection between 5 and 18 μg L(-1) were achieved.
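The MCR-ALS step itself, factoring the data matrix into non-negative concentration profiles and spectra by alternating least squares, can be sketched as follows. Random initialization and clipping are simplifications (real analyses use purest-variable or EFA initial estimates and more careful constraints), and the synthetic electropherogram is an assumption.

```python
import numpy as np

def mcr_als(D, n_components, n_iters=100, seed=0):
    """Minimal MCR-ALS: factor D (time x wavelength) into C @ S.T with C, S >= 0."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], n_components))                    # spectra
    for _ in range(n_iters):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)   # concentration profiles
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)
        S /= np.maximum(np.linalg.norm(S, axis=0), 1e-12)         # fix scale ambiguity
    C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)
    return C, S

# usage sketch: two overlapping elution profiles with distinct spectra
t = np.linspace(0, 1, 200)[:, None]
C_true = np.hstack([np.exp(-0.5 * ((t - 0.45) / 0.05) ** 2),
                    np.exp(-0.5 * ((t - 0.55) / 0.05) ** 2)])
w = np.linspace(0, 1, 80)[None, :]
S_true = np.vstack([np.exp(-0.5 * ((w - 0.3) / 0.08) ** 2),
                    np.exp(-0.5 * ((w - 0.6) / 0.08) ** 2)]).T
D = C_true @ S_true.T + 0.01 * np.random.default_rng(1).random((200, 80))
C_hat, S_hat = mcr_als(D, 2)
print(np.abs(D - C_hat @ S_hat.T).mean())
```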
NASA Astrophysics Data System (ADS)
Sivaguru, Mayandi; Kabir, Mohammad M.; Gartia, Manas Ranjan; Biggs, David S. C.; Sivaguru, Barghav S.; Sivaguru, Vignesh A.; Berent, Zachary T.; Wagoner Johnson, Amy J.; Fried, Glenn A.; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C.
2017-02-01
Second-harmonic generation (SHG) microscopy is a label-free imaging technique for studying collagenous materials in the extracellular matrix environment with high resolution and contrast. However, like many other microscopy techniques, the actual spatial resolution achievable by SHG microscopy is reduced by out-of-focus blur and optical aberrations that particularly degrade the amplitude of the detectable higher spatial frequencies. Because SHG is a two-photon scattering process, it is challenging to define a point spread function (PSF) for this imaging modality. As a result, in comparison with other two-photon imaging systems like two-photon fluorescence, it is difficult to apply any PSF-engineering techniques to bring the experimental spatial resolution closer to the diffraction limit. Here, we present a method to improve the spatial resolution in SHG microscopy using an advanced maximum likelihood estimation (AdvMLE) algorithm to recover the otherwise degraded higher spatial frequencies in an SHG image. Through adaptation and iteration, the AdvMLE algorithm calculates an improved PSF for an SHG image and enhances the spatial resolution, decreasing the full-width-at-half-maximum (FWHM) by 20%. Similar results are consistently observed for biological tissues with varying SHG sources, such as gold nanoparticles and collagen in porcine foot tendons. By obtaining an experimental transverse spatial resolution of 400 nm, we show that the AdvMLE algorithm brings the practical spatial resolution closer to the theoretical diffraction limit. Our approach is suitable for adaptation in micro-nano CT and MRI imaging, which has the potential to impact diagnosis and treatment of human diseases.
Scheduled Relaxation Jacobi method: Improvements and applications
NASA Astrophysics Data System (ADS)
Adsuara, J. E.; Cordero-Carrión, I.; Cerdá-Durán, P.; Aloy, M. A.
2016-09-01
Elliptic partial differential equations (ePDEs) appear in a wide variety of areas of mathematics, physics and engineering. Typically, ePDEs must be solved numerically, which sets an ever growing demand for efficient and highly parallel algorithms to tackle their computational solution. The Scheduled Relaxation Jacobi (SRJ) is a promising class of methods, atypical for combining simplicity and efficiency, that has been recently introduced for solving linear Poisson-like ePDEs. The SRJ methodology relies on computing the appropriate parameters of a multilevel approach with the goal of minimizing the number of iterations needed to cut down the residuals below specified tolerances. The efficiency in the reduction of the residual increases with the number of levels employed in the algorithm. Applying the original methodology to compute the algorithm parameters with more than 5 levels notably hinders obtaining optimal SRJ schemes, as the mixed (non-linear) algebraic-differential system of equations from which they result becomes notably stiff. Here we present a new methodology for obtaining the parameters of SRJ schemes that overcomes the limitations of the original algorithm and provide parameters for SRJ schemes with up to 15 levels and resolutions of up to 2^15 points per dimension, allowing for acceleration factors larger than several hundreds with respect to the Jacobi method for typical resolutions and, in some high resolution cases, close to 1000. Most of the success in finding SRJ optimal schemes with more than 10 levels is based on an analytic reduction of the complexity of the previously mentioned system of equations. Furthermore, we extend the original algorithm to apply it to certain systems of non-linear ePDEs.
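The basic structure of an SRJ-type iteration, plain Jacobi sweeps whose relaxation factor follows a prescribed schedule of over- and under-relaxed values, is easy to sketch for the 2D Poisson model problem. The schedule below is hand-picked only so that the cycle stays stable; the genuinely optimal multilevel parameters are the ones obtained from the optimization described in the abstract.

```python
import numpy as np

def srj_poisson_2d(f, h, weights, n_cycles=300):
    """Scheduled weighted-Jacobi sweeps for -laplacian(u) = f with u = 0 on the boundary."""
    u = np.zeros_like(f)
    for _ in range(n_cycles):
        for w in weights:                      # one SRJ cycle = one pass over the schedule
            u_jac = u.copy()
            u_jac[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                        + u[1:-1, :-2] + u[1:-1, 2:]
                                        + h * h * f[1:-1, 1:-1])
            u = (1.0 - w) * u + w * u_jac      # scheduled over/under-relaxation
    return u

# usage sketch: 65 x 65 grid, one over-relaxed sweep followed by twenty damped sweeps
n = 65
h = 1.0 / (n - 1)
f = np.zeros((n, n))
f[n // 2, n // 2] = 1.0 / h**2
u = srj_poisson_2d(f, h, weights=[32.0] + [0.5] * 20)
res = f[1:-1, 1:-1] - (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
                       - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
print(np.abs(res).max())    # residual reduced far below its initial value of 1/h^2
```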
Single-Molecule and Superresolution Imaging in Live Bacteria Cells
Biteen, Julie S.; Moerner, W.E.
2010-01-01
Single-molecule imaging enables biophysical measurements devoid of ensemble averaging, gives enhanced spatial resolution beyond the diffraction limit, and permits superresolution reconstructions. Here, single-molecule and superresolution imaging are applied to the study of proteins in live Caulobacter crescentus cells to illustrate the power of these methods in bacterial imaging. Based on these techniques, the diffusion coefficient and dynamics of the histidine protein kinase PleC, the localization behavior of the polar protein PopZ, and the treadmilling behavior and protein superstructure of the structural protein MreB are investigated with sub-40-nm spatial resolution, all in live cells. PMID:20300204
Time reversal and phase coherent music techniques for super-resolution ultrasound imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie; Labyed, Yassin
Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements. A modified TR-MUSIC imaging algorithm is used to account for ultrasound scattering from both density and compressibility contrasts. The phase response of ultrasound transducer elements is accounted for in a PC-MUSIC system.
NASA Astrophysics Data System (ADS)
Wiskin, James; Klock, John; Iuanow, Elaine; Borup, Dave T.; Terry, Robin; Malik, Bilal H.; Lenox, Mark
2017-03-01
There has been a great deal of research into ultrasound tomography for breast imaging over the past 35 years. Few successful attempts have been made to reconstruct high-resolution images using transmission ultrasound. To this end, advances have been made in 2D and 3D algorithms that utilize either time of arrival or full wave data to reconstruct images with high spatial and contrast resolution suitable for clinical interpretation. The highest resolution and quantitative accuracy result from inverse scattering applied to full wave data in 3D. However, this has been prohibitively computationally expensive, meaning that full inverse scattering ultrasound tomography has not been considered clinically viable. Here we show the results of applying a nonlinear inverse scattering algorithm to 3D data in a clinically useful time frame. This method yields Quantitative Transmission (QT) ultrasound images with high spatial and contrast resolution. We reconstruct sound speeds for various 2D and 3D phantoms and verify these values with independent measurements. The data are fully 3D as is the reconstruction algorithm, with no 2D approximations. We show that 2D reconstruction algorithms can introduce artifacts into the QT breast image which are avoided by using a full 3D algorithm and data. We show high resolution gross and microscopic anatomic correlations comparing cadaveric breast QT images with MRI to establish imaging capability and accuracy. Finally, we show reconstructions of data from volunteers, as well as an objective visual grading analysis to confirm clinical imaging capability and accuracy.
Tomková, Jana; Ondra, Peter; Kocianová, Eva; Václavík, Jan
2017-07-01
This paper presents a method for the determination of acebutolol, betaxolol, bisoprolol, metoprolol, nebivolol and sotalol in human serum by liquid-liquid extraction and ultra-high-performance liquid chromatography coupled with ultra-high-resolution TOF mass spectrometry. After liquid-liquid extraction, beta blockers were separated on a reverse-phase analytical column (Acclaim RS 120; 100 × 2.1 mm, 2.2 μm). The total run time was 6 min for each sample. Linearity, limit of detection, limit of quantification, matrix effects, specificity, precision, accuracy, recovery and sample stability were evaluated. The method was successfully applied to the therapeutic drug monitoring of 108 patients with hypertension. This method was also used for determination of beta blockers in 33 intoxicated patients. Copyright © 2016 John Wiley & Sons, Ltd.
Depth profile measurement with lenslet images of the plenoptic camera
NASA Astrophysics Data System (ADS)
Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei
2018-03-01
An approach for carrying out depth profile measurement of an object with a plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, which avoids the need to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, the resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
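The temperature replica exchange component can be illustrated with a toy one-dimensional system: several Metropolis walkers run at different inverse temperatures and periodically attempt to swap configurations with the standard exchange criterion. The double-well energy, temperature ladder and move size are assumptions standing in for the docking score and rigid-body/side-chain moves used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def energy(x):
    """Toy double-well potential standing in for a docking score."""
    return (x**2 - 1.0) ** 2

def metropolis_step(x, beta, step=0.5):
    x_new = x + rng.normal(0.0, step)
    d_e = energy(x_new) - energy(x)
    if d_e <= 0 or rng.random() < np.exp(-beta * d_e):
        return x_new
    return x

def remc(betas, n_sweeps=5000, swap_every=10):
    """Temperature replica-exchange Metropolis Monte Carlo."""
    xs = np.array([rng.normal(0.0, 1.0) for _ in betas])
    for sweep in range(n_sweeps):
        xs = np.array([metropolis_step(x, b) for x, b in zip(xs, betas)])
        if sweep % swap_every == 0:
            for i in range(len(betas) - 1):
                # swap acceptance: min(1, exp[(beta_i - beta_j)(E_i - E_j)])
                delta = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
                if rng.random() < np.exp(min(0.0, delta)):
                    xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

# usage sketch: four replicas from cold to hot
print(remc([4.0, 2.0, 1.0, 0.5]))   # the coldest replica should sit near a minimum (x close to +/-1)
```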
Global collocation methods for approximation and the solution of partial differential equations
NASA Technical Reports Server (NTRS)
Solomonoff, A.; Turkel, E.
1986-01-01
Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solutions of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of the collocation points is constructed. The approximate derivative is then found by a matrix times vector multiply. The effects of several factors on the performance of these methods including the effect of different collocation points are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative are also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
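A standard concrete instance of this machinery is the Chebyshev collocation differentiation matrix: function values at the Chebyshev-Gauss-Lobatto points are mapped to approximate derivative values by a single matrix-vector product. The construction below follows the usual recipe and is an illustrative example rather than the specific schemes analyzed in the report.

```python
import numpy as np

def cheb_diff_matrix(n):
    """Chebyshev points x_j = cos(j*pi/n), j = 0..n, and differentiation matrix D (n >= 1).

    The approximate derivative of f at the collocation points is D @ f(x).
    """
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal = negative row sums
    return D, x

# usage sketch: differentiate f(x) = exp(x) * sin(5x) on [-1, 1]
D, x = cheb_diff_matrix(24)
f = np.exp(x) * np.sin(5 * x)
df_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print(np.max(np.abs(D @ f - df_exact)))   # error decays spectrally fast with n
```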
NASA Astrophysics Data System (ADS)
Nizarul, O.; Hermana, M.; Bashir, Y.; Ghosh, D. P.
2016-02-01
In delineating complex subsurface geological features, a broad band of frequencies is needed to unveil the often hidden features of a hydrocarbon basin, such as thin bedding. The ability to resolve thin geological horizons in seismic data is recognized to be of fundamental importance for hydrocarbon exploration, seismic interpretation and reserve prediction. For thin bedding, high-frequency content is needed to enable tuning, which can be achieved by applying a bandwidth extension technique. This paper shows an application of the Short-Time Fourier Transform Half Cepstrum (STFTHC) method, a frequency bandwidth expansion technique for non-stationary seismic signals, to increase the temporal resolution, uncover thin beds and improve characterization of the basin. A wedge model and synthetic seismic data are used to validate the algorithm, and real data from the Sarawak basin are used to show the effectiveness of this method in enhancing resolution.
Towards an Optimal Noise Versus Resolution Trade-Off in Wind Scatterometry
NASA Technical Reports Server (NTRS)
Williams, Brent A.
2011-01-01
A scatterometer is a radar that measures the normalized radar cross section sigma(sup 0) of the Earth's surface. Over the ocean this signal is related to the wind via the geophysical model function (GMF). The objective of wind scatterometry is to estimate the wind vector field from sigma(sup 0) measurements; however, there are many subtleties that complicate this problem, making it difficult to obtain a unique wind field estimate. Conventionally, wind estimation is split into two stages: a wind retrieval stage in which several ambiguous solutions are obtained, and an ambiguity removal stage in which ambiguities are chosen to produce an appropriate wind vector field estimate. The most common approach to wind field estimation is to grid the scatterometer swath into wind vector cells and estimate wind vector ambiguities independently for each cell. Then, fieldwise structure is imposed on the solution by an ambiguity selection routine. Although this approach is simple and practical, it neglects fieldwise structure in the retrieval step and does not account for the spatial correlation imposed by the sampling. This makes it difficult to develop a theoretically appropriate noise versus resolution trade-off using pointwise retrieval. Fieldwise structure may be imposed in the retrieval step using a model-based approach. However, this approach is generally only practical if a low-order wind field model is applied, which may discard more information than is desired. Furthermore, model-based approaches do not account for the structure imposed by the sampling. A more general fieldwise approach is to estimate all the wind vectors for all the WVCs simultaneously from all the measurements. This approach can account for the structure of the wind field as well as the structure imposed by the sampling in the wind retrieval step. Williams and Long in 2010 developed a fieldwise retrieval method based on maximum a posteriori (MAP) estimation. This MAP approach can be extended to perform a noise versus resolution trade-off and to deal with ambiguity selection. This paper extends the fieldwise MAP estimation approach and investigates both the noise versus resolution trade-off and ambiguity removal in the fieldwise wind retrieval step. The method is then applied to the SeaWinds scatterometer and the results are analyzed.
NASA Astrophysics Data System (ADS)
Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos
2013-04-01
In the frame of research conducted to develop efficient strategies for investigating rock properties and fluids ahead of tunnel excavations, the seismic interferometry method was applied to analyze data acquired in boreholes instrumented with geophone strings. The results obtained confirmed that seismic interferometry provides improved resolution of petrophysical properties for identifying heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical methods but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during the placement of the rings covering the tunnel excavation. In this study we show how tunnel construction activities were characterized as a source of seismic signal and used in our research as the seismic source for generating a 3D reflection seismic survey. The data were recorded in a vertical, water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace. A reference pilot signal was obtained from seismograms acquired close to the tunnel face excavation in order to obtain the best signal-to-noise ratio for the interferometry processing (Poletto et al., 2010). The seismic interferometry method (Claerbout, 1968) was successfully applied to image the subsurface geological structure using the seismic wave field generated by tunnelling (tunnelling machine and construction activities) recorded with geophone strings. The technique was applied by simulating virtual shot records, one for each receiver in the borehole, from the transmitted seismic events, and processing the data as a reflection seismic survey. The pseudo-reflective wave field was obtained by cross-correlation of the transmitted wave data. We applied the relationship between the transmission response and the reflection response for a 1D multilayer structure, and then a 3D approach (Wapenaar, 2004). As a result of this seismic interferometry experiment, a 3D reflectivity model (frequency and resolution ranges) was obtained. We also proved that the seismic interferometry approach can be applied in asynchronous seismic auscultation. The reflections detected in the virtual seismic sections are in agreement with the geological features encountered during the excavation of the tunnel and also with the petrophysical properties and parameters measured in previous geophysical borehole logging. References: Claerbout, J.F., 1968. Synthesis of a layered medium from its acoustic transmission response. Geophysics, 33, 264-269. Poletto, F., Corubolo, P. and Comeli, P., 2010. Drill-bit seismic interferometry with and without pilot signals. Geophysical Prospecting, 58, 257-265. Wapenaar, K., Thorbecke, J. and Draganov, D., 2004. Relations between reflection and transmission responses of three-dimensional inhomogeneous media. Geophysical Journal International, 156, 179-194.
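The essential interferometric operation, cross-correlating (here, deconvolving) each geophone trace with a pilot trace so that the uncontrolled tunnelling noise acts as a virtual impulsive source, can be sketched compactly. The geometry, whitening constant and synthetic delays are assumptions for illustration, not the actual survey parameters.

```python
import numpy as np

def virtual_shot_gather(traces, pilot, eps=1e-3):
    """Build a virtual shot gather by correlating each trace with a pilot trace.

    traces : (n_receivers, n_samples) noise records from the borehole geophones
    pilot  : (n_samples,) reference trace recorded close to the source (tunnel face)
    The correlation is done in the frequency domain with mild spectral whitening.
    """
    n = traces.shape[1]
    P = np.fft.rfft(pilot, n=2 * n)
    T = np.fft.rfft(traces, n=2 * n, axis=1)
    X = T * np.conj(P) / (np.abs(P) ** 2 + eps * np.max(np.abs(P)) ** 2)
    gather = np.fft.irfft(X, axis=1)
    return gather[:, :n]          # keep positive lags (virtual arrival times)

# usage sketch: one noise source seen with a 5-sample moveout per receiver
rng = np.random.default_rng(5)
n, n_rec = 2048, 8
noise = rng.normal(size=n + 64)
delays = np.arange(n_rec) * 5
traces = np.stack([noise[64 - d:64 - d + n] for d in delays])
pilot = noise[64:64 + n]
print(np.argmax(virtual_shot_gather(traces, pilot), axis=1))  # should be close to [0, 5, ..., 35]
```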
LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation
NASA Astrophysics Data System (ADS)
Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.
2015-01-01
Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanners. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present a LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the non-negative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which can have significant implications in preclinical and clinical ROI imaging applications.
Nuclear Emulsion Analysis Methods of Locating Neutrino Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Carolyn Lee
2006-12-01
The Fermilab experiment 872 (DONUT) was the first to directly observe tau neutrinos in the charged-current interaction ν_τ + N → τ + X. The observation was made using a hybrid emulsion-spectrometer detector to identify the signature kink or trident decay of the tau particle. Although nuclear emulsion has the benefit of sub-micron resolution, its use incorporates difficulties such as significant distortions and a high density of data resulting from its continuously active state. Finding events and achieving sub-micron resolution in emulsion requires a multi-pronged strategy of tracking and vertex location to deal with these inherent difficulties. By applying the methods developed in this thesis, event location efficiency can be improved from a value of 58% to 87%.
Quantitative high-resolution genomic analysis of single cancer cells.
Hannemann, Juliane; Meyer-Staeckling, Sönke; Kemming, Dirk; Alpers, Iris; Joosse, Simon A; Pospisil, Heike; Kurtz, Stefan; Görndt, Jennifer; Püschel, Klaus; Riethdorf, Sabine; Pantel, Klaus; Brandt, Burkhard
2011-01-01
During cancer progression, specific genomic aberrations arise that can determine the scope of the disease and can be used as predictive or prognostic markers. The detection of specific gene amplifications or deletions in single blood-borne or disseminated tumour cells that may give rise to the development of metastases is of great clinical interest but technically challenging. In this study, we present a method for quantitative high-resolution genomic analysis of single cells. Cells were isolated under permanent microscopic control followed by high-fidelity whole genome amplification and subsequent analyses by fine tiling array-CGH and qPCR. The assay was applied to single breast cancer cells to analyze the chromosomal region centred on the therapeutically relevant EGFR gene. This method allows precise quantitative analysis of copy number variations in single cell diagnostics.
NASA Astrophysics Data System (ADS)
Guerra, J. E.; Ullrich, P. A.
2015-12-01
Tempest is a next-generation global climate and weather simulation platform designed to allow experimentation with numerical methods at very high spatial resolutions. The atmospheric fluid equations are discretized by continuous / discontinuous finite elements in the horizontal and by a staggered nodal finite element method (SNFEM) in the vertical, coupled with implicit/explicit time integration. At global horizontal resolutions below 10km, many important questions remain on optimal techniques for solving the fluid equations. We present results from a suite of meso-scale test cases to validate the performance of the SNFEM applied in the vertical. Internal gravity wave, mountain wave, convective, and Cartesian baroclinic instability tests will be shown at various vertical orders of accuracy and compared with known results.
NASA Astrophysics Data System (ADS)
Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.
2018-02-01
While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.
NASA Technical Reports Server (NTRS)
Feldman, Sandra C.
1987-01-01
Methods of applying principal component (PC) analysis to high resolution remote sensing imagery were examined. Using Airborne Imaging Spectrometer (AIS) data, PC analysis was found to be useful for removing the effects of albedo and noise and for isolating the significant information on argillic alteration, zeolite, and carbonate minerals. An effective technique applied PC analysis using as input the first 16 AIS bands, 7 intermediate bands, and the last 16 AIS bands from the 32 flat-field-corrected bands between 2048 and 2337 nm. Most of the significant mineralogical information resided in the second PC. PC color composites and density-sliced images provided a good mineralogical separation when applied to an AIS data set. Although computer intensive, the advantage of PC analysis is that it employs algorithms which already exist on most image processing systems.
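For readers unfamiliar with the transform, a minimal band-space PC rotation of an image cube might look like the following NumPy sketch; the cube layout and function name are illustrative assumptions and the snippet is not tied to the AIS processing chain used in the study.

    import numpy as np

    def principal_components(cube):
        # cube: (rows, cols, bands) reflectance image; returns the pixels
        # rotated onto principal components, ordered by decreasing variance.
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        X -= X.mean(axis=0)                     # remove per-band mean
        cov = np.cov(X, rowvar=False)           # bands x bands covariance
        eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
        order = np.argsort(eigvals)[::-1]       # sort by decreasing variance
        pcs = X @ eigvecs[:, order]             # project pixels onto the PCs
        return pcs.reshape(rows, cols, bands)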
Ultra-high resolution computed tomography imaging
Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.
2002-01-01
A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 micron.
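A hedged sketch of the projection-correction step is shown below, assuming the experimentally determined transfer function is available on the same Fourier grid as the projection; the Wiener-style damping term eps is an illustrative choice, not part of the patented method.

    import numpy as np

    def correct_projection(projection, transfer_fn, eps=1e-3):
        # projection: 2-D projection data; transfer_fn: measured system
        # transfer function sampled on the same Fourier grid (assumption).
        # eps damps frequencies where the transfer function is weak.
        P = np.fft.fft2(projection)
        H = transfer_fn
        corrected = P * np.conj(H) / (np.abs(H) ** 2 + eps)
        return np.real(np.fft.ifft2(corrected))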
NASA Astrophysics Data System (ADS)
Carpintero, Elisabet; González-Dugo, María P.; José Polo, María; Hain, Christopher; Nieto, Héctor; Gao, Feng; Andreu, Ana; Kustas, William; Anderson, Martha
2017-04-01
The integration of currently available satellite data into surface energy balance models can provide estimates of evapotranspiration (ET) with spatial and temporal resolutions determined by sensor characteristics. The use of data fusion techniques may increase the temporal resolution of these estimates using multiple satellites, providing more frequent ET monitoring for hydrological purposes. The objective of this work is to analyze the effects of pixel resolution on the estimation of evapotranspiration using different remote sensing platforms, and to provide continuous monitoring of ET over a water-controlled ecosystem, the Holm oak savanna woodland known as dehesa. It is an agroforestry system with a complex canopy structure characterized by widely spaced oak trees combined with crops, pasture and shrubs. The study was carried out during two years, 2013 and 2014, combining ET estimates at different spatial and temporal resolutions and applying data fusion techniques for frequent monitoring of water use at fine spatial resolution. A global, daily ET product at 5 km resolution, developed with the ALEXI model using the MODIS day-night temperature difference (Anderson et al., 2015a), was used as a starting point. The associated flux disaggregation scheme, DisALEXI (Norman et al., 2003), was then applied to constrain higher resolution ET from both MODIS and Landsat 7/8 images. The Climate Forecast System Reanalysis (CFSR) provided the meteorological data. Finally, a data fusion technique, the STARFM model (Gao et al., 2006), was applied to fuse MODIS and Landsat ET maps in order to obtain daily ET at 30 m resolution. These estimates were validated and analyzed at two different scales: at local scale over a dehesa experimental site and at watershed scale over a predominantly Mediterranean oak savanna landscape, both located in southern Spain. Local ET estimates from the modeling system were validated with measurements provided by an eddy covariance tower installed in the dehesa (38°12'N, 4°17'W, 736 m a.s.l.). The results supported the ability of the ALEXI/DisALEXI model to accurately estimate turbulent and radiative fluxes over this complex landscape, both at 1 km and at 30 m spatial resolution. The application of the STARFM model gave a significant improvement in capturing the spatio-temporal heterogeneity of ET over the different seasons, compared with traditional interpolation methods using MODIS and Landsat ET data. At basin scale, the physically based distributed hydrological model WiMMed was applied to evaluate the ET estimates. This model focuses on the spatial interpolation of the meteorological variables and the physical modelling of the daily water balance at the cell and watershed scale, using daily streamflow rates measured at the watershed outlet for final comparison.
Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.
Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H
2013-05-01
In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. Copyright © 2012 Wiley Periodicals, Inc.
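The per-frame update underlying such a reconstruction is the standard linear Kalman predict/update cycle; the sketch below is a generic textbook form with illustrative variable names and does not reproduce the paper's specific dynamic model for Cartesian k-space data.

    import numpy as np

    def kalman_step(x, P, y, A, H, Q, R):
        # x, P: previous state estimate and covariance; y: new measurement;
        # A, H: state-transition and observation matrices; Q, R: process and
        # measurement noise covariances (all illustrative placeholders).
        x_pred = A @ x                        # predict
        P_pred = A @ P @ A.T + Q
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ (y - H @ x_pred) # update with measurement y
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new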
Kalman Filter Techniques for Accelerated Cartesian Dynamic Cardiac Imaging
Feng, Xue; Salerno, Michael; Kramer, Christopher M.; Meyer, Craig H.
2012-01-01
In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories, because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and SNR. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. PMID:22926804
Quadruplex MAPH: improvement of throughput in high-resolution copy number screening.
Tyson, Jess; Majerus, Tamsin Mo; Walker, Susan; Armour, John Al
2009-09-28
Copy number variation (CNV) in the human genome is recognised as a widespread and important source of human genetic variation. Now the challenge is to screen for these CNVs at high resolution in a reliable, accurate and cost-effective way. Multiplex Amplifiable Probe Hybridisation (MAPH) is a sensitive, high-resolution technology appropriate for screening for CNVs in a defined region, for a targeted population. We have developed MAPH to a highly multiplexed format ("QuadMAPH") that allows the user a four-fold increase in the number of loci tested simultaneously. We have used this method to analyse a genomic region of 210 kb, including the MSH2 gene and 120 kb of flanking DNA. We show that the QuadMAPH probes report copy number with equivalent accuracy to simplex MAPH, reliably demonstrating diploid copy number in control samples and accurately detecting deletions in Hereditary Non-Polyposis Colorectal Cancer (HNPCC) samples. QuadMAPH is an accurate, high-resolution method that allows targeted screening of large numbers of subjects without the expense of genome-wide approaches. Whilst we have applied this technique to a region of the human genome, it is equally applicable to the genomes of other organisms.
Quadruplex MAPH: improvement of throughput in high-resolution copy number screening
Tyson, Jess; Majerus, Tamsin MO; Walker, Susan; Armour, John AL
2009-01-01
Background Copy number variation (CNV) in the human genome is recognised as a widespread and important source of human genetic variation. Now the challenge is to screen for these CNVs at high resolution in a reliable, accurate and cost-effective way. Results Multiplex Amplifiable Probe Hybridisation (MAPH) is a sensitive, high-resolution technology appropriate for screening for CNVs in a defined region, for a targeted population. We have developed MAPH to a highly multiplexed format ("QuadMAPH") that allows the user a four-fold increase in the number of loci tested simultaneously. We have used this method to analyse a genomic region of 210 kb, including the MSH2 gene and 120 kb of flanking DNA. We show that the QuadMAPH probes report copy number with equivalent accuracy to simplex MAPH, reliably demonstrating diploid copy number in control samples and accurately detecting deletions in Hereditary Non-Polyposis Colorectal Cancer (HNPCC) samples. Conclusion QuadMAPH is an accurate, high-resolution method that allows targeted screening of large numbers of subjects without the expense of genome-wide approaches. Whilst we have applied this technique to a region of the human genome, it is equally applicable to the genomes of other organisms. PMID:19785739
NASA Astrophysics Data System (ADS)
Wang, Le
2003-10-01
Modern forest management poses an increasing need for detailed knowledge of forest information at different spatial scales. At the forest level, information on tree species assemblage is desired, whereas at or below the stand level, individual-tree information is preferred. Remote sensing provides an effective tool to extract the above information at multiple spatial scales in the continuous time domain. To date, the increasing volume and ready availability of high-spatial-resolution data have led to a much wider application of remotely sensed products. Nevertheless, to make effective use of the improving spatial resolution, conventional pixel-based classification methods are far from satisfactory. Correspondingly, developing object-based methods becomes a central challenge for researchers in the field of remote sensing. This thesis focuses on the development of methods for accurate individual tree identification and tree species classification. We develop a method in which individual tree crown boundaries and treetop locations are derived under a unified framework. We apply a two-stage approach with edge detection followed by marker-controlled watershed segmentation. Treetops are modeled from radiometry and geometry aspects. Specifically, treetops are assumed to be represented by local radiation maxima and to be located near the center of the tree crown. As a result, a marker image is created from the derived treetops to guide a watershed segmentation to further differentiate overlapping trees and to produce a segmented image comprised of individual tree crowns. The image segmentation method developed achieves a promising result for a 256 x 256 CASI image. Further effort is then made to extend our methods to multiple scales constructed from a wavelet decomposition. Scale-consistency and geometric-consistency criteria are designed to examine the gradients along the scale-space in order to separate true crown boundaries from unwanted textures occurring due to branches and twigs. As a result of the inverse wavelet transform, the tree crown boundary is enhanced while the unwanted textures are suppressed. Based on the enhanced image, an improvement is achieved when applying the two-stage method to a high resolution aerial photograph. To improve tree species classification, we develop a new method to choose the optimal scale parameter with the aid of the Bhattacharyya distance (BD), a well-known index of class separability in traditional pixel-based classification. The optimal scale parameter is then fed into the process of a region-growing-based segmentation as a break-off value. Our object classification achieves better accuracy in separating tree species when compared to the conventional maximum likelihood classification (MLC). In summary, we develop two object-based methods for identifying individual trees and classifying tree species from high-spatial-resolution imagery. Both methods achieve promising results and will promote the integration of remote sensing and GIS in forest applications.
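A minimal sketch of the marker-controlled watershed idea, using scikit-image and treating local brightness maxima as treetop markers; thresholds, smoothing parameters and the function name are illustrative assumptions, not the thesis implementation.

    import numpy as np
    from skimage import filters
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_crowns(image):
        # image: single-band brightness image; treetops are modelled as
        # local radiation maxima and used as markers for the watershed.
        smoothed = filters.gaussian(image, sigma=2)
        peaks = peak_local_max(smoothed, min_distance=5)   # candidate treetops
        markers = np.zeros(image.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        mask = smoothed > filters.threshold_otsu(smoothed)  # crowns vs background
        # grow crowns downhill from the markers on the inverted image
        return watershed(-smoothed, markers, mask=mask)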
NASA Astrophysics Data System (ADS)
Karadjov, Metody; Velitchkova, Nikolaya; Veleva, Olga; Velichkov, Serafim; Markov, Pavel; Daskalova, Nonka
2016-05-01
This paper deals with spectral interferences from a complex matrix containing Mo, Al, Ti, Fe, Mg, Ca and Cu in the determination of rhenium in molybdenum and copper concentrates by inductively coupled plasma optical emission spectrometry (ICP-OES). Using a radially viewed 40.68 MHz ICP equipped with a high resolution spectrometer (spectral bandwidth = 5 pm), the hyperfine structure (HFS) of the most prominent lines of rhenium (Re II 197.248 nm, Re II 221.426 nm and Re II 227.525 nm) was registered. The HFS components under high resolution conditions were used as separate prominent lines in order to circumvent spectral interferences. The Q-concept was applied for quantification of spectral interferences. Quantitative databases for the type and magnitude of the spectral interferences in the presence of the above-mentioned matrix constituents were obtained using a radially viewed 40.68 MHz ICP with high resolution and an axially viewed 27.12 MHz ICP with middle resolution. The data for both ICP-OES systems were collected chiefly with a view to comparing, for spectrochemical analysis, the magnitude of line and wing (background) spectral interference and the true detection limits obtained with spectroscopic apparatus of different spectral resolution. Sample pretreatment by sintering with magnesium oxide and oxidizing agents, as well as microwave acid digestion, was applied. The feasibility, accuracy and precision of the analytical results were experimentally demonstrated with certified reference materials.
NASA Astrophysics Data System (ADS)
Moustafa, Azza Aziz; Salem, Hesham; Hegazy, Maha; Ali, Omnia
2015-02-01
Simple, accurate, and selective methods have been developed and validated for simultaneous determination of a ternary mixture of Chlorpheniramine maleate (CPM), Pseudoephedrine HCl (PSE) and Ibuprofen (IBF) in tablet dosage form. Four univariate methods manipulating ratio spectra were applied: method A is the double divisor-ratio difference spectrophotometric method (DD-RD); method B is the double divisor-derivative ratio spectrophotometric method (DD-DR); method C is the derivative ratio spectrum-zero crossing method (DRZC); and method D is mean centering of ratio spectra (MCR). Two multivariate methods were also developed and validated: methods E and F are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods have the advantage of simultaneous determination of the mentioned drugs without prior separation steps. They were successfully applied to laboratory-prepared mixtures and to a commercial pharmaceutical preparation without any interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with the official methods, where no significant difference was observed regarding either accuracy or precision.
Jeffrey T. Walton
2008-01-01
Three machine learning subpixel estimation methods (Cubist, Random Forests, and support vector regression) were applied to estimate urban cover. Urban forest canopy cover and impervious surface cover were estimated from Landsat-7 ETM+ imagery using a higher resolution cover map resampled to 30 m as training and reference data. Three different band combinations (...
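A minimal sketch of one of the three regressors, assuming per-pixel Landsat band values as predictors and 30 m reference cover fractions as the target; the arrays below are random placeholders standing in for real training data.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder arrays: X holds per-pixel Landsat band values
    # (n_pixels, n_bands); y holds reference fractional cover in [0, 1]
    # from the resampled 30 m cover map (synthetic values here).
    X_train = np.random.rand(1000, 6)
    y_train = np.random.rand(1000)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    cover_fraction = model.predict(X_train)   # subpixel cover estimate per pixel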
USDA-ARS?s Scientific Manuscript database
Despite a recent new classification, a stable tree of life for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study we apply five single copy nuclear genes (SCNGs) to the phylogeny of the order Cycadales. We specifically aim to evaluate seve...
NASA Technical Reports Server (NTRS)
Roache, P. J.
1979-01-01
A summary is given of the attempts made to apply semidirect methods to the calculation of three-dimensional viscous flows over suction holes in laminar flow control surfaces. The attempts were all unsuccessful, due to either (1) lack of resolution capability, (2) lack of computer efficiency, or (3) instability.
Double difference method in deep inelastic neutron scattering on the VESUVIO spectrometer
NASA Astrophysics Data System (ADS)
Andreani, C.; Colognesi, D.; Degiorgi, E.; Filabozzi, A.; Nardone, M.; Pace, E.; Pietropaolo, A.; Senesi, R.
2003-02-01
The principles of the Double Difference (DD) method, applied to the neutron spectrometer VESUVIO, are discussed. VESUVIO, an inverse geometry spectrometer operating at the ISIS pulsed neutron source in the eV energy region, has been specifically designed to measure single-particle dynamical properties in condensed matter. The width of the nuclear resonance of the absorbing filter, used for the neutron energy analysis, provides the most important contribution to the energy resolution of inverse geometry instruments. In this paper, the DD method, which is based on a linear combination of two measurements recorded with filter foils of the same resonance material but of different thickness, is shown to improve the instrumental energy resolution significantly as compared with the Single Difference (SD) method. The asymptotic response functions, derived through Monte Carlo simulations for polycrystalline Pb and ZrH2 samples, are analysed for both the DD and SD methods and compared with the experimental ones for the Pb sample. The response functions have been modelled for two distinct experimental configurations of the VESUVIO spectrometer, employing 6Li-glass neutron detectors and NaI γ detectors revealing the γ-ray cascade from the (n,γ) reaction, respectively. The DD method appears to be an effective experimental procedure for Deep Inelastic Neutron Scattering measurements on the VESUVIO spectrometer, since it narrows the experimental resolution function of the instrument in both the 6Li-glass neutron detector and γ detector configurations.
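Schematically, the DD spectrum is a linear combination of the two foil-difference measurements; the sketch below assumes a simple subtraction with a scaling factor beta chosen to cancel the broad resonance wings, which only illustrates the idea rather than the exact weighting used on VESUVIO.

    import numpy as np

    def double_difference(c_thin, c_thick, beta):
        # c_thin, c_thick: foil-difference spectra measured with thin and
        # thick foils of the same resonance material; beta: scaling factor
        # chosen to cancel the broad resonance wings (illustrative only).
        return np.asarray(c_thin) - beta * np.asarray(c_thick)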
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiss, Paul
Spectroscopic imaging tools and methods, based on scanning tunneling microscopes (STMs), are being developed and applied to examine buried layers and interfaces with ultrahigh resolution. These new methods measure buried contacts, molecule-substrate bonds, buried dipoles in molecular layers, and key structural aspects of adsorbed molecules, such as tilt angles. We are developing the ability to locate lateral projections of molecular parts as a means of determining the structures of molecular layers. We are developing the ability to measure the orientation of buried functionality.
Genome-scale engineering of Saccharomyces cerevisiae with single-nucleotide precision.
Bao, Zehua; HamediRad, Mohammad; Xue, Pu; Xiao, Han; Tasan, Ipek; Chao, Ran; Liang, Jing; Zhao, Huimin
2018-07-01
We developed a CRISPR-Cas9- and homology-directed-repair-assisted genome-scale engineering method named CHAnGE that can rapidly output tens of thousands of specific genetic variants in yeast. More than 98% of target sequences were efficiently edited with an average frequency of 82%. We validate the single-nucleotide resolution genome-editing capability of this technology by creating a genome-wide gene disruption collection and apply our method to improve tolerance to growth inhibitors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuprat, A.P.; Glasser, A.H.
The authors discuss unstructured grids for application to transport in the tokamak edge scrape-off layer (SOL). They have developed a new metric with which to judge element elongation and resolution requirements. Using this method, the authors apply a standard moving finite element technique to advance the SOL equations while dynamically inserting/deleting nodes that violate an elongation criterion. In a tokamak plasma, this method achieves a more uniform accuracy and results in highly stretched triangular finite elements, except near the separatrix X-point, where transport is more isotropic.
Out, Astrid A; van Minderhout, Ivonne J H M; van der Stoep, Nienke; van Bommel, Lysette S R; Kluijt, Irma; Aalfs, Cora; Voorendt, Marsha; Vossen, Rolf H A M; Nielsen, Maartje; Vasen, Hans F A; Morreau, Hans; Devilee, Peter; Tops, Carli M J; Hes, Frederik J
2015-06-01
Familial adenomatous polyposis is most frequently caused by pathogenic variants in either the APC gene or the MUTYH gene. The detection rate of pathogenic variants depends on the severity of the phenotype and the sensitivity of the screening method, including its sensitivity for mosaic variants. For 171 patients with multiple colorectal polyps without a previously detectable pathogenic variant, APC was reanalyzed in leukocyte DNA by one uniform technique: high-resolution melting (HRM) analysis. Serial dilution of heterozygous DNA resulted in a lowest detectable allelic fraction of 6% for the majority of variants. HRM analysis and subsequent sequencing detected pathogenic fully heterozygous APC variants in 10 (6%) of the patients and pathogenic mosaic variants in 2 (1%). All of these variants had previously been missed by various conventional scanning methods. In parallel, HRM APC scanning was applied to DNA isolated from polyp tissue of two additional patients with apparently sporadic polyposis and without a detectable pathogenic APC variant in leukocyte DNA. In both patients a pathogenic mosaic APC variant was present in multiple polyps. The detection of pathogenic APC variants in 7% of the patients, including mosaics, illustrates the usefulness of a complete APC gene reanalysis of previously tested patients by a supplementary scanning method. HRM is a sensitive and fast pre-screening method for reliable detection of heterozygous and mosaic variants, which can be applied to leukocyte- and polyp-derived DNA.
Ultra-long high-sensitivity Φ-OTDR for high spatial resolution intrusion detection of pipelines.
Peng, Fei; Wu, Han; Jia, Xin-Hong; Rao, Yun-Jiang; Wang, Zi-Nan; Peng, Zheng-Pu
2014-06-02
An ultra-long phase-sensitive optical time domain reflectometry (Φ-OTDR) system that can achieve high-sensitivity intrusion detection over 131.5 km of fiber with a high spatial resolution of 8 m is presented, which is the longest Φ-OTDR reported to date, to the best of our knowledge. It is found that the combination of distributed Raman amplification with heterodyne detection can extend the sensing distance and enhance the sensitivity substantially, leading to the realization of ultra-long Φ-OTDR with high sensitivity and spatial resolution. Furthermore, the feasibility of applying such an ultra-long Φ-OTDR to pipeline security monitoring is demonstrated, and the features of the intrusion signal can be extracted with improved SNR by using the wavelet detrending/denoising method proposed.
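A minimal sketch of wavelet detrending/denoising of an intensity trace using PyWavelets; the wavelet, decomposition level and the choice of which coefficient sets to zero are placeholders, not the parameters of the reported system.

    import numpy as np
    import pywt

    def wavelet_denoise(trace, wavelet="db4", level=6, keep=4):
        # trace: 1-D intensity signal; zero the coarsest (trend) coefficient
        # set and the finest (noise) detail sets, then reconstruct.
        # Parameter values are illustrative placeholders.
        coeffs = pywt.wavedec(trace, wavelet, level=level)
        coeffs[0] = np.zeros_like(coeffs[0])          # drop slow trend
        for i in range(keep + 1, len(coeffs)):
            coeffs[i] = np.zeros_like(coeffs[i])      # drop finest scales
        return pywt.waverec(coeffs, wavelet)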
Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling
NASA Technical Reports Server (NTRS)
Kenward, T.; Lettenmaier, D. P.
1997-01-01
The effect of the vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and the resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.
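As an example of one directly derived quantity, a slope map can be computed from a gridded DEM by finite differences; the sketch below is a generic NumPy version with an assumed 30 m cell size and is not the specific procedure used in the study.

    import numpy as np

    def slope_degrees(dem, cell_size=30.0):
        # dem: 2-D elevation grid; cell_size: grid spacing in metres
        # (30 m assumed here). Returns the slope in degrees.
        dzdy, dzdx = np.gradient(dem, cell_size)
        return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))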
NASA Astrophysics Data System (ADS)
Kurose, Noriko; Matsumoto, Kota; Yamada, Fumihiko; Roffi, Teuku Muhammad; Kamiya, Itaru; Iwata, Naotaka; Aoyagi, Yoshinobu
2018-01-01
A method for laser-induced local p-type activation of an as-grown Mg-doped GaN sample with high lateral resolution is developed for the first time, with the aim of realizing high-power vertical devices. As-grown Mg-doped GaN is converted to p-type GaN in a confined local area. The transition from an insulating to a p-type area takes place within a fine resolution of about 1-2 μm. The results show that the technique can be applied in fabricating devices such as vertical field-effect transistors, vertical bipolar transistors and vertical Schottky diodes with a current confinement region using a p-type carrier-blocking layer formed by this technique.
Gleber, Sophie -Charlotte; Wojcik, Michael; Liu, Jie; ...
2014-11-05
The focusing efficiency of Fresnel zone plates (FZPs) for X-rays depends on zone height, while the achievable spatial resolution depends on the width of the finest zones. FZPs with optimal efficiency and sub-100-nm spatial resolution require high-aspect-ratio structures which are difficult to fabricate with current technology, especially for the hard X-ray regime. A possible solution is to stack several zone plates. To increase the number of FZPs within one stack, we first demonstrate intermediate-field stacking and apply this method to stacks of up to five FZPs with adjusted diameters. Approaching the respective optimum zone height, we maximized efficiencies for high-resolution focusing at three different energies: 10, 11.8, and 25 keV.
Design of UAV high resolution image transmission system
NASA Astrophysics Data System (ADS)
Gao, Qiang; Ji, Ming; Pang, Lan; Jiang, Wen-tao; Fan, Pengcheng; Zhang, Xingcheng
2017-02-01
In order to overcome the bandwidth limitation of the image transmission system on a UAV, a scheme based on image compression technology for mini UAVs is proposed, driven by the requirements of a high-definition UAV image transmission system. The coding module and key technologies of the H.264 video codec standard were analyzed and studied for UAV area video communication. Based on research into high-resolution image encoding/decoding techniques and wireless transmission methods, the high-resolution image transmission system was designed on an architecture combining Android and a video codec chip. The constructed system was verified by laboratory experiments: the bit rate could be controlled easily, the QoS was stable, and the low latency meets the requirements of most applications, for military as well as industrial use.
Resolution enhancement in coherent x-ray diffraction imaging by overcoming instrumental noise.
Kim, Chan; Kim, Yoonhee; Song, Changyong; Kim, Sang Soo; Kim, Sunam; Kang, Hyon Chol; Hwu, Yeukuang; Tsuei, Ku-Ding; Liang, Keng San; Noh, Do Young
2014-11-17
We report that reference objects, strong scatterers neighboring weak phase objects, enhance the phase retrieval and spatial resolution in coherent x-ray diffraction imaging (CDI). A CDI experiment with Au nano-particles showed that the reference objects amplified the signal-to-noise ratio in the diffraction intensity at large diffraction angles, which significantly enhanced the image resolution. The interference between the x-rays diffracted from the reference objects and from a specimen also improved the retrieval of the phase of the diffraction signal. The enhancement was applied to image NiO nano-particles and a mitochondrion and was confirmed in a simulation with a bacteria phantom. We expect that the proposed method will be of great help in imaging weakly scattering soft matter using coherent x-ray sources, including x-ray free electron lasers.
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
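A hedged sketch of the decoupled iteration, assuming a linearized forward operator G (rays × cells) and using scikit-image's split-Bregman TV denoiser for the second subproblem; operator names, parameter values and the exact coupling term are illustrative, not the authors' implementation.

    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator
    from skimage.restoration import denoise_tv_bregman

    def mtv_inversion(G, t_obs, shape, lam=1.0, mu=1.0, n_outer=10):
        # G: linearized forward operator (rays x cells); t_obs: observed
        # traveltimes; shape: (nz, nx) of the 2-D slowness model.
        # All parameter values are placeholders.
        m = np.zeros(int(np.prod(shape)))
        for _ in range(n_outer):
            # subproblem 1: Tikhonov-regularized least squares,
            # (G^T G + lam I) m = G^T t_obs + lam m_prev, solved with CG
            A = LinearOperator((m.size, m.size),
                               matvec=lambda x: G.T @ (G @ x) + lam * x)
            b = G.T @ t_obs + lam * m
            m, _ = cg(A, b, x0=m)
            # subproblem 2: TV step on the current model via split Bregman
            m = denoise_tv_bregman(m.reshape(shape), weight=mu).ravel()
        return m.reshape(shape)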
Design and performance of optical endoscopes for the early detection of cancer
NASA Astrophysics Data System (ADS)
Keenan, Maureen Molly
Cancer is a multistage, heterogeneous disease that develops through a series of genetic mutations. Early stage cancer is most responsive to treatment but can be the hardest to detect due to its small size, lack of definitive symptoms and potential location deep in the body. Whole-body imaging methods (MRI/CT/PET) lack the necessary resolution to detect cellular-level abnormalities. Optical methods, which have sufficient resolution, can be miniaturized into endoscopes, which are necessary to overcome the limited penetration of light into tissue. By combining optical coherence tomography (OCT) and fluorescence imaging methods it is possible to create endoscopes sensitive to molecular and structural changes. I applied a dual-modality 2 mm diameter rigid endoscope to the study of the natural history of colon cancer in a mouse model, and later applied this knowledge to the design and characterization of a 0.8 mm dual-modality flexible probe for use in human fallopian tubes. By using this endoscope, which is introduced through the natural orifice and is compatible with existing hysteroscopes, high-risk women could be screened in a procedure at a similar level of invasiveness as a colonoscopy. Therefore, the endoscope fills this gap in clinical care for women at high risk for ovarian cancer.
Cerebral TOF Angiography at 7T: Impact of B1+ Shimming with a 16-Channel Transceiver Array
Schmitter, Sebastian; Wu, Xiaoping; Adriany, Gregor; Auerbach, Edward J.; Uğurbil, Kâmil; Van de Moortele, Pierre-François
2014-01-01
Purpose Time-of-flight (TOF) MR imaging is clinically among the most common cerebral non-contrast enhanced MR angiography techniques allowing for high spatial resolution. As shown by several groups TOF contrast significantly improves at ultra-high field (UHF) of B0=7T, however, spatially varying transmit B1 (B1+) fields at 7T reduce TOF contrast uniformity, typically resulting in sub-optimal contrast and reduced vessel conspicuity in the brain periphery. Methods Using a 16-channel B1+ shimming system we compare different dynamically applied B1+ phase shimming approaches on the RF excitation to improve contrast homogeneity for a (0.5 mm)3 resolution multi-slab TOF acquisition. In addition, B1+ shimming applied on the venous saturation pulse was investigated to improve venous suppression, subcutaneous fat signal reduction and enhanced background suppression originating from MT effect. Results B1+ excitation homogeneity was improved by a factor 2.2 to 2.6 on average depending on the shimming approach, compared to a standard CP-like phase setting, leading to improved vessel conspicuity particularly in the periphery. Stronger saturation, higher fat suppression and improved background suppression were observed when dynamically applying B1+ shimming on the venous saturation pulse. Conclusion B1+ shimming can significantly improve high resolution TOF vascular investigations at UHF, holding strong promise for non contrast-enhanced clinical applications. PMID:23640915
Mapping turbidity in the Charles River, Boston using a high-resolution satellite.
Hellweger, Ferdi L; Miller, Will; Oshodi, Kehinde Sarat
2007-09-01
The usability of high-resolution satellite imagery for estimating spatial water quality patterns in urban water bodies is evaluated using turbidity in the lower Charles River, Boston as a case study. Water turbidity was surveyed using a boat-mounted optical sensor (YSI) at 5 m spatial resolution, resulting in about 4,000 data points. The ground data were collected coincidently with a satellite imagery acquisition (IKONOS), which consists of multispectral (R, G, B) reflectance at 1 m resolution. The original correlation between the raw ground and satellite data was poor (R2 = 0.05). Ground data were processed by removing points affected by contamination (e.g., sensor encounters a particle floc), which were identified visually. Also, the ground data were corrected for the memory effect introduced by the sensor's protective casing using an analytical model. Satellite data were processed to remove pixels affected by permanent non-water features (e.g., shoreline). In addition, water pixels within a certain buffer distance from permanent non-water features were removed due to contamination by the adjacency effect. To determine the appropriate buffer distance, a procedure that explicitly considers the distance of pixels to the permanent non-water features was applied. Two automatic methods for removing the effect of temporary non-water features (e.g., boats) were investigated, including (1) creating a water-only mask based on an unsupervised classification and (2) removing (filling) all local maxima in reflectance. After the various processing steps, the correlation between the ground and satellite data was significantly better (R2 = 0.70). The correlation was applied to the satellite image to develop a map of turbidity in the lower Charles River, which reveals large-scale patterns in water clarity. However, the adjacency effect prevented the application of this method to near-shore areas, where high-resolution patterns were expected (e.g., outfall plumes).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.
Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods, and apply it to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Two-dimensional PMF can then effectively perform three-dimensional factorization on the three-dimensional TAG mass spectral data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular-level resolution on other bulk aerosol components commonly observed by the AMS.
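A minimal sketch of the binning step that builds the PMF input matrix, assuming each sample is stored as a (scans × m/z) array; the function name and shapes are illustrative, and retention-time-shift correction is omitted.

    import numpy as np

    def bin_chromatograms(samples, n_bins):
        # samples: iterable of (n_scans, n_mz) arrays, one per sample;
        # returns a (n_samples, n_bins * n_mz) matrix suitable for PMF.
        rows = []
        for spectra in samples:
            n_scans, n_mz = spectra.shape
            edges = np.linspace(0, n_scans, n_bins + 1).astype(int)
            binned = np.vstack([spectra[a:b].sum(axis=0)
                                for a, b in zip(edges[:-1], edges[1:])])
            rows.append(binned.ravel())       # bins x m/z, unfolded per sample
        return np.vstack(rows)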
Giroussi, S; Voulgaropoulos, A; Ayiannidis, A K; Golimowski, J; Janicki, M
1995-12-22
A selective and sensitive voltammetric method for the determination of cobalt in vegetable and animal foodstuffs is developed. The method is based on the use of alpha-benzil dioxime (alpha-BD) as a chelating agent for differential pulse adsorptive stripping voltammetry (DPASV) and is free from zinc interferences. The influence of pH, time and alpha-BD concentration on the peak resolution and height is discussed. The method was successfully applied to some typical vegetable and animal foodstuffs with R.S.D. < 6%.
Lin, Zhichao; Wu, Zhongyu
2009-05-01
A rapid and reliable radiochemical method coupled with a simple and compact plating apparatus was developed, validated, and applied for the analysis of (210)Po in a variety of food products and bioassay samples. The method performance characteristics, including accuracy, precision, robustness, and specificity, were evaluated along with a detailed measurement uncertainty analysis. With high Po recovery, improved energy resolution, and effective removal of interfering elements by chromatographic extraction, the overall method accuracy was determined to be better than 5%, with a measurement precision of 10%, at the 95% confidence level.
Beyramysoltan, Samira; Rajkó, Róbert; Abdollahi, Hamid
2013-08-12
The results obtained by soft-modeling multivariate curve resolution methods are often not unique and are questionable because of rotational ambiguity. This means that a range of feasible solutions fit the experimental data equally well and fulfill the constraints. In the chemometric literature, surveying constraints that reduce rotational ambiguity is a major challenge for chemometricians. It is worthwhile to study the effects of applying constraints on the reduction of rotational ambiguity, since this can help in choosing useful constraints to impose in multivariate curve resolution methods when analyzing data sets. In this work, we have investigated the effect of the equality constraint on the reduction of rotational ambiguity. For the calculation of all feasible solutions corresponding to a known spectrum, a novel systematic grid search method based on species-based particle swarm optimization is proposed for a three-component system. Copyright © 2013 Elsevier B.V. All rights reserved.
Finite slice analysis (FINA) of sliced and velocity mapped images on a Cartesian grid
NASA Astrophysics Data System (ADS)
Thompson, J. O. F.; Amarasinghe, C.; Foley, C. D.; Rombes, N.; Gao, Z.; Vogels, S. N.; van de Meerakker, S. Y. T.; Suits, A. G.
2017-08-01
Although time-sliced imaging yields improved signal-to-noise and resolution compared with unsliced velocity mapped ion images, for finite slice widths as encountered in real experiments there is a loss of resolution and recovered intensities for the slow fragments. Recently, we reported a new approach that permits correction of these effects for an arbitrarily sliced distribution of a 3D charged particle cloud. This finite slice analysis (FinA) method utilizes basis functions that model the out-of-plane contribution of a given velocity component to the image for sequential subtraction in a spherical polar coordinate system. However, the original approach suffers from a slow processing time due to the weighting procedure needed to accurately model the out-of-plane projection of an anisotropic angular distribution. To overcome this issue we present a variant of the method in which the FinA approach is performed in a cylindrical coordinate system (Cartesian in the image plane) rather than a spherical polar coordinate system. Dubbed C-FinA, we show how this method is applied in much the same manner. We compare this variant to the polar FinA method and find that the processing time (of a 510 × 510 pixel image) in its most extreme case improves by a factor of 100. We also show that although the resulting velocity resolution is not quite as high as the polar version, this new approach shows superior resolution for fine structure in the differential cross sections. We demonstrate the method on a range of experimental and synthetic data at different effective slice widths.
A practical approach to superresolution
NASA Astrophysics Data System (ADS)
Farsiu, Sina; Elad, Michael; Milanfar, Peyman
2006-01-01
Theoretical and practical limitations usually constrain the achievable resolution of any imaging device. Super-Resolution (SR) methods are developed through the years to go beyond this limit by acquiring and fusing several low-resolution (LR) images of the same scene, producing a high-resolution (HR) image. The early works on SR, although occasionally mathematically optimal for particular models of data and noise, produced poor results when applied to real images. In this paper, we discuss two of the main issues related to designing a practical SR system, namely reconstruction accuracy and computational efficiency. Reconstruction accuracy refers to the problem of designing a robust SR method applicable to images from different imaging systems. We study a general framework for optimal reconstruction of images from grayscale, color, or color filtered (CFA) cameras. The performance of our proposed method is boosted by using powerful priors and is robust to both measurement (e.g. CCD read out noise) and system noise (e.g. motion estimation error). Noting that the motion estimation is often considered a bottleneck in terms of SR performance, we introduce the concept of "constrained motions" for enhancing the quality of super-resolved images. We show that using such constraints will enhance the quality of the motion estimation and therefore results in more accurate reconstruction of the HR images. We also justify some practical assumptions that greatly reduce the computational complexity and memory requirements of the proposed methods. We use efficient approximation of the Kalman Filter (KF) and adopt a dynamic point of view to the SR problem. Novel methods for addressing these issues are accompanied by experimental results on real data.
High-order ENO schemes applied to two- and three-dimensional compressible flow
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang; Erlebacher, Gordon; Zang, Thomas A.; Whitaker, David; Osher, Stanley
1991-01-01
High order essentially non-oscillatory (ENO) finite difference schemes are applied to the 2-D and 3-D compressible Euler and Navier-Stokes equations. Practical issues, such as vectorization, efficiency of coding, cost comparison with other numerical methods, and accuracy degeneracy effects, are discussed. Numerical examples are provided which are representative of computational problems of current interest in transition and turbulence physics. These require both nonoscillatory shock capturing and high resolution for detailed structures in the smooth regions and demonstrate the advantage of ENO schemes.
NASA Astrophysics Data System (ADS)
Toubar, Safaa S.; Hegazy, Maha A.; Elshahed, Mona S.; Helmy, Marwa I.
2016-06-01
In this work, resolution and quantitation of spectral signals are achieved by several univariate and multivariate techniques. The novel pure component contribution algorithm (PCCA), along with mean centering of ratio spectra (MCR) and the factor-based partial least squares (PLS) algorithm, were developed for simultaneous determination of chlorzoxazone (CXZ), aceclofenac (ACF) and paracetamol (PAR) in their pure form and in recently co-formulated tablets. The PCCA method allows the determination of each drug at its λmax, while the mean-centered values at 230, 302 and 253 nm were used for quantification of CXZ, ACF and PAR, respectively, by the MCR method. The partial least squares (PLS) algorithm was applied as a multivariate calibration method. The three methods were successfully applied for determination of CXZ, ACF and PAR in pure form and in tablets. Good linear relationships were obtained in the ranges of 2-50, 2-40 and 2-30 μg mL-1 for CXZ, ACF and PAR, respectively, by both PCCA and MCR, while the PLS model was built for the three compounds each in the range of 2-10 μg mL-1. The results obtained from the proposed methods were statistically compared with a reported method. The PCCA and MCR methods were validated according to ICH guidelines, while the PLS method was validated by both cross validation and an independent data set. They are found suitable for the determination of the studied drugs in bulk powder and tablets.
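A minimal sketch of the multivariate calibration step using scikit-learn's PLS regression; the calibration arrays below are random placeholders for mixture spectra and known concentrations, and the number of latent variables is an arbitrary illustrative choice.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Placeholder calibration set: rows are mixture spectra, columns are
    # absorbances at each wavelength; Y holds known concentrations of
    # CXZ, ACF and PAR (synthetic values, for illustration only).
    X_cal = np.random.rand(25, 200)
    Y_cal = np.random.rand(25, 3) * 10

    pls = PLSRegression(n_components=3)
    pls.fit(X_cal, Y_cal)
    predicted = pls.predict(X_cal)   # predicted concentrations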
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
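The source-encoding idea can be sketched as forming a single "super-shot" by randomly weighting and summing the per-source data; the ±1 weights and array shapes below are illustrative assumptions, not the exact encoding used in the WISE method.

    import numpy as np

    def encode_sources(data, rng=None):
        # data: (n_sources, n_receivers, n_samples) measurement array
        # (illustrative layout); returns the random +/-1 encoding vector
        # and the encoded "super-shot" data.
        rng = np.random.default_rng() if rng is None else rng
        w = rng.choice([-1.0, 1.0], size=data.shape[0])
        encoded = np.tensordot(w, data, axes=(0, 0))   # weighted sum over sources
        return w, encoded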
Anderson, David M. G.; Mills, Daniel; Spraggins, Jeffrey; Lambert, Wendi S.; Calkins, David J.
2013-01-01
Purpose To develop a method for generating high spatial resolution (10 µm) matrix-assisted laser desorption ionization (MALDI) images of lipids in rodent optic nerve tissue. Methods Ice-embedded optic nerve tissue from rats and mice were cryosectioned across the coronal and sagittal axes of the nerve fiber. Sections were thaw mounted on gold-coated MALDI plates and were washed with ammonium acetate to remove biologic salts before being coated in 2,5-dihydroxybenzoic acid by sublimation. MALDI images were generated in positive and negative ion modes at 10 µm spatial resolution. Lipid identification was performed with a high mass resolution Fourier transform ion cyclotron resonance mass spectrometer. Results Several lipid species were observed with high signal intensity in MALDI images of optic nerve tissue. Several lipids were localized to specific structures including in the meninges surrounding the optic nerve and in the central neuronal tissue. Specifically, phosphatidylcholine species were observed throughout the nerve tissue in positive ion mode while sulfatide species were observed in high abundance in the meninges surrounding the optic nerve in negative ion mode. Accurate mass measurements and fragmentation using sustained off-resonance irradiation with a high mass resolution Fourier transform ion cyclotron resonance mass spectrometer instrument allowed for identification of lipid species present in the small structure of the optic nerve directly from tissue sections. Conclusions An optimized sample preparation method provides excellent sensitivity for lipid species present within optic nerve tissue. This allowed the laser spot size and fluence to be reduced to obtain a high spatial resolution of 10 µm. This new imaging modality can now be applied to determine spatial and molecular changes in optic nerve tissue with disease. PMID:23559852
Yun, Seong Dae
2017-01-01
The relatively high imaging speed of EPI has led to its widespread use in dynamic MRI studies such as functional MRI. An approach to improve the performance of EPI, EPI with Keyhole (EPIK), has been previously presented and its use in fMRI was verified at 1.5T as well as 3T. The method has been proven to achieve a higher temporal resolution and smaller image distortions when compared to single-shot EPI. Furthermore, the performance of EPIK in the detection of functional signals was shown to be comparable to that of EPI. For these reasons, we were motivated to employ EPIK here for high-resolution imaging. The method was optimised to offer the highest possible in-plane resolution and slice coverage under the given imaging constraints: fixed TR/TE, FOV and acceleration factors for parallel imaging and partial Fourier techniques. The performance of EPIK was evaluated in direct comparison to the optimised protocol obtained from EPI. The two imaging methods were applied to visual fMRI experiments involving sixteen subjects. The results showed that enhanced spatial resolution with a whole-brain coverage was achieved by EPIK (1.00 mm × 1.00 mm; 32 slices) when compared to EPI (1.25 mm × 1.25 mm; 28 slices). As a consequence, enhanced characterisation of functional areas has been demonstrated in EPIK particularly for relatively small brain regions such as the lateral geniculate nucleus (LGN) and superior colliculus (SC); overall, a significantly increased t-value and activation area were observed from EPIK data. Lastly, the use of EPIK for fMRI was validated with the simulation of different types of data reconstruction methods. PMID:28945780
[4D-MRI using the synchronized sampling method (SSM)].
Shimada, Yasuhiro; Fujimoto, Ichirou; Takemoto, Hironori; Takano, Sayoko; Masaki, Shinobu; Honda, Kiyoshi; Takeo, Kazuhiro
2002-12-01
A synchronized sampling method (SSM) was developed for the study of voluntary movements by combining the electrocardiographic (ECG) gating method with an external triggering device, and four-dimensional magnetic resonance imaging (4D-MRI) at a rate of 30 frames per second was accomplished by volumetric imaging with the SSM. This method was first applied to the motion imaging of articulatory organs during repetitions of a Japanese five-vowel sequence, and the dynamic change in vocal tract area function was demonstrated with sufficient temporal resolution. This paper describes the methodology, applicability, and limitations of 4D-MRI with the SSM.
Applications of asynoptic space-time Fourier transform methods to scanning satellite measurements
NASA Technical Reports Server (NTRS)
Lait, Leslie R.; Stanford, John L.
1988-01-01
A method proposed by Salby (1982) for computing the zonal space-time Fourier transform of asynoptically acquired satellite data is discussed. The method and its relationship to other techniques are briefly described, and possible problems in applying it to real data are outlined. Examples of results obtained using this technique are given which demonstrate its sensitivity to small-amplitude signals. A number of waves are found which have previously been observed as well as two not heretofore reported. A possible extension of the method which could increase temporal and longitudinal resolution is described.
Stephanson, N N; Signell, P; Helander, A; Beck, O
2017-08-01
The influx of new psychoactive substances (NPS) has created a need for improved methods for drug testing in toxicology laboratories. The aim of this work was to design, validate and apply a multi-analyte liquid chromatography-high-resolution mass spectrometry (LC-HRMS) method for screening of 148 target analytes belonging to the NPS class, plant alkaloids and new psychoactive therapeutic drugs. The analytical method used a fivefold dilution of urine with nine deuterated internal standards and injection of 2 μl. The LC system involved a 2.0 μm, 100 × 2.0 mm YMC-UltraHT Hydrosphere C18 column and gradient elution with a flow rate of 0.5 ml/min and a total analysis time of 6.0 min. Solvent A consisted of 10 mmol/l ammonium formate and 0.005% formic acid, pH 4.8, and Solvent B was methanol with 10 mmol/l ammonium formate and 0.005% formic acid. The HRMS instrument (Q Exactive, Thermo Scientific) used a heated electrospray interface and was operated in positive mode with 70 000 resolution. The scan range was 100-650 Da, and extracted ion chromatograms used a ±10 ppm tolerance. Product ion monitoring was applied for confirmation analysis and, for some selected analytes, also for screening. Method validation demonstrated limited influence from the urine matrix, linear response within the measuring range (typically 0.1-1.0 μg/ml) and acceptable imprecision in quantification (CV <15%). A few analytes were found to be unstable in urine upon storage. The method was successfully applied for routine drug testing of 17 936 unknown samples, of which 2715 (15%) contained 52 of the 148 analytes. It is concluded that the method design, based on simple dilution of urine and using LC-HRMS in extracted ion chromatogram mode, may offer an analytical system for urine drug testing that fulfils the requirement of a 'black box' solution and can replace immunochemical screening applied on autoanalyzers. Copyright © 2017 John Wiley & Sons, Ltd.
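The extracted-ion-chromatogram step with a ±10 ppm tolerance can be sketched in a few lines; the function below and the illustrative target m/z are hypothetical helpers, not part of the published workflow.

```python
import numpy as np

def extracted_ion_chromatogram(mz, intensity, rt, target_mz, tol_ppm=10.0):
    """Sum intensities of centroided peaks within +/- tol_ppm of target_mz,
    grouped by retention time. mz, intensity and rt are flat, same-length arrays."""
    half_window = target_mz * tol_ppm * 1e-6          # ppm half-window in Da
    in_window = np.abs(mz - target_mz) <= half_window
    times = np.unique(rt[in_window])
    trace = [(t, intensity[in_window & (rt == t)].sum()) for t in times]
    return trace

# Width of a +/-10 ppm window around an illustrative m/z of 304.1543:
print(2 * 304.1543 * 10e-6, "Da")   # ~0.006 Da total window
```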
NASA Astrophysics Data System (ADS)
Soni, V.; Hadjadj, A.; Roussel, O.
2017-12-01
In this paper, a fully adaptive multiresolution (MR) finite difference scheme with a time-varying tolerance is developed to study compressible fluid flows containing shock waves in interaction with solid obstacles. To ensure adequate resolution near rigid bodies, the MR algorithm is combined with an immersed boundary method based on a direct-forcing approach in which the solid object is represented by a continuous solid-volume fraction. The resulting algorithm forms an efficient tool capable of solving linear and nonlinear waves on arbitrary geometries. Through a one-dimensional scalar wave equation, the accuracy of the MR computation is, as expected, seen to decrease in time when a constant MR tolerance is used, owing to the accumulation of error. To overcome this problem, a variable-tolerance formulation is proposed and assessed through a new quality criterion, ensuring a time-convergent solution of suitable quality. The newly developed algorithm, coupled with high-resolution spatial and temporal approximations, is successfully applied to shock-bluff body and shock-diffraction problems, solving the Euler and Navier-Stokes equations. Results show excellent agreement with the available numerical and experimental data, thereby demonstrating the efficiency and performance of the proposed method.
High resolution through-the-wall radar image based on beamspace eigenstructure subspace methods
NASA Astrophysics Data System (ADS)
Yoon, Yeo-Sun; Amin, Moeness G.
2008-04-01
Through-the-wall imaging (TWI) is a challenging problem, even if the wall parameters and characteristics are known to the system operator. Proper target classification and correct imaging interpretation require the application of high resolution techniques using a limited array size. In inverse synthetic aperture radar (ISAR), signal subspace methods such as Multiple Signal Classification (MUSIC) are used to obtain high resolution imaging. In this paper, we adopt signal subspace methods and apply them to the 2-D spectrum obtained from the delay-and-sum beamforming image. This is in contrast to ISAR, where raw data, in frequency and angle, is directly used to form the estimate of the covariance matrix and array response vector. Using beams rather than raw data has two main advantages: it improves the signal-to-noise ratio (SNR) and can correctly image typical indoor extended targets, such as tables and cabinets, as well as point targets. The paper presents both simulated and experimental results using synthesized and real data. It compares the performance of beamspace MUSIC and the Capon beamformer. The experimental data were collected at the test facility in the Radar Imaging Laboratory, Villanova University.
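For readers unfamiliar with the subspace step, a minimal MUSIC pseudospectrum computed from a sample covariance matrix is sketched below for a generic uniform linear array; it illustrates the eigendecomposition idea only and is not the beamspace TWI processing chain of the paper. All sizes and names are assumptions.

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """MUSIC pseudospectrum: project candidate steering vectors onto the noise subspace.

    R: (M, M) sample covariance; steering: (M, K) candidate array response vectors;
    n_sources: assumed number of signals."""
    eigvals, eigvecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    noise_subspace = eigvecs[:, : R.shape[0] - n_sources]
    denom = np.sum(np.abs(noise_subspace.conj().T @ steering) ** 2, axis=0)
    return 1.0 / denom

# Toy scene: two point targets seen by an 8-element half-wavelength-spaced array.
rng = np.random.default_rng(0)
M, K, snaps = 8, 181, 200
angles = np.deg2rad(np.linspace(-90, 90, K))
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))
signals = A[:, [60, 120]] @ rng.standard_normal((2, snaps))   # targets at -30 and +30 deg
noise = 0.1 * (rng.standard_normal((M, snaps)) + 1j * rng.standard_normal((M, snaps)))
X = signals + noise
R = X @ X.conj().T / snaps
P = music_spectrum(R, A, n_sources=2)
print("two largest peaks at indices", np.sort(np.argsort(P)[-2:]), "(expected near 60 and 120)")
```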
Hurtaud-Pessel, D; Jagadeshwar-Reddy, T; Verdon, E
2011-10-01
A liquid chromatography-high resolution mass spectrometry (LC-HRMS) method was developed for screening meat for a wide range of antibiotics used in veterinary medicine. Full-scan mode under high resolution mass spectral conditions, using an LTQ-Orbitrap mass spectrometer with a resolving power of 60,000 full width at half maximum (FWHM), was applied for analysis of the samples. Samples were prepared using two extraction protocols prior to LC-HRMS analysis. The scope of the method focuses on screening the following main families of antibacterial veterinary drugs: penicillins, cephalosporins, sulfonamides, macrolides, tetracyclines, aminoglycosides and quinolones. Compounds were successfully identified in spiked samples from their accurate mass and LC retention times in the acquired full-scan chromatogram. Automated data processing using ToxId software allowed rapid treatment of the data. Analyses of muscle tissues from real samples collected from antibiotic-treated animals were carried out using the above methodology, and antibiotic residues were identified unambiguously. Further analysis of the data for real samples allowed the identification of the targeted antibiotic residues but also of non-targeted compounds, such as some of their metabolites.
High efficiency multishot interleaved spiral-in/out: acquisition for high-resolution BOLD fMRI.
Jung, Youngkyoo; Samsonov, Alexey A; Liu, Thomas T; Buracas, Giedrius T
2013-08-01
Growing demand for high spatial resolution blood oxygenation level dependent (BOLD) functional magnetic resonance imaging faces a challenge of the spatial resolution versus coverage or temporal resolution tradeoff, which can be addressed by methods that afford increased acquisition efficiency. Spiral acquisition trajectories have been shown to be superior to currently prevalent echo-planar imaging in terms of acquisition efficiency, and high spatial resolution can be achieved by employing multiple-shot spiral acquisition. The interleaved spiral in/out trajectory is preferred over spiral-in due to increased BOLD signal contrast-to-noise ratio (CNR) and higher acquisition efficiency than that of spiral-out or noninterleaved spiral in/out trajectories (Law & Glover. Magn Reson Med 2009; 62:829-834.), but to date applicability of the multishot interleaved spiral in/out for high spatial resolution imaging has not been studied. Herein we propose multishot interleaved spiral in/out acquisition and investigate its applicability for high spatial resolution BOLD functional magnetic resonance imaging. Images reconstructed from interleaved spiral-in and -out trajectories possess artifacts caused by differences in T2 decay, off-resonance, and k-space errors associated with the two trajectories. We analyze the associated errors and demonstrate that application of conjugate phase reconstruction and spectral filtering can substantially mitigate these image artifacts. After applying these processing steps, the multishot interleaved spiral in/out pulse sequence yields high BOLD CNR images at in-plane resolution below 1 × 1 mm while preserving acceptable temporal resolution (4 s) and brain coverage (15 slices of 2 mm thickness). Moreover, this method yields sufficient BOLD CNR at 1.5 mm isotropic resolution for detection of activation in hippocampus associated with cognitive tasks (Stern memory task). The multishot interleaved spiral in/out acquisition is a promising technique for high spatial resolution BOLD functional magnetic resonance imaging applications. © 2012 Wiley Periodicals, Inc.
Multiresolution persistent homology for excessively large biomolecular datasets
NASA Astrophysics Data System (ADS)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei
2015-10-01
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
Localization of synchronous cortical neural sources.
Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc
2013-03-01
Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, and memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target the synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.
Woldegebriel, Michael; Derks, Eduard
2017-01-17
In this work, a novel probabilistic untargeted feature detection algorithm for liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS) using an artificial neural network (ANN) is presented. The feature detection process is approached as a pattern recognition problem, and thus an ANN was utilized as an efficient feature recognition tool. Unlike most existing feature detection algorithms, with this approach any suspected chromatographic profile (i.e., shape of a peak) can easily be incorporated by training the network, avoiding the need to perform computationally expensive regression with specific mathematical models. In addition, with this method we have shown that the high-resolution raw data can be fully utilized without applying any arbitrary thresholds or data reduction, therefore improving the sensitivity of the method for compound identification purposes. Furthermore, as opposed to existing deterministic (binary) approaches, this method estimates the probability of a feature being present or absent at a given point of interest, thus giving all data points a chance to be propagated down the data analysis pipeline, weighted by their probability. The algorithm was tested with data sets generated from spiked samples in forensic and food safety contexts and has shown promising results by detecting features for all compounds in a computationally reasonable time.
Range and azimuth resolution enhancement for 94 GHz real-beam radar
NASA Astrophysics Data System (ADS)
Liu, Guoqing; Yang, Ken; Sykora, Brian; Salha, Imad
2008-04-01
In this paper, two-dimensional (2D) (range and azimuth) resolution enhancement is investigated for millimeter wave (mmW) real-beam radar (RBR) with linear or non-linear antenna scanning in the azimuth dimension. We design a new super-resolution processing architecture, in which a dual-mode approach is used to define the region of interest for 2D resolution enhancement and a combined approach is deployed to obtain accurate location and amplitude estimates of targets within the region of interest. To achieve 2D resolution enhancement, we first adopt the Capon beamformer (CB) approach (also known as the minimum variance method (MVM)) to enhance range resolution. A generalized CB (GCB) approach is then applied to the azimuth dimension for azimuth resolution enhancement. The GCB approach does not depend on whether the azimuth sampling is uniform and thus can be used in both linear and non-linear antenna scanning modes. The effectiveness of the resolution enhancement is demonstrated using both simulation and test data. The results for 94 GHz real-beam frequency modulation continuous wave (FMCW) radar data show that the overall image quality is significantly improved, based on visual evaluation and comparison with the original real-beam radar image.
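A generic Capon (minimum-variance) spectral estimate, of the kind used here for the range dimension, can be sketched as follows; the subaperture length, diagonal loading and toy FMCW beat signal are assumptions for illustration, not the paper's processing parameters.

```python
import numpy as np

def capon_spectrum(x, model_order, freqs):
    """Capon / minimum-variance spectral estimate of a 1-D complex signal.

    x: complex baseband samples (e.g. one FMCW sweep); model_order: covariance size;
    freqs: normalized frequencies (cycles/sample) at which to evaluate the spectrum."""
    M, N = model_order, len(x)
    snaps = np.array([x[i:i + M] for i in range(N - M + 1)]).T      # (M, N-M+1) snapshots
    R = snaps @ snaps.conj().T / snaps.shape[1]
    R += 1e-3 * np.real(np.trace(R)) / M * np.eye(M)                # diagonal loading
    Rinv = np.linalg.inv(R)
    a = np.exp(1j * 2 * np.pi * np.outer(np.arange(M), freqs))      # steering vectors
    return 1.0 / np.real(np.einsum('mk,mn,nk->k', a.conj(), Rinv, a))

# Toy beat signal with a strong and a weaker scatterer; Capon yields narrower
# peaks and lower sidelobes than a length-32 periodogram of the same data.
rng = np.random.default_rng(2)
n = np.arange(256)
sig = (np.exp(1j * 2 * np.pi * 0.20 * n) + 0.5 * np.exp(1j * 2 * np.pi * 0.26 * n)
       + 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256)))
freqs = np.linspace(0.10, 0.35, 500)
P = capon_spectrum(sig, model_order=32, freqs=freqs)
print("strongest response near", round(float(freqs[np.argmax(P)]), 3), "cycles/sample")
```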
Dictionary learning based noisy image super-resolution via distance penalty weight model
Han, Yulan; Zhao, Yongping; Wang, Qisong
2017-01-01
In this study, we address the problem of noisy image super-resolution. A noisy low resolution (LR) image is often what is obtained in applications, while most existing algorithms assume that the LR image is noise-free. To address this situation, we present an algorithm for noisy image super-resolution which achieves image super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free. For different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained. For each input LR image patch, the corresponding high resolution (HR) image patch is reconstructed through a weighted average of similar HR example patches. To reduce computational cost, we use the atoms of a learned sparse dictionary as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which simultaneously performs a second selection among similar atoms. Moreover, LR example patches with the mean pixel value removed are also used to learn the dictionary, rather than just their gradient features. Based on this, we reconstruct an initial estimated HR image and a denoised LR image. Combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method offers better noise robustness. PMID:28759633
Facial identification in very low-resolution images simulating prosthetic vision.
Chang, M H; Kim, H S; Shin, J H; Park, K S
2012-08-01
Familiar facial identification is important to blind or visually impaired patients and can be achieved using a retinal prosthesis. Nevertheless, there are limitations in delivering facial images with a resolution sufficient to distinguish facial features, such as the eyes and nose, through the multichannel electrode arrays used in current visual prostheses. This study verifies the feasibility of familiar facial identification under low-resolution prosthetic vision and proposes an edge-enhancement method to deliver more visual information of higher quality. We first generated a contrast-enhanced image and an edge image by applying the Sobel edge detector, and blocked each of them by averaging. Then, we subtracted the blocked edge image from the blocked contrast-enhanced image and produced a pixelized image imitating an array of phosphenes. Before subtraction, every gray value of the edge images was weighted as 50% (mode 2), 75% (mode 3) and 100% (mode 4). In mode 1, the facial image was blocked and pixelized with no further processing. The most successful identification was achieved with mode 3 at every resolution in terms of the identification index, which covers both accuracy and correct response time. We also found that the subjects recognized a distinctive face more accurately and faster than the other given facial images, even under low-resolution prosthetic vision. Every subject could identify familiar faces even in very low-resolution images, and the proposed edge-enhancement method appears to be a useful contribution to intermediate-stage visual prostheses.
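A simplified sketch of the blocking-and-subtraction pipeline (mode 3 corresponding to a 75% edge weight) is given below; the contrast-enhancement step, block size and clipping are assumptions standing in for the authors' exact processing.

```python
import numpy as np
from scipy import ndimage

def prosthetic_pixelize(face, block=16, edge_weight=0.75):
    """Block-averaged 'phosphene' image with weighted subtraction of a Sobel edge map."""
    face = face.astype(float)
    # Simple contrast stretch as a stand-in for the contrast-enhancement step.
    enhanced = (face - face.min()) / (np.ptp(face) + 1e-9) * 255.0
    edges = np.hypot(ndimage.sobel(enhanced, axis=0), ndimage.sobel(enhanced, axis=1))
    edges *= 255.0 / (edges.max() + 1e-9)

    h, w = enhanced.shape
    hb, wb = h // block, w // block
    def block_mean(img):                      # average each block down to one value
        return img[:hb * block, :wb * block].reshape(hb, block, wb, block).mean(axis=(1, 3))

    pix = block_mean(enhanced) - edge_weight * block_mean(edges)
    # Expand each block value back to pixel size to mimic a phosphene array.
    return np.kron(np.clip(pix, 0, 255), np.ones((block, block)))

face = np.random.rand(128, 128) * 255          # placeholder "facial" image
low_res = prosthetic_pixelize(face, block=8, edge_weight=0.75)   # mode-3-like weighting
print(low_res.shape)
```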
NASA Astrophysics Data System (ADS)
Greaves, Heather E.
Climate change is disproportionately affecting high northern latitudes, and the extreme temperatures, remoteness, and sheer size of the Arctic tundra biome have always posed challenges that make the application of remote sensing technology especially appropriate. Advances in high-resolution remote sensing continually improve our ability to measure characteristics of tundra vegetation communities, which have previously been difficult to characterize due to their low stature and their distribution in complex, heterogeneous patches across large landscapes. In this work, I apply terrestrial lidar, airborne lidar, and high-resolution airborne multispectral imagery to estimate tundra vegetation characteristics for a research area near Toolik Lake, Alaska. Initially, I explored methods for estimating shrub biomass from terrestrial lidar point clouds, finding that a canopy-volume based algorithm performed best. Although shrub biomass estimates derived from airborne lidar data were less accurate than those from terrestrial lidar data, the algorithm parameters used to derive biomass estimates were similar for both datasets. Additionally, I found that airborne lidar-based shrub biomass estimates were just as accurate whether calibrated against terrestrial lidar data or harvested shrub biomass, suggesting that terrestrial lidar could potentially replace destructive biomass harvest. Along with smoothed Normalized Difference Vegetation Index (NDVI) derived from airborne imagery, airborne lidar-derived canopy volume was an important predictor in a Random Forest model trained to estimate shrub biomass across the 12.5 km² covered by our lidar and imagery data. The resulting 0.80 m resolution shrub biomass maps should provide important benchmarks for change detection in the Toolik area, especially as deciduous shrubs continue to expand in tundra regions. Finally, I applied 33 lidar- and imagery-derived predictor layers in a validated Random Forest modeling approach to map vegetation community distribution at 20 cm resolution across the data collection area, creating maps that will enable validation of coarser maps, as well as study of fine-scale ecological processes in the area. These projects have pushed the limits of what can be accomplished for vegetation mapping using airborne remote sensing in a challenging but important region; it is my hope that the methods explored here will illuminate potential paths forward as landscapes and technologies inevitably continue to change.
Katsarov, Plamen; Gergov, Georgi; Alin, Aylin; Pilicheva, Bissera; Al-Degs, Yahya; Simeonov, Vasil; Kassarova, Margarita
2018-03-01
The prediction power of partial least squares (PLS) and multivariate curve resolution-alternating least squares (MCR-ALS) methods has been studied for simultaneous quantitative analysis of the binary drug combination doxylamine succinate and pyridoxine hydrochloride. Analysis of first-order UV overlapped spectra was performed using different PLS models - classical PLS1 and PLS2 as well as partial robust M-regression (PRM). These linear models were compared to MCR-ALS with equality and correlation constraints (MCR-ALS-CC). All techniques operated within the full spectral region and extracted the maximum information for the drugs analysed. The developed chemometric methods were validated on external sample sets and were applied to the analysis of pharmaceutical formulations. The statistical parameters obtained were satisfactory for both calibration and validation sets. All developed methods can be successfully applied for the simultaneous spectrophotometric determination of doxylamine and pyridoxine, both in laboratory-prepared mixtures and in commercial dosage forms.
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.
NASA Astrophysics Data System (ADS)
Toutin, Thierry; Wang, Huili; Charbonneau, Francois; Schmitt, Carla
2013-08-01
This paper presents two methods for the orthorectification of full/compact polarimetric SAR data: the polarimetric processing is performed either in the image space (scientist's idealism) or in the ground space (user's realism), i.e. before or after the geometric processing, respectively. Radarsat-2 (R2) fine-quad and simulated very high-resolution RCM data, acquired with different look angles over a hilly-relief study site, were processed using an accurate lidar digital surface model. Quantitative comparisons between the two methods as a function of different geometric and radiometric parameters were performed to evaluate the impact during the orthorectification. The results demonstrated that the ground-space method can be safely applied to polarimetric R2 SAR data, with the exception of steep look angles combined with steep terrain slopes. On the other hand, the ground-space method cannot be applied to simulated compact RCM data due to the 17 dB noise floor and oversampling.
NASA Astrophysics Data System (ADS)
Mao, Deqing; Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2018-01-01
Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS technology, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) method suffer from low spectral resolution and high side lobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), is applied to DBS imaging, forming narrower Doppler filter groups than the FFT with lower side lobe levels. Unfortunately, the IAA is iterative and requires matrix multiplications and a matrix inversion when forming the covariance matrix and its inverse, and when traversing the WLS estimates for each sampling point, resulting in notably high, cubic-time computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method that takes advantage of the rich matrix structure of classical narrow-band filtering. First, we formulate the covariance matrix via the FFT instead of the conventional matrix multiplication, based on the typical Fourier structure of the steering matrix. Then, by exploiting the Gohberg-Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson-Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and the fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on data-dependent trigonometric polynomials. The method uses the Hermitian structure of the echo autocorrelation matrix R to achieve its fast solution and uses the Toeplitz structure of R to realize its fast inversion. The proposed method enjoys lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and the operational efficiency.
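The structural trick at the heart of the speed-up, solving systems with a Hermitian Toeplitz covariance by Levinson-type recursion instead of forming the inverse, can be illustrated with SciPy's solve_toeplitz; the toy autocorrelation sequence below is an assumption, and this is only the linear-algebra kernel, not the full FIAA pipeline.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Hypothetical echo autocorrelation sequence defining a Hermitian Toeplitz covariance R.
M = 256
lags = np.arange(M)
r = np.exp(-0.01 * lags) * np.exp(1j * 0.3 * lags)        # first column of R
b = np.random.randn(M) + 1j * np.random.randn(M)          # arbitrary right-hand side

# Levinson-Durbin-type solve: O(M^2), never forms R^{-1} explicitly.
x_fast = solve_toeplitz((r, r.conj()), b)

# Dense reference solve, O(M^3), for verification only.
R = toeplitz(r, r.conj())
x_ref = np.linalg.solve(R, b)
print("max abs difference:", np.abs(x_fast - x_ref).max())
```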
NASA Astrophysics Data System (ADS)
Heaps, Charles W.; Schatz, George C.
2017-06-01
A computational method to model diffraction-limited images from super-resolution surface-enhanced Raman scattering microscopy is introduced. Despite significant experimental progress in plasmon-based super-resolution imaging, theoretical predictions of the diffraction-limited images remain a challenge. The method is used to calculate localization errors and image intensities for a single spherical gold nanoparticle-molecule system. The light scattering is calculated using a modification of generalized Mie (T-matrix) theory with a point dipole source, and diffraction-limited images are calculated using vectorial diffraction theory. The calculation produces the multipole expansion for each emitter and the coherent superposition of all fields. Imaging the constituent fields in addition to the total field provides new insight into the strong coupling between the molecule and the nanoparticle. Regardless of whether the molecular dipole moment is oriented parallel or perpendicular to the nanoparticle surface, the anisotropic excitation shifts the center of the nanoparticle, as measured by the point spread function, by approximately fifty percent of the particle radius toward the molecule. Inspection of the nanoparticle multipoles reveals that the distortion arises from a weak quadrupole resonance interfering with the dipole field in the nanoparticle. When the nanoparticle-molecule fields are in phase, the distorted nanoparticle field dominates the observed image. When out of phase, the nanoparticle and molecule are of comparable intensity and interference between the two emitters dominates the observed image. The method is also applied to different wavelengths and particle radii. At off-resonant wavelengths, the method predicts images closer to the molecule, not because of relative intensities but because of greater distortion in the nanoparticle. The method is a promising approach to improving the understanding of plasmon-enhanced super-resolution experiments.
Imaging White Matter in Human Brainstem
Ford, Anastasia A.; Colon-Perez, Luis; Triplett, William T.; Gullett, Joseph M.; Mareci, Thomas H.; FitzGerald, David B.
2013-01-01
The human brainstem is critical for the control of many life-sustaining functions, such as consciousness, respiration, sleep, and transfer of sensory and motor information between the brain and the spinal cord. Most of our knowledge about structure and organization of white and gray matter within the brainstem is derived from ex vivo dissection and histology studies. However, these methods cannot be applied to study structural architecture in live human participants. Tractography from diffusion-weighted magnetic resonance imaging (MRI) may provide valuable insights about white matter organization within the brainstem in vivo. However, this method presents technical challenges in vivo due to susceptibility artifacts, functionally dense anatomy, as well as pulsatile and respiratory motion. To investigate the limits of MR tractography, we present results from high angular resolution diffusion imaging of an intact excised human brainstem performed at 11.1 T using isotropic resolution of 0.333, 1, and 2 mm, with the latter reflecting resolution currently used clinically. At the highest resolution, the dense fiber architecture of the brainstem is evident, but the definition of structures degrades as resolution decreases. In particular, the inferred corticopontine/corticospinal tracts (CPT/CST), superior (SCP) and middle cerebellar peduncle (MCP), and medial lemniscus (ML) pathways are clearly discernable and follow known anatomical trajectories at the highest spatial resolution. At lower resolutions, the CST/CPT, SCP, and MCP pathways are artificially enlarged due to inclusion of collinear and crossing fibers not inherent to these three pathways. The inferred ML pathways appear smaller at lower resolutions, indicating insufficient spatial information to successfully resolve smaller fiber pathways. Our results suggest that white matter tractography maps derived from the excised brainstem can be used to guide the study of the brainstem architecture using diffusion MRI in vivo. PMID:23898254
The Analytical Limits of Modeling Short Diffusion Timescales
NASA Astrophysics Data System (ADS)
Bradshaw, R. W.; Kent, A. J.
2016-12-01
Chemical and isotopic zoning in minerals is widely used to constrain the timescales of magmatic processes, such as magma mixing and crystal residence, via diffusion modeling. Forward modeling of diffusion relies on fitting diffusion profiles to measured compositional gradients. However, an individual measurement is essentially an average composition for a segment of the gradient defined by the spatial resolution of the analysis. Thus there is the potential for the analytical spatial resolution to limit the timescales that can be determined for an element of given diffusivity, particularly where the scale of the gradient approaches that of the measurement. Here we use a probabilistic modeling approach to investigate the effect of analytical spatial resolution on timescales estimated from diffusion modeling. Our method investigates how accurately the age of a synthetic diffusion profile can be obtained by modeling an "unknown" profile derived from discrete sampling of the synthetic compositional gradient at a given spatial resolution. We also include the effects of analytical uncertainty and the position of measurements relative to the diffusion gradient. We apply this method to the spatial resolutions of common microanalytical techniques (LA-ICP-MS, SIMS, EMP, NanoSIMS). Our results confirm that, for a given diffusivity, higher spatial resolution gives access to shorter timescales, and that each analytical spacing has a minimum timescale below which it overestimates the timescale. For example, for Ba diffusion in plagioclase at 750 °C, timescales are accurate (within 20%) above 10, 100, 2,600, and 71,000 years at 0.3, 1, 5, and 25 μm spatial resolution, respectively. For Sr diffusion in plagioclase at 750 °C, timescales are accurate above 0.02, 0.2, 4, and 120 years at the same spatial resolutions. Our results highlight the importance of selecting appropriate analytical techniques to estimate accurate diffusion-based timescales.
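The core of the resolution argument, that a spot-averaged measurement of a sharp diffusion gradient is fit best by an artificially old profile, can be reproduced with a toy one-dimensional calculation; the step-function initial condition, spot size and grid below are illustrative assumptions, not the authors' probabilistic model.

```python
import numpy as np
from scipy.special import erf

def step_profile(x, Dt):
    """1-D diffusion of an initial concentration step: C = 0.5 * (1 + erf(x / (2*sqrt(Dt))))."""
    return 0.5 * (1.0 + erf(x / (2.0 * np.sqrt(Dt))))

def spot_averaged(x_centres, spot_width, Dt, n_sub=51):
    """Average the true profile over an analytical spot of finite width."""
    offsets = np.linspace(-spot_width / 2, spot_width / 2, n_sub)
    return np.array([step_profile(xc + offsets, Dt).mean() for xc in x_centres])

# "True" profile with diffusive length sqrt(Dt) = 0.5 (units arbitrary), measured
# every 1 unit with a spot 5 units wide, i.e. much wider than the gradient.
x_meas = np.arange(-20.0, 20.0 + 1e-9, 1.0)
observed = spot_averaged(x_meas, spot_width=5.0, Dt=0.25)

# Naively fit Dt while (incorrectly) treating the data as point samples.
trial_Dt = np.linspace(0.05, 20.0, 2000)
misfit = [np.sum((step_profile(x_meas, d) - observed) ** 2) for d in trial_Dt]
print("true Dt = 0.25, apparent Dt =", round(float(trial_Dt[int(np.argmin(misfit))]), 2))
```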
Sensitivity of worst-case storm surge considering the influence of climate change
NASA Astrophysics Data System (ADS)
Takayabu, Izuru; Hibino, Kenshi; Sasaki, Hidetaka; Shiogama, Hideo; Mori, Nobuhito; Shibutani, Yoko; Takemi, Tetsuya
2016-04-01
There are two standpoints from which to assess risk caused by climate change. One is disaster prevention: for this purpose, we need probabilistic information on meteorological elements, obtained from a sufficiently large number of ensemble simulations. The other is disaster mitigation: for this purpose, we have to use a very high resolution, sophisticated model to represent a worst-case event in detail. If we could use enough computer resources to drive many ensemble runs with a very high resolution model, we could address both themes at once. However, resources are unfortunately limited in most cases, and we have to choose between resolution and the number of simulations when designing the experiment. Applying the PGWD (Pseudo Global Warming Downscaling) method is one solution for analyzing a worst-case event in detail. Here we introduce an example of assessing the influence of climate change on a worst-case storm surge by applying PGWD to super typhoon Haiyan (Takayabu et al., 2015). A 1 km grid WRF model could represent both the intensity and the structure of the super typhoon. With the PGWD method, we can only estimate the influence of climate change on the development process of the typhoon; changes in typhoon genesis cannot be estimated. Finally, we ran the SU-WAT model (which includes a shallow water equation model) to obtain the storm surge height. The result indicates that the height of the storm surge increased by up to 20% owing to 150 years of climate change.
Using Empirical Orthogonal Teleconnections to Analyze Interannual Precipitation Variability in China
NASA Astrophysics Data System (ADS)
Stephan, C.; Klingaman, N. P.; Vidale, P. L.; Turner, A. G.; Demory, M. E.; Guo, L.
2017-12-01
Interannual rainfall variability in China affects agriculture, infrastructure and water resource management. A consistent and objective method, Empirical Orthogonal Teleconnection (EOT) analysis, is applied to precipitation observations over China in all seasons. Instead of maximizing the explained space-time variance, the method identifies regions in China that best explain the temporal variability in domain-averaged rainfall. It reproduces known teleconnections, including high positive correlations with ENSO in eastern China in winter, along the Yangtze River in summer, and in southeast China during spring. New findings include that variability along the southeast coast in winter, in the Yangtze valley in spring, and in eastern China in autumn is associated with extratropical Rossby wave trains. The same analysis is applied to six climate simulations of the Met Office Unified Model with and without air-sea coupling and at horizontal resolutions of 40, 90 and 200 km. All simulations reproduce the observed patterns of interannual rainfall variability in winter, spring and autumn; the leading pattern in summer is present in all but one simulation. However, only in two simulations are all patterns associated with the observed physical mechanism. Coupled simulations capture more observed patterns of variability, and associate more of them with the correct physical mechanism, than atmosphere-only simulations at the same resolution. Finer resolution does not improve the fidelity of these patterns or their associated mechanisms. Evaluating climate models by the geographical distribution of mean precipitation and its interannual variance alone is insufficient; attention must be paid to the associated mechanisms.
Maximum Entropy Method applied to Real-time Time-Dependent Density Functional Theory
NASA Astrophysics Data System (ADS)
Zempo, Yasunari; Toogoshi, Mitsuki; Kano, Satoru S.
The maximum entropy method (MEM) is widely used for the analysis of time-series data, such as earthquake records, that have fairly long periodicities but short observation windows. We have examined the application of MEM to the optical analysis of time-series data from real-time TDDFT. In such analyses the Fourier transform (FT) is usually used, and particular attention must be paid to the lower-energy part of the spectrum, such as the band gap, which requires long time evolution; the computational cost naturally becomes quite expensive. Since MEM is based on the autocorrelation of the signal, in which periodicity is described in terms of differences of time lags, its resolving power at lower energies is naturally small compared to that at higher energies. To overcome this difficulty, our MEM has two features: the raw data are repeated many times and concatenated, which provides high resolution in the lower-energy region; and, together with the repeated data, an appropriate phase for the target frequency is introduced to reduce the side effects of the artificial periodicity. We have compared our improved MEM spectra with FT spectra for small-to-medium-sized molecules. The MEM spectra are clearly sharper than those obtained with the FT, and the new technique provides higher resolution in fewer time steps than the FT. This work was partially supported by JSPS Grants-in-Aid for Scientific Research (C) Grant number 16K05047, Sumitomo Chemical, Co. Ltd., and Simulatio Corp.
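The repeat-and-concatenate phase trick is specific to the authors' scheme, but the underlying maximum-entropy (all-pole) spectral estimate built from the signal's autocorrelation can be sketched in standard Yule-Walker form; the model order, test signal and frequency grid are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def mem_spectrum(x, order, freqs):
    """Maximum-entropy (all-pole / Yule-Walker AR) spectrum of a real time series.

    freqs are normalized frequencies in cycles per sample."""
    x = np.asarray(x, float) - np.mean(x)
    N = len(x)
    # Biased autocorrelation estimates for lags 0..order.
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(order + 1)])
    a = solve_toeplitz(r[:order], -r[1:order + 1])      # AR coefficients a_1..a_p
    sigma2 = r[0] + np.dot(a, r[1:order + 1])           # prediction-error power
    k = np.arange(1, order + 1)
    denom = np.abs(1.0 + np.exp(-2j * np.pi * np.outer(freqs, k)) @ a) ** 2
    return sigma2 / denom

# 64-sample record with two tones 0.01 cycles/sample apart, i.e. below the
# ~1/N = 0.016 resolution of a plain periodogram of the same record.
rng = np.random.default_rng(3)
n = np.arange(64)
sig = (np.sin(2 * np.pi * 0.20 * n) + np.sin(2 * np.pi * 0.21 * n + 1.0)
       + 0.01 * rng.standard_normal(64))
freqs = np.linspace(0.15, 0.26, 1000)
P = mem_spectrum(sig, order=24, freqs=freqs)
print("spectrum maximum near", round(float(freqs[np.argmax(P)]), 3), "cycles/sample")
```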
Parallel mapping of optical near-field interactions by molecular motor-driven quantum dots.
Groß, Heiko; Heil, Hannah S; Ehrig, Jens; Schwarz, Friedrich W; Hecht, Bert; Diez, Stefan
2018-04-30
In the vicinity of metallic nanostructures, absorption and emission rates of optical emitters can be modulated by several orders of magnitude [1,2]. Control of such near-field light-matter interaction is essential for applications in biosensing [3], light harvesting [4] and quantum communication [5,6], and requires precise mapping of optical near-field interactions, for which single-emitter probes are promising candidates [7-11]. However, currently available techniques are limited in terms of throughput, resolution and/or non-invasiveness. Here, we present an approach for the parallel mapping of optical near-field interactions with a resolution of <5 nm using surface-bound motor proteins to transport microtubules carrying single emitters (quantum dots). The deterministic motion of the quantum dots allows for the interpolation of their tracked positions, resulting in an increased spatial resolution and a suppression of localization artefacts. We apply this method to map the near-field distribution of nanoslits engraved into gold layers and find excellent agreement with finite-difference time-domain simulations. Our technique can be readily applied to a variety of surfaces for scalable, nanometre-resolved and artefact-free near-field mapping using conventional wide-field microscopes.
Richards, Selena; Miller, Robert; Gemperline, Paul
2008-02-01
An extension to the penalty alternating least squares (P-ALS) method, called multi-way penalty alternating least squares (NWAY P-ALS), is presented. Optionally, hard constraints (no deviation from predefined constraints) or soft constraints (small deviations from predefined constraints) were applied through a row-wise penalized least squares function. NWAY P-ALS was applied to multi-batch near-infrared (NIR) data acquired from the base-catalyzed esterification reaction of acetic anhydride in order to resolve the concentration and spectral profiles of l-butanol and the reaction constituents. Application of the NWAY P-ALS approach resulted in a reduction of the number of active constraints at the solution point, while the batch column-wise augmentation allowed hard constraints in the spectral profiles and resolved rank-deficiency problems of the measurement matrix. The results were compared with multi-way multivariate curve resolution (MCR)-ALS results using hard and soft constraints to determine whether any advantages had been gained through using the weighted least squares function of NWAY P-ALS over the MCR-ALS resolution.
Automated segmentations of skin, soft-tissue, and skeleton, from torso CT images
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Hara, Takeshi; Fujita, Hiroshi; Yokoyama, Ryujiro; Kiryu, Takuji; Hoshi, Hiroaki
2004-05-01
We have been developing a computer-aided diagnosis (CAD) scheme for automatically recognizing human tissue and organ regions from high-resolution torso CT images. We show some initial results for extracting skin, soft-tissue and skeleton regions. 139 patient cases of torso CT images (92 male, 47 female; age: 12-88) were used in this study. Each case was imaged with a common protocol (120 kV/320 mA) and covered the whole torso with an isotropic spatial resolution of about 0.63 mm and a density resolution of 12 bits. A gray-level thresholding based procedure was applied to separate the human body from the background. The density and the distance to the body surface were used to determine the skin and to separate soft tissue from the other regions. A 3-D region-growing based method was used to extract the skeleton. We applied this system to the 139 cases and found that the skin, soft-tissue and skeleton regions were recognized correctly in 93% of the patient cases. The accuracy of the segmentation results was judged acceptable by evaluating the results slice by slice. This scheme will be included in CAD systems for detecting and diagnosing abnormal lesions in multi-slice torso CT images.
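A highly simplified sketch of the thresholding, surface-distance and region-labelling steps for a single slice is shown below; all threshold values, the skin-depth rule and the use of connected-component labelling in place of a full 3-D region-growing procedure are assumptions, not the authors' trained parameters.

```python
import numpy as np
from scipy import ndimage

def segment_torso_slice(ct_hu, voxel_mm=0.63, skin_depth_mm=2.0,
                        body_thr=-200, bone_thr=200):
    """Return rough skin, soft-tissue and skeleton masks for one CT slice (HU values)."""
    body = ndimage.binary_fill_holes(ct_hu > body_thr)     # gray-level threshold vs. air
    dist_mm = ndimage.distance_transform_edt(body) * voxel_mm
    skin = body & (dist_mm <= skin_depth_mm)               # thin shell at the body surface
    bone_labels, _ = ndimage.label(ct_hu > bone_thr)       # stand-in for 3-D region growing
    skeleton = bone_labels > 0
    soft_tissue = body & ~skin & ~skeleton
    return skin, soft_tissue, skeleton

slice_hu = np.random.randint(-1000, 1500, size=(512, 512))  # placeholder slice
skin, soft, bone = segment_torso_slice(slice_hu)
print(skin.sum(), soft.sum(), bone.sum())
```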
2D-crystallization of Rhodococcus 20S proteasome at the liquid-liquid interface
NASA Astrophysics Data System (ADS)
Aoyama, Kazuhiro
1996-10-01
The 2D-crystallization method using the liquid-liquid interface between an aqueous phase (protein solution) and a thin organic liquid (dehydroabietylamine) layer has been applied to the Rhodococcus 20S proteasome. The 20S proteasome is known to be the core complex of the 26S proteasome, which is the central protease of the ubiquitin-dependent pathway. Two types of ordered arrays were obtained, both large enough for high resolution analysis by electron crystallography. The first had four-fold symmetry, whereas the second was found to be a hexagonally close-packed array. By image analysis based on a real-space correlation averaging (CAV) technique, the close-packed array was found to be hexagonally packed, but the molecules presumably had rotational freedom. The four-fold array was found to be a true crystal with p4 symmetry. The lattice constants were a = b = 20.0 nm and α = 90°. The unit cell of this crystal contained two molecules. The diffraction pattern computed from the original picture showed spots up to (4, 5), which corresponds to 3.1 nm resolution. After applying an unbending procedure, the diffraction pattern showed spots extending to 1.8 nm resolution.
NASA Astrophysics Data System (ADS)
Kim, Jongyoun; Hogue, Terri S.
2012-01-01
The current study investigates a method to provide land surface parameters [i.e., land surface temperature (LST) and normalized difference vegetation index (NDVI)] at high spatial (~30 and 60 m) and temporal (daily and 8-day) resolution by combining advantages of the Landsat and moderate-resolution imaging spectroradiometer (MODIS) satellites. We adopt a previously developed subtraction method that merges the spatial detail of higher-resolution imagery (Landsat) with the temporal change observed in coarser or moderate-resolution imagery (MODIS). Applying the temporal difference between MODIS images observed at two different dates to a higher-resolution Landsat image allows prediction of a combined or fused image (Landsat+MODIS) at a future date. Evaluation of the resulting merged products is undertaken within the Southeastern Arizona region, where data are available from a range of flux tower sites. The Landsat+MODIS fused products capture the raw Landsat values and also reflect the MODIS temporal variation. The predicted Landsat+MODIS LST improves the mean absolute error by around 5°C at the more heterogeneous sites compared to the original satellite products. The fused Landsat+MODIS NDVI product also shows good correlation with ground-based data and is relatively consistent except during the acute (monsoon) growing season. The sensitivity of the fused product to temporal gaps in Landsat data appears to be more affected by uncertainty associated with regional precipitation and green-up than by the length of the gap associated with Landsat viewing, suggesting the potential to use a minimal number of original Landsat images during relatively stable land surface and climate conditions. Our extensive validation yields insight into the ability of the proposed method to integrate multiscale platforms and the potential for reducing costs associated with high-resolution satellite systems (e.g., SPOT, QuickBird, IKONOS).
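The subtraction-based fusion step can be illustrated with a toy example: the coarse temporal change seen by MODIS between two dates is upsampled and added to the fine Landsat image from the first date. The scale factor, nearest-neighbour upsampling and synthetic arrays are assumptions; real products additionally require reprojection, cloud masking and radiometric matching.

```python
import numpy as np
from scipy import ndimage

def fuse_landsat_modis(landsat_t0, modis_t0, modis_t1, scale):
    """Predict a Landsat-like image at t1 by adding the MODIS-observed change to Landsat at t0."""
    delta_coarse = modis_t1 - modis_t0
    delta_fine = np.kron(delta_coarse, np.ones((scale, scale)))   # crude upsampling
    delta_fine = delta_fine[:landsat_t0.shape[0], :landsat_t0.shape[1]]
    return landsat_t0 + delta_fine

# Toy LST scene: a fine-resolution field and a 16x coarser MODIS-like pair.
landsat = 300.0 + 5.0 * np.random.rand(480, 480)
modis_t0 = ndimage.zoom(landsat, 1 / 16, order=1)   # stand-in for MODIS at the Landsat date
modis_t1 = modis_t0 + 2.0                           # uniform 2 K warming by the later date
predicted = fuse_landsat_modis(landsat, modis_t0, modis_t1, scale=16)
print("mean predicted change:", float(np.mean(predicted - landsat)))   # ~2.0
```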
Retrieval of Mid-tropospheric CO2 Directly from AIRS Measurements
NASA Technical Reports Server (NTRS)
Olsen, Edward T.; Chahine, Moustafa T.; Chen, Luke L.; Pagano, Thomas S.
2008-01-01
We apply the method of Vanishing Partial Derivatives (VPD) to AIRS spectra to retrieve, on a daily basis, the global distribution of CO2 at a nadir geospatial resolution of 90 km x 90 km without requiring a first-guess input beyond the global average. Our retrievals utilize the 15 μm band radiances, a complex spectral region. This method may be of value in other applications in which the spectral signatures of multiple species are not well isolated from one another.
Plasma-driven ultrashort bunch diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dornmair, I.; Schroeder, C. B.; Floettmann, K.
2016-06-10
Ultrashort electron bunches are crucial for an increasing number of applications; however, diagnosing their longitudinal phase space remains a challenge. We propose a new method that harnesses the strong electric fields present in a laser-driven plasma wakefield. By transversely displacing the driver laser and the witness bunch, a streaking field is applied to the bunch. This field maps the time information to a transverse momentum change and, consequently, to a change of transverse position. We illustrate our method with simulations in which we achieve a time resolution in the attosecond range.
de Oliveira, Marcus Vinicius Linhares; Santos, António Carvalho; Paulo, Graciano; Campos, Paulo Sergio Flores; Santos, Joana
2017-06-01
The purpose of this study was to apply a newly developed free software program, at low cost and with minimal time, to evaluate the quality of dental and maxillofacial cone-beam computed tomography (CBCT) images. A polymethyl methacrylate (PMMA) phantom, CQP-IFBA, was scanned in 3 CBCT units with 7 protocols. A macro program was developed, using the free software ImageJ, to automatically evaluate the image quality parameters. The image quality evaluation was based on 8 parameters: uniformity, the signal-to-noise ratio (SNR), noise, the contrast-to-noise ratio (CNR), spatial resolution, the artifact index, geometric accuracy, and low-contrast resolution. The image uniformity and noise depended on the protocol that was applied. Regarding the CNR, high-density structures were more sensitive to the effect of the scanning parameters. There were no significant differences in SNR or CNR between centered and peripheral objects. The geometric accuracy assessment showed that all the distance measurements were lower than the real values. Low-contrast resolution was influenced by the scanning parameters, and the 1-mm rod present in the phantom was not depicted by any of the 3 CBCT units. Smaller voxel sizes presented higher spatial resolution. There were no significant differences among the protocols regarding artifact presence. This software package provided a fast, low-cost, and feasible method for the evaluation of image quality parameters in CBCT.
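A few of the ROI-based metrics can be sketched in a handful of lines; the ROI positions, sizes and the particular SNR/CNR/uniformity formulas below are illustrative assumptions and do not reproduce the published ImageJ macro.

```python
import numpy as np

def roi_stats(img, centre, size):
    """Mean and standard deviation in a square ROI centred at (row, col)."""
    r, c = centre
    half = size // 2
    roi = img[r - half:r + half, c - half:c + half]
    return roi.mean(), roi.std()

def snr_cnr_uniformity(img, object_roi, background_roi, corner_rois, size=20):
    mu_o, sd_o = roi_stats(img, object_roi, size)
    mu_b, sd_b = roi_stats(img, background_roi, size)
    snr = mu_o / sd_o
    cnr = abs(mu_o - mu_b) / np.sqrt(0.5 * (sd_o ** 2 + sd_b ** 2))
    corner_means = [roi_stats(img, c, size)[0] for c in corner_rois]
    uniformity = 100.0 * (1.0 - (max(corner_means) - min(corner_means)) / mu_b)
    return snr, cnr, uniformity

phantom_slice = np.random.normal(1000.0, 25.0, size=(400, 400))   # placeholder image
print(snr_cnr_uniformity(phantom_slice, (200, 200), (200, 80),
                         [(60, 60), (60, 340), (340, 60), (340, 340)]))
```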
NASA Astrophysics Data System (ADS)
Marson, Avishai; Stern, Adrian
2015-05-01
One of the main limitations of horizontal-parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we have shown that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the lower acuity of the human eye for chromatic resolution. Here we supply further support for the technique by analyzing the spectra of the subsampled images.
High-Resolution Near Real-Time Drought Monitoring in South Asia
NASA Astrophysics Data System (ADS)
Aadhar, S.; Mishra, V.
2017-12-01
Droughts in South Asia affect food and water security and pose challenges for millions of people. For policy-making, planning and management of water resources at the sub-basin or administrative levels, high-resolution datasets of precipitation and air temperature are required in near-real time. Here we develop high-resolution (0.05 degree) bias-corrected precipitation and temperature data that can be used to monitor near real-time drought conditions over South Asia. Moreover, the dataset can be used to monitor climatic extremes (heat waves, cold waves, dry and wet anomalies) in South Asia. A distribution mapping method was applied to correct bias in precipitation and air temperature (maximum and minimum), which performed well compared to another bias correction method based on linear scaling. The bias-corrected precipitation and temperature data were used to estimate the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI) to assess the historical and current drought conditions in South Asia. We evaluated drought severity and extent against satellite-based Normalized Difference Vegetation Index (NDVI) anomalies and the satellite-driven Drought Severity Index (DSI) at 0.05°. We find that the bias-corrected high-resolution data can effectively capture observed drought conditions, as shown by the satellite-based drought estimates. The high-resolution near real-time dataset can provide valuable information for decision-making at district and sub-basin levels.
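The distribution (quantile) mapping used for bias correction can be sketched empirically as below: each raw value is assigned its quantile in the historical satellite record and replaced by the observed value at the same quantile. The gamma-distributed toy data and variable names are assumptions for illustration.

```python
import numpy as np

def distribution_mapping(model_hist, obs_hist, model_new):
    """Empirical quantile mapping for one grid cell / calendar window."""
    model_sorted = np.sort(model_hist)
    obs_sorted = np.sort(obs_hist)
    # Quantile of each new value within the historical model distribution.
    q = np.searchsorted(model_sorted, model_new, side="right") / len(model_sorted)
    q = np.clip(q, 1e-6, 1.0 - 1e-6)
    # Look up the same quantile in the observed distribution.
    return np.quantile(obs_sorted, q)

rng = np.random.default_rng(4)
model_hist = rng.gamma(2.0, 4.0, 5000)     # biased historical satellite precipitation
obs_hist = rng.gamma(2.0, 3.0, 5000)       # gauge-based reference over the same period
raw_today = rng.gamma(2.0, 4.0, 5)         # near-real-time values to correct
print(np.round(distribution_mapping(model_hist, obs_hist, raw_today), 2))
```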
High-resolution absolute position detection using a multiple grating
NASA Astrophysics Data System (ADS)
Schilling, Ulrich; Drabarek, Pawel; Kuehnle, Goetz; Tiziani, Hans J.
1996-08-01
To control electro-mechanical engines, high-resolution linear and rotary encoders are needed. Interferometric methods (grating interferometers) promise a resolution of a few nanometers, but have an ambiguity range of some microns. Incremental encoders increase the absolute measurement range by counting the signal periods starting from a defined initial point. In many applications, however, it is not possible to move to this initial point, so that absolute encoders have to be used. Absolute encoders generally have a scale with two or more tracks placed next to each other; they therefore use a two-dimensional grating structure to measure a one-dimensional position. We present a new method that uses a one-dimensional structure to determine the position in one dimension. It is based on a grating with a large grating period of up to several millimeters, having the same diffraction efficiency in several predefined diffraction orders (a multiple grating). By combining the phase signals of the different diffraction orders, it is possible to establish the position within an absolute range of the grating period with a resolution comparable to that of incremental grating interferometers. The principal functionality was demonstrated by applying the multiple grating in a heterodyne grating interferometer. The heterodyne frequency was generated by a frequency modulated laser in an unbalanced interferometer. In experimental measurements, an absolute range of 8 mm was obtained while achieving a resolution of 10 nm.
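The abstract does not give the phase-combination formula. One plausible reading, sketched below under stated assumptions, is that the phase measured in diffraction order m varies as 2π·m·x/Λ for grating period Λ, so the phase difference of two low orders gives a coarse, unambiguous position that selects the correct fringe of the finest (highest-order) phase. The orders, period, and test position are made up.

```python
import numpy as np

def absolute_position(phi, orders, period):
    """Combine phases measured in several diffraction orders of a coarse
    grating into an unambiguous position.  Order m samples the position with
    effective period `period / m`, so the phase difference of the two lowest
    orders repeats only once per full grating period and resolves the
    ambiguity of the finest (highest-order) phase."""
    phi = np.asarray(phi)
    m = np.asarray(orders, dtype=float)
    # Coarse, unambiguous estimate from the two lowest orders.
    beat = (phi[1] - phi[0]) % (2 * np.pi)
    x_coarse = beat / (2 * np.pi) * period / (m[1] - m[0])
    # Fine estimate from the highest order, ambiguous modulo period / m_max.
    fine_period = period / m[-1]
    x_fine = (phi[-1] % (2 * np.pi)) / (2 * np.pi) * fine_period
    # Use the coarse value to pick the correct fine fringe.
    k = np.round((x_coarse - x_fine) / fine_period)
    return x_fine + k * fine_period

# Hypothetical example: 8 mm grating period, diffraction orders 1, 2 and 5
period, orders, x_true = 8.0e-3, [1, 2, 5], 3.217e-3
phases = [(2 * np.pi * m * x_true / period) % (2 * np.pi) for m in orders]
print(absolute_position(phases, orders, period))   # ~3.217e-3 m
```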
Correction of Near-infrared High-resolution Spectra for Telluric Absorption at 0.90–1.35 μm
NASA Astrophysics Data System (ADS)
Sameshima, Hiroaki; Matsunaga, Noriyuki; Kobayashi, Naoto; Kawakita, Hideyo; Hamano, Satoshi; Ikeda, Yuji; Kondo, Sohei; Fukue, Kei; Taniguchi, Daisuke; Mizumoto, Misaki; Arai, Akira; Otsubo, Shogo; Takenaka, Keiichi; Watase, Ayaka; Asano, Akira; Yasui, Chikako; Izumi, Natsuko; Yoshikawa, Tomohiro
2018-07-01
We report a method of correcting a near-infrared (0.90–1.35 μm) high-resolution (λ/Δλ ∼ 28,000) spectrum for telluric absorption using the corresponding spectrum of a telluric standard star. The proposed method uses an A0 V star or its analog as a standard star from which on the order of 100 intrinsic stellar lines are carefully removed with the help of a reference synthetic telluric spectrum. We find that this method can also be applied to feature-rich objects having spectra with heavily blended intrinsic stellar and telluric lines and present an application to a G-type giant using this approach. We also develop a new diagnostic method for evaluating the accuracy of telluric correction and use it to demonstrate that our method achieves an accuracy better than 2% for spectral parts for which the atmospheric transmittance is as low as ∼20% if telluric standard stars are observed under the following conditions: (1) the difference in airmass between the target and the standard is ≲ 0.05; and (2) that in time is less than 1 hr. In particular, the time variability of water vapor has a large impact on the accuracy of telluric correction and minimizing the difference in time from that of the telluric standard star is important especially in near-infrared high-resolution spectroscopic observation.
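As a rough illustration of the kind of correction described (not the authors' pipeline), a target spectrum can be divided by the telluric transmission inferred from the standard star once the standard's intrinsic lines are removed, with a power-law airmass rescaling. All spectra, line positions, and airmass values below are synthetic.

```python
import numpy as np

def telluric_correct(target_flux, std_flux, std_continuum,
                     airmass_target, airmass_std):
    """Divide the target spectrum by the telluric transmission inferred from a
    standard star.  `std_continuum` stands for the standard's intrinsic
    spectrum (stellar lines already removed or modelled); the transmission is
    rescaled to the target airmass with the usual T ** (X_target / X_std)
    approximation."""
    transmission = np.clip(std_flux / std_continuum, 1e-3, None)
    transmission = transmission ** (airmass_target / airmass_std)
    return target_flux / transmission

# Synthetic spectra on a common wavelength grid (micron)
wave = np.linspace(0.90, 1.35, 2000)
true_transmission = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 1.12) / 0.002) ** 2)
std_continuum = np.ones_like(wave)
std_flux = std_continuum * true_transmission
target_intrinsic = 1.0 + 0.3 * np.sin(20 * wave)
target_flux = target_intrinsic * true_transmission
recovered = telluric_correct(target_flux, std_flux, std_continuum, 1.20, 1.18)
print(np.max(np.abs(recovered - target_intrinsic)))   # small residual
```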
A Method of Mapping Burned Area Using Chinese FengYun-3 MERSI Satellite Data
NASA Astrophysics Data System (ADS)
Shan, T.
2017-12-01
Wildfire is a naturally recurring global phenomenon with environmental and ecological consequences, including effects on the global carbon budget, changes to the global carbon cycle, and disruption of ecosystem succession. Information on burned area is important for post-disaster assessment and for ecosystem protection and restoration. The Medium Resolution Spectral Imager (MERSI) onboard FENGYUN-3C (FY-3C) has shown good ability for fire detection and monitoring but lacks recognition among researchers. In this study, an automated burned area mapping algorithm is proposed based on FY-3C MERSI data. The algorithm has two phases: 1) selection of training pixels based on 1000-m resolution MERSI data, which offers more spectral information through the use of more vegetation indices; and 2) classification: the region growing method is first applied to the 1000-m MERSI data to calculate the core burned area, and the same classification method is then applied to the 250-m MERSI data using the core burned area as a seed, to obtain results at a finer spatial resolution. The performance of the algorithm was evaluated at two study sites in America and Canada, with accuracy assessment and validation made by comparing the results with reference results derived from Landsat OLI data. The result has a high kappa coefficient and a low commission error, indicating that the algorithm can improve burned area mapping accuracy at the two study sites. It may then be possible to use MERSI and other data to fill gaps in burned area mapping in the future.
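A minimal sketch of the seeded region-growing step in phase 2, under the assumption that a coarse burned-area core map is upsampled and intersected with a finer candidate map to provide seeds; grid sizes and the candidate mask itself are placeholders, not the published spectral tests.

```python
import numpy as np
from collections import deque

def region_grow(candidates, seeds):
    """4-connected region growing: starting from seed pixels, grow into any
    neighbouring pixel flagged as a burned-area candidate."""
    grown = np.zeros_like(candidates, dtype=bool)
    queue = deque(zip(*np.nonzero(seeds)))
    while queue:
        y, x = queue.popleft()
        if grown[y, x] or not candidates[y, x]:
            continue
        grown[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < grown.shape[0] and 0 <= nx < grown.shape[1]:
                queue.append((ny, nx))
    return grown

# Hypothetical two-stage use: a coarse (1000 m) core map, upsampled by 4,
# seeds the growth on a finer (250 m) candidate map.
coarse_core = np.zeros((8, 8), dtype=bool); coarse_core[3:5, 3:5] = True
fine_candidates = np.zeros((32, 32), dtype=bool); fine_candidates[10:22, 10:22] = True
upsampled = np.kron(coarse_core.astype(int), np.ones((4, 4), dtype=int)).astype(bool)
seeds = upsampled & fine_candidates
print(region_grow(fine_candidates, seeds).sum())   # grown burned-area pixels
```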
NASA Astrophysics Data System (ADS)
Zikmund, T.; Novotná, M.; Kavková, M.; Tesařová, M.; Kaucká, M.; Szarowská, B.; Adameyko, I.; Hrubá, E.; Buchtová, M.; Dražanová, E.; Starčuk, Z.; Kaiser, J.
2018-02-01
Biomedically focused brain research is largely performed on laboratory mice, given the high homology between the human and mouse genomes. The brain has an intricate and highly complex geometrical structure that is hard to display and analyse using only 2D methods, so fast and efficient methods of 3D brain visualization will be crucial for neurobiology in the future. Post-mortem analysis of experimental animals' brains usually involves techniques such as magnetic resonance imaging and computed tomography, which are employed to visualize abnormalities in brain morphology or reparation processes. X-ray computed microtomography (micro CT) plays an important role in 3D imaging of the internal structures of a large variety of soft and hard tissues. This non-destructive technique is applied in biological studies because lab-based CT devices can achieve a resolution of a few micrometers. However, the technique is always used together with visualization methods based on tissue staining, which differentiate the soft tissues in biological samples. Here, a modified chemical contrasting protocol of tissues for micro CT is introduced as the best tool for ex vivo 3D imaging of the post-mortem mouse brain. In this way, micro CT provides a high spatial resolution of the brain's microscopic anatomy together with a high tissue differentiation contrast, enabling more anatomical details in the brain to be identified. Because micro CT allows subsequent reconstruction of the brain structures into a coherent 3D model, small morphological changes can be placed in the context of their mutual spatial relationships.
Morphological characterization of diesel soot agglomerates based on the Beer-Lambert law
NASA Astrophysics Data System (ADS)
Lapuerta, Magín; Martos, Francisco J.; José Expósito, Juan
2013-03-01
A new method is proposed for determining the number of primary particles composing soot agglomerates emitted from diesel engines, as well as their individual fractal dimension. The method is based on the Beer-Lambert law and is applied to micro-photographs taken by high-resolution transmission electron microscopy. Differences in the grey levels of the images lead to a more accurate estimation of the geometry of the agglomerate (in this case the radius of gyration) than other methods based exclusively on the planar projections of the agglomerates. The method was validated by applying it to different images of the same agglomerate observed from different angles of incidence, showing that the effect of the angle of incidence is minor, unlike with other methods. Finally, comparisons with other methods showed that the size, number of primary particles, and fractal dimension (the latter depending on particle size) are usually underestimated when only planar projections of the agglomerates are considered.
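A hedged sketch of the Beer-Lambert idea: grey levels are converted to optical depth, optical depth to projected soot thickness via an assumed absorption coefficient, and the integrated volume is divided by the volume of one primary sphere. The calibration constants and image values are illustrative, not the authors' calibration.

```python
import numpy as np

def primary_particle_count(grey, background, d_p, pixel_size, k_abs):
    """Estimate the number of primary particles in a TEM agglomerate image
    from Beer-Lambert attenuation of the grey levels.
    grey       : 2-D array of grey levels over the agglomerate
    background : grey level of the unobstructed support film
    d_p        : primary particle diameter (m)
    pixel_size : image pixel size (m)
    k_abs      : effective absorption coefficient (1/m), an assumed value"""
    tau = -np.log(np.clip(grey / background, 1e-6, 1.0))   # optical depth per pixel
    thickness = tau / k_abs                                 # local projected soot thickness (m)
    soot_volume = np.sum(thickness) * pixel_size ** 2       # total projected volume (m^3)
    v_primary = np.pi * d_p ** 3 / 6.0                      # volume of one primary sphere
    return soot_volume / v_primary

# Hypothetical example: 25 nm primaries, 1 nm pixels, assumed absorption coefficient
grey = np.full((200, 200), 200.0)
grey[80:120, 80:120] = 120.0          # darker agglomerate region
print(primary_particle_count(grey, background=200.0, d_p=25e-9,
                             pixel_size=1e-9, k_abs=2.0e7))
```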
Yang, Lei; Lu, Jun; Dai, Ming; Ren, Li-Jie; Liu, Wei-Zong; Li, Zhen-Zhou; Gong, Xue-Hao
2016-10-06
An ultrasonic image speckle noise removal method based on a total least squares model is proposed and applied to images of cardiovascular structures such as the carotid artery. Building on the least squares principle, a total least squares model is established for the cardiac ultrasound speckle removal process; an orthogonal projection transformation is applied to the output of the model, realizing the denoising of the cardiac ultrasound image speckle noise. Experimental results show that the improved algorithm can greatly improve the resolution of the image and meet the needs of clinical diagnosis and treatment of the cardiovascular system of the head and neck. Furthermore, the success in imaging the carotid arteries has strong implications for neurological complications such as stroke.
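The abstract does not detail how the total least squares model is set up for speckle removal, so as a generic illustration only, the classical SVD-based TLS solution of an overdetermined system (where both the design matrix and the observations carry noise) is sketched below with synthetic data.

```python
import numpy as np

def total_least_squares(A, b):
    """Classical total least squares solution of A x ~ b via the SVD of the
    augmented matrix [A | b]: the solution is read from the right singular
    vector associated with the smallest singular value."""
    augmented = np.hstack([A, b.reshape(-1, 1)])
    _, _, vt = np.linalg.svd(augmented, full_matrices=False)
    v = vt[-1]                    # singular vector of the smallest singular value
    return -v[:-1] / v[-1]

# Synthetic example: both the design matrix and the observations are noisy
rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0, 0.5])
A_clean = rng.normal(size=(200, 3))
A = A_clean + rng.normal(scale=0.05, size=A_clean.shape)   # noisy "data" matrix
b = A_clean @ x_true + rng.normal(scale=0.05, size=200)    # noisy observations
print(total_least_squares(A, b))    # close to x_true
```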
ERIC Educational Resources Information Center
Colom, Roberto; Stein, Jason L.; Rajagopalan, Priya; Martinez, Kenia; Hermel, David; Wang, Yalin; Alvarez-Linera, Juan; Burgaleta, Miguel; Quiroga, Ma. Angeles; Shih, Pei Chun; Thompson, Paul M.
2013-01-01
Here we apply a method for automated segmentation of the hippocampus in 3D high-resolution structural brain MRI scans. One hundred and four healthy young adults completed twenty-one tasks measuring abstract, verbal, and spatial intelligence, along with working memory, executive control, attention, and processing speed. After permutation tests…
NASA Astrophysics Data System (ADS)
Xie, Jian.-Fei.; He, S.; Zu, Y. Q.; Lamy-Chappuis, B.; Yardley, B. W. D.
2017-08-01
In this paper, the migration of supercritical carbon dioxide (CO2) in realistic sandstone rocks under saline aquifer conditions, with application to carbon geological storage, is investigated by a two-phase lattice Boltzmann method (LBM). First, digital images of the sandstone rocks were reproduced using X-ray computed microtomography (micro-CT), and high resolutions (down to 2.5 μm) were applied in the pore-scale LBM simulations. For the sake of numerical stability, the digital images were "cleaned" by closing dead holes and removing suspended particles in the sandstone. In addition, the effect on permeability of chemical reactions occurring during the carbonation process was taken into account. The wetting brine and non-wetting supercritical CO2 were treated as immiscible fluids driven by pressure gradients through the sandstone, and their relative permeabilities were estimated. In particular, the dynamic saturation was applied to improve the reliability of the relative permeability calculations. Moreover, the effects of the viscosity ratio of the two immiscible fluids and of the digital image resolution on the relative permeability were systematically investigated.
Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction
Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.
2016-01-01
X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902
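The published penalty is not reproduced here; the core idea of restricting the nonlocal patch search to the temporal direction, so that the weight computation scales with the number of frames rather than with a spatial search window, can be sketched as follows (patch size, bandwidth, and the simple weighted average used for illustration are assumptions).

```python
import numpy as np

def temporal_nonlocal_weights(frames, y, x, half=2, h=0.05):
    """Similarity weights between the patch at (y, x) in each frame and the
    patch at the same location in every other frame.  Searching only along
    time keeps the cost linear in the number of frames, unlike a full
    spatial nonlocal search."""
    patches = frames[:, y - half:y + half + 1, x - half:x + half + 1]
    d2 = ((patches[:, None] - patches[None, :]) ** 2).mean(axis=(2, 3))  # (T, T) distances
    w = np.exp(-d2 / (h ** 2))
    return w / w.sum(axis=1, keepdims=True)

def temporal_nonlocal_smooth(frames, half=2, h=0.05):
    """Apply the weights as a simple nonlocal average across time; an
    iterative reconstruction would instead use them in a penalty term."""
    T, H, W = frames.shape
    out = frames.copy()
    for y in range(half, H - half):
        for x in range(half, W - half):
            w = temporal_nonlocal_weights(frames, y, x, half, h)
            out[:, y, x] = w @ frames[:, y, x]
    return out

# Hypothetical time-lapse data (2D + time) with slowly drifting intensity
rng = np.random.default_rng(2)
clean = np.stack([np.full((16, 16), 0.2 + 0.01 * t) for t in range(6)])
noisy = clean + rng.normal(scale=0.02, size=clean.shape)
print(np.abs(temporal_nonlocal_smooth(noisy) - clean).mean())   # reduced error
```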
BOREHOLE NEUTRON ACTIVATION: THE RARE EARTHS.
Mikesell, J.L.; Senftle, F.E.
1987-01-01
Neutron-induced borehole gamma-ray spectroscopy has been widely used as a geophysical exploration technique by the petroleum industry, but its use for mineral exploration is not as common. Nuclear methods can be applied to mineral exploration, for determining stratigraphy and bed correlations, for mapping ore deposits, and for studying mineral concentration gradients. High-resolution detectors are essential for mineral exploration, and by using them an analysis of the major element concentrations in a borehole can usually be made. A number of economically important elements can be detected at typical ore-grade concentrations using this method. Because of the application of the rare-earth elements to high-temperature superconductors, these elements are examined in detail as an example of how nuclear techniques can be applied to mineral exploration.
NASA Astrophysics Data System (ADS)
Wei, Jia; Liu, Huaishan; Xing, Lei; Du, Dong
2018-02-01
The stability of submarine geological structures has a crucial influence on the construction of offshore engineering projects and the exploitation of seabed resources, so marine geologists need a detailed understanding of common submarine geological hazards. Marine seismic exploration is among the most effective detection technologies, and current research therefore focuses on improving the resolution and precision of shallow stratum structure detection. In this article, the feasibility of shallow seismic structure imaging is assessed by building a complex model, and the seismic interferometry imaging method is compared with the traditional imaging method. The model imaging is better for shallow layers than for deep layers, because coherent noise produced by the interferometric method degrades the imaging of deeper layers. The seismic interferometry method has clear advantages for structural imaging of shallow submarine strata, yielding continuous horizontal events, high resolution, clearly imaged faults, and distinct structure boundaries. Results from field data in the Shenhu area further illustrate these advantages. Thus, the method has the potential to provide new insights for shallow submarine strata imaging in the area.
Quantitative High-Resolution Genomic Analysis of Single Cancer Cells
Hannemann, Juliane; Meyer-Staeckling, Sönke; Kemming, Dirk; Alpers, Iris; Joosse, Simon A.; Pospisil, Heike; Kurtz, Stefan; Görndt, Jennifer; Püschel, Klaus; Riethdorf, Sabine; Pantel, Klaus; Brandt, Burkhard
2011-01-01
During cancer progression, specific genomic aberrations arise that can determine the scope of the disease and can be used as predictive or prognostic markers. The detection of specific gene amplifications or deletions in single blood-borne or disseminated tumour cells that may give rise to the development of metastases is of great clinical interest but technically challenging. In this study, we present a method for quantitative high-resolution genomic analysis of single cells. Cells were isolated under permanent microscopic control followed by high-fidelity whole genome amplification and subsequent analyses by fine tiling array-CGH and qPCR. The assay was applied to single breast cancer cells to analyze the chromosomal region centred on the therapeutically relevant EGFR gene. This method allows precise quantitative analysis of copy number variations in single cell diagnostics. PMID:22140428
NASA Astrophysics Data System (ADS)
Guerra, Jorge; Ullrich, Paul
2016-04-01
Tempest is a next-generation global climate and weather simulation platform designed to allow experimentation with numerical methods for a wide range of spatial resolutions. The atmospheric fluid equations are discretized by continuous / discontinuous finite elements in the horizontal and by a staggered nodal finite element method (SNFEM) in the vertical, coupled with implicit/explicit time integration. At horizontal resolutions below 10km, many important questions remain on optimal techniques for solving the fluid equations. We present results from a suite of idealized test cases to validate the performance of the SNFEM applied in the vertical with an emphasis on flow features and dynamic behavior. Internal gravity wave, mountain wave, convective bubble, and Cartesian baroclinic instability tests will be shown at various vertical orders of accuracy and compared with known results.
NASA Astrophysics Data System (ADS)
Anker, Y.; Hershkovitz, Y.; Gasith, A.; Ben-Dor, E.
2011-12-01
Although remote sensing of fluvial ecosystems is well developed, the tradeoff between spectral and spatial resolution prevents its application in small streams (<3 m width). In the current study, a remote sensing approach for monitoring and research of small stream ecosystems was developed. The method is based on differentiation between two indicative vegetation species of the ecosystem flora. At the time of the study the channel was covered mostly by a filamentous green alga (Cladophora glomerata) and watercress (Nasturtium officinale), so these species were chosen as indicative; common reed (Phragmites australis) was also classified in order to exclude it from the stream ROI. The procedure included: A. acquisition of aerial digital RGB datasets for both section- and habitat-scale classification; B. acquisition of a hyperspectral (HSR) dataset for section-scale classification; C. HSR reflectance measurements of specific ground targets, in close proximity to each dataset acquisition swath, for calibration; and D. manual in-stream flora grid-transect classification for habitat-scale classification. The digital RGB datasets were converted to reflectance units by spectral calibration against colored reference plates. These red, green, blue, white, and black EVA foam reference plates were measured by an ASD field spectrometer and each was given a spectral value, which was later applied to the spectral calibration and radiometric correction of the spectral RGB (SRGB) cube. Spectral calibration of the HSR dataset was done using the empirical line method, based on reference values of progressive grey-scale targets. Differentiation between the vegetation species was done by supervised classification of both the HSR and the SRGB datasets, using the Spectral Angle Mapper function with the spectral pattern of each vegetation species as a spectral end member. Comparison between the two remote sensing techniques, and between the SRGB classification and the in-situ transects, indicates that: A. the stream vegetation classification resolution is about 4 cm with the SRGB method compared with about 1 m with HSR, and is also higher than that of the manual grid-transect classification; B. the SRGB method is by far the most cost-efficient, and the combination of spectral information (rather than perceived color) with the high spatial resolution of aerial photography provides better noise filtration and sub-water detection capability than the HSR technique; C. only the SRGB method applies at both habitat and section scales, so its application together with in-situ grid transects for validation may be optimal in similar scenarios.
The HSR dataset was first degraded to 17 bands with the same spectral range as the RGB dataset and also to a dataset with 3 equivalent bands.
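A minimal sketch of the Spectral Angle Mapper classification mentioned above, assuming a per-pixel spectrum and per-species end-member spectra; the end-member values, angle threshold, and synthetic three-band cube are made up.

```python
import numpy as np

def spectral_angle(cube, endmember):
    """Spectral Angle Mapper: angle between each pixel spectrum and a
    reference end-member spectrum (smaller angle = better match)."""
    dot = np.tensordot(cube, endmember, axes=([2], [0]))
    norm = np.linalg.norm(cube, axis=2) * np.linalg.norm(endmember)
    return np.arccos(np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0))

def classify_sam(cube, endmembers, max_angle=0.15):
    """Assign each pixel to the end member with the smallest spectral angle,
    or to -1 (unclassified) if no angle falls below `max_angle`."""
    angles = np.stack([spectral_angle(cube, e) for e in endmembers])
    labels = angles.argmin(axis=0)
    labels[angles.min(axis=0) > max_angle] = -1
    return labels

# Hypothetical 3-band (SRGB-like) cube with two vegetation end members;
# the spectra below are stand-ins, not measured values.
rng = np.random.default_rng(3)
alga = np.array([0.10, 0.45, 0.15])
watercress = np.array([0.20, 0.35, 0.10])
cube = np.tile(alga, (20, 20, 1)) + rng.normal(scale=0.02, size=(20, 20, 3))
cube[10:, :, :] = watercress + rng.normal(scale=0.02, size=(10, 20, 3))
print(np.unique(classify_sam(cube, [alga, watercress]), return_counts=True))
```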
NASA Technical Reports Server (NTRS)
Bedka, Kristopher M.; Dworak, Richard; Brunner, Jason; Feltz, Wayne
2012-01-01
Two satellite infrared-based overshooting convective cloud-top (OT) detection methods have recently been described in the literature: 1) the 11-μm infrared window channel texture (IRW-texture) method, which uses IRW channel brightness temperature (BT) spatial gradients and thresholds, and 2) the water vapor minus IRW BT difference (WV-IRW BTD). While both methods show good performance in published case study examples, it is important to quantitatively validate these methods relative to overshooting top events across the globe. Unfortunately, no overshooting top database currently exists that could be used in such a study. This study examines National Aeronautics and Space Administration CloudSat Cloud Profiling Radar data to develop an OT detection validation database that is used to evaluate the IRW-texture and WV-IRW BTD OT detection methods. CloudSat data were manually examined over a 1.5-yr period to identify cases in which the cloud top penetrates above the tropopause height defined by a numerical weather prediction model and the surrounding cirrus anvil cloud top, producing 111 confirmed overshooting top events. When applied to Moderate Resolution Imaging Spectroradiometer (MODIS)-based Geostationary Operational Environmental Satellite-R Series (GOES-R) Advanced Baseline Imager proxy data, the IRW-texture (WV-IRW BTD) method offered a 76% (96%) probability of OT detection (POD) and 16% (81%) false-alarm ratio. Case study examples show that a WV-IRW BTD ≥ 0 K identifies much of the deep convective cloud top, while the IRW-texture method focuses only on regions with a spatial scale near that of commonly observed OTs. The POD decreases by 20% when IRW-texture is applied to current geostationary imager data, highlighting the importance of imager spatial resolution for observing and detecting OT regions.
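Toy versions of the two detection ideas, with illustrative thresholds rather than the published GOES-R values, are sketched below: a positive WV-IRW brightness temperature difference test and an IRW-texture test that flags cold local minima markedly colder than their surroundings.

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def detect_overshooting_tops(irw_bt, wv_bt, bt_cold=215.0, btd_min=0.0,
                             texture_drop=6.0, window=5):
    """Toy detectors (thresholds are illustrative):
    - WV-IRW BTD: flag pixels where the water-vapour channel is at least as
      warm as the IRW channel (BTD >= btd_min).
    - IRW texture: flag cold pixels that are strict local minima of the IRW
      brightness temperature and markedly colder than their neighbourhood."""
    btd_flag = (wv_bt - irw_bt) >= btd_min
    footprint = np.ones((window, window), dtype=bool)
    footprint[window // 2, window // 2] = False             # neighbours only
    strict_local_min = irw_bt < minimum_filter(irw_bt, footprint=footprint)
    local_mean = uniform_filter(irw_bt, size=window)
    texture_flag = strict_local_min & (irw_bt < bt_cold) & \
                   (local_mean - irw_bt > texture_drop)
    return btd_flag, texture_flag

# Hypothetical brightness-temperature scene: warm background, cold anvil,
# and a single overshooting-top pixel where WV exceeds IRW
irw = np.full((50, 50), 230.0)
irw[20:30, 20:30] = 210.0          # anvil
irw[25, 25] = 200.0                # overshooting top
wv = irw - 3.0                     # WV normally colder than IRW
wv[25, 25] = irw[25, 25] + 2.0     # positive BTD above the overshoot
btd_flag, texture_flag = detect_overshooting_tops(irw, wv)
print(btd_flag.sum(), texture_flag.sum(), texture_flag[25, 25])
```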
Gigault, Julien; El Hadri, Hind; Reynaud, Stéphanie; Deniau, Elise; Grassl, Bruno
2017-11-01
In the last 10 years, asymmetrical flow field-flow fractionation (AF4) has been one of the most promising approaches to characterize colloidal particles. Nevertheless, despite its potential, it is still considered a complex technique to set up, and the theory is difficult to apply to the characterization of complex samples containing submicron particles and nanoparticles. In the present work, we developed and propose a simple analytical strategy to rapidly determine the presence of several submicron populations in an unknown sample with one programmed AF4 method. To illustrate this method, we analyzed polystyrene particles and fullerene aggregates with sizes covering the whole colloidal size range. A global and fast AF4 method (method O) allowed us to screen for the presence of particles with sizes ranging from 1 to 800 nm. By examination of the fractionating power F_d, as proposed in the literature, convenient fractionation resolution was obtained for sizes ranging from 10 to 400 nm. The global F_d values, as well as the steric inversion diameter, for the whole colloidal size distribution correspond to the predicted values obtained in model studies. On the basis of this method, and without changing the channel components or mobile phase composition, four isocratic subfraction methods were performed to achieve further high-resolution separation as a function of different size classes: 10-100 nm, 100-200 nm, 200-450 nm, and 450-800 nm in diameter. Finally, all the methods developed were applied to the characterization of nanoplastics, which have received great attention in recent years. Graphical Abstract: Characterization of nanoplastics by asymmetrical flow field-flow fractionation within the colloidal size range.
Comparison of subpixel image registration algorithms
NASA Astrophysics Data System (ADS)
Boye, R. R.; Nelson, C. L.
2009-02-01
Research into the use of multiframe superresolution has led to the development of algorithms for providing images with enhanced resolution using several lower resolution copies. An integral component of these algorithms is the determination of the registration of each of the low resolution images to a reference image. Without this information, no resolution enhancement can be attained. We have endeavored to find a suitable method for registering severely undersampled images by comparing several approaches. To test the algorithms, an ideal image is input to a simulated image formation program, creating several undersampled images with known geometric transformations. The registration algorithms are then applied to the set of low resolution images and the estimated registration parameters compared to the actual values. This investigation is limited to monochromatic images (extension to color images is not difficult) and only considers global geometric transformations. Each registration approach will be reviewed and evaluated with respect to the accuracy of the estimated registration parameters as well as the computational complexity required. In addition, the effects of image content, specifically spatial frequency content, as well as the immunity of the registration algorithms to noise will be discussed.
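The specific algorithms compared are not listed in the abstract; as one classical example of sub-pixel registration, the sketch below estimates a global translation by phase correlation with a three-point parabolic peak refinement (the test image, shift, and refinement choice are assumptions, not the paper's algorithms).

```python
import numpy as np

def _parabolic_offset(y_minus, y0, y_plus):
    """Sub-pixel offset of a peak from three samples (3-point parabola fit)."""
    denom = y_minus - 2.0 * y0 + y_plus
    return 0.0 if denom == 0 else 0.5 * (y_minus - y_plus) / denom

def phase_correlation_shift(ref, img):
    """Estimate the (dy, dx) translation of `img` relative to `ref` by phase
    correlation, refined with a parabolic fit around the correlation peak."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
    ny, nx = corr.shape
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    dy = py + _parabolic_offset(corr[py - 1, px], corr[py, px], corr[(py + 1) % ny, px])
    dx = px + _parabolic_offset(corr[py, px - 1], corr[py, px], corr[py, (px + 1) % nx])
    # Map shifts larger than half the image size onto negative displacements
    if dy > ny / 2:
        dy -= ny
    if dx > nx / 2:
        dx -= nx
    return dy, dx

# Hypothetical test: shift a random reference image by a known sub-pixel amount
rng = np.random.default_rng(4)
ref = rng.normal(size=(64, 64))
ky = np.fft.fftfreq(64)[:, None]
kx = np.fft.fftfreq(64)[None, :]
shift = (2.3, -1.7)   # (dy, dx) in pixels
img = np.fft.ifft2(np.fft.fft2(ref) *
                   np.exp(-2j * np.pi * (shift[0] * ky + shift[1] * kx))).real
print(phase_correlation_shift(ref, img))   # roughly (2.3, -1.7); 3-point fit is ~0.1 px accurate
```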
A new time calibration method for switched-capacitor-array-based waveform samplers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H.; Chen, C.-T.; Eclov, N.
2014-08-24
Here we have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. Ultimately, the new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.
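A minimal sketch of the calibration idea, assuming the sawtooth slope is locally constant so that the amplitude step between adjacent cells is proportional to their sampling interval; the simulated cell count, ramp slope, and noise levels are made up, and DRS4-specific details (wrap-around, flyback handling) are reduced to discarding negative steps.

```python
import numpy as np

def calibrate_sampling_intervals(waveforms, nominal_period):
    """Estimate per-cell sampling intervals of a switched-capacitor sampler
    from its response to a sawtooth (linear-ramp) input: on the ramp, the
    amplitude difference between adjacent cells is proportional to the time
    between their samples.  Averaging over many waveforms and normalising
    to the nominal total sampling period gives the calibration constants."""
    diffs = np.diff(waveforms, axis=1)           # cell-to-cell amplitude steps
    diffs = np.where(diffs > 0, diffs, np.nan)   # drop samples on the sawtooth flyback
    mean_steps = np.nanmean(diffs, axis=0)       # average step per cell pair
    return mean_steps / np.nansum(mean_steps) * nominal_period

# Hypothetical demonstration with a simulated 1024-cell sampler
rng = np.random.default_rng(6)
n_cells, slope = 1024, 0.05                      # ramp slope in V/ns (made up)
true_dt = rng.normal(loc=0.2, scale=0.02, size=n_cells - 1)   # non-uniform intervals (ns)
t = np.concatenate([[0.0], np.cumsum(true_dt)])
waveforms = np.array([((slope * (t + rng.uniform(0, 40))) % 2.0)   # 40 ns sawtooth, 2 V span
                      + rng.normal(scale=1e-3, size=n_cells) for _ in range(200)])
est_dt = calibrate_sampling_intervals(waveforms, nominal_period=true_dt.sum())
print(np.max(np.abs(est_dt - true_dt)))          # small residual (ns)
```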
Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2011-04-01
In this study, we propose and evaluate a method for spectral characterization of acousto-optic tunable filter (AOTF) hyperspectral imaging systems in the near-infrared (NIR) spectral region from 900 nm to 1700 nm. The proposed spectral characterization method is based on the SRM-2035 standard reference material, exhibiting distinct spectral features, which enables robust non-rigid matching of the acquired and reference spectra. The matching is performed by simultaneously optimizing the parameters of the AOTF tuning curve, spectral resolution, baseline, and multiplicative effects. In this way, the tuning curve (frequency-wavelength characteristics) and the corresponding spectral resolution of the AOTF hyperspectral imaging system can be characterized simultaneously. Also, the method enables simple spectral characterization of the entire imaging plane of hyperspectral imaging systems. The results indicate that the method is accurate and efficient and can easily be integrated with systems operating in diffuse reflection or transmission modes. Therefore, the proposed method is suitable for characterization, calibration, or validation of AOTF hyperspectral imaging systems.
[Optimum design of imaging spectrometer based on toroidal uniform-line-spaced (TULS) spectrometer].
Xue, Qing-Sheng; Wang, Shu-Rong
2013-05-01
Based on geometrical aberration theory, an optimum-design method for an imaging spectrometer based on a toroidal uniform-line-spaced grating spectrometer is proposed. To obtain the best optical parameters, optimization is carried out in two stages, using a genetic algorithm (GA) and the optical design software ZEMAX. A far-ultraviolet (FUV) imaging spectrometer is designed using this method. The working waveband is 110-180 nm, the slit size is 50 μm × 5 mm, and the numerical aperture is 0.1. The design result is analyzed and evaluated using ZEMAX. The results indicate that the MTF for the different wavelengths is higher than 0.7 at the Nyquist frequency of 10 lp/mm, and the RMS spot radius is less than 14 μm. Good imaging quality is achieved over the whole working waveband, and the design requirements of 0.5 mrad spatial resolution and 0.6 nm spectral resolution are satisfied. This confirms that the proposed optimum-design method is feasible; it can be applied to other wavebands and provides guidance for designing grating-dispersion imaging spectrometers.
An Effective Measured Data Preprocessing Method in Electrical Impedance Tomography
Yu, Chenglong; Yue, Shihong; Wang, Jianpei; Wang, Huaxiang
2014-01-01
As an advanced process detection technology, electrical impedance tomography (EIT) has received wide attention and study in industrial fields. However, EIT techniques are greatly limited by low spatial resolution. This problem may result from incorrect preprocessing of the measured data and the lack of a general criterion for evaluating different preprocessing procedures. In this paper, an EIT data preprocessing method based on taking a root of all measured data is proposed, and it is evaluated by two indexes constructed from the rooted EIT measurements. By finding the optima of the two indexes, the proposed method can be applied to improve EIT imaging spatial resolution. For a theoretical model, the optimal rooting values for the two indexes lie in [0.23, 0.33] and [0.22, 0.35], respectively. Moreover, the factors that affect the correctness of the proposed method are analyzed. Preprocessing of the measured data is necessary and helpful for any imaging process; thus, the proposed method can be used generally and widely in imaging. Experimental results validate the two proposed indexes. PMID:25165735
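A sketch of the rooting preprocessing step, interpreting the reported values as the exponent applied to the measured data (an assumption; the evaluation indexes themselves are not defined in the abstract and are omitted).

```python
import numpy as np

def root_preprocess(measurements, r):
    """Raise all EIT boundary measurements to the power r (0 < r < 1), i.e.
    take their r-th root, compressing the dynamic range of the data before
    image reconstruction.  The sign is preserved in case of signed voltages.
    Interpreting the abstract's optimal values as this exponent is an
    assumption; reported optima are roughly 0.22-0.35."""
    m = np.asarray(measurements, dtype=float)
    return np.sign(m) * np.abs(m) ** r

# Hypothetical frame of 208 boundary voltage measurements (16-electrode system)
rng = np.random.default_rng(5)
frame = np.abs(rng.normal(loc=1.0, scale=0.5, size=208))
print(root_preprocess(frame, r=0.28)[:5])
```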