MATLAB implementation of W-matrix multiresolution analyses
Kwong, Man Kam
1997-01-01
We present a MATLAB toolbox on multiresolution analysis based on the W-transform introduced by Kwong and Tang. The toolbox contains basic commands to perform forward and inverse transforms on finite 1D and 2D signals of arbitrary length, to perform multiresolution analysis of given signals to a specified number of levels, to visualize the wavelet decomposition, and to do compression. Examples of numerical experiments are also discussed.
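The W-transform itself is defined by special boundary filters due to Kwong and Tang that are not reproduced here; as a hedged stand-in, the following pure-Python sketch shows the same forward/inverse multiresolution pattern using the Haar transform (signal length assumed divisible by 2**levels):

```python
# Illustrative sketch only: a multilevel Haar analysis/synthesis pyramid,
# standing in for the W-transform (whose boundary handling differs).

def haar_forward(signal, levels):
    """Decompose `signal` into (coarse, [detail_1, ..., detail_levels])."""
    coarse, details = list(signal), []
    for _ in range(levels):
        avg = [(coarse[2*i] + coarse[2*i+1]) / 2 for i in range(len(coarse)//2)]
        dif = [(coarse[2*i] - coarse[2*i+1]) / 2 for i in range(len(coarse)//2)]
        details.append(dif)
        coarse = avg
    return coarse, details

def haar_inverse(coarse, details):
    """Invert haar_forward exactly."""
    coarse = list(coarse)
    for dif in reversed(details):
        nxt = []
        for a, d in zip(coarse, dif):
            nxt.extend([a + d, a - d])
        coarse = nxt
    return coarse

x = [4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 8.0, 6.0]
c, d = haar_forward(x, 3)
assert haar_inverse(c, d) == x   # perfect reconstruction
```

The toolbox commands described above additionally handle signals of arbitrary (not just dyadic) length, which is precisely where the W-matrix boundary filters matter.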
NASA Astrophysics Data System (ADS)
Brown, I.; Wennbom, M.
2013-12-01
Climate change, population growth and changes in traditional lifestyles have led to instabilities in traditional demarcations between neighboring ethnic and religious groups in the Sahel region. This has resulted in a number of conflicts as groups resort to arms to settle disputes. Such disputes often centre on or are justified by competition for resources. The conflict in Darfur has been controversially explained by resource scarcity resulting from climate change. Here we analyse established methods of using satellite imagery to assess vegetation health in Darfur. Multi-decadal time series of observations are available using low spatial resolution visible-near infrared imagery. Typically, normalized difference vegetation index (NDVI) analyses are produced to describe changes in vegetation 'greenness' or 'health'. Such approaches have been widely used to evaluate the long-term development of vegetation in relation to climate variations across a wide range of environments from the Arctic to the Sahel. These datasets typically measure peak NDVI observed over a given interval and may introduce bias. It is furthermore unclear how the spatial organization of sparse vegetation may affect low resolution NDVI products. We develop and assess alternative measures of vegetation including descriptors of the growing season, wetness and resource availability. Expanding the range of parameters used in the analysis reduces our dependence on peak NDVI. Furthermore, these descriptors provide a better characterization of the growing season than the single NDVI measure. Using multi-sensor data we combine high temporal/moderate spatial resolution data with low temporal/high spatial resolution data to improve the spatial representativity of the observations and to provide improved spatial analysis of vegetation patterns. The approach places the high resolution observations in the NDVI context space using a longer time series of lower resolution imagery. The vegetation descriptors
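As a minimal illustration of the NDVI measure the analysis builds on (the authors' extended growing-season and wetness descriptors are not shown), assuming per-pixel reflectance lists:

```python
# NDVI = (NIR - Red) / (NIR + Red), per pixel, bounded in [-1, 1].
# Dense vegetation reflects strongly in near-infrared and weakly in red.

def ndvi(nir, red):
    return [(n - r) / (n + r) if (n + r) != 0 else 0.0
            for n, r in zip(nir, red)]

nir = [0.50, 0.40, 0.30]   # hypothetical NIR reflectances
red = [0.08, 0.10, 0.30]   # hypothetical red reflectances
vals = ndvi(nir, red)
assert vals[0] > vals[1] > vals[2]   # greener pixels score higher
```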
Multiresolution Subdivision Snakes.
Badoual, Anais; Schmitter, Daniel; Uhlmann, Virginie; Unser, Michael
2017-03-01
We present a new family of snakes that satisfy the property of multiresolution by exploiting subdivision schemes. We show in a generic way how to construct such snakes based on an admissible subdivision mask. We derive the necessary energy formulations and provide the formulas for their efficient computation. Depending on the choice of the mask, such models have the ability to reproduce trigonometric or polynomial curves. They can also be designed to be interpolating, a property that is useful in user-interactive applications. We provide explicit examples of subdivision snakes and illustrate their use for the segmentation of bioimages. We show that they are robust in the presence of noise and provide a multiresolution algorithm to enlarge their basin of attraction, which decreases their dependence on initialization compared to single-resolution snakes. We show the advantages of the proposed model in terms of computation time and the segmentation of structures of different sizes.
Hair analyses: worthless for vitamins, limited for minerals.
Hambidge, K M
1982-11-01
Despite many major and minor problems with interpretation of analytical data, chemical analyses of human hair have some potential value. Extensive research will be necessary to define this value, including correlation of hair concentrations of specific elements with those in other tissues and metabolic pools and definition of normal physiological concentration ranges. Many factors that may compromise the correct interpretation of analytical data require detailed evaluation for each specific element. Meanwhile, hair analyses are of some value in the comparison of different populations and, for example, in public health community surveys of environmental exposure to heavy metals. On an individual basis, their established usefulness is much more restricted and the limitations are especially notable for evaluation of mineral nutritional status. There is a wide gulf between the limited and mainly tentative scientific justification for their use on an individual basis and the current exploitation of multielement chemical analyses of human hair.
Research potential and limitations of trace analyses of cremated remains.
Harbeck, Michaela; Schleuder, Ramona; Schneider, Julius; Wiechmann, Ingrid; Schmahl, Wolfgang W; Grupe, Gisela
2011-01-30
Human cremation is a common funeral practice all over the world and will presumably become an even more popular choice for interment in the future. Mainly for purposes of identification, there is presently a growing need to perform trace analyses such as DNA or stable isotope analyses on human remains after cremation in order to clarify pending questions in civil or criminal court cases. The aim of this study was to experimentally test the potential and limitations of DNA and stable isotope analyses when conducted on cremated remains. For this purpose, tibiae from modern cattle were experimentally cremated by incinerating the bones in increments of 100°C until a maximum of 1000°C was reached. In addition, cremated human remains were collected from a modern crematory. The samples were investigated to determine the level of DNA preservation and stable isotope values (C and N in collagen, C and O in the structural carbonate, and Sr in apatite). Furthermore, we assessed the integrity of microstructural organization, appearance under UV-light, collagen content, as well as the mineral and crystalline organization. This was conducted in order to provide a general background with which to explain observed changes in the trace analyses data sets. The goal is to develop an efficacious screening method for determining the degree of burning up to which bone still retains its original biological signals. We found that stable isotope analysis of the tested light elements in bone is only possible up to a heat exposure of 300°C while the isotopic signal from strontium remains unaltered even in bones exposed to very high temperatures. DNA-analyses seem theoretically possible up to a heat exposure of 600°C but cannot be advised in every case because of the increased risk of contamination. While the macroscopic colour and UV-fluorescence of cremated bone hint at the temperature exposure of the bone's outer surface, its histological appearance can be used as a reliable indicator for the
Wavelet-Based Multiresolution Analyses of Signals
1992-06-01
classification. Some signals, notably those of a transient nature, are inherently difficult to analyze with these traditional tools. The Discrete Wavelet Transform has...scales. This thesis investigates dyadic discrete wavelet decompositions of signals. A new multiphase wavelet transform is proposed and investigated. The
Multiresolution morphological analysis of document images
NASA Astrophysics Data System (ADS)
Bloomberg, Dan S.
1992-11-01
An image-based approach to document image analysis is presented that uses shape and textural properties interchangeably at multiple scales. Image-based techniques permit a relatively small number of simple and fast operations to be used for a wide variety of analysis problems with document images. The primary binary image operations are morphological and multiresolution. The generalized opening, a morphological operation, allows extraction of image features that have both shape and textural properties, and that are not limited by properties related to image connectivity. Reduction operations are necessary due to the large number of pixels at scanning resolution, and threshold reduction is used for efficient and controllable shape and texture transformations between resolution levels. Aspects of these techniques, which include sequences of threshold reductions, are illustrated by problems such as text/halftone segmentation and word-level extraction. Both the generalized opening and these multiresolution operations are then used to identify italic and bold words in text. These operations are performed without any attempt at identification of individual characters. Their robustness derives from the aggregation of statistical properties over entire words. However, the analysis of the statistical properties is performed implicitly, in large part through nonlinear image processing operations. The approximate computational cost of the basic operations is given, and the importance of operating at the lowest feasible resolution is demonstrated.
Force limited vibration testing of Cassini spacecraft cosmic dust analyser
NASA Technical Reports Server (NTRS)
Jahn, Heiko; Ritzmann, Swen; Chang, Kurng; Scharton, Terry
1996-01-01
The testing of the cosmic dust analyzer for the Cassini mission using the force-limited method, in order to avoid overtesting and to verify the ability of the specimen design to withstand the loads during launch and cruise, is reported. To implement the method, force gages, fixtures and a test controller are required, and the test specimen is subjected to sine vibration, random vibration and half-sine shock. The practical aspects of the use of the force-limited method are described. Due to the high loads and the weak design of the structural element, a notching method is used which limits the excitation to expected flight levels.
Permissible performance limits of regression analyses in method comparisons.
Haeckel, Rainer; Wosniok, Werner; Al Shareef, Nadera
2011-11-01
Method comparisons are indispensable tools for the extensive validation of analytic procedures. Laboratories often only want to know whether an established procedure (x-method) can be replaced by another one (y-method) without interfering with diagnostic purposes. Split patients' samples are then analyzed more or less simultaneously with both procedures, which are designed to measure the same quantity. The measured values are usually presented graphically as scatter or difference plots. The two methods are considered equivalent (comparable) if the data pairs scatter around the line of equality (the x=y line) within permissible equivalence limits. It is proposed to derive these equivalence limits from permissible imprecision limits based on false-positive error rates. If all data pairs are within the limits, both methods lead to comparable false-positive error rates. If one or more data pairs are outside the permissible equivalence limits, the x-method cannot simply be replaced by the y-method and further studies are required. The discordance may be caused by aberrant values (outliers), non-linearity, bias or a higher variation of, e.g., the y-values. The spread around the line of best fit can reveal possible interferences if more than 1% of the data pairs are outside permissible spread lines in a scatter plot. Because bias between methods and imprecision can be interrelated, both require specific examinations for their identification.
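The decision rule described above can be sketched as follows; this is a hedged illustration only, in which the fixed relative band `delta` is an invented stand-in for the paper's imprecision-derived limits:

```python
# Flag (x, y) measurement pairs that fall outside permissible equivalence
# limits around the line of equality y = x. The tolerance model here
# (a fixed relative band) is an assumption for illustration.

def discordant_pairs(pairs, delta=0.10):
    """Return pairs deviating from y = x by more than delta * |x|."""
    return [(x, y) for x, y in pairs if abs(y - x) > delta * abs(x)]

pairs = [(100, 103), (200, 195), (150, 180)]   # last pair deviates by 20%
assert discordant_pairs(pairs) == [(150, 180)]
```

An empty result would correspond to the "comparable" verdict; any flagged pair would trigger the further studies the abstract calls for.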
Multiresolution Diffeomorphic Mapping for Cortical Surfaces.
Tan, Mingzhen; Qiu, Anqi
2015-01-01
Due to the convoluted folding pattern of the cerebral cortex, accurate alignment of cortical surfaces remains challenging. In this paper, we present a multiresolution diffeomorphic surface mapping algorithm under the framework of large deformation diffeomorphic metric mapping (LDDMM). Our algorithm takes advantage of multiresolution analysis (MRA) for surfaces and constructs cortical surfaces at multiple resolutions. This family of multiresolution surfaces is used as natural sparse priors of the cortical anatomy, providing the anchor points where the parametrization of deformation vector fields is supported. This naturally constructs tangent bundles of diffeomorphisms at different resolution levels and hence generates multiresolution diffeomorphic transformations. We show that our construction of multiresolution LDDMM surface mapping can potentially reduce computational cost and improve the mapping accuracy of cortical surfaces.
The Limited Informativeness of Meta-Analyses of Media Effects.
Valkenburg, Patti M
2015-09-01
In this issue of Perspectives on Psychological Science, Christopher Ferguson reports on a meta-analysis examining the relationship between children's video game use and several outcome variables, including aggression and attention deficit symptoms (Ferguson, 2015, this issue). In this commentary, I compare Ferguson's nonsignificant effect sizes with earlier meta-analyses on the same topics that yielded larger, significant effect sizes. I argue that Ferguson's choice of partial effect sizes is unjustified on both methodological and theoretical grounds. I then plead for a more constructive debate on the effects of violent video games on children and adolescents. Until now, this debate has been dominated by two camps with diametrically opposed views on the effects of violent media on children. However, even the earliest media effects studies tell us that children can react quite differently to the same media content. Thus, if researchers truly want to understand how media affect children, rather than fight for the presence or absence of effects, they need to adopt a perspective that takes differential susceptibility to media effects more seriously.
[The advantages and limitations of brain function analyses by PET].
Kato, M; Taniwaki, T; Kuwabara, Y
2000-12-01
PET has proven to be a powerful tool for exploring brain function. We discuss the advantages and limitations of PET for analyzing brain function on the basis of our clinical and experimental experience with functional imaging. A multimodality PET study measuring cerebral energy metabolism (CMRO2 and CMRglc), cerebral blood flow (CBF), oxygen extraction fraction (OEF) and neurotransmitter function (presynaptic and postsynaptic) offers closer insight into the precise pathophysiology of brain dysfunction: in cerebral infarction, it reveals a state of "misery perfusion" in the acute stage, "luxury perfusion" in the intermediate stage, and proportionately decreased CBF and CMRO2 in the chronic stage. Neurotransmitter function may specifically identify a dysfunctional neuronal subgroup. Owing to the low temporal resolution of PET, neuronal activity may propagate transsynaptically to remote areas during the period of scanning, obscuring the primary site of the neuronal activity. Uncoupling between neuronal activities and cerebral energy metabolism/CBF may occur under certain states of brain pathology, particularly after an acute destructive lesion, according to our experimental studies. Neurotransmitter imaging may reveal the effects of drugs on brain function and may be useful for developing new methods of drug therapy for brain diseases in the future.
Wavelet-based Multiresolution Particle Methods
NASA Astrophysics Data System (ADS)
Bergdorf, Michael; Koumoutsakos, Petros
2006-03-01
Particle methods offer a robust numerical tool for solving transport problems across disciplines, such as fluid dynamics, quantitative biology or computer graphics. Their strength lies in their stability, as they do not discretize the convection operator, and appealing numerical properties, such as small dissipation and dispersion errors. Many problems of interest are inherently multiscale, and their efficient solution requires either multiscale modeling approaches or spatially adaptive numerical schemes. We present a hybrid particle method that employs a multiresolution analysis to identify and adapt to small scales in the solution. The method combines the versatility and efficiency of grid-based wavelet collocation methods while retaining the numerical properties and stability of particle methods. The accuracy and efficiency of this method are then assessed for transport and interface-capturing problems in two and three dimensions, illustrating the capabilities and limitations of our approach.
MOSES Inversions using Multiresolution SMART
NASA Astrophysics Data System (ADS)
Rust, Thomas; Fox, Lewis; Kankelborg, Charles; Courrier, Hans; Plovanic, Jacob
2014-06-01
We present improvements to the SMART inversion algorithm for the MOSES imaging spectrograph. MOSES, the Multi-Order Solar EUV Spectrograph, is a slitless extreme ultraviolet spectrograph designed to measure cotemporal narrowband spectra over a wide field of view via tomographic inversion of images taken at three orders of a concave diffraction grating. SMART, the Smooth Multiplicative Algebraic Reconstruction Technique, relies on a global chi-squared goodness-of-fit criterion, which enables overfit and underfit regions to "balance out" when judging fit quality: "good" reconstructions can show poor fits at some positions and length scales. Here we take a multiresolution approach to SMART, applying corrections to the reconstruction at positions and scales where correction is warranted based on the noise. The result is improved fit residuals that more closely resemble the expected noise in the images. Within the multiresolution framework it is also easy to include a regularized deconvolution of the instrument point spread functions, which we do. Differing point spread functions among the MOSES spectral orders result in spurious Doppler shifts in the reconstructions, most notably near bright compact emission. We estimate the point spread functions from the data. Deconvolution is done using the Richardson-Lucy method, which is algorithmically similar to SMART. Regularization results from only correcting the reconstruction at positions and scales where correction is warranted based on the noise. We expect the point spread function deconvolution to increase signal to noise and reduce systematic error in MOSES reconstructions.
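The Richardson-Lucy iteration mentioned above can be sketched in a few lines. The following 1D toy with circular convolution is not the MOSES pipeline and omits the multiresolution regularization; it only illustrates the multiplicative update:

```python
# 1D Richardson-Lucy deconvolution sketch (noise-free toy problem).

def conv_circ(x, k):
    """Circular convolution with an odd-length kernel centered at len(k)//2."""
    n, c = len(x), len(k) // 2
    return [sum(x[(i - j + c) % n] * k[j] for j in range(len(k)))
            for i in range(n)]

def richardson_lucy(data, psf, iters=50):
    psf_flip = psf[::-1]           # correlation = convolution with flipped PSF
    est = [1.0] * len(data)        # flat non-negative starting estimate
    for _ in range(iters):
        blur = conv_circ(est, psf)
        ratio = [d / b if b > 0 else 0.0 for d, b in zip(data, blur)]
        est = [e * c for e, c in zip(est, conv_circ(ratio, psf_flip))]
    return est

truth = [0.0] * 8
truth[3] = 10.0                     # a single bright "compact emission" spike
psf = [0.25, 0.5, 0.25]             # assumed symmetric instrument PSF
data = conv_circ(truth, psf)        # blurred observation
est = richardson_lucy(data, psf)
assert max(range(8), key=lambda i: est[i]) == 3   # spike recovered in place
```

The multiplicative form keeps the estimate non-negative and conserves total flux, which is one reason it pairs naturally with SMART-like algorithms.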
Nonlinear Harten's multiresolution on the quincunx pyramid
NASA Astrophysics Data System (ADS)
Amat, Sergio; Busquier, S.; Trillo, J. C.
2006-05-01
Multiresolution transforms provide useful tools for image processing applications. For an optimal representation of the edges, it is crucial to develop nonlinear schemes which are not based on tensor product. This paper links the nonseparable quincunx pyramid and the nonlinear discrete Harten's multiresolution framework. In order to obtain the stability of these representations, an error-control multiresolution algorithm is introduced. A prescribed accuracy in various norms is ensured by these strategies. Explicit error bounds are presented. Finally, a nonlinear reconstruction is proposed and tested.
Milani, Gabriele; Valente, Marco
2014-10-06
This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using a non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is considerably lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for the considerable reduction of the seismic vulnerability of such historical structures.
Riesz wavelets and multiresolution structures
NASA Astrophysics Data System (ADS)
Larson, David R.; Tang, Wai-Shing; Weber, Eric
2001-12-01
Multiresolution structures are important in applications, but they are also useful for analyzing properties of associated wavelets. Given a nonorthogonal (multi-) wavelet in a Hilbert space, we construct a core subspace. Subsequently, the dilates of the core subspace define a ladder of nested subspaces. Of fundamental importance are two questions: 1) when is the core subspace shift invariant; and, if so, 2) when is the core subspace generated by shifts of a single vector, i.e., when does a scaling vector exist? If the wavelet generates a Riesz basis then the answer to question 1) is yes if and only if the wavelet is a biorthogonal wavelet. Additionally, if the wavelet generates a tight frame of arbitrary frame constant, then the core subspace is shift invariant. Question 1) is still open in the case where the wavelet generates a non-tight frame. We also present some known results to question 2) and provide some preliminary improvements. Our analysis here arises from investigating the dimension function and the multiplicity function of a wavelet. These two functions agree if the wavelet is orthogonal. Finally, we discuss how these questions are important for considering linear perturbation of wavelets. Utilizing the idea of the local commutant of a unitary system developed by Dai and Larson, we show that nearly all linear perturbations of two orthonormal wavelets form a Riesz wavelet. If in fact these wavelets correspond to a von Neumann algebra in the local commutant of a base wavelet, then the interpolated wavelet is biorthogonal. Moreover, we demonstrate that in this case the interpolated wavelets have a scaling vector if the base wavelet has a scaling vector.
Analysing the capabilities and limitations of tracer tests in stream-aquifer systems
Wagner, B.J.; Harvey, J.W.
2001-01-01
The goal of this study was to identify the limitations that apply when we couple conservative-tracer injection with reactive solute sampling to identify the transport and reaction processes active in a stream. Our methodology applies Monte Carlo uncertainty analysis to assess the ability of the tracer approach to identify the governing transport and reaction processes for a wide range of stream-solute transport and reaction scenarios likely to be encountered in high-gradient streams. Our analyses identified dimensionless factors that define the capabilities and limitations of the tracer approach. These factors provide a framework for comparing and contrasting alternative tracer test designs.
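A toy version of such a Monte Carlo uncertainty analysis might look as follows; the first-order exponential uptake model, noise level, and sampling times are assumptions for illustration only, not the study's stream-aquifer transport model:

```python
# Monte Carlo sketch: repeatedly add noise to synthetic tracer observations,
# re-estimate a first-order rate coefficient, and inspect the spread.
import math
import random

def estimate_k(times, conc):
    """Log-linear least-squares fit of C(t) = C0 * exp(-k t); returns k."""
    logs = [math.log(c) for c in conc]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -slope

random.seed(1)
times = [0.0, 1.0, 2.0, 3.0, 4.0]
k_true, c0 = 0.5, 10.0
estimates = []
for _ in range(500):
    conc = [c0 * math.exp(-k_true * t) * (1 + random.gauss(0, 0.02))
            for t in times]
    estimates.append(estimate_k(times, conc))
mean_k = sum(estimates) / len(estimates)
```

The spread of `estimates` plays the role of the identifiability measure: a parameter whose estimates scatter widely under realistic noise cannot be resolved by the tracer test design.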
A multiresolution restoration method for cardiac SPECT
NASA Astrophysics Data System (ADS)
Franquiz, Juan Manuel
Single-photon emission computed tomography (SPECT) is affected by photon attenuation and image blurring due to Compton scatter and geometric detector response. Attenuation correction is important to increase diagnostic accuracy of cardiac SPECT. However, in attenuation-corrected scans, scattered photons from radioactivity in the liver could produce a spillover of counts into the inferior myocardial wall. In the clinical setting, blurring effects could be compensated by restoration with Wiener and Metz filters. Inconveniences of these procedures are that the Wiener filter depends upon the power spectra of the object image and noise, which are unknown, while Metz parameters have to be optimized by trial and error. This research develops an alternative restoration procedure based on a multiresolution denoising and regularization algorithm. It was hypothesized that this representation leads to a more straightforward and automatic restoration than conventional filters. The main objective of the research was the development and assessment of the multiresolution algorithm for compensating the liver spillover artifact. The multiresolution algorithm decomposes original SPECT projections into a set of sub-band frequency images. This allows a simple denoising and regularization procedure by discarding high frequency channels and performing inversion only in low and intermediate frequencies. The method was assessed in bull's eye polar maps and short-axis attenuation-corrected reconstructions of a realistic cardiac-chest phantom with a custom-made liver insert and different 99mTc liver-to-heart activity ratios. Inferior myocardial defects were simulated in some experiments. The cardiac phantom in free air was considered as the gold standard reference. Quantitative analysis was performed by calculating contrast of short-axis slices and the normalized chi-square measure, defect size and mean and standard deviation of polar map counts. The performance of the multiresolution
Tan, Mingzhen; Qiu, Anqi
2016-09-01
Brain surface registration is an important tool for characterizing cortical anatomical variations and understanding their roles in normal cortical development and psychiatric diseases. However, surface registration remains challenging due to complicated cortical anatomy and its large differences across individuals. In this paper, we propose a fast coarse-to-fine algorithm for surface registration by adapting the large deformation diffeomorphic metric mapping (LDDMM) framework for surface mapping and show improvements in speed and accuracy via a multiresolution analysis of surface meshes and the construction of multiresolution diffeomorphic transformations. The proposed method constructs a family of multiresolution meshes that are used as natural sparse priors of the cortical morphology. At varying resolutions, these meshes act as anchor points where the parameterization of multiresolution deformation vector fields can be supported, allowing the construction of a bundle of multiresolution deformation fields, each originating from a different resolution. Using a coarse-to-fine approach, we show a potential reduction in computation cost along with improvements in sulcal alignment when compared with LDDMM surface mapping.
Comparison of multiresolution techniques for digital signal processing
NASA Astrophysics Data System (ADS)
Hamlett, Neil A.
1993-03-01
A comprehensive study of multiresolution techniques is conducted. Background material in functional analysis and Quadrature Mirror Filter (QMF) banks is presented. The development of Mallat's algorithm for multiresolution decomposition and reconstruction is outlined and demonstrated to be equivalent to QMF banks. The Laplacian pyramid and the à trous algorithm are described and demonstrated. General multiresolution structures are constructed from cascades of QMF and pseudo-QMF banks and are demonstrated for applications in signal decomposition and reconstruction and for signal detection and identification.
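The equivalence noted above can be made concrete: one level of Mallat's algorithm is exactly a two-channel QMF bank. A minimal sketch with orthonormal Haar filters (even-length input assumed) shows the perfect-reconstruction property:

```python
# Two-channel Haar QMF bank written as explicit filter-decimate /
# upsample-filter stages, i.e. one level of Mallat's algorithm.
from math import sqrt

h = [1 / sqrt(2), 1 / sqrt(2)]    # lowpass analysis filter
g = [1 / sqrt(2), -1 / sqrt(2)]   # highpass analysis filter

def analyze(x):
    """Filter and downsample by 2 in each channel."""
    lo = [h[0] * x[i] + h[1] * x[i + 1] for i in range(0, len(x), 2)]
    hi = [g[0] * x[i] + g[1] * x[i + 1] for i in range(0, len(x), 2)]
    return lo, hi

def synthesize(lo, hi):
    """Upsample and filter; for orthonormal filters this inverts analyze."""
    x = []
    for a, d in zip(lo, hi):
        x.extend([h[0] * a + g[0] * d, h[1] * a + g[1] * d])
    return x

x = [1.0, 3.0, 2.0, 2.0, 5.0, 1.0]
lo, hi = analyze(x)
rec = synthesize(lo, hi)
assert all(abs(a - b) < 1e-12 for a, b in zip(rec, x))   # perfect reconstruction
```

Cascading `analyze` on the lowpass output yields the multilevel decompositions the thesis studies.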
Optical design and system engineering of a multiresolution foveated laparoscope
Qin, Yi; Hua, Hong
2016-01-01
The trade-off between the spatial resolution and field of view is one major limitation of state-of-the-art laparoscopes. In order to address this limitation, we demonstrated a multiresolution foveated laparoscope (MRFL) which is capable of simultaneously capturing both a wide-angle overview for situational awareness and a high-resolution zoomed-in view for accurate surgical operation. In this paper, we focus on presenting the optical design and system engineering process for developing the MRFL prototype. More specifically, the first-order specifications and properties of the optical system are discussed, followed by a detailed discussion on the optical design strategy and procedures of each subsystem. The optical performance of the final system, including diffraction efficiency, tolerance analysis, stray light and ghost image, is fully analyzed. Finally, the prototype assembly process and the final prototype are demonstrated. PMID:27139875
Multiresolution approach based on projection matrices
Vargas, Javier; Quiroga, Juan Antonio
2009-03-01
Active triangulation measurement systems with a rigid geometric configuration are inappropriate for scanning large objects with low measuring tolerances. The reason is that the ratio between the depth recovery error and the lateral extension is a constant that depends on the geometric setup. As a consequence, measuring large areas with low depth recovery error requires the use of multiresolution techniques. We propose a multiresolution technique based on a previously calibrated camera-projector system. The method consists of changing the camera's or projector's parameters in order to increase the system's depth sensitivity. A subpixel retroprojection error in the self-calibration process and a decrease of approximately one order of magnitude in the depth recovery error can be achieved using the proposed method.
Multiresolution modulation for efficient broadcast of information
NASA Astrophysics Data System (ADS)
Grundstrom, Mika; Renfors, Markku
1994-09-01
Methods for reliable transmission of information, especially digital television, are considered. In the broadcast channel, several different receiver configurations and channel conditions make an optimization of the channel coding practically impossible. To efficiently utilize the available spectrum and to allow robust reception in adverse channel conditions, joint source-channel coding is applied. This is achieved utilizing multiresolution modulation combined with unequal error protection in the channel coding part and data prioritization in the source coder. These design parameters for the joint system are considered. The emphasis of this paper is on the modulation part. Multiresolution 32-QAM is presented. Simulations show good performance in the additive white Gaussian noise channel and, moreover, results in the multipath fading channel are encouraging for the high-priority portion of the data.
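Multiresolution (hierarchical) QAM embeds a coarse constellation inside a finer one, so that robust receivers recover only the coarse, high-priority bits. The hedged sketch below uses a 16-point constellation for brevity rather than the paper's 32-QAM design, with invented spacing parameters `d_quad` and `d_fine`:

```python
# Hierarchical 16-QAM sketch: 2 high-priority (HP) bits pick the quadrant,
# 2 low-priority (LP) bits pick the point within the quadrant.

def hier_16qam(hp_bits, lp_bits, d_quad=2.0, d_fine=0.5):
    """Map (2 HP bits, 2 LP bits) to a complex constellation symbol."""
    sign = lambda b: 1 if b else -1
    i = sign(hp_bits[0]) * d_quad + sign(lp_bits[0]) * d_fine
    q = sign(hp_bits[1]) * d_quad + sign(lp_bits[1]) * d_fine
    return complex(i, q)

def demod_hp(sym):
    """A coarse receiver recovers only the quadrant, i.e. the HP bits."""
    return (sym.real > 0, sym.imag > 0)

sym = hier_16qam((True, False), (False, True))
assert sym == complex(1.5, -1.5)
# Noise smaller than the quadrant margin leaves the HP bits intact:
assert demod_hp(sym + complex(-0.9, 0.9)) == (True, False)
```

The ratio `d_quad / d_fine` trades LP robustness against HP margin, which is the unequal-error-protection knob the abstract refers to.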
EEG Multiresolution Analysis Using Wavelet Transform
2007-11-02
Wavelet transform (WT) is a new multiresolution time-frequency analysis method. WT possesses good localization properties in both time and frequency...plays a key role in diagnosing diseases and is useful for both physiological research and medical applications. Using the dyadic wavelet transform, the EEG signals are successfully decomposed into the alpha rhythm (8-13 Hz), beta rhythm (14-30 Hz), theta rhythm (4-7 Hz) and delta rhythm (0.3-3 Hz) and
Multiresolution saliency map based object segmentation
NASA Astrophysics Data System (ADS)
Yang, Jian; Wang, Xin; Dai, ZhenYou
2015-11-01
Detection and segmentation of salient objects have gained increasing research interest in recent years. A saliency map can be obtained from different models presented in previous studies. Based on this saliency map, the most salient region (MSR) in an image can be extracted. This MSR, generally a rectangle, can be used to initialize object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented at a single resolution, although some models have introduced multiscale principles into the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iteration time or additional interactions to achieve more precise results without predefined pixel types. Here, the concept of a multiresolution saliency map is introduced. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be used to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency map-based object segmentation method is simple and efficient.
Optical multiresolution analysis with spatial localization
NASA Astrophysics Data System (ADS)
Mazzaferri, Javier; Ledesma, Silvia
2010-05-01
Multiresolution analysis is very useful for characterization of textures, segmentation tasks, and feature enhancement. The development of optical methods to perform such procedures is highly promising for real-time applications. Usually, optical implementations of multiresolution analysis consist of decomposing the input scene into different frequency bands, obtaining various filtered versions of the scene. However, under certain circumstances it can be useful to provide a single version of the scene in which the different filters are applied in different regions. This procedure can be especially interesting for biological and medical applications in situations where the approximate localization of the scale information is known a priori. In this paper we present a fully optical method to perform multiresolution analysis with spatial localization. By means of the proposed technique, the multiscale analysis is performed at once in a single image. The experimental set-up consists of a double-pass convergent optical processor. The first stage of the device performs the multiple-band decomposition, while the second stage confines the information of each band to different regions of the object and recombines it to achieve the desired operation. Numerical simulations and experimental results, which demonstrate the very good performance of the method, are presented.
Carbon dioxide analysers: accuracy, alarm limits and effects of interfering gases.
Lauber, R; Seeberger, B; Zbinden, A M
1995-07-01
Six mainstream and twelve sidestream infrared carbon dioxide (CO2) analysers were tested for accuracy of the CO2 display value, alarm activation and the effects of nitrous oxide (N2O), oxygen (O2) and water vapour according to the ISO Draft International Standard (DIS)#9918. Mainstream analysers (M-type): Novametrix Capnogard 1265; Hewlett Packard HP M1166A (CO2-module HP M1016A); Datascope Passport; Marquette Tramscope 12; Nellcor Ultra Cap N-6000; Hellige Vicom-sm SMU 611/612 ETC. Sidestream analysers: Brüel & Kjaer Type 1304; Datex Capnomac II; Marquette MGA-AS; Datascope Multinex; Ohmeda 4700 OxiCap (all type S1: respiratory cycles not demanded); Biochem BCI 9000; Bruker BCI 9100; Dräger Capnodig and PM 8020; Criticare Poet II; Hellige Vicom-sm SMU 611/612 A-GAS (all type S2: respiratory cycles demanded). The investigations were performed with premixed test gases (2.5, 5, 10 vol%, error < or = 1% rel.). Humidification (37 degrees C) of the gases was generated by a Dräger Aquapor. Respiratory cycles were simulated by manually activated valves. All monitors complied with the tolerated accuracy bias in CO2 reading (< or = 12% or 4 mmHg of actual test gas value) for wet and dry test gases at all concentrations, except that the Marquette MGA-AS exceeded this accuracy limit with wet gases at 5 and 10 vol% CO2. Water condensed in the metal airway adapter of the HP M1166A at 37 degrees C gas temperature but not at 30 degrees C. The Servomex 2500 (nonclinical reference monitor), Passport (M-type), Multinex (S1-type) and Poet II (S2-type) showed the least bias for dry and wet gases. Nitrous oxide and O2 had practically no effect on the Capnodig, and the errors in the others were max. 3.4 mmHg, still within the tolerated bias of the DIS (same as above). The difference between the display reading at alarm activation and the set point was in all monitors (except in the Capnodig: bias 1.75 mmHg at 5 vol% CO2) below the tolerated limit of the DIS (difference < or = 0.2 vol
NASA Astrophysics Data System (ADS)
Khamizov, R. K.; Kumakhov, M. A.; Nikitina, S. V.; Mikhin, V. A.
2005-07-01
The possibilities of increasing the sensitivity of energy-dispersive X-ray fluorescence analysis (EDXRF) of solutions with the use of special preconcentrating sensors are described in this article. The sensors are made from polycapillary tubes or plates consisting of hundreds of thousands of micro-channels, each containing a micro-grain of collecting sorbent. The kinetic regularities of preconcentration of micro-components from solutions are considered. Experimental results are given for EDXRF analyses of different solutions containing metals and other elements in trace amounts, and detection limits of tens to hundreds of ppb are demonstrated. The pilot sample of a new analytical instrument
Multiresolution With Super-Compact Wavelets
NASA Technical Reports Server (NTRS)
Lee, Dohyung
2000-01-01
The solution data computed from large-scale simulations are sometimes too big for main memory, for local disks, and possibly even for remote storage, creating tremendous processing times as well as technical difficulties in analyzing the data. The excessive storage exacts a correspondingly large penalty in I/O time, rendering time, and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing for the purposes of data compression and feature extraction. Unlike photographic image data, which have rather simple settings, computational field simulation data need more careful treatment when applying the multiresolution technique. While image data sit on a regularly spaced grid, simulation data usually reside on a structured curvilinear grid or an unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. These data characteristics demand more restrictive conditions. In general, photographic images have very little inherent smoothness, with discontinuities almost everywhere. On the other hand, numerical solutions are smooth almost everywhere, with discontinuities in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of
Keizer, Ron J; Jansen, Robert S; Rosing, Hilde; Thijssen, Bas; Beijnen, Jos H; Schellens, Jan H M; Huitema, Alwin D R
2015-01-01
Handling of data below the lower limit of quantification (LLOQ), i.e., below-the-limit-of-quantification (BLOQ) data, in population pharmacokinetic (PopPK) analyses is important for reducing bias and imprecision in parameter estimation. We aimed to evaluate whether using the concentration data below the LLOQ has superior performance over several established methods. The performance of this approach (“All data”) was evaluated and compared to other methods: “Discard,” “LLOQ/2,” and “LIKE” (likelihood-based). An analytical and residual error model was constructed on the basis of in-house analytical method validations and analyses from the literature, with additional variability included to account for model misspecification. Simulation analyses were performed for various levels of BLOQ censoring, several structural PopPK models, and additional influences. Performance was evaluated by relative root mean squared error (RMSE) and run success for the various BLOQ approaches. Performance was also evaluated for a real PopPK data set. For all PopPK models and levels of censoring, RMSE values were lowest using “All data.” Performance of the “LIKE” method was better than that of the “LLOQ/2” or “Discard” methods. Differences between all methods were small at the lowest level of BLOQ censoring. The “LIKE” method resulted in low rates of successful minimization (<50%) and covariance step success (<30%), although estimates were obtained in most runs (∼90%). For the real PK data set (7.4% BLOQ), similar parameter estimates were obtained using all methods. Incorporation of BLOQ concentrations showed superior performance in terms of bias and precision over established BLOQ methods, and was shown to be feasible in a real PopPK analysis. PMID:26038706
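Why discarding BLOQ values biases estimates can be shown with a toy simulation (distribution, LLOQ and sample size are my assumptions, not the paper's models): censoring the left tail and then either dropping those values ("Discard") or substituting LLOQ/2 shifts the summary statistics by very different amounts.

```python
# Toy censoring simulation (illustrative assumptions, not the paper's
# PopPK models): compare "Discard" vs "LLOQ/2" handling of BLOQ data.
import math
import random
import statistics

random.seed(1)
true = [math.exp(random.gauss(0.0, 1.0)) for _ in range(20000)]
lloq = 0.5

observed = [c for c in true if c >= lloq]                    # "Discard"
substituted = [c if c >= lloq else lloq / 2 for c in true]   # "LLOQ/2"

m_true = statistics.mean(true)
m_discard = statistics.mean(observed)
m_sub = statistics.mean(substituted)
# Discarding the censored left tail biases the mean upward;
# LLOQ/2 substitution stays much closer to the uncensored mean.
print(round(m_true, 3), round(m_discard, 3), round(m_sub, 3))
```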
Exploring a Multi-resolution Approach Using AMIP Simulations
Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun; Yang, Qing; Lu, Jian; Hagos, Samson M.; Rauscher, Sara; Dong, Li; Ringler, Todd; Lauritzen, P. H.
2015-07-31
This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.
Cresswell, A J; Sanderson, D C W; White, D C
2006-02-01
The uncertainties associated with airborne gamma spectrometry (AGS) measurements analysed using a spectral-windows method, and the associated detection limits, have been investigated. For individual short measurements over buried 137Cs activity, detection limits of 10 kBq m(-2) are achieved. These detection limits are reduced for superficial activity and for longer integration times. For superficial activity, detection limits below 1 kBq m(-2) are achievable. A comparison is made with the detection limits of other data-processing methods.
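The scaling of detection limits with integration time follows from counting statistics; Currie's classic formula is a standard way to express it (used here as a hedged illustration, not necessarily the exact formulation of this study):

```python
# Currie detection limit for a counting measurement with a paired
# background estimate (standard formula; illustrative numbers).
import math

def currie_ld(background_counts):
    """Detection limit in counts at ~95% confidence: 2.71 + 4.65*sqrt(B)."""
    return 2.71 + 4.65 * math.sqrt(background_counts)

# Longer integration accumulates more background counts B, but the
# *relative* detection limit L_D / B falls roughly as 1/sqrt(B) -- which
# is why longer integration times lower the achievable kBq m(-2) limit.
for b in (100, 400, 1600):
    print(b, round(currie_ld(b), 2), round(currie_ld(b) / b, 4))
```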
Multiresolutional models of uncertainty generation and reduction
NASA Technical Reports Server (NTRS)
Meystel, A.
1989-01-01
Kolmogorov's axiomatic principles of probability theory are reconsidered in the scope of their applicability to the processes of knowledge acquisition and interpretation. The model of uncertainty generation is modified in order to reflect the reality of engineering problems, particularly in the area of intelligent control. This model implies algorithms of learning which are organized in three groups reflecting the degree of conceptualization of the knowledge the system is dealing with. It is essential that these algorithms are motivated by, and consistent with, the multiresolutional model of knowledge representation, which is reflected in the structure of the models and the algorithms of learning.
NASA Astrophysics Data System (ADS)
Reid, M. E.; Iverson, R. M.; Brien, D. L.; Iverson, N. R.; Lahusen, R. G.; Logan, M.
2004-12-01
Most studies of landslide initiation employ limit equilibrium analyses of slope stability. Owing to a lack of detailed data, however, few studies have tested limit-equilibrium predictions against physical measurements of slope failure. We have conducted a series of field-scale, highly controlled landslide initiation experiments at the USGS debris-flow flume in Oregon; these experiments provide exceptional data to test limit equilibrium methods. In each of seven experiments, we attempted to induce failure in a 0.65 m thick, 2 m wide, 6 m3 prism of loamy sand placed behind a retaining wall in the 31° sloping flume. We systematically investigated triggering of sliding by groundwater injection, by prolonged moderate-intensity sprinkling, and by bursts of high-intensity sprinkling. We also used vibratory compaction to control soil porosity and thereby investigate differences in failure behavior of dense and loose soils. About 50 sensors were monitored at 20 Hz during the experiments, including nests of tiltmeters buried at 7 cm spacing to define subsurface failure geometry, and nests of tensiometers and pore-pressure sensors to define evolving pore-pressure fields. In addition, we performed ancillary laboratory tests to measure soil porosity, shear strength, hydraulic conductivity, and compressibility. In loose soils (porosity of 0.52 to 0.55), abrupt failure typically occurred along the flume bed after substantial soil deformation. In denser soils (porosity of 0.41 to 0.44), gradual failure occurred within the soil prism. All failure surfaces had a maximum length to depth ratio of about 7. In even denser soil (porosity of 0.39), we could not induce failure by sprinkling. The internal friction angle of the soils varied from 28° to 40° with decreasing porosity. We analyzed stability at failure, given the observed pore-pressure conditions just prior to large movement, using a 1-D infinite-slope method and a more complete 2-D Janbu method. Each method provides a static
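The 1-D infinite-slope method mentioned above has a standard closed form; a minimal sketch follows (the formula is textbook, but the parameter values are mine, not the experiment's measurements):

```python
# Infinite-slope factor of safety with pore pressure u (standard formula;
# illustrative, invented soil parameters in SI units).
import math

def infinite_slope_fs(c, phi_deg, gamma, depth, beta_deg, pore_pressure):
    """FS = (c' + (gamma*z*cos^2 b - u) * tan(phi')) / (gamma*z*sin b*cos b)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * depth * math.cos(beta) ** 2 - pore_pressure
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return (c + normal * math.tan(phi)) / driving

# Rising pore pressure drives FS below 1 (failure), mimicking the
# sprinkling/injection triggering used in the flume experiments.
fs_dry = infinite_slope_fs(c=0.5e3, phi_deg=35, gamma=18e3, depth=0.65,
                           beta_deg=31, pore_pressure=0.0)
fs_wet = infinite_slope_fs(c=0.5e3, phi_deg=35, gamma=18e3, depth=0.65,
                           beta_deg=31, pore_pressure=4.0e3)
print(round(fs_dry, 2), round(fs_wet, 2))
```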
Color Quantization by Multiresolution Analysis
NASA Astrophysics Data System (ADS)
Ramella, Giuliana; di Baja, Gabriella Sanniti
A color quantization method is presented, which is based on the analysis of histograms at different resolutions computed on a Gaussian pyramid of the input image. Criteria based on the persistence and dominance of peaks and pits of the histograms are introduced to detect the modes in the histogram of the input image and to define the reduced colormap. Important features of the method are, besides its limited computational cost, the possibility of obtaining quantized images with a variable number of colors, depending on the user’s need, and the fact that the number of colors in the resulting image does not need to be fixed a priori.
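The persistence criterion can be sketched in one dimension (a hedged toy of mine, not the authors' pyramid implementation): smooth the histogram at several scales and keep only the peaks that survive at every scale, discarding noise bumps.

```python
# Toy multiscale peak persistence on a 1-D histogram (illustrative only;
# the paper works on a Gaussian pyramid of the image itself).
def smooth(h, passes):
    for _ in range(passes):
        h = [(h[max(i - 1, 0)] + 2 * h[i] + h[min(i + 1, len(h) - 1)]) / 4
             for i in range(len(h))]
    return h

def peaks(h):
    return {i for i in range(1, len(h) - 1) if h[i - 1] < h[i] > h[i + 1]}

hist = [0, 2, 8, 3, 1, 2, 1, 0, 2, 9, 3, 0]   # toy histogram: 2 modes + noise
persistent = peaks(hist)
for p in (1, 2, 4):
    persistent &= peaks(smooth(hist, p))       # keep peaks found at all scales
print(sorted(persistent))   # surviving modes -> entries of the colormap
```

The small bump at bin 5 is a raw-histogram peak but vanishes after the first smoothing, so only the two dominant modes persist.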
Active pixel sensor array with multiresolution readout
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)
1999-01-01
An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.
Yang, Lianping; Zhang, Weilin
2017-04-01
How to describe the similarity relationship between biological sequences is a basic but important problem in bioinformatics. The first graphical representation method for the similarity relationship, rather than for a single sequence, is proposed in this article, which makes the similarity intuitive. Some properties of the similarity, such as sensitivity and continuity, are proved theoretically, indicating that the similarity describer has the advantages of both alignment and alignment-free methods. With the aid of multiresolution analysis tools, we can exhibit the similarity's different profiles, from high resolution to low resolution. The idea of multiresolution clustering is then raised for the first time. A reassortment analysis on a benchmark flu virus genome data set is used to test our method, and it shows better performance than an alignment method, especially in dealing with problems involving segment order.
Multiresolution moment filters: theory and applications.
Sühling, Michael; Arigovindan, Muthuvel; Hunziker, Patrick; Unser, Michael
2004-04-01
We introduce local weighted geometric moments that are computed from an image within a sliding window at multiple scales. When the window function satisfies a two-scale relation, we prove that lower order moments can be computed efficiently at dyadic scales by using a multiresolution wavelet-like algorithm. We show that B-splines are well-suited window functions because, in addition to being refinable, they are positive, symmetric, separable, and very nearly isotropic (Gaussian shape). We present three applications of these multiscale local moments. The first is a feature-extraction method for detecting and characterizing elongated structures in images. The second is a noise-reduction method which can be viewed as a multiscale extension of Savitzky-Golay filtering. The third is a multiscale optical-flow algorithm that uses a local affine model for the motion field, extending the Lucas-Kanade optical-flow method. The results obtained in all cases are promising.
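The defining computation, weighted geometric moments inside a sliding window, can be sketched directly (the paper's efficient two-scale, wavelet-like algorithm is omitted here; the window [1,4,6,4,1]/16 is a near-Gaussian, B-spline-like bell of my choosing):

```python
# Sliding-window local moments (definition only; not the fast
# multiresolution algorithm of the paper).
def local_moments(signal, order=2):
    w = [1/16, 4/16, 6/16, 4/16, 1/16]   # symmetric, positive, bell-shaped
    half = len(w) // 2
    out = []
    for i in range(half, len(signal) - half):
        m = [0.0] * (order + 1)
        for k, wk in enumerate(w):
            x = k - half                  # local window coordinate
            v = signal[i + x]
            for p in range(order + 1):
                m[p] += wk * v * x ** p   # weighted geometric moment of order p
        out.append(m)
    return out

sig = [0, 0, 0, 1, 2, 3, 4, 5, 5, 5]
moms = local_moments(sig)
# Order 0 is a local weighted mean; order 1 acts as a local slope
# surrogate (positive on the rising ramp), as used for detecting
# elongated/oriented structures.
print([round(m[1], 2) for m in moms])
```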
Multiresolution Distance Volumes for Progressive Surface Compression
Laney, D E; Bertram, M; Duchaineau, M A; Max, N L
2002-04-18
We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base-mesh construction step required by subdivision-surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero-set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction of complex high-genus surfaces.
Filter design for directional multiresolution decomposition
NASA Astrophysics Data System (ADS)
Cunha, Arthur L.; Do, Minh N.
2005-08-01
In this paper we discuss recent developments in design tools and methods for multidimensional filter banks in the context of directional multiresolution representations. Due to the inherent non-separability of the filters and the lack of multidimensional factorization tools, one generally has to resort to indirect design methods. One such method is the mapping technique. In the context of contourlets, we review methods for designing filters with directional vanishing moments (DVM). The DVM property is crucial in guaranteeing the non-linear approximation efficacy of contourlets. Our approach allows for easy design of two-channel linear-phase filter banks with DVM of any order. Next we study the design via mapping of nonsubsampled filter banks. Our methodology allows for a fast implementation through ladder steps. The proposed design is then used to construct the nonsubsampled contourlet transform, which is particularly efficient in image denoising, as experiments in this paper show.
Multiresolution MR elastography using nonlinear inversion
McGarry, M. D. J.; Van Houten, E. E. W.; Johnson, C. L.; Georgiadis, J. G.; Sutton, B. P.; Weaver, J. B.; Paulsen, K. D.
2012-01-01
Purpose: Nonlinear inversion (NLI) in MR elastography requires discretization of the displacement field for a finite element (FE) solution of the “forward problem”, and discretization of the unknown mechanical property field for the iterative solution of the “inverse problem”. The resolution requirements for these two discretizations are different: the forward problem requires sufficient resolution of the displacement FE mesh to ensure convergence, whereas lowering the mechanical property resolution in the inverse problem stabilizes the mechanical property estimates in the presence of measurement noise. Previous NLI implementations use the same FE mesh to support the displacement and property fields, requiring a trade-off between the competing resolution requirements. Methods: This work implements and evaluates multiresolution FE meshes for NLI elastography, allowing independent discretizations of the displacements and each mechanical property parameter to be estimated. The displacement resolution can then be selected to ensure mesh convergence, and the resolution of the property meshes can be independently manipulated to control the stability of the inversion. Results: Phantom experiments indicate that eight nodes per wavelength (NPW) are sufficient for accurate mechanical property recovery, whereas mechanical property estimation from 50 Hz in vivo brain data stabilizes once the displacement resolution reaches 1.7 mm (approximately 19 NPW). Viscoelastic mechanical property estimates of in vivo brain tissue show that subsampling the loss modulus while holding the storage modulus resolution constant does not substantially alter the storage modulus images. Controlling the ratio of the number of measurements to unknown mechanical properties by subsampling the mechanical property distributions (relative to the data resolution) improves the repeatability of the property estimates, at a cost of modestly decreased spatial resolution. Conclusions: Multiresolution
Hanging-wall deformation above a normal fault: sequential limit analyses
NASA Astrophysics Data System (ADS)
Yuan, Xiaoping; Leroy, Yves M.; Maillot, Bertrand
2015-04-01
The deformation in the hanging wall above a segmented normal fault is analysed with the sequential limit analysis (SLA). The method combines predictions of the dip and position of the active fault and axial surface with geometrical evolution à la Suppe (Groshong, 1989). Two problems are considered. The first follows the prototype proposed by Patton (2005), with a pre-defined convex, segmented fault. The orientation of the upper segment of the normal fault is an unknown in the second problem. The loading in both problems consists of the retreat of the back wall and sedimentation. The sedimentation starts from the lowest point of the topography and acts at the rate rs relative to the wall retreat rate. For the first problem, the normal fault either has zero friction or a friction angle set to 25° or 30° to fit the experimental results (Patton, 2005). In the zero-friction case, a hanging-wall anticline develops much as in the experiments. In the 25° friction case, slip on the upper segment is accompanied by rotation of the axial plane, producing a broad shear zone rooted at the fault bend. The same observation is made in the 30° case, but without slip on the upper segment. Experimental outcomes show behaviour in between these two latter cases. For the second problem, mechanics predicts a concave fault bend with an upper-segment dip that decreases during extension. The axial surface rooted at the normal fault bend sees its dip increase during extension, resulting in a curved roll-over. Softening on the normal fault leads to a stepwise rotation responsible for strain partitioning into small blocks in the hanging wall. The rotation is due to the subsidence of the topography above the hanging wall. Sedimentation in the lowest region thus reduces the rotations. Note that these rotations predicted by mechanics are not accounted for in most geometrical approaches (Xiao and Suppe, 1992) and are observed in sand-box experiments (Egholm et al., 2007, referring
SHORT-TERM SOLAR FLARE PREDICTION USING MULTIRESOLUTION PREDICTORS
Yu Daren; Huang Xin; Hu Qinghua; Zhou Rui; Wang Huaning; Cui Yanmei
2010-01-20
Multiresolution predictors of solar flares are constructed by a wavelet transform and a sequential feature extraction method. Three predictors (the maximum horizontal gradient, the length of the neutral line, and the number of singular points) are extracted from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms. A maximal overlap discrete wavelet transform is used to decompose the sequence of predictors into four frequency bands. In each band, four sequential features (the maximum, the mean, the standard deviation, and the root mean square) are extracted. The multiresolution predictors in the low-frequency bands reflect trends in the evolution of newly emerging fluxes. The multiresolution predictors in the high-frequency bands reflect the rates of change in emerging flux regions. The variation of emerging fluxes is decoupled by the wavelet transform into different frequency bands. The information content of these multiresolution predictors is evaluated by the information gain ratio. It is found that the multiresolution predictors in the lowest and highest frequency bands contain the most information. Based on these predictors, a C4.5 decision tree algorithm is used to build the short-term solar flare prediction model. It is found that the performance of the short-term solar flare prediction model based on the multiresolution predictors is greatly improved.
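The four sequential features named above are straightforward to compute per band; a minimal sketch on toy coefficients follows (the wavelet decomposition itself is omitted, and the data are invented):

```python
# The four sequential features (max, mean, std, rms) of a band-limited
# coefficient sequence; toy data, illustrative only.
import math

def sequential_features(band):
    n = len(band)
    mean = sum(band) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in band) / n)   # population std
    rms = math.sqrt(sum(v * v for v in band) / n)
    return {"max": max(band), "mean": mean, "std": std, "rms": rms}

band = [0.1, -0.4, 0.9, 0.3, -0.2, 0.5]
f = sequential_features(band)
print({k: round(v, 3) for k, v in f.items()})
# Sanity check valid for any sequence: rms^2 = mean^2 + std^2.
```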
Limitations of analyses based on achieved blood pressure: Lessons from the AASK trial
Davis, Esa M; Appel, Lawrence J; Wang, Xuelei; Greene, Tom; Astor, Brad C.; Rahman, Mahboob; Toto, Robert; Lipkowitz, Michael S; Pogue, Velvie A; Wright, Jackson T
2011-01-01
Blood pressure (BP) guidelines that set target BP levels often rely on analyses of achieved BP from hypertension treatment trials. The objective of this paper was to compare the results of analyses of achieved BP to intention-to-treat analyses on renal disease progression. Participants (n=1,094) in the African-American Study of Kidney Disease and Hypertension Trial were randomized to either: (1) usual BP goal defined by a mean arterial pressure (MAP) goal of 102–107 mmHg or (2) lower BP goal defined by a MAP goal of ≤ 92 mmHg. Median follow-up was 3.7 years. Primary outcomes were rate of decline in measured glomerular filtration rate (GFR) and a composite of a decrease in GFR by > 50% or >25 ml/min/1.73m2, requirement for dialysis, transplantation, or death. Intention-to-treat analyses showed no evidence of a BP effect on either the rate of decline in GFR or the clinical composite outcome. In contrast, the achieved BP analyses showed that each 10 mm Hg increment in mean follow-up achieved MAP was associated with a 0.35 (95% CI 0.08 – 0.62, p = 0.01) ml/min/1.73m2 faster mean GFR decline and a 17% (95% CI 5% – 32%, p = 0.006) increased risk of the clinical composite outcome. Analyses based on achieved BP lead to markedly different inferences than traditional intention-to-treat analyses, due in part to confounding of achieved BP with comorbidities, disease severity and adherence. Clinicians and policy makers should exercise caution when making treatment recommendations based on analyses relating outcomes to achieved BP. PMID:21555676
Boussion, N; Hatt, M; Lamare, F; Bizais, Y; Turzo, A; Cheze-Le Rest, C; Visvikis, D
2006-04-07
Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as improving tumour-to-background ratio, resulting in an associated improvement in the analysis of response to therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recuperation of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated in a low-resolution image L (PET or SPECT). A discrete wavelet transform of both H and L images is performed by using the "à trous" algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the lacking details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using a FDG brain PET and corresponding T1-weighted MRI
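The "à trous" decomposition used above can be sketched in one dimension (my minimal sketch; the paper works on 2-D/3-D images, and the fusion step that injects the high-resolution details into the low-resolution image is omitted): each level smooths with a kernel whose taps are spread by 2^j ("holes"), and the detail planes are the differences of consecutive smoothings, which sum back to the original exactly.

```python
# 1-D undecimated "a trous" wavelet decomposition (illustrative sketch).
def atrous(signal, levels):
    taps = [1/4, 1/2, 1/4]     # simple B3-like smoothing kernel (assumed)
    c = list(signal)
    details = []
    for j in range(levels):
        step = 2 ** j          # insert 2^j - 1 "holes" between taps
        nxt = []
        for i in range(len(c)):
            acc = 0.0
            for k, t in enumerate(taps):
                idx = min(max(i + (k - 1) * step, 0), len(c) - 1)  # clamp edges
                acc += t * c[idx]
            nxt.append(acc)
        details.append([a - b for a, b in zip(c, nxt)])  # detail plane j
        c = nxt
    return details, c          # detail planes + final smooth approximation

sig = [0, 0, 1, 5, 1, 0, 0, 0]
details, approx = atrous(sig, levels=2)
# Exact reconstruction: signal = approximation + sum of detail planes.
rec = [a + d1 + d2 for a, d1, d2 in zip(approx, details[0], details[1])]
assert all(abs(r - s) < 1e-9 for r, s in zip(rec, sig))
```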
Limitations of log-rank tests for analysing longevity data in biogerontology.
Le Bourg, Eric
2014-08-01
Log-rank tests are sometimes used to analyse longevity data when other tests should be preferred. When the experimental design involves more than one factor, some authors perform several log-rank tests on the same data, which increases the risk of wrongly concluding that a difference among groups exists and does not allow interactions to be tested. When analysing the effect of a single factor with more than two groups, some authors also perform several tests (e.g. comparing a control group to each of the experimental groups), because post hoc analysis is not available with log-rank tests. These errors prevent readers from fully appreciating the longevity results of these articles, and it would be easy to overcome this problem by using statistical methods devoted to one-way or multi-way designs, such as Cox's models, analysis of variance, and generalised linear models.
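To make the multiplicity problem concrete, here is a minimal sketch: a plain two-group log-rank statistic (simplified by assuming every recorded time is an event, i.e. no censoring) applied to all pairwise comparisons of four hypothetical treatment arms, with a Bonferroni-corrected threshold. The data are synthetic; a real analysis would instead fit a single Cox model with the arm as a covariate, as the abstract recommends:

```python
import numpy as np
from scipy import stats

def logrank_chi2(t1, t2):
    """Two-group log-rank chi-square (sketch: all times are events, no censoring)."""
    times = np.unique(np.concatenate([t1, t2]))
    O1 = E1 = V = 0.0
    for t in times:
        n1, n2 = np.sum(t1 >= t), np.sum(t2 >= t)   # numbers at risk
        n = n1 + n2
        d = np.sum(t1 == t) + np.sum(t2 == t)       # deaths at time t
        O1 += np.sum(t1 == t)                       # observed deaths in group 1
        E1 += d * n1 / n                            # expected deaths in group 1
        if n > 1:
            V += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return (O1 - E1) ** 2 / V

rng = np.random.default_rng(1)
arms = [np.round(rng.exponential(30, 40)) for _ in range(4)]  # 4 synthetic arms
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
alpha = 0.05
pvals = [stats.chi2.sf(logrank_chi2(arms[i], arms[j]), df=1) for i, j in pairs]
# Six pairwise tests: each must be judged against alpha/6 (Bonferroni), not
# alpha, otherwise the family-wise error rate is inflated.
print(sum(p < alpha / len(pairs) for p in pvals), "significant after correction")
```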
Improvements in RIMS Isotopic Precision: Application to in situ atom-limited analyses
Levine, J.; Stephan, T.; Savina, M.; Pellin, M.
2009-03-17
Resonance ionization mass spectrometry offers high sensitivity and elemental selectivity in microanalysis, but the isotopic precision attainable by this technique has been limited. Here we report instrumental modifications to improve the precision of RIMS isotope ratio measurements. Special attention must be paid to eliminating pulse-to-pulse variations in the time-of-flight mass spectrometer through which the photoions travel, and resonant excitation schemes must be chosen such that the resonance transitions can be substantially power-broadened to cover the isotope shifts. We report resonance ionization measurements of chromium isotope ratios with statistics-limited precision better than 1%.
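The "statistics-limited" precision quoted above is set by counting statistics: for a ratio of two independent Poisson counts, the relative uncertainties add in quadrature. A minimal sketch (the count values are illustrative, not taken from the paper):

```python
import math

def ratio_rel_uncertainty(na, nb):
    """Counting-statistics (shot-noise) limit on an isotope ratio R = Na/Nb."""
    return math.sqrt(1.0 / na + 1.0 / nb)

# e.g. 5e4 counts of the major isotope and 1.2e4 of the minor one
rel = ratio_rel_uncertainty(5e4, 1.2e4)
print(f"{100 * rel:.2f}%")  # 1.02% -> near a 1% statistics limit
```

Any instrumental (non-statistical) scatter shows up as excess variance above this floor, which is why eliminating pulse-to-pulse variations matters.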
NASA Astrophysics Data System (ADS)
Sajun Prasad, K.; Panda, Sushanta Kumar; Kar, Sujoy Kumar; Sen, Mainak; Murty, S. V. S. Naryana; Sharma, Sharad Chandra
2017-04-01
Recently, aerospace industries have shown increasing interest in forming limits of Inconel 718 sheet metals, which can be utilised in designing tools and selection of process parameters for successful fabrication of components. In the present work, stress-strain response with failure strains was evaluated by uniaxial tensile tests in different orientations, and two-stage work-hardening behavior was observed. In spite of highly preferred texture, tensile properties showed minor variations in different orientations due to the random distribution of nanoprecipitates. The forming limit strains were evaluated by deforming specimens in seven different strain paths using a limiting dome height (LDH) test facility. Mostly, the specimens failed without prior indication of localized necking. Thus, a fracture forming limit diagram (FFLD) was evaluated, and a bending correction was imposed due to the use of a sub-size hemispherical punch. The failure strains of the FFLD were converted into major-minor stress space (σ-FFLD) and effective plastic strain-stress triaxiality space (ηEPS-FFLD) as failure criteria to avoid the strain path dependence. Moreover, an FE model was developed, and the LDH, strain distribution and failure location were predicted successfully using the above-mentioned failure criteria with two stages of work hardening. Fractographs were correlated with the fracture behavior and formability of sheet metal.
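The conversion into triaxiality space relies on the plane-stress relation between mean and equivalent stress. A minimal sketch, assuming a von Mises equivalent stress (the usual choice for such conversions; the paper's constitutive details are not reproduced here):

```python
import numpy as np

def triaxiality_plane_stress(s1, s2):
    """Stress triaxiality eta = sigma_m / sigma_eq under plane stress (von Mises)."""
    sigma_m = (s1 + s2) / 3.0                         # mean (hydrostatic) stress
    sigma_eq = np.sqrt(s1 ** 2 - s1 * s2 + s2 ** 2)   # von Mises equivalent stress
    return sigma_m / sigma_eq

# Reference proportional paths (stress ratios are textbook values, not the paper's data):
print(triaxiality_plane_stress(1.0, 0.0))  # uniaxial tension: 1/3
print(triaxiality_plane_stress(1.0, 1.0))  # equibiaxial tension: 2/3
```

Mapping each point of the stress-space limit curve through this relation gives the η axis of an ηEPS-FFLD-style representation.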
Individual-based analyses reveal limited functional overlap in a coral reef fish community.
Brandl, Simon J; Bellwood, David R
2014-05-01
Detailed knowledge of a species' functional niche is crucial for the study of ecological communities and processes. The extent of niche overlap, functional redundancy and functional complementarity is of particular importance if we are to understand ecosystem processes and their vulnerability to disturbances. Coral reefs are among the most threatened marine systems, and anthropogenic activity is changing the functional composition of reefs. The loss of herbivorous fishes is particularly concerning as the removal of algae is crucial for the growth and survival of corals. Yet, the foraging patterns of the various herbivorous fish species are poorly understood. Using a multidimensional framework, we present novel individual-based analyses of species' realized functional niches, which we apply to a herbivorous coral reef fish community. In calculating niche volumes for 21 species, based on their microhabitat utilization patterns during foraging, and computing functional overlaps, we provide a measurement of functional redundancy or complementarity. Complementarity is the inverse of redundancy and is defined as less than 50% overlap in niche volumes. The analyses reveal extensive complementarity with an average functional overlap of just 15.2%. Furthermore, the analyses divide herbivorous reef fishes into two broad groups. The first group (predominantly surgeonfishes and parrotfishes) comprises species feeding on exposed surfaces and predominantly open reef matrix or sandy substrata, resulting in small niche volumes and extensive complementarity. In contrast, the second group consists of species (predominantly rabbitfishes) that feed over a wider range of microhabitats, penetrating the reef matrix to exploit concealed surfaces of various substratum types. These species show high variation among individuals, leading to large niche volumes, more overlap and less complementarity. These results may have crucial consequences for our understanding of herbivorous processes on
Interactive visualization of multiresolution image stacks in 3D.
Trotts, Issac; Mikula, Shawn; Jones, Edward G
2007-04-15
Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
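The tier-selection idea described above (access tiles from whichever tier makes their texels project to roughly one screen pixel) can be sketched in a few lines. The distance model below is a simplified perspective assumption for illustration, not StackVis's actual texel-projection criterion:

```python
import math

def choose_tier(distance, texel0, pixel, max_tier):
    """Pick the quad-tree tier whose projected texel best matches one screen pixel.

    Assumes tier t texels are texel0 * 2**t across (standard quad-tree) and that
    projected size falls off as 1/distance; names and units are illustrative.
    """
    t = math.floor(math.log2(max(pixel * distance / texel0, 1.0)))
    return min(t, max_tier)

# Doubling the viewing distance moves one tier coarser, so tiles drawn from any
# depth in the stack appear roughly the same size on screen:
print([choose_tier(d, texel0=1.0, pixel=1.0, max_tier=8) for d in (1, 2, 4, 8)])
# [0, 1, 2, 3]
```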
Segmentation of textured images using a multiresolution Gaussian autoregressive model.
Comer, M L; Delp, E J
1999-01-01
We present a new algorithm for segmentation of textured images using a multiresolution Bayesian approach. The new algorithm uses a multiresolution Gaussian autoregressive (MGAR) model for the pyramid representation of the observed image, and assumes a multiscale Markov random field model for the class label pyramid. The models used in this paper incorporate correlations between different levels of both the observed image pyramid and the class label pyramid. The criterion used for segmentation is the minimization of the expected value of the number of misclassified nodes in the multiresolution lattice. The estimate which satisfies this criterion is referred to as the "multiresolution maximization of the posterior marginals" (MMPM) estimate, and is a natural extension of the single-resolution "maximization of the posterior marginals" (MPM) estimate. Previous multiresolution segmentation techniques have been based on the maximum a posteriori (MAP) estimation criterion, which has been shown to be less appropriate for segmentation than the MPM criterion. It is assumed that the number of distinct textures in the observed image is known. The parameters of the MGAR model (the means, prediction coefficients, and prediction error variances of the different textures) are unknown. A modified version of the expectation-maximization (EM) algorithm is used to estimate these parameters. The parameters of the Gibbs distribution for the label pyramid are assumed to be known. Experimental results demonstrating the performance of the algorithm are presented.
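The MPM decision rule itself is simple once posterior marginals are in hand (in the paper they come from the multiresolution EM machinery, which is not reproduced here): each site independently takes the label with the largest marginal posterior. A toy sketch with a hand-made 2x2 marginal array:

```python
import numpy as np

def mpm_labels(posterior):
    """MPM estimate: per-site argmax of posterior marginals.

    posterior: array (K, H, W) with P(label = k | data) per site. Minimizing the
    expected number of misclassified sites selects the marginal mode at each
    site independently.
    """
    return np.argmax(posterior, axis=0)

post = np.array([[[0.7, 0.2], [0.4, 0.1]],
                 [[0.3, 0.8], [0.6, 0.9]]])   # K=2 classes on a 2x2 image
print(mpm_labels(post))  # [[0 1]
                         #  [1 1]]
```

MAP, by contrast, maximizes the joint posterior over the whole label field at once and can differ from these site-wise marginal modes, which is the distinction the abstract draws.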
NASA Astrophysics Data System (ADS)
Yu-Jin, Zhang; Xing-Zhe, Li; Ji-Cai, Liu; Chuan-Kui, Wang
2016-01-01
Optical limiting properties of two soluble chloroindium phthalocyanines with α- and β-alkoxyl substituents in a nanosecond laser field have been studied by numerically solving the coupled singlet-triplet rate equation together with the paraxial wave field equation under the Crank-Nicolson scheme. Both transverse and longitudinal effects of the laser field on photophysical properties of the compounds are considered. An effective transfer time between the ground state and the lowest triplet state is defined in the reformulated rate equations to characterize the dynamics of singlet-triplet state population transfer. It is found that both phthalocyanines exhibit good nonlinear optical absorption abilities, while the compound with the α-substituent shows enhanced optical limiting performance. Our ab initio calculations reveal that the phthalocyanine with the α-substituent has more pronounced electron delocalization and lower frontier orbital transfer energies, which are responsible for its preferable photophysical properties. Project supported by the National Basic Research Program of China (Grant No. 2011CB808100), the National Natural Science Foundation of China (Grant Nos. 11204078 and 11574082), and the Fundamental Research Funds for the Central Universities of China (Grant No. 2015MS54).
A new study on mammographic image denoising using multiresolution techniques
NASA Astrophysics Data System (ADS)
Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju
2015-12-01
Mammography is the simplest and most effective technology for early detection of breast cancer. However, lesion areas of the breast are difficult to detect because mammograms are contaminated with noise. This work discusses various multiresolution denoising techniques, including classical methods based on wavelets and contourlets; emerging multiresolution methods are also investigated. We propose a new denoising method based on the dual-tree contourlet transform (DCT), which possesses the advantages of approximate shift invariance, directionality and anisotropy. The proposed method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, and obtains better performance than the other methods both in visual effect and in terms of Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) values.
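Two of the quoted quality metrics are short enough to state exactly. A minimal sketch of MSE and PSNR for 8-bit images (SSIM is omitted: its windowed local statistics make it considerably longer):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is 255 for 8-bit images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

clean = np.zeros((64, 64), dtype=np.uint8)
noisy = clean + 10                       # uniform error of 10 grey levels
print(round(psnr(clean, noisy), 2))      # 28.13 (dB)
```

Higher PSNR (lower MSE) against a reference image is the quantitative criterion the comparison above uses.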
Vélez, Carlos G; Letcher, Peter M; Schultz, Sabina; Powell, Martha J; Churchill, Perry F
2011-01-01
Chytridium olla A. Braun, the first described chytrid and an obligate algal parasite, is the type for the genus and thus the foundation of family Chytridiaceae, order Chytridiales, class Chytridiomycetes and phylum Chytridiomycota. Chytridium olla was isolated in coculture with its host, Oedogonium capilliforme. DNA was extracted from the coculture, and 18S, 28S and ITS1-5.8S-ITS2 rDNA were amplified with universal fungal primers. Free swimming zoospores and zoospores in mature sporangia were examined with electron microscopy. Molecular analyses placed C. olla in a clade in Chytridiales with isolates of Chytridium lagenaria and Phlyctochytrium planicorne. Ultrastructural analysis revealed C. olla to have a Group II-type zoospore, previously described for Chytridium lagenaria and Phlyctochytrium planicorne. On the basis of zoospore ultrastructure, family Chytridiaceae is emended to include the type of Chytridium and other species with a Group II-type zoospore, and the new family Chytriomycetaceae is delineated to include members of Chytridiales with a Group I-type zoospore.
Multiresolution molecular mechanics: Implementation and efficiency
NASA Astrophysics Data System (ADS)
Biyikli, Emre; To, Albert C.
2017-01-01
Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.
Telescopic multi-resolution augmented reality
NASA Astrophysics Data System (ADS)
Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold
2014-05-01
To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information but also to conduct thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Toward this end, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training, as well as 'make believe' entertainment, in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations.
Feng, Tian-Ya; Yang, Zhi-Kai; Zheng, Jian-Wei; Xie, Ying; Li, Da-Wei; Murugan, Shanmugaraj Bala; Yang, Wei-Dong; Liu, Jie-Sheng; Li, Hong-Ye
2015-05-28
Phosphorus (P) is an essential macronutrient for the survival of marine phytoplankton. In the present study, the phytoplankton response to phosphorus limitation was studied by proteomic profiling of the diatom Phaeodactylum tricornutum at both the cellular and molecular levels. A total of 42 non-redundant proteins were identified, among which 8 were upregulated and 34 downregulated. The results also showed that proteins associated with inorganic phosphate uptake were downregulated, whereas proteins involved in organic phosphorus uptake, such as alkaline phosphatase, were upregulated. Proteins involved in metabolic responses such as protein degradation, lipid accumulation and photorespiration were upregulated, whereas energy metabolism, photosynthesis, and amino acid and nucleic acid metabolism tended to be downregulated. Overall, our results show the changes in protein levels of P. tricornutum during phosphorus stress. This study is a prelude to understanding the role of phosphorus in marine biogeochemical cycles and the phytoplankton response to phosphorus scarcity in the ocean. It also provides insight into the succession of phytoplankton communities, providing a scientific basis for elucidating the mechanism of algal blooms.
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; Calvin, Justus A.; Fann, George I.; Fosso-Tande, Jacob; Galindo, Diego; Hammond, Jeff R.; Hartman-Baker, Rebecca; Hill, Judith C.; Jia, Jun; Kottmann, Jakob S.; Yvonne Ou, M-J.; Pei, Junchen; Ratcliff, Laura E.; Reuter, Matthew G.; Richie-Halford, Adam C.; Romero, Nichols A.; Sekino, Hideo; Shelton, William A.; Sundahl, Bryan E.; Thornton, W. Scott; Valeev, Edward F.; Vázquez-Mayagoitia, Álvaro; Vence, Nicholas; Yanai, Takeshi; Yokoi, Yukina
2016-01-01
MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
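The core multiresolution idea (refine adaptively until a precision target is met, spending effort only where the function is difficult) can be illustrated with a toy 1D quadrature. This is a drastic simplification of MADNESS, which uses multiwavelet bases and separated representations; every numerical choice below is illustrative:

```python
import math

def adaptive_refine(f, a, b, tol, depth=0, max_depth=20):
    """Adaptively subdivide [a, b] until a midpoint-rule patch meets `tol`.

    The local error is estimated by comparing a one-panel estimate against a
    two-panel refinement; the tolerance is split between children so the total
    budget is controlled -- a toy version of guaranteed-precision refinement.
    """
    mid = 0.5 * (a + b)
    coarse = f(mid) * (b - a)                                      # one panel
    fine = f(0.5 * (a + mid)) * (mid - a) + f(0.5 * (mid + b)) * (b - mid)
    if abs(fine - coarse) < tol or depth >= max_depth:
        return fine
    return (adaptive_refine(f, a, mid, tol / 2, depth + 1, max_depth)
            + adaptive_refine(f, mid, b, tol / 2, depth + 1, max_depth))

# Sharp Gaussian: refinement concentrates near x = 0, stays coarse in the tails.
approx = adaptive_refine(lambda x: math.exp(-50 * x * x), -1.0, 1.0, 1e-6)
print(abs(approx - math.sqrt(math.pi / 50)) < 1e-4)  # matches the exact integral
```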
Coupled-Cluster in Real Space I: CC2 Ground State Energies using Multi-Resolution Analysis.
Kottmann, Jakob Siegfried; Bischoff, Florian Andreas
2017-09-13
A framework to calculate approximate coupled-cluster CC2 ground-state correlation energies in a multiresolution basis is derived and implemented into the MADNESS library. The CC2 working equations are rederived in first quantization, which makes them suitable for real-space methods. The first-quantized equations can be interpreted diagrammatically using the usual diagrams from second quantization with adjusted interpretation rules. Singularities arising from the nuclear and electronic potentials are regularized by explicitly taking the nuclear and electronic cusps into account. The regularized three- and six-dimensional cluster functions are represented directly on a grid. The resulting equations are free of singularities and virtual orbitals, which results in a lower intrinsic scaling of N^3. Correlation energies close to the basis-set limit are computed for small molecules. This work is the first step towards CC2 excitation energies in a multiresolution basis.
Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization
LaMar, E; Hamann, B; Joy, K I
2001-10-16
Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
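The unique-pair table described above can be sketched in a few lines for byte data: there are at most 256 x 256 distinct (original, approximation) pairs, so pair frequencies are tabulated once and the error function is evaluated per unique pair instead of per voxel. The squared-difference error function is illustrative; any per-pair function works:

```python
import numpy as np

def table_error(original, approx, err_fn):
    """Sum per-voxel errors via a table of unique (original, approx) byte pairs."""
    pairs = original.astype(np.int64) * 256 + approx.astype(np.int64)
    counts = np.bincount(pairs.ravel(), minlength=256 * 256)
    idx = np.nonzero(counts)[0]                 # only pairs that actually occur
    o, a = idx // 256, idx % 256
    return float(np.sum(counts[idx] * err_fn(o, a)))

rng = np.random.default_rng(2)
orig = rng.integers(0, 256, size=(64, 64, 64), dtype=np.uint8)
appr = np.clip(orig.astype(int) + rng.integers(-3, 4, orig.shape), 0, 255).astype(np.uint8)

sq = lambda o, a: (o - a) ** 2
fast = table_error(orig, appr, sq)
slow = float(np.sum((orig.astype(int) - appr.astype(int)) ** 2))
print(fast == slow)  # True: identical sum, far fewer err_fn evaluations
```

Changing the transfer function only changes `err_fn`, so the frequency table is reused and the error is re-evaluated over the small set of unique pairs.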
Multiresolution Analysis of UTAT B-spline Curves
NASA Astrophysics Data System (ADS)
Lamnii, A.; Mraoui, H.; Sbibih, D.; Zidna, A.
2011-09-01
In this paper, we describe a multiresolution curve representation based on periodic uniform tension algebraic trigonometric (UTAT) spline wavelets of class ??? and order four. Then we determine the decomposition and the reconstruction vectors corresponding to UTAT-spline spaces. Finally, we give some applications in order to illustrate the efficiency of the proposed approach.
NASA Astrophysics Data System (ADS)
Kim, Eui Joong
Large scale ground motion simulation requires supercomputing systems in order to obtain reliable and useful results within reasonable elapsed time. In this study, we develop a framework for terascale ground motion simulations in highly heterogeneous basins. As part of the development, we present a parallel octree-based multiresolution finite element methodology for the elastodynamic wave propagation problem. The octree-based multiresolution finite element method reduces memory use significantly and improves overall computational performance. The framework comprises three parts: (1) an octree-based mesh generator, Euclid, developed by Tu and O'Hallaron; (2) a parallel mesh partitioner, ParMETIS, developed by Karypis et al. [2]; and (3) a parallel octree-based multiresolution finite element solver, QUAKE, developed in this study. Realistic earthquake parameters, soil material properties, and sedimentary basin dimensions produce extremely large meshes. The out-of-core version of the octree-based mesh generator, Euclid, overcomes the resulting severe memory limitations. By using a parallel, distributed-memory graph partitioning algorithm, ParMETIS partitions large meshes, overcoming the memory and cost problem. Despite the capability of the Octree-Based Multiresolution Mesh Method (OBM3), large problem sizes necessitate parallelism to handle the large memory and work requirements. The parallel OBM3 elastic wave propagation code, QUAKE, has been developed to address these issues. The numerical methodology and the framework have been used to simulate the seismic response of both idealized systems and of the Greater Los Angeles basin to simple pulses and to a mainshock of the 1994 Northridge Earthquake, for frequencies of up to 1 Hz and a domain size of 80 km x 80 km x 30 km. In the idealized models, QUAKE shows good agreement with the analytical Green's function solutions. In the realistic models for the Northridge earthquake mainshock, QUAKE qualitatively agrees, with at most
Applying multi-resolution numerical methods to geodynamics
NASA Astrophysics Data System (ADS)
Davies, David Rhodri
structured grid solution strategies, the unstructured techniques utilized in 2-D would throw away the regular grid and, with it, the major benefits of the current solution algorithms. Alternative avenues towards multi-resolution must therefore be sought. A non-uniform structured method that produces similar advantages to unstructured grids is introduced here, in the context of the pre-existing 3-D spherical mantle dynamics code, TERRA. The method, based upon the multigrid refinement techniques employed in the field of computational engineering, is used to refine and solve on a radially non-uniform grid. It maintains the key benefits of TERRA's current configuration, whilst also overcoming many of its limitations. Highly efficient solutions to non-uniform problems are obtained. The scheme is highly resourceful in terms of RAM, meaning that one can attempt calculations that would otherwise be impractical. In addition, the solution algorithm reduces the CPU-time needed to solve a given problem. Validation tests illustrate that the approach is accurate and robust. Furthermore, by being conceptually simple and straightforward to implement, the method negates the need to reformulate large sections of code. The technique is applied to highly advanced 3-D spherical mantle convection models. Due to its resourcefulness in terms of RAM, the modified code allows one to efficiently resolve thermal boundary layers at the dynamical regime of Earth's mantle. The simulations presented are therefore of superior vigor to the highest attained to date in 3-D spherical geometry, achieving Rayleigh numbers of order 10^9. Upwelling structures are examined, focussing upon the nature of deep mantle plumes. Previous studies have shown long-lived, anchored, coherent upwelling plumes to be a feature of low to moderate vigor convection.
Since more vigorous convection traditionally shows greater time-dependence, the fixity of upwellings would not logically be expected for non-layered convection at higher
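The refinement scheme above builds on standard geometric multigrid. As a self-contained illustration of the refine-restrict-correct machinery (a textbook 1-D V-cycle for -u'' = f with damped Jacobi smoothing, not TERRA's actual radial scheme), consider:

```python
def vcycle(u, f, h, nu=3, omega=2.0 / 3.0):
    """One V-cycle of geometric multigrid for -u'' = f on a uniform 1-D
    grid with zero boundary values (u[0] = u[-1] = 0)."""
    n = len(u) - 1

    def smooth(u):
        for _ in range(nu):                  # damped (weighted) Jacobi sweeps
            u = [0.0] + [(1 - omega) * u[i]
                         + omega * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
                         for i in range(1, n)] + [0.0]
        return u

    u = smooth(u)                            # pre-smoothing
    if n <= 2:
        return u                             # coarsest level: smoothing only
    r = [0.0] + [f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
                 for i in range(1, n)] + [0.0]
    # restrict the residual to the coarse grid (full weighting)
    rc = [0.0] + [0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
                  for i in range(1, n // 2)] + [0.0]
    ec = vcycle([0.0] * (n // 2 + 1), rc, 2 * h, nu, omega)
    # prolong the coarse-grid correction (linear interpolation) and add it
    for i in range(1, n):
        u[i] += 0.5 * (ec[i // 2] + ec[(i + 1) // 2])
    return smooth(u)                         # post-smoothing

n, h = 16, 1.0 / 16
f = [1.0] * (n + 1)                          # -u'' = 1, so u = x(1-x)/2
u = [0.0] * (n + 1)
for _ in range(20):
    u = vcycle(u, f, h)
err = max(abs(u[i] - (i * h) * (1 - i * h) / 2) for i in range(n + 1))
```

A non-uniform (e.g. radially graded) grid changes the stencil and transfer operators but not this overall V-cycle structure, which is why the approach slots into an existing structured-grid code without a large rewrite.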
Multiscale/multiresolution landslides susceptibility mapping
NASA Astrophysics Data System (ADS)
Grozavu, Adrian; Stanga, Iulian Cătălin; Patriche, Cristian Valeriu; Juravle, Doru Toader
2014-05-01
Within the European strategies, landslides are considered an important threat that requires detailed studies to identify areas where these processes could occur in the future and to design scientific and technical plans for landslide risk mitigation. With this in mind, assessing and mapping landslide susceptibility is an important preliminary step. Generally, landslide susceptibility at small scale (for large regions) can be assessed through a qualitative approach (expert judgement) based on a few variables, while studies at medium and large scales require a quantitative approach (e.g. multivariate statistics), a larger set of variables and, necessarily, a landslide inventory. Obviously, the results vary more or less from one scale to another, depending on the available input data, but also on the applied methodology. Since it is almost impossible to have a complete landslide inventory for large regions (e.g. at the continental level), it is very important to verify the compatibility and the validity of results obtained at different scales, identifying the differences and fixing the inherent errors. This paper aims at assessing and mapping landslide susceptibility at the regional level through a multiscale-multiresolution approach, from small scale and low resolution to large scale and high resolution of data and results, comparing the compatibility of the results. While the former could be used for studies at the European and national levels, the latter allow validation of the results, including through field surveys. The test area, namely the Barlad Plateau (more than 9000 sq. km), is located in Eastern Romania, covering a region where both the natural environment and the human factor create a causal context that favors these processes. The landslide predictors were initially derived from various databases available at the pan-European level and progressively completed and/or enhanced together with the scale and the resolution: the topography (from SRTM at 90 meters to digital
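The quantitative, multivariate-statistical approach mentioned for medium and large scales is often a logistic-regression-style susceptibility model. A minimal sketch with one made-up predictor (normalized slope angle) and a toy landslide inventory, fitted by plain gradient ascent, hypothetical data and names throughout:

```python
import math

def fit_logistic(X, y, lr=2.0, steps=3000):
    """Plain gradient-ascent logistic regression: a stand-in for the
    multivariate statistical susceptibility models used at medium/large
    scales (real studies use many predictors and proper software)."""
    w = [0.0] * (len(X[0]) + 1)              # bias + one weight per predictor
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted susceptibility
            grad[0] += yi - p
            for j, xj in enumerate(xi):
                grad[j + 1] += (yi - p) * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def susceptibility(w, x):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy inventory: predictor = normalized slope angle; 1 = mapped landslide.
X = [[0.1], [0.2], [0.3], [0.4], [0.6], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w = fit_logistic(X, y)
```

The fitted model assigns high susceptibility to steep cells and low susceptibility to gentle ones; rerunning it at each scale/resolution level is the multiscale comparison the paper describes.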
Zhang, Le; Jiang, Beini; Wu, Yukun; Strouthos, Costas; Sun, Phillip Zhe; Su, Jing; Zhou, Xiaobo
2011-12-16
Multiscale agent-based modeling (MABM) has been widely used to simulate Glioblastoma Multiforme (GBM) and its progression. At the intracellular level, the MABM approach employs a system of ordinary differential equations to describe quantitatively specific intracellular molecular pathways that determine phenotypic switches among cells (e.g. from migration to proliferation and vice versa). At the intercellular level, MABM describes cell-cell interactions by a discrete module. At the tissue level, partial differential equations are employed to model the diffusion of chemoattractants, which are the input factors of the intracellular molecular pathway. Moreover, multiscale analysis makes it possible to explore the molecules that play important roles in determining the cellular phenotypic switches that in turn drive the whole GBM expansion. However, owing to limited computational resources, MABM is currently a theoretical biological model that uses relatively coarse grids to simulate a few cancer cells in a small slice of brain cancer tissue. In order to improve this theoretical model to simulate and predict actual GBM cancer progression in real time, a graphics processing unit (GPU)-based parallel computing algorithm was developed and combined with the multi-resolution design to speed up the MABM. The simulated results demonstrated that the GPU-based, multi-resolution and multiscale approach can accelerate the previous MABM around 30-fold with relatively fine grids in a large extracellular matrix. Therefore, the new model has great potential for simulating and predicting real-time GBM progression, if real experimental data are incorporated.
A multi-resolution envelope-power based model for speech intelligibility.
Jørgensen, Søren; Ewert, Stephan D; Dau, Torsten
2013-07-01
The speech-based envelope power spectrum model (sEPSM) presented by Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] estimates the envelope power signal-to-noise ratio (SNRenv) after modulation-frequency selective processing. Changes in this metric were shown to account well for changes of speech intelligibility for normal-hearing listeners in conditions with additive stationary noise, reverberation, and nonlinear processing with spectral subtraction. In the latter condition, the standardized speech transmission index [(2003). IEC 60268-16] fails. However, the sEPSM is limited to conditions with stationary interferers, due to the long-term integration of the envelope power, and cannot account for increased intelligibility typically obtained with fluctuating maskers. Here, a multi-resolution version of the sEPSM is presented where the SNRenv is estimated in temporal segments with a modulation-filter dependent duration. The multi-resolution sEPSM is demonstrated to account for intelligibility obtained in conditions with stationary and fluctuating interferers, and noisy speech distorted by reverberation or spectral subtraction. The results support the hypothesis that the SNRenv is a powerful objective metric for speech intelligibility prediction.
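The key change described above is that SNRenv is computed in segments whose duration depends on the modulation filter. A toy sketch of that segmentation idea (the modulation filterbank and the model's exact normalization are omitted; segment duration = 1/cf and the quadrature combination are the only ingredients kept):

```python
import math

def snr_env_multires(env_sn, env_n, fs, cfs=(1, 2, 4, 8)):
    """Segment-based envelope-power SNR: one segment duration per
    modulation-filter centre frequency cf (duration = 1/cf seconds).
    env_sn, env_n: envelopes of noisy speech and of the noise alone."""
    total = 0.0
    for cf in cfs:
        seg = max(1, int(fs / cf))                 # samples per segment
        for start in range(0, len(env_sn), seg):
            s = env_sn[start:start + seg]
            n = env_n[start:start + seg]
            p_sn = sum(v * v for v in s) / len(s)  # envelope power, speech+noise
            p_n = max(sum(v * v for v in n) / len(n), 1e-12)  # floor the noise power
            snr = max(p_sn - p_n, 0.0) / p_n       # segmental SNRenv
            total += snr * snr
    return math.sqrt(total)                        # combine in quadrature

v = snr_env_multires([2.0] * 100, [1.0] * 100, fs=100)
```

Short segments at high modulation frequencies are what let the metric exploit the dips of a fluctuating masker, which the original long-term integration averaged away.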
Developing a multiscale, multi-resolution agent-based brain tumor model by graphics processing units
2011-01-01
PMID:22176732
Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound
NASA Astrophysics Data System (ADS)
Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.
2015-12-01
Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy guidance, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging, due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of the Gabor wavelet frequencies. High precision in detecting the needle voxels leads to a robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of the needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a 40% gain), and better robustness and confidence were confirmed in the practical experiments.
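A Gabor wavelet is a Gaussian-windowed sinusoid; oriented, elongated structures such as a needle respond strongly when the filter frequency matches. A 1-D quadrature-pair sketch of the idea (the paper works with 3-D volumes and several orientations; all parameter values here are illustrative):

```python
import math

def gabor_kernel(length, freq, sigma):
    """Even/odd (cosine/sine) Gabor quadrature pair of odd length."""
    half = length // 2
    even, odd = [], []
    for i in range(-half, half + 1):
        g = math.exp(-(i * i) / (2.0 * sigma * sigma))   # Gaussian window
        even.append(g * math.cos(2 * math.pi * freq * i))
        odd.append(g * math.sin(2 * math.pi * freq * i))
    return even, odd

def gabor_energy(signal, freq, sigma=4.0, length=15):
    """Local Gabor energy (squared magnitude of the complex response)."""
    even, odd = gabor_kernel(length, freq, sigma)
    half = length // 2
    out = []
    for c in range(half, len(signal) - half):
        re = sum(signal[c + i - half] * even[i] for i in range(length))
        im = sum(signal[c + i - half] * odd[i] for i in range(length))
        out.append(re * re + im * im)
    return out

sig = [math.cos(2 * math.pi * 0.25 * n) for n in range(100)]
match = gabor_energy(sig, 0.25)   # filter tuned to the signal frequency
off = gabor_energy(sig, 0.10)     # detuned filter
```

Running a bank of such filters at several frequencies and keeping the strongest responses is, in spirit, the multi-resolution frequency combination the paper optimizes.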
Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.
1984-10-19
A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λp, which depend on the parallel velocity, V∥, of the impacting particles.
Stokdyk, Joel P.; Firnstahl, Aaron; Spencer, Susan K.; Burch, Tucker R; Borchardt, Mark A.
2016-01-01
The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L−1 assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation.
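The 95% LOD is obtained by fitting a probit dose-response curve to the fraction of positive replicates at each spiked concentration and reading off the 95th percentile. A crude grid-search maximum-likelihood sketch of that calculation (the study would use a proper probit regression package; the data below are hypothetical):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_lod95(data, n=10):
    """Fit P(detect | conc) = Phi((log10(conc) - mu) / sigma) by
    maximum likelihood over a coarse (mu, sigma) grid, then return the
    concentration detected with 95% probability."""
    best, best_ll = None, -1e18
    for mu100 in range(-200, 201):            # mu in log10 units, step 0.01
        mu = mu100 / 100.0
        for s100 in range(1, 101):            # sigma in log10 units, step 0.01
            s = s100 / 100.0
            ll = 0.0
            for conc, pos in data:            # binomial log-likelihood
                p = min(max(phi((math.log10(conc) - mu) / s), 1e-9), 1 - 1e-9)
                ll += pos * math.log(p) + (n - pos) * math.log(1 - p)
            if ll > best_ll:
                best_ll, best = ll, (mu, s)
    mu, s = best
    return 10 ** (mu + 1.6449 * s)            # 95th percentile of the fit

# Hypothetical spiking series: (concentration in gc/rxn, positives out of 10).
data = [(2.0, 2), (5.0, 7), (10.0, 9), (20.0, 10)]
lod = probit_lod95(data, n=10)
```

Because the spikes are carried through the entire procedure (concentration, extraction, qPCR), the fitted LOD reflects processing losses, which is the point of the paper's approach.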
Boundary element based multiresolution shape optimisation in electrostatics
NASA Astrophysics Data System (ADS)
Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan
2015-09-01
We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.
A Multiresolution Method for Parameter Estimation of Diffusion Processes
Kou, S. C.; Olding, Benjamin P.; Lysy, Martin; Liu, Jun S.
2014-01-01
Diffusion process models are widely used in science, engineering and finance. Most diffusion processes are described by stochastic differential equations in continuous time. In practice, however, data is typically only observed at discrete time points. Except for a few very special cases, no analytic form exists for the likelihood of such discretely observed data. For this reason, parametric inference is often achieved by using discrete-time approximations, with accuracy controlled through the introduction of missing data. We present a new multiresolution Bayesian framework to address the inference difficulty. The methodology relies on the use of multiple approximations and extrapolation, and is significantly faster and more accurate than known strategies based on Gibbs sampling. We apply the multiresolution approach to three data-driven inference problems – one in biophysics and two in finance – one of which features a multivariate diffusion model with an entirely unobserved component. PMID:25328259
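The core trick, using several discrete-time approximations at different resolutions and extrapolating, can be shown on the one quantity of a diffusion that is deterministic: the mean of an Ornstein-Uhlenbeck process under Euler discretization. This is only a toy analogue of the paper's Bayesian machinery (the actual method extrapolates across full posteriors, not means):

```python
import math

def euler_mean(x0, theta, T, dt):
    """Mean of dX = -theta*X dt + sigma*dW under Euler discretization
    with step dt (the noise term does not affect the mean)."""
    x = x0
    for _ in range(round(T / dt)):
        x += -theta * x * dt
    return x

x0, theta, T = 1.0, 1.0, 1.0
coarse = euler_mean(x0, theta, T, 0.1)     # low-resolution approximation
fine = euler_mean(x0, theta, T, 0.05)      # doubled resolution
extrap = 2.0 * fine - coarse               # first-order Richardson extrapolation
exact = x0 * math.exp(-theta * T)          # true mean, e^(-theta*T)
```

Halving the step roughly halves the discretization error, so the extrapolated combination cancels the leading error term, giving accuracy far beyond either single resolution at little extra cost.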
Multiresolutional encoding and decoding in embedded image and video coders
NASA Astrophysics Data System (ADS)
Xiong, Zixiang; Kim, Beong-Jo; Pearlman, William A.
1998-07-01
We address multiresolutional encoding and decoding within the embedded zerotree wavelet (EZW) framework for both images and video. By varying a resolution parameter, one can obtain decoded images at different resolutions from one single encoded bitstream, which is already rate scalable for EZW coders. Similarly one can decode video sequences at different rates and different spatial and temporal resolutions from one bitstream. Furthermore, a layered bitstream can be generated with multiresolutional encoding, from which the higher resolution layers can be used to increase the spatial/temporal resolution of the images/video obtained from the low resolution layer. In other words, we have achieved full scalability in rate and partial scalability in space and time. This added spatial/temporal scalability is significant for emerging multimedia applications such as fast decoding, image/video database browsing, telemedicine, multipoint video conferencing, and distance learning.
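The resolution-scalable decoding described above can be demystified with a plain Haar wavelet pyramid: dropping the finest detail bands during reconstruction yields the image at half (quarter, ...) resolution from the same decomposition. A 1-D sketch (Haar here stands in for the paper's EZW zerotree machinery, which additionally provides rate scalability):

```python
def haar_forward(x):
    """One Haar analysis step: pairwise averages (coarse) and differences (detail)."""
    c = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return c, d

def haar_inverse(c, d):
    x = []
    for ci, di in zip(c, d):
        x += [ci + di, ci - di]
    return x

def encode(x, levels):
    """Full decomposition: [coarsest, d_coarsest, ..., d_finest]."""
    bands = []
    for _ in range(levels):
        x, d = haar_forward(x)
        bands.append(d)
    return [x] + bands[::-1]

def decode(bands, drop=0):
    """Multiresolutional decoding: ignore the `drop` finest detail bands
    to reconstruct at 1/2**drop of the full resolution."""
    x = bands[0]
    for d in bands[1:len(bands) - drop]:
        x = haar_inverse(x, d)
    return x

x = [1.0, 3.0, 5.0, 7.0, 2.0, 4.0, 6.0, 8.0]
bands = encode(x, 3)
```

One encoded representation thus serves every resolution; in the embedded coder the same holds per bit-plane, giving the combined rate/resolution scalability.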
Multiresolution and Explicit Methods for Vector Field Analysis and Visualization
NASA Technical Reports Server (NTRS)
Nielson, Gregory M.
1997-01-01
This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.
Adaptive feature enhancement for mammographic images with wavelet multiresolution analysis
NASA Astrophysics Data System (ADS)
Chen, Lulin; Chen, Chang W.; Parker, Kevin J.
1997-10-01
A novel and computationally efficient approach to adaptive mammographic image feature enhancement using wavelet-based multiresolution analysis is presented. Upon wavelet decomposition of a given mammographic image, we integrate the information of the tree-structured zero crossings of wavelet coefficients with the information of the low-pass-filtered subimage to enhance the desired image features. A discrete wavelet transform with pyramidal structure is employed to speed up the computation for wavelet decomposition and reconstruction. The spatio-frequency localization property of the wavelet transform is exploited based on the spatial coherence of the image and the principles of the human psycho-visual mechanism. Preliminary results show that the proposed approach is able to adaptively enhance local edge features, suppress noise, and improve global visualization of mammographic image features. This wavelet-based multiresolution analysis is therefore promising for computerized mass screening of mammograms.
Multiresolution analysis of Bursa Malaysia KLCI time series
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed
2017-05-01
In general, a time series is simply a sequence of numbers collected at regular intervals over a period of time. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series by using both time and frequency domain analysis. After that, prediction can be executed for the desired system for in-sample forecasting. In this study, multiresolution analysis with the assistance of the discrete wavelet transform (DWT) and the maximal overlap discrete wavelet transform (MODWT) will be used to pinpoint special characteristics of the Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, further case study discussions include the modeling of the Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results.
Multiresolution techniques for the classification of bioimage and biometric datasets
NASA Astrophysics Data System (ADS)
Chebira, Amina; Kovačević, Jelena
2007-09-01
We survey our work on adaptive multiresolution (MR) approaches to the classification of biological and fingerprint images. The system adds MR decomposition in front of a generic classifier consisting of feature computation and classification in each MR subspace, yielding local decisions, which are then combined into a global decision using a weighting algorithm. The system is tested on four different datasets: subcellular protein location images, Drosophila embryo images, histological images and fingerprint images. Given the very high accuracies obtained for all four datasets, we demonstrate that the space-frequency localized information in the multiresolution subspaces adds significantly to the discriminative power of the system. Moreover, we show that a vastly reduced set of features is sufficient. Finally, we prove that frames are the class of MR techniques that performs best in this context. This leads us to consider the construction of a new family of frames for classification, which we term lapped tight frame transforms.
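The local-to-global decision fusion described above amounts to a weighted vote over per-subspace class scores. A minimal sketch (the paper's actual weighting algorithm is not specified here; uniform or validation-accuracy-derived weights are typical choices, and the data below are hypothetical):

```python
def combine(decisions, weights):
    """Fuse per-subspace local decisions into one global class decision.
    decisions: list of {class: score} dicts, one per MR subspace.
    weights:   one weight per subspace."""
    scores = {}
    for local, w in zip(decisions, weights):
        for cls, p in local.items():
            scores[cls] = scores.get(cls, 0.0) + w * p
    return max(scores, key=scores.get)       # highest weighted score wins

# Three MR subspaces vote on two classes; the well-weighted first
# subspace dominates the fused decision.
decisions = [{"a": 0.9, "b": 0.1}, {"a": 0.2, "b": 0.8}, {"a": 0.6, "b": 0.4}]
label = combine(decisions, weights=[0.5, 0.2, 0.3])
```

The benefit claimed in the abstract is that each subspace sees a different space-frequency slice of the image, so even weak local classifiers contribute complementary evidence to the fused decision.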
Hergovich, Andreas; Gröbl, Kristian; Carbon, Claus-Christian
2011-01-01
Following Gustav Kuhn's inspiring technique of using magicians' acts as a source of insight into cognitive sciences, we used the 'paddle move' for testing the psychophysics of combined movement trajectories. The paddle move is a standard technique in magic consisting of a combined rotating and tilting movement. Careful control of the mutual speed parameters of the two movements makes it possible to inhibit the perception of the rotation, letting the 'magic' effect emerge--a sudden change of the tilted object. By using 3-D animated computer graphics we analysed the interaction of different angular speeds and the object shape/size parameters in evoking this motion disappearance effect. An angular speed of 540 degrees s(-1) (1.5 rev. s(-1)) sufficed to inhibit the perception of the rotary movement with the smallest object showing the strongest effect. 90.7% of the 172 participants were not able to perceive the rotary movement at an angular speed of 1125 degrees s(-1) (3.125 rev. s(-1)). Further analysis by multiple linear regression revealed major influences on the effectiveness of the magic trick of object height and object area, demonstrating the applicability of analysing key factors of magic tricks to reveal limits of the perceptual system.
Multiresolution Analysis by Infinitely Differentiable Compactly Supported Functions
1992-09-01
2. Wavelet decompositions. The up function provides an interesting example of wavelet decompositions via multiresolution. A general discussion of ... Math. Surveys 45:1 (1990), 87-120. [1] G. Strang and G. Fix, A Fourier analysis of the finite element variational method, C.I.M.E. II Ciclo 1971, in Constructive Aspects of Functional Analysis, ed. G. Geymonat, 1973, 793-840.
Multiple multiresolution representation of functions and calculus for fast computation
Fann, George I; Harrison, Robert J; Hill, Judith C; Jia, Jun; Galindo, Diego A
2010-01-01
We describe the mathematical representations, data structures and the implementation of the numerical calculus of functions in MADNESS, the multiresolution analysis environment for scientific simulations. In MADNESS, each smooth function is represented by an adaptive pseudo-spectral expansion in the multiwavelet basis to an arbitrary but finite precision. This is an extension of the capabilities of most of the existing net-, mesh- and spectral-based methods, where the discretization is based on a single adaptive mesh or expansion.
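The essence of such precision-driven adaptivity can be shown with a much simpler stand-in: subdivide an interval until a local interpolant matches the function to a requested tolerance, so smooth regions stay coarse and sharp features refine automatically. This is a toy analogue only; MADNESS uses multiwavelet expansions, not the midpoint test below, and all names here are invented:

```python
import math

def adaptive_intervals(f, a, b, tol, depth=0, max_depth=20):
    """Subdivide [a, b] until the linear interpolant between the
    endpoints matches f at the midpoint to within tol."""
    mid = 0.5 * (a + b)
    err = abs(f(mid) - 0.5 * (f(a) + f(b)))   # local error estimate
    if err <= tol or depth >= max_depth:
        return [(a, b)]                        # accept this cell
    return (adaptive_intervals(f, a, mid, tol, depth + 1, max_depth)
            + adaptive_intervals(f, mid, b, tol, depth + 1, max_depth))

# A narrow Gaussian: cells refine near the peak, stay coarse in the tails.
cells = adaptive_intervals(lambda x: math.exp(-50.0 * x * x), -1.0, 1.0, 1e-3)
widths = [b - a for a, b in cells]
```

The result is a dyadic cell structure whose depth tracks the local smoothness of the function, the same mechanism that lets MADNESS hit a user-specified precision without a globally fine mesh.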
Ray, J.; Lee, J.; Yadav, V.; ...
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion, or of hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
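The point about imposing non-negativity while keeping the problem linear (instead of log-transforming the field) can be illustrated with projected-gradient non-negative least squares. This is not the paper's StOMP iteration, just the simplest scheme with the same constraint-handling idea, on a hypothetical 2x2 system:

```python
def nnls_pg(A, b, x0, lr=0.05, steps=2000):
    """Projected gradient for min ||Ax - b||^2 subject to x >= 0:
    take a gradient step, then clip negative entries back to zero.
    The clipping is the non-negativity projection; no log transform,
    so the forward model stays linear."""
    m, n = len(A), len(A[0])
    x = list(x0)
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(xj - lr * gj, 0.0) for xj, gj in zip(x, g)]
    return x

# Unconstrained least squares would give x = (3, -2); with the
# non-negativity projection the iteration settles at the constrained
# optimum instead.
x = nnls_pg([[1.0, 1.0], [1.0, 2.0]], [1.0, -1.0], [1.0, 1.0])
```

For this toy system the constrained optimum is the origin: the projection simply keeps the estimate feasible at every step, which is why the overall scheme avoids the strong nonlinearities of a log-transformed field.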
A DTM Multi-Resolution Compressed Model for Efficient Data Storage and Network Transfer
NASA Astrophysics Data System (ADS)
Biagi, L.; Brovelli, M.; Zamboni, G.
2011-08-01
In recent years the technological evolution of terrestrial, aerial and satellite surveying has considerably increased the measurement accuracy and, consequently, the quality of the derived information. At the same time, the ever smaller limitations of data storage devices, in terms of capacity and cost, have allowed the storage and elaboration of a larger number of instrumental observations. A significant example is the terrain height surveyed by LIDAR (LIght Detection And Ranging) technology, where several height measurements can be obtained for each square meter of land. The availability of such a large quantity of observations is an essential requisite for an in-depth knowledge of the phenomena under study. But, at the same time, the most common Geographical Information Systems (GISs) show latency in visualizing and analyzing this kind of data. This problem becomes more evident in the case of Internet GIS. These systems are based on the very frequent flow of geographical information over the internet and, for this reason, the bandwidth of the network and the size of the data to be transmitted are two fundamental factors to be considered in order to guarantee the actual usability of these technologies. In this paper we focus our attention on digital terrain models (DTMs) and we briefly analyse the problem of defining the minimal information necessary to store and transmit DTMs over a network, with a fixed tolerance, starting from a huge number of observations. Then we propose an innovative compression approach for sparse observations by means of multi-resolution spline function approximation. The method is able to provide metrical accuracy at least comparable to that provided by the most common deterministic interpolation algorithms (inverse distance weighting, local polynomials, radial basis functions). At the same time it dramatically reduces the amount of information required for storing or for transmitting and rebuilding a digital terrain
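The "minimal information within a fixed tolerance" idea can be sketched at its simplest: store a coarse level plus only those fine-level residuals that exceed the tolerance, and rebuild everything else by interpolation. This two-level 1-D toy is only an analogue of the paper's multi-resolution spline approach (names and data are invented):

```python
def compress_dtm(z, tol):
    """Keep every other height sample (coarse level) plus only the
    fine-level residuals whose magnitude exceeds tol; the rest is
    rebuilt by linear interpolation, so the error is bounded by tol."""
    coarse = z[::2]
    residuals = {}
    for i in range(1, len(z) - 1, 2):
        pred = 0.5 * (z[i - 1] + z[i + 1])   # interpolated prediction
        if abs(z[i] - pred) > tol:
            residuals[i] = z[i] - pred       # store only significant detail
    return coarse, residuals

def rebuild_dtm(coarse, residuals, n):
    z = [0.0] * n
    z[::2] = coarse                          # restore the coarse level
    for i in range(1, n - 1, 2):
        z[i] = 0.5 * (z[i - 1] + z[i + 1]) + residuals.get(i, 0.0)
    return z

# A profile with one sharp peak: only the two residuals around the
# peak need storing; the smooth flanks compress away entirely.
z = [0.0, 1.0, 2.0, 3.0, 10.0, 3.0, 2.0, 1.0, 0.0]
coarse, residuals = compress_dtm(z, tol=0.5)
zz = rebuild_dtm(coarse, residuals, len(z))
```

Repeating the split across several levels gives the multi-resolution pyramid, and the tolerance directly controls the storage/accuracy trade-off for network transfer.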
Cho, Eunji; Ahn, Miri; Kim, Young Hwan; Kim, Jongwon; Kim, Sunghwan
2013-10-01
A proton source employing a nanostructured gold surface for use in (+)-mode laser desorption ionization mass spectrometry (LDI-MS) was evaluated. Analysis of a perdeuterated polyaromatic hydrocarbon compound dissolved in regular toluene, perdeuterated toluene, and deuterated methanol all showed that protonated ions were generated regardless of the solvent system. Therefore, it was concluded that residual water on the surface of the LDI plate was the major source of protons. The fact that residual water remaining after vacuum drying was the source of protons suggests that protons may be the limiting reagent in the LDI process and that overall ionization efficiency can be improved by incorporating an additional proton source. When extra proton sources, such as thiolate compounds and/or citric acid, were added to a nanostructured gold surface, the protonated signal abundance increased. These data show that protons are one of the limiting components in (+)-mode LDI MS analyses employing nanostructured gold surfaces. Therefore, it has been suggested that additional efforts are required to identify compounds that can act as proton donors without generating peaks that interfere with mass spectral interpretation.
Multiresolution persistent homology for excessively large biomolecular datasets
NASA Astrophysics Data System (ADS)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei
2015-10-01
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
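The rigidity density at the heart of the method is a sum of kernels centred on the atoms, with a scale parameter that acts as the "resolution of the topological lens." A 1-D Gaussian-kernel sketch (the parameter name `eta` and all coordinates are illustrative; the paper works in 3-D and feeds the density into a filtration):

```python
import math

def rigidity_density(x, atoms, eta):
    """Flexibility-rigidity-index style density: a sum of Gaussian
    kernels centred on the atoms; eta sets the resolution."""
    return sum(math.exp(-((x - a) / eta) ** 2) for a in atoms)

# Two nearby atoms (0 and 1) plus a distant one (5), sampled on [0, 6].
atoms = [0.0, 1.0, 5.0]
fine = [rigidity_density(0.1 * i, atoms, eta=0.3) for i in range(61)]
coarse = [rigidity_density(0.1 * i, atoms, eta=3.0) for i in range(61)]
```

At small eta the density shows two separate peaks with a dip between the nearby atoms (atomic-scale topology); at large eta the pair merges into a single blob, so the filtration only sees the larger-scale structure, which is exactly how tuning the resolution focuses the analysis.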
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2015-01-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
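The consistency claim above, that applying a transfer function to a pdf behaves well where filter-then-classify does not, has a tiny worked illustration. The transfer function, block values and uniform histogram below are toy assumptions, not the paper's 4D Gaussian-mixture machinery:

```python
from collections import Counter

def tf(v):
    """Toy nonlinear transfer function: classify bright voxels as opaque."""
    return 1.0 if v > 0.5 else 0.0

block = [0.0, 1.0, 0.0, 1.0]   # fine-level voxel intensities in one block

# Standard down-sampling: average the intensities first, classify after.
naive = tf(sum(block) / len(block))                 # tf(0.5) -> 0.0

# pdf-based: keep the block's intensity histogram and apply the
# transfer function to the pdf (a convolution in the discrete case).
pdf = {v: c / len(block) for v, c in Counter(block).items()}
consistent = sum(p * tf(v) for v, p in pdf.items())  # 0.5*0 + 0.5*1 -> 0.5
```

Averaging first reports a fully transparent block even though half its voxels are opaque; the pdf-based value matches the mean of the fine-level classified result, which is the resolution-independence the representation is designed for.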
Multiresolution persistent homology for excessively large biomolecular datasets
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei
2015-01-01
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large-scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the dataset and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273,780 atoms is successfully analyzed; this would otherwise be inaccessible to the normal point-cloud method and unreliable by coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications to arbitrary datasets, such as social networks, biological networks, and graphs. PMID:26450288
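The rigidity-density construction can be sketched with a Gaussian correlation kernel, one of the kernel families used in flexibility-rigidity index work. This is a 2D toy, not the authors' implementation; the point set, grid size, and η values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "atoms": 50 random points in a 2D box (3D in the real method).
atoms = rng.uniform(0.0, 10.0, size=(50, 2))

def rigidity_density(grid_pts, atoms, eta):
    """Sum Gaussian correlation kernels centred on the atoms; eta is
    the resolution parameter: small eta resolves individual atoms,
    large eta blurs them into coarse-scale features."""
    d2 = ((grid_pts[:, None, :] - atoms[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * eta ** 2)).sum(axis=1)

# Evaluate on a grid; filtering this density (rather than the raw
# point cloud) is what makes the persistence computation scale-tunable.
xs = np.linspace(0.0, 10.0, 32)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

fine = rigidity_density(grid, atoms, eta=0.2)
coarse = rigidity_density(grid, atoms, eta=3.0)

# The coarse density is much smoother (smaller relative variation).
print(fine.std() / fine.mean(), coarse.std() / coarse.mean())
```

Sub-level sets of such a density, swept over a threshold, define the filtration on which persistence is computed; η plays the role of the "topological lens" described in the abstract.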
Survey and analysis of multiresolution methods for turbulence data
Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; Ahrens, James; Hamann, Bernd
2015-11-10
This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, to capture the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy driven turbulence on a 512^{3} mesh size, with an Atwood number, A = 0.05, and turbulent Reynolds number, Re_{t} = 1800, and the methods are tested against quantities pertaining to both velocities and active scalar (density) fields and their derivatives, spectra, and the properties of constant density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for large dataset handling, as well as simulation algorithms based on multi-resolution methods. In conclusion, the final section provides recommendations for best decomposition algorithms based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.
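The coefficient-truncation experiment underlying such comparisons can be sketched with a hand-rolled orthonormal Haar transform, the simplest of the wavelet families compared. The signal, level count, and the 10% retention rate are arbitrary choices for the example.

```python
import numpy as np

def haar_forward(x, levels):
    """Multi-level 1D orthonormal Haar transform."""
    coeffs, a = [], x.astype(float)
    for _ in range(levels):
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail
        coeffs.append(d)
        a = s
    coeffs.append(a)
    return coeffs

def haar_inverse(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

# A smooth test signal with one sharp feature, like a turbulence slice.
t = np.linspace(0.0, 1.0, 512)
x = np.sin(4 * np.pi * t) + (t > 0.5) * 0.5

coeffs = haar_forward(x, levels=5)
flat = np.concatenate(coeffs)

# Truncate: keep only the 10% largest-magnitude coefficients.
keep = int(0.1 * flat.size)
thresh = np.sort(np.abs(flat))[-keep]
flat_t = np.where(np.abs(flat) >= thresh, flat, 0.0)

# Rebuild the per-level structure and reconstruct.
sizes = [c.size for c in coeffs]
x_rec = haar_inverse(np.split(flat_t, np.cumsum(sizes)[:-1]))

rel_err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(rel_err)  # small, despite 90% of coefficients discarded
```

Swapping in other bases (Daubechies, curvelets, surfacelets) and other truncation rates while measuring errors in derived quantities is, in essence, the study design described above.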
Concurrent multiresolution finite element: formulation and algorithmic aspects
NASA Astrophysics Data System (ADS)
Tang, Shan; Kopacz, Adrian M.; Chan O'Keeffe, Stephanie; Olson, Gregory B.; Liu, Wing Kam
2013-12-01
A multiresolution concurrent theory for heterogeneous materials is proposed with novel macro-scale and micro-scale constitutive laws that include the plastic yield function at different length scales. In contrast to conventional plasticity, the plastic flow at the micro zone depends on the plastic strain gradient. The consistency conditions at the macro and micro zones result in a set of algebraic equations. Using appropriate boundary conditions, the finite element discretization was derived from a variational principle with extra degrees of freedom for the micro zones. In collaboration with LSTC Inc., the degrees of freedom at the micro zone and their related history variables have been added to LS-DYNA, and the 3D multiresolution theory has been implemented. Shear band propagation and a large-scale simulation of a shear-driven ductile fracture process were carried out. Our results show that the proposed multiresolution theory, in combination with the parallel implementation in LS-DYNA, can capture the effects of the microstructure on shear band propagation and allows for realistic modeling of the ductile fracture process.
Multiresolution stereo algorithm via wavelet representations for autonomous navigation
NASA Astrophysics Data System (ADS)
Shim, Minbo; Kurtz, John J.; Laine, Andrew F.
1999-03-01
Many autonomous vehicle navigation systems have adopted area-based stereo image processing techniques that use correlation measures to construct disparity maps as a basic obstacle detection and avoidance mechanism. Although intra-scale area-based techniques perform well in pyramid processing frameworks, significant performance enhancement and reliability improvement may be achievable using wavelet-based inter-scale correlation measures. This paper presents a novel framework, which can be integrated into unmanned ground vehicles, to recover 3D depth information (disparity maps) from binocular stereo images. We propose a wavelet-based coarse-to-fine incremental scheme to build refined disparity maps from coarse ones, and demonstrate that usable disparity maps can be generated from sparse (compressed) wavelet coefficients. Our approach is motivated by a biological mechanism of the human visual system, where multiresolution is a known feature of perceptual visual processing. Among traditional multiresolution approaches, wavelet analysis provides a mathematically coherent and precise definition of the concept of multiresolution. The variation of resolution enables the transform to identify image signatures of objects in scale space. We use these signatures, embedded in the wavelet transform domain, to construct more detailed disparity maps at finer levels. Inter-scale correlation measures within the framework are used to identify the signature at the next finer level, since wavelet coefficients contain well-characterized evolutionary information.
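The coarse-to-fine idea can be reduced to a 1D sketch: estimate a displacement on a down-sampled signal, then refine it with a narrow search at each finer level. Plain pair-averaging stands in for the wavelet approximation channel here, and the signal and shift are synthetic.

```python
import numpy as np

def best_shift(a, b, center, radius):
    """Integer shift of b that best matches a, searched near center."""
    shifts = list(range(center - radius, center + radius + 1))
    errs = [np.sum((a - np.roll(b, s)) ** 2) for s in shifts]
    return shifts[int(np.argmin(errs))]

def coarse_to_fine_shift(a, b, levels, radius=2):
    """Pyramid matching: estimate the shift on down-sampled copies,
    then refine with a +/- radius search at each finer level."""
    if levels == 0:
        return best_shift(a, b, 0, radius)
    a2 = 0.5 * (a[0::2] + a[1::2])      # stand-in for the wavelet
    b2 = 0.5 * (b[0::2] + b[1::2])      # approximation channel
    coarse = coarse_to_fine_shift(a2, b2, levels - 1, radius)
    return best_shift(a, b, 2 * coarse, radius)

rng = np.random.default_rng(2)
left = np.convolve(rng.normal(size=256), np.ones(5) / 5, mode="same")
true_d = 8                              # synthetic "disparity"
right = np.roll(left, -true_d)

print(coarse_to_fine_shift(left, right, levels=3))  # recovers 8
```

The payoff is the same as in the 2D disparity case: the expensive search happens only over a small window at each level, because the coarser level has already localized the answer.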
Cohen, Samuel A.; Hosea, Joel C.; Timberlake, John R.
1986-01-01
A limiter with a specially contoured front face accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution.
Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation
Alsubaie, Najah; Trahearn, Nicholas; Raza, Shan E. Ahmed; Snead, David; Rajpoot, Nasir M.
2017-01-01
Stain colour estimation is a prominent step in the analysis pipeline of most histology image processing algorithms. Providing a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. This approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered uncorrelated data. We conducted an extensive set of experiments to compare the proposed method to recent state-of-the-art methods and demonstrate the robustness of this approach using three different datasets of scanned slides, prepared in different labs using different scanners. PMID:28076381
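For context, the deconvolution step itself, once a stain matrix has been estimated, is commonly done via the Beer-Lambert optical-density model. The sketch below uses an assumed H&E-like stain matrix and synthetic concentrations, not the paper's estimation method.

```python
import numpy as np

I0 = 255.0   # white level

# An assumed H&E-like stain matrix (NOT the paper's estimate): rows
# are optical-density colour vectors, normalised to unit length.
M = np.array([[0.65, 0.70, 0.29],
              [0.07, 0.99, 0.11]])
M = M / np.linalg.norm(M, axis=1, keepdims=True)

def deconvolve(rgb, M):
    """Recover per-pixel stain concentrations via Beer-Lambert:
    optical density is linear in the stain amounts."""
    od = -np.log(np.clip(rgb, 1.0, None) / I0)   # optical density
    return od @ np.linalg.pinv(M)                # (n_pixels, n_stains)

# Round-trip check on synthetic concentrations.
c_true = np.array([[1.2, 0.3],
                   [0.1, 0.9]])
rgb = I0 * np.exp(-(c_true @ M))
c_est = deconvolve(rgb, M)

print(np.round(c_est, 3))  # recovers c_true
```

The quality of the whole pipeline hinges on how well M is estimated, which is exactly the problem the multi-resolution statistical analysis in the paper addresses.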
Wavelet multi-resolution analysis of energy transfer in turbulent premixed flames
NASA Astrophysics Data System (ADS)
Kim, Jeonglae; Bassenne, Maxime; Towery, Colin; Poludnenko, Alexei; Hamlington, Peter; Ihme, Matthias; Urzay, Javier
2016-11-01
Direct numerical simulations of turbulent premixed flames are examined using wavelet multi-resolution analyses (WMRA) as a diagnostics tool to evaluate the spatially localized inter-scale energy transfer in reacting flows. In non-reacting homogeneous-isotropic turbulence, the net energy transfer occurs from large to small scales on average, thus following the classical Kolmogorov energy cascade. However, in turbulent flames, our prior work suggests that thermal expansion leads to a small-scale pressure-work contribution that transfers energy in an inverse cascade on average, which has important consequences for LES modeling of reacting flows. The current study employs WMRA to investigate, simultaneously in physical and spectral spaces, the characteristics of this combustion-induced backscatter effect. The WMRA diagnostics provide spatial statistics of the spectra, scale-conditioned intermittency of velocity and vorticity, along with energy-transfer fluxes conditioned on the local progress variable.
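The reason wavelet decompositions suit such energy budgets is that an orthonormal analysis step splits kinetic energy exactly across scales, so scale-conditioned statistics are well defined. A one-step 1D numpy sketch on a synthetic signal (not the study's WMRA code):

```python
import numpy as np

rng = np.random.default_rng(3)
u = np.cumsum(rng.normal(size=256))      # red-noise "velocity" sample

# One orthonormal Haar analysis step: coarse and detail channels.
coarse = (u[0::2] + u[1::2]) / np.sqrt(2.0)
detail = (u[0::2] - u[1::2]) / np.sqrt(2.0)

# Orthonormality means no energy leaks between the two scales.
total = np.sum(u ** 2)
split = np.sum(coarse ** 2) + np.sum(detail ** 2)
print(np.isclose(split, total))  # True (Parseval)
```

Because each wavelet coefficient is also localized in space, the same decomposition supports the simultaneous physical/spectral-space statistics the abstract describes, unlike a Fourier energy spectrum.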
Gondois-Rey, F; Granjeaud, S; Rouillier, P; Rioualen, C; Bidaut, G; Olive, D
2016-05-01
The wide possibilities opened up by developments in multi-parametric cytometry are limited by the inadequacy of classical analysis methods for the multi-dimensional character of the data. While new computational tools seemed ideally adapted and were applied successfully, their adoption is still low among flow cytometrists. To integrate unsupervised computational tools into the management of multi-stained samples, we investigated their advantages and limits by comparison with manual gating on a typical sample analyzed in routine immunomonitoring. A single tube of PBMC, containing 11 populations characterized by different sizes and stained with 9 fluorescent markers, was used. We investigated the impact of the strategy choice on manual-gating variability, an undocumented pitfall of the analysis process, and we identified rules to optimize it. While assessing automatic gating as an alternative, we introduced the Multi-Experiment Viewer software (MeV) and validated it for merging clusters and interactively annotating populations. This procedure allowed the identification of both targeted and unexpected populations. However, careful examination of computed clusters in standard dot plots revealed some heterogeneity, often below 10%, that was overcome by increasing the number of clusters to be computed. MeV facilitated the identification of populations by displaying both the MFI and the marker signature of the dataset simultaneously. The procedure described here appears fully adapted to the homogeneous management of large numbers of multi-stained samples and improves multi-parametric analyses in a way close to the classical approach. © 2016 International Society for Advancement of Cytometry.
Ulbrich, Susanne E; Wolf, Eckhard; Bauersachs, Stefan
2012-01-01
Ongoing detailed investigations into embryo-maternal communication before implantation reveal that during early embryonic development a plethora of events are taking place. During the sexual cycle, remodelling and differentiation processes in the endometrium are controlled by ovarian hormones, mainly progesterone, to provide a suitable environment for establishment of pregnancy. In addition, embryonic signalling molecules initiate further sequences of events; of these molecules, prostaglandins are discussed herein as specifically important. Inadequate receptivity may impede preimplantation development and implantation, leading to embryonic losses. Because there are multiple factors affecting fertility, receptivity is difficult to comprehend. This review addresses different models and methods that are currently used and discusses their respective potentials and limitations in distinguishing key messages out of molecular twitter. Transcriptome, proteome and metabolome analyses generate comprehensive information and provide starting points for hypotheses, which need to be substantiated using further confirmatory methods. Appropriate in vivo and in vitro models are needed to disentangle the effects of participating factors in the embryo-maternal dialogue and to help distinguish associations from causalities. One interesting model is the study of somatic cell nuclear transfer embryos in normal recipient heifers. A multidisciplinary approach is needed to properly assess the importance of the uterine milieu for embryonic development and to use the large number of new findings to solve long-standing issues regarding fertility.
Spicer, L.J.; Ireland, J.J.
1986-07-01
Experiments were conducted to compare gonadotropin binding capacities calculated from limited-point saturation analyses to those obtained from Scatchard analyses, and to test the effects of membrane purity and source of gonadotropin receptors on determining the maximum percentage of radioiodinated hormone bound to receptors (maximum bindability). One- to four-point saturation analyses gave results comparable to those of Scatchard analyses when examining relative binding capacities of receptors. Crude testicular homogenates had lower estimates of maximum bindability of ¹²⁵I-labeled human chorionic gonadotropin than more purified gonadotropin receptor preparations. Under similar preparation techniques, some gonadotropin receptor sources exhibited low maximum bindability.
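For reference, the Scatchard analysis mentioned here linearises single-site binding: plotting bound/free against bound gives a line with slope -1/K_d and x-intercept B_max. A noise-free numpy sketch with invented values:

```python
import numpy as np

# Synthetic single-site binding: B = Bmax * F / (Kd + F).
Bmax, Kd = 200.0, 5.0                            # invented values
F = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # free hormone
B = Bmax * F / (Kd + F)                          # bound hormone

# Scatchard linearisation: B/F = Bmax/Kd - B/Kd, so a plot of
# B/F against B has slope -1/Kd and x-intercept Bmax.
slope, intercept = np.polyfit(B, B / F, 1)
Kd_est = -1.0 / slope
Bmax_est = -intercept / slope

print(Kd_est, Bmax_est)  # recovers 5.0 and 200.0 on noise-free data
```

A limited-point saturation analysis, by contrast, estimates capacity from only a few (here, one to four) concentrations, which is why the paper compares the two approaches.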
Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure
NASA Astrophysics Data System (ADS)
Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.
2014-08-01
Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task-queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver
Folsom, James Patrick
2015-01-01
Escherichia coli physiological, biomass elemental composition and proteome acclimations to ammonium-limited chemostat growth were measured at four levels of nutrient scarcity controlled via chemostat dilution rate. These data were compared with published iron- and glucose-limited growth data collected from the same strain and at the same dilution rates to quantify general and nutrient-specific responses. Severe nutrient scarcity resulted in an overflow metabolism with differing organic byproduct profiles based on limiting nutrient and dilution rate. Ammonium-limited cultures secreted up to 35 % of the metabolized glucose carbon as organic byproducts with acetate representing the largest fraction; in comparison, iron-limited cultures secreted up to 70 % of the metabolized glucose carbon as lactate, and glucose-limited cultures secreted up to 4 % of the metabolized glucose carbon as formate. Biomass elemental composition differed with nutrient limitation; biomass from ammonium-limited cultures had a lower nitrogen content than biomass from either iron- or glucose-limited cultures. Proteomic analysis of central metabolism enzymes revealed that ammonium- and iron-limited cultures had a lower abundance of key tricarboxylic acid (TCA) cycle enzymes and higher abundance of key glycolysis enzymes compared with glucose-limited cultures. The overall results are largely consistent with cellular economics concepts, including metabolic tradeoff theory where the limiting nutrient is invested into essential pathways such as glycolysis instead of higher ATP-yielding, but non-essential, pathways such as the TCA cycle. The data provide a detailed insight into ecologically competitive metabolic strategies selected by evolution, templates for controlling metabolism for bioprocesses and a comprehensive dataset for validating in silico representations of metabolism. PMID:26018546
Wei, Xinli; McCune, Bruce; Lumbsch, H. Thorsten; Li, Hui; Leavitt, Steven; Yamamoto, Yoshikazu; Tchabanenko, Svetlana; Wei, Jiangchun
2016-01-01
Delimiting species boundaries among closely related lineages often requires a range of independent data sets and analytical approaches. Similar to other organismal groups, robust species circumscriptions in fungi are increasingly investigated within an empirical framework. Here we attempt to delimit species boundaries in a closely related clade of lichen-forming fungi endemic to Asia, the Hypogymnia hypotrypa group (Parmeliaceae). In the current classification, the Hypogymnia hypotrypa group includes two species: H. hypotrypa and H. flavida, which are separated based on distinctive reproductive modes, the former producing soredia but absent in the latter. We reexamined the relationship between these two species using phenotypic characters and molecular sequence data (ITS, GPD, and MCM7 sequences) to address species boundaries in this group. In addition to morphological investigations, we used Bayesian clustering to identify potential genetic groups in the H. hypotrypa/H. flavida clade. We also used a variety of empirical, sequence-based species delimitation approaches, including: the “Automatic Barcode Gap Discovery” (ABGD), the Poisson tree process model (PTP), the General Mixed Yule Coalescent (GMYC), and the multispecies coalescent approach BPP. Different species delimitation scenarios were compared using Bayes factors delimitation analysis, in addition to comparisons of pairwise genetic distances, pairwise fixation indices (FST). The majority of the species delimitation analyses implemented in this study failed to support H. hypotrypa and H. flavida as distinct lineages, as did the Bayesian clustering analysis. However, strong support for the evolutionary independence of H. hypotrypa and H. flavida was inferred using BPP and further supported by Bayes factor delimitation. In spite of rigorous morphological comparisons and a wide range of sequence-based approaches to delimit species, species boundaries in the H. hypotrypa group remain uncertain. This study
Beyond Super-Parameterization: Multiresolutional Analysis Approach: NAM-SCA
NASA Astrophysics Data System (ADS)
Yano, J.
2008-12-01
The use of CSRM in place of conventional parameterizations, such as in super-parameterization, tends to give a misleading impression that the parameterization problem is resolved in this manner. However, the present session emphasizes that CSRM itself is built upon various subgrid-scale parameterizations. Thus we should move "beyond" super-parameterization by seeking methodologies (not necessarily parameterizations) for correctly and more efficiently representing complex atmospheric processes at smaller and smaller scales. In order to advance towards this goal, we propose the NAM-SCA approach: a Nonhydrostatic Anelastic Model under a Segmentally-Constant Approximation. The idea for this model is inspired by several different sources. First of all, a branch of mathematics called multiresolution analysis provides a philosophical basis for pursuing this possibility: in the same sense that wavelets can extensively compress an image, multiresolution analysis provides extensive possibilities for compressing numerical models. Application of this principle in practice leads to very flexible time-dependent mesh refinement or nesting, far more extensive than conventional approaches could provide. A "deconstruction" analysis of mass-flux convective parameterization, on the other hand, reveals that the mass-flux decomposition itself can be used for this purpose: NAM (or a CSRM) is simply decomposed into an ensemble of mass-flux modes, purely as a geometrical representation, in the spirit of multiresolution analysis, but without any further approximations. We call this representation SCA due to its geometrical constraint. NAM-SCA runs much more efficiently than a conventional CSRM by adopting high resolution only where it is required, and potentially it can achieve a much higher resolution than current CSRMs can. A two-dimensional version, already ready for operational implementation, will be presented.
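The segmentally-constant idea can be caricatured in 1D: represent a field by per-segment means, refining only where a tolerance is violated. This is an invented toy, not NAM-SCA itself.

```python
import numpy as np

def sca(x, tol):
    """Greedy segmentally-constant approximation of a 1D profile:
    a segment is kept as its mean unless the mean misses some sample
    by more than tol, in which case the segment is split in half."""
    def build(lo, hi):
        seg = x[lo:hi]
        if hi - lo <= 1 or np.abs(seg - seg.mean()).max() <= tol:
            return [(lo, hi, float(seg.mean()))]
        mid = (lo + hi) // 2
        return build(lo, mid) + build(mid, hi)
    return build(0, len(x))

t = np.linspace(0.0, 1.0, 256)
profile = np.tanh((t - 0.6) / 0.02)     # sharp front, flat elsewhere

segments = sca(profile, tol=0.05)

# Reconstruct the piecewise-constant field and check the error bound.
recon = np.empty_like(profile)
for lo, hi, mean in segments:
    recon[lo:hi] = mean

print(len(segments))                          # far fewer than 256
print(float(np.abs(profile - recon).max()))   # bounded by tol
```

Refinement concentrates around the front while the flat regions collapse to single segments, which is the same economy NAM-SCA exploits: high resolution only where the atmospheric fields demand it.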
Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.; Sharpley, Robert C.
1999-01-01
This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multi-resolution representation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes closely in line with their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been considered as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where solution changes are intense. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across
Geometric multi-resolution analysis for dictionary learning
NASA Astrophysics Data System (ADS)
Maggioni, Mauro; Minsker, Stanislav; Strawn, Nate
2015-09-01
We present an efficient algorithm and theory for Geometric Multi-Resolution Analysis (GMRA), a procedure for dictionary learning. Sparse dictionary learning provides the necessary complexity reduction for the critical applications of compression, regression, and classification in high-dimensional data analysis. As such, it is a critical technique in data science and it is important to have techniques that admit both efficient implementation and strong theory for large classes of theoretical models. By construction, GMRA is computationally efficient and in this paper we describe how the GMRA correctly approximates a large class of plausible models (namely, the noisy manifolds).
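A caricature of the construction: GMRA pairs a multiscale partition of the data with a low-rank affine approximation (local PCA) in each cell. The sketch below shows a single scale on a noisy circle; the partition is chosen by angle purely for convenience, and all values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy 1D manifold (a circle) embedded in R^3.
theta = rng.uniform(0, 2 * np.pi, 500)
X = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
X = X + 0.01 * rng.normal(size=X.shape)

def local_pca_error(X, n_cells, dim):
    """One scale of a GMRA-like dictionary: partition the data into
    cells (by angle, purely for convenience) and fit a rank-`dim`
    affine approximation (local PCA) in each cell; return RMS error."""
    angles = np.arctan2(X[:, 1], X[:, 0])
    cells = np.clip(((angles + np.pi) / (2 * np.pi) * n_cells).astype(int),
                    0, n_cells - 1)
    err = 0.0
    for c in range(n_cells):
        pts = X[cells == c]
        if len(pts) == 0:
            continue
        mu = pts.mean(axis=0)
        _, _, Vt = np.linalg.svd(pts - mu, full_matrices=False)
        proj = (pts - mu) @ Vt[:dim].T @ Vt[:dim] + mu
        err += np.sum((pts - proj) ** 2)
    return float(np.sqrt(err / len(X)))

err_coarse = local_pca_error(X, 4, 1)    # coarse partition
err_fine = local_pca_error(X, 16, 1)     # finer partition

print(err_coarse, err_fine)  # the finer scale fits the manifold better
```

The full construction organizes such partitions into a tree of nested scales; the theory in the paper quantifies how fast the approximation error decays as the cells shrink on a noisy manifold.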
A multi-resolution approach for optimal mass transport
NASA Astrophysics Data System (ADS)
Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen
2007-09-01
Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
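As a sanity check for any such solver, the 1D case has a closed form: the optimal L2 transport map is the monotone rearrangement obtained by matching quantiles, i.e. sorting both samples. The Gaussians below are invented test densities, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two 1D samples standing in for source and target densities.
x = rng.normal(0.0, 1.0, size=2000)     # source ~ N(0, 1)
y = rng.normal(3.0, 0.5, size=2000)     # target ~ N(3, 0.5)

# In 1D the optimal L2 map is the monotone rearrangement:
# sort both samples and pair off equal quantiles.
xs, ys = np.sort(x), np.sort(y)

# Empirical squared Wasserstein-2 cost per sample.
cost = float(np.mean((xs - ys) ** 2))

# Gaussian closed form: W2^2 = (m1-m2)^2 + (s1-s2)^2 = 9 + 0.25 = 9.25.
print(cost)  # close to 9.25, up to sampling error
```

In higher dimensions no such sorting trick exists, which is why iterative schemes like the gradient descent of the paper (warm-started on coarse grids, then refined) are needed.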
A Global, Multi-Resolution Approach to Regional Ocean Modeling
Du, Qiang
2013-11-08
In this collaborative research project between Pennsylvania State University, Colorado State University and Florida State University, we mainly focused on developing multi-resolution algorithms which are suitable for regional ocean modeling. We developed a hybrid implicit and explicit adaptive multirate time integration method to solve systems of time-dependent equations that present two significantly different scales. We studied the effects of spatial simplicial meshes on the stability and the conditioning of fully discrete approximations. We also studied adaptive finite element methods (AFEM) based upon the Centroidal Voronoi Tessellation (CVT) and superconvergent gradient recovery. Some of these techniques are now being used by geoscientists (such as those at LANL).
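A multirate integrator of the kind described can be caricatured with explicit Euler on a stiff/non-stiff pair: sub-step the fast equation, take one large step for the slow one. The rates and step sizes below are invented for the example.

```python
import numpy as np

# Toy stiff/non-stiff pair: y' = -100 y (fast), z' = -z (slow).
def multirate_euler(y0, z0, dt, m, n_steps):
    """Explicit Euler with m fast sub-steps per slow step, so the
    work is concentrated on the stiff component."""
    y, z = y0, z0
    for _ in range(n_steps):
        for _ in range(m):
            y += (dt / m) * (-100.0 * y)   # fast sub-steps of size dt/m
        z += dt * (-z)                     # one slow step of size dt
    return y, z

# Integrate to T = 1 with slow step 0.02 and 20 sub-steps per step.
y, z = multirate_euler(1.0, 1.0, dt=0.02, m=20, n_steps=50)

print(y, abs(z - np.exp(-1.0)))  # fast mode damped; slow mode accurate
```

With dt = 0.02 a single-rate explicit Euler would leave the fast mode completely undamped (amplification factor 1 - 100·0.02 = -1); sub-stepping restores stability while the slow equation still takes only 50 steps, which is the economy multirate methods deliver.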
Parallel object-oriented, denoising system using wavelet multiresolution analysis
Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.
2005-04-12
The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
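The per-coefficient denoising step at the heart of such a system is typically wavelet shrinkage: transform, soft-threshold the detail coefficients, invert. A single-level 1D Haar sketch follows; the signal and threshold are invented, and the patent's parallel partitioning is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

# Clean piecewise-constant signal plus noise.
clean = np.repeat([0.0, 2.0, -1.0, 1.0], 64)
noisy = clean + 0.3 * rng.normal(size=clean.size)

# One-level orthonormal Haar analysis.
approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2.0)
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2.0)

# Soft-threshold the details (the denoising step).
thr = 0.3
detail = np.sign(detail) * np.maximum(np.abs(detail) - thr, 0.0)

# Synthesis back to the original format.
denoised = np.empty_like(noisy)
denoised[0::2] = (approx + detail) / np.sqrt(2.0)
denoised[1::2] = (approx - detail) / np.sqrt(2.0)

print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```

Because thresholding acts on each coefficient independently, the work parallelizes naturally over data regions, and the only coordination needed is at region boundaries, which is what the claimed communication-requirement analysis handles.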
Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches
NASA Astrophysics Data System (ADS)
Duchaineau, Mark
2001-06-01
Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.
The multi-model and multi-resolution estimation of stratosphere-troposphere exchange
NASA Astrophysics Data System (ADS)
Yamashita, Yousuke; Takigawa, Masayuki; Ishijima, Kentaro; Akiyoshi, Hideharu; Yashiro, Hisashi; Satoh, Masaki
2017-04-01
The stratosphere-troposphere exchange (STE) of atmospheric mass is important for understanding the oxidizing capability of the troposphere as well as the interaction between atmospheric chemistry and climate, since lower-stratospheric ozone is efficiently transported to the troposphere through the synoptic- and small-scale mechanisms of the STE. This study identifies the mass flux of STE from the outputs of multi-model and multi-resolution simulations in March. We perform CCSR/NIES-MIROC3.2 chemistry-climate model simulations (T42 horizontal resolution with 34 vertical layers from the surface to the mesopause) and multi-resolution simulations (3 horizontal resolutions and 2 vertical resolutions) of the Nonhydrostatic Icosahedral Atmospheric Model (NICAM). The horizontal resolutions of the NICAM are about 220 km (GL05), 56 km (GL07), and 14 km (GL09), and the vertical resolutions around the tropopause are about 0.7-1.5 km for 40 layers and about 0.4 km for 78 layers (upper limits of the model are about 40 km for 40 layers and 50 km for 78 layers). The results show that the March average of the STE flux is large in magnitude for the coarse vertical resolutions and for the high horizontal resolutions. In addition, we find a spiral structure of the STE around cutoff cyclones in the high horizontal and high vertical resolution simulations. These results imply that the resolution dependency of the STE is possibly related to the oxidizing capability of the troposphere, which will be simulated with the chemistry-interactive version of the NICAM.
Hołowko, Elwira; Januszkiewicz, Kamil; Bolewicki, Paweł; Sitnik, Robert; Michoński, Jakub
2016-10-01
In forensic documentation with bloodstain pattern analysis (BPA), it is highly desirable to obtain an overall documentation of a crime scene non-invasively, but also to register single evidence objects, such as bloodstains, in high resolution. In this study, we propose a hierarchical 3D scanning platform designed according to the top-down approach known from traditional forensic photography. The overall 3D model of a scene is obtained via integration of laser scans registered from different positions. Parts of a scene that are particularly interesting are documented using a midrange scanner, and the smallest details are added in the highest resolution as close-up scans. The scanning devices are controlled using developed software equipped with advanced algorithms for point cloud processing. To verify the feasibility and effectiveness of multi-resolution 3D scanning in crime scene documentation, our platform was applied to document a murder scene simulated by the BPA experts from the Central Forensic Laboratory of the Police R&D, Warsaw, Poland. Applying the 3D scanning platform proved beneficial in the documentation of a crime scene combined with BPA. The multi-resolution 3D model enables virtual exploration of a scene in a three-dimensional environment and distance measurement, and gives a more realistic preservation of the evidence together with its surroundings. Moreover, high-resolution close-up scans aligned in a 3D model can be used to analyze bloodstains revealed at the crime scene. The results of BPA, such as trajectories and the area of origin, are visualized and analyzed in an accurate model of a scene. At this stage, a simplified approach treating the trajectory of a blood drop as a straight line is applied. Although the 3D scanning platform offers a new quality of crime scene documentation with BPA, some limitations of the technique are also mentioned.
Global multi-resolution terrain elevation data 2010 (GMTED2010)
Danielson, Jeffrey J.; Gesch, Dean B.
2011-01-01
In 1996, the U.S. Geological Survey (USGS) developed a global topographic elevation model designated as GTOPO30 at a horizontal resolution of 30 arc-seconds for the entire Earth. Because no single source of topographic information covered the entire land surface, GTOPO30 was derived from eight raster and vector sources that included a substantial amount of U.S. Defense Mapping Agency data. The quality of the elevation data in GTOPO30 varies widely; there are no spatially-referenced metadata, and the major topographic features such as ridgelines and valleys are not well represented. Despite its coarse resolution and limited attributes, GTOPO30 has been widely used for a variety of hydrological, climatological, and geomorphological applications as well as military applications, where a regional, continental, or global scale topographic model is required. These applications have ranged from delineating drainage networks and watersheds to using digital elevation data for the extraction of topographic structure and three-dimensional (3D) visualization exercises (Jenson and Domingue, 1988; Verdin and Greenlee, 1996; Lehner and others, 2008). Many of the fundamental geophysical processes active at the Earth's surface are controlled or strongly influenced by topography, thus the critical need for high-quality terrain data (Gesch, 1994). U.S. Department of Defense requirements for mission planning, geographic registration of remotely sensed imagery, terrain visualization, and map production are similarly dependent on global topographic data. Since the time GTOPO30 was completed, the availability of higher-quality elevation data over large geographic areas has improved markedly. New data sources include global Digital Terrain Elevation Data (DTEDRegistered) from the Shuttle Radar Topography Mission (SRTM), Canadian elevation data, and data from the Ice, Cloud, and land Elevation Satellite (ICESat). Given the widespread use of GTOPO30 and the equivalent 30-arc
Interactive, internet delivery of visualization via structured prerendered multiresolution imagery.
Chen, Jerry; Yoon, Ilmi; Bethel, Wes
2008-01-01
We present a novel approach for latency-tolerant delivery of visualization and rendering results, in which client-side frame-rate display performance is independent of source dataset size, image size, visualization technique, or rendering complexity. Our approach delivers pre-rendered, multiresolution images to a remote user as they navigate through different viewpoints, visualization, or rendering parameters. We employ demand-driven tiled, multiresolution image streaming and prefetching to efficiently utilize available bandwidth while providing the maximum resolution the user can perceive from a given viewpoint. Since image data is the only input to our system, our approach is generally applicable to all visualization and graphics rendering applications capable of generating image files in an ordered fashion. In our implementation, a normal web server provides on-demand images to a remote custom client application, which uses client-pull to obtain and cache only those images required to fulfill the interaction needs. The main contributions of this work are: (1) an architecture for latency-tolerant, remote delivery of precomputed imagery suitable for use with any visualization or rendering application capable of producing images in an ordered fashion; (2) a performance study showing the impact of diverse network environments and different tunable system parameters on end-to-end system performance in terms of deliverable frames per second.
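The demand-driven tiled delivery described above amounts to computing which tiles of a precomputed image pyramid intersect the current viewport, then fetching only those; a minimal sketch of the tile-selection step (tile size, index layout, and function name are assumptions for illustration, not the paper's actual client protocol):

```python
def tiles_for_view(level, x0, y0, x1, y1, tile=256):
    # Return (level, col, row) indices of all pyramid tiles that
    # intersect the viewport rectangle [x0, x1) x [y0, y1), given in
    # pixel coordinates of the chosen resolution level.
    c0, r0 = int(x0 // tile), int(y0 // tile)
    c1, r1 = int((x1 - 1) // tile), int((y1 - 1) // tile)
    return [(level, c, r)
            for r in range(r0, r1 + 1)
            for c in range(c0, c1 + 1)]
```

A client-pull loop would request exactly these tiles from the web server, cache them, and optionally prefetch the same indices at neighboring levels or viewpoints.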
Image coding by block prediction of multiresolution subimages.
Rinaldo, R; Calvagno, G
1995-01-01
The redundancy of the multiresolution representation has been clearly demonstrated in the case of fractal images, but it has not been fully recognized and exploited for general images. Fractal block coders have exploited the self-similarity among blocks in images. We devise an image coder in which the causal similarity among blocks of different subbands in a multiresolution decomposition of the image is exploited. In a pyramid subband decomposition, the image is decomposed into a set of subbands that are localized in scale, orientation, and space. The proposed coding scheme consists of predicting blocks in one subimage from blocks in lower-resolution subbands with the same orientation. Although our prediction maps are of the same kind as those used in fractal block coders, which are based on an iterative mapping scheme, our coding technique does not impose any contractivity constraint on the block maps. This makes the decoding procedure very simple and allows a direct evaluation of the mean squared error (MSE) between the original and the reconstructed image at coding time. More importantly, we show that the subband pyramid acts as an automatic block classifier, thus making the block search simpler and the block matching more effective. These advantages are confirmed by the experimental results, which show that the performance of our scheme is superior in both visual quality and MSE to that obtainable with standard fractal block coders, and also to that of other popular image coders such as JPEG.
Multiresolution in CROCO (Coastal and Regional Ocean Community model)
NASA Astrophysics Data System (ADS)
Debreu, Laurent; Auclair, Francis; Benshila, Rachid; Capet, Xavier; Dumas, Franck; Julien, Swen; Marchesiello, Patrick
2016-04-01
CROCO (Coastal and Regional Ocean Community model [1]) is a new oceanic modeling system built upon ROMS_AGRIF and the non-hydrostatic kernel of SNH, gradually including algorithms from MARS3D (sediments) and HYCOM (vertical coordinates). An important objective of CROCO is to provide the possibility of running truly multiresolution simulations. Our previous work on structured mesh refinement [2] allowed us to run two-way nesting with the following major features: conservation, spatial and temporal refinement, and coupling at the barotropic level. In this presentation, we will present the current developments in CROCO towards multiresolution simulations: connection between neighboring grids at the same level of resolution and load balancing on parallel computers. Results of preliminary experiments will be given both on an idealized test case and on a realistic simulation of the Bay of Biscay with high resolution along the coast. References: [1] : CROCO : http://www.croco-ocean.org [2] : Debreu, L., P. Marchesiello, P. Penven, and G. Cambon, 2012: Two-way nesting in split-explicit ocean models: algorithms, implementation and validation. Ocean Modelling, 49-50, 1-21.
Multisensor multiresolution data fusion for improvement in classification
NASA Astrophysics Data System (ADS)
Rubeena, V.; Tiwari, K. C.
2016-04-01
The rapid advancements in technology have facilitated easy availability of multisensor and multiresolution remote sensing data. Multisensor, multiresolution data contain complementary information, and fusion of such data may yield application-dependent significant information which may otherwise remain trapped within. The present work aims at improving classification by fusing features of coarse-resolution hyperspectral (1 m) LWIR and fine-resolution (20 cm) RGB data. The classification map comprises eight classes: Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil, and Unclassified. The processing methodology for the hyperspectral LWIR data comprises dimensionality reduction, resampling of the data by an interpolation technique for registering the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. In the case of the fine-resolution RGB data, a vegetation index is computed for classifying the vegetation class, and a morphological building index is calculated for buildings. In order to extract the textural features, occurrence and co-occurrence statistics are considered, and the features are extracted from all three bands of the RGB data. After extracting the features, Support Vector Machines (SVM) have been used for training and classification. To increase the classification accuracy, post-processing steps such as removal of spurious noise (e.g., salt-and-pepper noise) are performed, followed by a filtering process using majority voting within the objects for better object classification.
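The majority-voting post-processing step mentioned at the end can be sketched as a sliding-window filter over the classification map, replacing each label by the most frequent label in its neighborhood (window size and function name are illustrative assumptions, and a per-object vote as in the abstract would aggregate over segmented objects rather than fixed windows):

```python
import numpy as np

def majority_filter(labels, size=3):
    # Replace each pixel's class label by the most frequent label in
    # its size x size neighborhood (simple salt-and-pepper cleanup).
    h, w = labels.shape
    pad = size // 2
    padded = np.pad(labels, pad, mode='edge')
    out = np.empty_like(labels)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + size, j:j + size].ravel()
            vals, counts = np.unique(win, return_counts=True)
            out[i, j] = vals[np.argmax(counts)]
    return out
```

An isolated misclassified pixel is outvoted by its neighbors, which is exactly the salt-and-pepper removal effect described above.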
Automated transformation-invariant shape recognition through wavelet multiresolution
NASA Astrophysics Data System (ADS)
Brault, Patrice; Mounier, Hugues
2001-12-01
We present here new results in Wavelet Multi-Resolution Analysis (W-MRA) applied to shape recognition in automatic vehicle driving applications. Different types of shapes have to be recognized in this framework. They pertain to most of the objects entering the sensor field of a car. These objects can be road signs, lane separation lines, moving or static obstacles, other automotive vehicles, or visual beacons. The recognition process must be invariant to global transformations, affine or not, which are rotation, translation, and scaling. It also has to be invariant to more local, elastic deformations like perspective (in particular with wide-angle camera lenses), and to deformations due to environmental conditions (weather: rain, mist, light reverberation) or to optical and electrical signal noise. To demonstrate our method, an initial shape, with a known contour, is compared to the same contour altered by rotation, translation, scaling, and perspective. The curvature computed at each contour point is used as the main criterion in the shape matching process. The original part of this work is to use wavelet descriptors, generated with a fast orthonormal W-MRA, rather than Fourier descriptors, in order to provide a multi-resolution description of the contour to be analyzed. In this way, the intrinsic spatial localization property of wavelet descriptors can be used and the recognition process can be sped up. The most important part of this work is to demonstrate the potential performance of Wavelet-MRA in this application of shape recognition.
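The per-point contour curvature used as the matching criterion can be computed from finite differences of the sampled contour; a minimal sketch, assuming a regularly sampled planar contour (function name is an illustrative assumption; the curvature formula is the standard one for a parametric curve):

```python
import numpy as np

def contour_curvature(x, y):
    # Discrete curvature k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
    # for a contour sampled at points (x[i], y[i]); derivatives are
    # taken with respect to the sample index via central differences.
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
```

Because the formula is invariant to the parameterization speed, the index-based derivatives still recover the geometric curvature; only the few boundary samples, where np.gradient falls back to one-sided differences, are less accurate.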
Using Fuzzy Logic to Enhance Stereo Matching in Multiresolution Images
Medeiros, Marcos D.; Gonçalves, Luiz Marcos G.; Frery, Alejandro C.
2010-01-01
Stereo matching is an open problem in Computer Vision, for which local features are extracted to identify corresponding points in pairs of images. The results are heavily dependent on the initial steps. We apply image decomposition into multiresolution levels to reduce the search space, computational time, and errors. We propose a solution to the problem of how deep (coarse) the stereo measures should start, trading between error minimization and time consumption, by starting stereo calculation at varying resolution levels, for each pixel, according to fuzzy decisions. Our heuristic enhances the overall execution time since it only employs deeper resolution levels when strictly necessary. It also reduces errors because it measures similarity between windows with enough detail. We also compare our algorithm with a very fast multi-resolution approach and with one based on fuzzy logic. Our algorithm performs faster and/or better than both approaches, thus becoming a good candidate for robotic vision applications. We also discuss the system architecture that efficiently implements our solution. PMID:22205859
Multiresolution mesh segmentation based on surface roughness and wavelet analysis
NASA Astrophysics Data System (ADS)
Roudet, Céline; Dupont, Florent; Baskurt, Atilla
2007-01-01
During the last decades, three-dimensional objects have begun to compete with traditional multimedia (images, sounds, and videos) and are used by more and more applications. The common model used to represent them is a surface mesh, due to its intrinsic simplicity and efficacy. In this paper, we present a new algorithm for the segmentation of semi-regular triangle meshes via multiresolution analysis. Our method uses several measures which reflect the roughness of the surface for all meshes resulting from the decomposition of the initial model into different fine-to-coarse multiresolution meshes. The geometric data decomposition is based on the lifting scheme. Using that formulation, we have compared various interpolant prediction operators, associated or not with an update step. For each resolution level, the resulting approximation mesh is then partitioned into classes having almost constant roughness by a clustering algorithm. The resulting classes gather regions having the same visual appearance in terms of roughness. The last step consists of decomposing the mesh into connected groups of triangles using region growing and merging algorithms. These connected surface patches are of particular interest for adaptive mesh compression, visualization, indexing, or watermarking.
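The lifting-scheme decomposition that the geometric analysis is based on can be illustrated in 1D: split the samples into evens and odds, predict each odd from its even neighbors, then update the evens so coarse-level averages are preserved. This sketch uses a linear predict and a mean-preserving update with periodic boundaries (the specific predict/update operators compared in the paper may differ):

```python
import numpy as np

def lift_forward(x):
    # Lazy split, linear prediction of odd samples from even
    # neighbors, then an update preserving the running mean
    # (periodic boundary handling via np.roll).
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    pred = (even + np.roll(even, -1)) / 2.0     # predict step
    detail = odd - pred                          # wavelet details
    approx = even + (detail + np.roll(detail, 1)) / 4.0  # update step
    return approx, detail

def lift_inverse(approx, detail):
    # Undo update, undo predict, then interleave (perfect reconstruction).
    even = approx - (detail + np.roll(detail, 1)) / 4.0
    odd = detail + (even + np.roll(even, -1)) / 2.0
    x = np.empty(2 * even.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Perfect reconstruction holds by construction, since each lifting step is inverted by simply flipping its sign, which is what makes the scheme attractive for mesh geometry.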
A multiresolution inversion for imaging the ionosphere
NASA Astrophysics Data System (ADS)
Yin, Ping; Zheng, Ya-Nan; Mitchell, Cathryn N.; Li, Bo
2017-06-01
Ionospheric tomography has been widely employed in imaging large-scale ionospheric structures at both quiet and storm times. However, the tomographic algorithms to date have not been very effective in imaging medium- and small-scale ionospheric structures, due to limitations of uneven ground-based data distributions and of the algorithms themselves. Further, the effect of the density and quantity of Global Navigation Satellite Systems data on the tomographic results of a given algorithm remains unclear in much of the literature. In this paper, a new multipass tomographic algorithm is proposed to conduct the inversion using dense ground GPS observation data and is demonstrated over the U.S. West Coast during the period of 16-18 March 2015, which includes an ionospheric storm. The characteristics of the multipass inversion algorithm are analyzed by comparing tomographic results with independent ionosonde data and Center for Orbit Determination in Europe total electron content estimates. Then, several ground data sets with different data distributions are grouped from the same data source in order to investigate the impact of the density of ground stations on ionospheric tomography results. Finally, it is concluded that the multipass inversion approach offers an improvement. The ground data density can affect tomographic results, but only offers improvements up to a density of around one receiver every 150 to 200 km. When only GPS satellites are tracked, there is no clear advantage in increasing the density of receivers beyond this level, although this may change if multiple constellations are monitored from each receiving station in the future.
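A multipass, row-action inversion of the kind used in ionospheric tomography can be sketched with the classical Kaczmarz (ART) iteration, where each row of A is the discretized line integral of one satellite-to-receiver ray and b holds the slant TEC observations (this generic sketch is not the authors' specific multipass algorithm):

```python
import numpy as np

def kaczmarz(A, b, passes=50):
    # Row-action inversion: repeatedly project the current estimate
    # onto the hyperplane of each ray equation a_i . x = b_i.
    # Each full sweep over the rows is one "pass".
    x = np.zeros(A.shape[1])
    for _ in range(passes):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x
```

For a consistent system the iterates converge to a solution; with noisy, unevenly distributed rays (the situation described above), the result depends on regularization and on how many passes are made, which is what a multipass strategy tunes.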
Lin, James C; Guerrieri, Joy Gioia; Moore, Alison A
2011-08-01
To examine whether consistent low-risk drinking is associated with lower risk of developing functional limitations among older adults. Data were obtained from five waves of the Health and Retirement Study. Function was assessed by questions measuring four physical abilities and five instrumental activities of daily living. Five different drinking patterns were determined using data over two consecutive survey periods. Over the follow-up periods, 38.6% of older adults developed functional limitations. Consistent low-risk drinkers had lower odds of developing functional limitations compared with consistent abstainers, and the effect of consistent low-risk drinking was greater among those aged 50 to 64 years compared with those aged ≥65 years. Other drinking patterns were not associated with lower odds of incident functional limitation. Consistent low-risk drinking was associated with lower odds of developing functional limitations, and this association was greater among older middle-aged adults aged 50 to 64 years.
Banchhor, Sumit K; Araki, Tadashi; Londhe, Narendra D; Ikeda, Nobutaka; Radeva, Petia; Elbaz, Ayman; Saba, Luca; Nicolaides, Andrew; Shafique, Shoaib; Laird, John R; Suri, Jasjit S
2016-10-01
Fast intravascular ultrasound (IVUS) video processing is required for calcium volume computation during the planning phase of percutaneous coronary intervention (PCI) procedures. Nonlinear multiresolution techniques are generally applied to improve the processing time by down-sampling the video frames. This paper presents four different segmentation methods for calcium volume measurement, namely Threshold-based, Fuzzy c-Means (FCM), K-means, and Hidden Markov Random Field (HMRF), each embedded with five different kinds of multiresolution techniques (bilinear, bicubic, wavelet, Lanczos, and Gaussian pyramid). This leads to 20 different combinations. IVUS image data sets consisting of 38,760 IVUS frames taken from 19 patients were collected using a 40 MHz IVUS catheter (Atlantis® SR Pro, Boston Scientific®, pullback speed of 0.5 mm/sec). The performance of these 20 systems is compared with and without multiresolution using the following metrics: (a) computational time; (b) calcium volume; (c) image quality degradation ratio; and (d) quality assessment ratio. Among the four segmentation methods embedded with five kinds of multiresolution techniques, FCM segmentation combined with wavelet-based multiresolution gave the best performance. FCM and wavelet showed the highest percentage mean improvement in computational time, of 77.15% and 74.07%, respectively. Wavelet interpolation achieves the highest mean precision-of-merit (PoM), of 94.06 ± 3.64% and 81.34 ± 16.29%, as compared to other multiresolution techniques at the volume level and frame level, respectively. The wavelet multiresolution technique also achieves the highest Jaccard Index and Dice Similarity, of 0.7 and 0.8, respectively. Multiresolution is a nonlinear operation which introduces bias and thus degrades the image. The proposed system also provides a bias correction approach to enrich the system, giving a better mean calcium volume similarity for all the multiresolution
Rutstein, Sarah E; Price, Joan T; Rosenberg, Nora E; Rennie, Stuart M; Biddle, Andrea K; Miller, William C
2017-10-01
Cost-effectiveness analysis (CEA) is an increasingly appealing tool for evaluating and comparing health-related interventions in resource-limited settings. The goal is to inform decision-makers regarding the health benefits and associated costs of alternative interventions, helping guide allocation of limited resources by prioritising interventions that offer the most health for the least money. Although only one component of a more complex decision-making process, CEAs influence the distribution of health-care resources, directly influencing morbidity and mortality for the world's most vulnerable populations. However, CEA-associated measures are frequently setting-specific valuations, and CEA outcomes may violate ethical principles of equity and distributive justice. We examine the assumptions and analytical tools used in CEAs that may conflict with societal values. We then evaluate contextual features unique to resource-limited settings, including the source of health-state utilities and disability weights, implications of CEA thresholds in light of economic uncertainty, and the role of external donors. Finally, we explore opportunities to help align interpretation of CEA outcomes with values and budgetary constraints in resource-limited settings. The ethical implications of CEAs in resource-limited settings are vast. It is imperative that CEA outcome summary measures and implementation thresholds adequately reflect societal values and ethical priorities in resource-limited settings.
Lin, James C.; Guerrieri-Bang, Joy; Moore, Alison A.
2011-01-01
OBJECTIVES To examine whether consistent low-risk drinking is associated with lower risk of developing functional limitations among older adults. METHODS Data were obtained from five waves of the Health and Retirement Study. Function was assessed by questions measuring four physical abilities and five instrumental activities of daily living. Five different drinking patterns were determined using data over two consecutive survey periods. RESULTS Over the follow-up periods, 38.6% of older adults developed functional limitations. Consistent low-risk drinkers had lower odds of developing functional limitations compared to consistent abstainers, and the effect of consistent low-risk drinking was greater among those 50–64 years compared to those ≥65 years. Other drinking patterns were not associated with lower odds of incident functional limitation. DISCUSSION Consistent low-risk drinking was associated with lower odds of developing functional limitations, and this association was greater among older middle-aged adults 50–64 years of age. PMID:21311049
Robust 2D phase unwrapping based on multiresolution
NASA Astrophysics Data System (ADS)
Davidson, Gordon W.; Bamler, Richard
1996-12-01
An approach to 2D phase unwrapping for SAR interferometry is presented, based on separate steps of coarse-phase and fine-phase estimation. The coarse phase is constructed from instantaneous frequency estimates obtained using adaptive multiresolution, in which difference frequencies between resolution levels are estimated, and the frequency differences are summed over resolution levels such that a conservative phase gradient field is maintained. This allows a smoothed coarse unwrapped phase, which achieves the full terrain height, to be obtained with an unweighted least-squares phase construction. The coarse phase is used to remove the bulk of the phase variation of the interferogram, allowing more accurate multilooking, and the resulting fine phase is unwrapped with weighted least squares. The unwrapping approach is verified on simulated interferograms.
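In 1D, the basic principle behind least-squares unwrapping, integrating wrapped phase gradients, reduces to Itoh's method; a minimal sketch (the 2D weighted least-squares construction in the paper is considerably more involved, since 2D gradient fields need not be conservative):

```python
import numpy as np

def unwrap_1d(phase):
    # Itoh's method: wrap successive phase differences into [-pi, pi),
    # then integrate them to recover the absolute phase. Valid when
    # the true phase changes by less than pi between samples.
    d = np.diff(phase)
    d = (d + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate(([phase[0]], phase[0] + np.cumsum(d)))
```

When the sampling satisfies the Itoh condition, this reproduces the true phase exactly; the multiresolution coarse-phase step described above exists precisely to handle data that violate it.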
Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme
NASA Astrophysics Data System (ADS)
Hickmann, K. S.; Godinez, H. C.
2015-12-01
When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation, the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.
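The innovation-based tuning idea can be sketched for a single scale: the consistency relation E[d dᵀ] ≈ HPHᵀ + R between the innovation d, the projected forecast covariance HPHᵀ, and the observation-error covariance R yields a multiplicative inflation factor, which is then applied to the ensemble perturbations (a generic sketch, not the authors' scale-dependent scheme; symbol and function names are assumptions):

```python
import numpy as np

def inflation_factor(innovations, HPHt_trace, R_trace):
    # Innovation consistency: E[d d^T] ~= HPH^T + R, so a
    # multiplicative inflation lambda solves
    #     lambda * tr(HPH^T) = <d, d> - tr(R).
    lam = (innovations @ innovations - R_trace) / HPHt_trace
    return max(lam, 1.0)   # never deflate below the sample estimate

def inflate(ensemble, lam):
    # Scale the ensemble perturbations about the mean by sqrt(lambda),
    # leaving the ensemble mean unchanged and the covariance * lambda.
    mean = ensemble.mean(axis=0)
    return mean + np.sqrt(lam) * (ensemble - mean)
```

In a multi-resolution filter, one such factor would be maintained per scale, giving the scale-dependent inflation vector described in the abstract.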
Multiresolution strategies for the numerical solution of optimal control problems
NASA Astrophysics Data System (ADS)
Jain, Sachin
There exist many numerical techniques for solving optimal control problems, but less work has been done on making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid. This requires a large amount of computational resources, both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using few computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed, which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage have been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a
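The local-refinement idea, adding grid points only where interpolation from coarser points fails a tolerance test, can be sketched in 1D (a generic dyadic-refinement sketch under the stated assumptions, not the dissertation's specific multiresolution scheme; names are illustrative):

```python
def refine(f, a, b, tol, depth=0, max_depth=12):
    # Recursively bisect [a, b]; keep the midpoint only where linear
    # interpolation from the endpoints misses f by more than tol.
    # Smooth regions terminate immediately; sharp features recurse.
    m = 0.5 * (a + b)
    err = abs(f(m) - 0.5 * (f(a) + f(b)))
    pts = []
    if err > tol and depth < max_depth:
        pts += refine(f, a, m, tol, depth + 1, max_depth)
        pts.append(m)
        pts += refine(f, m, b, tol, depth + 1, max_depth)
    return pts

def adaptive_grid(f, a, b, tol):
    # Endpoints plus all retained midpoints, in ascending order.
    return [a] + refine(f, a, b, tol) + [b]
```

A function with a kink accumulates points only around the kink, mirroring how the scheme concentrates grid points near control switchings.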
Towards Online Multiresolution Community Detection in Large-Scale Networks
Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim
2011-01-01
The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods tend to be accurate only under a priori assumptions about network properties and with predefined parameters. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved on both real-world and synthetic networks. PMID:21887325
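A local expansion algorithm of this general flavor can be sketched as greedy growth from a seed vertex, adding at each step the neighbor that most improves a local quality function (the quality function below, internal edges over internal-plus-boundary edges, is an illustrative stand-in for the one proposed in the paper):

```python
def local_community(adj, seed):
    # Greedy local expansion from `seed` over an adjacency dict
    # {vertex: [neighbors]}; stops when no neighbor improves quality.
    community = {seed}

    def quality(c):
        # Fraction of edge endpoints touching c that stay inside c.
        internal = sum(1 for u in c for v in adj[u] if v in c) // 2
        boundary = sum(1 for u in c for v in adj[u] if v not in c)
        total = internal + boundary
        return internal / total if total else 0.0

    improved = True
    while improved:
        improved = False
        frontier = {v for u in community for v in adj[u]} - community
        best, best_q = None, quality(community)
        for v in frontier:
            q = quality(community | {v})
            if q > best_q:
                best, best_q = v, q
        if best is not None:
            community.add(best)
            improved = True
    return community
```

Only the seed's expanding neighborhood is ever touched, which is what makes this style of method usable without global information about the network.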
Copy-move forgery detection using multiresolution local binary patterns.
Davarzani, Reza; Yaghmaie, Khashayar; Mozaffari, Saeed; Tapak, Meysam
2013-09-10
Copy-move forgery is one of the most common tampering artifacts in digital images. In this paper, we present an efficient method for copy-move forgery detection using Multiresolution Local Binary Patterns (MLBP). The proposed method is robust to geometric distortions and illumination variations of duplicated regions. Furthermore, the proposed block-based method recovers the parameters of the geometric transformations. First, the image is divided into overlapping blocks and feature vectors for each block are extracted using LBP operators. The feature vectors are sorted in lexicographical order. Duplicated image blocks are then determined in the block-matching step, using a k-d tree to reduce matching time. Finally, in order to both determine the parameters of the geometric transformations and remove possible false matches, the RANSAC (RANdom SAmple Consensus) algorithm is used. Experimental results show that the proposed approach is able to precisely detect duplicated regions even after distortions such as rotation, scaling, JPEG compression, blurring, and noise addition.
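The sort-and-match core of such a pipeline can be sketched as follows (a deliberate simplification: a plain single-scale LBP histogram per block and consecutive-row comparison after a lexicographic sort, in place of the paper's multiresolution LBP operators, k-d tree, and RANSAC stages):

```python
import numpy as np

def lbp_block_features(img, block=8):
    """8-neighbour LBP code per pixel, then a 16-bin histogram of codes
    for every overlapping block (simplified stand-in for MLBP features)."""
    h, w = img.shape
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    feats, pos = [], []
    for y in range(code.shape[0] - block + 1):
        for x in range(code.shape[1] - block + 1):
            hist, _ = np.histogram(code[y:y + block, x:x + block],
                                   bins=16, range=(0, 256))
            feats.append(hist)
            pos.append((y, x))
    return np.array(feats), np.array(pos)

def match_blocks(feats, pos, min_shift=8):
    """Lexicographically sort feature rows; identical consecutive rows
    whose blocks are far enough apart are candidate duplicated regions."""
    order = np.lexsort(feats.T[::-1])
    pairs = []
    for a, b in zip(order[:-1], order[1:]):
        if (np.array_equal(feats[a], feats[b])
                and np.abs(pos[a] - pos[b]).max() >= min_shift):
            pairs.append((tuple(pos[a]), tuple(pos[b])))
    return pairs

# forge an image by copying a 12x12 patch, then detect the duplication
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(40, 40), dtype=np.uint8)
img[24:36, 24:36] = img[4:16, 4:16]
feats, pos = lbp_block_features(img)
pairs = match_blocks(feats, pos)
```

Blocks fully inside the copied region produce identical feature rows, so they land adjacent in the sorted order and surface as matched pairs with the (20, 20) copy offset.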
Multiresolution subspace-based optimization method for inverse scattering problems.
Oliveri, Giacomo; Zhong, Yu; Chen, Xudong; Massa, Andrea
2011-10-01
This paper investigates an approach to inverse scattering problems based on the integration of the subspace-based optimization method (SOM) within a multifocusing scheme in the framework of the contrast source formulation. The scattering equations are solved by a nested three-step procedure composed of (a) an outer multiresolution loop dealing with the identification of the regions of interest within the investigation domain through an iterative information-acquisition process, (b) a spectrum analysis step devoted to the reconstruction of the deterministic components of the contrast sources, and (c) an inner optimization loop aimed at retrieving the ambiguous components of the contrast sources through a conjugate gradient minimization of a suitable objective function. A set of representative reconstruction results is discussed to provide numerical evidence of the effectiveness of the proposed algorithmic approach as well as to assess the features and potentialities of the multifocusing integration in comparison with the state-of-the-art SOM implementation.
Multiresolution stochastic simulations of reaction-diffusion processes.
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2008-10-21
Stochastic simulations of reaction-diffusion processes are used extensively for the modeling of complex systems in areas ranging from biology and social sciences to ecosystems and materials processing. These processes often exhibit disparate scales that render their simulation prohibitively expensive even for massive computational resources. We address this problem by introducing a novel stochastic multiresolution method that enables the efficient simulation of reaction-diffusion processes as modeled by many-particle systems. The proposed method quantifies and efficiently handles the associated stiffness in simulating the system dynamics, and its computational efficiency and accuracy are demonstrated in simulations of a model problem described by the Fisher-Kolmogorov equation. The method is general and can be applied to other many-particle models of physical processes.
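For context, a single-resolution baseline of the kind of many-particle simulation being accelerated might look like this (a hypothetical compartment-based Gillespie SSA with logistic reactions and nearest-neighbour jumps; the paper's multiresolution machinery is not reproduced here):

```python
import numpy as np

def ssa_fisher(n0, d, r, k, t_end, seed=0):
    """Gillespie SSA for a compartment-based Fisher-Kolmogorov-type
    many-particle model. Per compartment i with n_i particles:
        birth  A -> 2A      rate r * n_i
        death  A + A -> A   rate (r/k) * n_i * (n_i - 1)
        jump to neighbour   rate d * n_i   (reflecting boundaries)"""
    rng = np.random.default_rng(seed)
    n = np.array(n0, dtype=np.int64)
    m = len(n)
    t = 0.0
    while t < t_end:
        rates = np.concatenate([r * n,                   # births
                                (r / k) * n * (n - 1),   # deaths
                                d * n])                  # diffusion jumps
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)                # time to next event
        e = rng.choice(3 * m, p=rates / total)           # pick the event
        i = e % m
        if e < m:
            n[i] += 1
        elif e < 2 * m:
            n[i] -= 1
        else:
            j = min(max(i + rng.choice([-1, 1]), 0), m - 1)
            n[i] -= 1
            n[j] += 1
    return n

# a population seeded in the leftmost compartment grows and spreads
pop = ssa_fisher([50, 0, 0, 0, 0], d=1.0, r=1.0, k=100.0, t_end=2.0)
```

Every reaction and jump is simulated individually, which is exactly the per-event cost that makes disparate-scale problems prohibitive and motivates the multiresolution treatment.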
Out-of-Core Construction and Visualization of Multiresolution Surfaces
Lindstrom, P
2003-02-03
We present a method for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time system pages in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a rectilinear octree grid to coarsen and create a hierarchy for the mesh, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The run-time component of our system uses asynchronous rendering and view-dependent refinement driven by screen-space error and visibility. The system exploits frame-to-frame coherence and has been designed to allow preemptive refinement at the granularity of individual vertices to support refinement on a time budget. Our results indicate a significant improvement in processing speed over previous methods for out-of-core multiresolution surface construction. Meanwhile, all phases of the method are disk and memory efficient, and are fairly straightforward to implement.
Out-of-Core Construction and Visualization of Multiresolution Surfaces
Lindstrom, P
2002-11-04
We present a method for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time system pages in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a uniform octree grid to coarsen a mesh and create a hierarchy, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The run-time component of our system uses asynchronous rendering and view-dependent refinement driven by screen-space error and visibility. The system exploits frame-to-frame coherence and has been designed to allow preemptive refinement at the granularity of individual vertices to support refinement on a time budget. Our results indicate a significant improvement in processing speed over previous methods for out-of-core multiresolution surface construction. Meanwhile, all phases of the method are both disk and memory efficient, and are fairly straightforward to implement.
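The clustering-plus-quadrics core of the two abstracts above can be sketched in-core as follows (the papers' actual contribution, performing this out of core with a small constant memory footprint, is not attempted here):

```python
import numpy as np

def cluster_simplify(verts, tris, cell):
    """Vertex clustering on a uniform grid with quadric error metrics:
    each occupied grid cell keeps the one member vertex minimizing the
    accumulated quadric error (sum of squared distances to the planes
    of incident triangles); triangles that collapse are dropped."""
    keys = [tuple(k) for k in np.floor(verts / cell).astype(int)]
    cells = {}
    for v, k in enumerate(keys):
        cells.setdefault(k, []).append(v)
    quad = {k: np.zeros((4, 4)) for k in cells}
    for a, b, c in tris:
        n = np.cross(verts[b] - verts[a], verts[c] - verts[a])
        nn = np.linalg.norm(n)
        if nn == 0:
            continue                              # skip degenerate input
        p = np.append(n / nn, -(n / nn) @ verts[a])   # plane [nx ny nz d]
        K = np.outer(p, p)                        # fundamental quadric
        for v in (a, b, c):
            quad[keys[v]] += K
    rep, new_id = {}, {}
    for i, (k, vs) in enumerate(cells.items()):
        hom = np.c_[verts[vs], np.ones(len(vs))]  # homogeneous coords
        err = np.einsum('ij,jk,ik->i', hom, quad[k], hom)
        rep[k] = verts[vs[int(np.argmin(err))]]   # best member vertex
        new_id[k] = i
    new_verts = np.array([rep[k] for k in cells])
    new_tris = []
    for a, b, c in tris:
        ids = {new_id[keys[a]], new_id[keys[b]], new_id[keys[c]]}
        if len(ids) == 3:                         # drop collapsed triangles
            new_tris.append(tuple(sorted(ids)))
    return new_verts, new_tris

# a flat 5x5 grid patch: 25 vertices collapse to one per occupied cell
g = np.linspace(0.0, 1.0, 5)
verts = np.array([[xx, yy, 0.0] for yy in g for xx in g])
iv = lambda i, j: j * 5 + i
tris = [(iv(i, j), iv(i + 1, j), iv(i, j + 1)) for j in range(4) for i in range(4)]
tris += [(iv(i + 1, j), iv(i + 1, j + 1), iv(i, j + 1)) for j in range(4) for i in range(4)]
new_verts, new_tris = cluster_simplify(verts, np.array(tris), cell=0.5)
```

Choosing the best original vertex per cell (rather than solving for the quadric-optimal position, as the paper does) keeps the sketch robust for flat or degenerate quadrics.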
A Multi-Resolution Data Structure for Two-Dimensional Morse Functions
Bremer, P-T; Edelsbrunner, H; Hamann, B; Pascucci, V
2003-07-30
The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.
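A 1D analogue of persistence-ordered cancellation conveys the flavour of the hierarchy construction (illustrative only; the paper cancels critical-point pairs of the 2D Morse-Smale complex):

```python
def cancel_critical_points(values, eps):
    """Repeatedly cancel the adjacent interior min/max pair with the
    smallest persistence (absolute value difference) until every
    remaining pair persists by more than eps; endpoints always survive."""
    crit = [0] + [i for i in range(1, len(values) - 1)
                  if (values[i] - values[i - 1]) * (values[i + 1] - values[i]) < 0] \
                + [len(values) - 1]
    while len(crit) > 2:
        # candidate pairs: adjacent critical points, both interior
        cand = [(abs(values[crit[k + 1]] - values[crit[k]]), k)
                for k in range(1, len(crit) - 2)]
        if not cand:
            break
        p, k = min(cand)
        if p > eps:
            break
        del crit[k + 1]   # cancel the pair: remove both extrema, the
        del crit[k]       # coarser neighbours absorb the spanned interval
    return crit

# the small wiggle (persistence 1.0) is cancelled; larger features survive
simplified = cancel_critical_points([0.0, 2.0, 1.0, 3.0, 0.5, 5.0, 0.0], 1.5)
```

Recording the cancellations in persistence order yields exactly the kind of coarse-to-fine hierarchy the abstract describes.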
M3RSM: Many-to-Many Multi-Resolution Scan Matching
2015-05-01
Fig. 1: Multi-resolution image pyramid. A high-resolution cost function (bottom) is successively reduced in dimension, forming an image pyramid. By constructing the pyramid carefully, alignment scores computed using low-resolution levels can be used to prune the search. [...] used a two-level multi-resolution scheme in which the two levels differed in resolution by a factor of 10 [12]. In this work, we generalize this to a
Larson, Steven B; Day, John S; McPherson, Alexander
2010-04-13
Further refinement of the model using maximum likelihood procedures and reevaluation of the native electron density map has shown that crystals of pig pancreatic alpha-amylase, whose structure we reported more than 15 years ago, in fact contain a substantial amount of carbohydrate. The carbohydrate fragments are the products of glycogen digestion carried out as an essential step of the protein's purification procedure. In particular, the substrate-binding cleft contains a limit dextrin of six glucose residues, one of which contains both alpha-(1,4) and alpha-(1,6) linkages to contiguous residues. The disaccharide in the original model, shared between two amylase molecules in the crystal lattice, but also occupying a portion of the substrate-binding cleft, is now seen to be a tetrasaccharide. There are, in addition, several other probable monosaccharide binding sites. Furthermore, we have reviewed our X-ray diffraction analysis of alpha-amylase complexed with alpha-cyclodextrin. alpha-Amylase binds three cyclodextrin molecules. Glucose residues of two of the rings superimpose upon the limit dextrin and the tetrasaccharide. The limit dextrin superimposes in large part upon linear oligosaccharide inhibitors visualized by other investigators. By comprehensive integration of these complexes we have constructed a model for the binding of polysaccharides having the helical character known to be present in natural substrates such as starch and glycogen.
Larson, Steven B.; Day, John S.; McPherson, Alexander
2010-01-01
Further refinement of the model using maximum likelihood procedures and re-evaluation of the native electron density map has shown that crystals of pig pancreatic α-amylase, whose structure we reported more than fifteen years ago, in fact contain a substantial amount of carbohydrate. The carbohydrate fragments are the products of glycogen digestion carried out as an essential step of the protein's purification procedure. In particular, the substrate-binding cleft contains a limit dextrin of six glucose residues, one of which contains both α-(1,4) and α-(1,6) linkages to contiguous residues. The disaccharide in the original model, shared between two amylase molecules in the crystal lattice, but also occupying a portion of the substrate-binding cleft, is now seen to be a tetrasaccharide. There are, in addition, several other probable monosaccharide binding sites. In addition, we have reviewed our X-ray diffraction analysis of α-amylase complexed with α-cyclodextrin. α-Amylase binds three cyclodextrin molecules. Glucose residues of two of the rings superimpose upon the limit dextrin and the tetrasaccharide. The limit dextrin superimposes in large part upon linear oligosaccharide inhibitors visualized by other investigators. By comprehensive integration of these complexes we have constructed a model for the binding of polysaccharides having the helical character known to be present in natural substrates such as starch and glycogen. PMID:20222716
Multi-resolution community detection based on generalized self-loop rescaling strategy
NASA Astrophysics Data System (ADS)
Xiang, Ju; Tang, Yan-Ni; Gao, Yuan-Yuan; Zhang, Yan; Deng, Ke; Xu, Xiao-Ke; Hu, Ke
2015-08-01
Community detection is of considerable importance for analyzing the structure and function of complex networks. Many real-world networks may possess community structures at multiple scales, and recently various multi-resolution methods have been proposed to identify the community structures at different scales. In this paper, we present a family of multi-resolution methods based on the generalized self-loop rescaling strategy. The self-loop rescaling strategy provides a uniform ansatz for the design of multi-resolution community detection methods. Many quality functions for community detection can be unified in the framework of self-loop rescaling, and the resulting multi-resolution quality functions can be optimized directly using existing modularity-optimization algorithms. Several derived multi-resolution methods are applied to the analysis of community structures in several synthetic and real-world networks. The results show that these methods can find the pre-defined substructures in synthetic networks and the real splits observed in real-world networks. Finally, we discuss the methods themselves and their relationships. We hope that this study can be helpful for understanding multi-resolution methods and provide useful insight into designing new community detection methods.
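The self-loop rescaling ansatz itself is compact enough to sketch: add identical self-loops of weight gamma to every node and evaluate standard modularity on the rescaled graph. Conventions for counting self-loops vary; the plain matrix formula Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j) is assumed here:

```python
import numpy as np

def modularity_rescaled(A, labels, gamma):
    """Newman modularity of A + gamma*I: self-loop rescaling shifts the
    resolution at which modularity optimization prefers to partition."""
    B = A + gamma * np.eye(len(A))
    two_m = B.sum()
    k = B.sum(axis=1)
    same = labels[:, None] == labels[None, :]
    return ((B - np.outer(k, k) / two_m) * same).sum() / two_m

# four triangles; triangles are paired by 3 edges, pairs joined by 1 edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5),
         (6, 7), (7, 8), (6, 8), (9, 10), (10, 11), (9, 11),
         (0, 3), (1, 4), (2, 5), (6, 9), (7, 10), (8, 11), (5, 6)]
A = np.zeros((12, 12))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
triangles = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])
pairs     = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

q_coarse0  = modularity_rescaled(A, pairs, 0.0)
q_fine0    = modularity_rescaled(A, triangles, 0.0)
q_coarse10 = modularity_rescaled(A, pairs, 10.0)
q_fine10   = modularity_rescaled(A, triangles, 10.0)
```

At gamma = 0 the coarse two-community split scores higher; a large positive gamma shifts the optimum to the four fine triangles, which is the multi-scale behaviour the abstract describes.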
Continuously zoom imaging probe for the multi-resolution foveated laparoscope
Qin, Yi; Hua, Hong
2016-01-01
In modern minimally invasive surgery (MIS), standard laparoscopes suffer from a tradeoff between spatial resolution and field of view (FOV). The inability to simultaneously acquire high-resolution images for accurate operation and wide-angle overviews for situational awareness limits the efficiency and outcome of MIS. A dual-view multi-resolution foveated laparoscope (MRFL), which can simultaneously provide the surgeon with a high-resolution view as well as a wide-angle overview, was previously proposed and demonstrated to have great potential for improving MIS. Although experimental results demonstrated that the high-magnification probe has adequate magnification for viewing surgical details, the dual-view MRFL is limited to two fixed levels of magnification. A fine adjustment of the magnification is highly desired for obtaining high-resolution images with the desired field coverage. In this paper, a high-magnification probe with continuous zooming capability and no mechanical moving parts is demonstrated. By taking advantage of two electrically tunable lenses, one for optical zoom and the other for image focus compensation, the optical magnification of the high-magnification probe varies from 2× to 3× compared with that of the wide-angle probe, while the focused object position stays the same as that of the wide-angle probe. The optical design and the tunable lens analysis are presented, followed by a prototype demonstration. PMID:27446645
Continuously zoom imaging probe for the multi-resolution foveated laparoscope.
Qin, Yi; Hua, Hong
2016-04-01
In modern minimally invasive surgery (MIS), standard laparoscopes suffer from a tradeoff between spatial resolution and field of view (FOV). The inability to simultaneously acquire high-resolution images for accurate operation and wide-angle overviews for situational awareness limits the efficiency and outcome of MIS. A dual-view multi-resolution foveated laparoscope (MRFL), which can simultaneously provide the surgeon with a high-resolution view as well as a wide-angle overview, was previously proposed and demonstrated to have great potential for improving MIS. Although experimental results demonstrated that the high-magnification probe has adequate magnification for viewing surgical details, the dual-view MRFL is limited to two fixed levels of magnification. A fine adjustment of the magnification is highly desired for obtaining high-resolution images with the desired field coverage. In this paper, a high-magnification probe with continuous zooming capability and no mechanical moving parts is demonstrated. By taking advantage of two electrically tunable lenses, one for optical zoom and the other for image focus compensation, the optical magnification of the high-magnification probe varies from 2× to 3× compared with that of the wide-angle probe, while the focused object position stays the same as that of the wide-angle probe. The optical design and the tunable lens analysis are presented, followed by a prototype demonstration.
Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry.
Caracappa, Peter F; Rhodes, Ashley; Fiedler, Derek
2014-09-21
Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too coarse to accurately represent the dimensions of small features such as the eye. The recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue with significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternative methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method are compared against existing computational phantoms.
Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry
NASA Astrophysics Data System (ADS)
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-09-01
Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too coarse to accurately represent the dimensions of small features such as the eye. The recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue with significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternative methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method are compared against existing computational phantoms.
NASA Astrophysics Data System (ADS)
Kaneko, J. Y.; Yuen, D. A.; Dzwinel, W.; Boryszko, K.; Ben-Zion, Y.; Sevre, E. O.
2002-12-01
The study of seismic patterns with synthetic data is important for analyzing the seismic hazard of faults because one can precisely control the spatial and temporal domains. Using modern clustering analysis from statistics and the recently introduced visualization software AMIRA, we have examined the multi-resolution nature of a total assemblage of 922,672 earthquake events in 4 numerically simulated models, which have different constitutive parameters, over 2 disparately different time intervals in a 3D spatial domain. The evolution of stress and slip on the fault plane was simulated with 3D elastic dislocation theory for a configuration representing the central San Andreas Fault (Ben-Zion, J. Geophys. Res., 101, 5677-5706, 1996). The 4 models represent various levels of fault zone disorder and have the following brittle properties and names: uniform properties (model U), a Parkfield-type asperity (A), fractal properties (F), and multi-size heterogeneities (model M). We employed the MNN (mutual nearest neighbor) clustering method and developed a C program that simultaneously calculates a number of parameters related to the locations of the earthquakes and their magnitude values. Visualization was then used to look at the geometrical locations of the hypocenters and the evolution of seismic patterns. We wrote an AmiraScript that allows us to pass the parameters in an interactive format. With data sets consisting of 150-year time intervals, we have unveiled the distinctly multi-resolutional nature of the spatial-temporal pattern of small and large earthquake correlations shown previously by Eneva and Ben-Zion (J. Geophys. Res., 102, 24513-24528, 1997). In order to search for clearer possible stationary patterns and substructures within the clusters, we have also carried out the same analysis for corresponding data sets with time extending to several thousand years. The larger data sets were studied with finer and finer time intervals and multi
Cybulska, Annalina; Meintker, Lisa; Ringwald, Jürgen; Krause, Stefan W
2017-05-01
Detection of immature platelets in the circulation may help to distinguish thrombocytopenia due to platelet destruction from that due to bone marrow failure (BMF). We prospectively tested the predictive value of immature platelets, measured as the immature platelet fraction (IPF) on the XE-5000 (Sysmex, Kobe, Japan) or the percentage of reticulated platelets (rPT) on the CD Sapphire (Abbott Diagnostics, Santa Clara, CA, USA), to separate immune thrombocytopenia (ITP) from BMF (leukaemia, myelodysplastic syndrome, aplastic anaemia). We analysed 58 samples of patients with BMF, 47 samples of patients with ITP and 97 controls. Median rPT (CD Sapphire) was increased to 9·0% in ITP and to 10·9% in BMF, compared to 1·9% in controls. Median IPF (XE-5000) was 16·2% in ITP, 10·2% in BMF and 2·5% in controls. We found an inverse correlation between high fractions of immature platelets and low platelet counts in thrombocytopenic samples regardless of the diagnosis. In conclusion, we observed a broad overlap of immature platelets between ITP and BMF, which may be caused by an accelerated release of immature platelets in any thrombocytopenic state and decreased production in many patients with ITP. Despite this, IPF (XE-5000) had some power to discriminate ITP from BMF, whereas rPT (CD Sapphire) was of no predictive value. © 2017 John Wiley & Sons Ltd.
An adaptive multiresolution gradient-augmented level set method for advection problems
NASA Astrophysics Data System (ADS)
Schneider, Kai; Kolomenskiy, Dmitry; Nave, Jean-Christophe
2014-11-01
Advection problems are encountered in many applications, such as the transport of passive scalars modeling pollution or mixing in chemical engineering. In some problems, the solution develops small-scale features localized in a part of the computational domain. If the location of these features changes in time, the efficiency of the numerical method can be significantly improved by adapting the partition dynamically to the solution. We present a space-time adaptive scheme for solving advection equations in two space dimensions. The third-order accurate gradient-augmented level set method, using a semi-Lagrangian formulation with backward time integration, is coupled with a point-value multiresolution analysis using Hermite interpolation. Locally refined dyadic spatial grids are thus introduced and efficiently implemented with dynamic quadtree data structures. For adaptive time integration, an embedded Runge-Kutta method is employed. The precision of the new fully adaptive method is analysed, and the speed-up in CPU time and the memory compression with respect to the uniform grid discretization are reported.
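The semi-Lagrangian backbone of such a scheme can be sketched on a uniform grid (bilinear interpolation is assumed here for brevity; the gradient-augmented method also advects the gradient of phi and interpolates with third-order Hermite polynomials, with the multiresolution adaptivity layered on top):

```python
import numpy as np

def semi_lagrangian_step(phi, u, v, dt, dx):
    """One backward semi-Lagrangian step for phi_t + u phi_x + v phi_y = 0
    on a periodic grid with constant velocity (u, v): trace each grid
    point backward along its characteristic and interpolate phi there."""
    n, m = phi.shape
    jj, ii = np.meshgrid(np.arange(m), np.arange(n))
    y = ii - v * dt / dx              # departure point, row direction
    x = jj - u * dt / dx              # departure point, column direction
    i0 = np.floor(y).astype(int)
    j0 = np.floor(x).astype(int)
    fy, fx = y - i0, x - j0           # interpolation weights
    i0 %= n
    j0 %= m
    i1, j1 = (i0 + 1) % n, (j0 + 1) % m
    return ((1 - fy) * (1 - fx) * phi[i0, j0] + (1 - fy) * fx * phi[i0, j1]
            + fy * (1 - fx) * phi[i1, j0] + fy * fx * phi[i1, j1])

# with u*dt/dx = 1 the departure points are grid nodes, so the step
# reduces to an exact one-cell shift
rng = np.random.default_rng(2)
phi = rng.random((8, 8))
stepped = semi_lagrangian_step(phi, u=1.0, v=0.0, dt=1.0, dx=1.0)
```

The backward (rather than forward) trace is what makes the update unconditionally stable, which is why the paper can couple it with aggressive local refinement.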
Nagelmüller, Sebastian; Kirchgessner, Norbert; Yates, Steven; Hiltpold, Maya; Walter, Achim
2016-03-01
Leaf growth in monocot crops such as wheat and barley largely follows the daily temperature course, particularly under cold but humid springtime field conditions. Knowledge of the temperature response of leaf extension, particularly variations close to the thermal limit of growth, helps define physiological growth constraints and breeding-related genotypic differences among cultivars. Here, we present a novel method, called 'Leaf Length Tracker' (LLT), suitable for measuring leaf elongation rates (LERs) of cereals and other grasses with high precision and high temporal resolution under field conditions. The method is based on image sequence analysis, using a marker tracking approach to calculate LERs. We applied the LLT to several varieties of winter wheat (Triticum aestivum), summer barley (Hordeum vulgare), and ryegrass (Lolium perenne), grown in the field and in growth cabinets under controlled conditions. LLT is easy to use and we demonstrate its reliability and precision under changing weather conditions that include temperature, wind, and rain. We found that leaf growth stopped at a base temperature of 0°C for all studied species and we detected significant genotype-specific differences in LER with rising temperature. The data obtained were statistically robust and were reproducible in the tested environments. Using LLT, we were able to detect subtle differences (sub-millimeter) in leaf growth patterns. This method will allow the collection of leaf growth data in a wide range of future field experiments on different graminoid species or varieties under varying environmental or treatment conditions.
Nagelmüller, Sebastian; Kirchgessner, Norbert; Yates, Steven; Hiltpold, Maya; Walter, Achim
2016-01-01
Leaf growth in monocot crops such as wheat and barley largely follows the daily temperature course, particularly under cold but humid springtime field conditions. Knowledge of the temperature response of leaf extension, particularly variations close to the thermal limit of growth, helps define physiological growth constraints and breeding-related genotypic differences among cultivars. Here, we present a novel method, called ‘Leaf Length Tracker’ (LLT), suitable for measuring leaf elongation rates (LERs) of cereals and other grasses with high precision and high temporal resolution under field conditions. The method is based on image sequence analysis, using a marker tracking approach to calculate LERs. We applied the LLT to several varieties of winter wheat (Triticum aestivum), summer barley (Hordeum vulgare), and ryegrass (Lolium perenne), grown in the field and in growth cabinets under controlled conditions. LLT is easy to use and we demonstrate its reliability and precision under changing weather conditions that include temperature, wind, and rain. We found that leaf growth stopped at a base temperature of 0°C for all studied species and we detected significant genotype-specific differences in LER with rising temperature. The data obtained were statistically robust and were reproducible in the tested environments. Using LLT, we were able to detect subtle differences (sub-millimeter) in leaf growth patterns. This method will allow the collection of leaf growth data in a wide range of future field experiments on different graminoid species or varieties under varying environmental or treatment conditions. PMID:26818912
Kennedy, Theodore A.; Cross, Wyatt F.; Hall, Robert O.; Baxter, Colden V.; Rosi-Marshall, Emma J.
2013-01-01
Fish populations in the Colorado River downstream from Glen Canyon Dam appear to be limited by the availability of high-quality invertebrate prey. Midge and blackfly production is low and nonnative rainbow trout in Glen Canyon and native fishes in Grand Canyon consume virtually all of the midge and blackfly biomass that is produced annually. In Glen Canyon, the invertebrate assemblage is dominated by nonnative New Zealand mudsnails, the food web has a simple structure, and transfers of energy from the base of the web (algae) to the top of the web (rainbow trout) are inefficient. The food webs in Grand Canyon are more complex relative to Glen Canyon, because, on average, each species in the web is involved in more interactions and feeding connections. Based on theory and on studies from other ecosystems, the structure and organization of Grand Canyon food webs should make them more stable and less susceptible to large changes following perturbations of the flow regime relative to food webs in Glen Canyon. In support of this hypothesis, Grand Canyon food webs were much less affected by a 2008 controlled flood relative to the food web in Glen Canyon.
Multi-resolution multi-object statistical shape models based on the locality assumption.
Wilms, Matthias; Handels, Heinz; Ehrhardt, Jan
2017-02-17
Statistical shape models learned from a population of previously observed training shapes are nowadays widely used in medical image analysis to aid segmentation or classification. However, providing an appropriate and representative training population of preferably manual segmentations is typically either very labor-intensive or even impossible. Therefore, statistical shape models in practice frequently suffer from the high-dimension-low-sample-size (HDLSS) problem, resulting in models with insufficient expressiveness. In this paper, a novel approach for learning representative multi-resolution multi-object statistical shape models from a small number of training samples that adequately model the variability of each individual object as well as their interrelations is presented. The method is based on the assumption of locality, which means that local shape variations have limited effects in distant areas and, therefore, can be modeled independently. This locality assumption is integrated into the standard statistical shape modeling framework by manipulating the sample covariance matrix (non-zero covariances between distant landmarks are set to zero). To allow for multi-object modeling, a method for computing distances between points located on different object shapes is proposed. Furthermore, different levels of locality are introduced by deriving a multi-resolution scheme, which is equipped with a method to combine variability information modeled at different levels into a single shape model. This combined representation of global and local variability in a single shape model allows the use of the classical active shape model strategy for model-based image segmentation. An extensive evaluation based on a public database of 247 chest radiographs is performed to show the modeling and segmentation capabilities of the proposed approach in single- and multi-object HDLSS scenarios. The new approach is not only compared to the classical shape modeling method but also
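The covariance manipulation at the heart of the locality assumption is straightforward to sketch (1D landmarks and a hard distance cutoff are assumptions made for illustration; the paper also handles distances between different object shapes and combines several locality levels):

```python
import numpy as np

def locality_ssm(shapes, coords, radius):
    """Locality-based statistical shape model, covariance step only:
    compute the sample covariance of the training shapes, set the
    covariance between landmarks farther apart than `radius` to zero,
    and eigendecompose the result to obtain the shape modes.
    shapes: (n_samples, n_landmarks); coords: (n_landmarks,) positions."""
    X = shapes - shapes.mean(axis=0)
    C = X.T @ X / (len(shapes) - 1)
    dist = np.abs(coords[:, None] - coords[None, :])
    C_loc = np.where(dist <= radius, C, 0.0)   # the locality assumption
    w, V = np.linalg.eigh(C_loc)
    return w[::-1], V[:, ::-1]                 # modes in descending order

# HDLSS setting: 4 training shapes, 20 landmarks. Plain PCA yields at
# most 3 modes; the localized covariance supports more.
rng = np.random.default_rng(3)
coords = np.arange(20.0)
shapes = rng.normal(size=(4, 20))
w_loc, modes = locality_ssm(shapes, coords, radius=3.0)
w_glob = np.linalg.eigvalsh(np.cov(shapes, rowvar=False))
```

Zeroing distant covariances raises the rank of the model covariance above the sample-limited rank of plain PCA, which is exactly how the locality assumption mitigates the HDLSS problem described in the abstract.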
Brasso, Rebecka L; Polito, Michael J; Emslie, Steven D
2014-10-01
Inter-annual variation in tissue mercury concentrations in birds can result from annual changes in the bioavailability of mercury or shifts in dietary composition and/or trophic level. We investigated potential annual variability in mercury dynamics in the Antarctic marine food web using Pygoscelis penguins as biomonitors. Eggshell membrane, chick down, and adult feathers were collected from three species of sympatrically breeding Pygoscelis penguins during the austral summers of 2006/2007-2010/2011. To evaluate the hypothesis that mercury concentrations in penguins exhibit significant inter-annual variation and to determine the potential source of such variation (dietary or environmental), we compared tissue mercury concentrations with trophic levels as indicated by δ(15)N values from all species and tissues. Overall, no inter-annual variation in mercury was observed in adult feathers suggesting that mercury exposure, on an annual scale, was consistent for Pygoscelis penguins. However, when examining tissues that reflected more discrete time periods (chick down and eggshell membrane) relative to adult feathers, we found some evidence of inter-annual variation in mercury exposure during penguins' pre-breeding and chick rearing periods. Evidence of inter-annual variation in penguin trophic level was also limited suggesting that foraging ecology and environmental factors related to the bioavailability of mercury may provide more explanatory power for mercury exposure compared to trophic level alone. Even so, the variable strength of relationships observed between trophic level and tissue mercury concentrations across and within Pygoscelis penguin species suggest that caution is required when selecting appropriate species and tissue combinations for environmental biomonitoring studies in Antarctica.
Velocity Vector Fields from Sea Surface Temperature Images Using Multiresolution
NASA Astrophysics Data System (ADS)
Tonsmann, G.; Tyler, J. M.; Walker, N. D.; Wiseman, W.; Rouse, L. J.
2001-12-01
This paper presents a new method for estimating oceanic surface velocity vector fields using multiresolution analysis, achieved via the wavelet transform. The new method requires two sea surface temperature (SST) satellite images of the same region taken within a known time interval. Wavelet analysis is performed on both images to decompose them into sub-images of decreasing resolution levels. These sub-images are organized into two quadtrees, one for each SST image. The method compares equivalent sub-images between quadtrees to produce vector fields to represent local displacements of features within the images. Comparisons are performed by maximization of cross correlation of regions in the sub-images. The vector fields are smoothed to eliminate noise and to produce coherent vector fields at each resolution level. Vector fields at levels of higher resolution are used as refinements to vector fields at lower resolution levels. Operational parameters for the new method were optimized. It was determined that wavelet filters with smaller support were best for analysis and smoothing. Validation of the methodology was performed with SST images of the Gulf of Mexico from NOAA satellites during the period October 1993 through July 1994. Image pairs within this set were selected with a time interval of 24 hours between them to minimize biases in SST values that may be introduced by the day-night cycle and to filter out the effects of the diurnal tide, which dominates in the Gulf of Mexico. Comparisons with daily average velocities calculated from drifters from the Surface Current and Lagrangian-drift Program I (SCULP-I) were also performed. Agreement with drifter data was partially achieved. Some discrepancies were discovered in featureless image regions where cross correlation calculations produce unreliable results. The discrepancies could also be explained by differences in the features captured in the satellite images and the factors that influence
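The core matching step described above, maximizing cross-correlation between regions of two SST images, can be sketched at a single resolution. The following is a minimal illustration, not the paper's quadtree method: it exhaustively scores integer shifts of one image against the other with normalized cross-correlation, and the function names and small search window are assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally-shaped regions."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_shift(img0, img1, max_shift=3):
    """Integer displacement (dy, dx) of img1's content relative to img0,
    found by maximizing cross-correlation over the overlapping region."""
    h, w = img0.shape
    best_score, best = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlap of img0 with img1 displaced by (dy, dx)
            a = img0[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = img1[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            score = ncc(a, b)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

In the paper this comparison is run per sub-image at each quadtree level, with coarse-level vectors refining finer ones; as the abstract notes, featureless regions make the correlation peak unreliable.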
Multi-Resolution Seismic Tomography Based on Recursive Tessellation Hierarchy
Simmons, N A; Myers, S C; Ramirez, A
2009-07-01
tomographic problems. They also apply the progressive inversion approach with Pn waves traveling within the Middle East region and compare the results to simple tomographic inversions. As expected from synthetic testing, the progressive approach results in detailed structure where there is high data density and broader regional anomalies where seismic information is sparse. The ultimate goal is to use these methods to produce a seamless, multi-resolution global tomographic model with local model resolution determined by the constraints afforded by available data. They envisage this new technique as the general approach to be employed for future multi-resolution model development with complex arrangements of regional and teleseismic information.
Fast multiresolution search algorithm for optimal retrieval in large multimedia databases
NASA Astrophysics Data System (ADS)
Song, Byung C.; Kim, Myung J.; Ra, Jong Beom
1999-12-01
Most of the content-based image retrieval systems require a distance computation for each candidate image in the database. As a brute-force approach, the exhaustive search can be employed for this computation. However, this exhaustive search is time-consuming and limits the usefulness of such systems. Thus, there is a growing demand for a fast algorithm which provides the same retrieval results as the exhaustive search. In this paper, we propose a fast search algorithm based on a multi-resolution data structure. The proposed algorithm computes the lower bound of distance at each level and compares it with the latest minimum distance, starting from the low-resolution level. Once it is larger than the latest minimum distance, we can exclude the candidates without calculating the full-resolution distance. By doing this, we can dramatically reduce the total computational complexity. It is noticeable that the proposed fast algorithm provides not only the same retrieval results as the exhaustive search, but also a faster searching ability than existing fast algorithms. For additional performance improvement, we can easily combine the proposed algorithm with existing tree-based algorithms. The algorithm can also be used for the fast matching of various features such as luminance histograms, edge images, and local binary partition textures.
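The pruning logic of such a multiresolution search can be sketched with one coarse level. Below, pairwise averaging plays the role of the low-resolution representation; since 2·‖coarsen(x) − coarsen(y)‖² ≤ ‖x − y‖², the coarse distance gives a true lower bound, so skipped candidates can never beat the current best and the result matches exhaustive search. The single level and the function names are simplifying assumptions, not the paper's full data structure.

```python
import numpy as np

def coarsen(v):
    """Halve resolution by averaging adjacent pairs (assumes even length)."""
    return (v[0::2] + v[1::2]) / 2.0

def lower_bound_sq(qc, cc):
    """Provable lower bound on the full squared distance, from coarse vectors."""
    d = qc - cc
    return 2.0 * np.dot(d, d)

def multires_search(query, database):
    """Index of the nearest database vector; prunes with the coarse bound."""
    qc = coarsen(query)
    best_idx, best_sq = -1, np.inf
    for i, cand in enumerate(database):
        if lower_bound_sq(qc, coarsen(cand)) >= best_sq:
            continue  # cannot beat the current best: skip the full distance
        d = query - cand
        sq = np.dot(d, d)
        if sq < best_sq:
            best_sq, best_idx = sq, i
    return best_idx
```

A full implementation would cascade several resolution levels, checking the bound at each level before descending to the next.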
Systolic implementation of a bidimensional lattice filter bank for multiresolution image coding
NASA Astrophysics Data System (ADS)
Desneux, P.; Legat, Jean-Didier; Macq, Benoit M. M.; Mertes, J. Y.
1993-10-01
In this paper, we present a systolic architecture based on the lattice structure of filters. The main characteristic of this architecture is its systolism: computations are pipelined across many identical, locally interconnected processing elements (PEs). These PEs are simple, can run at a high clock frequency, and remain busy throughout the computation, so the speed of the circuit can be increased. The implementation of the filters through VLSI techniques is facilitated by the repetitive nature of the elements. In section 2, we describe the multiresolution scheme and the lattice structures. Although the lattice structure is an efficient remedy for the finite word length of the multipliers, special attention must be paid to the computation noise that appears when the datapath is limited to a finite width. The goal of the related study is to keep this computation noise below the quantization noise (coming from the quantizers) at a reasonable cost. In section 3, we present the basic processing element and its use among the different stages of the filter. Section 4 deals with the finite representation of the data throughout the datapath.
Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis
Hoa T. Nguyen; Stone, Daithi; E. Wes Bethel
2016-01-01
An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for some specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have limitations, such as high computational cost or loss of accuracy. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in data, namely variation. We use two different case studies to explore this idea, one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that a statistical measure preserves the variation signal inherent in the data more faithfully, across both multi-dimensional projections and multi-resolution representations, than a methodology based upon averaging.
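The idea of choosing the reduction statistic to preserve variation can be illustrated in a few lines (a hedged sketch, not the authors' code): reducing a signal by block means hides a growing variance, while reducing by block standard deviations retains it.

```python
import numpy as np

def block_reduce(data, k, stat):
    """Reduce a 1D array by applying `stat` over consecutive blocks of length k."""
    n = (len(data) // k) * k
    return stat(data[:n].reshape(-1, k), axis=1)

rng = np.random.default_rng(0)
# Zero-mean noise whose amplitude grows along the axis: the mean stays flat,
# but the variation carries the signal of interest.
signal = rng.normal(0.0, 1.0, 1000) * np.linspace(0.0, 1.0, 1000)
means = block_reduce(signal, 100, np.mean)  # mean-based reduced resolution
stds = block_reduce(signal, 100, np.std)    # variation-preserving alternative
```

The mean-based reduction is close to zero everywhere, while the standard-deviation reduction clearly exposes the trend in variability.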
NASA Technical Reports Server (NTRS)
Jawerth, Bjoern; Sweldens, Wim
1993-01-01
We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
Face recognition with multi-resolution spectral feature images.
Sun, Zhan-Li; Lam, Kin-Man; Dong, Zhao-Yang; Wang, Han; Gao, Qing-Wei; Zheng, Chun-Hou
2013-01-01
The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving high recognition accuracy remains difficult because, typically, too few training samples are available, and illumination and expression vary. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method.
Nonlinear multiresolution signal decomposition schemes--part I: morphological pyramids.
Goutsias, J; Heijmans, H M
2000-01-01
Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This paper presents a general theory for constructing linear as well as nonlinear pyramid decomposition schemes for signal analysis and synthesis. The proposed theory is based on the following ingredients: 1) the pyramid consists of a (finite or infinite) number of levels such that the information content decreases toward higher levels and 2) each step toward a higher level is implemented by an (information-reducing) analysis operator, whereas each step toward a lower level is implemented by an (information-preserving) synthesis operator. One basic assumption is necessary: synthesis followed by analysis yields the identity operator, meaning that no information is lost by these two consecutive steps. Several examples of pyramid decomposition schemes are shown to be instances of the proposed theory: a particular class of linear pyramids, morphological skeleton decompositions, the morphological Haar pyramid, median pyramids, etc. Furthermore, the paper makes a distinction between single-scale and multiscale decomposition schemes, i.e., schemes without or with sample reduction. Finally, the proposed theory provides the foundation of a general approach to constructing nonlinear wavelet decomposition schemes and filter banks.
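The two ingredients above and the "synthesis followed by analysis yields the identity" condition can be made concrete with a tiny morphological pyramid (an illustrative sketch; the paper's framework is far more general). Here analysis takes pairwise maxima, synthesis repeats samples, and storing the synthesis residual at each level makes the decomposition perfectly invertible.

```python
import numpy as np

def analyze(x):
    """Information-reducing analysis: maximum over adjacent pairs."""
    return np.maximum(x[0::2], x[1::2])

def synthesize(c):
    """Information-preserving synthesis: zero-order-hold upsampling."""
    return np.repeat(c, 2)

def pyramid(x, levels):
    """Decompose x into a coarse top level plus per-level detail residuals."""
    details = []
    for _ in range(levels):
        c = analyze(x)
        details.append(x - synthesize(c))  # what synthesis alone cannot recover
        x = c
    return x, details

def reconstruct(top, details):
    x = top
    for d in reversed(details):
        x = synthesize(x) + d
    return x
```

The pyramid condition holds because analyze(synthesize(c)) = max(c, c) = c, so no information is lost in the down-up round trip, and reconstruction is exact.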
On analysis of electroencephalogram by multiresolution-based energetic approach
NASA Astrophysics Data System (ADS)
Sevindir, Hulya Kodal; Yazici, Cuneyt; Siddiqi, A. H.; Aslan, Zafer
2013-10-01
Epilepsy is a common brain disorder in which normal neuronal activity is affected. Electroencephalography (EEG) is the recording of electrical activity along the scalp produced by the firing of neurons within the brain. One of the main applications of EEG is the study of epilepsy: on a standard EEG, certain abnormalities indicate epileptic activity. EEG signals, like many biomedical signals, are highly non-stationary by nature. For the investigation of biomedical signals, in particular EEG signals, wavelet analysis has found a prominent position owing to its ability to analyze such signals. The wavelet transform is capable of separating the signal energy among different frequency scales, and a good compromise between temporal and frequency resolution is obtained. The present study is an attempt at better understanding the mechanism causing the epileptic disorder and at accurate prediction of the occurrence of seizures. In the present paper, following Magosso's work [12], we identify typical patterns of energy redistribution before and during the seizure using multiresolution wavelet analysis on data from Kocaeli University's Medical School.
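The quantity tracked in such an analysis, the distribution of signal energy across wavelet scales, can be computed with an orthonormal Haar decomposition in a few lines. This is a generic sketch with illustrative names, not the study's clinical pipeline.

```python
import numpy as np

def haar_step(x):
    """One orthonormal Haar step: approximation and detail halves."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def energy_by_scale(x, levels):
    """Energy in each detail band, plus the remaining approximation energy."""
    energies, a = [], np.asarray(x, float)
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append(np.dot(d, d))
    energies.append(np.dot(a, a))
    return np.array(energies)
```

Because the transform is orthonormal, the band energies sum exactly to the signal energy, so shifts in the energy distribution before and during a seizure can be compared directly.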
Eagle II: A prototype for multi-resolution combat modeling
Powell, D.R.; Hutchinson, J.L.
1993-02-01
Eagle II is a prototype analytic model derived from the integration of the low resolution Eagle model with the high resolution SIMNET model. This integration promises a new capability to allow for a more effective examination of proposed or existing combat systems that could not be easily evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high and low fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment being conducted goes beyond system performance and extends to questions of force structure balance and sustainment, then SIMNET results can be used to "calibrate" the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the world-wide threat, requirements for flexible response, declining defense budgets, and down-sizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness and low overall cost.
Multiresolution imaging of in-vivo ligand-receptor interactions
NASA Astrophysics Data System (ADS)
Thevenaz, Philippe; Millet, Philippe
2001-05-01
The aim of this study is to obtain voxel-by-voxel images of binding parameters between [11C]-flumazenil and benzodiazepine receptors using positron emission tomography (PET). We estimate five local parameters (k1, k2, B'max, kon/VR, koff) by fitting a three-compartment ligand-receptor model for each voxel of a PET time series. It proves difficult to fit the ligand-receptor model to the data. We trade noise and spatial resolution to get better results. Our strategy is based on the use of a multiresolution pyramid. It is much easier to solve the problem at coarse resolution because there are fewer data to process. To increase resolution, we expand the parameter maps to the next finer level and use them as initial solution to further optimization, which then proceeds at a fast pace and is more likely to escape false local minima. For this approach to work optimally, the residue between data at a given pyramid level and data at the next level must be as small as possible. We satisfy this constraint by working with spline-based least-squares pyramids. To achieve speed, the optimizer must be efficient, particularly when it is nearing the solution. To that effect, we have developed a Marquardt-Levenberg algorithm that exhibits superlinear convergence properties.
Cointegration and Nonstationarity in the Context of Multiresolution Analysis
NASA Astrophysics Data System (ADS)
Worden, K.; Cross, E. J.; Kyprianou, A.
2011-07-01
Cointegration has established itself as a powerful means of projecting out long-term trends from time-series data in the context of econometrics. Recent work by the current authors has further established that cointegration can be applied profitably in the context of structural health monitoring (SHM), where it is desirable to project out the effects of environmental and operational variations from data in order that they do not generate false positives in diagnostic tests. The concept of cointegration is partly built on a clear understanding of the ideas of stationarity and nonstationarity for time-series. Nonstationarity in this context is 'traditionally' established through the use of statistical tests, e.g. the hypothesis test based on the augmented Dickey-Fuller statistic. However, it is important to understand the distinction in this case between 'trend' stationarity and stationarity of the AR models typically fitted as part of the analysis process. The current paper will discuss this distinction in the context of SHM data and will extend the discussion by the introduction of multi-resolution (discrete wavelet) analysis as a means of characterising the time-scales on which nonstationarity manifests itself. The discussion will be based on synthetic data and also on experimental data for the guided-wave SHM of a composite plate.
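The first step of a cointegration analysis in this spirit can be sketched as follows (an Engle-Granger-style illustration on synthetic data; the variable names and model are assumptions). Two series share a stochastic trend, so each is nonstationary, yet the regression residual, the cointegrating combination, has the common trend projected out.

```python
import numpy as np

rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(0.0, 1.0, 500))   # shared random-walk trend
x = 2.0 * trend + rng.normal(0.0, 0.5, 500)    # nonstationary response series
y = 1.0 * trend + rng.normal(0.0, 0.5, 500)    # nonstationary reference series

# Regress x on y; the residual is the candidate cointegrating combination,
# which is left with only the stationary noise components.
beta, alpha = np.polyfit(y, x, 1)
resid = x - (beta * y + alpha)
```

In practice the residual would then be subjected to a stationarity test such as the augmented Dickey-Fuller test mentioned above; a multi-resolution view additionally reveals the time-scales on which any remaining nonstationarity lives.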
Interactive multiscale tensor reconstruction for multiresolution volume visualization.
Suter, Susanne K; Guitián, José A Iglesias; Marton, Fabio; Agus, Marco; Elsener, Andreas; Zollikofer, Christoph P E; Gopi, M; Gobbetti, Enrico; Pajarola, Renato
2011-12-01
Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes.
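The compression principle behind tensor approximation can be illustrated with its matrix analogue, a truncated SVD; the paper uses higher-order Tucker-style decompositions on 3D bricks, so this 2D sketch and its names are simplifications. The idea is to keep only the r strongest rank-1 components of a data slice.

```python
import numpy as np

def truncated_svd(a, r):
    """Best rank-r approximation of matrix a (Eckart-Young), via SVD."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    # Scale the first r left singular vectors by their singular values,
    # then recombine with the first r right singular vectors.
    return (u[:, :r] * s[:r]) @ vt[:r]
```

Storing the truncated factors instead of the full brick is what reduces the data transfer bandwidth and out-of-core footprint; reconstruction is a small matrix product, which maps well onto a GPU.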
Multiresolution generalized N dimension PCA for ultrasound image denoising
2014-01-01
Background: Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining better diagnosis. Method: In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA as a multilinear subspace learning method is used for denoising. Each level is combined to achieve the final denoised image based on Laplacian pyramids. Results: The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion: Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for the image with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
Multiresolution analysis over simple graphs for brain computer interfaces
NASA Astrophysics Data System (ADS)
Asensio-Cubero, J.; Gan, J. Q.; Palaniappan, R.
2013-08-01
Objective. Multiresolution analysis (MRA) offers a useful framework for signal analysis in the temporal and spectral domains, although commonly employed MRA methods may not be the best approach for brain computer interface (BCI) applications. This study aims to develop a new MRA system for extracting tempo-spatial-spectral features for BCI applications based on wavelet lifting over graphs. Approach. This paper proposes a new graph-based transform for wavelet lifting and a tailored simple graph representation for electroencephalography (EEG) data, which results in an MRA system where temporal, spectral and spatial characteristics are used to extract motor imagery features from EEG data. The transformed data is processed within a simple experimental framework to test the classification performance of the new method. Main Results. The proposed method can significantly improve the classification results obtained by various wavelet families using the same methodology. Preliminary results using common spatial patterns as feature extraction method show that we can achieve comparable classification accuracy to more sophisticated methodologies. From the analysis of the results we can obtain insights into the pattern development in the EEG data, which provide useful information for feature basis selection and thus for improving classification performance. Significance. Applying wavelet lifting over graphs is a new approach for handling BCI data. The inherent flexibility of the lifting scheme could lead to new approaches based on the hereby proposed method for further classification performance improvement.
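The lifting scheme underlying the proposed graph transform is easy to state on a 1D signal (a plain Haar-style lifting sketch, not the paper's graph construction): split samples into even and odd sets, predict the odd ones from their even neighbours, then update the even ones so coarse averages are preserved.

```python
import numpy as np

def lifting_forward(x):
    """One level of Haar-style lifting: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict odd samples from even neighbours
    approx = even + detail / 2.0   # update so the running mean is preserved
    return approx, detail

def lifting_inverse(approx, detail):
    """Undo the update and predict steps, then interleave."""
    even = approx - detail / 2.0
    odd = detail + even
    x = np.empty(2 * len(approx))
    x[0::2], x[1::2] = even, odd
    return x
```

On a graph, the split becomes a partition of vertices and the predict/update filters average over graph neighbours, which is what lets the transform mix temporal, spectral and spatial EEG structure.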
Multi-resolution Convolution Methodology for ICP Waveform Morphology Analysis.
Shaw, Martin; Piper, Ian; Hawthorne, Christopher
2016-01-01
Intracranial pressure (ICP) monitoring is a key clinical tool in the assessment and treatment of patients in neurointensive care. ICP morphology analysis can be useful in the classification of waveform features. A methodology for the decomposition of an ICP signal into clinically relevant dimensions has been devised that allows the identification of important ICP waveform types. It has three main components. First, multi-resolution convolution analysis is used for the main signal decomposition. Then, an impulse function is created, with multiple parameters, that can represent any form in the signal under analysis. Finally, a simple, localised optimisation technique is used to find morphologies of interest in the decomposed data. A pilot application of this methodology using a simple signal has been performed. This has shown that the technique works with performance receiver operator characteristic area under the curve values for each of the waveform types: plateau wave, B wave and high and low compliance states of 0.936, 0.694, 0.676 and 0.698, respectively. This is a novel technique that showed some promise during the pilot analysis. However, it requires further optimisation to become a usable clinical tool for the automated analysis of ICP signals.
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet; Michalak, Anna M.; van Bloemen Waanders, Bart Gustaaf; McKenna, Sean Andrew
2013-04-01
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for ffCO2 emissions, to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a substantial computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.
Techniques and potential capabilities of multi-resolutional information (knowledge) processing
NASA Technical Reports Server (NTRS)
Meystel, A.
1989-01-01
A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes, irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Definite conditions are analyzed, satisfaction of which is required for organization and processing of redundant information (knowledge) in the multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.
NASA Astrophysics Data System (ADS)
Ke-zhong, Han
The rise of frame theory in applied mathematics is due to the flexibility and redundancy of frames. In this work, the notions of bivariate affine pseudoframes and of a bivariate generalized multiresolution analysis (GMRA) are introduced. A novel approach for designing a GMRA of Paley-Wiener subspaces of L2(R2) is proposed. A sufficient condition for the existence of a class of affine pseudoframes with filter banks is obtained by virtue of a generalized multiresolution analysis. The pyramid decomposition scheme is established based on such a generalized multiresolution analysis. An approach for designing a class of bivariate affine dual frames in two-dimensional space is presented.
Fast multipole and space adaptive multiresolution methods for the solution of the Poisson equation
NASA Astrophysics Data System (ADS)
Bilek, Petr; Duarte, Max; Nečas, David; Bourdon, Anne; Bonaventura, Zdeněk
2016-09-01
This work focuses on the conjunction of the fast multipole method (FMM) with the space adaptive multiresolution (MR) technique for grid adaptation. Since both methods, MR and FMM, provide a priori error estimates, both achieve O(N) computational complexity, and both operate on the same hierarchical space division, their conjunction represents a natural choice when designing a numerically efficient and robust strategy for time dependent problems. Special attention is given to the use of these methods in the simulation of streamer discharges in air. We have designed a FMM Poisson solver on a multiresolution adapted grid in 2D. The accuracy and the computational complexity of the solver have been verified for a set of manufactured solutions. We confirmed that the developed solver attains the desired accuracy and that this accuracy is controlled only by the number of terms in the multipole expansion in combination with the multiresolution accuracy tolerance. The implementation has a linear computational complexity O(N).
NASA Astrophysics Data System (ADS)
Couloigner, Isabelle; Ranchin, Thierry
1998-10-01
This paper presents a new method to extract, semi-automatically, quadrangular urban road network from high spatial resolution imagery. A quadrangular network is generally composed of different classes of streets in a hierarchical system. The developed method is based both on the multiresolution analysis and on the wavelet transform. The multiresolution analysis allows a multiscale analysis of images and thus the extraction of the streets in a class-by-class way. The wavelet transform enables the modeling of information at different characteristic scales. In the problem, it allows the extraction of the topography of streets. These two mathematical tools are combined in the 'à trous' algorithm. The application of this algorithm to images of urban areas has been used to develop semi-automatic multiresolution processing. This method will help photo-interpreters in their cartographic works by a partial automation of tasks.
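The 'à trous' ("with holes") algorithm referred to above can be sketched directly: the same small smoothing kernel is reapplied with its taps spread 2^j samples apart, and the difference between successive smoothings yields the wavelet plane at each scale. This 1D version with a B3-spline kernel and edge replication is an illustrative reconstruction, not the authors' exact implementation.

```python
import numpy as np

def atrous_smooth(x, level):
    """Smooth with a B3-spline kernel whose taps are 2**level samples apart."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    step = 2 ** level
    n = len(x)
    out = np.zeros(n)
    for k, w in zip(range(-2, 3), kernel):
        idx = np.clip(np.arange(n) + k * step, 0, n - 1)  # replicate edges
        out += w * x[idx]
    return out

def atrous_decompose(x, levels):
    """Wavelet planes w_j = c_j - c_{j+1}; their sum plus the residual is x."""
    planes, c = [], np.asarray(x, float)
    for j in range(levels):
        s = atrous_smooth(c, j)
        planes.append(c - s)
        c = s
    return planes, c
```

Because the planes are simple differences of successive smoothings, the decomposition is redundant (no subsampling) and reconstruction is just their sum, which is convenient when extracting street classes scale by scale.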
Reuter, Matthew G; Hill, Judith C; Harrison, Robert J
2012-01-01
In this work, we develop and analyze a formalism for solving boundary value problems in arbitrarily-shaped domains using the MADNESS (multiresolution adaptive numerical environment for scientific simulation) package for adaptive computation with multiresolution algorithms. We begin by implementing a previously-reported diffuse domain approximation for embedding the domain of interest into a larger domain (Li et al., 2009 [1]). Numerical and analytical tests both demonstrate that this approximation yields non-physical solutions with zero first and second derivatives at the boundary. This excessive smoothness leads to large numerical cancellation and confounds the dynamically-adaptive, multiresolution algorithms inside MADNESS. We thus generalize the diffuse domain approximation, producing a formalism that demonstrates first-order convergence in both near- and far-field errors. We finally apply our formalism to an electrostatics problem from nanoscience with characteristic length scales ranging from 0.0001 to 300 nm.
NASA Astrophysics Data System (ADS)
Hakoyama, Tomoyuki; Kuwabara, Toshihiko; Barlat, Frédéric
2016-10-01
The effect of the method used to determine the material parameters of a yield function on the accuracy of the forming limit strains predicted by the Marciniak-Kuczyński-type (M-K) forming limit analysis is investigated for a 5000-series aluminum alloy sheet. Tube specimens are subjected to tension-expansion loading along linear paths in the first quadrant of stress space to measure the multiaxial plastic deformation behavior and the forming limit strains of the test material. The anisotropic parameters and the exponent of the Yld2000-2d yield function (Barlat et al., 2003) are optimized to approximate the contours of plastic work and/or the directions of the plastic strain rates. The M-K analyses are performed using the different model identifications based on the Yld2000-2d yield function. It is concluded that the yield function best capturing both the plastic work contours and the directions of the plastic strain rates leads to the most accurately predicted forming limit strains.
Multiresolution molecular mechanics: Surface effects in nanoscale materials
NASA Astrophysics Data System (ADS)
Yang, Qingcheng; To, Albert C.
2017-05-01
Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The recently proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang, To (2015), [57]) is applied to capture surface effects in nanosized structures by designing a surface summation rule SR_S within the framework of MMM. Combined with the previously proposed bulk summation rule SR_B, the full MMM summation rule SR_MMM is completed. SR_S and SR_B are consistently formulated within SR_MMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key to the good performance of SR_MMM is that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, position, and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface region differs from that of the bulk region; physically, the difference arises because surface atoms lack neighboring bonds. As such, SR_S and SR_B are employed for the surface and bulk domains, respectively. Two- and three-dimensional numerical examples using 4-node bilinear quadrilateral, 8-node quadratic quadrilateral, and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SR_MMM accurately captures corner, edge, and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SR_MMM with higher-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem with surface effects. In addition, the sampling error introduced by SR_MMM, analogous to the numerical integration error of a quadrature rule in FEM, is very small.
On-the-Fly Decompression and Rendering of Multiresolution Terrain
Lindstrom, P; Cohen, J D
2009-04-02
We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
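The paper's exact predictor is not spelled out in the abstract; a standard choice for linear prediction with lossless residual coding on gridded heights is the parallelogram (Lorenzo) predictor, sketched here in pure Python as an illustration of the lossless round trip:

```python
def lorenzo_residuals(grid):
    # Parallelogram (Lorenzo) predictor: pred = left + top - top_left.
    # Residuals are small for smooth terrain, so they entropy-code well.
    h, w = len(grid), len(grid[0])
    res = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            left = grid[i][j - 1] if j else 0
            top = grid[i - 1][j] if i else 0
            tl = grid[i - 1][j - 1] if i and j else 0
            res[i][j] = grid[i][j] - (left + top - tl)
    return res

def lorenzo_reconstruct(res):
    # Inverse pass: the decoder rebuilds each height from already-decoded
    # neighbours, giving a lossless round trip from residuals alone.
    h, w = len(res), len(res[0])
    grid = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            left = grid[i][j - 1] if j else 0
            top = grid[i - 1][j] if i else 0
            tl = grid[i - 1][j - 1] if i and j else 0
            grid[i][j] = res[i][j] + (left + top - tl)
    return grid
```

For locally planar terrain the predictor is exact and the residuals vanish, which is why the residual stream compresses so much better than raw heights.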
Multiresolution image representation using combined 2-D and 1-D directional filter banks.
Tanaka, Yuichi; Ikehara, Masaaki; Nguyen, Truong Q
2009-02-01
In this paper, effective multiresolution image representations using a combination of 2-D filter banks (FBs) and directional wavelet transforms (WTs) are presented. The proposed methods yield simple implementations and low computational cost compared to previous 1-D and 2-D FB combinations or adaptive directional WT methods. Furthermore, they are nonredundant transforms and realize quad-tree-like multiresolution representations. In nonlinear approximation, image coding, and denoising applications, the proposed filter banks show visual quality improvements and higher PSNR than the conventional separable WT or the contourlet.
NASA Astrophysics Data System (ADS)
Qin, Yi; Hua, Hong; Nguyen, Mike
2013-03-01
The laparoscope is an essential tool for minimally invasive surgery (MIS) within the abdominal cavity. However, the focal length of a conventional laparoscope is fixed, so it suffers from a tradeoff between field of view (FOV) and spatial resolution. In order to obtain a large optical magnification and see more detail, a conventional laparoscope is usually designed with a small working distance, typically less than 50 mm. Such a small working distance limits the field of coverage, which creates a situational-awareness challenge during laparoscopic surgery. We developed a multi-resolution foveated laparoscope (MRFL) to address this limitation. The MRFL was designed to support a large working distance range, from 80 mm to 180 mm. It simultaneously provides both a wide-angle overview and a high-resolution image of the surgical field in real time within a fully integrated system. The high-resolution imaging probe can automatically scan and engage any subfield of the wide-angle view. During surgery the MRFL does not need to move, so it can reduce instrument conflicts. The FOV of the wide-angle imaging probe is 80° and that of the high-resolution imaging probe is 26.6°. The maximum resolution is about 45 μm in object space at an 80 mm working distance, about 5 times better than a conventional laparoscope at a 50 mm working distance. The prototype realizes an equivalent 10-megapixel resolution using only two HD cameras because of its foveation capability, which saves bandwidth and improves the frame rate compared with a single super-resolution camera. It has great potential to improve the safety and accuracy of laparoscopic surgery.
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-09-01
Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.
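The core MMA idea of grafting high-resolution anatomical detail into a low-resolution functional image via wavelet decompositions can be sketched in 1-D with a Haar transform and a single global coupling coefficient (the "global correlation model" that the new local 3-D work improves on). The transform choice and least-squares regression form are illustrative assumptions, not the authors' exact implementation:

```python
def haar_forward(x):
    # One-level Haar analysis: (approximation, detail) coefficient lists.
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    # Exact one-level Haar synthesis.
    out = []
    for a, d in zip(approx, detail):
        out.extend((a + d, a - d))
    return out

def inject_details(functional, anatomical):
    # Global-correlation model: regress the functional detail band on the
    # anatomical one, then graft the scaled anatomical details into the
    # functional image's wavelet decomposition.
    fa, fd = haar_forward(functional)
    aa, ad = haar_forward(anatomical)
    num = sum(f * a for f, a in zip(fd, ad))
    den = sum(a * a for a in ad) or 1.0
    alpha = num / den  # global least-squares coupling coefficient
    return haar_inverse(fa, [alpha * a for a in ad])
```

Because alpha is a single global number, any local anatomical/functional mismatch is propagated everywhere, which is precisely the artifact the 3-D local model is designed to avoid.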
Kim, Won Hwa; Adluru, Nagesh; Chung, Moo K.; Okonkwo, Ozioma C.; Johnson, Sterling C.; Bendlin, Barbara; Singh, Vikas
2015-01-01
There is significant interest, from both basic and applied research perspectives, in understanding how structural/functional connectivity changes can explain behavioral symptoms and predict decline in neurodegenerative diseases such as Alzheimer's disease (AD). The first step in most such analyses is to encode the connectivity information as a graph; then, one may perform statistical inference on various 'global' graph-theoretic summary measures (e.g., modularity, graph diameter) and/or at the level of individual edges (or connections). For AD in particular, clear differences in connectivity at the dementia stage of the disease (relative to healthy controls) have been identified. Despite such findings, AD-related connectivity changes in preclinical disease remain poorly characterized, as preclinical datasets are typically smaller and group differences weaker. In this paper, we propose a new multi-resolution method for performing statistical analysis of connectivity networks/graphs derived from neuroimaging data. At a high level, the method occupies the middle ground between the two extremes: analyzing global graph summary measures (global) and analyzing connectivity strengths or correlations for individual edges, similar to voxel-based analysis (local). Our strategy instead derives a wavelet representation at each primitive (connection edge) which captures the graph context at multiple resolutions. We provide extensive empirical evidence of how this framework offers improved statistical power by analyzing two distinct AD datasets, in which connectivity is derived from diffusion tensor magnetic resonance images by running a tractography routine. We first present results showing significant connectivity differences between AD patients and controls that were not evident using standard approaches. Later, we show results on populations that are not diagnosed with AD but have a positive family history risk of AD where our algorithm helps in identifying
A maximum-likelihood multi-resolution weak lensing mass reconstruction method
NASA Astrophysics Data System (ADS)
Khiabanian, Hossein
Gravitational lensing occurs when the light from a distant source is "bent" around a massive object. Lensing analysis has increasingly become the method of choice for studying dark matter, so much so that it is one of the main tools that will be employed in future surveys to study dark energy and its equation of state as well as the evolution of galaxy clustering. Unlike other popular techniques for selecting galaxy clusters (such as studying the X-ray emission or observing the over-densities of galaxies), weak gravitational lensing does not rely on the luminous matter and provides a parameter-free reconstruction of the projected mass distribution in clusters without dependence on baryon content. Gravitational lensing also provides a unique test for the presence of truly dark clusters, though it is otherwise an expensive detection method. It is therefore essential to make use of all the information provided by the data to improve the quality of the lensing analysis. This thesis project has been motivated by the limitations encountered with the commonly used direct reconstruction methods of producing mass maps. We have developed a multi-resolution maximum-likelihood reconstruction method for producing two-dimensional mass maps using weak gravitational lensing data. To utilize all the shear information, we employ an iterative inverse method with a properly selected regularization coefficient which fits the deflection potential at the position of each galaxy. By producing mass maps with multiple resolutions in different parts of the observed field, we can achieve a uniform signal-to-noise level by increasing the resolution in regions of higher distortions or regions with an over-density of background galaxies. In addition, we are able to better study the substructure of massive clusters at a resolution which is not attainable in the rest of the observed field.
Multiresolution retinal vessel tracker based on directional smoothing
NASA Astrophysics Data System (ADS)
Englmeier, Karl-Hans; Bichler, Simon; Schmid, K.; Maurino, M.; Porta, Massimo; Bek, Toke; Ege, B.; Larsen, Ole V.; Hejlesen, Ok
2002-04-01
To support ophthalmologists in their routine work and enable the quantitative assessment of vascular changes in color fundus photographs, a multi-resolution approach was developed which segments the vessel tree efficiently and precisely in digital images of the retina. The algorithm starts at seed points found in a preprocessing step and then follows the vessel, iteratively adjusting the direction of the search and finding the center line of the vessels. In addition, vessel branches and crossings are detected and stored in detailed lists. Every iteration of the Directional Smoothing Based (DSB) tracking process starts at a given point in the middle of a vessel. First, rectangular windows for several directions in a neighborhood of this point are smoothed in the assumed direction of the vessel. The window that yields the best contrast is then taken to indicate the true direction of the vessel. The center point is moved in that direction by 1/8th of the vessel width, and the algorithm continues with the next iteration. The vessel branch and crossing detection uses a list with unique vessel segment IDs and branch point IDs. During tracking, when another vessel is crossed, the tracking is stopped. The newly traced vessel segment is stored in the vessel segment list, and the vessel that had been traced before is broken up at the crossing or branch point and stored as two different vessel segments. This approach has several advantages: (1) with directional smoothing, noise is eliminated while the edges of the vessels are preserved; (2) DSB works on high-resolution images (3000 x 2000 pixels) as well as on low-resolution images (900 x 600 pixels), because a large area of the vessel is used to find the vessel direction; (3) for the detection of venous beading, the vessel width is measured at every step of the traced vessel; (4) with the lists of branch and crossing points, we obtain a network of connected vessel segments that can be used for further processing the retinal vessel
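The direction-selection step can be sketched as follows: smooth along each candidate direction and keep the direction whose smoothed window shows the best contrast against its flanks. The sampling scheme, window sizes, and contrast measure below are illustrative assumptions, not the authors' exact implementation:

```python
import math

def sample(img, x, y):
    # Nearest-neighbour lookup with clamping at the image borders.
    h, w = len(img), len(img[0])
    return img[min(max(int(round(y)), 0), h - 1)][min(max(int(round(x)), 0), w - 1)]

def directional_contrast(img, x, y, theta, length=5, width=3):
    # Smooth along direction theta, then measure contrast between the
    # centre line and two lines offset perpendicular to it.
    dx, dy = math.cos(theta), math.sin(theta)
    px, py = -dy, dx  # unit vector perpendicular to the direction

    def line_mean(off):
        vals = [sample(img, x + t * dx + off * px, y + t * dy + off * py)
                for t in range(-length, length + 1)]
        return sum(vals) / len(vals)

    centre = line_mean(0)
    flank = (line_mean(width) + line_mean(-width)) / 2
    return abs(flank - centre)

def best_direction(img, x, y, n_dirs=8):
    # The tracker keeps the direction whose smoothed window has the
    # best contrast, i.e. the direction aligned with the vessel.
    thetas = [math.pi * k / n_dirs for k in range(n_dirs)]
    return max(thetas, key=lambda t: directional_contrast(img, x, y, t))
```

Smoothing along the vessel averages out noise without blurring the vessel edge, so the aligned window wins the contrast comparison.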
Multi-resolution global tomography using multiple tessellation tiers
NASA Astrophysics Data System (ADS)
Simmons, N. A.; Myers, S. C.
2008-12-01
We are currently developing a new 3-D global tomography modeling framework with monitoring purposes at the forefront. The overall monitoring goal is to develop a seamless, 3-D global model capable of accurately locating seismic events from combined teleseismic and regional travel time observations. Such a model will eliminate the need to splice independent regional and/or global models for travel time prediction, since the single model will be complete and self-consistent. In order to optimize the number of free parameters, the modeling framework must allow for flexible multi-resolution capabilities in depth and in the horizontal dimensions. The model design must also directly account for variable discontinuous structures from a priori information to maintain travel time prediction accuracy. With these general criteria in mind, we have designed a tessellation procedure whereby the triangular sides of a predetermined polyhedron are recursively subdivided into smaller triangles. Each recursion level (tier) produces 4 triangular regions (daughters) within each larger triangular region (parent), and the vertices are then normalized to the unit sphere. Upon building each higher-tier tessellation, parent-daughter triangle information is stored and indexed to allow for rapid model referencing at any desired tessellation tier (resolution level). Sensitivity kernels can therefore be computed for all available tessellation tiers while simultaneously triangulating positions along a given ray path, allowing for an easily adjustable multi-resolution model inversion setup. Mixing kernel weights from multiple tessellation tiers also provides an efficient regularization scheme. Sets of model nodes are defined along the tessellated vertices, and discontinuous structures (such as the Moho and subducted slabs) are treated via logical definitions and a structured data array design with only a slight cost in computational efficiency. A geodetic reference ellipsoid (GRS80) is built into
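The recursive tessellation described above can be sketched directly: each tier splits every parent triangle into four daughters at its edge midpoints, re-normalized to the unit sphere, and every tier is retained so the model can be referenced at any resolution level. Starting from an octahedron (the abstract does not name the polyhedron) is an assumption:

```python
import math

def normalize(v):
    # Project a 3-D point onto the unit sphere.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def midpoint(a, b):
    # Edge midpoint, pushed back out to the unit sphere.
    return normalize(tuple((x + y) / 2 for x, y in zip(a, b)))

def subdivide(tri):
    # One recursion tier: a parent triangle yields 4 daughter triangles.
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def tessellate(tiers):
    # Start from the 8 faces of an octahedron; keep every tier so the
    # model can be referenced at any desired resolution level.
    p = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = [(p[0], p[2], p[4]), (p[2], p[1], p[4]), (p[1], p[3], p[4]),
             (p[3], p[0], p[4]), (p[2], p[0], p[5]), (p[1], p[2], p[5]),
             (p[3], p[1], p[5]), (p[0], p[3], p[5])]
    levels = [faces]
    for _ in range(tiers):
        faces = [d for parent in faces for d in subdivide(parent)]
        levels.append(faces)
    return levels
```

The daughter index within each parent's list of four plays the role of the stored parent-daughter bookkeeping that makes cross-tier referencing fast.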
Adaptive multiresolution semi-Lagrangian discontinuous Galerkin methods for the Vlasov equations
NASA Astrophysics Data System (ADS)
Besse, N.; Deriaz, E.; Madaule, É.
2017-03-01
We develop adaptive numerical schemes for the Vlasov equation by combining discontinuous Galerkin discretisation, multiresolution analysis and semi-Lagrangian time integration. We implement a tree-based structure in order to achieve adaptivity. Both multi-wavelets and discontinuous Galerkin methods rely on a local polynomial basis. The schemes are tested and validated using the Vlasov-Poisson equations for plasma physics and astrophysics.
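The semi-Lagrangian ingredient, shown here in its simplest possible form (constant velocity, first-order interpolation, periodic 1-D grid, rather than the paper's DG/multiwavelet machinery), traces each grid point's characteristic backward and interpolates the old solution there:

```python
import math

def semi_lagrangian_step(f, velocity, dt, dx):
    # Trace each grid point's characteristic back by velocity*dt and
    # linearly interpolate the old solution at the departure point
    # (periodic domain). Unconditionally stable: no CFL restriction.
    n = len(f)
    shift = velocity * dt / dx
    out = []
    for i in range(n):
        x = i - shift            # departure point, in grid units
        i0 = math.floor(x)
        w = x - i0               # linear interpolation weight
        out.append((1 - w) * f[i0 % n] + w * f[(i0 + 1) % n])
    return out
```

When the displacement is an integer number of cells the interpolation is exact and the step reduces to a cyclic shift, a handy sanity check for any semi-Lagrangian implementation.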
NASA Astrophysics Data System (ADS)
Voicu, Liviu; Rabadi, Wissam A.; Myler, Harley R.
1997-10-01
The problem of reconstructing the support of an imaged object from the support of its autocorrelation is addressed within the framework of genetic algorithms. First, we propose a method of coding binary sets into chromosomes that is both efficient and general, producing reasonably short chromosomes and being able to represent convex objects as well as some non-convex and even clustered ones. Furthermore, in order to compensate for the computational costs normally incurred when genetic algorithms are applied, a novel multiresolution version of the algorithm is introduced and tested. The multiresolution genetic algorithm consists of a superposition of multiple algorithms evolving sequentially at different resolutions. Upon occurrence of a convergence criterion at the current scale, the genetic population is mapped to the next finer scale by a coarse-to-fine mapping that preserves the progress registered previously. This mapping is implemented in the genetic algorithm framework by a new genetic operator called cloning. A number of object support reconstruction experiments were performed, and the best results from different genetic generations are depicted in chronological sequence. While both versions of the genetic algorithm achieved good results, the multiresolution approach was also able to substantially improve the convergence speed of the process. The effectiveness of the method can be extended even further if a parallel implementation of the genetic algorithm is employed. Finally, alternate coding methods could readily be used in both the standard and the multiresolution approaches, with no need for further adaptations of the basic structure of the genetic algorithm.
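For chromosomes that code binary support masks, a plausible sketch of the coarse-to-fine "cloning" operator maps each coarse cell onto a 2x2 block at the next scale, so the progress achieved at the coarse resolution is carried over intact; the paper's exact operator may differ:

```python
def clone(chromosome):
    # 'Cloning': map a coarse binary mask to the next finer scale by
    # replicating each cell into a 2x2 block, preserving the support
    # shape found so far as the starting point for finer evolution.
    fine = []
    for row in chromosome:
        frow = [bit for bit in row for _ in range(2)]
        fine.append(frow)
        fine.append(list(frow))  # independent copy of the duplicated row
    return fine
```

The finer-scale population then mutates these cloned individuals, refining the support boundary without rediscovering the coarse shape.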
Dowideit, Kerstin; Scholz-Muramatsu, Heidrun; Miethling-Graff, Rona; Vigelahn, Lothar; Freygang, Martina; Dohrmann, Anja B; Tebbe, Christoph C
2010-03-01
Microbiological analyses of sediment samples were conducted to explore potentials and limitations for bioremediation of field sites polluted with chlorinated ethenes. Intact sediment cores, collected by direct push probing from a 35-ha contaminated area, were analyzed in horizontal layers. Cultivation-independent PCR revealed Dehalococcoides to be the most abundant 16S rRNA gene phylotype with a suspected potential for reductive dechlorination of the major contaminant trichloroethene (TCE). In declining abundances, Desulfitobacterium, Desulfuromonas and Dehalobacter were also detected. In TCE-amended sediment slurry incubations, 66% of 121 sediment samples were dechlorinating, among them one-third completely and the rest incompletely (end product cis-1,2-dichloroethene; cDCE). Both PCR and slurry analyses revealed highly heterogeneous horizontal and vertical distributions of the dechlorination potentials in the sediments. Complete reductive TCE dechlorination correlated with the presence of Dehalococcoides, accompanied by Acetobacterium and a relative of Trichococcus pasteurii. Sediment incubations under close to in situ conditions showed that a low TCE dechlorination activity could be stimulated by 7 mg L(-1) dissolved carbon for cDCE formation and by an additional 36 mg carbon (lactate) L(-1) for further dechlorination. The study demonstrates that the highly heterogeneous distribution of TCE degraders and their specific requirements for carbon and electrons are key issues for TCE degradation in contaminated sites.
A multiresolution approach to automated classification of protein subcellular location images
Chebira, Amina; Barbotin, Yann; Jackson, Charles; Merryman, Thomas; Srinivasa, Gowri; Murphy, Robert F; Kovačević, Jelena
2007-01-01
Background Fluorescence microscopy is widely used to determine the subcellular location of proteins. Efforts to determine location on a proteome-wide basis create a need for automated methods to analyze the resulting images. Over the past ten years, the feasibility of using machine learning methods to recognize all major subcellular location patterns has been convincingly demonstrated, using diverse feature sets and classifiers. On a well-studied data set of 2D HeLa single-cell images, the best performance to date, 91.5%, was obtained by including a set of multiresolution features. This demonstrates the value of multiresolution approaches to this important problem. Results We report here a novel approach for the classification of subcellular location patterns by classifying in multiresolution subspaces. Our system is able to work with any feature set and any classifier. It consists of multiresolution (MR) decomposition, followed by feature computation and classification in each MR subspace, yielding local decisions that are then combined into a global decision. With 26 texture features alone and a neural network classifier, we obtained an increase in accuracy on the 2D HeLa data set to 95.3%. Conclusion We demonstrate that the space-frequency localized information in the multiresolution subspaces adds significantly to the discriminative power of the system. Moreover, we show that a vastly reduced set of features is sufficient, consisting of our novel modified Haralick texture features. Our proposed system is general, allowing for any combinations of sets of features and any combination of classifiers. PMID:17578580
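The pipeline of MR decomposition, per-subspace classification, and fusion of local decisions into a global one can be sketched with a one-level 2-D Haar split and a weighted vote. The Haar choice, the toy classifiers, and the weights are illustrative assumptions standing in for the paper's feature sets and trained classifiers:

```python
def haar2_subbands(img):
    # One-level 2-D Haar split into LL, LH, HL, HH subbands: these are the
    # multiresolution subspaces in which features are computed.
    h, w = len(img) // 2, len(img[0]) // 2
    ll = [[0.0] * w for _ in range(h)]; lh = [[0.0] * w for _ in range(h)]
    hl = [[0.0] * w for _ in range(h)]; hh = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2 * i][2 * j], img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            ll[i][j] = (a + b + c + d) / 4
            lh[i][j] = (a - b + c - d) / 4
            hl[i][j] = (a + b - c - d) / 4
            hh[i][j] = (a - b - c + d) / 4
    return {'LL': ll, 'LH': lh, 'HL': hl, 'HH': hh}

def classify(img, subspace_classifiers, weights):
    # Classify in each MR subspace, then fuse the local decisions into a
    # single global decision by weighted voting.
    votes = {}
    for name, band in haar2_subbands(img).items():
        label = subspace_classifiers[name](band)
        votes[label] = votes.get(label, 0.0) + weights[name]
    return max(votes, key=votes.get)
```

Any feature extractor and classifier can be plugged in per subspace, mirroring the paper's claim that the system works with any feature set and any classifier.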
The Global Multi-Resolution Topography (GMRT) Synthesis
NASA Astrophysics Data System (ADS)
Arko, R.; Ryan, W.; Carbotte, S.; Melkonian, A.; Coplan, J.; O'Hara, S.; Chayes, D.; Weissel, R.; Goodwillie, A.; Ferrini, V.; Stroker, K.; Virden, W.
2007-12-01
Topographic maps provide a backdrop for research in nearly every earth science discipline. There is particular demand for bathymetry data in the ocean basins, where existing coverage is sparse. Ships and submersibles worldwide are rapidly acquiring large volumes of new data with modern swath mapping systems. The science community is best served by a global topography compilation that is easily accessible, up-to-date, and delivers data in the highest possible (i.e. native) resolution. To meet this need, the NSF-supported Marine Geoscience Data System (MGDS; www.marine-geo.org) has partnered with the National Geophysical Data Center (NGDC; www.ngdc.noaa.gov) to produce the Global Multi-Resolution Topography (GMRT) synthesis - a continuously updated digital elevation model that is accessible through Open Geospatial Consortium (OGC; www.opengeospatial.org) Web services. GMRT had its genesis in 1992 with the NSF RIDGE Multibeam Synthesis (RMBS); later grew to include the Antarctic Multibeam Synthesis (AMBS); expanded again to include the NSF Ridge 2000 and MARGINS programs; and finally emerged as a global compilation in 2005 with the NSF Legacy of Ocean Exploration (LOE) project. The LOE project forged a permanent partnership between MGDS and NGDC, in which swath bathymetry data sets are routinely published and exchanged via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH; www.openarchives.org). GMRT includes both color-shaded relief images and underlying elevation values at ten different resolutions as high as 100m. New data are edited, gridded, and tiled using tools originally developed by William Haxby at Lamont-Doherty Earth Observatory. Global and regional data sources include the NASA Shuttle Radar Topography Mission (SRTM; http://www.jpl.nasa.gov/srtm/); Smith & Sandwell Satellite Predicted Bathymetry (http://topex.ucsd.edu/marine_topo/); SCAR Subglacial Topographic Model of the Antarctic (BEDMAP; http://www.antarctica.ac.uk/bedmap/); and
NASA Astrophysics Data System (ADS)
Garber, J. M.; Hacker, B. R.; Kylander-Clark, A. R.
2015-12-01
Coupled age and trace-element data from titanites in the Western Gneiss Region (WGR) of Norway suggest that continental crust underwent limited recrystallization and ductile flow through ~40 My of deep subduction and subsequent exhumation. Precambrian igneous titanites in granitic to tonalitic orthogneisses from the WGR were metastably preserved through Caledonian ultrahigh-pressure (UHP) metamorphism and variably recrystallized through subsequent amphibolite-facies metamorphism from ~420-385 Ma. The inherited Precambrian titanites are not present everywhere but cluster primarily in a cooler "southern domain" (peak T ~650 °C) and a hotter "northern domain" (peak T ~750-800 °C). Titanite data were collected using LASS (laser-ablation split-stream inductively coupled plasma mass spectrometry) at UCSB, and a principal component analysis (PCA) was used to define age and trace-element populations. These data indicate that inherited titanites are LREE-enriched, HFSE-enriched, and have higher Th/U, consistent with Precambrian neocrystallization from a granitic melt. In contrast, the recrystallized titanites have generally lower Th/U and flat, LREE-depleted, or hump-shaped trace-element patterns. These data suggest that (1) Caledonian titanite recrystallization occurred in the presence of LREE-depleted melts or fluids, or that (2) recrystallization was accompanied by a "typical" granitic melt but titanite/bulk-rock distribution coefficients differ between neocrystallization and recrystallization; ongoing whole-rock analyses will clarify these hypotheses. Critically, the geochemical signature of recrystallized titanite in felsic orthogneisses is comparable across the entire WGR, emphasizing that the petrologic process of titanite recrystallization was similar orogen-wide but was less extensive in the domains where inherited titanite was preserved. In this case, large volumes of crust outside of the "old domains" may also have retained metastable titanite during subduction
Liu, Hao; Liu, Haodong; Lapidus, Saul H.; Meng, Y. Shirley; Chupas, Peter J.; Chapman, Karena W.
2017-01-01
Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode's performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrate the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and propose an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. This refinement approach was implemented for electrochemically cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides best practices for performing structural refinement of lithium transition metal oxides.
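The sensitivity of refined parameters to the chosen model is easiest to see in a stripped-down setting. As a toy illustration of the least-squares principle behind Rietveld refinement (not the full profile fit of the paper; the Al-like lattice parameter and Cu K-alpha wavelength are assumptions of the example), a cubic lattice parameter can be refined from Bragg peak positions alone:

```python
import numpy as np

def refine_cubic_a(two_theta_deg, hkl, wavelength):
    """Least-squares estimate of a cubic lattice parameter from peak positions.

    Bragg's law gives sin(theta) = (wavelength / 2a) * sqrt(h^2 + k^2 + l^2),
    which is linear in 1/a, so the refinement reduces to a one-parameter fit.
    """
    s = np.sqrt((np.asarray(hkl, float) ** 2).sum(axis=1))
    sin_t = np.sin(np.radians(np.asarray(two_theta_deg) / 2.0))
    slope = (s * sin_t).sum() / (s * s).sum()   # = wavelength / (2a)
    return wavelength / (2.0 * slope)

# Synthetic round-trip check: generate peaks for a = 4.04 Angstrom, then refine.
a_true, lam = 4.04, 1.5406   # assumed Al-like cell, Cu K-alpha (Angstrom)
hkl = [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]
tt = [2 * np.degrees(np.arcsin(lam * np.sqrt(h*h + k*k + l*l) / (2 * a_true)))
      for h, k, l in hkl]
a_fit = refine_cubic_a(tt, hkl, lam)
```

A real refinement fits the whole diffraction profile, including occupancies and displacement parameters, which is where the model-dependence discussed above enters.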
Men, Yujie; Yu, Ke; Bælum, Jacob; ...
2017-02-10
The aim of this paper is to obtain a systems-level understanding of the interactions between Dehalococcoides and corrinoid-supplying microorganisms by analyzing community structures and functional compositions, activities, and dynamics in trichloroethene (TCE)-dechlorinating enrichments. Metagenomes and metatranscriptomes of the dechlorinating enrichments with and without exogenous cobalamin were compared. Seven putative draft genomes were binned from the metagenomes. At an early stage (2 days), more transcripts of genes in the Veillonellaceae bin-genome were detected in the metatranscriptome of the enrichment without exogenous cobalamin than in the one with the addition of cobalamin. Among these genes, sporulation-related genes exhibited the highest differential expression when cobalamin was not added, suggesting a possible release route of corrinoids from corrinoid producers. Other differentially expressed genes include those involved in energy conservation and nutrient transport (including cobalt transport). The most highly expressed corrinoid de novo biosynthesis pathway was also assigned to the Veillonellaceae bin-genome. Targeted quantitative PCR (qPCR) analyses confirmed higher transcript abundances of those corrinoid biosynthesis genes in the enrichment without exogenous cobalamin than in the enrichment with cobalamin. Furthermore, the corrinoid salvaging and modification pathway of Dehalococcoides was upregulated in response to the cobalamin stress. Finally, this study provides important insights into the microbial interactions and roles played by members of dechlorinating communities under cobalamin-limited conditions.
Ringler, Todd D; Gunzburger, Max; Ju, Lili
2008-01-01
During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multi-resolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical Centroidal Voronoi Tessellations (SCVTs) offer one potential path toward the development of robust, multi-resolution climate system component models. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean-ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow-water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multi-resolution method and the challenges ahead.
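A density-driven centroidal Voronoi tessellation can be sketched with a Monte-Carlo variant of Lloyd's algorithm on the unit square (the paper works on the sphere with spherical barycenters; the exponential density below is an arbitrary stand-in for a user-defined refinement region):

```python
import numpy as np

def cvt_lloyd(n_gen, density, n_samples=20000, iters=50, seed=0):
    """Approximate a centroidal Voronoi tessellation of the unit square.

    Monte-Carlo Lloyd iteration: sample points weighted by a user-defined
    density, assign each sample to its nearest generator, then move every
    generator to the density-weighted centroid of its Voronoi region.
    """
    rng = np.random.default_rng(seed)
    gen = rng.random((n_gen, 2))
    pts = rng.random((n_samples, 2))
    w = density(pts)
    for _ in range(iters):
        d2 = ((pts[:, None, :] - gen[None, :, :]) ** 2).sum(axis=2)
        near = d2.argmin(axis=1)
        for i in range(n_gen):
            m = near == i
            if m.any():
                gen[i] = np.average(pts[m], axis=0, weights=w[m])
    return gen

# Density concentrating resolution near x = 0, mimicking regional refinement.
dens = lambda p: np.exp(-4.0 * p[:, 0])
generators = cvt_lloyd(16, dens)
```

After convergence the generator spacing follows the density, which is exactly the mechanism that lets SCVTs grade smoothly from coarse to fine resolution.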
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
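The flavor of such data-driven model simplification can be sketched with plain orthogonal matching pursuit, the single-selection relative of StOMP (this is not the authors' modified algorithm; it omits their prior information and non-negativity constraint):

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse solution of y ~ A x.

    Each pass picks the column most correlated with the residual, then
    re-fits all selected coefficients by least squares (the 'orthogonal'
    step). StOMP differs mainly in selecting several columns per stage,
    thresholded against the residual noise level.
    """
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.abs(A.T @ residual).argmax()))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Recover a 3-sparse coefficient vector from 60 noiseless measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
A /= np.linalg.norm(A, axis=0)              # unit-norm columns
x_true = np.zeros(120)
x_true[[5, 40, 99]] = [2.0, -1.5, 1.0]
x_hat = omp(A, A @ x_true, 3)
```

In the inversion setting the columns of A would be wavelet basis functions mapped through the atmospheric transport operator, and the selected support is the data-driven model simplification.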
A multi-resolution approach for an automated fusion of different low-cost 3D sensors.
Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner
2014-04-24
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.
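The ICP fitting step can be sketched in two dimensions with a nearest-neighbour matching loop and the closed-form SVD (Kabsch) solution for the rigid transform per iteration; this is a minimal illustration, not the cited implementation:

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal 2D ICP: estimate the rigid transform aligning `src` to `dst`.

    Each iteration matches every source point to its nearest destination
    point, then solves for the best rotation + translation of those pairs
    in closed form via the SVD of the cross-covariance matrix.
    """
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        match = dst[d2.argmin(axis=1)]
        mc, md = cur.mean(axis=0), match.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - mc).T @ (match - md))
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:            # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = md - Ri @ mc
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti           # compose with previous steps
    return R, t

# Synthetic check: a small known rotation + translation is recovered.
rng = np.random.default_rng(0)
src = rng.random((80, 2))
ang = 0.03
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
t_true = np.array([0.01, -0.01])
dst = src @ R_true.T + t_true
R, t = icp_2d(src, dst)
```

ICP only converges from a reasonable initial alignment, which is why the pipeline above first establishes object correspondence via feature histograms and SVM classification.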
A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors
Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner
2014-01-01
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
NASA Astrophysics Data System (ADS)
Ressler, Gerhard; Eicker, Annette; Lieb, Verena; Schmidt, Michael; Seitz, Florian; Shang, Kun; Shum, Che-Kwan
2015-04-01
Regionally changing hydrological conditions and their link to the availability of water for human consumption and agriculture is a challenging topic in the context of global change that is receiving increasing attention. Gravity field changes related to signals of land hydrology have been observed by the Gravity Recovery And Climate Experiment (GRACE) satellite mission over a period of more than 12 years. These changes are being analysed in our studies with respect to changing hydrological conditions, especially as a consequence of extreme weather situations and/or a change of climatic conditions. Typically, variations of the Earth's gravity field are modeled as a series expansion in terms of global spherical harmonics with time dependent harmonic coefficients. In order to investigate specific structures in the signal we alternatively apply a wavelet-based multi-resolution technique for the determination of regional spatiotemporal variations of the Earth's gravitational potential in combination with principal component analysis (PCA) for detailed evaluation of these structures. The multi-resolution representation (MRR), i.e. the composition of a signal considering different resolution levels, is a suitable approach for spatial gravity modeling, especially because of the inhomogeneous distribution of observation data on the one hand and the inhomogeneous structure of the Earth's gravity field itself on the other hand. In the MRR the signal is split into detail signals by applying low- and band-pass filters realized e.g. by spherical scaling and wavelet functions. Each detail signal is related to a specific resolution level and covers a certain part of the signal spectrum. Principal component analysis (PCA) enables revealing specific signal patterns in the space as well as the time domain, such as trends and seasonal as well as semi-seasonal variations. We apply the above mentioned combined technique to GRACE L1C residual potential differences that have been
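The split of a signal into detail signals via successive low- and band-pass filters can be sketched in one dimension; here a moving average stands in for the spherical scaling functions, and each band-pass detail is the difference of successive smoothings, so the parts sum back to the original by construction:

```python
import numpy as np

def multiresolution(signal, n_levels=4):
    """Split a signal into band-pass detail signals plus a coarse residual.

    A crude stand-in for spherical scaling/wavelet filters: each level
    applies a wider moving-average low-pass, and the detail signal is the
    difference between successive smoothings. Because the sum telescopes,
    the detail signals plus the coarsest approximation reproduce the input.
    """
    def lowpass(x, width):
        kernel = np.ones(width) / width
        pad = np.pad(x, width, mode='edge')
        return np.convolve(pad, kernel, mode='same')[width:-width]

    parts, current = [], np.asarray(signal, float)
    for level in range(1, n_levels + 1):
        smooth = lowpass(current, 2 ** level + 1)
        parts.append(current - smooth)     # band-pass detail at this level
        current = smooth
    parts.append(current)                  # coarsest low-pass approximation
    return parts

# A two-frequency test signal: the slow and fast parts separate by level.
t = np.linspace(0, 1, 256)
sig = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
parts = multiresolution(sig)
```

Each element of `parts` covers a band of the spectrum, which is the property the MRR exploits before PCA is applied level by level.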
NASA Astrophysics Data System (ADS)
Scafetta, Nicola; West, Bruce J.
2004-04-01
The multiresolution diffusion entropy analysis is used to evaluate the stochastic information left in a time series after systematic removal of certain non-stationarities. This method allows us to establish whether the identified patterns are sufficient to capture all relevant information contained in a time series. If they do not, the method suggests the need for further interpretation to explain the residual memory in the signal. We apply the multiresolution diffusion entropy analysis to the daily count of births to teens in Texas from 1964 through 2000 because it is a typical example of a non-stationary time series, having an anomalous trend, an annual variation, as well as short time fluctuations. The analysis is repeated for the three main racial/ethnic groups in Texas (White, Hispanic and African American), as well as for married and unmarried teens during the years from 1994 to 2000, and we study the differences that emerge among the groups.
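A bare-bones version of diffusion entropy analysis can be sketched as follows: windowed sums of the series are treated as diffusion displacements, the Shannon entropy of their distribution is estimated for each window length t, and the scaling exponent delta is the slope of S(t) against ln t (0.5 for uncorrelated Gaussian noise). This is the core estimator only, without the paper's multiresolution removal of trends and seasonal patterns:

```python
import numpy as np

def diffusion_entropy(series, window_sizes, n_bins=40):
    """Diffusion entropy analysis: entropy S(t) of windowed sums of a series.

    The series is read as increments of a diffusion process; for each
    window length t, overlapping sums give displacement samples whose
    entropy grows as S(t) = A + delta * ln(t) for a scaling process.
    """
    x = np.asarray(series, float)
    entropies = []
    for t in window_sizes:
        sums = np.convolve(x, np.ones(t), mode='valid')   # overlapping windows
        hist, edges = np.histogram(sums, bins=n_bins, density=True)
        p = hist * np.diff(edges)                          # bin probabilities
        p = p[p > 0]
        # discretised estimate of the differential entropy of p(x, t)
        entropies.append(-(p * np.log(p / np.diff(edges)[0])).sum())
    return np.array(entropies)

# For uncorrelated Gaussian noise the scaling exponent should be near 0.5.
rng = np.random.default_rng(2)
noise = rng.standard_normal(200000)
ts = np.array([8, 16, 32, 64, 128])
S = diffusion_entropy(noise, ts)
delta = np.polyfit(np.log(ts), S, 1)[0]   # fitted scaling exponent
```

Residual memory in a detrended series shows up as delta deviating from 0.5, which is exactly the diagnostic the abstract describes.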
Efficient Human Action and Gait Analysis Using Multiresolution Motion Energy Histogram
NASA Astrophysics Data System (ADS)
Yu, Chih-Chang; Cheng, Hsu-Yung; Cheng, Chien-Hung; Fan, Kuo-Chin
2010-12-01
The Average Motion Energy (AME) image is a good way to describe human motions. However, its computational cost grows with the number of database templates. In this paper, we propose a histogram-based approach to improve computation efficiency. We convert the human action/gait recognition problem to a histogram matching problem. In order to speed up the recognition process, we adopt a multiresolution structure on the Motion Energy Histogram (MEH). To utilize the multiresolution structure more efficiently, we propose an automated uneven partitioning method based on the quadtree decomposition of the MEH. As a result, the computation time depends only on the number of partitioned histogram bins and is much lower than that of the AME method. Two applications, action recognition and gait classification, are conducted in the experiments to demonstrate the feasibility and validity of the proposed approach.
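The quadtree decomposition that drives the uneven partitioning can be sketched on a toy image: blocks whose intensity range exceeds a tolerance are split recursively, so detail-rich regions end up with many small bins and flat regions with few large ones (a generic quadtree split criterion, assumed for illustration):

```python
import numpy as np

def quadtree(img, tol, x=0, y=0, size=None):
    """Quadtree decomposition of a square, power-of-two-sided image.

    Recursively split any block whose intensity range exceeds `tol`;
    return the leaf blocks as (x, y, size) triples. Uneven histogram
    partitions can then allocate one bin per leaf.
    """
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= tol:
        return [(x, y, size)]
    h = size // 2
    leaves = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        leaves += quadtree(img, tol, x + dx, y + dy, h)
    return leaves

# Uniform background with one structured quadrant: only that quadrant splits.
img = np.zeros((8, 8))
img[0:4, 0:4] = np.arange(16).reshape(4, 4)
leaves = quadtree(img, tol=2.0)
```

Here the three flat quadrants stay as single 4x4 leaves while the structured quadrant splits down to 1x1 cells, giving 19 leaves that still tile the full 8x8 image.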
Eberdt, Michael; Brown, Patrick K; Lazzi, Gianluca
2003-07-01
A multiresolution impedance method for the solution of low-frequency electromagnetic interaction problems typically encountered in bioelectromagnetics is presented. While the impedance method in its original form is based on the discretization of the scattering objects into equal-sized cells, our formulation decreases the number of unknowns by using an automatic mesh generation method that does not yield equal-sized cells in the modeling space. Results indicate that our multiresolution mesh generation scheme can provide a 50%-80% reduction in cell count, providing new opportunities for the solution of low-frequency bioelectromagnetic problems that require a high level of detail only in specific regions of the modeling space. Furthermore, linking the mesh generator to a circuit simulator such as SPICE permits the addition of arbitrarily complex passive and active circuit elements to the generated impedance network, opening the door to significant advances in the modeling of bioelectromagnetic phenomena.
A multiresolution framework for ultrasound image segmentation by combinative active contours.
Wang, Weiming; Qin, Jing; Chui, Yim-Pan; Heng, Pheng-Ann
2013-01-01
We propose a novel multiresolution framework for ultrasound image segmentation in this paper. The framework exploits both local intensity and local phase information to tackle the degradations of ultrasound images. First, multiresolution scheme is adopted to build a Gaussian pyramid for each speckled image. Speckle noise is gradually smoothed out at higher levels of the pyramid. Then local intensity-driven active contours are employed to locate the coarse contour of the target from the coarsest image, followed by local phase-based geodesic active contours to further refine the contour in finer images. Compared with traditional gradient-based methods, phase-based methods are more suitable for ultrasound images because they are invariant to variations in image contrast. Experimental results on left ventricle segmentation from echocardiographic images demonstrate the advantages of the proposed model.
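Building the Gaussian pyramid of a speckled image can be sketched with a separable binomial kernel and dyadic downsampling (a generic pyramid; the 5-tap kernel is an assumption of the example, not necessarily the filter used by the authors):

```python
import numpy as np

def gaussian_pyramid(img, n_levels=3):
    """Build a Gaussian pyramid: smooth with a separable binomial
    (Gaussian-like) kernel, then downsample by two at every level.
    Speckle noise is progressively averaged out at coarser levels."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

    def smooth(a):
        p = np.pad(a, ((0, 0), (2, 2)), mode='edge')
        a = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='valid'), 1, p)
        p = np.pad(a, ((2, 2), (0, 0)), mode='edge')
        return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='valid'), 0, p)

    levels = [np.asarray(img, float)]
    for _ in range(n_levels):
        levels.append(smooth(levels[-1])[::2, ::2])
    return levels

# Noise variance shrinks level by level, mimicking speckle suppression.
rng = np.random.default_rng(5)
img = rng.standard_normal((64, 64))
levels = gaussian_pyramid(img)
```

The segmentation above runs the intensity-driven contour on the coarsest level and propagates it down the pyramid, refining with phase-based contours at finer levels.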
Multi-resolution imaging with an optimized number and distribution of sampling points.
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo
2014-05-05
We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis.
Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.
Cowlagi, Raghvendra V; Tsiotras, Panagiotis
2012-10-01
We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.
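One level of the 2D Haar transform, the simplest wavelet usable for such cell decompositions, can be sketched directly: uniform regions of an obstacle map produce zero detail coefficients and can therefore be represented by large cells, while nonzero details mark where finer cells are needed:

```python
import numpy as np

def haar2d_level(a):
    """One level of the 2D Haar wavelet transform.

    Returns the coarse approximation (block averages) and three detail
    bands. Blocks whose detail coefficients vanish are uniform, so a
    multiresolution cell decomposition can keep them as single cells.
    """
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2.0   # 1D Haar on rows
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2.0
    ll = (lo_r[0::2] + lo_r[1::2]) / 2.0     # then on columns
    lh = (lo_r[0::2] - lo_r[1::2]) / 2.0
    hl = (hi_r[0::2] + hi_r[1::2]) / 2.0
    hh = (hi_r[0::2] - hi_r[1::2]) / 2.0
    return ll, lh, hl, hh

# Toy occupancy grid: free space (0) with a block-aligned obstacle (1).
grid = np.zeros((8, 8))
grid[2:4, 4:8] = 1.0
ll, lh, hl, hh = haar2d_level(grid)
```

Because the obstacle is aligned with 2x2 blocks, all detail bands are exactly zero and the coarse map `ll` already encodes the environment; a planner would keep high resolution only near cells with nonzero details (i.e., near the vehicle).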
Bayesian multiresolution method for local X-ray tomography in dental radiology
NASA Astrophysics Data System (ADS)
Niinimäki, Kati; Siltanen, Samuli; Kolehmainen, Ville
2009-02-01
Dental tomographic cone-beam X-ray imaging devices record truncated projections and reconstruct a region of interest (ROI) inside the head. Image reconstruction from the resulting local tomography data is an ill-posed inverse problem. A Bayesian multiresolution method is proposed for the local tomography reconstruction. The inverse problem is formulated in a well-posed statistical form where a prior model of the tissues compensates for the incomplete projection data. Tissues are represented in a reduced wavelet basis, and prior information is modeled in terms of a Besov norm penalty. The number of unknowns in the inverse problem is reduced by abandoning fine-scale wavelets outside the ROI. Compared to traditional voxel-based reconstruction methods, this multiresolution approach allows a significant reduction in the number of unknown parameters without loss of reconstruction accuracy inside the ROI, as shown by two-dimensional examples using simulated local tomography data.
An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets
NASA Astrophysics Data System (ADS)
Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.
2016-06-01
Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.
Adaptive multiresolution WENO schemes for multi-species kinematic flow models
Buerger, Raimund . E-mail: rburger@ing-mat.udec.cl; Kozakevicius, Alice . E-mail: alicek@smail.ufsm.br
2007-06-10
Multi-species kinematic flow models lead to strongly coupled, nonlinear systems of first-order, spatially one-dimensional conservation laws. The number of unknowns (the concentrations of the species) may be arbitrarily high. Models of this class include a multi-species generalization of the Lighthill-Whitham-Richards traffic model and a model for the sedimentation of polydisperse suspensions. Their solutions typically involve kinematic shocks separating areas of constancy, and should be approximated by high resolution schemes. A fifth-order weighted essentially non-oscillatory (WENO) scheme is combined with a multiresolution technique that adaptively generates a sparse point representation (SPR) of the evolving numerical solution. Thus, computational effort is concentrated on zones of strong variation near shocks. Numerical examples from the traffic and sedimentation models demonstrate the effectiveness of the resulting WENO multiresolution (WENO-MRS) scheme.
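The idea of a sparse point representation can be sketched with a standard interpolation-error criterion: a grid point is kept only where the solution deviates from the linear interpolation of its neighbours, which concentrates points at shocks. This is a one-level illustration; the cited technique thresholds wavelet-style detail coefficients over multiple levels:

```python
import numpy as np

def sparse_points(u, tol):
    """Sparse point representation of a 1D grid function.

    Keep an interior grid point only where `u` deviates from the linear
    interpolation of its two neighbours by more than `tol`; endpoints are
    always kept. Near kinematic shocks the deviation is large, so points
    cluster exactly where resolution is needed.
    """
    keep = np.ones(len(u), bool)
    interp = 0.5 * (u[:-2] + u[2:])            # prediction from neighbours
    keep[1:-1] = np.abs(u[1:-1] - interp) > tol
    return np.flatnonzero(keep)

# A kinematic-shock-like profile: two constant states separated by a jump.
x = np.linspace(0, 1, 201)
u = np.where(x < 0.5, 1.0, 0.2)
idx = sparse_points(u, tol=1e-6)
```

For this piecewise-constant profile only the two endpoints and the two points flanking the jump survive, so the WENO stencil work is concentrated at the shock.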
Aircraft target identification based on 2D ISAR images using multiresolution analysis wavelet
NASA Astrophysics Data System (ADS)
Fu, Qiang; Xiao, Huaitie; Hu, Xiangjiang
2001-09-01
The formation of 2D ISAR images for radar target identification holds much promise for additional distinguishability between targets. Since an image contains important information in a wide range of scales, and this information is often independent from one scale to another, wavelet analysis provides a method of identifying the spatial frequency content of an image and the local regions within the image where those spatial frequencies exist. In this paper, a multiresolution analysis wavelet method based on 2D ISAR images is proposed for use in aircraft radar target identification against a wide-band high range resolution radar background. The proposed method is performed in three steps: first, radar backscatter signals are processed into 2D ISAR images; then, Mallat's wavelet algorithm is used to decompose the images; finally, a three-layer perceptron neural net is used as the classifier. The experimental results demonstrate the feasibility of using multiresolution analysis wavelets for target identification.
Coupled-Cluster in Real Space II: CC2 Excited States using Multi-Resolution Analysis.
Kottmann, Jakob Siegfried; Bischoff, Florian Andreas
2017-09-13
We report a first quantized approach to calculate approximate coupled-cluster singles and doubles CC2 excitation energies in real space. The cluster functions are directly represented on an adaptive grid using multiresolution analysis. Virtual orbitals are neither calculated nor needed which leads to an improved formal scaling. The nuclear and electronic cusps are taken into account explicitly where our ansatz regularizes the corresponding equations exactly. First calculations on small molecules are in excellent agreement with the best available LCAO approaches.
NASA Astrophysics Data System (ADS)
Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Barreira Luz, R. J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D'Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Di Giulio, C.; Di Matteo, A.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D'Olivo, J. C.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. 
A.; Harton, J. L.; Hasankiadeh, Q.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; LaHurd, D.; Lauscher, M.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; López Casado, A.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Messina, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. 
J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Torralba Elipe, G.; Torri, M.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Vergara Quispe, I. D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F.
2017-06-01
We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to 80° and energies in excess of 4 EeV (4 × 10^18 eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary: the angular power spectrum achieves a better performance in identifying large-scale patterns, while the needlet wavelet analysis, with the parameters used in this work, is more efficient at detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication of a dipole moment is captured, while no other deviation from isotropy is observed for moments beyond the dipole. The corresponding p-values, obtained after accounting for searches blindly performed at several angular scales, are 1.3 × 10^-5 in the case of the angular power spectrum, and 2.5 × 10^-3 in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they extend the previous works through thorough scans of the angular scales.
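The lowest moment searched for, the dipole, can be illustrated with the textbook first-moment estimator for a full-sky set of arrival directions (an idealisation assuming uniform exposure; the actual analysis must correct for the Observatory's partial, non-uniform exposure):

```python
import numpy as np

def dipole_estimate(theta, phi):
    """First-order (dipole) anisotropy estimate from arrival directions.

    For unit vectors n_i drawn from a flux proportional to 1 + d*cos(psi)
    about the dipole axis, the mean vector satisfies |<n>| = d/3 under
    full-sky uniform exposure, so 3|<n>| estimates the dipole amplitude.
    """
    n = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)], axis=1)
    mean = n.mean(axis=0)
    return 3.0 * np.linalg.norm(mean), mean / np.linalg.norm(mean)

# An isotropic synthetic sky: the estimated amplitude is consistent with 0.
rng = np.random.default_rng(4)
N = 100000
phi = rng.uniform(0, 2 * np.pi, N)
theta = np.arccos(rng.uniform(-1, 1, N))    # uniform on the sphere
d_iso, axis = dipole_estimate(theta, phi)
```

For N events the isotropic expectation of the estimated amplitude scales as roughly 3/sqrt(N), which is why significance must be assessed against isotropic simulations, as in the penalized p-values quoted above.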
Deconstructing a polygenetic landscape using LiDAR and multi-resolution analysis
NASA Astrophysics Data System (ADS)
Barrineau, Patrick; Dobreva, Iliyana; Bishop, Michael P.; Houser, Chris
2016-04-01
It is difficult to deconstruct a complex polygenetic landscape into distinct process-form regimes using digital elevation models (DEMs) and fundamental land-surface parameters. This study describes a multi-resolution analysis approach for extracting geomorphological information from a LiDAR-derived DEM over a stabilized aeolian landscape in south Texas that exhibits distinct process-form regimes associated with different stages in landscape evolution. Multi-resolution analysis was used to generate average altitudes using a Gaussian filter with a maximum radius of 1 km, sampled at 20 m intervals, resulting in 50 DEMs. This multi-resolution dataset was analyzed using Principal Components Analysis (PCA) to identify the dominant variance structure in the dataset. The first 4 principal components (PCs) account for 99.9% of the variation, and classification of the variance structure reveals distinct multi-scale topographic variation associated with different process-form regimes and evolutionary stages. Our results suggest that this approach can be used to generate quantitatively rigorous morphometric maps to guide field-based sedimentological and geophysical investigations, which tend to use purposive sampling techniques that can result in bias and error.
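The scale-space-plus-PCA workflow described above can be sketched in a few lines; this is a minimal illustration in Python with numpy, run on a 1-D elevation profile rather than a full 2-D DEM, and not the authors' implementation. The function names and the choice of scales are hypothetical.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # Normalized 1-D Gaussian smoothing kernel
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def multiscale_stack(profile, sigmas):
    # Smooth the profile at several scales (wrap-around padding for simplicity)
    layers = []
    for s in sigmas:
        k = gaussian_kernel(s, radius=int(4 * s))
        n = len(k) // 2
        padded = np.concatenate([profile[-n:], profile, profile[:n]])
        layers.append(np.convolve(padded, k, mode="valid"))
    return np.stack(layers, axis=1)  # shape: (n_points, n_scales)

def pca(X):
    # Eigendecomposition of the covariance, components sorted by variance
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]
```

Classifying each point by its loadings on the leading components is then what separates the multi-scale topographic signatures.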
A high order multi-resolution solver for the Poisson equation with application to vortex methods
NASA Astrophysics Data System (ADS)
Hejlesen, Mads Mølholm; Spietz, Henrik Juul; Walther, Jens Honore
2015-11-01
A high order method is presented for solving the Poisson equation subject to mixed free-space and periodic boundary conditions by using fast Fourier transforms (FFT). The high order convergence is achieved by deriving mollified Green's functions from a high order regularization function which provides a correspondingly smooth solution to the Poisson equation. The high order regularization function may be obtained analogously to the approximate deconvolution method used in turbulence models and is strongly related to deblurring algorithms used in image processing. First, we show that the regularized solver can be combined with a short-range particle-particle correction for evaluating discrete particle interactions in the context of a particle-particle particle-mesh (P3M) method. By a similar approach we extend the regularized solver to handle multi-resolution patches in continuum field simulations by superposing an inter-mesh correction. For sufficiently smooth vector fields this multi-resolution correction can be achieved without loss of convergence rate. An implementation of the multi-resolution solver in a two-dimensional re-meshed particle-mesh based vortex method is presented and validated.
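As background, the core of an FFT-based Poisson solver on a periodic domain can be sketched in a few lines; the paper's mollified Green's functions and free-space boundary handling are not reproduced here. A minimal 1-D Python/numpy illustration with assumed names:

```python
import numpy as np

def poisson_periodic_1d(f, L):
    """Solve u'' = f with periodic BCs via FFT (zero-mean solution)."""
    n = len(f)
    # Angular wavenumbers: fftfreq gives cycles per unit length
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    fh = np.fft.fft(f)
    uh = np.zeros_like(fh)
    nz = k != 0
    uh[nz] = -fh[nz] / k[nz] ** 2   # division by -k^2 in Fourier space
    return np.fft.ifft(uh).real
```

For smooth periodic data the spectral solve is exact to machine precision; the high order regularization of the paper addresses the singular Green's function that arises in the free-space and particle settings.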
Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery
NASA Astrophysics Data System (ADS)
Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.
2016-06-01
The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In this context, the object-based image analysis (OBIA) approach has proven to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage of OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by the eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality based on the discrepancy between reference polygons and corresponding image segments was carried out to identify the optimal setting of multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
Combined adjustment of multi-resolution satellite imagery for improved geo-positioning accuracy
NASA Astrophysics Data System (ADS)
Tang, Shengjun; Wu, Bo; Zhu, Qing
2016-04-01
Due to the widespread availability of satellite imagery nowadays, it is common for regions to be covered by satellite imagery from multiple sources with multiple resolutions. This paper presents a combined adjustment approach to integrate multi-source multi-resolution satellite imagery for improved geo-positioning accuracy without the use of ground control points (GCPs). Instead of using all the rational polynomial coefficients (RPCs) of images for processing, only those dominating the geo-positioning accuracy are used in the combined adjustment. They, together with tie points identified in the images, are used as observations in the adjustment model. Proper weights are determined for each observation, and ridge parameters are determined for better convergence of the adjustment solution. The outputs from the combined adjustment are the improved dominating RPCs of images, from which improved geo-positioning accuracy can be obtained. Experiments using ZY-3, SPOT-7 and Pleiades-1 imagery in Hong Kong, and Cartosat-1 and Worldview-1 imagery in Catalonia, Spain demonstrate that the proposed method is able to effectively improve the geo-positioning accuracy of satellite images. The combined adjustment approach offers an alternative method to improve geo-positioning accuracy of satellite images. The approach enables the integration of multi-source and multi-resolution satellite imagery for generating more precise and consistent 3D spatial information, which permits the comparative and synergistic use of multi-resolution satellite images from multiple sources.
Combining nonlinear multiresolution system and vector quantization for still image compression
NASA Astrophysics Data System (ADS)
Wong, Yiu-fai
1994-05-01
It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur the edges. Thus, the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used in generating a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized in the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
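The idea of swapping the linear smoother of a Laplacian pyramid for a median filter can be illustrated with a minimal 1-D sketch (pure Python; helper names are hypothetical, and unlike the paper's decimated error-feedback encoding, details are stored at full length here so reconstruction is exact):

```python
def median3(xs):
    # Edge-preserving smoother: 3-point running median (endpoints copied)
    out = list(xs)
    for i in range(1, len(xs) - 1):
        out[i] = sorted(xs[i - 1:i + 2])[1]
    return out

def build_pyramid(signal, levels):
    # Laplacian-style pyramid with a median filter instead of a linear one
    details, coarse = [], list(signal)
    for _ in range(levels):
        smooth = median3(coarse)
        down = smooth[::2]                                # decimate
        up = [down[i // 2] for i in range(len(smooth))]   # nearest-neighbour expand
        details.append([c - u for c, u in zip(coarse, up)])
        coarse = down
    return details, coarse

def reconstruct(details, coarse):
    # Invert the pyramid: expand and add back each detail layer
    for d in reversed(details):
        up = [coarse[i // 2] for i in range(len(d))]
        coarse = [u + di for u, di in zip(up, d)]
    return coarse
```

Because the median preserves step edges, the detail signals stay small and concentrated near edges, which is what makes them cheap to vector-quantize.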
NASA Astrophysics Data System (ADS)
de Almeida, Maria d. G.
2004-08-01
The multirate processing of two-dimensional (2D) signals involves various types of sampling and sampling matrices, due to the different grid geometries. A more consistent theory is therefore needed in order to obtain better techniques and useful results in many areas, such as image and signal processing, biomedicine, telecommunications, multimedia, remote sensing, and optics. In this work, a 2-channel complementary filter bank theory, designed on the basis of 2D multirate processing and the properties of complementary filters, is presented, with foundations for modeling multiresolution-level methods for the processing of two-dimensional signals in a nonseparable way. Signal analysis and synthesis using 2-channel complementary filter (CF) banks, the conditions under which the reconstruction of the 2D input signal is perfect, and the frequency division in the analysis part are developed. Since the multiresolution decomposition of signals, wavelet representations and filter banks are strongly linked, their relation to complementary filter banks is established. Other multiresolution-level methods can be derived from this theory, and applications have been found in compression, edge detection, 2D scaling and wavelet functions, and digital TV systems.
Pandey, Abhishek; Yoruk, Umit; Keerthivasan, Mahesh; Galons, Jean-Philippe; Sharma, Puneet; Johnson, Kevin; Martin, Diego R; Altbach, Maria I; Bilgin, Ali; Saranathan, Manojkumar
2017-07-01
To develop a novel multiresolution MRI methodology for accurate estimation of glomerular filtration rate (GFR) in vivo. A three-dimensional golden-angle radial stack-of-stars (SoS) trajectory was used for data acquisition on a 3 Tesla MRI scanner. Multiresolution reconstruction and analysis was performed using an arterial input function (AIF) reconstructed at 1-s temporal resolution and renal dynamic data reconstructed using compressed sensing (CS) with 4-s temporal resolution. The method was first validated using simulations, and the clinical utility of the technique was evaluated by comparing the GFR estimates from the proposed method to the estimated GFR (eGFR) obtained from serum creatinine for 10 subjects. The 4-s temporal resolution CS images minimized streaking artifacts and noise, while the 1-s temporal resolution AIF minimized errors in GFR estimates. A paired t-test showed that there was no statistically significant difference between MRI-based total GFR values and serum-creatinine-based eGFR estimates (P = 0.92). We have demonstrated the feasibility of multiresolution MRI using a golden-angle radial stack-of-stars scheme to accurately estimate GFR as well as produce diagnostic-quality dynamic images in vivo. Technical Efficacy: Stage 3. J. MAGN. RESON. IMAGING 2017;46:303-311. © 2017 International Society for Magnetic Resonance in Medicine.
Maes, F; Vandermeulen, D; Suetens, P
1999-12-01
Maximization of mutual information of voxel intensities has been demonstrated to be a very powerful criterion for three-dimensional medical image registration, allowing robust and accurate fully automated affine registration of multimodal images in a variety of applications, without the need for segmentation or other preprocessing of the images. In this paper, we investigate the performance of various optimization methods and multiresolution strategies for maximization of mutual information, aiming at increasing registration speed when matching large high-resolution images. We show that mutual information is a continuous function of the affine registration parameters when appropriate interpolation is used, and we derive analytic expressions of its derivatives that allow numerically exact evaluation of its gradient. Various multiresolution gradient- and non-gradient-based optimization strategies, such as the Powell, simplex, steepest-descent, conjugate-gradient, quasi-Newton and Levenberg-Marquardt methods, are evaluated for registration of computed tomography (CT) and magnetic resonance images of the brain. Speed-ups of a factor of 3 on average compared to Powell's method at full resolution are achieved with similar precision and without a loss of robustness by the simplex, conjugate-gradient and Levenberg-Marquardt methods using a two-level multiresolution scheme. Large data sets such as 256² × 128 MR and 512² × 48 CT images can be registered with subvoxel precision in <5 min CPU time on current workstations.
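A minimal histogram-based estimate of mutual information, the registration criterion discussed above, can be sketched as follows (pure Python, illustrative only; a real registration pipeline would interpolate the moving image, bin intensities, and optimize over the affine parameters):

```python
from collections import Counter
from math import log2

def mutual_information(a, b):
    # I(A;B) = H(A) + H(B) - H(A,B), from marginal and joint intensity histograms
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))

    def entropy(counts):
        return -sum(c / n * log2(c / n) for c in counts.values())

    return entropy(pa) + entropy(pb) - entropy(pab)
```

Mutual information peaks when the two images are aligned, because a correct spatial match makes the joint intensity histogram maximally "peaked" even across modalities.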
Three-dimensional wavelet transform and multiresolution surface reconstruction from volume data
NASA Astrophysics Data System (ADS)
Wang, Yun; Sloan, Kenneth R., Jr.
1995-04-01
Multiresolution surface reconstruction from volume data is very useful in medical imaging, data compression and multiresolution modeling. This paper presents a hierarchical structure for extracting multiresolution surfaces from volume data by using a 3-D wavelet transform. The hierarchical scheme is used to visualize different levels of detail of the surface and allows a user to explore different features of the surface at different scales. We use 3-D surface curvature as a smoothness condition to control the hierarchical level and the distance error between the reconstructed surface and the original data as the stopping criterion. A 3-D wavelet transform provides an appropriate hierarchical structure to build the volume pyramid. It can be constructed by the tensor products of 1-D wavelet transforms in three subspaces. We choose symmetric and smoothing filters such as the Haar, linear, pseudo-Coiflet and cubic B-spline filters and their corresponding orthogonal wavelets to build the volume pyramid. The surface is reconstructed at each level of volume data by using the cell interpolation method. Some experimental results are shown through the comparison of the different filters based on the distance errors of the surfaces.
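The tensor-product construction of a one-level 3-D wavelet transform can be illustrated with the Haar filter pair, the simplest of the filters listed above (Python/numpy sketch with hypothetical names; one analysis step per axis yields eight subbands, and the orthonormal transform inverts exactly):

```python
import numpy as np

def haar_step(x, axis):
    # One orthonormal Haar analysis step along the given axis
    ev = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    od = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return (ev + od) / np.sqrt(2), (ev - od) / np.sqrt(2)

def haar_3d(vol):
    # Tensor-product 3-D Haar: one step along each axis -> 8 subbands
    bands = [vol]
    for axis in range(3):
        bands = [b for pair in (haar_step(v, axis) for v in bands) for b in pair]
    return bands  # order: LLL, LLH, LHL, LHH, HLL, HLH, HHL, HHH

def haar_merge(lo, hi, axis):
    # Invert one Haar step: interleave the recovered even/odd samples
    ev, od = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    shape = list(lo.shape)
    shape[axis] *= 2
    out = np.empty(shape)
    sl_e = [slice(None)] * 3; sl_e[axis] = slice(0, None, 2)
    sl_o = [slice(None)] * 3; sl_o[axis] = slice(1, None, 2)
    out[tuple(sl_e)], out[tuple(sl_o)] = ev, od
    return out

def inverse_haar_3d(bands):
    for axis in (2, 1, 0):
        bands = [haar_merge(bands[i], bands[i + 1], axis)
                 for i in range(0, len(bands), 2)]
    return bands[0]
```

Recursing on the LLL band builds the volume pyramid; the longer symmetric filters of the paper follow the same tensor-product pattern with wider convolutions.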
Combining nonlinear multiresolution system and vector quantization for still image compression
Wong, Y.
1993-12-17
It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur the edges. Thus, the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used in generating a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
Qin, Yi; Hua, Hong; Nguyen, Mike
2014-01-01
The state-of-the-art laparoscope lacks the ability to capture high-magnification and wide-angle images simultaneously, which introduces challenges when both close-up views for details and wide-angle overviews for orientation are required in clinical practice. A multi-resolution foveated laparoscope (MRFL), which can provide the surgeon with both high-magnification close-up and wide-angle images, was proposed to address the limitations of state-of-the-art surgical laparoscopes. In this paper, we present the overall system design from both clinical and optical system perspectives, along with a set of experiments to characterize the optical performance of our prototype system, and describe our preliminary in-vivo evaluation of the prototype with a pig model. The experimental results demonstrate that at the optimum working distance of 120 mm, the high-magnification probe has a resolution of 6.35 lp/mm and images a surgical area of 53 × 40 mm²; the wide-angle probe provides a surgical area coverage of 160 × 120 mm² with a resolution of 2.83 lp/mm. The in-vivo evaluation demonstrates that the MRFL has great potential in clinical applications for improving the safety and efficiency of laparoscopic surgery. PMID:25136485
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.
a Web-Based Interactive Tool for Multi-Resolution 3d Models of a Maya Archaeological Site
NASA Astrophysics Data System (ADS)
Agugiaro, G.; Remondino, F.; Girardi, G.; von Schwerin, J.; Richards-Rissetto, H.; De Amicis, R.
2011-09-01
Continuous technological advances in surveying, computing and digital-content delivery are strongly contributing to a change in the way Cultural Heritage is "perceived": new tools and methodologies for documentation, reconstruction and research are being created to assist not only scholars, but also to reach more potential users (e.g. students and tourists) willing to access more detailed information about art history and archaeology. 3D computer-simulated models, sometimes set in virtual landscapes, offer for example the chance to explore possible hypothetical reconstructions, while on-line GIS resources can help interactive analyses of relationships and change over space and time. While for some research purposes a traditional 2D approach may suffice, this is not the case for more complex analyses concerning spatial and temporal features of architecture, such as the relationship of architecture and landscape, visibility studies, etc. The project therefore aims at creating a tool, called "QueryArch3D", which enables the web-based visualisation and querying of an interactive, multi-resolution 3D model in the framework of Cultural Heritage. More specifically, a complete Maya archaeological site, located in Copan (Honduras), has been chosen as a case study to test and demonstrate the platform's capabilities. Much of the site has been surveyed and modelled at different levels of detail (LoD) and the geometric model has been semantically segmented and integrated with attribute data gathered from several external data sources. The paper describes the characteristics of the research work, along with its implementation issues and the initial results of the developed prototype.
Mubayi, V.
1995-05-01
The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the "long-term interdiction limit," is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related, and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit: the health effect costs increase as the limit is relaxed while the protective action costs decrease. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants that were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
Multi-Resolution Playback of Network Trace Files
2015-06-01
goals, but by utilizing out-of-band control channels, TCPopera becomes limited by the complexity of control channel connections as the scale... find the dislocated transport layer and the required data. 7.1.5 Database Efficiency Enhancements: The database population procedure inserts a single
Novel Laplacian scheme and multiresolution modal curvatures for structural damage identification
NASA Astrophysics Data System (ADS)
Cao, Maosen; Qiao, Pizhong
2009-05-01
Modal curvature is more sensitive to structural damage than the directly measured mode shape, and the standard Laplace operator is commonly used to acquire the modal curvatures from the mode shapes. However, the standard Laplace operator is very prone to noise, which often leads to degraded modal curvatures incapable of identifying damage. To overcome this problem, a novel Laplacian scheme is proposed, from which an improved damage identification algorithm is developed. The step-by-step procedures in the algorithm are: (1) by progressively upsampling the standard Laplace operator, a new Laplace operator is constructed, from which a Laplace operator array is formed; (2) by applying the Laplace operator array to the retrieved mode shape of a damaged structure, the multiresolution curvature mode shapes are produced, on which the damage trait, previously shadowed under the standard Laplace operator, can be revealed by a ridge of multiresolution modal curvatures; (3) a Gaussian filter is then incorporated into the new Laplace operator to produce a more versatile Laplace operator with both smoothing and differential capabilities, in which the damage feature is effectively strengthened; and (4) a smoothed nonlinear energy operator is introduced to further enhance the damage feature by eliminating the trend interference of the multiresolution modal curvatures, resulting in a significantly improved damage trait. The proposed algorithm is tested using data generated by an analytical cracked-beam model, and its applicability is validated with an experimental program on a delaminated composite beam using a scanning laser vibrometer (SLV) to acquire mode shapes. The results are compared at each step, showing an increasing degree of improvement of the damage effect. Numerical and experimental results demonstrate that the proposed novel Laplacian scheme provides a promising damage identification algorithm with apparent advantages.
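Step (1), the progressively upsampled Laplace operator, amounts to taking the second difference at increasing sample spacings. A minimal Python/numpy sketch with hypothetical names (the Gaussian smoothing and nonlinear energy operator of steps (3)-(4) are omitted):

```python
import numpy as np

def modal_curvature(mode_shape, scale=1):
    # Upsampled discrete Laplace operator:
    #   curv[i] = (v[i-s] - 2 v[i] + v[i+s]) / s^2, s = scale
    # Larger s averages over a wider support, suppressing pointwise noise.
    v = np.asarray(mode_shape, dtype=float)
    s = scale
    curv = np.zeros_like(v)
    curv[s:-s] = (v[:-2 * s] - 2 * v[s:-s] + v[2 * s:]) / s ** 2
    return curv
```

Stacking `modal_curvature` over several scales gives the multiresolution curvature array in which a local stiffness loss shows up as a ridge across scales, while the noise variance falls off rapidly with the spacing.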
A multi-resolution HEALPix data structure for spherically mapped point data.
Youngren, Robert W; Petty, Mikel D
2017-06-01
Data describing entities with locations that are points on a sphere are described as spherically mapped. Several data structures designed for spherically mapped data have been developed. One of them, known as Hierarchical Equal Area iso-Latitude Pixelization (HEALPix), partitions the sphere into twelve diamond-shaped equal-area base cells and then recursively subdivides each cell into four diamond-shaped subcells, continuing to the desired level of resolution. Twelve quadtrees, one associated with each base cell, store the data records associated with that cell and its subcells. HEALPix has been used successfully for numerous applications, notably including cosmic microwave background data analysis. However, for applications involving sparse point data, HEALPix has possible drawbacks, including inefficient memory utilization, overwriting of proximate points, and the return of spurious points for certain queries. A multi-resolution variant of HEALPix specifically optimized for sparse point data was developed. The new data structure allows different areas of the sphere to be subdivided at different levels of resolution. It combines HEALPix's positive features with the advantages of multi-resolution, including reduced memory requirements and improved query performance. An implementation of the new Multi-Resolution HEALPix (MRH) data structure was tested using spherically mapped data from four different scientific applications (warhead fragmentation trajectories, weather station locations, galaxy locations, and synthetic locations). Four types of range queries were applied to each data structure for each dataset. Compared to HEALPix, MRH used two to four orders of magnitude less memory for the same data, and on average its queries executed 72% faster.
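The adaptive subdivision underlying MRH can be illustrated with a planar point quadtree (pure Python, hypothetical names; real HEALPix/MRH works on twelve spherical base cells rather than a square): cells split only where data are dense, which is the source of the memory savings for sparse data.

```python
class QuadTree:
    """Adaptive quadtree: a cell splits only when it holds > capacity points."""

    def __init__(self, x0, y0, size, capacity=4):
        self.x0, self.y0, self.size, self.capacity = x0, y0, size, capacity
        self.points, self.children = [], None

    def insert(self, x, y):
        if self.children is not None:
            self._child(x, y).insert(x, y)
        else:
            self.points.append((x, y))
            if len(self.points) > self.capacity:
                self._split()

    def _child(self, x, y):
        h = self.size / 2
        i = (x >= self.x0 + h) + 2 * (y >= self.y0 + h)
        return self.children[i]

    def _split(self):
        h = self.size / 2
        self.children = [QuadTree(self.x0, self.y0, h, self.capacity),
                         QuadTree(self.x0 + h, self.y0, h, self.capacity),
                         QuadTree(self.x0, self.y0 + h, h, self.capacity),
                         QuadTree(self.x0 + h, self.y0 + h, h, self.capacity)]
        pts, self.points = self.points, []
        for px, py in pts:
            self._child(px, py).insert(px, py)

    def query(self, xmin, ymin, xmax, ymax):
        # Prune subtrees whose cell cannot intersect the query rectangle
        if xmax < self.x0 or ymax < self.y0 or \
           xmin >= self.x0 + self.size or ymin >= self.y0 + self.size:
            return []
        if self.children is None:
            return [(x, y) for x, y in self.points
                    if xmin <= x <= xmax and ymin <= y <= ymax]
        return [p for c in self.children for p in c.query(xmin, ymin, xmax, ymax)]
```

Range queries descend only into occupied, intersecting cells, which is the behaviour that produced the reported query speed-ups.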
Proof-of-concept demonstration of a miniaturized three-channel multiresolution imaging system
NASA Astrophysics Data System (ADS)
Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Vervaeke, Michael; Van Erps, Jürgen; Thienpont, Hugo
2014-05-01
Multichannel imaging systems have several potential applications such as multimedia, surveillance, medical imaging and machine vision, and have therefore been a hot research topic in recent years. Such imaging systems, inspired by natural compound eyes, have many channels, each covering only a portion of the total field-of-view of the system. As a result, these systems provide a wide field-of-view (FOV) while having a small volume and a low weight. Different approaches have been employed to realize a multichannel imaging system. We demonstrated that the different channels of the imaging system can be designed in such a way that each channel has different imaging properties (angular resolution, FOV, focal length). Using optical ray-tracing software (CODE V), we have designed a miniaturized multiresolution imaging system that contains three channels, each consisting of four aspherical lens surfaces fabricated from PMMA material through ultra-precision diamond tooling. The first channel possesses the finest angular resolution (0.0096°) and the narrowest FOV (7°), whereas the third channel has the widest FOV (80°) and the coarsest angular resolution (0.078°). The second channel has intermediate properties. Such a multiresolution capability allows different image processing algorithms to be implemented on the different segments of an image sensor. This paper presents the experimental proof-of-concept demonstration of the imaging system using a commercial CMOS sensor and gives an in-depth analysis of the obtained results. Experimental images captured with the three channels are compared with the corresponding simulated images. The experimental MTFs of the channels have also been calculated from the captured images of a slanted-edge target test. This multichannel multiresolution approach opens the opportunity for low-cost compact imaging systems that can be equipped with smart imaging capabilities.
W-transform method for feature-oriented multiresolution image retrieval
Kwong, M.K.; Lin, B.
1995-07-01
Image database management is important in the development of multimedia technology, since an enormous number of digital images is likely to be generated within the next few decades as computers, television, VCRs, cable, telephone and various imaging devices become integrated. Effective image indexing and retrieval systems are urgently needed so that images can be easily organized, searched, transmitted, and presented. Here, the authors present a local-feature-oriented image indexing and retrieval method based on Kwong and Tang's W-transform. Multiresolution histogram comparison is an effective method for content-based image indexing and retrieval. However, most recent approaches perform multiresolution analysis for whole images but do not exploit the local features present in the images. Since the W-transform is featured by its ability to handle images of arbitrary size, with no periodicity assumptions, it provides a natural tool for analyzing local image features and building indexing systems based on such features. In this approach, the histograms of the local features of images are used in the indexing system. The system not only can retrieve images that are similar or identical to the query images but also can retrieve images that contain features specified in the query images, even if the retrieved images as a whole might be very different from the query images. The local-feature-oriented method also provides a speed advantage over the global multiresolution histogram comparison method. The feature-oriented approach is expected to be applicable in managing large-scale image systems such as video databases and medical image databases.
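The local-feature indexing idea, per-block histograms compared by intersection, can be sketched as follows (pure Python, hypothetical names; the W-transform subbands themselves are not computed here, and a plain intensity histogram stands in for the subband features):

```python
def block_histograms(img, block=4, bins=8, levels=256):
    # Per-block intensity histograms: the "local features" of the index
    h, w = len(img), len(img[0])
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            hist = [0] * bins
            for i in range(r, r + block):
                for j in range(c, c + block):
                    hist[img[i][j] * bins // levels] += 1
            n = block * block
            feats.append([v / n for v in hist])
    return feats

def intersection(h1, h2):
    # Histogram intersection similarity in [0, 1] for normalized histograms
    return sum(min(a, b) for a, b in zip(h1, h2))

def best_local_match(query_feat, image_feats):
    # Retrieval score: best histogram intersection over all local blocks
    return max(intersection(query_feat, f) for f in image_feats)
```

Scoring by the best-matching block is what lets the system retrieve images that merely contain a queried feature, even when the images as a whole differ from the query.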
NASA Astrophysics Data System (ADS)
Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei
2017-07-01
Virtual globes play an important role in representing three-dimensional models of the Earth. To extend the functioning of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data in the processing and representation should be of special concern for the scientific analysis and evaluation. In this study, we propose a method for the processing of large-scale terrain data for virtual globe visualization and analysis. The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formulize its level of detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in the real-time out-of-core spherical terrain rendering. The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation.
SU-E-J-88: Deformable Registration Using Multi-Resolution Demons Algorithm for 4DCT.
Li, Dengwang; Yin, Yong
2012-06-01
In order to register 4DCT efficiently, we propose an improved deformable registration algorithm based on a multi-resolution demons strategy to improve the efficiency of the algorithm. 4DCT images of lung cancer patients were collected from a General Electric Discovery ST CT scanner at our cancer hospital. All of the images were sorted into groups and reconstructed according to their phases, and each respiratory cycle was divided into 10 phases with a time interval of 10%. Firstly, in our improved demons algorithm we use the gradients of both the reference and floating images as deformation forces and redistribute the forces according to the proportion of the two forces. Furthermore, we introduce an intermediate variable into the cost function to decrease the noise in the registration process. At the same time, a Gaussian multi-resolution strategy and the BFGS method for optimization are used to improve the speed and accuracy of the registration. To validate the performance of the algorithm, we registered the 10 phase-images. We compared the difference between the floating and reference images before and after registration, with two landmarks chosen by an experienced clinician. We registered the 10 phase-images of 4D-CT data from a lung cancer patient, chose the images in exhalation as the reference images, and registered all other images to the reference images. This method has good accuracy, demonstrated by a higher similarity measure for registration of 4D-CT, and it can register a large deformation precisely. Finally, we obtain the tumor target from the deformation fields using the proposed method, which is more accurate than the internal margin (IM) expanded from the Gross Tumor Volume (GTV). Furthermore, we achieve tumor and normal tissue tracking and dose accumulation using 4DCT data. An efficient deformable registration algorithm was proposed using a multi-resolution demons algorithm for 4DCT. © 2012 American Association of Physicists in Medicine.
A multiresolution prostate representation for automatic segmentation in magnetic resonance images.
Alvarez, Charlens; Martínez, Fabio; Romero, Eduardo
2017-04-01
Accurate prostate delineation is necessary in radiotherapy processes for concentrating the dose onto the prostate and reducing side effects in neighboring organs. Currently, manual delineation is performed over magnetic resonance imaging (MRI), taking advantage of its high soft-tissue contrast. Nevertheless, as human intervention is a time-consuming task with high intra- and interobserver variability rates, (semi-)automatic organ delineation tools have emerged to cope with these challenges, reducing the time spent on these tasks. This work presents a multiresolution representation that defines a novel metric and allows a new prostate to be segmented by combining a set of the most similar prostates in a dataset. The proposed method starts by selecting the set of prostates most similar to a new one using the proposed multiresolution representation. This representation characterizes the prostate through a set of salient points, extracted from a region of interest (ROI) that encloses the organ and refined using structural information, capturing the main relevant features of the organ boundary. Afterward, the new prostate is automatically segmented by combining the nonrigidly registered expert delineations associated with the previously selected similar prostates using a weighted patch-based strategy. Finally, the prostate contour is smoothed using morphological operations. The proposed approach was evaluated against expert manual segmentation under a leave-one-out scheme on two public datasets, obtaining averaged Dice coefficients of 82% ± 0.07 and 83% ± 0.06 and demonstrating competitive performance with respect to atlas-based state-of-the-art methods. The proposed multiresolution representation provides a feature space that follows a local salient-point criterion and a global rule on the spatial configuration among these points to find the most similar prostates. This strategy suggests an easy adaptation in the clinical
Ringler, Todd; Ju, Lili; Gunzburger, Max
2008-11-14
During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multiresolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical centroidal Voronoi tessellations (SCVTs) offer one potential path toward the development of robust, multiresolution climate system model components. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes whose quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean–ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing, and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multiresolution method and the challenges ahead.
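The core idea of a centroidal Voronoi tessellation driven by a user-defined density function can be sketched in one dimension with plain Lloyd iteration. This is a minimal illustration, not the authors' spherical implementation; the density function and starting generators are hypothetical:

```python
def lloyd_1d(gens, density, samples=2000, iters=50):
    """Lloyd iteration toward a centroidal Voronoi tessellation of [0, 1].

    Each step assigns sample points to their nearest generator (the Voronoi
    region) and moves each generator to the density-weighted centroid of its
    region, so regions shrink where the density is high."""
    pts = [i / samples for i in range(samples + 1)]
    for _ in range(iters):
        sums = [0.0] * len(gens)
        wts = [0.0] * len(gens)
        for x in pts:
            j = min(range(len(gens)), key=lambda k: abs(x - gens[k]))
            w = density(x)
            sums[j] += w * x
            wts[j] += w
        gens = [s / w if w > 0 else g for s, w, g in zip(sums, wts, gens)]
    return sorted(gens)

# hypothetical density that concentrates nodes near x = 1,
# mimicking regional refinement in a multiresolution mesh
nodes = lloyd_1d([0.1, 0.3, 0.5, 0.7, 0.9], lambda x: 1.0 + 9.0 * x)
```

With an increasing density, the converged generators are spaced more tightly toward x = 1, which is the one-dimensional analogue of regionally refined SCVT meshes.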
Spatiotemporal multi-resolution approximation of the Amari type neural field model.
Aram, P; Freestone, D R; Dewar, M; Scerri, K; Jirsa, V; Grayden, D B; Kadirkamanathan, V
2013-02-01
Neural fields are spatially continuous state variables described by integro-differential equations, which are well suited to describe the spatiotemporal evolution of cortical activations on multiple scales. Here we develop a multi-resolution approximation (MRA) framework for the integro-difference equation (IDE) neural field model based on semi-orthogonal cardinal B-spline wavelets. In this way, a flexible framework is created, whereby both macroscopic and microscopic behavior of the system can be represented simultaneously. State and parameter estimation is performed using the expectation maximization (EM) algorithm. A synthetic example is provided to demonstrate the framework.
Fauqueux, Sandrine; Caillault, Karine; Simoneau, Pierre; Labarre, Luc
2009-10-01
The validation of the multiresolution model of sea surface infrared optical properties developed at ONERA is investigated in the one-dimensional case by comparison with a reference model, using a submillimeter discretization of the surface. Having expressed the optical properties, we detail the characteristics of each model. A set of numerical tests is made for various wind speeds, resolutions, and realizations of the sea surface. The tests show a good agreement between the results except for grazing angles, where the impact of multiple reflections and the effects of adjacent rough surfaces on shadow have to be investigated.
2011-01-01
Background Although high-throughput microarray-based molecular diagnostic technologies show great promise in cancer diagnosis, they are still far from clinical application because of their low and unstable sensitivities and specificities in cancer molecular pattern recognition. In fact, high-dimensional, heterogeneous tumor profiles challenge current machine learning methodologies because of their small numbers of samples and large, or even huge, numbers of variables (genes). This naturally calls for effective feature selection in microarray data classification. Methods We propose a novel feature selection method, multi-resolution independent component analysis (MICA), for large-scale gene expression data. This method overcomes the weak points of widely used transform-based feature selection methods such as principal component analysis (PCA), independent component analysis (ICA), and nonnegative matrix factorization (NMF) by avoiding their global feature-selection mechanism. In addition to demonstrating the effectiveness of multi-resolution independent component analysis in meaningful biomarker discovery, we present multi-resolution independent component analysis based support vector machines (MICA-SVM) and linear discriminant analysis (MICA-LDA) to attain high-performance classification in low-dimensional spaces. Results We have demonstrated the superiority and stability of our algorithms through comprehensive experimental comparisons with nine state-of-the-art algorithms on six high-dimensional heterogeneous profiles under cross-validation. Our classification algorithms, especially MICA-SVM, not only accomplish clinical or near-clinical level sensitivities and specificities, but also show strong performance stability over their peers in classification. Software that implements the major algorithm and the data sets on which this paper focuses are freely available at https://sites.google.com/site/heyaumapbc2011/. Conclusions This work suggests a new
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.
1998-02-01
We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and found a scale window, between 16 and 32 heartbeat intervals, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as belonging either to the heart-failure or to the normal group with 100% accuracy, thereby providing a clinically significant measure of the presence of heart failure from the R-R intervals alone. Comparison is made with previous approaches, which have provided only statistically significant measures.
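The scale-dependent quantity at the heart of this abstract, the "width" (spread) of the wavelet coefficients at each scale, can be sketched with a plain Haar decomposition on synthetic data. This is an illustrative stand-in; the paper's specific wavelet, data set, and classification threshold are not reproduced here:

```python
import math

def haar_details(signal):
    """One-level-at-a-time Haar decomposition; returns the detail
    coefficients at each scale (length must be a power of two)."""
    details, approx = [], list(signal)
    while len(approx) >= 2:
        pairs = range(0, len(approx) - 1, 2)
        details.append([(approx[i] - approx[i + 1]) / math.sqrt(2) for i in pairs])
        approx = [(approx[i] + approx[i + 1]) / math.sqrt(2) for i in pairs]
    return details

def coeff_width(coeffs):
    """Standard deviation ('width') of the coefficients at one scale."""
    m = sum(coeffs) / len(coeffs)
    return math.sqrt(sum((c - m) ** 2 for c in coeffs) / len(coeffs))

# synthetic R-R-like series; scale index m corresponds to 2**m beat intervals,
# so the paper's 16-32-interval window corresponds to scales 4 and 5
rr = [0.8 + 0.05 * math.sin(2 * math.pi * i / 20) for i in range(256)]
widths = [coeff_width(d) for d in haar_details(rr)]
```

A classifier in the spirit of the paper would threshold `widths` at the scales corresponding to 16-32 intervals; the threshold itself would have to come from the clinical data.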
Lee, C.G.; Chen, C.H.
1996-12-31
In this paper a novel multiresolution wavelet analysis (MWA) and non-stationary Gaussian Markov random field (GMRF) technique is introduced for the identification of microcalcifications with high accuracy. The hierarchical multiresolution wavelet information, in conjunction with the contextual information of the images extracted from the GMRF, provides a highly efficient technique for microcalcification detection. A Bayesian learning paradigm, realized via the expectation maximization (EM) algorithm, was also introduced for edge detection or segmentation of larger lesions recorded on the mammograms. The effectiveness of the approach has been extensively tested with a number of mammographic images provided by a local hospital.
NASA Astrophysics Data System (ADS)
Smeesters, L.; Belay, Gebirie Y.; Ottevaere, H.; Meuret, Youri; Thienpont, H.
2014-05-01
Inspired by nature, many application domains might gain from combining the multi-channel design of the compound eyes of insects and the refocusing capability of the human eye in one compact configuration. Multi-channel refocusing imaging systems are nowadays only commercially available in bulky and expensive designs, since classical refocusing mechanisms cannot be integrated in a miniaturized configuration. We designed a wafer-level multi-resolution two-channel imaging system with refocusing capabilities using a voltage-tunable liquid lens. One channel is able to capture a wide field-of-view image (2x40°) of a surrounding with a low angular resolution (0.078°), whereas a detailed image of a small region of interest (2x7.57°) can be obtained with the high angular resolution channel (0.0098°). The latter channel contains the tunable lens and therefore also the refocusing capability. In this paper, we first discuss the working principle, tunability, and optical quality of a voltage-tunable liquid lens. Based on optical characterization measurements with a Mach-Zehnder interferometer, we designed a tunable lens model. The designed tunable lens model and its validation in an imaging setup show a diffraction-limited image quality. Next, we discuss the performance of the designed two-channel imaging system. Both the wide field-of-view and high angular resolution optical channels show diffraction-limited performance, ensuring good image quality. Moreover, we obtained an improved depth-of-field, from 0.254 m to infinity, in comparison with the current state-of-the-art published wafer-level multi-channel imaging systems, which show a depth-of-field from 9 m to infinity.
A Conceptual Framework for SAHRA Integrated Multi-resolution Modeling in the Rio Grande Basin
NASA Astrophysics Data System (ADS)
Liu, Y.; Gupta, H.; Springer, E.; Wagener, T.; Brookshire, D.; Duffy, C.
2004-12-01
The sustainable management of water resources in a river basin requires an integrated analysis of the social, economic, environmental and institutional dimensions of the problem. Numerical models are commonly used for integration of these dimensions and for communication of the analysis results to stakeholders and policy makers. The National Science Foundation Science and Technology Center for Sustainability of semi-Arid Hydrology and Riparian Areas (SAHRA) has been developing integrated multi-resolution models to assess impacts of climate variability and land use change on water resources in the Rio Grande Basin. These models not only couple natural systems such as surface and ground waters, but will also include engineering, economic and social components that may be involved in water resources decision-making processes. This presentation will describe the conceptual framework being developed by SAHRA to guide and focus the multiple modeling efforts and to assist the modeling team in planning, data collection and interpretation, communication, evaluation, etc. One of the major components of this conceptual framework is a Conceptual Site Model (CSM), which describes the basin and its environment based on existing knowledge and identifies what additional information must be collected to develop technically sound models at various resolutions. The initial CSM is based on analyses of basin profile information that has been collected, including a physical profile (e.g., topographic and vegetative features), a man-made facility profile (e.g., dams, diversions, and pumping stations), and a land use and ecological profile (e.g., demographics, natural habitats, and endangered species). Based on the initial CSM, a Conceptual Physical Model (CPM) is developed to guide and evaluate the selection of a model code (or numerical model) for each resolution to conduct simulations and predictions. A CPM identifies, conceptually, all the physical processes and engineering and socio
Multi-Resolution Modeling of Large Scale Scientific Simulation Data
Baldwin, C; Abdulla, G; Critchlow, T
2003-01-31
This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately, before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analysis of tera-scale data sets. We discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We also discuss some of the shortcomings of our implementation and how to address them.
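The compress-then-query idea described above can be sketched with a Haar wavelet transform plus coefficient thresholding. This is a minimal, generic illustration, not the AQSIM implementation; the data, tolerance, and normalization (averaging rather than orthonormal scaling) are arbitrary choices:

```python
def haar_fwd(x):
    """Full Haar transform (length must be a power of two).
    Returns [overall average, coarsest detail, ..., finest details]."""
    x, out = list(x), []
    while len(x) > 1:
        a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
        d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
        out = d + out          # finer details accumulate at the end
        x = a
    return x + out

def haar_inv(c):
    """Invert haar_fwd: reconstruct the signal from its coefficients."""
    x, k = [c[0]], 1
    while k < len(c):
        d = c[k:2 * k]
        x = [v for a, dd in zip(x, d) for v in (a + dd, a - dd)]
        k *= 2
    return x

def compress(c, tol):
    """Zero out small detail coefficients: lossy compression for fast,
    approximate query answers."""
    return [v if abs(v) >= tol else 0.0 for v in c]

data = [float(i % 8) for i in range(16)]
coeffs = haar_fwd(data)
approx = haar_inv(compress(coeffs, 0.5))   # approximate reconstruction
```

Queries such as range averages can then be answered directly from the surviving coefficients, trading accuracy for response time exactly as the abstract describes.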
Smith, Daniel P.; Thrash, J. Cameron; Nicora, Carrie D.; Lipton, Mary S.; Burnum-Johnson, Kristin E.; Carini, Paul; Smith, Richard D.; Giovannoni, Stephen J.
2013-01-01
ABSTRACT Nitrogen is one of the major nutrients limiting microbial productivity in the ocean, and as a result, most marine microorganisms have evolved systems for responding to nitrogen stress. The highly abundant alphaproteobacterium “Candidatus Pelagibacter ubique,” a cultured member of the order Pelagibacterales (SAR11), lacks the canonical GlnB, GlnD, GlnK, and NtrB/NtrC genes for regulating nitrogen assimilation, raising questions about how these organisms respond to nitrogen limitation. A survey of 266 Alphaproteobacteria genomes found these five regulatory genes nearly universally conserved, absent only in intracellular parasites and members of the order Pelagibacterales, including “Ca. Pelagibacter ubique.” Global differences in mRNA and protein expression between nitrogen-limited and nitrogen-replete cultures were measured to identify nitrogen stress responses in “Ca. Pelagibacter ubique” strain HTCC1062. Transporters for ammonium (AmtB), taurine (TauA), amino acids (YhdW), and opines (OccT) were all elevated in nitrogen-limited cells, indicating that they devote increased resources to the assimilation of nitrogenous organic compounds. Enzymes for assimilating amine into glutamine (GlnA), glutamate (GltBD), and glycine (AspC) were similarly upregulated. Differential regulation of the transcriptional regulator NtrX in the two-component signaling system NtrY/NtrX was also observed, implicating it in control of the nitrogen starvation response. Comparisons of the transcriptome and proteome supported previous observations of uncoupling between transcription and translation in nutrient-deprived “Ca. Pelagibacter ubique” cells. Overall, these data reveal a streamlined, PII-independent response to nitrogen stress in “Ca. Pelagibacter ubique,” and likely other Pelagibacterales, and show that they respond to nitrogen stress by allocating more resources to the assimilation of nitrogen-rich organic compounds. PMID:24281717
A Virtual Globe-Based Multi-Resolution TIN Surface Modeling and Visualization Method
NASA Astrophysics Data System (ADS)
Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei
2016-06-01
The integration and visualization of geospatial data on a virtual globe play a significant role in understanding and analyzing Earth surface processes. However, current virtual globes usually sacrifice accuracy to ensure efficiency in global data processing and visualization, which devalues their functionality for scientific applications. In this article, we propose a high-accuracy multi-resolution TIN pyramid construction and visualization method for virtual globes. First, we introduce cartographic principles to formalize the level-of-detail (LOD) generation so that the TIN model in each layer is controlled by a data quality standard. A maximum z-tolerance algorithm is then used to iteratively construct the multi-resolution TIN pyramid. Moreover, the extracted landscape features are incorporated into the TIN at each layer, thus preserving the topological structure of the terrain surface at different levels. In the proposed framework, a virtual node (VN)-based approach is developed to seamlessly partition and discretize each triangulation layer into tiles, which can be organized and stored with a global quad-tree index. Finally, real-time out-of-core spherical terrain rendering is realized on the virtual globe system VirtualWorld1.0. The experimental results showed that the proposed method achieves a high-fidelity terrain representation while producing high-quality underlying data that satisfy the demands of scientific analysis.
Ramchandran, K; Ortega, A; Vetterli, M
1994-01-01
We address the problem of efficient bit allocation in a dependent coding environment. While optimal bit allocation for independently coded signal blocks has been studied in the literature, we extend these techniques to the more general temporally and spatially dependent coding scenarios. Of particular interest are the topical MPEG video coder and multiresolution coders. Our approach uses an operational rate-distortion (R-D) framework for arbitrary quantizer sets. We show how a certain monotonicity property of the dependent R-D curves can be exploited in formulating fast ways to obtain optimal and near-optimal solutions. We illustrate the application of this property in specifying intelligent pruning conditions to eliminate suboptimal operating points for the MPEG allocation problem, for which we also point out fast nearly-optimal heuristics. Additionally, we formulate an efficient allocation strategy for multiresolution coders, using the spatial pyramid coder as an example. We then extend this analysis to a spatio-temporal 3-D pyramidal coding scheme. We tackle the compatibility problem of optimizing full-resolution quality while simultaneously catering to subresolution bit rate or quality constraints. We show how to obtain fast solutions that provide nearly optimal (typically within 0.3 dB) full resolution quality while providing much better performance for the subresolution layer (typically 2-3 dB better than the full-resolution optimal solution).
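The baseline that this paper generalizes, Lagrangian rate-distortion bit allocation over independently coded blocks, can be sketched as follows. The operating points and the lambda sweep are hypothetical; the paper's contribution (exploiting monotonicity of dependent R-D curves) is not reproduced here:

```python
def allocate(blocks, lam):
    """For each block, pick the (rate, distortion) operating point
    minimizing the Lagrangian cost D + lam * R (independent blocks)."""
    return [min(points, key=lambda rd: rd[1] + lam * rd[0])
            for points in blocks]

def allocate_for_budget(blocks, budget, lams):
    """Sweep lambda and return the feasible allocation (total rate within
    budget) with the smallest total distortion."""
    feasible = []
    for lam in lams:
        choice = allocate(blocks, lam)
        rate = sum(r for r, _ in choice)
        if rate <= budget:
            feasible.append((sum(d for _, d in choice), choice))
    return min(feasible)[1] if feasible else None

# hypothetical per-block operating points as (rate in bits, distortion)
blocks = [
    [(1, 10.0), (2, 4.0), (4, 1.0)],
    [(1, 20.0), (3, 6.0), (5, 2.0)],
]
```

In a dependent coder (e.g. MPEG's predictive frames), each block's R-D points shift with the choices made for its reference, which is what makes the pruning conditions in the paper necessary.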
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-10-01
We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
NASA Astrophysics Data System (ADS)
Stephanakis, Ioannis M.; Anastassopoulos, George C.
2009-03-01
A novel algorithm for 3-D tomographic reconstruction is proposed. The proposed algorithm is based on multiresolution techniques for local inversion of the 3-D Radon transform in confined subvolumes within the entire object space. Directional wavelet functions of the form ψ_{m,n}^j(x) = 2^{j/2} ψ(2^j w_{m,n} x) are employed in a sequence of double filtering and 2-D backprojection operations performed on vertical and horizontal reconstruction planes, using the method suggested by Marr and others. The densities of the 3-D object are found initially as backprojections of coarse wavelet functions of this form at directions on vertical and horizontal planes that intersect the object. As the algorithm evolves, finer planar wavelets intersecting a subvolume of medical interest within the original object may be used to reconstruct its details by double backprojection steps on vertical and horizontal planes in a similar fashion. Reduction in the complexity of the reconstruction algorithm is achieved thanks to the good localization properties of planar wavelets, which render the details of the projections with small errors. Experimental results that illustrate multiresolution reconstruction at four successive levels of resolution are given for wavelets belonging to the Daubechies family.
An efficient multi-resolution GA approach to dental image alignment
NASA Astrophysics Data System (ADS)
Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany
2006-02-01
Automating the process of postmortem identification of individuals using dental records is receiving an increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth, we efficiently search the 6D space of affine parameters using GA progressively across multi-resolution image versions, and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
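The fitness computation at the core of the MR-GA search, scoring a candidate affine transform by the Hausdorff distance between transformed query points and reference points, can be sketched as follows. The point sets and parameters are hypothetical; only the distance measure and the 6-parameter affine model follow the abstract:

```python
import math

def affine(points, a, b, tx, c, d, ty):
    """Apply a 2-D affine map; (a, b, tx, c, d, ty) are the six
    parameters the GA would search over."""
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets:
    the alignment fitness (lower is better)."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

# toy edge-point sets: the query is the reference translated by (2, 3),
# so the inverse translation aligns them exactly
reference = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0)]
query = [(2.0, 3.0), (3.0, 3.0), (3.0, 5.0)]
fitness = hausdorff(affine(query, 1, 0, -2, 0, 1, -3), reference)
```

The genetic algorithm in the paper would evaluate this fitness for a population of candidate parameter vectors, progressively at coarser and finer image resolutions.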
Reliability of multiresolution deconvolution for improving depth resolution in SIMS analysis
NASA Astrophysics Data System (ADS)
Boulakroune, M.'Hamed
2016-11-01
This paper addresses the effectiveness and reliability of a multiresolution deconvolution algorithm for recovering secondary ion mass spectrometry (SIMS) profiles altered by the measurement. The new algorithm is characterized as a regularized wavelet transform. It combines ideas from Tikhonov-Miller regularization, wavelet analysis, and deconvolution algorithms in order to benefit from the advantages of each. The SIMS profiles were obtained by analysis of two structures of boron in a silicon matrix using a Cameca IMS 6f instrument at oblique incidence. The first structure is large, consisting of two distant wide boxes; the second is a thin structure containing ten delta-layers, to which deconvolution by zone was applied. It is shown that this new multiresolution algorithm gives the best results. In particular, local application of the regularization parameter to the blurred and estimated solutions at each resolution level yielded smoothed signals without creating artifacts related to the noise content of the profile. This led to a significant improvement in depth resolution and peak maxima.
Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Abdessetar, M.; Zhong, Y.
2017-09-01
Building change detection makes it possible to quantify temporal effects on urban areas for urban evolution studies or damage assessment in disaster cases. In this context, change analysis might involve the utilization of the available satellite images with different resolutions for quick responses. In this paper, to avoid the resampling outcomes and salt-and-pepper effects of traditional methods, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, it is practical to detect building changes in multi-scale imagery using shape analysis. Therefore, the proposed methodology can deal with different pixel sizes to identify new and demolished buildings in urban areas using geometric properties of the objects of interest. After rectifying the desired multi-date and multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, centroid-coincident matching is conducted on the extracted building shapes, based on the Euclidean distance between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. Then, new and demolished buildings are identified from the obtained distances that are greater than the RMS value (no match at the same location).
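The centroid-coincident matching step can be sketched as follows. The polygons, the plain vertex-average centroid, and the threshold are illustrative stand-ins for the paper's extracted building shapes and RMS-derived threshold:

```python
import math

def centroid(poly):
    """Vertex-average centroid of a polygon given as (x, y) tuples."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def match_buildings(shapes_t0, shapes_t1, threshold):
    """Flag buildings with no counterpart centroid within `threshold`
    in the other epoch: demolished (only in T0) or new (only in T1)."""
    c0 = [centroid(s) for s in shapes_t0]
    c1 = [centroid(s) for s in shapes_t1]
    demolished = [i for i, c in enumerate(c0)
                  if all(math.dist(c, d) > threshold for d in c1)]
    new = [j for j, d in enumerate(c1)
           if all(math.dist(d, c) > threshold for c in c0)]
    return new, demolished

# toy epochs: one unchanged building plus one building appearing at T1
t0 = [[(0, 0), (1, 0), (1, 1), (0, 1)]]
t1 = [[(0, 0), (1, 0), (1, 1), (0, 1)],
      [(10, 10), (11, 10), (11, 11), (10, 11)]]
new, demolished = match_buildings(t0, t1, 1.0)
```

Because centroids, not pixel grids, are compared, the same logic applies when T0 and T1 come from images of different resolution, which is the point of the method.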
A multi-resolution multi-size-windows disparity estimation approach
NASA Astrophysics Data System (ADS)
Martinez Bauza, Judit; Shiralkar, Manish
2011-03-01
This paper describes an algorithm for estimating the disparity between 2 images of a stereo pair. The disparity is related to the depth of the objects in the scene. Being able to obtain the depth of the objects in the scene is useful in many applications such as virtual reality, 3D user interfaces, background-foreground segmentation, or depth-image-based synthesis. This last application has motivated the proposed algorithm as part of a system that estimates disparities from a stereo pair and synthesizes new views. Synthesizing virtual views enables the post-processing of 3D content to adapt to user preferences or viewing conditions, as well as enabling the interface with multi-view auto-stereoscopic displays. The proposed algorithm has been designed to fulfill the following constraints: (a) low memory requirements, (b) local and parallelizable processing, and (c) adaptability to a sudden reduction in processing resources. Our solution uses a multi-resolution multi-size-windows approach, implemented as a line-independent process, well-suited for GPU implementation. The multi-resolution approach provides adaptability to sudden reduction in processing capabilities, besides computational advantages; the windows-based image processing algorithm guarantees low-memory requirements and local processing.
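The coarse-to-fine, line-independent structure the abstract describes can be sketched with one-dimensional SAD window matching on a single scanline. This is a toy sketch under stated assumptions (synthetic lines, a fixed shift of 2 pixels, SAD as the cost); the paper's multi-size-windows refinement and GPU mapping are not reproduced:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def match_line(left, right, win, search_range):
    """Per-pixel disparity on one scanline; search_range maps a pixel
    index to the (lo, hi) disparity interval to test."""
    disp = [0] * len(left)
    for i in range(len(left) - win):
        lo, hi = search_range(i)
        best, best_cost = lo, float("inf")
        for d in range(lo, min(hi, i) + 1):
            cost = sad(left[i:i + win], right[i - d:i - d + win])
            if cost < best_cost:
                best, best_cost = d, cost
        disp[i] = best
    return disp

def downsample(x):
    return [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]

def coarse_to_fine(left, right, win, max_d):
    # coarse pass at half resolution, then a narrow +-1 refinement search
    # around the upscaled coarse estimate: less work and less memory
    coarse = match_line(downsample(left), downsample(right), win,
                        lambda i: (0, max_d // 2))
    def around(i):
        g = 2 * coarse[min(i // 2, len(coarse) - 1)]
        return (max(0, g - 1), g + 1)
    return match_line(left, right, win, around)

# synthetic scanlines: right is left shifted by a true disparity of 2
left = [float((i * 7) % 13) for i in range(48)]
right = left[2:] + [left[-1]] * 2
fine = coarse_to_fine(left, right, 4, 5)
```

Each scanline is processed independently, which is what makes the approach parallelizable and suitable for a sudden reduction in processing resources (the coarse result alone is already usable).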
Qi, Xianbiao; Zhao, Guoying; Li, Chun-Guang; Guo, Jun; Pietikainen, Matti
2017-03-01
Indirect immunofluorescence imaging of human epithelial type 2 (HEp-2) cell image is an effective evidence to diagnose autoimmune diseases. Recently, computer-aided diagnosis of autoimmune diseases by the HEp-2 cell classification has attracted great attention. However, the HEp-2 cell classification task is quite challenging due to large intraclass and small interclass variations. In this paper, we propose an effective approach for the automatic HEp-2 cell classification by combining multiresolution co-occurrence texture and large regional shape information. To be more specific, we propose to: 1) capture multiresolution co-occurrence texture information by a novel pairwise rotation-invariant co-occurrence of local Gabor binary pattern descriptor; 2) depict large regional shape information by using an improved Fisher vector model with RootSIFT features, which are sampled from large image patches in multiple scales; and 3) combine both features. We evaluate systematically the proposed approach on the IEEE International Conference on Pattern Recognition (ICPR) 2012, the IEEE International Conference on Image Processing (ICIP) 2013, and the ICPR 2014 contest datasets. The proposed method based on the combination of the introduced two features outperforms the winners of the ICPR 2012 contest using the same experimental protocol. Our method also greatly improves the winner of the ICIP 2013 contest under four different experimental setups. Using the leave-one-specimen-out evaluation strategy, our method achieves comparable performance with the winner of the ICPR 2014 contest that combined four features.
Multi-focus and multi-modal fusion: a study of multi-resolution transforms
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Thomas, Millicent
2016-05-01
Automated image fusion has a wide range of applications across a multitude of fields such as biomedical diagnostics, night vision, and target recognition. Automation in the field of image fusion is difficult because there are many types of imagery data that can be fused using different multi-resolution transforms. The different image fusion transforms provide coefficients for image fusion, creating a large number of possibilities. This paper seeks to understand how automation could be conceived for selecting the multi-resolution transform for different applications, starting in the multi-focus and multi-modal image sub-domains. The study analyzes which transforms are most effective in each sub-domain and identifies one or two transforms that are most effective for image fusion overall. The transform techniques are compared comprehensively to find a correlation between the fusion input characteristics and the optimal transform. The assessment is completed through the use of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based methods.
A multi-resolution approach to retrospectively-gated cardiac micro-CT reconstruction
NASA Astrophysics Data System (ADS)
Clark, D. P.; Johnson, G. A.; Badea, C. T.
2014-03-01
In preclinical research, micro-CT is commonly used to provide anatomical information; however, there is significant interest in using this technology to obtain functional information in cardiac studies. The fastest acquisition in 4D cardiac micro-CT imaging is achieved via retrospective gating, resulting in irregular angular projections after binning the projections into phases of the cardiac cycle. Under these conditions, analytical reconstruction algorithms, such as filtered back projection, suffer from streaking artifacts. Here, we propose a novel, multi-resolution, iterative reconstruction algorithm inspired by robust principal component analysis which prevents the introduction of streaking artifacts, while attempting to recover the highest temporal resolution supported by the projection data. The algorithm achieves these results through a unique combination of the split Bregman method and joint bilateral filtration. We illustrate the algorithm's performance using a contrast-enhanced, 2D slice through the MOBY mouse phantom and realistic projection acquisition and reconstruction parameters. Our results indicate that the algorithm is robust at undersampling levels of only 34 projections per cardiac phase and, therefore, has high potential for reducing both acquisition times and radiation dose. Another potential advantage of the multi-resolution scheme is the natural division of the reconstruction problem into a large number of independent sub-problems which can be solved in parallel. In future work, we will investigate the performance of this algorithm with retrospectively gated cardiac micro-CT data.
QRS detection by lifting scheme constructing multi-resolution morphological decomposition.
Zhang, Pu; Ma, Heather T; Zhang, Qinyu
2014-01-01
QRS complex detection algorithms are the core of ECG auto-diagnosis methods and deeply influence cardiac cycle division for signal compression. However, ECG signals collected by noninvasive surface electrodes are usually mixed with several kinds of interference, and waveform variation is the main obstacle to practical ECG processing. This paper proposes a QRS complex detection algorithm based on multi-resolution mathematical morphological decomposition, which combines the strengths of mathematical morphology and multi-resolution decomposition for R-peak detection. Moreover, a lifting construction with a maximization updating operator is adopted to further improve the algorithm's performance, and an efficient R-peak search-back algorithm is employed to reduce false positives (FP) and false negatives (FN). Applied to the MIT-BIH Arrhythmia Database, the proposed algorithm performs well, achieving over 99% detection rate, sensitivity, and positive predictivity with a low computational burden. The proposed method is therefore appropriate for portable medical devices in telemedicine systems.
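The morphological idea behind such detectors can be sketched with flat grey-scale operators: an average of opening and closing estimates the baseline, and the detrended residual is thresholded for R peaks. This is a minimal illustration with a hypothetical window half-width and threshold, not the paper's lifting construction or its search-back logic.

```python
def erode(x, k):
    # flat grey-scale erosion: minimum over a window of half-width k
    n = len(x)
    return [min(x[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def dilate(x, k):
    # flat grey-scale dilation: maximum over a window of half-width k
    n = len(x)
    return [max(x[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def opening(x, k):
    return dilate(erode(x, k), k)

def closing(x, k):
    return erode(dilate(x, k), k)

def detrend(x, k):
    # suppress baseline wander: subtract the opening/closing average
    base = [(o + c) / 2 for o, c in zip(opening(x, k), closing(x, k))]
    return [xi - bi for xi, bi in zip(x, base)]

def r_peaks(x, k=10, thresh_frac=0.5):
    # local maxima of the detrended signal above a fraction of its peak
    d = detrend(x, k)
    thresh = thresh_frac * max(d)
    return [i for i in range(1, len(d) - 1)
            if d[i] > thresh and d[i] >= d[i - 1] and d[i] > d[i + 1]]
```

On a toy signal (slow ramp plus sharp spikes), the opening removes the spikes while the ramp survives, so the residual isolates the R-peak candidates.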
NASA Astrophysics Data System (ADS)
Kishan, Harini; Seelamantula, Chandra Sekhar
2015-09-01
We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean-squared error (MSE). Unlike the MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed the SURE-optimal bilateral filter (SOBF), whose optimal parameters are selected using the SURE criterion. To further improve the denoising performance of SOBF, we propose two variants: the SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and the SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.
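The SURE principle the authors rely on can be illustrated on a much simpler denoiser than the bilateral filter. The sketch below grid-searches the parameter of a linear shrinkage f(y) = a·y using only the noisy data and the known noise variance, then compares the result with the oracle choice that sees the ground truth; the signal, noise level, and grid are illustrative assumptions, and the paper's bilateral-filter SURE expressions are not reproduced here.

```python
import math
import random

random.seed(0)
N = 5000
sigma = 1.0
x = [2.0 * math.sin(0.01 * i) for i in range(N)]   # "unknown" clean signal
y = [xi + random.gauss(0.0, sigma) for xi in x]    # noisy observation

def sure_linear(a, y, sigma):
    # SURE for the linear denoiser f(y) = a*y:
    #   SURE = (1/N)||f(y) - y||^2 - sigma^2 + (2 sigma^2 / N) * div f,
    # and div f = N*a for this linear map, so the last term is 2*sigma^2*a.
    n = len(y)
    return (sum(((a - 1.0) * yi) ** 2 for yi in y) / n
            - sigma ** 2 + 2.0 * sigma ** 2 * a)

grid = [i / 100.0 for i in range(101)]
# parameter chosen WITHOUT the ground truth, via SURE
a_sure = min(grid, key=lambda a: sure_linear(a, y, sigma))
# oracle parameter minimizing the true MSE (needs the ground truth x)
a_oracle = min(grid, key=lambda a: sum((a * yi - xi) ** 2
                                       for xi, yi in zip(x, y)) / N)
```

With enough samples the SURE-selected parameter lands essentially on the oracle value, which is the property that makes SURE usable when no ground truth exists.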
NASA Astrophysics Data System (ADS)
Kong, Jun; Sertel, Olcay; Shimada, Hiroyuki; Boyer, Kim L.; Saltz, Joel H.; Gurcan, Metin N.
2008-03-01
Neuroblastic Tumor (NT) is one of the most commonly occurring tumors in children. Of all types of NT, neuroblastoma is the most malignant tumor; it can be further categorized into undifferentiated (UD), poorly-differentiated (PD) and differentiating (D) types, in terms of the grade of pathological differentiation. Currently, pathologists determine the grade of differentiation by visual examination of tissue samples under the microscope. However, this process is subjective and, hence, may lead to intra- and inter-reader variability. In this paper, we propose a multi-resolution image analysis system that helps pathologists classify tissue samples according to their grades of differentiation. The inputs to this system are color images of haematoxylin and eosin (H&E) stained tissue samples. The complete image analysis system has five stages: segmentation, feature construction, feature extraction, classification and confidence evaluation. Due to the large number of input images, both parallel processing and multi-resolution analysis were carried out to reduce the execution time of the algorithm. Our training dataset consists of 387 image tiles of size 512×512 pixels from three whole-slide images. We tested the developed system with an independent set of 24 whole-slide images, eight from each grade. The developed system has an accuracy of 83.3% in correctly identifying the grade of differentiation, and it takes about two hours, on average, to process each whole-slide image.
Multiresolution analysis of 3D multimodal objects using a 2D quincunx wavelet analysis
NASA Astrophysics Data System (ADS)
Toubin, Marc F.; Dumont, Christophe; Truchetet, Frederic; Abidi, Mongi A.
1999-08-01
A reconstructed scene in virtual reality typically consists of millions of triangles. The data are heterogeneous, consisting not only of geometric coordinates but also of multi-modal data. The latter requires more complex calculations and very high-speed graphics. Due to the large amount of data, displaying and analyzing these 3D models requires new methods. This paper presents an innovative method to analyze multi-modal models using a 2D quincunx wavelet analysis. The algorithm is composed of three processes. First, a set of range images is captured from various viewpoints surrounding the object of interest. In addition, a set of multi-modal images is acquired. Then, a multi-resolution analysis based on the quincunx wavelet transform is performed. The multi-resolution analysis allows extraction of multi-resolution detail areas. These areas of detail are projected back onto the surface of the initial model. Detail areas are marked onto the model and constitute another modality. Finally, a mesh simplification is performed to reduce data that are not marked as detail. This approach can be applied to any 3D model containing multi-modal information to allow fast rendering and manipulation. The method also allows 3D data de-noising.
Forest cover mapping in post-Soviet Central Asia using multi-resolution remote sensing imagery.
Yin, He; Khamzina, Asia; Pflugmacher, Dirk; Martius, Christopher
2017-05-02
Despite rapid advances and large-scale initiatives in forest mapping, reliable cross-border information about the status of forest resources in Central Asian countries is lacking. We produced consistent Central Asia forest cover (CAFC) maps based on a cost-efficient approach using multi-resolution satellite imagery from Landsat and MODIS during 2009-2011. Spectral-temporal metrics derived from 2009-2011 Landsat imagery (overall accuracy of 0.83) were used to predict sub-pixel forest cover at the MODIS scale for 2010. Accuracy assessment confirmed the validity of the MODIS-based forest cover map, with a normalized root-mean-square error of 0.63. A general paucity of forest resources in post-Soviet Central Asia was indicated, with 1.24% of the region covered by forest. In comparison to the CAFC map, a regional map derived from MODIS Vegetation Continuous Fields tended to underestimate forest cover, while the Global Forest Change product matched well. The Global Forest Resources Assessments, based on individual country reports, overestimated forest cover by 1.5 to 147 times, particularly in the more arid countries of Turkmenistan and Uzbekistan. Multi-resolution imagery contributes to regionalized assessment of forest cover in the world's drylands, while the developed CAFC maps (available at https://data.zef.de/ ) aim to facilitate decisions on biodiversity conservation and reforestation programs in Central Asia.
Multiresolution wavelet analysis of the body surface ECG before and after angioplasty.
Gramatikov, B; Yi-Chun, S; Rix, H; Caminal, P; Thakor, N V
1995-01-01
Electrocardiographic recordings of patients with coronary artery stenosis, made before and after angioplasty, were analyzed by the multiresolution wavelet transform (MRWT) technique. The MRWT decomposes the signal of interest into its coarse and detail components at successively finer scales. MRWT was carried out on different leads in order to compare the P-QRS-T complex from recordings made before with those made after percutaneous transluminal coronary angioplasty (PTCA). ECG signals before and after successful PTCA procedures show distinctive changes at certain scales, thus helping to identify whether the procedure has been successful. In six patients who underwent right coronary artery PTCA, varying levels of reperfusion were achieved, and the changes in the detail components of ECG were shown to correlate with the successful reperfusion. The detail components at scales 5 and 6, corresponding approximately to the frequencies in the range of 2.3-8.3 Hz, are shown to be the most sensitive to ischemia-reperfusion changes (p < 0.05). The same conclusion was reached by synthesizing the post-PTCA signals from pre-PTCA signals with the help of these detail components. For on-line monitoring a vector plot, analogous to vector cardiogram, of the two most sensitive MRWT detail components is proposed. Thus, multiresolution analysis of ECG may be useful as a monitoring and diagnostic tool during angioplasty procedures.
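The quoted scale-to-frequency correspondence follows from the dyadic band splitting of the MRWT. A hedged arithmetic check, assuming an ECG sampling rate of 300 Hz and ideal half-band filters (the abstract does not state the sampling rate, and exact band edges depend on the wavelet used):

```python
def detail_band(fs_hz, level):
    # With ideal half-band splits, detail level j of a dyadic wavelet
    # decomposition occupies roughly fs/2**(j+1) .. fs/2**j.
    return fs_hz / 2 ** (level + 1), fs_hz / 2 ** level

# At an assumed fs = 300 Hz:
#   level 5 -> (4.6875, 9.375) Hz
#   level 6 -> (2.34375, 4.6875) Hz
# Together the two levels span roughly 2.3-9.4 Hz, broadly consistent with
# the 2.3-8.3 Hz range quoted for scales 5 and 6.
```

This kind of mapping is how one decides which detail scales to inspect for a physiological band of interest.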
Wavelet-based multiresolution with n-th-root-of-2 Subdivision
Linsen, L; Pascucci, V; Duchaineau, M A; Hamann, B; Joy, K I
2004-12-16
Multiresolution methods are a common technique for dealing with large-scale data and representing it at multiple levels of detail. The authors present a multiresolution hierarchy construction based on n-th-root-of-2 subdivision, which has all the advantages of a regular data organization scheme while reducing the drawback of coarse granularity. The n-th-root-of-2 subdivision scheme only doubles the number of vertices in each subdivision step, regardless of the dimension n. They describe the construction of 2D, 3D, and 4D hierarchies representing surfaces, volume data, and time-varying volume data, respectively. The 4D approach supports spatial and temporal scalability. For high-quality data approximation on each level of detail, they use downsampling filters based on n-variate B-spline wavelets. They present a B-spline wavelet lifting scheme for n-th-root-of-2 subdivision steps to obtain small or narrow filters. Narrow filters support adaptive refinement and out-of-core data exploration techniques.
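The granularity claim can be made concrete with a vertex-count sketch (illustrative numbers, not taken from the paper): each n-th-root-of-2 sub-step doubles the count, so n such sub-steps together match one full dyadic refinement in dimension n, giving n intermediate levels of detail where a dyadic scheme offers one.

```python
def vertex_counts(v0, substeps):
    # Each n-th-root-of-2 subdivision step doubles the vertex count,
    # independent of the dimension n.
    return [v0 * 2 ** k for k in range(substeps + 1)]

# In 3D, one regular dyadic step multiplies the count by 2**3 = 8;
# the root-of-2 scheme reaches the same factor in 3 gentler doublings,
# exposing the intermediate levels 2x and 4x as usable resolutions.
counts_3d = vertex_counts(100, 3)   # [100, 200, 400, 800]
```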
NASA Astrophysics Data System (ADS)
Sangireddy, H.; Passalacqua, P.; Stark, C. P.
2013-12-01
Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high-resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability distributions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We show this correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady-state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges within the GeoNet framework developed by Passalacqua et al. [2010] and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power-law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic
A Multiresolution Approach to Shear Wave Image Reconstruction
Hollender, Peter; Bottenus, Nick; Trahey, Gregg
2015-01-01
Shear wave imaging techniques build maps of local elasticity by estimating the local group velocity of induced mechanical waves. Velocity estimates are formed using the time delay in the motion profile of the medium at two or more points offset from the shear wave source. Because the absolute time-of-flight between any pair of locations scales with the distance between them, there is an inherent trade-off between robustness to time-of-flight errors and lateral spatial resolution, based on the number and spacing of the receive points used for each estimate. This work proposes a method of using the time delays measured between all combinations of locations to estimate a noise-robust, high-resolution image. The time-of-flight problem is presented as an overdetermined system of linear equations that can be directly solved with and without spatial regularization terms. Finite element method simulations of acoustic radiation force-induced shear waves are used to illustrate the method, demonstrating superior contrast-to-noise ratio and lateral edge resolution characteristics compared to linear regression of arrival times. This technique may improve shear wave imaging in situations where time-of-flight noise is a limiting factor. PMID:26276953
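In the simplest constant-speed case, the all-pairs time-of-flight idea reduces to an overdetermined linear system in a single slowness value. A minimal least-squares sketch, using hypothetical positions/times and no spatial regularization (the paper solves for a spatially varying image):

```python
from itertools import combinations

def fit_speed(positions, times):
    # Model every pairwise delay as d_ij = (x_j - x_i) * s, where s is the
    # slowness (1/speed), and solve the overdetermined system in the
    # least-squares sense: s = sum(dx*dt) / sum(dx*dx).
    num = 0.0
    den = 0.0
    for i, j in combinations(range(len(positions)), 2):
        dx = positions[j] - positions[i]
        dt = times[j] - times[i]
        num += dx * dt
        den += dx * dx
    return den / num  # speed = 1 / slowness

# Noiseless check: receive points 1 mm apart, wave speed 2 mm/ms
# (units are illustrative), so arrival times are x/2.
speed = fit_speed([0.0, 1.0, 2.0, 3.0, 4.0], [0.0, 0.5, 1.0, 1.5, 2.0])
```

Using all pairs rather than only adjacent points is what buys the noise robustness: long-baseline pairs anchor the estimate while short-baseline pairs preserve locality.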
A hardware implementation of multiresolution filtering for broadband instrumentation
Kercel, S.W.; Dress, W.B.
1995-12-01
The authors have constructed a wavelet processing board that implements a 14-level wavelet transform. The board uses a high-speed, analog-to-digital (A/D) converter, a hardware queue, and five fixed-point digital signal processing (DSP) chips in a parallel pipeline architecture. All five processors are independently programmable. The board is designed as a general purpose engine for instrumentation applications requiring near real-time wavelet processing or multiscale filtering. The present application is the processing engine of a magnetic field monitor that covers 305 Hz through 5 MHz. The monitor is used for the detection of peak values of magnetic fields in nuclear power plants. This paper describes the design, development, simulation, and testing of the system. Specific issues include the conditioning of real-world signals for wavelet processing, practical trade-offs between queue length and filter length, selection of filter coefficients, simulation of a 14-octave filter bank, and limitations imposed by a fixed-point processor. Test results from the completed wavelet board are included.
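The quoted band limits are consistent with a 14-level transform: doubling 305 Hz fourteen times lands at about 5 MHz. A quick arithmetic check (octave edges only; the actual filter responses of course overlap):

```python
def octave_edges(f_low_hz, octaves):
    # band edges of a filter bank that doubles in frequency per octave
    return [f_low_hz * 2 ** k for k in range(octaves + 1)]

edges = octave_edges(305, 14)
# 305 Hz * 2**14 = 4,997,120 Hz, i.e. the 305 Hz - 5 MHz monitor span
# corresponds to the board's 14-octave wavelet filter bank.
```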
Hardware implementation of multiresolution filtering for broadband instrumentation
NASA Astrophysics Data System (ADS)
Kercel, Stephen W.; Dress, William B.
1995-04-01
The authors have constructed a wavelet processing board that implements a 14-level wavelet transform. The board uses a high-speed analog-to-digital (A/D) converter, a hardware queue, and five fixed-point digital signal processing (DSP) chips in a parallel pipeline architecture. All five processors are independently programmable. The board is designed as a general purpose engine for instrumentation applications requiring near real-time wavelet processing or multiscale filtering. The present application is the processing engine of a magnetic field monitor that covers 305 Hz through 5 MHz. The monitor is used for the detection of peak values of magnetic fields in nuclear power plants. This paper describes the design, development, simulation, and testing of the system. Specific issues include the conditioning of real-world signals for wavelet processing, practical trade-offs between queue length and filter length, selection of filter coefficients, simulation of a 14-octave filter bank, and limitations imposed by a fixed-point processor. Test results from the completed wavelet board are included.
Convertino, Matteo; Mangoubi, Rami S.; Linkov, Igor; Lowry, Nathan C.; Desai, Mukund
2012-01-01
Background The quantification of species richness and species turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel-intensity-based Shannon entropy for estimating species richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. Methodology/Principal Findings We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected and its parameters estimated, or a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdfs via the Kullback-Leibler divergence (KL). Species turnover, or diversity, is estimated using both this KL divergence and the difference in Shannon entropy. Additionally, we predict species richness, or diversity, based on the
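The two measures the authors combine, Shannon entropy and KL divergence, can be sketched non-parametrically on intensity histograms. Note this is the histogram alternative the abstract mentions, not the generalized-Gaussian subband fits the authors actually prefer:

```python
import math

def shannon_entropy(hist):
    # Shannon entropy (in bits) of a pixel-intensity histogram,
    # normalized internally to a probability mass function
    total = float(sum(hist))
    return -sum((c / total) * math.log2(c / total) for c in hist if c > 0)

def kl_divergence(p_hist, q_hist):
    # discrete Kullback-Leibler divergence KL(p || q), in bits, between
    # two histograms over the same bins (q assumed nonzero where p is)
    ps = float(sum(p_hist))
    qs = float(sum(q_hist))
    return sum((p / ps) * math.log2((p / ps) / (q / qs))
               for p, q in zip(p_hist, q_hist) if p > 0)
```

Entropy differences then stand in for richness change, while KL(p || q) between two seasons' texture statistics stands in for turnover: identical textures give zero, and the divergence grows as the distributions separate.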
Smith, Daniel P; Thrash, J Cameron; Nicora, Carrie D; Lipton, Mary S; Burnum-Johnson, Kristin E; Carini, Paul; Smith, Richard D; Giovannoni, Stephen J
2013-11-26
Nitrogen is one of the major nutrients limiting microbial productivity in the ocean, and as a result, most marine microorganisms have evolved systems for responding to nitrogen stress. The highly abundant alphaproteobacterium "Candidatus Pelagibacter ubique," a cultured member of the order Pelagibacterales (SAR11), lacks the canonical GlnB, GlnD, GlnK, and NtrB/NtrC genes for regulating nitrogen assimilation, raising questions about how these organisms respond to nitrogen limitation. A survey of 266 Alphaproteobacteria genomes found these five regulatory genes nearly universally conserved, absent only in intracellular parasites and members of the order Pelagibacterales, including "Ca. Pelagibacter ubique." Global differences in mRNA and protein expression between nitrogen-limited and nitrogen-replete cultures were measured to identify nitrogen stress responses in "Ca. Pelagibacter ubique" strain HTCC1062. Transporters for ammonium (AmtB), taurine (TauA), amino acids (YhdW), and opines (OccT) were all elevated in nitrogen-limited cells, indicating that they devote increased resources to the assimilation of nitrogenous organic compounds. Enzymes for assimilating amine into glutamine (GlnA), glutamate (GltBD), and glycine (AspC) were similarly upregulated. Differential regulation of the transcriptional regulator NtrX in the two-component signaling system NtrY/NtrX was also observed, implicating it in control of the nitrogen starvation response. Comparisons of the transcriptome and proteome supported previous observations of uncoupling between transcription and translation in nutrient-deprived "Ca. Pelagibacter ubique" cells. Overall, these data reveal a streamlined, PII-independent response to nitrogen stress in "Ca. Pelagibacter ubique," and likely other Pelagibacterales, and show that they respond to nitrogen stress by allocating more resources to the assimilation of nitrogen-rich organic compounds. Pelagibacterales are extraordinarily abundant and play a pivotal
Multiresolution pattern recognition of small volcanos in Magellan data
NASA Technical Reports Server (NTRS)
Smyth, P.; Anderson, C. H.; Aubele, J. C.; Crumpler, L. S.
1992-01-01
The Magellan data is a treasure-trove for scientific analysis of venusian geology, providing far more detail than was previously available from Pioneer Venus, Venera 15/16, or ground-based radar observations. However, at this point, planetary scientists are being overwhelmed by the sheer quantity of data collected; data analysis technology has not kept pace with our ability to collect and store it. In particular, 'small-shield' volcanos (less than 20 km in diameter) are the most abundant visible geologic feature on the planet. It is estimated, based on extrapolating from previous studies and knowledge of the underlying geologic processes, that there should be on the order of 10^5 to 10^6 of these volcanos visible in the Magellan data. Identifying and studying these volcanos is fundamental to a proper understanding of the geologic evolution of Venus. However, locating and parameterizing them in a manual manner is very time-consuming. Hence, we have undertaken the development of techniques to partially automate this task. The goal is not the unrealistic one of total automation, but rather the development of a useful tool to aid the project scientists. The primary constraints for this particular problem are as follows: (1) the method must be reasonably robust; and (2) the method must be reasonably fast. Unlike most geological features, the small volcanos of Venus can be ascribed to a basic process that produces features with a short list of readily defined characteristics differing significantly from other surface features on Venus. For pattern recognition purposes the relevant criteria include the following: (1) a circular planimetric outline; (2) a known diameter frequency distribution from preliminary studies; (3) a limited number of basic morphological shapes; and (4) the common occurrence of a single, circular summit pit at the center of the edifice.
Multiresolution quantification of deciduousness in West Central African forests
NASA Astrophysics Data System (ADS)
Viennois, G.; Barbier, N.; Fabre, I.; Couteron, P.
2013-04-01
The characterization of leaf phenology in tropical forests is of major importance and improves our understanding of earth-atmosphere-climate interactions. The availability of satellite optical data with a high temporal resolution has permitted the identification of unexpected phenological cycles, particularly over the Amazon region. A primary issue in these studies is the relationship between the optical reflectance of pixels of 1 km or more in size and ground information of limited spatial extent. In this paper, we demonstrate that optical data with high to very-high spatial resolution can help bridge this scale gap by providing snapshots of the canopy that allow discernment of the leaf-phenological stage of trees and the proportions of leaved crowns within the canopy. We also propose applications for broad-scale forest characterization and mapping in West Central Africa over an area of 141 000 km2. Eleven years of the Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) data were averaged over the wet and dry seasons to provide a dataset of optimal radiometric quality at a spatial resolution of 250 m. Sample areas covered at a very-high (GeoEye) and high (SPOT-5) spatial resolution were used to identify forest types and to quantify the proportion of leaved trees in the canopy. The dry season EVI was positively correlated with the proportion of leaved trees in the canopy. This relationship allowed the conversion of EVI into canopy deciduousness at the regional level. On this basis, ecologically important forest types could be mapped, including young secondary, open Marantaceae, Gilbertiodendron dewevrei and swamp forests. We show that in west central African forests, a large share of the variability in canopy reflectance, as captured by the EVI, is due to variation in the proportion of leaved trees in the upper canopy, thereby opening new perspectives for biodiversity and carbon-cycle applications.
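The seasonal-averaging step can be sketched with the standard MODIS EVI formula (coefficients G=2.5, C1=6, C2=7.5, L=1; the reflectance values below are purely illustrative):

```python
def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    # MODIS Enhanced Vegetation Index from surface reflectances
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

def seasonal_evi(scenes):
    # average per-scene EVI over a season's worth of composites, as done
    # here to build radiometrically clean wet- and dry-season maps
    return sum(evi(*s) for s in scenes) / len(scenes)

# e.g. one (nir, red, blue) reflectance triple:
value = evi(0.4, 0.1, 0.05)
```

The dry-season average of this quantity is what the study correlates with the proportion of leaved trees observed in the very-high-resolution snapshots.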
Multiresolution quantification of deciduousness in West-Central African forests
NASA Astrophysics Data System (ADS)
Viennois, G.; Barbier, N.; Fabre, I.; Couteron, P.
2013-11-01
The characterization of leaf phenology in tropical forests is of major importance for forest typology as well as to improve our understanding of earth-atmosphere-climate interactions or biogeochemical cycles. The availability of satellite optical data with a high temporal resolution has permitted the identification of unexpected phenological cycles, particularly over the Amazon region. A primary issue in these studies is the relationship between the optical reflectance of pixels of 1 km or more in size and ground information of limited spatial extent. In this paper, we demonstrate that optical data with high to very-high spatial resolution can help bridge this scale gap by providing snapshots of the canopy that allow discernment of the leaf-phenological stage of trees and the proportions of leaved crowns within the canopy. We also propose applications for broad-scale forest characterization and mapping in West-Central Africa over an area of 141 000 km2. Eleven years of the Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) data were averaged over the wet and dry seasons to provide a data set of optimal radiometric quality at a spatial resolution of 250 m. Sample areas covered at a very-high (GeoEye) and high (SPOT-5) spatial resolution were used to identify forest types and to quantify the proportion of leaved trees in the canopy. The dry-season EVI was positively correlated with the proportion of leaved trees in the canopy. This relationship allowed the conversion of EVI into canopy deciduousness at the regional level. On this basis, ecologically important forest types could be mapped, including young secondary, open Marantaceae, Gilbertiodendron dewevrei and swamp forests. We show that in West-Central African forests, a large share of the variability in canopy reflectance, as captured by the EVI, is due to variation in the proportion of leaved trees in the upper canopy, thereby opening new perspectives for biodiversity and
NASA Astrophysics Data System (ADS)
Matsubara, R.; Sakai, Y.; Nomura, T.; Sakai, M.; Kudo, K.; Majima, Y.; Knipp, D.; Nakamura, M.
2015-11-01
Gate-insulator surface treatments are often applied to improve the performance of organic thin-film transistors (TFTs). However, the origin of the resulting mobility increase has not been well understood, because the mobility-limiting factors have not been compared quantitatively. In this work, we clarify the influence of gate-insulator surface treatments in pentacene thin-film transistors on the limiting factors of mobility, i.e., the size of crystal-growth domains, the crystallite size, the HOMO-band-edge fluctuation, and the carrier-transport barrier at domain boundaries. We quantitatively investigated these factors for pentacene TFTs with bare, hexamethyldisilazane-treated, and polyimide-coated SiO2 layers as gate dielectrics. With these surface treatments, the size of the crystal-growth domains increases, but both the crystallite size and the HOMO-band-edge fluctuation remain unchanged. Analyzing the experimental results, we also show that the barrier height at the boundary between crystal-growth domains is not sensitive to the treatments. The results imply that the essential increase in mobility from these surface treatments is due only to the increase in the size of the crystal-growth domains, i.e., the decrease in the number of energy barriers at domain boundaries in the TFT channel.
1993-12-01
[Abstract replaced by front-matter extraction residue. Recoverable section topics: a discrete multiresolution decomposition algorithm; a spatio-temporal filter bank representation; spatial and temporal frequency sensitivity of motion cells; STFT and wavelet filter banks; construction of a wavelet filter bank providing directional selectivity; and combination of the coefficients obtained in the decomposition process.]
James Wickham; Collin Homer; James Vogelmann; Alexa McKerrow; Rick Mueller; Nate Herold; John Coulston
2014-01-01
The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies' needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the...
NASA Astrophysics Data System (ADS)
Collier, A.; Lao, L. L.; Abla, G.; Chu, M. S.; Prater, R.; Smith, S. P.; St. John, H. E.; Guo, W.; Li, G.; Pan, C.; Ren, Q.; Park, J. M.; Bisai, N.; Srinivasan, R.; Sun, A. P.; Liu, Y.; Worrall, M.
2010-11-01
This presentation summarizes several useful applications provided by the IMFIT integrated modeling framework to support DIII-D and EAST research. IMFIT is based on Python and utilizes modular task-flow architecture with a central manager and extensive GUI support to coordinate tasks among component modules. The kinetic-EFIT application allows multiple time-slice reconstructions by fetching pressure profile data directly from MDS+ or from ONETWO or PTRANSP. The stability application analyzes a given reference equilibrium for stability limits by performing parameter perturbation studies with MHD codes such as DCON, GATO, ELITE, or PEST3. The transport task includes construction of experimental energy and momentum fluxes from profile analysis and comparison against theoretical models such as MMM95, GLF23, or TGLF.
NASA Astrophysics Data System (ADS)
Wang, Xiang-Dong; Deng, Xiao-Chuan; Wang, Yong-Wei; Wang, Yong; Wen, Yi; Zhang, Bo
2014-05-01
This paper describes the successful fabrication of 4H-SiC junction barrier Schottky (JBS) rectifiers with a linearly graded field limiting ring (LG-FLR). Linearly variable ring spacings in the FLR termination are applied to improve the blocking voltage by reducing the peak surface electric field at the edge termination region; the termination acts like a variable lateral doping profile, resulting in a gradual field distribution. The experimental results demonstrate a breakdown voltage of 5 kV at a reverse leakage current density of 2 mA/cm2 (about 80% of the theoretical value). Detailed numerical simulations show that the proposed termination structure provides a more uniform electric field profile than the conventional FLR termination, which is responsible for a 45% improvement in the reverse blocking voltage despite a 3.7% longer total termination length.
Morgado, L N; Noordeloos, M E; Lamoureux, Y; Geml, J
2013-12-01
Species from Entoloma subg. Entoloma are commonly recorded from both the Northern and Southern Hemispheres and, according to the literature, most of them have at least Nearctic-Palearctic distributions. However, these records are based on morphological analysis, and studies relating morphology, molecular data and geographical distribution have not been reported. In this study, we used phylogenetic species recognition criteria through gene genealogical concordance (based on nuclear ITS, LSU, rpb2 and mitochondrial SSU) to answer specific questions concerning species limits in Entoloma subg. Entoloma and their geographic distribution in Europe, North America and Australasia. The studied morphotaxa belong to sect. Entoloma, namely species like the notorious poisonous E. sinuatum (E. lividum auct.), E. prunuloides (type-species of sect. Entoloma), E. nitidum and the red-listed E. bloxamii. With a few exceptions, our results reveal strong phylogeographical partitions that were previously not known. For example, no collection from Australasia proved to be conspecific with the Northern Hemisphere specimens. Almost all North American collections represent distinct and sister taxa to the European ones. Even within Europe, new lineages were uncovered for the red-listed E. bloxamii, which were previously unknown due to a broad morphological species concept. Our results clearly demonstrate the power of the phylogenetic species concept to reveal evolutionary units, to redefine the morphological limits of the species addressed and to provide insights into the evolutionary history of key morphological characters for Entoloma systematics. New taxa are described, and new combinations are made, including E. fumosobrunneum, E. pseudoprunuloides, E. ochreoprunuloides and E. caesiolamellatum. Epitypes are selected for E. prunuloides and E. bloxamii. In addition, complete descriptions are given of some other taxa used in this study for which modern descriptions are lacking, viz. E
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Sisniega, Alejandro; Zbijewski, Wojciech; Xu, Jennifer; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-01-01
A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has been recently developed and demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume includes a fine voxel size in the (inner) nontruncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. Experiments involving an anthropomorphic head phantom with truncation due to a carbon-fiber holder showed severe artifacts in conventional single-resolution PWLS, whereas extending the RFOV within the multi-resolution framework strongly reduced truncation artifacts. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600³, 800³, and 1000³ voxels). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel size, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.
Yi, Li; Fan, Yuebo; Quinn, Paul C.; Feng, Cong; Huang, Dan; Li, Jiao; Mao, Guoquan; Lee, Kang
2012-01-01
There has been considerable controversy regarding whether children with autism spectrum disorder (ASD) and typically developing children (TD) show different eye movement patterns when processing faces. We investigated ASD and age- and IQ-matched TD children's scanning of faces using a novel multi-method approach. We found that ASD children spent less time looking at the whole face generally. After controlling for this difference, ASD children's fixations of the other face parts, except for the eye region, and their scanning paths between face parts were comparable to either the age-matched or IQ-matched TD groups. In contrast, in the eye region, ASD children's scanning differed significantly from that of both TD groups: (a) ASD children fixated significantly less on the right eye (from the observer's view); (b) ASD children's fixations were more biased towards the left eye region; and (c) ASD children fixated below the left eye, whereas TD children fixated on the pupil region of the eye. Thus, ASD children do not have a general abnormality in face scanning. Rather, their abnormality is limited to the eye region, likely due to their strong tendency to avoid eye contact. PMID:23929830
DTMs: discussion of a new multi-resolution function based model
NASA Astrophysics Data System (ADS)
Brovelli, M. A.; Biagi, L.; Zamboni, G.
2012-04-01
The diffusion of new technologies based on WebGIS and virtual globes allows the distribution of DTMs and three-dimensional representations to the Web users' community. In the Web distribution of geographical information, database storage size represents a critical point: given a specific area of interest, the server typically needs to perform some preprocessing, and the data then have to be sent to the client, which applies some additional processing. The efficiency of all these actions is crucial to guarantee near real-time availability of the information. DTMs are obtained from the raw observations by some sampling or interpolation technique and are typically stored and distributed as Triangular Irregular Networks (TINs) or regular grids. A new approach to store and transmit DTMs has been studied and implemented. The basic idea is to use multi-resolution bilinear spline functions to interpolate the raw observations and to represent the terrain. In more detail, the algorithm performs the following actions. 1) The spatial distribution of the raw observations is investigated. In areas where few data are available, few levels of splines are activated, while more levels are activated where the raw observations are denser: each new level corresponds to a halving of the spline support with respect to the previous level. 2) After the selection of the spline functions to be activated, the relevant coefficients are estimated by interpolating the raw observations. The interpolation is computed by batch least squares. 3) Finally, the estimated coefficients of the splines are stored. The algorithm guarantees a local resolution consistent with the data density, exploiting all the available information provided by the sample. The model can be defined "function based" because the coefficients of a given function are stored instead of a set of heights: in particular, the resolution level, the position and the coefficient of each activated spline function are stored by the server and are
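The halving rule in step 1 can be illustrated in 1D with a minimal sketch (the levels, centers, and coefficients below are hypothetical; the actual model uses 2D bilinear tensor-product splines and estimates the coefficients by batch least squares):

```python
def hat(u):
    """Linear B-spline (hat) mother function, supported on [-1, 1]."""
    return max(0.0, 1.0 - abs(u))

def eval_multilevel(x, levels):
    """Evaluate a 1D multi-level linear-spline terrain height at x.
    `levels` is a list of (support, {center: coeff}) pairs, where each
    level's support is half the previous one, mirroring the halving
    rule in the abstract. Only (level, center, coeff) triples need to
    be stored, which is the "function based" storage idea."""
    z = 0.0
    for support, coeffs in levels:
        for center, c in coeffs.items():
            z += c * hat((x - center) / support)
    return z

# Two levels: a coarse trend plus a fine correction where data are dense.
levels = [
    (1.0, {0.0: 2.0, 1.0: 3.0, 2.0: 2.5}),  # coarse level, support 1.0
    (0.5, {1.0: 0.4}),                       # refined level, support halved
]
h = eval_multilevel(1.0, levels)  # 3.0 + 0.4 = 3.4
```

The refined level is only "activated" near x = 1.0, so areas with sparse observations cost no extra coefficients.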
Multiresolution field map estimation using golden section search for water-fat separation.
Lu, Wenmiao; Hargreaves, Brian A
2008-07-01
Many diagnostic MRI sequences demand reliable and uniform fat suppression. Multipoint water-fat separation methods, which are based on chemical-shift induced phase differences, have shown great success in the presence of field inhomogeneities. This work presents a computationally efficient and robust field map estimation method. The method begins with subsampling image data into a multiresolution image pyramidal structure, and then utilizes a golden section search to directly locate possible field map values at the coarsest level of the pyramidal structure. The field map estimate is refined and propagated to increasingly finer resolutions in an efficient manner until the full-resolution field map is obtained for final water-fat separation. The proposed method is validated with multiecho sequences where long echo-spacings normally impose great challenges on reliable field map estimation.
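The search strategy named in the abstract is a standard technique; a generic sketch (not the authors' implementation, and with a stand-in cost function rather than the actual field-map objective) looks like this:

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Locate the minimizer of a unimodal cost f on [a, b] by
    golden-section search: shrink the bracket by 1/phi per step,
    keeping whichever interior point has the lower cost."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
    return (a + b) / 2

# Stand-in cost with a minimum at x = 2; the paper's cost is a
# function of candidate field-map values at the coarsest level.
x_min = golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Because each iteration shrinks the bracket by a constant factor, the number of cost evaluations is logarithmic in the bracket width over the tolerance, which is what makes the coarsest-level search cheap.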
A Statistical Multiresolution Approach for Face Recognition Using Structural Hidden Markov Models
NASA Astrophysics Data System (ADS)
Nicholl, P.; Amira, A.; Bouchaffra, D.; Perrott, R. H.
2007-12-01
This paper introduces a novel methodology that combines the multiresolution feature of the discrete wavelet transform (DWT) with the local interactions of the facial structures expressed through the structural hidden Markov model (SHMM). A range of wavelet filters such as Haar, biorthogonal 9/7, and Coiflet, as well as Gabor, have been implemented in order to search for the best performance. SHMMs perform a thorough probabilistic analysis of any sequential pattern by revealing both its inner and outer structures simultaneously. Unlike traditional HMMs, SHMMs do not make the assumption that the visible observation sequence is conditionally independent given the states. This is achieved via the concept of local structures introduced by the SHMMs. Therefore, the long-range dependency problem inherent to traditional HMMs has been drastically reduced. SHMMs have not previously been applied to the problem of face identification. The results reported in this application have shown that SHMM outperforms the traditional hidden Markov model with a 73% increase in accuracy.
Liao, Fuyuan; Wang, Jue; He, Ping
2008-04-01
Gait rhythm of patients with Parkinson's disease (PD), Huntington's disease (HD) and amyotrophic lateral sclerosis (ALS) has been studied focusing on the fractal and correlation properties of stride time fluctuations. In this study, we investigated gait asymmetry in these diseases using the multi-resolution entropy analysis of stance time fluctuations. Since stance time is likely to exhibit fluctuations across multiple spatial and temporal scales, the data series were decomposed into appropriate levels by applying the stationary wavelet transform. The similarity between two corresponding wavelet coefficient series in terms of their regularities at each level was quantified based on a modified sample entropy method, and a weighted sum was then used as a gait symmetry index. We found that gait symmetry in subjects with PD and HD, and especially in those with ALS, is significantly disturbed. This method may be useful in characterizing certain pathologies of motor control and, possibly, in monitoring disease progression and evaluating the effect of an individual treatment.
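The regularity measure the authors modify is sample entropy. A plain, unmodified version (with the wavelet decomposition step omitted, and with `r` as an absolute tolerance rather than a fraction of the standard deviation) can be sketched as:

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability
    that two sequences matching for m points (within tolerance r)
    also match for m + 1 points. Lower values = more regular."""
    n = len(series)
    def count_matches(mm):
        # Count template pairs of length mm that match within r.
        count = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm + 1):
                if all(abs(series[i + k] - series[j + k]) <= r
                       for k in range(mm)):
                    count += 1
        return count
    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float('inf')  # undefined for too-short/irregular series
    return -math.log(a / b)

# A strictly alternating series is highly regular, so SampEn is small.
s_regular = sample_entropy([0.0, 1.0] * 20)
```

The abstract's modified variant compares two corresponding wavelet-coefficient series per level; the per-series regularity computation above is the underlying building block.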
The Multi-Resolution CLEAN and its application to the short-spacing problem in interferometry
NASA Astrophysics Data System (ADS)
Wakker, B. P.; Schwarz, U. J.
1988-07-01
A modification of the CLEAN algorithm is described which alleviates the difficulties occurring in CLEAN for extended sources. This is accomplished by appropriately combining the results of a number of conventional CLEAN operations with optimized parameters, each done at a different resolution. This algorithm can properly be called "Multi-Resolution CLEAN" or "MRC". Experiments on model sources show that this deconvolution method works well even when the source is so large that the usual CLEAN becomes impractical. Further, for observations of extended sources, MRC enhances the signal-to-noise ratio, resulting in an easier definition of the area of signal. Moreover, MRC is in principle faster than a standard CLEAN because fewer δ-functions are needed.
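The conventional CLEAN operation that MRC runs at each resolution can be sketched in 1D with a toy beam and source (illustrative only; real interferometric CLEAN works on 2D dirty maps with a measured dirty beam):

```python
def clean_1d(dirty, beam, gain=0.1, n_iter=200, threshold=1e-3):
    """Minimal 1D CLEAN loop: find the residual peak, subtract a
    gain-scaled shifted copy of the beam there, and record a
    δ-component. MRC repeats this at several resolutions and
    combines the results; only the core loop is shown."""
    residual = list(dirty)
    components = []  # (position, amplitude) δ-functions
    half = len(beam) // 2
    for _ in range(n_iter):
        peak = max(range(len(residual)), key=lambda i: abs(residual[i]))
        amp = residual[peak]
        if abs(amp) < threshold:
            break
        components.append((peak, gain * amp))
        for j, b in enumerate(beam):
            idx = peak + j - half
            if 0 <= idx < len(residual):
                residual[idx] -= gain * amp * b
    return components, residual

# A single unit point source at index 5, observed through a toy beam.
beam = [0.5, 1.0, 0.5]
dirty = [0.0] * 11
for j, b in enumerate(beam):
    dirty[4 + j] += b
components, residual = clean_1d(dirty, beam)
```

The loop converges geometrically (the residual shrinks by a factor of 1 − gain per iteration here), and the recovered δ-component amplitudes sum to approximately the source flux.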
Pathfinder: multiresolution region-based searching of pathology images using IRM.
Wang, J Z
2000-01-01
The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wavelets and the IRM (Integrated Region Matching) distance. Experiments with a database of 70,000 pathology image fragments have demonstrated high retrieval accuracy and high speed. The algorithm can be combined with our previously developed wavelet-based progressive pathology image transmission and browsing algorithm and is expandable for medical image databases.
A multiresolution analysis for tensor-product splines using weighted spline wavelets
NASA Astrophysics Data System (ADS)
Kapl, Mario; Jüttler, Bert
2009-09-01
We construct biorthogonal spline wavelets for periodic splines which extend the notion of "lazy" wavelets for linear functions (where the wavelets are simply a subset of the scaling functions) to splines of higher degree. We then use the lifting scheme in order to improve the approximation properties with respect to a norm induced by a weighted inner product with a piecewise constant weight function. Using the lifted wavelets we define a multiresolution analysis of tensor-product spline functions and apply it, as a model problem, to the compression of black-and-white images, demonstrating that the use of a weight function makes it possible to adapt the norm to the specific problem.
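The "lazy" split and a single lifting step can be sketched for the simplest case (a Haar-like predict/update pair on even-length signals; the paper's weighted-inner-product lifting for higher-degree splines is more elaborate):

```python
def lazy_split(signal):
    """'Lazy' wavelet transform: scaling coefficients are the even
    samples, wavelet coefficients are the odd samples -- a subset
    of the scaling functions, as in the linear case above."""
    return signal[::2], signal[1::2]

def lift(evens, odds):
    """One predict/update lifting step: predict each odd sample
    from its even neighbor, then update the evens to preserve the
    running average (Haar-like sketch)."""
    details = [o - e for e, o in zip(evens, odds)]        # predict
    coarse = [e + d / 2 for e, d in zip(evens, details)]  # update
    return coarse, details

def unlift(coarse, details):
    """Exact inverse: undo update, then undo predict."""
    evens = [c - d / 2 for c, d in zip(coarse, details)]
    odds = [d + e for e, d in zip(evens, details)]
    return evens, odds

sig = [3.0, 5.0, 4.0, 4.0, 2.0, 6.0]
coarse, details = lift(*lazy_split(sig))  # coarse = pairwise means
```

Lifting's key property, used for compression, is that the inverse is exact by construction: reversing the update and predict steps reconstructs the signal bit-for-bit.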
Denoising techniques in adaptive multi-resolution domains with applications to biomedical images.
Lahmiri, Salim
2017-02-01
Variational mode decomposition (VMD) is a new adaptive multi-resolution technique suitable for signal denoising purposes. The main focus of this work has been to study the feasibility of several image denoising techniques in empirical mode decomposition (EMD) and VMD domains. A comparative study is made using 11 techniques widely used in the literature, including the Wiener filter, first-order local statistics, fourth-order partial differential equation, nonlinear complex diffusion process, linear complex diffusion process (LCDP), probabilistic non-local means, non-local Euclidean medians, non-local means, non-local patch regression, discrete wavelet transform and wavelet packet transform. On the basis of 396 denoising comparisons evaluated by peak signal-to-noise ratio, it is found that the best performances are obtained in the VMD domain when appropriate denoising techniques are applied. In particular, it is found that LCDP in combination with VMD performs the best and that VMD is faster than EMD.
High-speed multiresolution scanning probe microscopy based on Lissajous scan trajectories.
Tuma, Tomas; Lygeros, John; Kartik, V; Sebastian, Abu; Pantazi, Angeliki
2012-05-11
A novel scan trajectory for high-speed scanning probe microscopy is presented in which the probe follows a two-dimensional Lissajous pattern. The Lissajous pattern is generated by actuating the scanner with two single-tone harmonic waveforms of constant frequency and amplitude. Owing to the extremely narrow frequency spectrum, high imaging speeds can be achieved without exciting the unwanted resonant modes of the scanner and without increasing the sensitivity of the feedback loop to the measurement noise. The trajectory also enables rapid multiresolution imaging, providing a preview of the scanned area in a fraction of the overall scan time. We present a procedure for tuning the spatial and the temporal resolution of Lissajous trajectories and show experimental results obtained on a custom-built atomic force microscope (AFM). Real-time AFM imaging with a frame rate of 1 frame s⁻¹ is demonstrated.
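The two constant-frequency harmonic drive waveforms described above are easy to sketch; the frequencies, amplitudes, and sample count below are illustrative choices, not the instrument's actual settings:

```python
import math

def lissajous(fx, fy, ax, ay, n, T):
    """Sample a 2D Lissajous scan trajectory over duration T:
    each axis is driven by a single-tone harmonic waveform of
    constant frequency and amplitude, giving a very narrow
    drive spectrum (the property exploited for high speed)."""
    pts = []
    for k in range(n):
        t = T * k / n
        pts.append((ax * math.sin(2 * math.pi * fx * t),
                    ay * math.sin(2 * math.pi * fy * t)))
    return pts

# Close frequencies in a rational ratio trace a dense pattern that
# covers the scan area coarsely early on (the multiresolution preview),
# then progressively fills in.
path = lissajous(fx=11.0, fy=10.0, ax=1.0, ay=1.0, n=1000, T=1.0)
```

Tuning the frequency ratio trades spatial resolution against repeat period, which is the spatial/temporal-resolution tuning the abstract refers to.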
Visualization of Time-Varying Multiresolution Data Using Error-Based Temporal-Spatial Reuse
Nuber, C; LaMar, E; Hamann, B; Joy, K
2002-04-22
In this paper, we report results on exploration of two-dimensional (2D) time-varying datasets. We extend the notion of multiresolution spatial data approximation of static datasets to spatio-temporal approximation of time-varying datasets. Time-varying datasets typically do not change "uniformly," i.e., some spatial sub-domains can experience only little or no change for extended periods of time. In these sub-domains, we show that approximation error bounds can still be met by reusing sub-domains from other time steps. We generate a more general approximation scheme where sub-domains may approximate congruent sub-domains from any other time steps. While this incurs an O(T²) overhead, where T is the total number of time steps, we show significant reduction in data transmission. We also discuss ideas for improvements to reduce overhead.
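The reuse idea can be sketched for a 1D spatial domain (the tile size, tolerance, and restriction to earlier time steps are simplifying assumptions; the paper allows congruent sub-domains from any other time step):

```python
def build_reuse_table(frames, tile, eps):
    """Error-bounded temporal reuse sketch: a tile at time t reuses
    the congruent tile of an earlier time step whenever the max abs
    difference stays within the error bound eps. The candidate scan
    over prior time steps is the source of the O(T^2) overhead."""
    table = {}  # (time_step, tile_index) -> source time step
    n_tiles = len(frames[0]) // tile
    for t, frame in enumerate(frames):
        for k in range(n_tiles):
            a = frame[k * tile:(k + 1) * tile]
            src = t  # default: store this tile itself
            for s in range(t):
                b = frames[s][k * tile:(k + 1) * tile]
                if max(abs(x - y) for x, y in zip(a, b)) <= eps:
                    src = s  # error bound met: reuse earlier tile
                    break
            table[(t, k)] = src
    return table

frames = [[0.0, 0.00, 1.0, 1.0],
          [0.0, 0.01, 1.5, 1.6],  # first tile barely changes
          [0.0, 0.00, 2.0, 2.2]]
reuse = build_reuse_table(frames, tile=2, eps=0.05)
```

Only tiles whose `src` equals their own time step need to be transmitted; the rest are references, which is where the data-transmission savings come from.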
Multiresolution parametric estimation of transparent motions and denoising of fluoroscopic images.
Auvray, Vincent; Liénard, Jean; Bouthemy, Patrick
2005-01-01
We describe a novel multiresolution parametric framework to estimate transparent motions typically present in X-ray exams. Assuming the presence of two transparent layers, it computes two affine velocity fields by minimizing an appropriate objective function with an incremental Gauss-Newton technique. We have designed a realistic simulation scheme of fluoroscopic image sequences to validate our method on data with ground truth and different levels of noise. An experiment on real clinical images is also reported. We then exploit this transparent-motion estimation method to denoise two-layer image sequences using motion-compensated estimation. In accordance with theory, we show that we reach a denoising factor of 2/3 in a few iterations without introducing any local artifacts in the image sequence.
Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation
NASA Technical Reports Server (NTRS)
Drewry, Darren T.; Reynolds, Paul F., Jr.; Emanuel, William R.
2006-01-01
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation
NASA Technical Reports Server (NTRS)
Lacaze, Alberto; Meystel, Michael; Meystel, Alex
1994-01-01
This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR) which presents the ASR as a 'baby': that is, a system with no a priori knowledge of the world in which it operates, but with behavior-acquisition techniques that allow it to build this knowledge from the experience of acting within a particular environment (we will call it an Astro-baby). The learning techniques are rooted in the recursive algorithm for inductive generation of nested schemata molded from processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. This system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experience, while still maintaining minimal computational complexity, thanks to the system's multiresolutional nature.
Exploring a Multiresolution Modeling Approach within the Shallow-Water Equations
Ringler, Todd D.; Jacobsen, Doug; Gunzburger, Max; Ju, Lili; Duda, Michael; Skamarock, William
2011-11-01
The ability to solve the global shallow-water equations with a conforming, variable-resolution mesh is evaluated using standard shallow-water test cases. While the long-term motivation for this study is the creation of a global climate modeling framework capable of resolving different spatial and temporal scales in different regions, the process begins with an analysis of the shallow-water system in order to better understand the strengths and weaknesses of the approach developed herein. The multiresolution meshes are spherical centroidal Voronoi tessellations where a single, user-supplied density function determines the region(s) of fine- and coarse-mesh resolution. The shallow-water system is explored with a suite of meshes ranging from quasi-uniform resolution meshes, where the grid spacing is globally uniform, to highly variable resolution meshes, where the grid spacing varies by a factor of 16 between the fine and coarse regions. The potential vorticity is found to be conserved to within machine precision and the total available energy is conserved to within a time-truncation error. This result holds for the full suite of meshes, from quasi-uniform to highly variable resolution. Based on shallow-water test cases 2 and 5, the primary conclusion of this study is that solution error is controlled primarily by the grid resolution in the coarsest part of the model domain. This conclusion is consistent with results obtained by others. When these variable-resolution meshes are used for the simulation of an unstable zonal jet, the core features of the growing instability are found to be largely unchanged as the variation in the mesh resolution increases. The main differences between the simulations occur outside the region of mesh refinement and these differences are attributed to the additional truncation error that accompanies increases in grid spacing. Overall, the results demonstrate support for this approach as a path toward
Multitemporal Multi-Resolution SAR Data for Urbanization Mapping and Monitoring: Midterm Results
NASA Astrophysics Data System (ADS)
Ban, Yifang; Gamba, Paolo; Jacob, Alexander; Salentinig, Andreas
2014-11-01
The objective of this research is to evaluate spaceborne SAR data for urban extent extraction, urban land cover mapping and urbanization monitoring. The methodology includes urban extraction using the KTH-Pavia urban extractor and multi-resolution SAR data, as well as object-based classification of urban land cover using KTH-SEG and TerraSAR-X data. The urban extent extraction is based on spatial indices and Grey Level Co-occurrence Matrix (GLCM) textures while the object-based classification is based on KTH-SEG, an edge-aware region growing and merging algorithm. ENVISAT ASAR C-VV data at 30m and 75m resolution as well as TerraSAR-X data at 1m and 3m resolution were selected for this research. The results show that the KTH-Pavia Urban Extractor is effective in extracting urban areas and small towns from single-date single polarization ERS-1 SAR and ENVISAT ASAR data, and urbanization monitoring could be performed in a timely and reliable manner at low cost. The results also show that multi-resolution urban extractions were more reliable due to the reduction of the commission error, even though the overall accuracy does not change significantly. For urban land cover mapping, KTH-SEG was effective for classification of TerraSAR-X and TanDEM-X data, with a best accuracy of 83% achieved. These findings indicate that operational global urban mapping and urbanization monitoring is possible with multitemporal spaceborne SAR data, especially with the recent launch of Sentinel-1 that provides SAR data with global coverage, operational reliability and quick data delivery.
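The GLCM texture input mentioned above is straightforward to compute; a minimal single-offset version (ignoring the pipeline's spatial indices, window statistics, and multi-resolution fusion) might look like:

```python
def glcm(image, dx, dy, levels):
    """Grey Level Co-occurrence Matrix for one (dx, dy) offset:
    m[i][j] counts how often grey level i is followed by grey
    level j at the given pixel offset. Texture measures such as
    contrast or homogeneity are then derived from m."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                m[image[y][x]][image[y2][x2]] += 1
    return m

# Tiny 3-level example image; offset (1, 0) = horizontal neighbor.
img = [[0, 0, 1],
       [0, 1, 1],
       [2, 1, 0]]
g = glcm(img, dx=1, dy=0, levels=3)
```

Urban areas tend to produce high-contrast, heterogeneous co-occurrence statistics in SAR imagery, which is what makes GLCM textures useful for urban extent extraction.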
Fuzzy model identification of dengue epidemic in Colombia based on multiresolution analysis.
Torres, Claudia; Barguil, Samier; Melgarejo, Miguel; Olarte, Andrés
2014-01-01
This article presents a model of a dengue and severe dengue epidemic in Colombia based on the cases reported between 1995 and 2011. We present a methodological approach that combines multiresolution analysis and fuzzy systems to represent cases of dengue and severe dengue in Colombia. The performance of this proposal was compared with that obtained by applying traditional fuzzy modeling techniques to the same data set. This comparison was based on two performance measures that evaluate the similarity between the original data and the approximate signal: the mean square error and the variance accounted for. Finally, the predictive ability of the proposed technique was evaluated by forecasting the number of dengue and severe dengue cases over a horizon of three years (2012-2015). These estimates were validated with a data set that was not included in the training stage of the model. The proposed technique allowed the creation of a model that adequately represented the dynamics of the dengue and severe dengue epidemic in Colombia. This technique achieves significantly superior performance to that obtained with traditional fuzzy modeling techniques: the similarity between the original data and the approximate signal increases from 21.13% to 90.06% and from 18.90% to 76.83% in the case of dengue and severe dengue, respectively. Finally, the developed models generate plausible predictions that resemble validation data. The difference between the cumulative cases reported from January 2012 until July 2013 and those predicted by the model for the same period was 24.99% for dengue and only 4.22% for severe dengue. The fuzzy model identification technique based on multiresolution analysis produced a proper representation of dengue and severe dengue cases for Colombia despite the complexity and uncertainty that characterize this biological system. Additionally, the obtained models generate plausible predictions that can be used by surveillance authorities to support decision
Accessing the Global Multi-Resolution Topography (GMRT) Synthesis through the GMRT MapTool
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Morton, J. J.; Barg, B.; Carbotte, S. M.
2014-12-01
The Global Multi-Resolution Topography (GMRT) Synthesis (http://gmrt.marine-geo.org) is a dynamically maintained global multi-resolution synthesis of terrestrial and seafloor elevation data maintained as both images and gridded data values as part of the IEDA Marine Geoscience Data System. GMRT seamlessly brings together a variety of elevation sources, and includes ship-based multibeam sonar collected throughout the global oceans that is processed by the GMRT Team and is gridded to 100-m resolution. New versions of GMRT are released twice each year, typically adding processed multibeam data from ~80 cruises per year. GMRT grids and images can be accessed through a variety of tools and interfaces including GeoMapApp (http://www.geomapapp.org) and the GMRT MapTool (http://www.marine-geo.org/tools/maps_grids.php), and images can also be accessed through a Web Map Service. We have recently launched a new version of our web-based GMRT MapTool interface, which provides custom access to the gridded data values in standard formats including GeoTIFF, ArcASCII and GMT NetCDF. Several resolution options are provided for these gridded data, and corresponding images can also be generated. Coupled with this new interface is an XML metadata service that provides attribution information and detailed metadata about source data components (cruise metadata, sensor metadata, and a full list of source data files) for any region of interest. Metadata from the attribution service is returned to the user along with the requested data, and is also combined with the data itself in new Bathymetry Attributed Grid (BAG) formatted files.
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While much effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model characterizes significantly more energy associated with the small-scale ionospheric electric field variability than Gaussian models do. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling
A high-fidelity multiresolution digital elevation model for Earth systems
NASA Astrophysics Data System (ADS)
Duan, Xinqiao; Li, Lin; Zhu, Haihong; Ying, Shen
2017-01-01
The impact of topography on Earth systems variability is well recognised. As numerical simulations evolved to incorporate broader scales and finer processes, accurately assimilating or transforming the topography to produce more exact land-atmosphere-ocean interactions has proven quite challenging. Numerical schemes of Earth systems often use empirical parameterisation at sub-grid scale with downscaling to express topographic endogenous processes, or rely on insecure point interpolation to induce topographic forcing, which creates bias and input uncertainties. Digital elevation model (DEM) generalisation provides more sophisticated systematic topographic transformation, but existing methods are often difficult to incorporate because their grid quality is not guaranteed. Meanwhile, approaches over discrete sets often employ heuristic approximations, which are generally suboptimal. Based on DEM generalisation, this article proposes a high-fidelity multiresolution DEM with guaranteed grid quality for Earth systems. The generalised DEM surface is initially approximated as a triangulated irregular network (TIN) via selected feature points and possible input features. The TIN surface is then optimised through an energy-minimised centroidal Voronoi tessellation (CVT). By devising a robust discrete curvature as density function and exact geometry clipping as energy reference, the developed curvature CVT (cCVT) converges, the generalised surface evolves to a further approximation to the original DEM surface, and the points with the dual triangles become spatially equalised with the curvature distribution, exhibiting quasi-uniform high quality and adaptive, variable resolution. The cCVT model was then evaluated on real lidar-derived DEM datasets and compared to the classical heuristic model. The experimental results show that the cCVT multiresolution model outperforms classical heuristic DEM generalisations in terms of both surface approximation precision and
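The energy-minimising CVT step above can be illustrated in one dimension. Below is a hedged numpy sketch of Lloyd's iteration toward a centroidal Voronoi tessellation, with an invented density function standing in for the discrete curvature; the actual cCVT operates on a triangulated surface with exact geometry clipping, which is not reproduced here.

```python
import numpy as np

def lloyd_1d(points, density, domain=(0.0, 1.0), iters=200):
    # 1-D Lloyd iteration toward a centroidal Voronoi tessellation (CVT):
    # each generator moves to the density-weighted centroid of its Voronoi cell.
    x = np.sort(points.astype(float))
    grid = np.linspace(*domain, 4001)          # quadrature grid for centroids
    rho = density(grid)
    for _ in range(iters):
        # Voronoi cell boundaries are midpoints between adjacent generators.
        edges = np.concatenate([[domain[0]], (x[:-1] + x[1:]) / 2, [domain[1]]])
        idx = np.clip(np.searchsorted(edges, grid) - 1, 0, len(x) - 1)
        num = np.bincount(idx, weights=rho * grid, minlength=len(x))
        den = np.bincount(idx, weights=rho, minlength=len(x))
        # Move each generator to its weighted centroid (skip empty cells).
        x = np.where(den > 0, num / np.maximum(den, 1e-300), x)
    return x

# Density standing in for terrain curvature: high near 0.5 (a ridge line).
density = lambda g: 0.05 + np.exp(-((g - 0.5) / 0.05) ** 2)
rng = np.random.default_rng(2)
x = lloyd_1d(rng.uniform(0, 1, 30), density)
# Generators cluster where "curvature" is high: spacing near 0.5 is finer.
near = np.diff(x)[np.abs(x[:-1] - 0.5) < 0.1].mean()
far = np.diff(x)[np.abs(x[:-1] - 0.5) > 0.3].mean()
print(near < far)  # -> True
```

The adaptive, variable resolution described in the abstract corresponds to this density-driven clustering: generators (and hence mesh vertices) concentrate where the density function is large.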
NASA Astrophysics Data System (ADS)
Orr, Shlomo; Meystel, Alexander M.
2005-03-01
Despite remarkable new developments in stochastic hydrology and adaptations of advanced methods from operations research, stochastic control, and artificial intelligence, solutions to complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, distributed-parameter partial differential equations (PDE) on a given scale. While the addition of uncertainty, and hence, stochasticity or randomness has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that demands prohibitive computing power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when including all uncertain parameters. A paradigm shift is introduced: an adaptation of new methods of intelligent control that will relax the dependency on rigid, computer-intensive, stochastic PDE, and will shift the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions to real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems.
Welsher, Kevin; Yang, Haw
2015-01-01
The overwhelming effort in the development of new microscopy methods has been focused on increasing the spatial and temporal resolution in all three dimensions to enable the measurement of the molecular scale phenomena at the heart of biological processes. However, there exists a significant speed barrier to existing 3D imaging methods, which is associated with the overhead required to image large volumes. This overhead can be overcome to provide nearly unlimited temporal precision by simply focusing on a single molecule or particle via real-time 3D single-particle tracking and the newly developed 3D Multi-resolution Microscopy (3D-MM). Here, we investigate the optical and mechanical limits of real-time 3D single-particle tracking in the context of other methods. In particular, we investigate the use of an optical cantilever for position-sensitive detection, finding that this method yields system magnifications of over 3000×. We also investigate the ideal PID control parameters and their effect on the power spectrum of simulated trajectories. Taken together, these data suggest that the speed limit in real-time 3D single-particle tracking is a result of slow piezoelectric stage response as opposed to optical sensitivity or PID control.
ERIC Educational Resources Information Center
JONES, HAROLD E.
The job analyses were composed from activity records kept by each professional extension worker in Kansas. Job analyses are given for the administration (director, associate director, administrative assistant, assistant director, state leaders, and department heads), extension specialists, district agents, and county extension agents. Discussion of…
NASA Astrophysics Data System (ADS)
Zou, Yibo; Kaestner, Markus; Reithmeier, Eduard
2015-11-01
In this paper, a new method for multi-resolution characterization is introduced to analyze porous surfaces on cylinder liners. The main purpose of this new approach is to investigate the influence of resolution and magnification of different optical lenses on measuring the 3D geometry of pores based on 3D microscopy topographical surface metrology. Two optical sensors (20× lens and 50× lens) have been applied to acquire the porous surface data for the initial investigation. A feature-based image matching algorithm is introduced for the purpose of registering identical microstructures in different datasets with different pixel resolutions. The correlation between the sensor's resolution and the numerical parameter values describing the pore geometry is studied statistically. Finally, the preliminary results of multi-resolution characterization are presented and the impact of using a sensor with higher resolution on measuring the same object is discussed.
Xu, Wei; Cao, Maosen; Ding, Keqin; Radzieński, Maciej; Ostachowicz, Wiesław
2017-01-01
Carbon fiber reinforced polymer laminates are increasingly used in the aerospace and civil engineering fields. Identifying cracks in carbon fiber reinforced polymer laminated beam components is of considerable significance for ensuring the integrity and safety of the whole structures. With the development of high-resolution measurement technologies, mode-shape-based crack identification in such laminated beam components has become an active research focus. Despite its sensitivity to cracks, however, this method is susceptible to noise. To address this deficiency, this study proposes a new concept of multi-resolution modal Teager–Kaiser energy, which is the Teager–Kaiser energy of a mode shape represented in multi-resolution, for identifying cracks in carbon fiber reinforced polymer laminated beams. The efficacy of this concept is analytically demonstrated by identifying cracks in Timoshenko beams with general boundary conditions; and its applicability is validated by diagnosing cracks in a carbon fiber reinforced polymer laminated beam, whose mode shapes are precisely acquired via non-contact measurement using a scanning laser vibrometer. The analytical and experimental results show that multi-resolution modal Teager–Kaiser energy is capable of designating the presence and location of cracks in these beams under noisy environments. This proposed method holds promise for developing crack identification systems for carbon fiber reinforced polymer laminates. PMID:28773016
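The Teager-Kaiser energy operator at the core of the method above admits a compact sketch. The numpy example below shows only the plain operator, not the multi-resolution representation; the beam mode shape, anomaly size, and sample count are invented for illustration.

```python
import numpy as np

def teager_kaiser_energy(x):
    # Discrete Teager-Kaiser energy operator:
    #   Psi[x](n) = x(n)^2 - x(n-1) * x(n+1)
    # Endpoints are dropped, so the output is 2 samples shorter than the input.
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure sinusoidal mode shape the TKE is exactly constant, so a tiny
# local perturbation (standing in for a crack-induced anomaly) produces an
# isolated spike at the perturbed sample.
n = np.arange(200)
mode = np.sin(np.pi * n / 199)   # fundamental mode shape of a pinned beam
mode[100] += 1e-3                # small local anomaly, invisible to the eye
tke = teager_kaiser_energy(mode)
peak = int(np.argmax(np.abs(tke - tke[0]))) + 1  # +1 restores original indexing
print(peak)  # -> 100
```

This locality is what makes the operator attractive for crack localization; the multi-resolution representation in the paper additionally suppresses measurement noise, which the plain operator amplifies.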
NASA Astrophysics Data System (ADS)
Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Witthuhn, Jonas; Macke, Andreas
2017-03-01
The time series of global radiation observed by a dense network of 99 autonomous pyranometers during the HOPE campaign around Jülich, Germany, are investigated with a multiresolution analysis based on the maximum overlap discrete wavelet transform and the Haar wavelet. For different sky conditions, typical wavelet power spectra are calculated to quantify the timescale dependence of variability in global transmittance. Distinctly higher variability is observed at all frequencies in the power spectra of global transmittance under broken-cloud conditions compared to clear, cirrus, or overcast skies. The spatial autocorrelation function including its frequency dependence is determined to quantify the degree of similarity of two time series measurements as a function of their spatial separation. Distances ranging from 100 m to 10 km are considered, and a rapid decrease of the autocorrelation function is found with increasing frequency and distance. For frequencies above 1/3 min⁻¹ and points separated by more than 1 km, variations in transmittance become completely uncorrelated. A method is introduced to estimate the deviation between a point measurement and a spatially averaged value for a surrounding domain, which takes into account domain size and averaging period, and is used to explore the representativeness of a single pyranometer observation for its surrounding region. Two distinct mechanisms are identified which limit the representativeness: on the one hand, spatial averaging reduces variability and thus modifies the shape of the power spectrum. On the other hand, the correlation of variations of the spatially averaged field and a point measurement decreases rapidly with increasing temporal frequency. For a grid box of 10 km × 10 km and averaging periods of 1.5-3 h, the deviation of global transmittance between a point measurement and an area-averaged value depends on the prevailing sky conditions: 2.8% (clear), 1.8% (cirrus), 1.5% (overcast), and 4.2% (broken
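The Haar-wavelet power spectra described above can be sketched with a minimal circular Haar MODWT in numpy. The transmittance series below are synthetic stand-ins for pyranometer data, and the boundary handling is circular rather than the reflection typically used on real records.

```python
import numpy as np

def haar_modwt_power(x, levels):
    # Wavelet power per timescale via the Haar maximum overlap DWT (MODWT),
    # with circular boundary handling.  At level j the filters are the
    # level-1 Haar pair applied with a lag of 2**(j-1) samples.
    a = np.asarray(x, dtype=float)
    power = []
    for j in range(levels):
        a_lag = np.roll(a, 2 ** j)
        d = 0.5 * (a - a_lag)        # detail (wavelet) coefficients
        a = 0.5 * (a + a_lag)        # smooth (scaling) coefficients
        power.append(np.mean(d ** 2))
    return np.array(power), a

# Broken-cloud transmittance fluctuates fast; overcast is smooth.  The
# wavelet power spectrum separates the two regimes at every timescale.
rng = np.random.default_rng(0)
t = np.arange(1024)
smooth = 0.4 + 0.05 * np.sin(2 * np.pi * t / 512)       # overcast-like
broken = smooth + 0.3 * rng.standard_normal(1024)       # broken-cloud-like
p_smooth, _ = haar_modwt_power(smooth, 5)
p_broken, _ = haar_modwt_power(broken, 5)
print(np.all(p_broken > p_smooth))  # -> True
```

With these normalized filters the transform conserves energy across levels, which is what allows the per-level mean squared detail to be read as a power spectrum.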
Lu, Weiguo; Olivera, Gustavo H; Chen, Ming-Li; Reckwerdt, Paul J; Mackie, Thomas R
2005-02-21
Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S could result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed cone C/S: one is how to utilize the tabulated kernels instead of analytical parametrizations and the other is how to deal with voxel size effects. Three methods that utilize the tabulated kernels are presented in this paper. These methods differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transportation only. Simulations with voxel size up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold standard dose. Real dose calculations use a heterogeneous slab phantom with both the 'broad' (5 × 5 cm²) and the 'narrow' (1.2 × 1.2 cm²) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for dose calculations. The results show that all three algorithms have negligible difference (0.1%) for the dose calculation in the fine resolution (0.5 mm voxels), but differences become significant when the voxel size increases. As for the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and the voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose. As for the broad (narrow) beam dose calculation using the CCK algorithm, the dose differences between the 0.5 mm voxels and the voxels up to 8 mm (4 mm) are around 1% of the maximum dose. Among all three methods, the CCK algorithm is
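The cumulative-kernel idea above can be sketched in one dimension. The example below uses an invented exponential kernel DK(r) = μ·exp(−μr) in place of a tabulated Monte Carlo kernel, and shows only the DK-versus-CK half of the comparison: the CCK applies a second cumulation so that averaging over the source voxel's extent also becomes exact, which is not reproduced here.

```python
import numpy as np

# Hypothetical tabulated kernel: energy per unit depth, sampled on a fine
# grid (standing in for a Monte Carlo kernel table).
mu = 0.3                      # assumed attenuation-like constant (1/cm)
dr = 0.01                     # fine table spacing (cm)
r = np.arange(0, 20, dr)
dk = mu * np.exp(-mu * r)

# Cumulative kernel CK(r) = integral of DK from 0 to r (trapezoidal rule).
ck = np.concatenate([[0.0], np.cumsum(0.5 * (dk[1:] + dk[:-1]) * dr)])

def deposit_dk(a, b):
    # Midpoint DK sampling: DK(center) * voxel width; degrades as voxels grow.
    return mu * np.exp(-mu * 0.5 * (a + b)) * (b - a)

def deposit_ck(a, b):
    # CK difference: the interval integral itself, nearly voxel-size independent.
    return ck[int(round(b / dr))] - ck[int(round(a / dr))]

exact = lambda a, b: np.exp(-mu * a) - np.exp(-mu * b)
for w in (0.05, 0.2, 0.8):    # receiving-voxel widths (cm)
    a, b = 1.0, 1.0 + w
    print(f"w={w:.2f} cm  DK err={abs(deposit_dk(a, b) - exact(a, b)):.2e}"
          f"  CK err={abs(deposit_ck(a, b) - exact(a, b)):.2e}")
```

The midpoint-sampling error grows with the cube of the voxel width, while the CK difference inherits only the fine table's quadrature error, mirroring the voxel-size robustness the abstract reports for the cumulative kernels.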
Guo, Chenlei; Zhang, Liming
2010-01-01
Salient areas in natural scenes are generally regarded as areas which the human eye will typically focus on, and finding these areas is the key step in object detection. In computer vision, many models have been proposed to simulate the behavior of the eyes, such as SaliencyToolBox (STB), Neuromorphic Vision Toolkit (NVT), and others, but they incur high computational cost and their results depend heavily on parameter choices. Although some region-based approaches were proposed to reduce the computational complexity of feature maps, these approaches still could not run in real time. Recently, a simple and fast approach called spectral residual (SR) was proposed, which uses the SR of the amplitude spectrum to calculate the image's saliency map. However, in our previous work, we pointed out that it is the phase spectrum, not the amplitude spectrum, of an image's Fourier transform that is key to calculating the location of salient areas, and proposed the phase spectrum of Fourier transform (PFT) model. In this paper, we present a quaternion representation of an image which is composed of intensity, color, and motion features. Based on the principle of PFT, a novel multiresolution spatiotemporal saliency detection model called phase spectrum of quaternion Fourier transform (PQFT) is proposed to calculate the spatiotemporal saliency map of an image by its quaternion representation. Distinct from other models, the added motion dimension allows the phase spectrum to represent spatiotemporal saliency in order to perform attention selection not only for images but also for videos. In addition, the PQFT model can compute the saliency map of an image under various resolutions from coarse to fine. Therefore, the hierarchical selectivity (HS) framework based on the PQFT model is introduced here to construct the tree structure representation of an image. With the help of HS, a model called multiresolution wavelet domain foveation (MWDF) is
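The PFT principle underlying the model above — keep only the phase of the Fourier transform, invert, square, and smooth — can be sketched in a few lines of numpy. The image, patch location, and box-blur smoothing below are invented for illustration; the full PQFT model additionally uses a quaternion representation with color and motion channels.

```python
import numpy as np

def pft_saliency(img):
    # Phase-spectrum-of-Fourier-transform (PFT) saliency sketch:
    # discard the amplitude spectrum, keep the phase, invert, square, smooth.
    f = np.fft.fft2(img)
    phase_only = np.exp(1j * np.angle(f))
    sal = np.abs(np.fft.ifft2(phase_only)) ** 2
    # Crude smoothing via a repeated 5-point box blur (stdlib-only stand-in
    # for the Gaussian filter used in the published model).
    for _ in range(3):
        sal = (np.roll(sal, 1, 0) + np.roll(sal, -1, 0) +
               np.roll(sal, 1, 1) + np.roll(sal, -1, 1) + sal) / 5.0
    return sal / sal.max()

# A small bright patch on a flat background pops out in the saliency map.
img = np.zeros((64, 64))
img[30:34, 40:44] = 1.0            # salient patch
sal = pft_saliency(img)
iy, ix = np.unravel_index(np.argmax(sal), sal.shape)
print(28 <= iy <= 36, 38 <= ix <= 46)  # peak lands at/near the patch
```

Because the amplitude spectrum is discarded, smooth large-scale structure is suppressed and localized novelty dominates the reconstruction, which is the behavior the abstract attributes to the phase spectrum.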
Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability
NASA Astrophysics Data System (ADS)
Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.
2016-06-01
This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in context to variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse ~1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to ~0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model ('hiatus' analogs) are mainly associated with ENSO-related variability and, to a lesser degree, with PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistently with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent
Multiresolution iterative reconstruction in high-resolution extremity cone-beam CT
NASA Astrophysics Data System (ADS)
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2016-10-01
Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution penalized-weighted least squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of
Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu
2015-01-01
It is necessary to detect the target reflections in ground penetrating radar (GPR) images, so that surface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called the Multiresolution Monogenic Signal Analysis (MMSA) system is applied to GPR images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflective wave to a large extent. Then we use the region of interest extraction method to separate genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146
Chen, Jerry; Yoon, Ilmi; Bethel, E. Wes
2005-04-20
We present a novel approach for highly interactive remote delivery of visualization results. Instead of real-time rendering across the internet, our approach, inspired by QuickTime VR's Object Movie concept, delivers pre-rendered images corresponding to different viewpoints and different time steps to provide the experience of 3D and temporal navigation. We use tiled, multiresolution image streaming to consume minimum bandwidth while providing the maximum resolution that a user can perceive from a given viewpoint. Since image data, a viewpoint and time stamps are the only required inputs, our approach is generally applicable to all visualization and graphics rendering applications capable of generating image files in an ordered fashion. Our design is a form of latency-tolerant remote visualization, where visualization and rendering time are effectively decoupled from interactive exploration. Our approach gains increased interactivity, flexible resolution (for individual clients), reduced load, and effective reuse of coherent frames across multiple users (from the server's perspective) at the expense of unconstrained exploration. A normal web server is the vehicle for providing on-demand images to the remote client application, which uses client-pull to obtain and cache only those images required to fulfill the interaction needs. This paper presents an architectural description of the system along with a performance characterization of the production, delivery, and viewing pipeline.
NASA Astrophysics Data System (ADS)
Campo, D.; Quintero, O. L.; Bastidas, M.
2016-04-01
We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1/Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients on which a set of features was extracted and analyzed statistically in order to differentiate emotional states. Artificial neural networks (ANNs) proved to allow an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract emotional content in audio signals, achieving a high accuracy rate in classifying emotional states without the need for other classical time-frequency features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, with neutrality also included, for a total of seven states to identify.
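A minimal sketch of the wavelet feature extraction step described above, restricted to the Haar (Db1) member of the Daubechies family; the per-level statistics and the toy "utterances" below are invented for illustration, and no ANN classifier is included.

```python
import numpy as np

def haar_dwt_level(x):
    # One level of the orthonormal Haar (Db1) DWT: pairwise sums and
    # differences, each scaled by 1/sqrt(2).
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_features(signal, levels=4):
    # Per-level statistics of the detail coefficients: the kind of feature
    # vector one might feed to an ANN emotion classifier.
    feats, a = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt_level(a)
        feats += [np.mean(np.abs(d)), np.std(d), np.sum(d ** 2)]
    return np.array(feats)

# Two toy "utterances": same pitch, different high-frequency roughness.
t = np.linspace(0, 1, 8192)                     # ~8 kHz sampling, 1 s
calm = np.sin(2 * np.pi * 220 * t)
agitated = calm + 0.4 * np.sin(2 * np.pi * 3000 * t)
f_calm, f_agit = wavelet_features(calm), wavelet_features(agitated)
print(f_agit[2] > f_calm[2])  # more level-1 detail energy when "agitated"
```

The level-1 detail band covers the upper quarter of the spectrum, so added high-frequency content separates the two feature vectors even though the overall signal energies are comparable.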
Multiscale and multiresolution modeling of shales and their flow and morphological properties
Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad
2015-01-01
The need for more accessible energy resources makes shale formations increasingly important. Characterization of such low-permeability formations is complicated, due to the presence of multiscale features, and defies conventional methods. High-quality 3D imaging may be an ultimate solution for revealing the complexities of such porous media, but acquiring such images is costly and time consuming. High-quality 2D images, on the other hand, are widely available. A novel three-step, multiscale, multiresolution reconstruction method is presented that directly uses 2D images in order to develop 3D models of shales. It uses a high-resolution 2D image representing the small-scale features to reproduce the nanopores and their network, uses a large-scale, low-resolution 2D image to create the larger-scale characteristics, and generates stochastic realizations of the porous formation. The method is used to develop a model for a shale system for which the full 3D image is available and its properties can be computed. The predictions of the reconstructed models are in excellent agreement with the data. The method is, however, quite general and can be used for reconstructing models of other important heterogeneous materials and media. Two further examples, one biological and one from materials science, are also reconstructed to demonstrate the generality of the method. PMID:26560178
Hernández, Alfredo I.; Le Rolle, Virginie; Defontaine, Antoine; Carrault, Guy
2009-01-01
The role of modelling and simulation on the systemic analysis of living systems is now clearly established. Emerging disciplines, such as Systems Biology, and world-wide research actions, such as the Physiome project or the Virtual Physiological Human, are based on an intensive use of modelling and simulation methodologies and tools. One of the key aspects in this context is to perform an efficient integration of various models representing different biological or physiological functions, at different resolutions, spanning through different scales. This paper presents a multi-formalism modelling and simulation environment (M2SL) that has been conceived to ease model integration. A given model is represented as a set of coupled and atomic model components that may be based on different mathematical formalisms with heterogeneous structural and dynamical properties. A co-simulation approach is used to solve these hybrid systems. The pioneering model of the overall regulation of the cardiovascular system, proposed by Guyton, Coleman & Granger in 1972, has been implemented under M2SL, and a pulsatile ventricular model based on a time-varying elastance has been integrated in a multi-resolution approach. Simulations reproducing physiological conditions and using different coupling methods show the benefits of the proposed environment. PMID:19884187
Scalable and memory efficient implementations of the multi-resolution approximation for spatial data
NASA Astrophysics Data System (ADS)
Ramakrishnaiah, V. B.; Hammerling, D.; Kumar, R. R. P.; Katzfuss, M.
2016-12-01
High-resolution observations of spatial fields over large geographic regions from satellites are available at an increasing rate, and their analysis can lead to new insights. Traditional spatial statistical techniques are computationally infeasible for large data sets due to the need to invert a covariance matrix of the size of the data. The recently developed multi-resolution approximation (MRA) algorithm, which expresses a spatial process as a linear combination of basis functions at multiple spatial resolutions within a hierarchical framework, addresses this issue. However, the number of observations determines the overall memory footprint during execution and even if the matrix computations are feasible, a large amount of system memory is needed when the number of observations is in the millions to hundreds of millions. A natural solution is to utilize the implicit parallelization provided by MRA at each spatial resolution and across resolutions within the hierarchical framework. By having a distributed memory parallel execution, the memory needed per computing node can be reduced. We propose and analyze two implementations of distributed memory parallel algorithms. The first approach is to divide a given resolution layer into smaller parts and assign those parts to individual computing nodes. The second approach creates sub-trees of the multiple spatial resolution hierarchy, and assigns each sub-tree to a computing node. These alternative approaches are found to reduce the memory footprint of the current implementation, show good scalability, and enable statistical analysis of spatial data with over a hundred million observations.
Towards autonomous on-road driving via multiresolutional and hierarchical moving-object prediction
NASA Astrophysics Data System (ADS)
Ajot, Jerome; Schlenoff, Craig I.; Madhavan, Raj
2004-12-01
In this paper, we present the PRIDE framework (Prediction In Dynamic Environments), which is a hierarchical multi-resolutional approach for moving object prediction that incorporates multiple prediction algorithms into a single, unifying framework. PRIDE is based upon the 4D/RCS (Real-time Control System) and provides information to planners at the level of granularity that is appropriate for their planning horizon. The lower levels of the framework utilize estimation-theoretic short-term predictions based upon an extended Kalman filter that provides predictions and associated uncertainty measures. The upper levels utilize a probabilistic prediction approach based upon situation recognition with an underlying cost model that provides predictions incorporating environmental information and constraints. These predictions are made at lower frequencies and at a level of resolution more in line with the needs of higher-level planners. PRIDE runs in the system's world model, independently of the planner and the control system. The results of the prediction are made available to a planner to allow it to make accurate plans in dynamic environments. We have applied this approach to an on-road driving control hierarchy being developed as part of the DARPA Mobile Autonomous Robotic Systems (MARS) effort.
Multiresolution analysis over graphs for a motor imagery based online BCI game.
Asensio-Cubero, Javier; Gan, John Q; Palaniappan, Ramaswamy
2016-01-01
Multiresolution analysis (MRA) over graph representation of EEG data has proved to be a promising method for offline brain-computer interfacing (BCI) data analysis. For the first time we aim to prove the feasibility of the graph lifting transform in an online BCI system. Instead of developing a pointer device or a wheel-chair controller as a test bed for human-machine interaction, we have designed and developed an engaging game which can be controlled by means of imaginary limb movements. Some modifications to the existing MRA over graphs for BCI have also been proposed, such as the use of common spatial patterns for feature extraction at the different levels of decomposition, and sequential floating forward search as a best basis selection technique. In the online game experiment we obtained an average three-class classification rate of 63.0% across fourteen naive subjects. The application of a best basis selection method helps significantly decrease the computing resources needed. The present study allows us to further understand and assess the benefits of the use of tailored wavelet analysis for processing motor imagery data and contributes to the further development of BCI for gaming purposes. Copyright © 2015 Elsevier Ltd. All rights reserved.
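The common-spatial-patterns step mentioned above can be sketched on its own, without the graph lifting transform or the per-level decomposition. The example below is a plain CSP implementation on synthetic two-channel "trials"; channel counts, trial shapes, and class structure are all invented for illustration.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    # Common spatial patterns: spatial filters maximizing the variance ratio
    # between two motor-imagery classes.
    # trials_*: arrays of shape (n_trials, n_channels, n_samples).
    def mean_cov(trials):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space.
    evals, evecs = np.linalg.eigh(ca + cb)
    w = evecs @ np.diag(evals ** -0.5)          # whitening matrix
    d, v = np.linalg.eigh(w.T @ ca @ w)         # eigenvalues ascending
    filters = (w @ v).T                         # rows are spatial filters
    # Keep the extreme filters (largest/smallest class-A variance share).
    return np.vstack([filters[-n_pairs:], filters[:n_pairs]])

# Synthetic 2-channel data: class A is strong on channel 0, class B on channel 1.
rng = np.random.default_rng(1)
a = rng.standard_normal((20, 2, 256)) * np.array([3.0, 1.0])[None, :, None]
b = rng.standard_normal((20, 2, 256)) * np.array([1.0, 3.0])[None, :, None]
f = csp_filters(a, b)
va, vb = np.var(f[0] @ a[0]), np.var(f[0] @ b[0])
print(va > vb)  # the first CSP filter favors class-A variance
```

In the paper's pipeline, filters of this kind are learned separately at each level of the graph decomposition, and the log-variances of the filtered signals form the classifier features.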
Multiresolution adaptive and progressive gradient-based color-image segmentation
NASA Astrophysics Data System (ADS)
Vantaram, Sreenath Rao; Saber, Eli; Dianat, Sohail A.; Shaw, Mark; Bhaskar, Ranjit
2010-01-01
We propose a novel unsupervised multiresolution adaptive and progressive gradient-based color-image segmentation algorithm (MAPGSEG) that takes advantage of gradient information in an adaptive and progressive framework. The proposed methodology is initiated with a dyadic wavelet decomposition scheme of an arbitrary input image accompanied by a vector gradient calculation of its color-converted counterpart in the 1976 Commission Internationale de l'Eclairage (CIE) L*a*b* color space. The resultant gradient map is used to automatically and adaptively generate thresholds to segregate regions of varying gradient densities at different resolution levels of the input image pyramid. At each level, the classification obtained by a progressively thresholded growth procedure is integrated with an entropy-based texture model by using a unique region-merging procedure to obtain an interim segmentation. A confidence map and nonlinear spatial filtering techniques are combined, and regions of high confidence are passed from one resolution level to another until the final segmentation at the highest (original) resolution is achieved. A performance evaluation of our results on several hundred images with a recently proposed metric called the normalized probabilistic Rand index demonstrates that the proposed method outperforms published segmentation techniques in computational cost while delivering superior segmentation quality.
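The pyramid step at the heart of such a scheme can be sketched briefly. The following is a minimal numpy illustration of one dyadic (Haar) decomposition level plus a gradient-magnitude map with a simple data-driven threshold; the helper names and the mean-plus-k-sigma threshold rule are our own illustrative choices, not the MAPGSEG implementation.

```python
import numpy as np

def haar_level(img):
    # One dyadic decomposition step: 2x2 block averages (approximation)
    # plus horizontal, vertical and diagonal detail subbands.
    tl, tr = img[0::2, 0::2], img[0::2, 1::2]
    bl, br = img[1::2, 0::2], img[1::2, 1::2]
    approx = (tl + tr + bl + br) / 4.0
    horiz  = (tl + tr - bl - br) / 4.0
    vert   = (tl - tr + bl - br) / 4.0
    diag   = (tl - tr - bl + br) / 4.0
    return approx, (horiz, vert, diag)

def gradient_map(img):
    # Finite-difference vector gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def adaptive_threshold(grad, k=0.5):
    # Data-driven threshold from the gradient statistics (illustrative
    # stand-in for the automatically generated thresholds).
    return grad.mean() + k * grad.std()

rng = np.random.default_rng(0)
img = rng.random((8, 8))
approx, (horiz, vert, diag) = haar_level(img)
thresh = adaptive_threshold(gradient_map(img))
```

The approximation subband is what gets re-analyzed at the next coarser level of the pyramid.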
NASA Astrophysics Data System (ADS)
Tang, Shan; Kopacz, Adrian M.; Chan O'Keeffe, Stephanie; Olson, Gregory B.; Liu, Wing Kam
2013-11-01
A modified-JIC test on CT (compact tension) specimens of an alloy (Ti-Modified 4330 steel) was carried out. The microstructure (primary and secondary inclusions) in the fracture process zone and the fracture surface are reconstructed with a microtomography technique. The zig-zag fracture profile resulting from nucleation of microvoid sheets at the secondary population of inclusions is observed. Embedding the experimentally reconstructed microstructure into the fracture process zone, the ductile fracture process occurring at different length scales within the microstructure is modeled by a hybrid multiresolution approach. In combination with the large-scale simulation, detailed studies and statistical analysis show that shearing of microvoids (the secondary population of voids) determines the mixed-mode zig-zag fracture profile. The deformation in the macro and micro zones, along with the interaction between them, affects the fracture process. The zig-zag fracture profile observed in the experiment is also reasonably captured. Simulations can provide a more detailed understanding of the mechanics of the fracture process than experiments, which is beneficial in microstructure design for improving the performance of alloys.
NASA Astrophysics Data System (ADS)
Bone, Donald J.; Popescu, Dan C.
2000-05-01
In spite of the prodigious growth in the market for digital cameras, they have yet to displace film-based cameras in the consumer market. This is largely due to the high cost of photographic-resolution sensors. One possible approach to producing a low-cost, high-resolution sensor is to linearly scan a masked low-resolution sensor. Masking of the sensor elements allows transform-domain imaging. Multiple displaced exposures of such a masked sensor permit the device to acquire a linear transform of a higher-resolution representation of the image than that defined by the sensor element dimensions. Various approaches have been developed in the past along these lines, but they often suffer from poor sensitivity, difficulty in being adapted to a 2D sensor, or a spatially variable noise response. This paper presents an approach based on a new class of Hadamard masks--Uniform Noise Hadamard Masks--which has superior sensitivity to simple sampling approaches and retains the multiresolution capabilities of certain Hadamard matrices, while overcoming the non-uniform noise response problems of some simple Hadamard-based masks.
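The transform-domain acquisition idea can be illustrated with a Sylvester-type Hadamard matrix: each masked exposure measures one Hadamard projection of the scene, and the scene is recovered by the scaled transpose. This sketch uses the plain Sylvester construction, not the paper's Uniform Noise Hadamard Masks.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 8
H = hadamard(n)
scene = np.arange(n, dtype=float)     # hypothetical high-resolution scene
measurements = H @ scene              # one value per masked exposure
recovered = (H.T @ measurements) / n  # inverse transform, since H H^T = n I
```

Because the rows of H are mutually orthogonal, the inverse transform is just a scaled transpose, which is what makes transform-domain imaging with such masks computationally cheap.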
NASA Astrophysics Data System (ADS)
Feenstra, Adam D.; Dueñas, Maria Emilia; Lee, Young Jin
2017-01-01
High-spatial resolution mass spectrometry imaging (MSI) is crucial for the mapping of chemical distributions at the cellular and subcellular level. In this work, we improved our previous laser optical system for matrix-assisted laser desorption ionization (MALDI)-MSI, from 9 μm practical laser spot size to a practical laser spot size of 4 μm, thereby allowing for 5 μm resolution imaging without oversampling. This is accomplished through a combination of spatial filtering, beam expansion, and reduction of the final focal length. Most importantly, the new laser optics system allows for simple modification of the spot size solely through the interchanging of the beam expander component. Using 10×, 5×, and no beam expander, we could routinely change between 4, 7, and 45 μm laser spot size, in less than 5 min. We applied this multi-resolution MALDI-MSI system to a single maize root tissue section with three different spatial resolutions of 5, 10, and 50 μm and compared the differences in imaging quality and signal sensitivity. We also demonstrated the difference in depth of focus between the optical systems with 10× and 5× beam expanders.
A method of image multi-resolution processing based on FPGA + DSP architecture
NASA Astrophysics Data System (ADS)
Peng, Xiaohan; Zhong, Sheng; Lu, Hongqiang
2015-10-01
In real-time image processing, as the resolution and frame rate of camera imaging improve, the requirements on both processing capacity and process optimization increase. With regard to the FPGA + DSP architecture image processing system, there are three common ways to meet this challenge. The first is using a higher-performance DSP, for example one with a higher core frequency or more cores. The second is optimizing the processing method, making the algorithm accomplish the same results in less time. Last but not least, pre-processing in the FPGA can make the image processing more efficient. A method of multi-resolution pre-processing by FPGA based on the FPGA + DSP architecture is proposed here. It takes advantage of built-in first-in first-out (FIFO) buffers and external synchronous dynamic random access memory (SDRAM) to buffer the images coming from the image detector, and provides down-sampled images or cut-down images to the DSP flexibly and efficiently according to the request parameters sent by the DSP. The DSP can thus process the reduced image instead of the whole image, greatly shortening both processing and transmission time. The method alleviates the image processing burden of the DSP and also overcomes the limitation that a single method of image resolution reduction cannot meet the requirements of the DSP's image processing tasks.
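The two pre-processing operations described above, down-sampling and cutting down, are easy to sketch in software. The numpy version below is purely illustrative of the data flow; the actual design operates on streaming pixels through FIFO/SDRAM buffers in hardware.

```python
import numpy as np

def downsample(frame, factor):
    # Block-average down-sampling (software stand-in for the FPGA's
    # buffered resolution reduction).
    h, w = frame.shape
    h, w = h - h % factor, w - w % factor
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def crop(frame, top, left, height, width):
    # Cut-down image for a region-of-interest request from the DSP.
    return frame[top:top + height, left:left + width]

frame = np.arange(16.0).reshape(4, 4)
small = downsample(frame, 2)      # half-resolution image
roi = crop(frame, 1, 1, 2, 2)     # 2x2 window starting at (1, 1)
```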
Multiresolution analysis of characteristic length scales with high-resolution topographic data
NASA Astrophysics Data System (ADS)
Sangireddy, Harish; Stark, Colin P.; Passalacqua, Paola
2017-07-01
Characteristic length scales (CLS) define landscape structure and delimit geomorphic processes. Here we use multiresolution analysis (MRA) to estimate such scales from high-resolution topographic data. MRA employs progressive terrain defocusing, via convolution of the terrain data with Gaussian kernels of increasing standard deviation, and calculation at each smoothing resolution of (i) the probability distributions of curvature and topographic index (defined as the ratio of slope to area in log scale) and (ii) characteristic spatial patterns of divergent and convergent topography identified by analyzing the curvature of the terrain. The MRA is first explored using synthetic 1-D and 2-D signals whose CLS are known. It is then validated against a set of MARSSIM (a landscape evolution model) steady state landscapes whose CLS were tuned by varying hillslope diffusivity and simulated noise amplitude. The known CLS match the scales at which the distributions of topographic index and curvature show scaling breaks, indicating that the MRA can identify CLS in landscapes based on the scaling behavior of topographic attributes. Finally, the MRA is deployed to measure the CLS of five natural landscapes using meter resolution digital terrain model data. CLS are inferred from the scaling breaks of the topographic index and curvature distributions and equated with (i) small-scale roughness features and (ii) the hillslope length scale.
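The defocusing step of the MRA can be sketched in 1-D: convolve the signal with Gaussian kernels of increasing standard deviation and track a curvature statistic across resolutions. The synthetic terrain and the standard-deviation statistic below are illustrative choices, not the paper's exact probability distributions.

```python
import numpy as np

def gaussian_kernel(sigma):
    # Truncated, normalised Gaussian kernel (radius = 4 sigma).
    radius = max(1, int(4 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth(signal, sigma):
    # Progressive "defocusing": convolution with Gaussians of
    # increasing standard deviation.
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

def curvature(signal):
    # Second-difference estimate of curvature.
    return np.diff(signal, 2)

# Synthetic 1-D terrain: a coarse ridge plus fine-scale roughness.
x = np.linspace(0.0, 10.0, 500)
terrain = np.sin(x) + 0.1 * np.sin(40.0 * x)

# Spread of the curvature values at three smoothing resolutions;
# breaks in how such statistics scale with sigma reveal the CLS.
spread = [curvature(smooth(terrain, s)).std() for s in (0.5, 2.0, 8.0)]
```

As smoothing removes the fine-scale component, the curvature spread drops; the sigma at which that drop changes regime marks a characteristic length scale.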
Perception-based multi-resolution auditory processing of acoustic signals
NASA Astrophysics Data System (ADS)
Ru, Po-Wen
2000-10-01
A multi-resolution auditory model is proposed to simulate the spectrotemporal processing of the primary auditory cortex. Inspired by recent physiological findings, the model produces a multi-dimensional representation of cortical activity. Though several nonlinear operations are involved, the inversion of the representation is obtained by applying a convex projection technique. A series of psychoacoustical experiments were conducted to estimate the appropriate units for the axes of this auditory model. The ``perceptual distance'' measure, which was derived from the subjective results, outperforms the independent channel model in threshold prediction tasks. Additionally, a simplified vocal tract model was employed to explore the articulatory equivalence to the cortical axes. This study suggests that both local and global changes in the geometry of the vocal tract result in meaningful changes in the cortical response. The perceptual distance measure, when applied to vowel recognition and timbre quantification, yields better performance than conventional signal processing techniques. Given enough computing power, this perception-based auditory model can be used in many applications like speech recognition, audio coding, and sound identification.

Prabusankarlal, K M; Thirumoorthy, P; Manavalan, R
2016-11-01
Early detection and diagnosis of breast cancer reduce the mortality rate of patients by increasing the treatment options. A novel method for the segmentation of breast ultrasound images is proposed in this work. The proposed method utilizes the undecimated discrete wavelet transform to perform multiresolution analysis of the input ultrasound image. As the resolution level increases, although the effect of noise reduces, the details of the image also dilute. The appropriate resolution level, which contains the essential details of the tumor, is automatically selected through mean structural similarity. The feature vector for each pixel is constructed by sampling intra-resolution and inter-resolution data of the image. The dimensionality of the feature vectors is reduced by using principal component analysis. The reduced set of feature vectors is segmented into two disjoint clusters using a spatially regularized fuzzy c-means algorithm. The proposed algorithm is evaluated by using four validation metrics on a breast ultrasound database of 150 images, including 90 benign and 60 malignant cases. The algorithm produced significantly better segmentation results (Dice coef = 0.8595, boundary displacement error = 9.796, dvi = 1.744, and global consistency error = 0.1835) than the other three state-of-the-art methods.
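The dimensionality-reduction step can be sketched with a small SVD-based PCA; the data and variable names below are illustrative, and the spatially regularized fuzzy c-means clustering is not reproduced.

```python
import numpy as np

def pca_reduce(X, k):
    # Project the centred feature vectors onto the top-k principal
    # components obtained from the SVD of the data matrix.
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))   # 200 hypothetical per-pixel feature vectors
Z = pca_reduce(X, 3)                 # reduced 3-D features for clustering
```

The singular values are returned in descending order, so the retained components are ordered by the variance they explain.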
Fully automated analysis of multi-resolution four-channel micro-array genotyping data
NASA Astrophysics Data System (ADS)
Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.
2006-03-01
We present a fully-automated and robust microarray image analysis system for handling multi-resolution images (down to 3-micron with sizes up to 80 MBs per channel). The system is developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining genotypes of multiple genetic markers in individuals. It plays an important role in the state-of-the-art trend whereby traditional medical treatments are to be replaced by personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing in predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around timeline compatible with clinical decision-making. In this paper we have developed a fully-automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.
NASA Astrophysics Data System (ADS)
Al-Fahoum, Amjed S.; Zanoun, Jalal
2003-05-01
Variations in vessel sizes, inter- and intra-observer variability, nontrivial noise distributions, and the fuzzy representation of vessel parameters are issues of concern for enhancing the precision and accuracy of the available QCA techniques. In this paper, we present a new multiresolution edge detection algorithm for determining vessel boundaries and enhancing their centerline features. A bank of Canny filters of different resolutions is created. These filters are convolved with vascular images in order to obtain an edge image. Each filter gives its maximum response to the segment of vessel having the same spatial resolution as the filter. The resulting responses across filters of different resolutions are combined to create an edge map for edge optimization. Boundaries of vessels are represented by edge-lines and are optimized on the filter outputs with dynamic programming. The determined edge-lines are used to create the vessel centerline. The centerline is then used to compute the percent-diameter stenosis of coronary lesions. The system has been validated using synthetic images, flexible tube phantoms, and real angiograms. It has also been tested on coronary lesions with independent operators for inter-operator and intra-operator variability and reproducibility. The system has been found to be especially robust in complex images involving vessel branching and incomplete contrast filling.
Combination of geodetic measurements by means of a multi-resolution representation
NASA Astrophysics Data System (ADS)
Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.
2010-12-01
Recent and in particular current satellite gravity missions provide important contributions for global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model in terms of spherical harmonics has the disadvantages that it is difficult to represent small spatial details and cannot handle data gaps appropriately. An adequate modeling using a multi-resolution representation (MRP) is necessary in order to exploit the highest degree of information out of all these mentioned measurements. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs the decomposition into frequency-dependent detail signals by applying a pyramidal algorithm, and allows for data compression and filtering, i.e. data manipulations. Since different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it seems reasonable to calculate the detail signals of the lower levels mainly from satellite data, the detail signals of medium levels mainly from airborne data, and the detail signals of the higher levels mainly from terrestrial data. A concept is presented for how these different measurement types can be combined within the MRP. In this presentation the basic principles, strategies and concepts for the generation of MRPs will be shown. Examples of regional gravity field determination are presented.
Atmospheric Effects Mitigation in GNSS Relative Positioning using Wavelet Multiresolution Analysis
NASA Astrophysics Data System (ADS)
Souza, E. M.; Monico, J. F.
2007-05-01
In GNSS relative processing involving long baselines, the atmospheric effect, mainly that due to the ionosphere, is one of the major errors. Although a linear combination (called ion-free) of observations at the two GPS frequencies or ionosphere models can be used to mitigate ionospheric effects, they do not remove all the errors. Furthermore, depending on the region and on the ionospheric behavior, these remaining errors are very significant. This happens because the ionosphere has several irregularities and variations, for instance diurnal, seasonal and solar-cycle variations, as well as traveling ionospheric disturbances (TIDs), scintillation, and other geomagnetic disturbances. The short and medium-scale ionospheric irregularities (TIDs and scintillation, for example) are difficult to take into account or model. Thus, in atmospheric applications, especially for ionospheric studies, wavelet multiresolution analysis (MRA) is a powerful method because it allows a time-frequency decomposition that analyzes the signal at several resolutions in order to detect phenomena and disturbances of different scales. In this sense, we present a wavelet method based on applying the MRA to the double-difference (DD) observables to detect and remove short and medium-scale ionospheric variations and disturbances, as well as short-term tropospheric variations. Experiments were carried out in Brazil involving data of different baseline lengths. It was shown that even using only single-frequency data, one can obtain very good results with MRA, even better than those using the ion-free linear combination.
Wavelet multiresolution analysis of the three vorticity components in a turbulent far wake.
Zhou, T; Rinoshika, A; Hao, Z; Zhou, Y; Chua, L P
2006-03-01
The main objective of the present study is to examine the characteristics of the vortical structures in a turbulent far wake using the wavelet multiresolution technique by decomposing the vorticity into a number of orthogonal wavelet components based on different central frequencies. The three vorticity components were measured simultaneously using an eight-wire probe at three Reynolds numbers, namely 2000, 4000, and 6000. It is found that the dominant contributions to the vorticity variances are from the intermediate and relatively small-scale structures. The contributions from the large and intermediate-scale structures to the vorticity variances decrease with the increase of Reynolds number. The contributions from the small-scale structures to all three vorticity variances jump significantly when Reynolds number is changed from 2000 to 4000, which is connected to previous observations in the near wake that there is a significant increase in the generation of small-scale structures once the Reynolds number reaches about 5000. This result reinforces the conception that turbulence "remembers" its origin.
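The variance decomposition underlying this analysis can be illustrated with a Haar-type MRA in 1-D: the wavelet components at different central frequencies are mutually orthogonal, so their mean-square contributions add up to the total. Below is a minimal numpy sketch, not the authors' wavelet or probe processing.

```python
import numpy as np

def block_mean(sig, width):
    # Piecewise-constant approximation: block averages repeated back
    # to full length (signal length must be divisible by width).
    m = sig.reshape(-1, width).mean(axis=1)
    return np.repeat(m, width)

def haar_components(sig, levels):
    # Orthogonal Haar-MRA components, one per central frequency,
    # plus the residual large-scale approximation.
    comps, approx = [], sig.astype(float)
    for j in range(1, levels + 1):
        coarser = block_mean(sig, 2 ** j)
        comps.append(approx - coarser)
        approx = coarser
    return comps, approx

rng = np.random.default_rng(1)
vorticity = rng.standard_normal(1024)   # stand-in for a measured vorticity signal
comps, approx = haar_components(vorticity, 4)

# Per-scale contributions to the mean square; orthogonality makes
# them sum exactly to the total, as in the variance analysis above.
contrib = [np.mean(c ** 2) for c in comps]
total = np.mean(vorticity ** 2)
```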
Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion
NASA Astrophysics Data System (ADS)
Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison
2016-11-01
Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically-derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity that efficiently approximates stochastic responses characterized by steep gradients.
A multi-resolution approach for spinal metastasis detection using deep Siamese neural networks.
Wang, Juan; Fang, Zhiyuan; Lang, Ning; Yuan, Huishu; Su, Min-Ying; Baldi, Pierre
2017-05-01
Spinal metastasis, a metastatic cancer of the spine, is the most common malignant disease in the spine. In this study, we investigate the feasibility of automated spinal metastasis detection in magnetic resonance imaging (MRI) by using deep learning methods. To accommodate the large variability in metastatic lesion sizes, we develop a Siamese deep neural network approach comprising three identical subnetworks for multi-resolution analysis and detection of spinal metastasis. At each location of interest, three image patches at three different resolutions are extracted and used as the input to the networks. To further reduce the false positives (FPs), we leverage the similarity between neighboring MRI slices, and adopt a weighted averaging strategy to aggregate the results obtained by the Siamese neural networks. The detection performance is evaluated on a set of 26 cases using a free-response receiver operating characteristic (FROC) analysis. The results show that the proposed approach correctly detects all the spinal metastatic lesions while producing only 0.40 FPs per case. At a true positive (TP) rate of 90%, the use of the aggregation reduces the FPs from 0.375 FPs per case to 0.207 FPs per case, nearly a 44.8% reduction. The results indicate that the proposed Siamese neural network method, combined with the aggregation strategy, provides a viable strategy for the automated detection of spinal metastasis in MRI images. Copyright © 2017 Elsevier Ltd. All rights reserved.
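The slice-aggregation idea can be sketched in a few lines: a weighted average over neighbouring slices suppresses isolated (likely false-positive) responses while preserving detections that persist across slices. The weights below are illustrative, not the paper's values.

```python
import numpy as np

def aggregate_scores(scores, weights=(0.25, 0.5, 0.25)):
    # Weighted average of per-slice detection scores over the two
    # neighbouring slices (edge slices reuse their own score).
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    padded = np.pad(scores, 1, mode="edge")
    return np.array([padded[i:i + 3] @ w for i in range(len(scores))])

# An isolated spike (slice 1) is damped; a detection supported by a
# neighbouring slice (slices 3-4) survives aggregation.
scores = np.array([0.1, 0.9, 0.2, 0.85, 0.8])
agg = aggregate_scores(scores)
```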
A new multi-resolution hybrid wavelet for analysis and image compression
NASA Astrophysics Data System (ADS)
Kekre, Hemant B.; Sarode, Tanuja K.; Vig, Rekha
2015-12-01
As most current image- and video-related applications require higher image resolutions and higher data rates during transmission, better compression techniques are constantly being sought. This paper proposes a new and unique hybrid wavelet technique which has been used for image analysis and compression. The proposed hybrid wavelet combines the properties of existing orthogonal transforms in the most desirable way and also provides for multi-resolution analysis. These wavelets have the unique property that they can be generated in various sizes and types by using different component transforms and varying the number of components at each level of resolution. These hybrid wavelets have been applied to various standard images such as Lena (512 × 512) and Cameraman (256 × 256), and the resulting values of peak signal-to-noise ratio (PSNR) are compared with those obtained using some standard existing compression techniques. Considerable improvement in the values of PSNR, as much as 5.95 dB higher than the standard methods, has been observed, which shows that the hybrid wavelet gives better compression. Images of various sizes such as Scenery (200 × 200), Fruit (375 × 375) and Barbara (112 × 224) have also been compressed using these wavelets to demonstrate their use for different sizes and shapes.
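PSNR, the quality metric quoted above, is defined as 10·log10(peak²/MSE). A minimal implementation:

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    diff = np.asarray(original, float) - np.asarray(compressed, float)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.full((8, 8), 100.0)
b = a + 5.0          # uniform error of 5 grey levels -> MSE = 25
val = psnr(a, b)
```

A 5.95 dB PSNR gain, as reported above, corresponds to roughly a fourfold reduction in mean squared error at the same peak value.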
Li, Guannan; Raza, Shan E Ahmed; Rajpoot, Nasir M
2017-04-01
It has recently been shown that recurrent miscarriage can be caused by an abnormally high ratio of the number of uterine natural killer (UNK) cells to the number of stromal cells in the human female uterus lining. Due to the high workload, the counting of UNK and stromal cells needs to be automated using computer algorithms. However, stromal cells are very similar in appearance to epithelial cells, which must be excluded in the counting process. To exclude the epithelial cells from the counting process it is necessary to identify epithelial regions. There are two types of epithelial layers that can be encountered in the endometrium: luminal epithelium and glandular epithelium. To the best of our knowledge, there is no existing method that addresses the segmentation of both types of epithelium simultaneously in endometrial histology images. In this paper, we propose a multi-resolution Cell Orientation Congruence (COCo) descriptor which exploits the fact that neighbouring epithelial cells exhibit similarity in terms of their orientations. Our experimental results show that the proposed descriptors yield accurate results in simultaneously segmenting both luminal and glandular epithelium.
Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu
2015-12-04
It is necessary to detect the target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called Multiresolution Monogenic Signal Analysis (MMSA) is applied to GPR images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflected wave to a large extent. Then we use a region-of-interest extraction method to separate genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results.
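The region-of-interest step, locating where the amplitude signal's local variance stands out against the background, can be sketched in 1-D; the synthetic trace and window length below are illustrative only, not the paper's parameters.

```python
import numpy as np

def normalized_variance(profile, window):
    # Sliding-window variance normalised by the global variance; high
    # values flag windows where a genuine reflection stands out.
    out = np.array([profile[i:i + window].var()
                    for i in range(len(profile) - window + 1)])
    return out / profile.var()

# Synthetic amplitude trace: flat background plus one target reflection.
trace = np.zeros(100)
trace[45:55] = np.sin(np.linspace(0.0, np.pi, 10))

nv = normalized_variance(trace, 15)
roi_start = int(np.argmax(nv))    # window position of the candidate target
```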
Dhanya, S; Kumari Roshni, V S
2016-01-01
Textures play an important role in image classification. This paper proposes a high-performance texture classification method using a combination of a multiresolution analysis tool and linear regression modelling by channel elimination. The correlation between different frequency regions has been validated as a sort of effective texture characteristic. This method is motivated by the observation that there exists a distinctive correlation between the image samples belonging to the same kind of texture, at different frequency regions obtained by a wavelet transform. Experimentally, it is observed that this correlation differs across textures. The linear regression modelling is employed to analyze this correlation and extract texture features that characterize the samples. Our method considers not only the frequency regions but also the correlation between these regions. This paper primarily focuses on applying the Dual Tree Complex Wavelet Packet Transform and the Linear Regression model for classification of the obtained texture features. Additionally, the paper also presents a comparative assessment of the classification results obtained from the above method with two more types of wavelet transform methods, namely the Discrete Wavelet Transform and the Discrete Wavelet Packet Transform.
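The core idea, regressing one frequency region against another and using the fitted coefficients as texture features, can be sketched with plain Haar subbands standing in for the dual-tree complex wavelet packet transform:

```python
import numpy as np

def subbands(signal):
    # Crude low/high frequency regions via one Haar analysis step
    # (illustrative stand-in for the DT-CWPT subbands).
    low = (signal[0::2] + signal[1::2]) / 2.0
    high = (signal[0::2] - signal[1::2]) / 2.0
    return low, high

def regression_feature(signal):
    # Fit high ~= a*low + b by least squares; the coefficients capture
    # the cross-band correlation used as a texture feature.
    low, high = subbands(np.abs(signal))
    a, b = np.polyfit(low, high, 1)
    return a, b

rng = np.random.default_rng(2)
smooth_tex = np.repeat(rng.random(128), 8)   # blocky, low-frequency texture
rough_tex = rng.standard_normal(1024)        # noisy, high-frequency texture
f_smooth = regression_feature(smooth_tex)
f_rough = regression_feature(rough_tex)
```

For the blocky texture the high band is essentially zero, so its regression coefficients are near zero, while the noisy texture yields different coefficients: the feature separates the two classes.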
Multiresolution molecular dynamics algorithm for realistic materials modeling on parallel computers
NASA Astrophysics Data System (ADS)
Nakano, Aiichiro; Kalia, Rajiv K.; Vashishta, Priya
1994-12-01
For realistic modeling of materials, a molecular-dynamics (MD) algorithm is developed based on multiresolutions in both space and time. Materials of interest are characterized by the long-range Coulomb, steric and charge-dipole interactions as well as three-body covalent potentials. The long-range Coulomb interaction is computed with the fast multipole method. For bulk systems with periodic boundary conditions, infinite summation over repeated image charges is carried out with the reduced cell multipole method. Short- and medium-range non-Coulombic interactions are computed with the multiple time-step approach. A separable tensor decomposition scheme is used to compute three-body potentials. For a 4.2 million-particle SiO2 system, one MD step takes only 4.8 seconds on the 512-node Intel Touchstone Delta machine and 10.3 seconds on 64 nodes of an IBM SP1 system. The constant-grain parallel efficiency of the program is η' = 0.92 and the communication overhead is 8% on the Delta machine. On the SP1 system, η' = 0.91 and communication overhead is 7%.
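The multiple time-step idea, evaluating expensive slowly varying forces less often than cheap fast ones, can be sketched with a two-level r-RESPA-style integrator on a toy oscillator; this is a generic illustration of the technique, not the authors' MD code.

```python
def respa_step(x, v, fast_force, slow_force, dt, n_inner):
    # Two-level multiple time-step integrator: the expensive slow force
    # is evaluated once per outer step, the cheap fast force n_inner
    # times with a smaller step (inner velocity-Verlet loop).
    v = v + 0.5 * dt * slow_force(x)
    h = dt / n_inner
    for _ in range(n_inner):
        v = v + 0.5 * h * fast_force(x)
        x = x + h * v
        v = v + 0.5 * h * fast_force(x)
    v = v + 0.5 * dt * slow_force(x)
    return x, v

# Toy oscillator with a stiff "short-range" part and a weak
# "long-range" part (total spring constant 4.25).
fast = lambda x: -4.0 * x
slow = lambda x: -0.25 * x

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, fast, slow, dt=0.05, n_inner=5)

# The splitting is symplectic, so the total energy stays near its
# initial value 0.5 * 4.25 * 1.0**2 = 2.125.
energy = 0.5 * v**2 + 0.5 * 4.25 * x**2
```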
Multi-core/GPU accelerated multi-resolution simulations of compressible flows
NASA Astrophysics Data System (ADS)
Hejazialhosseini, Babak; Rossinelli, Diego; Koumoutsakos, Petros
2010-11-01
We develop a multi-resolution solver for single and multi-phase compressible flow simulations by coupling average interpolating wavelets and local time stepping schemes with high order finite volume schemes. Wavelets allow for high compression rates and explicit control over the error in adaptive representation of the flow field, but their efficient parallel implementation is hindered by the use of traditional data parallel models. In this work we demonstrate that this methodology can be implemented so that it can benefit from the processing power of emerging hybrid multicore and multi-GPU architectures. This is achieved by exploiting a task-based parallelism paradigm and the concept of wavelet blocks combined with OpenCL and Intel Threading Building Blocks. The solver is able to handle high resolution jumps and benefits from adaptive time integration using local time stepping schemes as implemented on heterogeneous multi-core/GPU architectures. We demonstrate the accuracy of our method and the performance of our solver on different architectures for 2D simulations of shock-bubble interaction and Richtmyer-Meshkov instability.
A three-channel miniaturized optical system for multi-resolution imaging
NASA Astrophysics Data System (ADS)
Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Thienpont, Hugo
2013-09-01
Inspired by the natural compound eyes of insects, multichannel imaging systems comprise many channels that together sample the entire Field-Of-View (FOV). Our aim in this work was to introduce multi-resolution capability into a multi-channel imaging system by designing the available channels to possess different imaging properties (focal length, angular resolution). We have designed a three-channel imaging system in which the first and third channels have the highest and lowest angular resolutions of 0.0096° and 0.078°, and the narrowest and widest FOVs of 7° and 80°, respectively. The design of the channels has been done for a single wavelength of 587.6 nm using CODE V. The three channels each consist of 4 aspherical lens surfaces and an absorbing baffle that avoids crosstalk among the neighbouring channels. The aspherical lens surfaces have been fabricated in PMMA by ultra-precision diamond tooling and the baffles by metal additive manufacturing. The profiles of the fabricated lens surfaces have been measured with an accurate multi-sensor coordinate measuring machine and compared with the corresponding profiles of the designed lens surfaces. The fabricated lens profiles are then incorporated into CODE V to realistically model the three channels and to compare their performances with those of the nominal design. We can conclude that the performances of the two latter models are in good agreement.
Automatic multiresolution age-related macular degeneration detection from fundus images
NASA Astrophysics Data System (ADS)
Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida
2014-03-01
Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly, so early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow an early detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using a multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described with the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality and captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
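The sign/magnitude texture coding can be sketched as follows; this is a simplified single-scale version of the completed LBP (CLBP_S/CLBP_M components), with the wavelet decomposition omitted and the magnitude threshold taken as the global mean (an assumption, not the paper's exact choice):

```python
import numpy as np

def clbp_codes(img):
    """Completed LBP: 8-bit sign and magnitude codes over the 8-neighbourhood."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    diffs = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] - center
                      for dy, dx in offsets])
    sign = (diffs >= 0).astype(np.int64)                                  # CLBP_S bits
    mag_bits = (np.abs(diffs) >= np.abs(diffs).mean()).astype(np.int64)   # CLBP_M bits
    weights = 2 ** np.arange(8)
    return np.tensordot(weights, sign, axes=1), np.tensordot(weights, mag_bits, axes=1)

rng = np.random.default_rng(0)
patch = rng.random((8, 8))   # stand-in for a wavelet coefficient image
s_code, m_code = clbp_codes(patch)
```

An image descriptor is then the histogram of these codes, computed per wavelet level.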
Multiscale and multiresolution modeling of shales and their flow and morphological properties.
Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad
2015-11-12
The need for more accessible energy resources makes shale formations increasingly important. Characterization of such low-permeability formations is complicated, due to the presence of multiscale features, and defies conventional methods. High-quality 3D imaging may be an ultimate solution for revealing the complexities of such porous media, but acquiring it is costly and time consuming. High-quality 2D images, on the other hand, are widely available. A novel three-step, multiscale, multiresolution reconstruction method is presented that directly uses 2D images in order to develop 3D models of shales. It uses a high-resolution 2D image representing the small-scale features to reproduce the nanopores and their network, and a large-scale, low-resolution 2D image to create the larger-scale characteristics, and generates stochastic realizations of the porous formation. The method is used to develop a model for a shale system for which the full 3D image is available and whose properties can be computed. The predictions of the reconstructed models are in excellent agreement with the data. The method is, however, quite general and can be used for reconstructing models of other important heterogeneous materials and media. Two biological examples and one from materials science are also reconstructed to demonstrate the generality of the method.
Multiresolution Approach for Noncontact Measurements of Arterial Pulse Using Thermal Imaging
NASA Astrophysics Data System (ADS)
Chekmenev, Sergey Y.; Farag, Aly A.; Miller, William M.; Essock, Edward A.; Bhatnagar, Aruni
This chapter presents a novel computer vision methodology for noncontact and nonintrusive measurement of the arterial pulse. This is the only investigation that links knowledge of human physiology and anatomy, advances in thermal infrared (IR) imaging, and computer vision to produce noncontact and nonintrusive measurements of the arterial pulse in both the time and frequency domains. The proposed approach has a physical and physiological basis and as such is of a fundamental nature. A thermal IR camera was used to capture the heat pattern from superficial arteries, and a blood vessel model was proposed to describe the pulsatile nature of the blood flow. A multiresolution wavelet-based signal analysis approach was applied to extract the arterial pulse waveform, which lends itself to various physiological measurements. We validated our results using a traditional contact vital signs monitor as ground truth. Eight people of different ages, races and genders were tested in our study, consistent with Health Insurance Portability and Accountability Act (HIPAA) regulations and institutional review board approval. The resultant arterial pulse waveforms exactly matched the ground truth oximetry readings. The essence of our approach is the automatic detection of the region of measurement (ROM) of the arterial pulse, from which the arterial pulse waveform is extracted. To the best of our knowledge, the correspondence between noncontact thermal IR imaging-based measurements of the arterial pulse in the time domain and traditional contact approaches has never been reported in the literature.
Feenstra, Adam D.; Dueñas, Maria Emilia; Lee, Young Jin
2017-01-03
High-spatial resolution mass spectrometry imaging (MSI) is crucial for the mapping of chemical distributions at the cellular and subcellular level. In this work, we improved our previous laser optical system for matrix-assisted laser desorption ionization (MALDI)-MSI, reducing the practical laser spot size from ~9 μm to ~4 μm and thereby allowing 5 μm resolution imaging without oversampling. This is accomplished through a combination of spatial filtering, beam expansion, and reduction of the final focal length. Most importantly, the new laser optics system allows for simple modification of the spot size solely through the interchanging of the beam expander component. Using 10×, 5×, and no beam expander, we could routinely change between ~4, ~7, and ~45 μm laser spot sizes in less than 5 min. We applied this multi-resolution MALDI-MSI system to a single maize root tissue section at three different spatial resolutions of 5, 10, and 50 μm and compared the differences in imaging quality and signal sensitivity. Lastly, we demonstrated the difference in depth of focus between the optical systems with 10× and 5× beam expanders.
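The scaling behind swapping beam expanders can be sketched with the diffraction-limited spot-size formula. All numbers below (wavelength, focal length, base beam diameter) are illustrative assumptions; real spot sizes also depend on aberrations and beam quality, which is why the reported ~4/~7/~45 μm values are not an exact 10×/5×/1× ratio:

```python
import math

def diffraction_spot_diameter(wavelength, focal_length, beam_diameter):
    """Gaussian-beam diffraction-limited focal spot: d ≈ 4 λ f / (π D)."""
    return 4.0 * wavelength * focal_length / (math.pi * beam_diameter)

WAVELENGTH = 355e-9    # typical MALDI UV laser wavelength (assumed)
FOCAL_LENGTH = 60e-3   # assumed final focal length
BASE_BEAM = 1.0e-3     # assumed beam diameter before expansion

for expansion in (10.0, 5.0, 1.0):
    d = diffraction_spot_diameter(WAVELENGTH, FOCAL_LENGTH, BASE_BEAM * expansion)
    print(f"{expansion:>4.0f}x expander -> spot ~ {d * 1e6:.1f} um")
```

Expanding the beam before the final lens increases D, so the focal spot shrinks in proportion, which is exactly why exchanging the expander changes the imaging resolution.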
NASA Astrophysics Data System (ADS)
Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Macke, Andreas
2015-04-01
Clouds are among the most complex structures in the Earth's atmosphere across both spatial and temporal scales; they affect the downward surface-reaching fluxes and thus contribute large uncertainty to the global radiation budget. Within the framework of the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2) Observational Prototype Experiment (HOPE), a high-density network of 99 pyranometer stations was set up around Jülich, Germany (~ 10 × 12 km2 area) during April to July 2013 to capture the small-scale variability in cloud-induced radiation fields at the surface. In this study, we perform multi-resolution analysis of the downward solar irradiance variability at the surface from the pyranometer network to investigate the dependence of the variance and spatial correlation on temporal and spatial averaging scales for different cloud regimes. Preliminary results indicate that the correlation is strongly scale-dependent, whereas the variance depends on the length of the averaging period. Our findings will be useful for quantifying the effect of spatial collocation when validating satellite-inferred solar irradiance estimates, and for exploring the link between cloud structure and radiation. We will present the details of our analysis and results.
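The dependence of variance on the averaging period can be sketched with block means at several temporal scales (a synthetic white-noise irradiance stand-in; real cloud-induced fields are temporally correlated, so their variance decays more slowly than the 1/w behaviour shown here):

```python
import numpy as np

def block_mean_variance(x, window):
    """Variance of non-overlapping block means of length `window`."""
    n = (x.size // window) * window
    return x[:n].reshape(-1, window).mean(axis=1).var()

rng = np.random.default_rng(42)
irradiance = rng.normal(500.0, 50.0, 86400)  # synthetic 1 Hz samples for one day

for w in (1, 60, 600):                        # 1 s, 1 min, 10 min averaging
    print(f"window {w:>4d} s -> variance {block_mean_variance(irradiance, w):.2f}")
```

Repeating this per station pair, with spatial instead of temporal windows, yields the scale-dependent correlation the study examines.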
ERIC Educational Resources Information Center
Van Galen, Jane, Ed.; And Others
1992-01-01
This theme issue of the serial "Educational Foundations" contains four articles devoted to the topic of "Sociopolitical Analyses." In "An Interview with Peter L. McLaren," Mary Leach presented the views of Peter L. McLaren on topics of local and national discourses, values, and the politics of difference. Landon E.…
Loescher, D.H.; Noren, K.
1996-09-01
The current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately, this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiters, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiter, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiters were also analyzed. It was found that there is no single best type of limiter that should be used in all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of overcurrent protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high-current circuits, but it has a number of single-point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.
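The resistor limiter's few-tens-of-milliamperes restriction follows directly from Ohm's law; a sketch with hypothetical numbers (the source voltage and safety limit below are assumed for illustration, not taken from the report):

```python
def short_circuit_current(v_source, r_limit):
    """Worst case: the load is shorted, so the limiter resistor sees the full source voltage."""
    return v_source / r_limit

def min_resistance(v_source, i_safe):
    """Smallest series resistor that keeps the short-circuit current at or below i_safe."""
    return v_source / i_safe

# e.g. a 28 V test source and a 25 mA safety limit (assumed values)
print(min_resistance(28.0, 0.025), "ohms")
```

The same resistor also drops voltage at the nominal operating current, which is why the approach only works when that current is small.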
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Lenglet, Christophe
2017-06-29
We present a sparse Bayesian unmixing algorithm, BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with a lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of the sparse signals, the fiber orientations and volume fractions. The data are modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on the dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in the data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters that moderate the strength of the priors governing the parameter variances, learned at each voxel and for each possible fiber orientation. Experimental results on synthetic and in-vivo data show improved accuracy with lower uncertainty in fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
Multi-sensor multi-resolution image fusion for improved vegetation and urban area classification
NASA Astrophysics Data System (ADS)
Kumar, U.; Milesi, C.; Nemani, R. R.; Basu, S.
2015-06-01
In this paper, we perform multi-sensor multi-resolution data fusion of Landsat-5 TM bands (at 30 m spatial resolution) and multispectral bands of World View-2 (WV-2, at 2 m spatial resolution) through a linear spectral unmixing model. The advantages of fusing Landsat and WV-2 data are twofold: first, the spatial resolution of the Landsat bands increases to the WV-2 resolution. Second, integration of data from two sensors adds two additional SWIR bands from Landsat to the fused product, which have advantages such as improved atmospheric transparency and material identification, for example, urban features, construction materials, moisture contents of soil and vegetation, etc. In 150 separate experiments, WV-2 data were clustered into 5, 10, 15, 20 and 25 spectral classes and data fusion was performed with 3x3, 5x5, 7x7, 9x9 and 11x11 kernel sizes for each Landsat band. The optimal fused bands were selected based on the Pearson product-moment correlation coefficient, RMSE (root mean square error) and the ERGAS index, and were subsequently used for vegetation, urban area and dark object (deep water, shadows) classification using a Random Forest classifier for a test site near the Golden Gate Bridge, San Francisco, California, USA. Accuracy assessment of the classified images through error matrices before and after fusion showed that the overall accuracy and Kappa for fused data classification (93.74%, 0.91) were much higher than for Landsat data classification (72.71%, 0.70) and WV-2 data classification (74.99%, 0.71). This approach increased the spatial resolution of Landsat data to the WV-2 spatial resolution while retaining the original Landsat spectral bands, with significant improvement in classification.
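The linear spectral unmixing at the core of the fusion can be sketched as a least-squares solve: each coarse pixel spectrum is modeled as a mixture of class (endmember) signatures, and the class fractions are recovered. The signatures and fractions below are toy values; practical unmixing also enforces non-negativity and sum-to-one constraints:

```python
import numpy as np

# hypothetical endmember signatures: rows = spectral bands, columns = classes
E = np.array([[0.10, 0.45, 0.30],
              [0.15, 0.50, 0.25],
              [0.40, 0.30, 0.20],
              [0.60, 0.20, 0.35]])

true_fracs = np.array([0.2, 0.5, 0.3])   # class abundances summing to one
pixel = E @ true_fracs                   # noise-free mixed spectrum

# unconstrained least-squares unmixing of the mixed pixel
fracs, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(fracs)
```

In the fusion, the high-resolution WV-2 clusters supply the class map, and solving this system per coarse Landsat pixel distributes its radiance among the fine-resolution classes.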
CSMET: Comparative Genomic Motif Detection via Multi-Resolution Phylogenetic Shadowing
Kolar, Mladen; Xing, Eric P.
2008-01-01
Functional turnover of transcription factor binding sites (TFBSs), such as whole-motif loss or gain, are common events during genome evolution. Conventional probabilistic phylogenetic shadowing methods model the evolution of genomes only at nucleotide level, and lack the ability to capture the evolutionary dynamics of functional turnover of aligned sequence entities. As a result, comparative genomic search of non-conserved motifs across evolutionarily related taxa remains a difficult challenge, especially in higher eukaryotes, where the cis-regulatory regions containing motifs can be long and divergent; existing methods rely heavily on specialized pattern-driven heuristic search or sampling algorithms, which can be difficult to generalize and hard to interpret based on phylogenetic principles. We propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees, or CSMET, which uses a context-dependent probabilistic graphical model that allows aligned sites from different taxa in a multiple alignment to be modeled by either a background or an appropriate motif phylogeny conditioning on the functional specifications of each taxon. The functional specifications themselves are the output of a phylogeny which models the evolution not of individual nucleotides, but of the overall functionality (e.g., functional retention or loss) of the aligned sequence segments over lineages. Combining this method with a hidden Markov model that autocorrelates evolutionary rates on successive sites in the genome, CSMET offers a principled way to take into consideration lineage-specific evolution of TFBSs during motif detection, and a readily computable analytical form of the posterior distribution of motifs under TFBS turnover. On both simulated and real Drosophila cis-regulatory modules, CSMET outperforms other state-of-the-art comparative genomic motif finders. PMID:18535663
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aims at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed the artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly problem-dependent. To minimize parameter tuning and problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these
NASA Astrophysics Data System (ADS)
Willberg, Martin; Lieb, Verena; Pail, Roland; Schmidt, Michael
2017-04-01
The analysis of the Earth's gravity field plays an important role in various disciplines of geosciences. While modern satellite gravity missions make it possible to define a globally consistent geoid with centimeter accuracy and a spatial resolution of 80-100 km, it remains a major challenge to consistently combine global low-resolution data with regional high-resolution gravity information. Therefore, a variety of regional gravity field modelling methods have been established during the last decades. In our analysis, we investigate the spectral combination of heterogeneous gravity data within two different calculation methods: first, the statistical approach of Least Squares Collocation (LSC), which uses the covariance information of input and output data and results in a full variance-covariance matrix; second, the Multi-Resolution Representation (MRR) based on spherical radial basis functions. The MRR combines a low-pass filtered global geopotential model with satellite gradiometer and/or terrestrial gravity data by means of band-pass filtering. We examine the theoretical concepts and the computational differences and similarities between both approaches. With fast-changing topography, mountains, and oceanic regions, our study area in the South American Andes is challenging and well suited for this examination. The use of synthetic data in closed-loop tests enables a very detailed investigation of predicted and actual accuracies of geoid determination. Furthermore, we point out the respective advantages and disadvantages and link them to the calculation concepts of the two methods. The results contribute to the project "Optimally combined regional geoid models for the realization of height systems in developing countries (ORG4heights)" and thus aim to finally integrate the regional solutions into a global vertical reference frame.
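The LSC predictor has the closed form ŝ = C_sy C_yy⁻¹ y. A minimal 1D sketch with a Gaussian covariance function (positions, values, and correlation length are all hypothetical; noise-free, so prediction at an observation point reproduces the observation):

```python
import numpy as np

def cov(a, b, corr_len=1.0):
    """Gaussian covariance between two 1D point sets."""
    return np.exp(-((a[:, None] - b[None, :]) / corr_len) ** 2)

x_obs = np.array([0.0, 1.0, 2.5, 4.0])        # observation positions
y_obs = np.array([9.81, 9.83, 9.80, 9.78])    # hypothetical gravity values
x_new = np.array([1.0, 3.0])                  # prediction positions

C_yy = cov(x_obs, x_obs)      # covariance among observations
C_sy = cov(x_new, x_obs)      # cross-covariance signal/observations
s_hat = C_sy @ np.linalg.solve(C_yy, y_obs)
print(s_hat)
```

With observation noise, a noise covariance is added to C_yy, and the same formula yields the full variance-covariance information the abstract mentions.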
Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models.
Eisank, Clemens; Smith, Mike; Hillier, John
2014-06-01
Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the
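One representative segmentation accuracy metric of the kind computed in step (3) is the intersection-over-union between a terrain segment and its reference drumlin (the study uses four such metrics; IoU is shown here as a generic example):

```python
import numpy as np

def iou(segment, reference):
    """Spatial overlap between two boolean masks (1.0 = perfect match)."""
    inter = np.logical_and(segment, reference).sum()
    union = np.logical_or(segment, reference).sum()
    return inter / union if union else 0.0

ref = np.zeros((10, 10), dtype=bool); ref[2:6, 2:8] = True   # 4 x 6 reference drumlin
seg = np.zeros((10, 10), dtype=bool); seg[3:7, 2:8] = True   # candidate segment, shifted one row
print(iou(seg, ref))
```

Selecting, per reference drumlin, the terrain segment maximizing such a score across the 200 scales is what "spatially best matching" means above.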
NASA Technical Reports Server (NTRS)
Grecu, Mircea; Olson, William S.; Anagnostou, Emmanouil N.
2003-01-01
In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) observations and it is based on models that simulate high-resolution brightness temperatures as functions of observed reflectivity profiles and a parameter related to the rain drop-size-distribution. The modeled high-resolution brightness temperatures are used to determine normalized brightness temperature polarizations at the microwave radiometer resolution. An optimal estimation procedure is employed to minimize the differences between the simulated and observed normalized polarizations by adjusting the drop-size-distribution parameter. The impact of other unknowns that are not independent variables in the optimal estimation but affect the retrievals is minimized through statistical parameterizations derived from cloud model simulations. The retrieval technique is investigated using TRMM observations collected during the Kwajalein Experiment (KWAJEX). These observations cover an area extending from 5 deg to deg N latitude and 166 deg to 172 deg E longitude from July to September 1999, and are coincident with various ground-based observations, facilitating a detailed analysis of the retrieved precipitation. Using the method developed in this study, precipitation estimates consistent with both the passive and active TRMM observations are obtained. Various parameters characterizing these estimates, i.e. the rain rate, the precipitation water content, the drop-size-distribution intercept, and the mass weighted mean drop diameter, are in good qualitative agreement with independent experimental and theoretical estimates. Combined rain estimates are in general higher than the official TRMM Precipitation Radar (PR) only estimates for the area and the period considered in the study. Ground-based precipitation estimates
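The adjustment of the drop-size-distribution parameter can be sketched as a one-parameter minimization of the misfit between simulated and observed polarizations. The forward model below is a toy monotonic stand-in for the radiative-transfer simulation, and all values are synthetic:

```python
import numpy as np

def simulate_polarization(reflectivity, n0):
    """Toy forward model: normalized polarization as a function of the
    reflectivity profile and the DSD intercept parameter n0."""
    return np.tanh(n0 * float(np.mean(reflectivity)))

def retrieve_n0(reflectivity, observed, candidates):
    """Pick the candidate n0 minimizing the squared simulation-observation misfit."""
    costs = [(simulate_polarization(reflectivity, n0) - observed) ** 2
             for n0 in candidates]
    return candidates[int(np.argmin(costs))]

profile = np.array([0.3, 0.5, 0.8, 0.6])     # synthetic reflectivity profile
candidates = np.linspace(0.1, 2.0, 20)
observed = simulate_polarization(profile, candidates[7])  # pretend observation
print(retrieve_n0(profile, observed, candidates))
```

The actual technique replaces this grid search with optimal estimation, which also weights the misfit by observation and prior uncertainties.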
Implementation of a physiographic complexity-based multiresolution snow modeling scheme
NASA Astrophysics Data System (ADS)
Baldo, Elisabeth; Margulis, Steven A.
2017-05-01
Using a uniform model resolution over a domain is not necessarily the optimal approach for simulating hydrologic processes when considering both model error and computational cost. Fine-resolution simulations at 100 m or less can provide fine-scale process representation, but can be costly to apply over large domains. On the other hand, coarser spatial resolutions are less computationally expensive, but at the expense of fine-scale model accuracy. Defining a multiresolution (MR) grid spanning from fine resolutions over complex mountainous areas to coarser resolutions over less complex regions can conceivably reduce computational costs while preserving the accuracy of fine-resolution simulations on a uniform grid. An MR scheme was developed using a physiographic complexity metric (CM) that combines surface heterogeneity in forested fraction, elevation, slope, and aspect. A data reduction term was defined as a metric (relative to a uniform fine-resolution grid) related to the available computational resources for a simulation. The focus of the effort was on the snowmelt season, where physiographic complexity is known to have a significant signature. MR simulations were run for different data reduction factors to generate melt rate estimates for three representative water years over a test headwater catchment in the Colorado River Basin. The MR approach with data reductions up to 47% led to negligible cumulative snowmelt differences compared to the fine-resolution baseline case, while tests with data reductions up to 60% showed differences lower than 2%. Large snow-dominated domains could therefore benefit from an MR approach, being simulated more efficiently while mitigating error.
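A physiographic complexity metric of this kind can be sketched by combining the normalized spatial variability of elevation-derived fields; the equal weighting and the squashing function below are assumptions for illustration, not the paper's exact CM:

```python
import numpy as np

def complexity_metric(elev, forest_frac):
    """Combine the spatial variability of elevation, slope, aspect and forested
    fraction into a single score in [0, 1) (equal weights assumed)."""
    gy, gx = np.gradient(elev)
    slope = np.hypot(gx, gy)
    aspect = np.arctan2(gy, gx)
    scores = []
    for field in (elev, slope, aspect, forest_frac):
        s = field.std()
        scores.append(s / (s + 1.0))  # squash each standard deviation into [0, 1)
    return float(np.mean(scores))

rng = np.random.default_rng(7)
flat = complexity_metric(np.zeros((20, 20)), np.full((20, 20), 0.5))
rough = complexity_metric(rng.random((20, 20)) * 500.0, rng.random((20, 20)))
print(flat, rough)
```

Cells scoring above a chosen CM threshold would be kept at fine resolution, the rest coarsened to meet the data reduction target.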
A decadal observation of vegetation dynamics using multi-resolution satellite images
NASA Astrophysics Data System (ADS)
Chiang, Yang-Sheng; Chen, Kun-Shan; Chu, Chang-Jen
2012-10-01
Vegetation cover not only affects the habitability of the earth, but also provides a potential terrestrial mechanism for mitigation of greenhouse gases. This study aims at quantifying such green resources by incorporating multi-resolution satellite images from different platforms, including Formosat-2 (RSI), SPOT (HRV/HRG), and Terra (MODIS), to investigate vegetation fractional cover (VFC) and its inter-/intra-annual variation in Taiwan. Given the different sensor capabilities in terms of spatial coverage and resolution, fusion of NDVIs at different scales was used to determine the fraction of vegetation cover based on NDVI. A field campaign was conducted on a monthly basis for 6 years to calibrate the critical NDVI threshold for the presence of vegetation cover, with test sites covering the IPCC-defined land cover types of Taiwan. Based on the proposed method, we analyzed spatio-temporal changes of VFC for the entire Taiwan Island. A bimodal sequence of VFC was observed for the intra-annual variation based on MODIS data, with a variation level of around 5% and two peaks in spring and autumn marking the principal dual-cropping agriculture pattern in southwestern Taiwan. Compared to this anthropogenic variation, the inter-annual VFC (Aug.-Oct.) derived from HRV/HRG/RSI reveals moderate variations (3%) whose oscillations were strongly linked with the regional climate pattern and major disturbances resulting from extreme weather events. Two distinct cycles (2002-2005 and 2005-2009) were identified in the decadal observations, with VFC peaks of 87.60% and 88.12% in 2003 and 2006, respectively. This time-series mapping of VFC can be used to examine vegetation dynamics and its response to short-term and long-term anthropogenic/natural events.
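The NDVI-threshold approach to fractional cover can be sketched as follows; the 0.2 threshold and the reflectance values are placeholders, the actual critical NDVI being the field-calibrated one described above:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, pixelwise."""
    return (nir - red) / (nir + red)

def vegetation_fractional_cover(nir, red, threshold=0.2):
    """Fraction of pixels whose NDVI exceeds the critical threshold."""
    return float((ndvi(nir, red) > threshold).mean())

nir = np.array([0.50, 0.60, 0.10, 0.12])   # toy near-infrared reflectances
red = np.array([0.10, 0.10, 0.50, 0.10])   # toy red reflectances
print(vegetation_fractional_cover(nir, red))
```

Applying this per scene, across the sensors' different resolutions, yields the VFC time series analysed in the study.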
NASA Astrophysics Data System (ADS)
Fernández-Cobos, R.; Vielva, P.; Barreiro, R. B.; Martínez-González, E.
2012-03-01
The cosmic microwave background (CMB) radiation data obtained by different experiments contain, besides the desired signal, a superposition of microwave sky contributions. Using a wavelet decomposition on the sphere, we present a fast and robust method to recover the CMB signal from microwave maps. We present an application to the Wilkinson Microwave Anisotropy Probe (WMAP) polarization data, which shows its good performance, particularly in very polluted regions of the sky. The applied wavelet has the advantages that it requires little computational time in its calculations, it is adapted to the HEALPix pixelization scheme and it offers the possibility of multiresolution analysis. The decomposition is implemented as part of a fully internal template fitting method, minimizing the variance of the resulting map at each scale. Using a χ² characterization of the noise, we find that the residuals of the cleaned maps are compatible with those expected from the instrumental noise. The maps are also comparable to those obtained by the WMAP team, but in our case we do not make use of external data sets. In addition, at low resolution, our cleaned maps present a lower level of noise. The E-mode power spectrum C_ℓ^EE is computed at high and low resolutions, and a cross-power spectrum C_ℓ^TE is also calculated from the foreground-reduced maps of temperature given by WMAP and our cleaned maps of polarization at high resolution. These spectra are consistent with the power spectra supplied by the WMAP team. We detect the E-mode acoustic peak at ℓ ≈ 400, as predicted by the standard ΛCDM model. The B-mode power spectrum C_ℓ^BB is compatible with zero.
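The variance-minimizing internal template fit reduces, per scale, to a least-squares amplitude fit; a toy sketch with Gaussian white maps standing in for the wavelet coefficients (the signal/template construction below is entirely synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
cmb = rng.normal(0.0, 1.0, n)                 # the signal we want to keep
templates = rng.normal(0.0, 1.0, (2, n))      # foreground template stand-ins
amps_true = np.array([0.7, -0.3])
observed = cmb + amps_true @ templates        # contaminated map

# fit the template amplitudes that minimize the variance of the cleaned map
a_hat, *_ = np.linalg.lstsq(templates.T, observed, rcond=None)
cleaned = observed - a_hat @ templates
print(np.var(observed), np.var(cleaned))
```

Performing this fit independently per wavelet scale is what makes the cleaning multiresolution: each scale gets its own template amplitudes.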
Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators.
Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo
2016-07-21
Multi-resolution multi-sensitivity (MRMS) collimators, offering an adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. We propose in this paper a new idea for MRMS design based, for the first time, on parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator's inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design obtain point response characteristics similar to parallel-hole collimators, and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest resolution state provides 6.9 mm full width at half maximum (FWHM) with a nearly minimum sensitivity of about 96.2 cps MBq(-1), while the lowest resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq(-1). Further comparisons of the states in terms of image quality are conducted through Monte Carlo simulation of a hot-spot phantom containing five hot spots of varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs with the larger-sensitivity states, and the smaller spots prefer the higher-resolution states. In conclusion, the proposed idea can be an effective approach to MRMS design for parallel-hole SPECT collimators.
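The resolution/sensitivity trade-off that an MRMS design exploits follows from the standard parallel-hole collimator relations (a textbook sketch with assumed millimetre dimensions; the paper's state switching alters the effective geometry rather than swapping collimators):

```python
def geometric_resolution(d, l_eff, b):
    """FWHM of the geometric response: R = d * (l_eff + b) / l_eff.
    d: hole diameter, l_eff: effective hole length, b: source-to-collimator distance."""
    return d * (l_eff + b) / l_eff

def relative_sensitivity(d, l_eff, t):
    """Geometric efficiency up to a constant: g ∝ (d^2 / (l_eff * (d + t)))^2.
    t: septal thickness."""
    return (d * d / (l_eff * (d + t))) ** 2

# lengthening the holes sharpens resolution but costs sensitivity (assumed mm units)
for l_eff in (25.0, 35.0):
    print(l_eff,
          geometric_resolution(1.5, l_eff, 100.0),
          relative_sensitivity(1.5, l_eff, 0.2))
```

Because resolution improves linearly with hole length while sensitivity falls quadratically, any mechanism that changes the effective geometry traces out exactly this trade-off curve.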
Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models
NASA Astrophysics Data System (ADS)
Eisank, Clemens; Smith, Mike; Hillier, John
2014-06-01
Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the
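The "spatially best matching terrain segment" step can be illustrated with an intersection-over-union overlap between pixel footprints. IoU is an assumed stand-in here (the study computes four segmentation accuracy metrics, not necessarily this one), and the drumlin/segment footprints are toy data.

```python
def iou(a, b):
    """Intersection-over-union of two pixel sets (e.g. (row, col) tuples)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def best_matching_segment(reference, segments):
    """Return (index, overlap) of the segment best matching the reference."""
    scores = [iou(reference, seg) for seg in segments]
    best = max(range(len(segments)), key=scores.__getitem__)
    return best, scores[best]

# Toy reference drumlin footprint and three candidate terrain segments.
drumlin = {(r, c) for r in range(2, 6) for c in range(3, 8)}
segments = [
    {(r, c) for r in range(0, 3) for c in range(0, 4)},  # mostly off target
    {(r, c) for r in range(2, 6) for c in range(2, 8)},  # slightly too wide
    {(r, c) for r in range(0, 9) for c in range(0, 9)},  # far too coarse
]
idx, score = best_matching_segment(drumlin, segments)
```

In a full test run this matching would be repeated for each reference drumlin at each of the 200 scale-parameter levels, keeping the segment (and scale) with the highest overlap.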
Multi-resolution radiograph alignment for motion correction in x-ray micro-tomography
NASA Astrophysics Data System (ADS)
Latham, Shane J.; Kingston, Andrew M.; Recur, Benoit; Myers, Glenn R.; Sheppard, Adrian P.
2016-10-01
Achieving sub-micron resolution in lab-based micro-tomography is challenging due to the geometric instability of the imaging hardware (spot drift, stage precision, sample motion). These instabilities manifest themselves as a distortion or motion of the radiographs relative to the expected system geometry. When the hardware instabilities are small (several microns of absolute motion), the radiograph distortions are well approximated by shift and magnification of the image. In this paper we examine the use of re-projection alignment (RA) to estimate per-radiograph motions. Our simulation results evaluate how the convergence properties of RA vary with: motion-type (smooth versus random), trajectory (helical versus space-filling) and resolution. We demonstrate that RA convergence rate and accuracy, for the space-filling trajectory, are invariant with regard to the motion-type. In addition, for the space-filling trajectory, the per-projection motions can be estimated to less than 0.25 pixel mean absolute error by performing a single quarter-resolution RA iteration followed by a single half-resolution RA iteration. The direct impact is that, for the space-filling trajectory, we need only perform one RA iteration per resolution in our iterative multi-grid reconstruction (IMGR). We also give examples of the effectiveness of the RA motion correction method applied to real double-helix and space-filling trajectory micro-CT data. For double-helix Katsevich filtered-back-projection reconstruction (≈2500×2500×5000 voxels), we use a multi-resolution RA method as a pre-processing step. For the space-filling iterative reconstruction (≈2000×2000×5400 voxels), RA is applied during the IMGR iterations.
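The quarter-then-half-resolution alignment strategy can be illustrated with a 1-D coarse-to-fine shift search. A sum-of-squared-differences match stands in for the re-projection alignment step (which requires the full reconstruction model), and all signal values here are synthetic.

```python
def downsample(x, factor):
    """Block-average a signal by an integer factor."""
    return [sum(x[i:i + factor]) / factor
            for i in range(0, len(x) - factor + 1, factor)]

def best_shift(a, b, search):
    """Shift of b (within +/-search) minimizing mean squared difference to a."""
    def cost(s):
        pairs = [(a[i], b[i - s]) for i in range(len(a)) if 0 <= i - s < len(b)]
        return sum((p - q) ** 2 for p, q in pairs) / len(pairs)
    return min(range(-search, search + 1), key=cost)

def coarse_to_fine_shift(a, b):
    """Estimate the shift at quarter resolution, refine at half resolution."""
    s4 = best_shift(downsample(a, 4), downsample(b, 4), search=8)
    s2 = best_shift(downsample(a, 2), downsample(b, 2), search=2 * abs(s4) + 2)
    return 2 * s2  # full-resolution estimate from the half-resolution result

reference = [0.0] * 40 + [1.0, 3.0, 6.0, 3.0, 1.0] + [0.0] * 40
shifted = reference[6:] + [0.0] * 6  # the "radiograph" moved by 6 pixels
```

The coarse pass brackets the motion cheaply; the half-resolution pass then only needs to search a small window around the coarse estimate, mirroring the one-iteration-per-resolution scheme.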
NASA Technical Reports Server (NTRS)
Grecu, Mircea; Anagnostou, Emmanouil N.; Olson, William S.; Starr, David OC. (Technical Monitor)
2002-01-01
In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated using both simulated and airborne data. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) satellite multi-frequency active and passive observations. These observations are characterized by various spatial and sampling resolutions. This makes the retrieval problem mathematically more difficult and ill-determined, because the quality of information decreases with decreasing resolution. A model is used that, given reflectivity profiles and a small set of parameters (including the cloud water content, the drop-size-distribution intercept, and a variable describing the frozen hydrometeor properties), simulates high-resolution brightness temperatures. The high-resolution simulated brightness temperatures are convolved at the real sensor resolution. An optimal estimation procedure is used to minimize the differences between simulated and observed brightness temperatures. The retrieval technique is investigated using cloud model synthetic data and airborne data from the Fourth Convection And Moisture Experiment. Simulated high-resolution brightness temperatures and reflectivities and airborne observations are convolved at the resolution of the TRMM instruments, and retrievals are performed and analyzed relative to the reference data used in synthesizing the observations. An illustration of the possible use of the technique in satellite rainfall estimation is presented through an application to TRMM data. The study suggests improvements in combined active and passive retrievals even when the instrument resolutions are significantly different. Future work needs to better quantify the retrieval performance, especially in connection with satellite applications, and the uncertainty of the models used in the retrieval.
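The simulate-convolve-compare loop can be sketched as follows. The forward model here is a deliberately trivial stand-in (one free parameter, a block-average "sensor footprint"), not the radiative-transfer model of the paper, and the optimal estimation is reduced to a candidate search rather than a gradient-based minimization.

```python
def convolve_to_sensor(high_res, footprint):
    """Average blocks of the high-resolution field to the sensor footprint."""
    return [sum(high_res[i:i + footprint]) / footprint
            for i in range(0, len(high_res) - footprint + 1, footprint)]

def forward_model(param, n=16):
    """Toy high-resolution 'brightness temperature' field from one parameter."""
    return [240.0 + param * (i % 4) for i in range(n)]

def retrieve(observed_low_res, candidates, footprint=4):
    """Pick the parameter whose convolved simulation best fits the observations."""
    def misfit(p):
        sim = convolve_to_sensor(forward_model(p), footprint)
        return sum((s - o) ** 2 for s, o in zip(sim, observed_low_res))
    return min(candidates, key=misfit)

truth = 3.0
observed = convolve_to_sensor(forward_model(truth), 4)
estimate = retrieve(observed, candidates=[p / 2 for p in range(0, 13)])
```

The key structural point survives the simplification: the comparison happens only after the high-resolution simulation has been degraded to the real sensor resolution.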
W-matrices, nonorthogonal multiresolution analysis, and finite signals of arbitrary length
Kwong, M.K.; Tang, P.T.P.
1994-12-31
Wavelet theory and discrete wavelet transforms have had great impact on the field of signal and image processing. In this paper the authors propose a new class of discrete transforms that includes the classical Haar and Daubechies transforms. These transforms treat the endpoints of a signal in a different manner from that of conventional techniques. This new approach allows the authors to efficiently handle signals of any length; thus, one is not restricted to signal or image sizes that are multiples of a power of 2. Their method does not lengthen the output signal and does not require an additional bookkeeping vector. An exciting result is the uncovering of a new and simple transform that performs very well for compression purposes. It has compact support of length 4, as does its inverse. The coefficients are symmetrical, and the associated scaling function is fairly smooth. The associated dual wavelet has vanishing moments up to order 2. Numerical results comparing the performance of this transform with that of the Daubechies D₄ transform are given. The multiresolution decomposition, however, is not orthogonal. The authors show why this apparent defect is not a real problem in practice. Furthermore, they give a method to compute an orthogonal compensation that yields the best approximation possible with the given scaling space. The transform can be described completely within the context of matrix theory and linear algebra. Thus, even without prior knowledge of wavelet theory, one can easily grasp the concrete algorithm and apply it to specific problems within a very short time, without having to master complex functional analysis. At the end of the paper, the authors make the connection to wavelet theory.
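A minimal Haar-type analogue of the arbitrary-length idea can be sketched as follows. This is not the authors' W-transform (whose endpoint filters differ); it only illustrates the point that an odd trailing sample can be carried in the coarse channel so that any length is handled without padding or a bookkeeping vector.

```python
R2 = 2 ** 0.5

def forward(x):
    """One level of an orthogonal Haar step; an odd last sample passes through."""
    lo = [(x[i] + x[i + 1]) / R2 for i in range(0, len(x) - 1, 2)]
    hi = [(x[i] - x[i + 1]) / R2 for i in range(0, len(x) - 1, 2)]
    if len(x) % 2:  # arbitrary length: no padding, no bookkeeping vector
        lo.append(x[-1])
    return lo, hi

def inverse(lo, hi):
    x = []
    for s, d in zip(lo, hi):
        x.append((s + d) / R2)
        x.append((s - d) / R2)
    if len(lo) > len(hi):  # recover the passed-through odd sample
        x.append(lo[-1])
    return x

signal = [4.0, 2.0, 5.0, 5.0, 1.0]  # odd length is fine
lo, hi = forward(signal)
restored = inverse(lo, hi)
```

The output has exactly the same length as the input, and the reconstruction is exact; a multilevel decomposition simply recurses on `lo`.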
NASA Astrophysics Data System (ADS)
Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.
2016-05-01
Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of ¹⁶O, ²⁰⁸Pb, and ²³⁸U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm with an average density of about ρ = 0.05 fm⁻³, proton fraction Y_p = 0.3, and temperature T = 0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.
NASA Astrophysics Data System (ADS)
Smith, M. I.; Sadler, J. R. E.
2007-04-01
Military helicopter operations are often constrained by environmental conditions, including low light levels and poor weather. Recent experience has also shown the difficulty presented by certain terrain when operating at low altitude by day and night. For example, poor pilot cues over featureless terrain with low scene contrast, together with obscuration of vision due to wind-blown and re-circulated dust at low level (brown out). These sorts of conditions can result in loss of spatial awareness and precise control of the aircraft. Atmospheric obscurants such as fog, cloud, rain and snow can similarly lead to hazardous situations and reduced situational awareness. Day Night All Weather (DNAW) systems applied research sponsored by UK Ministry of Defence (MoD) has developed a multi-resolution real time Image Fusion system that has been flown as part of a wider flight trials programme investigating increased situational awareness. Dual-band multi-resolution adaptive image fusion was performed in real-time using imagery from a Thermal Imager and a Low Light TV, both co-bore sighted on a rotary wing trials aircraft. A number of sorties were flown in a range of climatic and environmental conditions during both day and night. (Neutral density filters were used on the Low Light TV during daytime sorties.) This paper reports on the results of the flight trial evaluation and discusses the benefits offered by the use of Image Fusion in degraded visual environments.
Kim, Won Hwa; Chung, Moo K; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
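The multi-resolution behavior of graph-based descriptors can be illustrated in a greatly simplified form: repeated neighbor averaging (diffusion) on a mesh graph, sampled at several scales per vertex. The paper's construction uses spectral (non-Euclidean wavelet) expansions, which this plain diffusion sketch only approximates in spirit; the graph and signal are toy data.

```python
def diffuse(values, adjacency):
    """One smoothing step: average each vertex with its graph neighbors."""
    out = []
    for v, nbrs in enumerate(adjacency):
        out.append((values[v] + sum(values[u] for u in nbrs)) / (1 + len(nbrs)))
    return out

def multiscale_descriptor(values, adjacency, scales=(1, 2, 4)):
    """Per-vertex descriptor: the signal observed after increasing diffusion."""
    desc = [[] for _ in values]
    current, step = list(values), 0
    for target in scales:
        while step < target:
            current = diffuse(current, adjacency)
            step += 1
        for v, val in enumerate(current):
            desc[v].append(val)
    return desc

# A tiny "mesh": path graph 0-1-2-3 with a spike at vertex 0.
adjacency = [[1], [0, 2], [1, 3], [2]]
descriptors = multiscale_descriptor([1.0, 0.0, 0.0, 0.0], adjacency)
```

A vertex sitting on a sharp local feature sees its descriptor decay across scales, while a vertex in a flat region stays near-constant; comparing such profiles is the intuition behind multi-scale interest-point detection on shapes.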
MRAG-I2D: Multi-resolution adapted grids for remeshed vortex methods on multicore architectures
NASA Astrophysics Data System (ADS)
Rossinelli, Diego; Hejazialhosseini, Babak; van Rees, Wim; Gazzola, Mattia; Bergdorf, Michael; Koumoutsakos, Petros
2015-05-01
We present MRAG-I2D, an open source software framework for multiresolution simulations of two-dimensional, incompressible, viscous flows on multicore architectures. The spatiotemporal scales of the flow field are captured by remeshed vortex methods enhanced by high order average-interpolating wavelets and local time-stepping. The multiresolution solver of the Poisson equation relies on the development of a novel, tree-based multipole method. MRAG-I2D implements a number of HPC strategies to efficiently map the irregular computational workload of wavelet-adapted grids on multicore nodes. The capabilities of the present software are compared to the current state-of-the-art in terms of accuracy, compression rates and time-to-solution. Benchmarks include the inviscid evolution of an elliptical vortex, flow past an impulsively started cylinder at Re = 40–40 000 and simulations of self-propelled anguilliform swimmers. The results indicate that the present software has the same or better accuracy than state-of-the-art solvers while it exhibits unprecedented performance in terms of time-to-solution.
Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy
Tang, Jing; Rahmim, Arman
2015-01-01
A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE
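The joint-entropy measure underlying both the JE and WJE priors is straightforward to sketch from a joint histogram of paired voxel intensities. The bin width and toy intensity vectors below are illustrative, not from BrainWeb.

```python
from collections import Counter
from math import log2

def joint_entropy(img_a, img_b, bin_width=10):
    """Joint entropy H(A, B) from a joint histogram of paired voxel values."""
    pairs = Counter((a // bin_width, b // bin_width)
                    for a, b in zip(img_a, img_b))
    n = len(img_a)
    return -sum((c / n) * log2(c / n) for c in pairs.values())

# Anatomically consistent structures give a tight joint histogram (low JE);
# unrelated intensities spread it out (higher JE).
pet       = [12, 14, 55, 57, 12, 55, 13, 56]
mr_match  = [81, 83, 20, 22, 82, 21, 80, 23]
mr_random = [81, 20, 83, 50, 60, 22, 10, 95]
```

Minimizing JE in the MAP objective therefore pulls the PET estimate toward intensity classes that co-vary with the anatomical image; the WJE variant of the paper evaluates this measure on wavelet subbands so that spatial structure, not just intensity, enters the prior.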
Doyle, Scott; Feldman, Michael; Tomaszewski, John; Madabhushi, Anant
2012-05-01
Diagnosis of prostate cancer (CaP) currently involves examining tissue samples for CaP presence and extent via a microscope, a time-consuming and subjective process. With the advent of digital pathology, computer-aided algorithms can now be applied to disease detection on digitized glass slides. The size of these digitized histology images (hundreds of millions of pixels) presents a formidable challenge for any computerized image analysis program. In this paper, we present a boosted Bayesian multiresolution (BBMR) system to identify regions of CaP on digital biopsy slides. Such a system would serve as an important preceding step to a Gleason grading algorithm, where the objective would be to score the invasiveness and severity of the disease. In the first step, our algorithm decomposes the whole-slide image into an image pyramid comprising multiple resolution levels. Regions identified as cancer via a Bayesian classifier at lower resolution levels are subsequently examined in greater detail at higher resolution levels, thereby allowing for rapid and efficient analysis of large images. At each resolution level, ten image features are chosen from a pool of over 900 first-order statistical, second-order co-occurrence, and Gabor filter features using an AdaBoost ensemble method. The BBMR scheme, operating on 100 images obtained from 58 patients, yielded: 1) areas under the receiver operating characteristic curve (AUC) of 0.84, 0.83, and 0.76, respectively, at the lowest, intermediate, and highest resolution levels and 2) an eightfold savings in terms of computational time compared to running the algorithm directly at full (highest) resolution. The BBMR model outperformed (in terms of AUC): 1) individual features (no ensemble) and 2) a random forest classifier ensemble obtained by bagging multiple decision tree classifiers. The apparent drop-off in AUC at higher image resolutions is due to lack of fine detail in the expert annotation of CaP and is not an artifact of the
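The coarse-to-fine cascade (classify at low resolution, re-examine only flagged regions at the next level) can be sketched in 1-D. A plain threshold stands in for the boosted Bayesian classifier, and the intensity values are toy data.

```python
def downsample(signal, factor=2):
    """Block-average a signal by an integer factor."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal) - factor + 1, factor)]

def cascade_detect(signal, threshold):
    """Flag suspicious regions at half resolution, then re-test only those
    regions at full resolution. Returns full-resolution hit indices."""
    coarse = downsample(signal)
    candidates = [i for i, v in enumerate(coarse) if v > threshold / 2]
    hits = []
    for c in candidates:  # refine flagged regions only
        for i in (2 * c, 2 * c + 1):
            if i < len(signal) and signal[i] > threshold:
                hits.append(i)
    return hits

intensity = [0, 1, 0, 2, 9, 8, 1, 0, 7, 1]
```

Only the regions that survive the cheap coarse test incur the cost of fine-level analysis, which is the source of the paper's reported eightfold speed-up over running directly at full resolution.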
NASA Astrophysics Data System (ADS)
Belgiu, Mariana; Drǎguţ, Lucian
2014-10-01
Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea
Virtual volume resection using multi-resolution triangular representation of B-spline surfaces.
Ruskó, László; Mátéka, Ilona; Kriston, András
2013-08-01
Computer assisted analysis of organs has an important role in clinical diagnosis and therapy planning. Visualization and manipulation of 3-dimensional (3D) objects are key features of medical image processing tools. The goal of this work was to develop an efficient and easy-to-use tool that allows the physician to partition a segmented organ into its segments or lobes. The proposed tool allows the user to define a cutting surface by drawing some traces on 2D sections of a 3D object, cut the object into two pieces with a smooth surface that fits the input traces, and iterate the process until the object is partitioned at the desired level. The tool is based on an algorithm that interpolates the user-defined traces with a B-spline surface and computes a binary cutting volume that represents the different sides of the surface. The computation of the cutting volume is based on the multi-resolution triangulation of the B-spline surface. The proposed algorithm was integrated into an open-source medical image processing framework. Using the tool, the user can select the object to be partitioned (e.g. a segmented liver), define the cutting surface based on the corresponding medical image (visualizing the internal structure of the liver), cut the selected object, and iterate the process. In case of liver segment separation, the cuts can be performed according to a predefined sequence, which makes it possible to label the temporary as well as the final partitions (lobes, segments) automatically. The presented tool was evaluated for anatomical segment separation of the liver involving 14 cases and for virtual liver tumor resection involving one case. The segment separation was repeated 3 different times by one physician for all cases, and the average and the standard deviation of segment volumes were computed. According to the test experiences the presented algorithm proved to be efficient and user-friendly enough to perform free form cuts for liver
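The B-spline interpolation underlying the cutting surface can be illustrated in 1-D with the standard uniform cubic B-spline basis (fitting a surface to user traces adds a second parameter direction and a least-squares solve, omitted here; the control values are illustrative).

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Point on a uniform cubic B-spline segment, t in [0, 1].

    Standard basis: B0 = (1-t)^3/6, B1 = (3t^3-6t^2+4)/6,
    B2 = (-3t^3+3t^2+3t+1)/6, B3 = t^3/6.
    """
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6
    b3 = t ** 3 / 6
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3

# The basis functions sum to 1, so equal control values yield a flat curve,
# and collinear control values yield points on that line.
flat = cubic_bspline_point(2.0, 2.0, 2.0, 2.0, 0.3)
```

The smoothness that makes this basis attractive for free-form cuts is built in: adjacent segments share control points, giving C² continuity across the whole surface. Triangulating the evaluated surface at several densities gives the multi-resolution representation used to rasterize the binary cutting volume.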
Global Multi-Resolution Topography (GMRT) Synthesis - Version 2.0
NASA Astrophysics Data System (ADS)
Ferrini, V.; Coplan, J.; Carbotte, S. M.; Ryan, W. B.; O'Hara, S.; Morton, J. J.
2010-12-01
The detailed morphology of the global ocean floor is poorly known, with most areas mapped only at low resolution using satellite-based measurements. Ship-based sonars provide data at resolution sufficient to quantify seafloor features related to the active processes of erosion, sediment flow, volcanism, and faulting. To date, these data have been collected in a small fraction of the global ocean (<10%). The Global Multi-Resolution Topography (GMRT) synthesis makes use of sonar data collected by scientists and institutions worldwide, merging them into a single continuously updated compilation of high-resolution seafloor topography. Several applications, including GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org), make use of the GMRT Synthesis and provide direct access to images and underlying gridded data. Source multibeam files included in the compilation can also be accessed through custom functionality in GeoMapApp. The GMRT Synthesis began in 1992 as the Ridge Multibeam Synthesis. It was subsequently expanded to include bathymetry data from the Southern Ocean, and now includes data from throughout the global oceans. Our design strategy has been to make data available at the full native resolution of shipboard sonar systems, which historically has been ~100 m in the deep sea (Ryan et al., 2009). A new release of the GMRT Synthesis in Fall of 2010 includes several significant improvements over our initial strategy. In addition to increasing the number of cruises included in the compilation by over 25%, we have developed a new protocol for handling multibeam source data, which has improved the overall quality of the compilation. The new tileset also includes a discrete layer of sonar data in the public domain that are gridded to the full resolution of the sonar system, with data gridded at 25 m in some areas. This discrete layer of sonar data has been provided to Google for integration into Google’s default ocean base map. NOAA
Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios; Skouroliakou, Aikaterini; Hazle, John D.; Kagadis, George C. E-mail: George.Kagadis@med.upatras.gr
2014-07-15
Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists’ qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A new wavelet
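The Speckle Suppression Index reported above is commonly computed as the ratio of coefficients of variation after and before filtering (values below 1 indicate suppression); a toy sketch with synthetic gamma-distributed speckle, not the paper's data:

```python
import numpy as np

def speckle_suppression_index(original, filtered):
    """SSI: coefficient of variation of the filtered image divided by that
    of the original; values below 1 indicate speckle reduction."""
    cv_f = np.std(filtered) / np.mean(filtered)
    cv_o = np.std(original) / np.mean(original)
    return cv_f / cv_o

rng = np.random.default_rng(1)
clean = np.full((128, 128), 100.0)
# Multiplicative gamma speckle (mean 1), a standard synthetic US noise model.
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
smoothed = 0.5 * (speckled + clean)       # stand-in for a real despeckler
print(speckle_suppression_index(speckled, smoothed))  # below 1
```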
The Multi-Resolution Land Characteristics (MRLC) Consortium is a good example of the national benefits of federal collaboration. It started in the mid-1990s as a small group of federal agencies with the straightforward goal of compiling a comprehensive national Landsat dataset t...
NASA Astrophysics Data System (ADS)
Dabiri, Z.; Hölbling, D.; Lang, S.; Bartsch, A.
2015-12-01
The increasing availability of synthetic aperture radar (SAR) data from a range of different sensors necessitates efficient methods for semi-automated information extraction at multiple spatial scales for different fields of application. The focus of the presented study is two-fold: 1) to evaluate the applicability of multi-temporal TerraSAR-X imagery for multiresolution segmentation, and 2) to identify suitable Scale Parameters through different weighting of the homogeneity criteria, mainly colour variance. Multiresolution segmentation was used for segmentation of multi-temporal TerraSAR-X imagery, and the ESP (Estimation of Scale Parameter) tool was used to identify suitable Scale Parameters for image segmentation. The validation of the segmentation results was performed using very high resolution WorldView-2 imagery and a reference map created by an ecological expert. The results of multiresolution segmentation revealed that, in the context of object-based image analysis, the TerraSAR-X images are suitable for generating optimal image objects. Furthermore, the ESP tool can be used as an indicator for estimating the Scale Parameter for multiresolution segmentation of TerraSAR-X imagery. Additionally, for more reliable results, this study suggests that the homogeneity criterion of colour, in a variance-based segmentation algorithm, needs to be set to high values. Setting the shape/colour criteria to 0.005/0.995 or 0.00/1 led to the best results and to the creation of adequate image objects.
NASA Astrophysics Data System (ADS)
Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.
2013-12-01
Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to those on Earth, it became clear that some solid planets and satellites once had water flows or pseudo-hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activities, the valleys and channels on the martian surface were investigated by a number of remote sensing and in situ measurements. Among all available data sets, the stereo DTMs and ortho images from various successful orbital sensors, such as the High Resolution Stereo Camera (HRSC), the Context Camera (CTX), and the High Resolution Imaging Science Experiment (HiRISE), are most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and ortho images over fluvial areas has some limitations, so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus, in this study we tested the application of hydraulic analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct a high accuracy simulation together with 150-1.2 m DTMs over test sites including Athabasca and Bahram Valles. For application to the martian surface, the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2, and Manning's n value (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP was superior to other studies that used a single DTM source for hydraulic analysis. HRSC DTMs, covering 50-150 m resolutions, were used to trace rough
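One common way to gravity-adjust Manning's n is to scale it as g^-1/2, since n absorbs a sqrt(g) factor in the Manning/Chézy relation; the abstract does not state the exact scaling used, so this sketch is an assumption:

```python
import math

G_EARTH = 9.81   # m s^-2
G_MARS = 3.71    # m s^-2, the value used in the adapted LISFLOOD-FP

def manning_n_for_mars(n_earth):
    """Scale a terrestrial Manning's n to Mars, assuming n varies as g^-1/2
    (one published convention; illustrative, not the authors' exact recipe)."""
    return n_earth * math.sqrt(G_EARTH / G_MARS)

n_earth = 0.03   # illustrative roughness for a natural channel
print(round(manning_n_for_mars(n_earth), 4))   # ≈ 0.0488
```

Lower gravity thus yields a larger effective friction value for the same bed roughness.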
A new multiresolution method applied to the 3D reconstruction of small bodies
NASA Astrophysics Data System (ADS)
Capanna, C.; Jorda, L.; Lamy, P. L.; Gesquiere, G.
2012-12-01
The knowledge of the three-dimensional (3D) shape of small solar system bodies, such as asteroids and comets, is essential in determining their global physical properties (volume, density, rotational parameters). It also allows performing geomorphological studies of their surface through the characterization of topographic features, such as craters, faults, landslides, grooves, and hills. In the case of small bodies, the shape is often only constrained by images obtained by interplanetary spacecraft. Several techniques are available to retrieve 3D global shapes from these images. Stereography, which relies on control points, has been extensively used in the past, most recently to reconstruct the nucleus of comet 9P/Tempel 1 [Thomas (2007)]. The most accurate methods are however photogrammetry and photoclinometry, often used in conjunction with stereography. Stereophotogrammetry (SPG) has been used to reconstruct the shapes of the nucleus of comet 19P/Borrelly [Oberst (2004)] and of the asteroid (21) Lutetia [Preusker (2012)]. Stereophotoclinometry (SPC) has allowed retrieving an accurate shape of the asteroids (25143) Itokawa [Gaskell (2008)] and (2867) Steins [Jorda (2012)]. We present a new photoclinometry method based on the deformation of a 3D triangular mesh [Capanna (2012)] using a multi-resolution scheme which starts from a sphere of 300 facets and yields a shape model with 100,000 facets. Our strategy is inspired by the "Full Multigrid" method [Botsch (2007)] and consists of alternating between two resolutions in order to obtain an optimized shape model at a given resolution before moving to the higher resolution. In order to improve the robustness of our method, we use a set of control points obtained by stereography. Our method has been tested on images acquired by the OSIRIS visible camera, aboard the Rosetta spacecraft of the European Space Agency, during the fly-by of asteroid (21) Lutetia in July 2010. We present the corresponding 3D shape
Shi, Kuangyu; Fürst, Sebastian; Sun, Liang; Lukas, Mathias; Navab, Nassir; Förster, Stefan; Ziegler, Sibylle I
2016-11-19
PET/MR is an emerging hybrid imaging modality. However, attenuation correction (AC) remains challenging for hybrid PET/MR in generating accurate PET images. Segmentation-based methods on special MR sequences are most widely recommended by vendors. However, their accuracy is usually not high. Individual refinement of available certified attenuation maps may be helpful for further clinical applications. In this study, we proposed a multi-resolution regional learning (MRRL) scheme to utilize the internal consistency of the patient data. The anatomical and AC MR sequences of the same subject were employed to guide the refinement of the provided AC maps. The developed algorithm was tested on 9 patients scanned consecutively with PET/MR and PET/CT (7 [(18)F]FDG and 2 [(18)F]FET). The preliminary results showed that MRRL can improve the accuracy of segmented attenuation maps and consequently the accuracy of PET reconstructions.
The multi-module, multi-resolution system (M3R): A novel small-animal SPECT system
Hesterman, Jacob Y.; Kupinski, Matthew A.; Furenlid, Lars R.; Wilson, Donald W.; Barrett, Harrison H.
2008-01-01
We have designed and built an inexpensive, high-resolution, tomographic imaging system, dubbed the multi-module, multi-resolution system, or M3R. Slots machined into the system shielding allow for the interchange of pinhole plates, enabling the system to operate over a wide range of magnifications and with virtually any desired pinhole configuration. The flexibility of the system allows system optimization for specific imaging tasks and also allows for modifications necessary due to improved detectors, electronics, and knowledge of system construction (e.g., system sensitivity optimization). We provide an overview of M3R, focusing primarily on system design and construction, aperture construction, and calibration methods. Reconstruction algorithms will be described and reconstructed images presented. PMID:17441245
Torres, M E; Añino, M M; Schlotthauer, G
2003-12-01
It is well known that, from a dynamical point of view, sudden variations in the physiological parameters that govern certain diseases can cause qualitative changes in the dynamics of the corresponding physiological process. The purpose of this paper is to introduce a technique that allows the automated temporal localization of slight changes in a parameter of the law that governs the nonlinear dynamics of a given signal. This tool inherits from the multiresolution entropies the ability to show these changes as statistical variations at each scale. These variations are captured in the corresponding principal component. Appropriately combining these techniques with a statistical change detector, a complexity change detection algorithm is obtained. The relevance of the approach, together with its robustness in the presence of moderate noise, is discussed in numerical simulations, and the automatic detector is applied to real and simulated biological signals.
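The idea of tracking scale-wise entropy changes can be sketched with a Haar detail band and windowed histogram entropies; this toy example (not the authors' algorithm) uses a signal whose governing parameter, the variance, jumps halfway through:

```python
import numpy as np

def haar_detail(x):
    """One level of the Haar transform: (approximation, detail) coefficients."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2

def window_entropy(coeffs, edges, n_windows=8):
    """Shannon entropy (bits) of the coefficient histogram in each window,
    using fixed bin edges so entropies are comparable across windows."""
    out = []
    for w in np.array_split(coeffs, n_windows):
        hist, _ = np.histogram(w, bins=edges)
        p = hist[hist > 0] / hist.sum()
        out.append(-np.sum(p * np.log2(p)))
    return np.array(out)

# Toy signal: variance (a hidden dynamics parameter) jumps at the midpoint.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1.0, 512), rng.normal(0, 3.0, 512)])
_, detail = haar_detail(x)
edges = np.linspace(-8, 8, 33)
print(window_entropy(detail, edges, n_windows=4))  # rises after the change
```

A change detector applied to such entropy sequences (per scale, then combined via PCA) is the spirit of the approach described above.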
Welsher, Kevin; Yang, Haw
2014-02-23
A detailed understanding of the cellular uptake process is essential to the development of cellular delivery strategies and to the study of viral trafficking. However, visualization of the entire process, encompassing the fast dynamics (local to the freely diffusing nanoparticle) as well as the state of the larger-scale cellular environment, remains challenging. Here, we introduce a three-dimensional multi-resolution method to capture, in real time, the transient events leading to cellular binding and uptake of peptide (HIV1-Tat)-modified nanoparticles. Applying this new method to observe the landing of nanoparticles on the cellular contour in three dimensions revealed long-range deceleration of the delivery particle, possibly due to interactions with cellular receptors. Furthermore, by using the nanoparticle as a nanoscale ‘dynamics pen’, we discovered an unexpected correlation between small membrane terrain structures and local nanoparticle dynamics. This approach could help to reveal the hidden mechanistic steps in a variety of multiscale processes.
A Multi-resolution, Multi-epoch Low Radio Frequency Survey of the Kepler K2 Mission Campaign 1 Field
NASA Astrophysics Data System (ADS)
Tingay, S. J.; Hancock, P. J.; Wayth, R. B.; Intema, H.; Jagannathan, P.; Mooley, K.
2016-10-01
We present the first dedicated radio continuum survey of a Kepler K2 mission field, Field 1, covering the North Galactic Cap. The survey is wide field, contemporaneous, multi-epoch, and multi-resolution in nature and was conducted at low radio frequencies between 140 and 200 MHz. The multi-epoch and ultra wide field (but relatively low resolution) part of the survey was provided by 15 nights of observation using the Murchison Widefield Array (MWA) over a period of approximately a month, contemporaneous with K2 observations of the field. The multi-resolution aspect of the survey was provided by the low resolution (4′) MWA imaging, complemented by non-contemporaneous but much higher resolution (20″) observations using the Giant Metrewave Radio Telescope (GMRT). The survey is, therefore, sensitive to the details of radio structures across a wide range of angular scales. Consistent with other recent low radio frequency surveys, no significant radio transients or variables were detected in the survey. The resulting source catalogs consist of 1085 and 1468 detections in the two MWA observation bands (centered at 154 and 185 MHz, respectively) and 7445 detections in the GMRT observation band (centered at 148 MHz), over 314 square degrees. The survey is presented as a significant resource for multi-wavelength investigations of the more than 21,000 target objects in the K2 field. We briefly examine our survey data against K2 target lists for dwarf star types (stellar types M and L) that have been known to produce radio flares.
NASA Astrophysics Data System (ADS)
Barrineau, C. P.; Dobreva, I. D.; Bishop, M. P.; Houser, C.
2014-12-01
Aeolian systems are ideal natural laboratories for examining self-organization in patterned landscapes, as certain wind regimes generate certain morphologies. Topographic information and scale-dependent analysis offer the opportunity to study such systems and characterize process-form relationships. A statistically based methodology for differentiating aeolian features would enable the quantitative association of certain surface characteristics with certain morphodynamic regimes. We conducted a multi-resolution analysis of LiDAR elevation data to assess scale-dependent morphometric variations in an aeolian landscape in South Texas. For each pixel, mean elevation values are calculated along concentric circles moving outward at 100-meter intervals (e.g. 500 m, 600 m, 700 m from the pixel). The calculated average elevation values, plotted against distance from the pixel of interest as curves, are used to differentiate multi-scalar variations in elevation across the landscape. In this case, it is hypothesized that these curves may be used to quantitatively differentiate certain morphometries from others, much as a spectral signature may be used to classify paved surfaces from natural vegetation, for example. After generating multi-resolution curves for all the pixels in a selected area of interest (AOI), a Principal Components Analysis is used to highlight commonalities and singularities among the curves generated from pixels across the AOI. Our findings suggest that the resulting components could be used for identification of discrete aeolian features like open sands, trailing ridges and active dune crests, and, in particular, zones of deflation. This new approach to landscape characterization not only works to mitigate the bias introduced when researchers must select training pixels for morphometric investigations, but can also reveal patterning in aeolian landscapes that would not be as obvious without quantitative characterization.
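The concentric-circle elevation curves and the subsequent principal components step can be sketched as follows, on a hypothetical toy DEM (radii here are in pixels rather than metres, and only two pixels are profiled):

```python
import numpy as np

def ring_mean_curve(dem, row, col, radii, width=1.5):
    """Mean elevation on concentric rings (radii in pixels) around one pixel."""
    rr, cc = np.indices(dem.shape)
    dist = np.hypot(rr - row, cc - col)
    return np.array([dem[np.abs(dist - r) < width].mean() for r in radii])

# Hypothetical toy DEM: one dune-like Gaussian bump on a flat plain.
n = 101
yy, xx = np.mgrid[0:n, 0:n]
dem = 10.0 * np.exp(-((xx - 50) ** 2 + (yy - 50) ** 2) / (2 * 12.0 ** 2))

radii = np.arange(5, 41, 5)
crest_curve = ring_mean_curve(dem, 50, 50, radii)  # decays with radius
plain_curve = ring_mean_curve(dem, 10, 10, radii)  # stays near zero

# Stack the per-pixel curves and extract principal axes of variation.
curves = np.vstack([crest_curve, plain_curve])
centered = curves - curves.mean(axis=0)
_, _, components = np.linalg.svd(centered, full_matrices=False)
print(components.shape)
```

In the study, curves from every pixel of the AOI feed the PCA, so the leading components act as morphometric "signatures" of dune features.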
NASA Astrophysics Data System (ADS)
Johnson, Glen D.
Protection of ecological resources requires the study and management of whole landscape-level ecosystems. The subsequent need for characterizing landscape structure has led to a variety of measurements for assessing different aspects of spatial patterns; however, most of these measurements are known to depend on both the spatial extent of a specified landscape and the measurement grain; therefore, multi-scale measurements would be more informative. In response, a new method is developed for obtaining a multi-resolution characterization of fragmentation patterns in land cover raster maps within a fixed geographic extent. The concept of conditional entropy is applied to quantify landscape fragmentation as one moves from larger "parent" land cover pixels to smaller "child" pixels that are hierarchically nested within the parent pixels. When applied over a range of resolutions, one obtains a "conditional entropy profile" that can be defined by three parameters. A method for stochastically simulating landscapes is also developed which allows evaluation of the expected behavior of conditional entropy profiles under known landscape generating mechanisms. This modeling approach also allows for determining sample distributions of different landscape measurements via Monte Carlo simulations. Using an eight-category raster map that was based on 30-meter resolution LANDSAT TM images, a suite of landscape measurements was obtained for each of 102 Pennsylvania watersheds (a complete tessellation of the state). This included conditional entropy profiles based on the random filter for degrading raster map resolutions. For these watersheds, the conditional entropy profiles are quite sensitive to changing pattern, and together with the readily-available marginal land cover proportions, appear to be very valuable for categorizing landscapes with respect to common types. These profiles have the further appeal of presenting multi-scale fragmentation patterns in a way that can be easily
Min-Chun Yang; Woo Kyung Moon; Wang, Yu-Chiang Frank; Min Sun Bae; Chiun-Sheng Huang; Jeon-Hor Chen; Ruey-Feng Chang
2013-12-01
Computer-aided diagnosis (CAD) systems in gray-scale breast ultrasound images have the potential to reduce unnecessary biopsy of breast masses. The purpose of our study is to develop a robust CAD system based on texture analysis. First, gray-scale invariant features are extracted from ultrasound images via multi-resolution ranklet transform. Thus, one can apply linear support vector machines (SVMs) on the resulting gray-level co-occurrence matrix (GLCM)-based texture features for discriminating the benign and malignant masses. To verify the effectiveness and robustness of the proposed texture analysis, breast ultrasound images obtained from three different platforms are evaluated based on cross-platform training/testing and leave-one-out cross-validation (LOO-CV) schemes. We compare our proposed features with those extracted by wavelet transform in terms of receiver operating characteristic (ROC) analysis. The AUC (area under the ROC curve) values for the three databases via ranklet transform are 0.918 (95% confidence interval [CI], 0.848 to 0.961), 0.943 (95% CI, 0.906 to 0.968), and 0.934 (95% CI, 0.883 to 0.961), respectively, while those via wavelet transform are 0.847 (95% CI, 0.762 to 0.910), 0.922 (95% CI, 0.878 to 0.958), and 0.867 (95% CI, 0.798 to 0.914), respectively. Experiments with the cross-platform training/testing scheme between each database reveal that the diagnostic performance of our texture analysis using ranklet transform is less sensitive to the sonographic ultrasound platforms. Also, we adopt several co-occurrence statistics in terms of quantization levels and orientations (i.e., descriptor settings) for computing the co-occurrence matrices with 0.632+ bootstrap estimators to verify the use of the proposed texture analysis. These experiments suggest that the texture analysis using multi-resolution gray-scale invariant features via ranklet transform is useful for designing a robust CAD system.
NASA Astrophysics Data System (ADS)
Jeng, Yih; Lin, Chun-Hung; Li, Yi-Wei; Chen, Chih-Sung; Yu, Hung-Ming
2011-03-01
Fourier-based algorithms originally developed for the processing of seismic data are applied routinely in ground-penetrating radar (GPR) data processing, but these conventional methods may produce an abundance of spurious harmonics without any geological meaning. We propose a new approach in this study, based essentially on multiresolution wavelet analysis (MRA), for GPR noise suppression. The 2D GPR section is similar to an image in all aspects if we consider each data point of the GPR section to be an image pixel. This technique is an image analysis with sub-image decomposition. We start from the basic image decomposition procedure using the conventional MRA approach and establish the filter bank accordingly. With reasonable knowledge of the data and noise and a basic assumption about the target, it is possible to determine the components with high S/N ratio and eliminate noisy components. The MRA procedure is performed further for the components containing both signal and noise. We treated the selected component as an original image and applied the MRA procedure again to that single component with a mother wavelet of higher resolution. This recursive procedure with finer input allows us to extract features or noise events from GPR data more effectively than the conventional process. To assess the performance of the MRA filtering method, we first test it on a simple synthetic model and then on experimental data acquired from a control site using a 400 MHz GPR system. A comparison of results from our method and from conventional filtering techniques demonstrates the effectiveness of the sub-image MRA method, particularly in removing ringing noise and scattering events. A field study was carried out in a trenched fault zone where a faulting structure was present at shallow depths, in order to assess the feasibility of improving the data S/N ratio by applying the sub-image multiresolution analysis. In contrast to the conventional
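The sub-image decomposition step can be sketched with a one-level 2D Haar transform in numpy, suppressing one noisy subband before inversion; this is a minimal stand-in for the full MRA filter bank, with a synthetic "section" rather than real GPR data:

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

# Toy "GPR section": a smooth reflector plus high-frequency noise.
rng = np.random.default_rng(3)
y, x = np.mgrid[0:64, 0:64]
section = np.sin(y / 8.0) + 0.3 * rng.standard_normal((64, 64))
ll, lh, hl, hh = haar2d(section)
denoised = ihaar2d(ll, lh, hl, np.zeros_like(hh))  # drop the noisiest band
print(denoised.shape)
```

The paper's recursive scheme re-decomposes a mixed signal/noise subband with a finer mother wavelet; here a single zeroed subband illustrates the keep/eliminate decision.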
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Morton, J. J.; Carbotte, S. M.
2016-02-01
The Marine Geoscience Data System (MGDS: www.marine-geo.org) provides a suite of tools and services for free public access to data acquired throughout the global oceans including maps, grids, near-bottom photos, and geologic interpretations that are essential for habitat characterization and marine spatial planning. Users can explore, discover, and download data through a combination of APIs and front-end interfaces that include dynamic service-driven maps, a geospatially enabled search engine, and an easy to navigate user interface for browsing and discovering related data. MGDS offers domain-specific data curation with a team of scientists and data specialists who utilize a suite of back-end tools for introspection of data files and metadata assembly to verify data quality and ensure that data are well-documented for long-term preservation and re-use. Funded by the NSF as part of the multi-disciplinary IEDA Data Facility, MGDS also offers Data DOI registration and links between data and scientific publications. MGDS produces and curates the Global Multi-Resolution Topography Synthesis (GMRT: gmrt.marine-geo.org), a continuously updated Digital Elevation Model that seamlessly integrates multi-resolutional elevation data from a variety of sources including the GEBCO 2014 (~1 km resolution) and International Bathymetric Chart of the Southern Ocean (~500 m) compilations. A significant component of GMRT includes ship-based multibeam sonar data, publicly available through NOAA's National Centers for Environmental Information, that are cleaned and quality controlled by the MGDS Team and gridded at their full spatial resolution (typically 100 m resolution in the deep sea). Additional components include gridded bathymetry products contributed by individual scientists (up to meter scale resolution in places), publicly accessible regional bathymetry, and high-resolution terrestrial elevation data. New data are added to GMRT on an ongoing basis, with two scheduled releases
Multi-resolution X-ray CT research applied on geo-materials
NASA Astrophysics Data System (ADS)
Cnudde, Dr.
2009-04-01
Many research topics in geology concern the study of internal processes of geo-materials on a pore-scale level in order to estimate their macroscopic behaviour. The microstructure of a porous medium and the physical characteristics of the solids and the fluids that occupy the pore space determine several macroscopic transport properties of the medium. Understanding the relationship between microstructure and transport is therefore of great theoretical and practical interest in many fields of technology. High resolution X-ray CT is becoming a widely used technique to study geo-materials in 3D at a pore-scale level. To be able to distinguish between the different components of a sample on a pore-scale level, it is important to obtain a high resolution, good contrast and a low noise level. The resolution that can be reached not only depends on the sample size and composition, but also on the specifications of the X-ray source and detector used and on the geometry of the system. An estimate of the achievable resolution with a certain setup can be derived by dividing the diameter of the sample by the number of pixel columns in the detector. For higher resolutions, the resolution is mainly limited by the focal spot size of the X-ray tube. Other factors like sample movement and deformation by thermal or mechanical effects also have a negative influence on the system's resolution, but they can usually be suppressed by a well-considered positioning of the sample and by monitoring its environment. Image contrast is subject to the amount of X-ray absorption by the sample. It depends both on the energy of the X-rays and on the density and atomic number of the present components. Contrast can be improved by carefully selecting the main X-ray energy level, which depends on both the X-ray source and the detector used. In some cases, it can be enhanced by doping the sample with a contrast agent. Both contrast and noise level depend on the detectability of the transmitted X
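The rule of thumb above (sample diameter divided by the number of detector pixel columns) amounts to a one-line calculation; the sample size and detector width below are made-up values:

```python
def estimated_resolution_um(sample_diameter_mm, detector_columns):
    """Achievable voxel size when the sample diameter spans the detector width."""
    return sample_diameter_mm * 1000.0 / detector_columns

# A hypothetical 5 mm core imaged on a 2000-column detector:
print(estimated_resolution_um(5, 2000))   # 2.5 µm per voxel
```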
NASA Technical Reports Server (NTRS)
Carabajal, Claudia C.; Harding, David J.; Boy, Jean-Paul; Danielson, Jeffrey J.; Gesch, Dean B.; Suchdeo, Vijay P.
2011-01-01
Supported by NASA's Earth Surface and Interior (ESI) Program, we are producing a global set of Ground Control Points (GCPs) derived from the Ice, Cloud and land Elevation Satellite (ICESat) altimetry data. From February 2003 to October 2009, ICESat obtained nearly global measurements of land topography (±86° latitude) with unprecedented accuracy, sampling the Earth's surface at discrete, approximately 50 m diameter laser footprints spaced 170 m along the altimetry profiles. We apply stringent editing to select the highest quality elevations, and use these GCPs to characterize and quantify spatially varying elevation biases in Digital Elevation Models (DEMs). In this paper, we present an evaluation of the soon to be released Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). Elevation biases and error statistics have been analyzed as a function of land cover and relief. The GMTED2010 products are a large improvement over previous sources of elevation data at comparable resolutions. RMSEs for all products and terrain conditions are below 7 m and typically are about 4 m. The GMTED2010 products are biased upward with respect to the ICESat GCPs on average by approximately 3 m.
NASA Technical Reports Server (NTRS)
Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.
1996-01-01
We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.
NASA Astrophysics Data System (ADS)
McEver, Jimmie; Davis, Paul K.; Bigelow, James H.
2000-06-01
We have developed and used families of multiresolution and multiple-perspective models (MRM and MRMPM), both in our substantive analytic work for the Department of Defense and to learn more about how such models can be designed and implemented. This paper is a brief case history of our experience with a particular family of models addressing the use of precision fires in interdicting and halting an invading army. Our models were implemented as closed-form analytic solutions, in spreadsheets, and in the more sophisticated Analytica™ environment. We also drew on an entity-level simulation for data. The paper reviews the importance of certain key attributes of development environments (visual modeling, interactive languages, friendly use of array mathematics, facilities for experimental design and configuration control, statistical analysis tools, graphical visualization tools, interactive post-processing, and relational database tools). These can go a long way towards facilitating MRMPM work, but many of these attributes are not yet widely available (or available at all) in commercial model-development tools--especially for use with personal computers. We conclude with some lessons learned from our experience.
NASA Astrophysics Data System (ADS)
Ahmad Fauzi, Mohammad Faizal; Gokozan, Hamza Numan; Elder, Brad; Puduvalli, Vinay K.; Otero, Jose J.; Gurcan, Metin N.
2014-03-01
Brain cancer surgery requires intraoperative consultation by neuropathology to guide surgical decisions regarding the extent to which the tumor undergoes gross total resection. In this context, the differential diagnosis between glioblastoma and metastatic cancer is challenging as the decision must be made during surgery in a short time-frame (typically 30 minutes). We propose a method to classify glioblastoma versus metastatic cancer based on extracting textural features from the non-nuclei region of cytologic preparations. For glioblastoma, these regions of interest are filled with glial processes between the nuclei, which appear as anisotropic thin linear structures. For metastasis, these regions correspond to a more homogeneous appearance, thus suitable texture features can be extracted from these regions to distinguish between the two tissue types. In our work, we use Discrete Wavelet Frames to characterize the underlying texture, owing to their multi-resolution modeling capability. The textural characterization is carried out primarily in the non-nuclei regions after nuclei regions are segmented by adapting our visually meaningful decomposition segmentation algorithm to this problem. The k-nearest neighbor method was then used to classify the features into the glioblastoma or metastatic cancer class. Experiments on 53 images (29 glioblastomas and 24 metastases) resulted in average accuracies as high as 89.7% for glioblastoma, 87.5% for metastasis, and 88.7% overall. Further studies are underway to incorporate nuclei region features into classification on an expanded dataset, as well as expanding the classification to more types of cancers.
Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coluston, John
2014-01-01
The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans.
Thomas, S; Abdulhay, Enas; Baconnier, Pierre; Fontecave, Julie; Francoise, Jean-Pierre; Guillaud, Francois; Hannaert, Patrick; Hernandez, Alfredo; Le Rolle, Virginie; Maziere, Pierre; Tahi, Fariza; Zehraoui, Farida
2007-01-01
We present progress on a comprehensive, modular, interactive modeling environment centered on overall regulation of blood pressure and body fluid homeostasis. We call the project SAPHIR, for "a Systems Approach for PHysiological Integration of Renal, cardiac, and respiratory functions". The project uses state-of-the-art multi-scale simulation methods. The basic core model will give succinct input-output (reduced-dimension) descriptions of all relevant organ systems and regulatory processes, and it will be modular, multi-resolution, and extensible, in the sense that detailed submodules of any process(es) can be "plugged in" to the basic model in order to explore, e.g., system-level implications of local perturbations. The goal is to keep the basic core model compact enough to ensure fast execution time (in view of eventual use in the clinic) and yet to allow elaborate detailed modules of target tissues or organs in order to focus on the problem area while maintaining the system-level regulatory compensations.
NASA Astrophysics Data System (ADS)
Molini, A.
2012-12-01
Precipitation is one of the major drivers of ecosystem dynamics. Such control is the result of complex dynamical interactions, often nonlinear, exerted over a wide range of space and time scales. For this reason, even though precipitation variability and intermittency are known to be among the main drivers of plant production, with a consequent influence on the carbon and nitrogen cycles, the complete pathway of such forcing often remains unclear. Traditional time series analysis bases the study of these inter-connections on linear correlation statistics. However, the possible presence of causal dynamical connections, as well as non-linear couplings and non-stationarity, can affect the performance of these tools. Additionally, dynamical drivers can act simultaneously over different space and time scales. Given this premise, this talk explores linear and non-linear correlation patterns, information flows, and directional couplings characterizing the control of precipitation on ecosystem dynamics, using an ensemble of statistics borrowed from information theory, non-linear dynamical systems analysis, and multi-resolution spectral decomposition. In particular, we focus on the development of an extension to the frequency domain of delayed correlation and conditional mutual information functions, and on the implementation of directional coupling measures such as conditional spectral causality, the phase-slope index, and transfer entropy in the wavelet domain. Several examples, from different climatic regimes, are discussed with the goal of highlighting the strengths and weaknesses of these statistics.
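Of the directional measures mentioned, transfer entropy is the simplest to demonstrate. The sketch below is a minimal discrete-symbol estimator with history length 1, not the wavelet-domain implementation discussed in the talk; the function name is illustrative.

```python
import math
from collections import Counter

def transfer_entropy(x, y, base=2):
    """Discrete transfer entropy TE(X -> Y) with history length 1:
    sum over p(y1, y0, x0) * log[ p(y1 | y0, x0) / p(y1 | y0) ].
    x, y: equal-length sequences of hashable symbols."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (future, own past, driver past)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_hist = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * math.log(p_cond_full / p_cond_hist, base)
    return te
```

The asymmetry TE(X→Y) versus TE(Y→X) is what makes the measure directional, in contrast to the symmetric correlation statistics criticized above.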
Bouman, Peter; Meng, Xiao-Li; Dignam, James; Dukić, Vanja
2014-01-01
In multicenter studies, one often needs to make inference about a population survival curve based on multiple, possibly heterogeneous survival data from individual centers. We investigate a flexible Bayesian method for estimating a population survival curve based on a semiparametric multiresolution hazard model that can incorporate covariates and account for center heterogeneity. The method yields a smooth estimate of the survival curve for “multiple resolutions” or time scales of interest. The Bayesian model used has the capability to accommodate general forms of censoring and a priori smoothness assumptions. We develop a model checking and diagnostic technique based on the posterior predictive distribution and use it to identify departures from the model assumptions. The hazard estimator is used to analyze data from 110 centers that participated in a multicenter randomized clinical trial to evaluate tamoxifen in the treatment of early stage breast cancer. Of particular interest are the estimates of center heterogeneity in the baseline hazard curves and in the treatment effects, after adjustment for a few key clinical covariates. Our analysis suggests that the treatment effect estimates are rather robust, even for a collection of small trial centers, despite variations in center characteristics. PMID:25620824
Carabajal, C.C.; Harding, D.J.; Boy, J.-P.; Danielson, J.J.; Gesch, D.B.; Suchdeo, V.P.
2011-01-01
Supported by NASA's Earth Surface and Interior (ESI) Program, we are producing a global set of Ground Control Points (GCPs) derived from the Ice, Cloud and land Elevation Satellite (ICESat) altimetry data. From February 2003 to October 2009, ICESat obtained nearly global measurements of land topography (±86° latitude) with unprecedented accuracy, sampling the Earth's surface at discrete ~50 m diameter laser footprints spaced 170 m along the altimetry profiles. We apply stringent editing to select the highest quality elevations, and use these GCPs to characterize and quantify spatially varying elevation biases in Digital Elevation Models (DEMs). In this paper, we present an evaluation of the soon to be released Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). Elevation biases and error statistics have been analyzed as a function of land cover and relief. The GMTED2010 products are a large improvement over previous sources of elevation data at comparable resolutions. RMSEs for all products and terrain conditions are below 7 m and typically are about 4 m. The GMTED2010 products are biased upward with respect to the ICESat GCPs on average by approximately 3 m. © 2011 Society of Photo-Optical Instrumentation Engineers (SPIE).
Welsher, Kevin; Yang, Haw
2014-02-23
A detailed understanding of the cellular uptake process is essential to the development of cellular delivery strategies and to the study of viral trafficking. However, visualization of the entire process, encompassing the fast dynamics (local to the freely diffusing nanoparticle) as well as the state of the larger-scale cellular environment, remains challenging. Here, we introduce a three-dimensional multi-resolution method to capture, in real time, the transient events leading to cellular binding and uptake of peptide (HIV1-Tat)-modified nanoparticles. Applying this new method to observe the landing of nanoparticles on the cellular contour in three dimensions revealed long-range deceleration of the delivery particle, possibly due to interactions with cellular receptors. Furthermore, by using the nanoparticle as a nanoscale 'dynamics pen', we discovered an unexpected correlation between small membrane terrain structures and local nanoparticle dynamics. This approach could help to reveal the hidden mechanistic steps in a variety of multiscale processes.
A GPU-based High-order Multi-resolution Framework for Compressible Flows at All Mach Numbers
NASA Astrophysics Data System (ADS)
Forster, Christopher J.; Smith, Marc K.
2016-11-01
The Wavelet Adaptive Multiresolution Representation (WAMR) method is a general and robust technique for providing grid adaptivity around the evolution of features in the solutions of partial differential equations and is capable of resolving length scales spanning 6 orders of magnitude. A new flow solver based on the WAMR method and specifically parallelized for the GPU computing architecture has been developed. The compressible formulation of the Navier-Stokes equations is solved using a preconditioned dual-time stepping method that provides accurate solutions for flows at all Mach numbers. The dual-time stepping method allows for control over the residuals of the governing equations and is used to complement the spatial error control provided by the WAMR method. An analytical inverse preconditioning matrix has been derived for an arbitrary number of species that allows preconditioning to be efficiently implemented on the GPU architecture. Additional modifications required for the combination of wavelet-adaptive grids and preconditioned dual-time stepping on the GPU architecture will be discussed. Verification using the Taylor-Green vortex to demonstrate the accuracy of the method will be presented.
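The wavelet-thresholding idea that drives this kind of grid adaptivity can be illustrated with a single Haar level on a 1D field: cells whose detail coefficient exceeds a tolerance are the ones an adaptive solver would refine. This is a schematic sketch of the principle, not the WAMR algorithm, and the function name is invented.

```python
import numpy as np

def refine_flags(field, tol=1e-3):
    """Flag cells whose one-level Haar detail coefficient exceeds tol.

    For each pair of neighbouring cells (a, b), the Haar detail
    d = (a - b) / 2 measures local non-smoothness; pairs with
    |d| > tol are marked for refinement, smooth regions are not.
    """
    a, b = field[0::2], field[1::2]
    detail = 0.5 * (a - b)
    flags = np.abs(detail) > tol
    return np.repeat(flags, 2)      # flag both cells of each pair
```

Because detail coefficients decay rapidly where the solution is smooth, only a small fraction of cells near sharp features gets flagged, which is the source of the method's large dynamic range in resolvable length scales.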
Genome-wide DNA polymorphism analyses using VariScan
Hutter, Stephan; Vilella, Albert J; Rozas, Julio
2006-01-01
Background DNA sequence polymorphism analysis can provide valuable information on the evolutionary forces shaping nucleotide variation, and provides an insight into the functional significance of genomic regions. Recent and ongoing genome projects will radically improve our capabilities to detect specific genomic regions shaped by natural selection. Currently available methods and software, however, are unsatisfactory for such genome-wide analysis. Results We have developed methods for the analysis of DNA sequence polymorphisms at the genome-wide scale. These methods, which have been tested on coalescent-simulated and actual data files from mouse and human, have been implemented in the VariScan software package version 2.0. Additionally, we have also incorporated a graphical user interface. The main features of this software are: i) exhaustive population-genetic analyses, including those based on coalescent theory; ii) analysis adapted to the shallow data generated by high-throughput genome projects; iii) use of genome annotations to conduct comprehensive analyses separately for different functional regions; iv) identification of relevant genomic regions by sliding-window and wavelet-multiresolution approaches; v) visualization of the results integrated with current genome annotations in commonly available genome browsers. Conclusion VariScan is a powerful and flexible suite of software for the analysis of DNA polymorphisms. The current version implements new algorithms, methods, and capabilities, providing an important tool for an exhaustive exploratory analysis of genome-wide DNA polymorphism data. PMID:16968531
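The sliding-window scan (feature iv above) can be illustrated with nucleotide diversity, one of the standard statistics such packages compute. This is a minimal sketch with invented function names; it bears no relation to VariScan's actual code.

```python
def nucleotide_diversity(seqs):
    """Nucleotide diversity (pi): mean pairwise differences per site
    over all pairs of aligned sequences."""
    n, length = len(seqs), len(seqs[0])
    diffs = sum(
        sum(a != b for a, b in zip(seqs[i], seqs[j]))
        for i in range(n) for j in range(i + 1, n)
    )
    n_pairs = n * (n - 1) // 2
    return diffs / (n_pairs * length)

def sliding_pi(seqs, window, step):
    """pi evaluated in sliding windows along the alignment,
    returning (window start, pi) tuples."""
    length = len(seqs[0])
    return [
        (start, nucleotide_diversity([s[start:start + window] for s in seqs]))
        for start in range(0, length - window + 1, step)
    ]
```

Regions where the scanned statistic departs sharply from the genomic background are the candidate targets of selection the abstract refers to; the wavelet-multiresolution approach plays the same role but pools evidence across window scales.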
Analyses to improve operational flexibility
Trikouros, N.G.
1986-01-01
Operational flexibility is greatly enhanced if the technical bases for plant limits and design margins are fully understood, and if the analyses needed to evaluate the effect of plant modifications or changes in operating modes on these parameters can be performed as required. Should a condition arise that might jeopardize a plant limit or reduce operational flexibility, it is necessary to understand the basis for the limit, or for the specific condition limiting flexibility, and to be capable of performing a reanalysis that either demonstrates the limit will not be violated or supports changing it. This paper provides examples of GPU Nuclear efforts in this regard. Examples of Oyster Creek and Three Mile Island operating experience are discussed.
Optimally combined confidence limits
NASA Astrophysics Data System (ADS)
Janot, P.; Le Diberder, F.
1998-02-01
An analytical and optimal procedure to combine statistically independent sets of confidence levels on a quantity is presented. This procedure does not impose any constraint on the methods followed by each analysis to derive its own limit. It incorporates the a priori statistical power of each of the analyses to be combined, in order to optimize the overall sensitivity. It can, in particular, be used to combine the mass limits obtained by several analyses searching for the Higgs boson in different decay channels, with different selection efficiencies, mass resolutions, and expected backgrounds. It can also be used to combine the mass limits obtained by several experiments (e.g. ALEPH, DELPHI, L3 and OPAL, at LEP 2) independently of the method followed by each of these experiments to derive its own limit. A method to derive the limit set by one analysis is also presented, along with an unbiased prescription to optimize the expected mass limit under the no-signal hypothesis.
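The paper derives its own optimal combination prescription; as a simpler, standard point of comparison, Fisher's method combines k independent p-values through the statistic X = -2 Σ ln p_i, distributed as chi-square with 2k degrees of freedom under the null. Since the chi-square survival function has a closed form for even degrees of freedom, the sketch below is exact. It illustrates combining independent significance levels in general, not the procedure of this paper.

```python
import math

def chi2_sf_even(x, dof):
    """Survival function of a chi-square variable with even dof,
    via the closed form exp(-x/2) * sum_{i < dof/2} (x/2)^i / i!."""
    assert dof % 2 == 0
    half = x / 2.0
    term, total = 1.0, 0.0
    for i in range(dof // 2):
        total += term
        term *= half / (i + 1)
    return math.exp(-half) * total

def fisher_combine(pvalues):
    """Fisher's combination of independent p-values:
    X = -2 * sum(ln p_i) is chi-square with 2k dof under the null."""
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return chi2_sf_even(stat, 2 * len(pvalues))
```

Unlike Fisher's method, the procedure of the paper additionally weights each input by its a priori statistical power, which is what makes the combination optimal in sensitivity.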
Al-Qunaieer, Fares S; Tizhoosh, Hamid R; Rahnamayan, Shahryar
2014-12-01
The level set approach to segmentation of medical images has received considerable attention in recent years. Evolving an initial contour to converge to the anatomical boundaries of an organ or tumor is a very appealing method, especially when it is based on a well-defined mathematical foundation. However, one drawback of such evolving methods is their high computation time. It is desirable to design and implement algorithms that are not only accurate and robust but also fast in execution. Bresson et al. have proposed a variational model using both boundary and region information as well as shape priors. The latter can be a significant factor in medical image analysis. In this work, we combine the variational level-set model with a multi-resolution approach to accelerate the processing. The question is whether a multi-resolution context can make the segmentation faster without affecting the accuracy. We also investigate whether a premature convergence, which happens in a much shorter time, would reduce accuracy. We examine multiple semiautomated configurations to segment the prostate gland in T2W MR images. Comprehensive experimentation is conducted using a data set of 100 patients (1,235 images) to verify the effectiveness of the multi-resolution level set with shape priors. The results show that the convergence speed can be increased by a factor of ≈2.5 without affecting the segmentation accuracy. Furthermore, a premature convergence approach drastically increases the segmentation speed by a factor of ≈17.9.
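The speed-up mechanism (solve coarsely, then refine only near the coarse answer) can be illustrated with a toy threshold segmentation standing in for the variational level-set model. This is a hedged numpy sketch with invented names, not the method of the paper.

```python
import numpy as np

def downsample2(img):
    """One pyramid level: 2x2 block means."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_to_fine_mask(img, thresh):
    """Segment at half resolution, then re-examine full-resolution
    pixels only inside a one-pixel dilation of the upsampled coarse
    mask, mimicking a contour initialized from the coarse solution."""
    coarse = downsample2(img) > thresh
    init = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    # cheap binary dilation by one pixel (4-neighbourhood)
    dil = init.copy()
    dil[1:, :] |= init[:-1, :]; dil[:-1, :] |= init[1:, :]
    dil[:, 1:] |= init[:, :-1]; dil[:, :-1] |= init[:, 1:]
    return dil & (img > thresh)
```

The failure mode of the shortcut is also visible here: structures thin enough to vanish in the coarse block means are never revisited at full resolution, which mirrors the paper's question about whether premature convergence sacrifices accuracy.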
Yanai, Takeshi; Fann, George I; Beylkin, Gregory; Harrison, Robert J
2015-12-21
A fully numerical method for the time-dependent Hartree-Fock and density functional theory (TD-HF/DFT) with the Tamm-Dancoff (TD) approximation is presented in a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. The integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied for calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales ranging from short-range valence excitations to long-range Rydberg-type ones are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. We introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations, the excited states are correctly bound.
Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; Harrison, Robert J.
2015-02-25
A fully numerical method for the time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation is presented in a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied for calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales ranging from short-range valence excitations to long-range Rydberg-type ones are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations, the excited states are correctly bound.
Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong
2013-01-07
Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT-to-PET image registration method in esophageal cancer, to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to preprocess position errors between PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, offers fast processing and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which is suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper improves registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications.
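The local force that drives demons registration has a compact form. The 1D sketch below implements the classic Thirion update; the mutual-information gradient term that the GMI demons variant adds is omitted, and the function name is invented.

```python
import numpy as np

def demons_step(fixed, moving):
    """One Thirion demons update for 1D signals:
    u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2),
    a displacement pushing the moving image toward the fixed one."""
    grad = np.gradient(fixed)
    diff = moving - fixed
    denom = grad ** 2 + diff ** 2
    safe = np.where(denom == 0.0, 1.0, denom)   # avoid 0/0 in flat, matched regions
    return np.where(denom == 0.0, 0.0, diff * grad / safe)
```

In practice this update is iterated with smoothing of the displacement field, and, as the abstract notes, it is wrapped in an image pyramid so that large displacements are resolved on coarse levels first.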
Davis, A.B.; Clothiaux, E.
1999-03-01
Because of Earth's gravitational field, its atmosphere is strongly anisotropic with respect to the vertical; the effect of the Earth's rotation on synoptic wind patterns also causes a more subtle form of anisotropy in the horizontal plane. The authors survey various approaches to statistically robust anisotropy from a wavelet perspective and present a new one adapted to strongly non-isotropic fields that are sampled on a rectangular grid with a large aspect ratio. This novel technique uses an anisotropic version of Multi-Resolution Analysis (MRA) in image analysis; the authors form a tensor product of the standard dyadic Haar basis, where the dividing ratio is λ_z = 2, and a nonstandard triadic counterpart, where the dividing ratio is λ_x = 3. The natural support of the field is therefore 2^n pixels (vertically) by 3^n pixels (horizontally), where n is the number of levels in the MRA. The natural triadic basis includes the French top-hat wavelet, which resonates with bumps in the field, whereas the Haar wavelet responds to ramps or steps. The complete 2D basis has one scaling function and five wavelets. The resulting anisotropic MRA is designed for application to the liquid water content (LWC) field in boundary-layer clouds, as the prevailing wind advects them by a vertically pointing mm-radar system. Spatial correlations are notoriously long-range in cloud structure, and the authors use the wavelet coefficients from the new MRA to characterize these correlations in a multifractal analysis scheme. In the present study, the MRA is used (in synthesis mode) to generate fields that mimic cloud structure quite realistically, although only a few parameters are used to control the randomness of the LWC's wavelet coefficients.
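One level of the dyadic-by-triadic decomposition amounts to averaging over 2 (vertical) by 3 (horizontal) blocks, with the within-block residuals carrying the five detail channels lumped together. The following is a minimal numpy sketch of that bookkeeping on the natural 2^n by 3^n support; names are invented, and the actual basis (including the top-hat wavelet) is not constructed here.

```python
import numpy as np

def anisotropic_level(field):
    """One level of a 2x3 block decomposition: returns the scaling
    coefficients (block means), the simplest analogue of the
    dyadic-by-triadic tensor-product MRA, plus the pooled residual
    that the five detail wavelets would resolve."""
    nz, nx = field.shape
    blocks = field.reshape(nz // 2, 2, nx // 3, 3)
    scaling = blocks.mean(axis=(1, 3))
    detail = field - np.repeat(np.repeat(scaling, 2, axis=0), 3, axis=1)
    return scaling, detail
```

Each level shrinks the grid by 2 vertically and 3 horizontally, which is why n levels require exactly 2^n by 3^n pixels.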
Ronhovde, P; Chakrabarty, S; Hu, D; Sahu, M; Sahu, K K; Kelton, K F; Mauro, N A; Nussinov, Z
2011-09-01
We elaborate on a general method that we recently introduced for characterizing the "natural" structures in complex physical systems via multi-scale network analysis. The method is based on "community detection" wherein interacting particles are partitioned into an "ideal gas" of optimally decoupled groups of particles. Specifically, we construct a set of network representations ("replicas") of the physical system based on interatomic potentials and apply a multiscale clustering ("multiresolution community detection") analysis using information-based correlations among the replicas. Replicas may i) be different representations of an identical static system, ii) embody dynamics by considering replicas to be time separated snapshots of the system (with a tunable time separation), or iii) encode general correlations when different replicas correspond to different representations of the entire history of the system as it evolves in space-time. Inputs for our method are the inter-particle potentials or experimentally measured two (or higher order) particle correlations. We apply our method to computer simulations of a binary Kob-Andersen Lennard-Jones system in a mixture ratio of A80B20, a ternary model system with components "A", "B", and "C" in ratios of A88B7C5 (as in Al88Y7Fe5), and to atomic coordinates in a Zr80Pt20 system as gleaned by reverse Monte Carlo analysis of experimentally determined structure factors. We identify the dominant structures (disjoint or overlapping) and general length scales by analyzing extrema of the information theory measures. We speculate on possible links between i) physical transitions or crossovers and ii) changes in structures found by this method as well as phase transitions associated with the computational complexity of the community detection problem. We also briefly consider continuum approaches and discuss rigidity and the shear penetration depth in amorphous systems; this latter length scale increases as
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-06-01
We present an algorithm which is able to reconstruct dynamic emission computed tomography (ECT) image series directly from inconsistent projection data that have been obtained using a rotating camera. By finding a reduced dimension time-activity curve (TAC) basis with which all physiologically feasible TACs in an image may be accurately approximated, we are able to recast this large non-linear problem as one of constrained linear least squares (CLLSQ) and to reduce parameter vector dimension by a factor of 20. Implicit is the assumption that each pixel may be modeled using a single compartment model, as is typical in 99mTc teboroxime wash-in/wash-out studies, and that the blood input function is known. A disadvantage of the change of basis is that TAC non-negativity is no longer ensured. As a consequence, non-negativity constraints must appear in the CLLSQ formulation. A warm-start multiresolution approach is proposed, whereby the problem is initially solved at a resolution below that finally desired. At the next iteration, the number of reconstructed pixels is increased and the solution of the lower resolution problem is then used to warm-start the estimation of the higher resolution kinetic parameters. We demonstrate the algorithm by applying it to dynamic myocardial slice phantom projection data at resolutions of 16×16 and 32×32 pixels. We find that the warm-start method employed leads to computational savings of between 2 and 4 times when compared to cold-start execution times. A 20% RMS error in the reconstructed TACs is achieved for a total number of detected sinogram counts of 1×10^5 for the 16×16 problem and at 1×10^6 counts for the 32×32 grid. These errors are 1.5-2 times greater than those obtained in conventional (consistent projection) SPECT imaging at similar count levels.
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Jianping; Sofia, Giulia; Tarolli, Paolo
2014-05-01
Moon surface features have great significance in understanding and reconstructing the lunar geological evolution. Linear structures like rilles and ridges are closely related to internally forced tectonic movement. The craters widely distributed on the Moon are also key research targets for externally forced geological evolution. The extreme scarcity of samples and the difficulty of field work make remote sensing the most important approach for planetary studies. New and advanced lunar probes launched by China, the U.S., Japan and India nowadays provide a lot of high-quality data, especially in the form of high-resolution Digital Terrain Models (DTMs), bringing new opportunities and challenges for feature extraction on the Moon. The aim of this study is to recognize and extract lunar features using geomorphometric analysis based on multi-scale parameters and multi-resolution DTMs. The considered digital datasets include CE1-LAM (Chang'E One, Laser AltiMeter) data with a resolution of 500 m/pix, LRO-WAC (Lunar Reconnaissance Orbiter, Wide Angle Camera) data with a resolution of 100 m/pix, LRO-LOLA (Lunar Reconnaissance Orbiter, Lunar Orbiter Laser Altimeter) data with a resolution of 60 m/pix, and LRO-NAC (Lunar Reconnaissance Orbiter, Narrow Angle Camera) data with a resolution of 2-5 m/pix. We considered surface derivatives to recognize linear structures, including rilles and ridges. Different window scales and thresholds are considered for feature extraction. We also calculated a roughness index to identify the erosion/deposit areas within craters. The results underline the suitability of the adopted methods for feature recognition on the Moon's surface. The roughness index is found to be a useful tool to distinguish new craters, with higher roughness, from old craters, which present a smoother surface.
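A common way to compute such a roughness index is the standard deviation of elevation within a small moving window; whether this matches the authors' exact definition is an assumption, and the function name is invented. A brute-force numpy sketch:

```python
import numpy as np

def roughness(dtm, w=3):
    """Roughness index: standard deviation of elevation in a
    w x w moving window (w odd), clipped at the DTM borders."""
    h, ww = dtm.shape
    r = w // 2
    out = np.zeros((h, ww), dtype=float)
    for i in range(h):
        for j in range(ww):
            win = dtm[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = win.std()
    return out
```

Young crater interiors, with fresh blocky ejecta, score high on this index, while old, infilled craters score low, which is the distinction the abstract exploits.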
NASA Astrophysics Data System (ADS)
Fernique, P.; Allen, M. G.; Boch, T.; Oberto, A.; Pineau, F.-X.; Durand, D.; Bot, C.; Cambrésy, L.; Derriere, S.; Genova, F.; Bonnarel, F.
2015-06-01
Context. Scientific exploitation of the ever increasing volumes of astronomical data requires efficient and practical methods for data access, visualisation, and analysis. Hierarchical sky tessellation techniques enable a multi-resolution approach to organising data on angular scales from the full sky down to the individual image pixels. Aims: We aim to show that the hierarchical progressive survey (HiPS) scheme for describing astronomical images, source catalogues, and three-dimensional data cubes is a practical solution to managing large volumes of heterogeneous data and that it enables a new level of scientific interoperability across large collections of data of these different data types. Methods: HiPS uses the HEALPix tessellation of the sphere to define a hierarchical tile and pixel structure to describe and organise astronomical data. HiPS is designed to conserve the scientific properties of the data alongside both visualisation considerations and emphasis on the ease of implementation. We describe the development of HiPS to manage a large number of diverse image surveys, as well as the extension of hierarchical image systems to cube and catalogue data. We demonstrate the interoperability of HiPS and multi-order coverage (MOC) maps and highlight the HiPS mechanism to provide links to the original data. Results: Hierarchical progressive surveys have been generated by various data centres and groups for ~200 data collections including many wide area sky surveys, and archives of pointed observations. These can be accessed and visualised in Aladin, Aladin Lite, and other applications. HiPS provides a basis for further innovations in the use of hierarchical data structures to facilitate the description and statistical analysis of large astronomical data sets.
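The HiPS tile hierarchy follows directly from HEALPix: the base tessellation has 12 equal-area tiles, and each order subdivides every tile into 4, so tile counts quadruple and angular sizes halve from one order to the next. A small sketch of that bookkeeping (function names invented):

```python
import math

def n_tiles(order):
    """Number of HEALPix tiles at a given HiPS order: 12 * 4^order."""
    return 12 * 4 ** order

def tile_size_deg(order):
    """Approximate angular size of one tile, taken as the square root
    of its solid angle (all HEALPix tiles are equal-area)."""
    sr = 4.0 * math.pi / n_tiles(order)
    return math.degrees(math.sqrt(sr))
```

This geometric progression is what lets a client such as Aladin fetch a whole-sky overview from a handful of coarse tiles and then stream progressively finer tiles only for the region in view.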
NASA Astrophysics Data System (ADS)
Magazù, S.; Migliardo, F.; Vertessy, B. G.; Caccamo, M. T.
2013-10-01
In the present paper, the results of a wavevector and thermal analysis of Elastic Incoherent Neutron Scattering (EINS) data collected on water mixtures of three homologous disaccharides through a wavelet approach are reported. The wavelet analysis allows a comparison of the spatial properties of the three systems in the wavevector range from Q = 0.27 Å-1 to 4.27 Å-1. It emerges that, differently from previous analyses, for trehalose the scalograms are consistently lower and sharper with respect to maltose and sucrose, giving rise to a global spectral density that is markedly less extended along the wavevector range. As far as the thermal analysis is concerned, the global scattered intensity profiles suggest a higher thermal restraint of trehalose with respect to the other two homologous disaccharides.
Cole, A G
1983-05-01
Analysers using a polarographic electrode had a tendency to react to nitrous oxide, which was considered dangerous with one analyser. However, they had cheaper running costs and a faster response time than the galvanic-cell analysers. These latter analysers were slightly cheaper initially but their sensors were expensive and had a reduced life in the presence of nitrous oxide. Details of accuracy tests have been presented and opinions expressed with regard to the most satisfactory analysers for clinical use.
NASA Astrophysics Data System (ADS)
van der Linde, Ian
2004-05-01
Spatial contrast sensitivity varies considerably across the field of view, being highest at the fovea and dropping towards the periphery, in accordance with the changing density, type, and interconnection of retinal cells. This observation has enabled researchers to propose the use of multiple levels of detail for visual displays, attracting the name image foveation. These methods offer improved performance when transmitting images across low-bandwidth media by conveying only highly visually salient data in high resolution, or by conveying more visually salient data first and gradually augmenting with the periphery. For stereoscopic displays, the image foveation technique may be extended to exploit the additional acuity constraint of the human visual system caused by the focal system: limited depth of field. Images may be encoded at multiple resolutions laterally taking advantage of the space-variant nature of the retina (image foveation), and additionally contain blur simulating the limited depth of field phenomenon. Since optical blur has a smoothing effect, areas of the image inside the high-resolution fovea, but outside the depth of field may be compressed more effectively. The artificial simulation of depth of field is also believed to alleviate symptoms of virtual simulator sickness resulting from accommodation-convergence separation, and reduce diplopia.
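The lateral (eccentricity-dependent) part of such a scheme can be sketched as a blend between a full-resolution image and a blurred copy, weighted by distance from the gaze point. All names and parameters below are illustrative; a production codec would use a multi-level resolution pyramid and add the depth-of-field blur as a separate, depth-dependent term:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur (rows then columns)."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def foveate(img, fx, fy, fovea_radius, sigma=4.0):
    """Keep full resolution near the gaze point (fx, fy); blend toward a
    blurred copy with eccentricity, mimicking peripheral acuity loss."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xx - fx, yy - fy)
    # Weight 1 inside the fovea, falling off linearly to 0 outside it
    weight = np.clip(1.0 - (ecc - fovea_radius) / fovea_radius, 0.0, 1.0)
    return weight * img + (1.0 - weight) * blur(img, sigma)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
out = foveate(img, fx=32, fy=32, fovea_radius=10)
```

The smoothed periphery carries less high-frequency content and therefore compresses more effectively, which is the bandwidth argument made above.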
NASA Astrophysics Data System (ADS)
Sulla-Menashe, Damien
Global forests are experiencing a variety of stresses in response to climate change and human activities. The broad objective of this dissertation is to improve understanding of how temperate and boreal forests are changing by using remote sensing to develop new techniques for detecting change in forest ecosystems and to use these techniques to investigate patterns of change in North American forests. First, I developed and applied a temporal segmentation algorithm to an 11-year time series of MODIS data for a region in the Pacific Northwest of the USA. Through comparison with an existing forest disturbance map, I characterized how the severity and spatial scale of disturbances affect the ability of MODIS to detect these events. Results from these analyses showed that most disturbances occupying more than one-third of a MODIS pixel can be detected but that prior disturbance history and gridding artifacts complicate the signature of forest disturbance events in MODIS data. Second, I focused on boreal forests of Canada, where recent studies have used remote sensing to infer decreases in forest productivity. To investigate these trends, I collected 28 years of Landsat TM and ETM+ data for 11 sites spanning Canada's boreal forests. Using these data, I analyzed how sensor geometry and intra- and inter-sensor calibration influence detection of trends from Landsat time series. Results showed systematic patterns in Landsat time series that reflect sensor geometry and subtle issues related to inter-sensor calibration, including consistently higher red band reflectance values from TM data relative to ETM+ data. In the final chapter, I extended the analyses from my second chapter to explore patterns of change in Landsat time series at an expanded set of 46 sites. Trends in peak-summer values of vegetation indices from Landsat were summarized at the scale of MODIS pixels. Results showed that the magnitude and slope of observed trends reflect patterns in disturbance and land
Chavez, P.S.; Sides, S.C.; Anderson, J.A.
1991-01-01
The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thematic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results. The three methods used to merge the information contents of the Landsat TM and SPOT panchromatic data were the Hue-Intensity-Saturation (HIS), Principal Component Analysis (PCA), and High-Pass Filter (HPF) procedures. The HIS method distorted the spectral characteristics of the data the most. The HPF method distorted the spectral characteristics the least; the distortions were minimal and difficult to detect.
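The HPF procedure that the comparison favours amounts to adding the panchromatic image's high-pass component to the upsampled multispectral band, so the low-frequency (spectral) content of the band is left untouched. A toy numpy sketch with synthetic data (the filter size and nearest-neighbour upsampling are assumptions, not the paper's exact settings):

```python
import numpy as np

def box_blur(img, k=5):
    """Simple k x k mean filter, applied separably row- then column-wise."""
    kern = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, tmp)

def hpf_merge(ms, pan, k=5):
    """High-Pass Filter merge: upsample the low-resolution band to the pan
    grid, then add the pan image's high-pass component (pan minus its
    local mean), injecting spatial detail without shifting band means."""
    scale = pan.shape[0] // ms.shape[0]
    up = np.kron(ms, np.ones((scale, scale)))   # nearest-neighbour upsample
    return up + (pan - box_blur(pan, k))

rng = np.random.default_rng(1)
pan = rng.random((64, 64))   # high-resolution panchromatic stand-in
ms = rng.random((16, 16))    # low-resolution multispectral band stand-in
fused = hpf_merge(ms, pan)
```

Because the injected term is approximately zero-mean, the band's overall radiometry is preserved, which is consistent with the low spectral distortion reported for HPF.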
1992-05-22
Optical Limiting
…to nanoseconds and find that, since the excited-state absorption is cumulative, the dyes can limit well for nanosecond pulses but not for…over which the device limits. In addition, we find that the dynamic range of limiting devices can be substantially increased using two elements without
VizieR Online Data Catalog: Multi-resolution images of M33 (Boquien+, 2015)
NASA Astrophysics Data System (ADS)
Boquien, M.; Calzetti, D.; Aalto, S.; Boselli, A.; Braine, J.; Buat, V.; Combes, F.; Israel, F.; Kramer, C.; Lord, S.; Relano, M.; Rosolowsky, E.; Stacey, G.; Tabatabaei, F.; van der Tak, F.; van der Werf, P.; Verley, S.; Xilouris, M.
2015-02-01
The FITS file contains maps of the flux in star formation tracing bands, maps of the SFR, maps of the attenuation in star formation tracing bands, and a map of the stellar mass of M33, each from a resolution of 8"/pixel to 512"/pixel. The FUV GALEX data from NGS were obtained directly from the GALEX website through GALEXVIEW. The observation was carried out on 25 November 2003 for a total exposure time of 3334 s. Hα+[NII] observations were carried out in November 1995 on the Burrell Schmidt telescope at Kitt Peak National Observatory. The observations and the data processing are analysed in detail in Hoopes & Walterbos (2000ApJ...541..597H). The Spitzer IRAC 8 µm image, sensitive to the emission of Polycyclic Aromatic Hydrocarbons (PAH), and the MIPS 24 µm image, sensitive to the emission of Very Small Grains (VSG), were obtained from the NASA Extragalactic Database and have been analysed by Hinz et al. (2004ApJS..154..259H) and Verley et al. (2007A&A...476.1161V, Cat. J/A+A/476/1161). The PACS data at 70 µm and 100 µm, which are sensitive to the warm dust heated by massive stars, come from two different programmes. The 100 µm image was obtained in the context of the Herschel HerM33es open time key project (Kramer et al., 2010A&A...518L..67K, observation ID 1342189079 and 1342189080). The observation was carried out in parallel mode on 7 January 2010 for a duration of 6.3 h. It consisted of two orthogonal scans at a speed of 20"/s, with a leg length of 7'. The 70 µm image was obtained as a follow-up open time cycle 2 programme (OT2mboquien4, observation ID 1342247408 and 1342247409). M33 was scanned on 25 June 2012 at a speed of 20"/s in two orthogonal directions over 50', with 5 repetitions of this scheme in order to match the depth of the 100 µm image. The total duration of the observation was 9.9 h. The cube file, cube.fits, contains 16 extensions: * FUV * HALPHA * 8 * 24 * 70 * 100 * SFR_FUV * SFR_HALPHA * SFR_24 * SFR_70 * SFR_100 * SFRFUV24 * SFRHALPHA24 * A_FUV * A
Giri, Chandra; Defourny, Pierre; Shrestha, Surendra
2003-01-01
Land use/land cover change, particularly that of tropical deforestation and forest degradation, has been occurring at an unprecedented rate and scale in Southeast Asia. The rapid rate of economic development, demographics and poverty are believed to be the underlying forces responsible for the change. Accurate and up-to-date information to support the above statement is, however, not available. The available data, if any, are outdated and are not comparable for various technical reasons. Time series analysis of land cover change and the identification of the driving forces responsible for these changes are needed for the sustainable management of natural resources and also for projecting future land cover trajectories. We analysed the multi-temporal and multi-seasonal NOAA Advanced Very High Resolution Radiometer (AVHRR) satellite data of 1985/86 and 1992 to (1) prepare historical land cover maps and (2) to identify areas undergoing major land cover transformations (called ‘hot spots’). The identified ‘hot spot’ areas were investigated in detail using high-resolution satellite sensor data such as Landsat and SPOT supplemented by intensive field surveys. Shifting cultivation, intensification of agricultural activities and change of cropping patterns, and conversion of forest to agricultural land were found to be the principal reasons for land use/land cover change in the Oudomxay province of Lao PDR, the Mekong Delta of Vietnam and the Loei province of Thailand, respectively. Moreover, typical land use/land cover change patterns of the ‘hot spot’ areas were also examined. In addition, we developed an operational methodology for land use/land cover change analysis at the national level with the help of national remote sensing institutions.
Deconstructing a Polygenetic Landscape Using LiDAR and Multi-Resolution Analysis
NASA Astrophysics Data System (ADS)
Houser, C.; Barrineau, C. P.; Dobreva, I. D.; Bishop, M. P.
2015-12-01
In many earth surface systems characteristic morphologies are associated with various regimes, both past and present. Aeolian systems contain a variety of features differentiated largely by morphometric differences, which in turn reflect age and divergent process regimes. Using quantitative analysis of high-resolution elevation data to generate detailed information regarding these characteristic morphometries enables geomorphologists to effectively map process regimes from a distance. Combined with satellite imagery and other types of remotely sensed data, the outputs can even help to delineate phases of activity within aeolian systems. Together, the differentiation of regimes and the identification of relict features add rigor to analyses preceding field-based investigations, which are highly dependent on site-specific historical contexts that often obscure distinctions between separate process-form regimes. We present results from a Principal Components Analysis (PCA) performed on a LiDAR-derived elevation model of a largely stabilized aeolian system in South Texas. The resulting components are layered and classified to generate a map of aeolian morphometric signatures for a portion of the landscape. Several of these areas do not immediately appear to be aeolian in nature in satellite imagery or LiDAR-derived models, yet field observations and historical imagery reveal that the PCA did in fact identify stabilized and relict dune features. This methodology enables researchers to generate a morphometric classification of the land surface. We believe this method is a valuable and innovative tool for researchers identifying process regimes within a study area, particularly in field-based investigations that rely heavily on site-specific context.
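A PCA of per-pixel morphometric features, in the spirit of the analysis above, can be sketched as follows; the synthetic "DEM" and the choice of features (elevation plus slope components) are stand-ins for the study's actual LiDAR inputs and feature set:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in for a LiDAR DEM: dune-like ridges plus noise
y, x = np.mgrid[0:128, 0:128]
dem = (5.0 * np.sin(x / 8.0) + 0.5 * np.sin(y / 16.0)
       + 0.2 * rng.standard_normal((128, 128)))

# Per-pixel morphometric feature stack: elevation and slope components
dzdy, dzdx = np.gradient(dem)
feats = np.stack([dem, dzdx, dzdy], axis=-1).reshape(-1, 3)
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)   # standardize

# Eigendecomposition of the correlation matrix = PCA
cov = np.cov(feats, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]                # strongest component first
evals, evecs = evals[order], evecs[:, order]
pc1 = (feats @ evecs[:, 0]).reshape(128, 128)  # "signature" image for mapping
```

Layering and classifying images like `pc1` is what turns the continuous elevation surface into a map of morphometric signatures.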
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2002-01-01
Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
Multi-resolution Analysis of the slip history of 1999 Chi-Chi, Taiwan earthquake
NASA Astrophysics Data System (ADS)
Ji, C.; Helmberger, D. V.
2001-05-01
Studies of large earthquakes have revealed strong heterogeneity in faulting slip distributions at mid-crustal depths. These results are inferred from modeling local GPS and strong motion records but are usually limited by the lack of data density. Here we report on the fault complexity of the large (magnitude 7.6) Chi-Chi earthquake obtained by inverting densely and well distributed static measurements consisting of 119 GPS and 23 doubly integrated strong motion records, which is the best static data set yet recorded for a large earthquake. We show that the slip of the Chi-Chi earthquake was concentrated on the surface of a "wedge shaped" block. Furthermore, similar to our previous study of the 1999 Hector Mine earthquake (Ji et al., 2001), the static data, teleseismic body waves and local strong motion data are used to constrain the rupture process. A simulated annealing method combined with a wavelet transform approach is employed to solve for the slip histories on subfault elements with variable sizes. The sizes are adjusted iteratively based on data type and distribution to produce an optimal balance between resolution and reliability. Results indicate strong local variations in rupture characteristics, with relatively rapid changes in the middle and southern portion producing relatively strong accelerations.
NASA Astrophysics Data System (ADS)
Savary, M.; Massei, N.; Johannet, A.; Dupont, J. P.; Hauchard, E.
2016-12-01
About 25% of the world's population drinks water extracted from karst aquifers, so understanding and protecting these aquifers is crucial as drinking water needs increase. In Normandie (north-west France), the principal exploited aquifer is the chalk aquifer. The highly karstified chalk aquifer is an important water resource regionally speaking. Karstification connects surface and underground waters, introducing turbidity that degrades water quality. The many parameters and phenomena involved, and the non-linearity of the rainfall/turbidity relation, make turbidity difficult to model and its peaks difficult to forecast. In this context, the Yport pumping well provides half of the drinking water supply of the Le Havre conurbation (236,000 inhabitants). The aim of this work is thus to predict turbidity peaks in order to help pumping well managers decrease the impact of turbidity on water treatment. The database consists of hourly rainfall from six rain gauges located on the alimentation basin since 2009 and hourly turbidity since 1993. Because an accurate physical description of the karst system and its surface basin is lacking, the systemic paradigm is adopted and a black-box model, a neural network, is chosen. In a first step, correlation analyses are used to design the original model architecture by identifying the relation between output and input. The following optimization phases yield four different architectures. These models were used to forecast turbidity and threshold exceedance 12 h ahead. The first model is a simple multilayer perceptron. The second is a two-branch model designed to better represent the fast (rainfall) and slow (evapotranspiration) dynamics. Each kind of model is developed using both a recurrent and a feed-forward architecture. This work highlights that the feed-forward multilayer perceptron is better at predicting turbidity peaks, while the feed-forward two-branch model is
Creation of a Multiresolution and Multiaccuracy Dtm: Problems and Solutions for Heli-Dem Case Study
NASA Astrophysics Data System (ADS)
Biagi, L.; Carcano, L.; Lucchese, A.; Negretti, M.
2013-01-01
The work is part of the "HELI-DEM" (HELvetia-Italy Digital Elevation Model) project, funded by the European Regional Development Fund within the Italy-Switzerland cooperation program. The aim of the project is the creation of a unique DTM for the alpine and subalpine area between Italy (Piedmont, Lombardy) and Switzerland (Ticino and Grisons Cantons); at present, different DTMs, which are in different reference frames and have been obtained with different technologies, accuracies, and resolutions, have been acquired. The final DTM should be correctly georeferenced and produced by validating and integrating the data that are available to the project. DTMs are fundamental in hydrogeological studies, especially in alpine areas where hydrogeological risks may exist. Moreover, when an event, for example a landslide, happens at the border between countries, a unique and integrated DTM covering the area of interest is useful to analyze the scenario. In this sense, the HELI-DEM project is helpful. To perform analyses along the borders between countries, transnational geographic information is needed: a transnational DTM can be obtained by merging regional low resolution DTMs, and high resolution local DTMs should be used where they are available. To be merged, low and high resolution DTMs should be in the same three-dimensional reference frame, should not present biases and should be consistent in the overlapping areas. Cross-validation between the different DTMs is therefore needed. Two different problems should be solved: the merging of regional, partly overlapping low and medium resolution DTMs into a unique low/medium resolution DTM, and the merging with other local high resolution/high accuracy height data. This paper discusses the preliminary processing of the data for the fusion of low and high resolution DTMs in a case-study area within the Lombardy region: the Valtellina valley. In this region the Lombardy regional low resolution DTM is available, with a horizontal
Multi-resolution processing for fractal analysis of airborne remotely sensed data
NASA Technical Reports Server (NTRS)
Jaggi, S.; Quattrochi, D.; Lam, N.
1992-01-01
Fractal geometry is increasingly becoming a useful tool for modeling natural phenomena. As an alternative to Euclidean concepts, fractals allow for a more accurate representation of the nature of complexity in natural boundaries and surfaces. Since they are characterized by self-similarity, an ideal fractal surface is scale-independent; i.e., at different scales a fractal surface looks the same. This is not exactly true for natural surfaces. When viewed at different spatial resolutions, parts of natural surfaces look alike in a statistical manner, and only over a limited range of scales. Images acquired by NASA's Thermal Infrared Multispectral Scanner are used to compute the fractal dimension as a function of spatial resolution. Three methods are used to determine the fractal dimension: Shelberg's line-divider method, the variogram method, and the triangular prism method. A description of these methods and the results of applying them to a remotely sensed image are also presented. Five flights were flown in succession at altitudes of 2 km (low), 6 km (mid), 12 km (high), and then back again at 6 km and 2 km; this corresponds to three different pixel sizes: 5 m, 15 m, and 30 m. The area selected was the Ross Barnett reservoir near Jackson, Mississippi. The mission was flown during the predawn hours of 1 Feb. 1992, and radiosonde data were collected for that duration to profile the characteristics of the atmosphere. After simulating different spatial sampling intervals within the same image for each of the three image sets, the results are cross-correlated to compare the extent of detail and complexity that is obtained when data are taken at lower spatial intervals.
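Of the three estimators, the variogram method is the most compact to sketch: the variogram grows as h^(2H), and the fractal dimension follows from the Hurst exponent H (D = 2 - H for a profile, D = 3 - H for a surface). A numpy sketch on a synthetic Brownian elevation profile, for which D ≈ 1.5 (illustrative only, not the study's TIMS data):

```python
import numpy as np

def variogram_dimension(z, lags):
    """Estimate the fractal dimension of a 1-D elevation profile from the
    variogram: gamma(h) = mean((z(x+h) - z(x))^2) ~ h^(2H), D = 2 - H."""
    gamma = [np.mean((z[h:] - z[:-h]) ** 2) for h in lags]
    # Slope of the log-log variogram gives 2H
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    H = slope / 2.0
    return 2.0 - H

rng = np.random.default_rng(3)
profile = np.cumsum(rng.standard_normal(4096))  # Brownian profile, H = 0.5
D = variogram_dimension(profile, lags=np.arange(1, 33))
```

Restricting the fit to a limited range of lags mirrors the observation above that natural surfaces are only statistically self-similar over a limited range of scales.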
Multi-resolution integrated modeling for basin-scale water resources management and policy analysis
Gupta, Hoshin V. ,; Brookshire, David S.; Springer, E. P.; Wagener, Thorsten
2004-01-01
Approximately one-third of the land surface of the Earth is considered arid or semi-arid, with an annual average of less than 12-14 inches of rainfall. The availability of water in such regions is, of course, particularly sensitive to climate variability, while demand for water is growing explosively with population. The competition for available water is exerting considerable pressure on water resources management. Policy and decision makers in the southwestern U.S. increasingly have to cope with over-stressed rivers and aquifers as population and water demands grow. Other factors such as endangered species and Native American water rights further complicate the management problems. Further, as groundwater tables are drawn down due to pumping in excess of natural recharge, considerable (potentially irreversible) environmental impacts begin to be felt as, for example, rivers run dry for significant portions of the year, riparian habitats disappear (with consequent effects on the biodiversity of the region), aquifers compact, resulting in large-scale subsidence, and water quality begins to suffer. The current drought (1999-2002) in the southwestern U.S. is raising new concerns about how to sustain the combination of agricultural, urban and in-stream uses of water that underlie the socio-economic and ecological structure in the region. The water-stressed nature of arid and semi-arid environments means that competing water uses of various kinds vie for access to a highly limited resource. If basin-scale water sustainability is to be achieved, managers must somehow achieve a balance between supply and demand throughout the basin, not just for the surface water or stream. The need to move water around a basin such as the Rio Grande or Colorado River to achieve this balance has created the stimulus for water transfers and water markets, and for accurate hydrologic information to sustain such institutions [Matthews et al. 2002; Brookshire et al 2003
Benefits of an ultra large and multiresolution ensemble for estimating available wind power
NASA Astrophysics Data System (ADS)
Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik
2016-04-01
In this study we investigate the benefits of an ultra-large ensemble with up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble is to be used as a basis for detecting events of extreme error in wind power forecasting. The forecast value is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions have served as input so far, with limited spatial resolution (~10-80 km) and member number (~50), and perturbations related to the specific merits of wind power production are still missing. Thus, single extreme-error events occur, infrequently, that are not detected by such ensemble power forecasts. The numerical forecast model used in this study is the Weather Research and Forecasting (WRF) model. Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies, in conjunction with the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter, improving model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations that are connected to extreme-error events are located and corresponding perturbation techniques are applied. The demanding computational effort is met by utilising the supercomputer JUQUEEN at Forschungszentrum Juelich.
Data Collection Methods for Validation of Advanced Multi-Resolution Fast Reactor Simulations
Tokuhiro, Akiro; Ruggles, Art; Pointer, David
2015-01-22
In pool-type Sodium Fast Reactors (SFR) the regions most susceptible to thermal striping are the upper instrumentation structure (UIS) and the intermediate heat exchanger (IHX). This project experimentally and computationally (CFD) investigated the thermal mixing in the region exiting the reactor core to the UIS. The thermal mixing phenomenon was simulated using two vertical jets at different velocities and temperatures, prototypic of two adjacent channels out of the core. Thermal jet mixing of anticipated flows at different temperatures and velocities was investigated. Velocity profiles were measured throughout the flow region using Ultrasonic Doppler Velocimetry (UDV), and temperatures along the geometric centerline between the jets were recorded using a thermocouple array. CFD simulations, using COMSOL, were used initially to understand the flow, then to design the experimental apparatus, and finally to compare simulation results with measurements characterizing the flows. The experimental results and CFD simulations show that the flow field is characterized by three regions with respective transitions, namely convective mixing, (flow-direction) transitional, and post-mixing. Both experiments and CFD simulations support this observation. For the anticipated SFR conditions the flow is momentum dominated and thermal mixing is thus limited, owing to the short flow length from the exit of the core to the bottom of the UIS. This means that there will be thermal striping at any surface where poorly mixed streams impinge; unless lateral mixing is actively promoted out of the core, thermal striping will prevail. Furthermore, we note that CFD can be considered a 'separate effects (computational) test' and is recommended as part of any integral analysis. To this effect, poorly mixed streams have potential impact on the rest of the SFR design and scaling, especially the placement of internal components, such as the IHX, that may see poorly mixed
Using multi-resolution proxies to assess ENSO impacts on the mean state of the tropical Pacific.
NASA Astrophysics Data System (ADS)
Karamperidou, C.; Conroy, J. L.
2016-12-01
guided by the fundamental and open question of multi-scale interactions in the tropical Pacific, and illustrates the need for multi-resolution paleoclimate proxies and their potential uses.
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Morton, J. J.; Barg, B.
2015-12-01
The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of quality controlled multibeam sonar data, collected by scientists and institutions worldwide, that is merged with gridded terrestrial and marine elevation data. The multi-resolutional elevation components of GMRT are delivered to the user through a variety of interfaces as both images and grids. The GMRT provides quantitative access to gridded data and images to the full native resolution of the sonar as well as attribution information and access to source data files. To construct the GMRT, multibeam sonar data are evaluated, cleaned and gridded by the MGDS Team and are then merged with gridded global and regional elevation data that are available at a variety of scales from 1km resolution to sub-meter resolution. As of June 2015, GMRT included processed swath data from nearly 850 research cruises with over 2.7 million ship-track miles of coverage. Several new services were developed over the past year to improve access to the GMRT Synthesis. In addition to our long-standing Web Map Services, we now offer RESTful services to provide programmatic access to gridded data in standard formats including ArcASCII, GeoTIFF, COARDS/CF-compliant NetCDF, and GMT NetCDF, as well as access to custom images of the GMRT in JPEG format. An attribution metadata XML service was also developed to return all relevant information about component data in an area, including cruise names, multibeam file names, and gridded data components. These new services are compliant with the EarthCube GeoWS Building Blocks specifications. Supplemental services include the release of data processing reports for each cruise included in the GMRT and data querying services that return elevation values at a point and great circle arc profiles using the highest available resolution data. Our new and improved map-based web application, GMRT MapTool, provides user access to the GMRT
NASA Astrophysics Data System (ADS)
Mouzourides, P.; Kyprianou, A.; Neophytou, M. K.-A.
2013-12-01
Urban morphology characterization is crucial for the parametrization of boundary-layer development over urban areas. One complexity in such a characterization is the three-dimensional variation of the urban canopies and textures, which are customarily reduced to and represented by one-dimensional varying parametrization such as the aerodynamic roughness length and zero-plane displacement. The scope of the paper is to provide novel means for a scale-adaptive spatially-varying parametrization of the boundary layer by addressing this 3-D variation. Specifically, the 3-D variation of urban geometries often poses questions in the multi-scale modelling of air pollution dispersion and other climate or weather-related modelling applications that have not been addressed yet, such as: (a) how we represent urban attributes (parameters) appropriately for the multi-scale nature and multi-resolution basis of weather numerical models, (b) how we quantify the uniqueness of an urban database in the context of modelling urban effects in large-scale weather numerical models, and (c) how we derive the impact and influence of a particular building in pre-specified sub-domain areas of the urban database. We illustrate how multi-resolution analysis (MRA) addresses and answers the afore-mentioned questions by taking as an example the Central Business District of Oklahoma City. The selection of MRA is motivated by its capacity for multi-scale sampling; in the MRA the "urban" signal depicting a city is decomposed into an approximation, a representation at a higher scale, and a detail, the part removed at lower scales to yield the approximation. Different levels of approximations were deduced for the building height and planar packing density. A spatially-varying characterization with a scale-adaptive capacity is obtained for the boundary-layer parameters (aerodynamic roughness length and zero-plane displacement) using the MRA-deduced results for the building height and the planar packing
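The decomposition into an approximation plus details can be illustrated with one level of a 2-D Haar transform on a toy "building height" field (averaging normalization; a wavelet library such as pywt would normally be used, and the field below is synthetic, not the Oklahoma City database):

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar decomposition: returns the approximation
    (half-resolution averages) and the three detail subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    approx = (a + b + c + d) / 4.0          # coarser-scale representation
    horiz  = (a - b + c - d) / 4.0          # detail removed at this scale
    vert   = (a + b - c - d) / 4.0
    diag   = (a - b - c + d) / 4.0
    return approx, (horiz, vert, diag)

# Toy "building height" field: blocky coarse structure plus fine texture
rng = np.random.default_rng(4)
heights = np.kron(rng.integers(2, 9, size=(8, 8)).astype(float),
                  np.ones((8, 8))) + 0.1 * rng.standard_normal((64, 64))
approx, details = haar2d_level(heights)
```

Iterating `haar2d_level` on each successive `approx` yields the hierarchy of scale-adaptive approximations from which parameters such as roughness length can be derived at the resolution of the weather model grid.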
2007-11-02
author’s feet on the ground while letting his head stay in the clouds. As a collaborator on the project, thanks go to LCDR Joseph Skufca, USN, for his...pair of binoculars. Although one is able to zoom in and watch what sign the catcher is giving to the pitcher, it is impossible to simultaneously see...cannot at the same time see what motion the catcher makes with his fingers as a sign to the pitcher. Here the window is so large that small scale motion
NASA Technical Reports Server (NTRS)
Holzmann, Gerard J.
2008-01-01
In the last three decades or so, the size of systems we have been able to verify formally with automated tools has increased dramatically. At each point in this development, we encountered a different set of limits -- many of which we were eventually able to overcome. Today, we may have reached some limits that will be much harder to conquer. The problem I will discuss is the following: given a hypothetical machine with infinite memory that is seamlessly shared among infinitely many CPUs (or CPU cores), what is the largest problem size that we could solve?
Porter, P.S.; Ward, R.C.; Bell, H.F.
1988-08-01
Water quality monitoring data are plagued with levels of chemicals that are too low to be measured precisely. This discussion will focus on the information needs of water quality management and how these needs are best met for monitoring systems that require many trace-level measurements. We propose that the limit of detection (LOD) or the limit of quantitation (LOQ) not be used to censor data. Although LOD and LOQ aid in the interpretation of individual measurements, they hinder statistical analysis of water quality data. More information is gained when a numerical result and an estimate of measurement precision are reported for every measurement, as opposed to reporting "not detected" or "less than." This article is not intended to be a review of the issues pertaining to the LOD and related concepts.
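The information loss the authors describe can be illustrated with a toy simulation. The data are hypothetical, and the common "substitute LOD/2" convention stands in for censoring:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical trace-level concentrations (lognormal is a common shape for such data).
true_conc = rng.lognormal(mean=0.0, sigma=1.0, size=20000)

lod = 2.0
# Censoring: every value below the LOD is replaced by LOD/2, a common convention.
censored = np.where(true_conc < lod, lod / 2.0, true_conc)

# Reporting every numerical result keeps the sample mean close to the truth;
# substitution biases it.
bias = abs(censored.mean() - true_conc.mean())
print(f"mean (all results): {true_conc.mean():.3f}, mean (censored): {censored.mean():.3f}")
```

Even this crude example shows a visible shift in the estimated mean, which is the kind of distortion that reporting a numerical result with its precision would avoid.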
[Introduction to the indirect meta-analyses].
Bolaños Díaz, Rafael; Calderón Cahua, María
2014-04-01
Meta-analyses are studies that aim to compile all available information, grouping it according to a specific theme and evaluating it with methodological quality tools. When two specific treatments have been compared directly in randomized clinical trials, standard meta-analyses are the best option, but there are scenarios in which no literature is available for those direct comparisons. In these cases, an alternative method to consider is the indirect comparison, or indirect meta-analysis. The aim of this review is to explain the conceptual foundations, need, applications and limitations of indirect comparisons as a step toward understanding network meta-analyses.
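A standard way to form such an indirect comparison, the adjusted method attributed to Bucher and colleagues, combines two direct effect estimates that share a common comparator C. The numbers below are invented for illustration:

```python
import math

def indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect estimate of A vs B through common comparator C.
    Effects are on an additive scale (e.g. log odds ratios)."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)  # variances add; precision is lost
    return d_ab, se_ab

# Invented example: A-vs-C and B-vs-C log odds ratios from two sets of trials.
d_ab, se_ab = indirect_comparison(-0.50, 0.10, -0.20, 0.15)
```

The widened standard error is the key limitation the review discusses: an indirect estimate is always less precise than a direct comparison built from the same amount of evidence.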
NASA Astrophysics Data System (ADS)
Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.
2005-12-01
NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based, snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km2 spatial resolution to a 1 km2 resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km2 spatial resolution and at a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary and available to produce a "best estimate" of real-time snow cover conditions at 1 km2 spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily operational products that characterize real-time snowpack conditions including: snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. The products are generated and distributed in a variety of formats including: interactive maps, time-series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products
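The Newtonian nudging step the abstract mentions can be sketched as a relaxation of a modeled state toward an observation over a chosen time scale. This is a schematic only, not the NOHRSC implementation, and the numbers are invented:

```python
def nudge(x_model, x_obs, dt, tau):
    """Relax the model state toward the observation with relaxation time tau.
    A small dt/tau keeps the model near balance while drawing in the data."""
    return x_model + (dt / tau) * (x_obs - x_model)

# Invented example: modeled snow water equivalent (mm) pulled toward an observation.
swe, obs = 120.0, 100.0
for _ in range(48):                 # 48 hourly steps
    swe = nudge(swe, obs, dt=1.0, tau=12.0)
```

The gentle per-step correction is what preserves approximate balance in the model (advantage 1 above) while still incorporating asynoptic observations at the times they arrive.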
NASA Technical Reports Server (NTRS)
Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.
1995-01-01
This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.
Wavelet Analyses and Applications
ERIC Educational Resources Information Center
Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.
2009-01-01
It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…
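A minimal scalogram of the kind described can be sketched with a Morlet continuous wavelet transform in NumPy. This is an illustration only; the test signal is invented and a full nonstationary analysis would use a signal whose frequency varies in time:

```python
import numpy as np

def morlet_cwt(x, scales, fs, w0=6.0):
    """Continuous wavelet transform via direct correlation with Morlet wavelets."""
    n = len(x)
    t = (np.arange(n) - n // 2) / fs
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        u = t / s
        psi = np.exp(1j * w0 * u - u ** 2 / 2.0) / np.sqrt(s)  # L2-normalized wavelet
        coeffs[i] = np.convolve(x, np.conj(psi)[::-1], mode="same") / fs
    return coeffs

fs = 200.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 5.0 * t)            # invented 5 Hz test tone
freqs = np.linspace(2, 10, 41)
scales = 6.0 / (2 * np.pi * freqs)          # scale <-> frequency mapping for w0 = 6
power = np.abs(morlet_cwt(x, scales, fs)) ** 2
f_peak = freqs[power.mean(axis=1).argmax()]  # strongest band of the scalogram
```

Averaging the power over time recovers the dominant frequency; for a nonstationary signal one would instead inspect how the peak of `power` moves along the time axis.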
NASA Technical Reports Server (NTRS)
Taylor, G. R.
1972-01-01
Extensive microbiological analyses that were performed on the Apollo 14 prime and backup crewmembers and ancillary personnel are discussed. The crewmembers were subjected to four separate and quite different environments during the 137-day monitoring period. The relation between each of these environments and observed changes in the microflora of each astronaut are presented.
3-D Cavern Enlargement Analyses
EHGARTNER, BRIAN L.; SOBOLIK, STEVEN R.
2002-03-01
Three-dimensional finite element analyses simulate the mechanical response of enlarging existing caverns at the Strategic Petroleum Reserve (SPR). The caverns are located in Gulf Coast salt domes and are enlarged by leaching during oil drawdowns as fresh water is injected to displace the crude oil from the caverns. The current criteria adopted by the SPR limit cavern usage to 5 drawdowns (leaches). As a base case, 5 leaches were modeled over a 25-year period to roughly double the volume of a 19-cavern field. Thirteen additional leaches were then simulated until caverns approached coalescence. The cavern field approximated the geometries and geologic properties found at the West Hackberry site. This enabled comparison of data collected over nearly 20 years with analysis predictions. The analyses closely predicted the measured surface subsidence and cavern closure rates as inferred from historic well head pressures. This provided the necessary assurance that the model displacements, strains, and stresses are accurate. However, the cavern field has not yet experienced the large scale drawdowns being simulated. Should they occur in the future, code predictions should be validated with actual field behavior at that time. The simulations were performed using JAS3D, a three-dimensional finite element analysis code for nonlinear quasi-static solids. The results examine the impacts of leaching and cavern workovers, where internal cavern pressures are reduced, on surface subsidence, well integrity, and cavern stability. The results suggest that the current limit of 5 oil drawdowns may be extended, with some mitigative action required on the wells and, later on, on surface structures due to subsidence strains. The predicted stress state in the salt shows damage starting to occur after 15 drawdowns, with significant failure occurring at the 16th drawdown, well beyond the current limit of 5 drawdowns.
Atmospheric tether mission analyses
NASA Technical Reports Server (NTRS)
1996-01-01
NASA is considering the use of tethered satellites to explore regions of the atmosphere inaccessible to spacecraft or high altitude research balloons. This report summarizes the Lockheed Martin Astronautics (LMA) effort for the engineering study team assessment of an Orbiter-based atmospheric tether mission. Lockheed Martin responsibilities included design recommendations for the deployer and tether, as well as tether dynamic analyses for the mission. Three tether configurations were studied including single line, multistrand (Hoytether) and tape designs.
Antfolk, Jan
2017-03-01
Whereas women of all ages prefer slightly older sexual partners, men, regardless of their age, have a preference for women in their 20s. Earlier research has suggested that this difference between the sexes' age preferences is resolved according to women's preferences. This research has not, however, sufficiently considered that the age range of considered partners might change over the life span. Here we investigated the age limits (youngest and oldest) of considered and actual sex partners in a population-based sample of 2,655 adults (aged 18-50 years). Over the investigated age span, women reported a narrower age range than men and women tended to prefer slightly older men. We also show that men's age range widens as they get older: While they continue to consider sex with young women, men also consider sex with women their own age or older. Contrary to earlier suggestions, men's sexual activity thus also reflects their own age range, although their potential interest in younger women is not likely converted into sexual activity. Compared to homosexual men, bisexual and heterosexual men were less likely to convert young preferences into actual behavior, supporting female-choice theory.
LDEF Satellite Radiation Analyses
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
1996-01-01
Model calculations and analyses have been carried out to compare with several sets of data (dose, induced radioactivity in various experiment samples and spacecraft components, fission foil measurements, and LET spectra) from passive radiation dosimetry on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The calculations and data comparisons are used to estimate the accuracy of current models and methods for predicting the ionizing radiation environment in low earth orbit. The emphasis is on checking the accuracy of trapped proton flux and anisotropy models.
Force Limited Vibration Testing
NASA Technical Reports Server (NTRS)
Scharton, Terry; Chang, Kurng Y.
2005-01-01
This slide presentation reviews the concept and applications of Force Limited Vibration Testing. The goal of vibration testing of aerospace hardware is to identify problems that would result in flight failures. The commonly used aerospace vibration tests use artificially high shaker forces and responses at the resonance frequencies of the test item. It has become common to limit the acceleration responses in the test to those predicted for flight. This requires an analysis of the acceleration response, and requires placing accelerometers on the test item. With the advent of piezoelectric gages it has become possible to improve vibration testing. The basic equations are reviewed. Force limits are analogous and complementary to the acceleration specifications used in conventional vibration testing. Just as the acceleration specification is the frequency spectrum envelope of the in-flight acceleration at the interface between the test item and flight mounting structure, the force limit is the envelope of the in-flight force at the interface. In force limited vibration tests, both the acceleration and force specifications are needed, and the force specification is generally based on and proportional to the acceleration specification. Therefore, force limiting does not compensate for errors in the development of the acceleration specification, e.g., too much conservatism or the lack thereof. These errors will carry over into the force specification. Since in-flight vibratory force data are scarce, force limits are often derived from coupled system analyses and impedance information obtained from measurements or finite element models (FEM). Fortunately, data on the interface forces between systems and components are now available from system acoustic and vibration tests of development test models and from a few flight experiments. Semi-empirical methods of predicting force limits are currently being developed on the basis of the limited flight and system test
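A widely used semi-empirical form of the force limit makes it proportional to the acceleration specification below a break frequency and rolls it off above it. The sketch below assumes that form; the constant C, the break frequency f0 and the roll-off exponent are placeholders to be set from coupled-system data, not values from this presentation:

```python
import numpy as np

def force_limit(freqs, accel_spec, m0, c=1.4, f0=100.0, n=2.0):
    """Semi-empirical force-limit spectrum (sketch).
    S_FF = C^2 * M0^2 * S_AA at and below the break frequency f0, rolled off above.
    m0 is the physical mass of the test item; c, f0 and n are assumed placeholders."""
    s_ff = (c * m0) ** 2 * np.asarray(accel_spec, float)
    rolloff = np.where(freqs > f0, (f0 / freqs) ** n, 1.0)
    return s_ff * rolloff

freqs = np.array([50.0, 100.0, 400.0])   # Hz
accel_spec = np.ones(3)                  # invented flat acceleration spec (g^2/Hz)
out = force_limit(freqs, accel_spec, m0=10.0)
```

Because the force limit is built directly from the acceleration specification, any conservatism (or lack of it) in the acceleration spec carries straight through, exactly as the presentation cautions.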
1983-07-15
back-propagated aperture radius, Rm, in the corresponding expression (Equations (79)-(72)) of Reference 1. It should also be noted that Equation (18...incoherent detection efficiency approaches unity as Rm goes to zero and approaches zero in a limiting fashion as Rm goes to infinity, as required from the...the first two integrations reduce to the case previously treated, Equations (18) and (20), with Rm replaced by Rmi and Rmo respectively. For the
Broadband rotor noise analyses
NASA Technical Reports Server (NTRS)
George, A. R.; Chou, S. T.
1984-01-01
The various mechanisms which generate broadband noise on a range of rotors were studied, including load fluctuations due to inflow turbulence, turbulent boundary layers passing the blades' trailing edges, and tip vortex formation. Existing analyses are used, and extensions to them are developed, to make more accurate predictions of rotor noise spectra and to determine which mechanisms are important in which circumstances. Calculations based on the various prediction methods were compared with existing experiments. The present analyses are adequate to predict the spectra from a wide variety of experiments on fans, full-scale and model-scale helicopter rotors, wind turbines, and propellers to within about 5 to 10 dB. Better knowledge of the inflow turbulence improves the accuracy of the predictions. Results indicate that inflow turbulence noise depends strongly on ambient conditions and dominates at low frequencies. Trailing edge noise and tip vortex noise are important at higher frequencies if inflow turbulence is weak. Boundary-layer trailing-edge noise, important for large rotors, increases slowly with angle of attack but not as rapidly as tip vortex noise.
Sharma, Laxmi Kant; Nathawat, Mahendra Singh; Sinha, Suman
2013-10-01
This study deals with the future scope of REDD (Reduced Emissions from Deforestation and forest Degradation) and REDD+ regimes for measuring and monitoring the current state and dynamics of carbon stocks over time with an integrated geospatial and field-based biomass inventory approach. A multi-temporal and multi-resolution geospatial synergic approach incorporating satellite sensors from moderate to high resolution with a stratified random sampling design is used. The inventory process involves a continuous forest inventory to facilitate the quantification of possible CO2 reductions over time using statistical up-scaling procedures on various levels. The combined approach was applied on a regional scale taking Himachal Pradesh (India) as a case study, with a hierarchy of forest strata representing the forest structure found in India. Biophysical modeling revealed a power regression model as the best fit (R² = 0.82) for the relationship between Normalized Difference Vegetation Index and biomass, which was then used to calculate multi-temporal above-ground biomass and carbon sequestration. The calculated value of net carbon sequestered by the forests totaled 11.52 million tons (Mt) over the period of 20 years at the rate of 0.58 Mt per year since 1990, while the CO2 equivalent removed from the environment by the forests under study during those 20 years comes to 42.26 Mt in the study area.
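A power model of the kind reported (biomass = a · NDVI^b) is conventionally fitted by linear least squares in log-log space. The sketch below uses invented sample pairs, not the study's data:

```python
import numpy as np

def fit_power(x, y):
    """Fit y = a * x**b by linear least squares on log-transformed data."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)  # slope = b, intercept = ln(a)
    return np.exp(log_a), b

# Invented NDVI/biomass pairs following y = 2 * x**1.5 exactly.
ndvi = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
biomass = 2.0 * ndvi ** 1.5
a, b = fit_power(ndvi, biomass)
```

With real, noisy inventory data the log-log fit also yields the R² the study reports, via the correlation of the transformed variables.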
Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn
2017-06-25
In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed in an 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) for the CIS that supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while the image holds steady in the absence of events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital) and a frame rate of 14 frames/s.
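The reduced-resolution modes (1/2 down to 1/64) can be emulated in software by block-averaging the full frame. This is only a behavioural sketch of what a scaled readout produces, not the sensor's analog implementation:

```python
import numpy as np

def scaled_readout(frame, factor):
    """Emulate a 1/factor-resolution mode by averaging factor x factor pixel blocks."""
    h, w = frame.shape
    h, w = h - h % factor, w - w % factor     # trim to a multiple of the factor
    return frame[:h, :w].reshape(h // factor, factor,
                                 w // factor, factor).mean(axis=(1, 3))

# Invented full-resolution frame matching the prototype's 176 x 144 array.
full = np.arange(176 * 144, dtype=float).reshape(176, 144)
half = scaled_readout(full, 2)   # 1/2 mode: 88 x 72
tiny = scaled_readout(full, 8)   # 1/8 mode: 22 x 18
```

Fewer pixel values per frame is what lets the ADC and readout chain run at lower power while the scene is static.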
NASA Astrophysics Data System (ADS)
Yershov, V.
2015-10-01
We describe a processing system for generating multiresolution digital terrain models (DTM) of Mars within the iMars project of the European Seventh Framework Programme. This system is based on a non-rigorous sensor model for processing high-resolution stereoscopic images obtained from the High Resolution Imaging Science Experiment (HiRISE) camera and Context Camera (CTX) onboard the NASA Mars Reconnaissance Orbiter (MRO) spacecraft. The system includes geodetic control based on the polynomial fit of the input CTX images with respect to a reference image obtained from the ESA Mars Express High Resolution Stereo Camera (HRSC). The input image processing is based on the Integrated Software for Imagers and Spectrometers (ISIS) and the NASA Ames stereo pipeline. The accuracy of the produced CTX DTM is improved by aligning it with the reference HRSC DTM and the altimetry data from the Mars Orbiter Laser Altimeter (MOLA) onboard the Mars Global Surveyor (MGS) spacecraft. The higher-resolution HiRISE imagery data are processed in the same way, except that the reference images and DTMs are taken from the CTX results obtained during the first processing stage. A quality assessment of image photogrammetric registration is demonstrated by using data generated by the NASA Ames stereo pipeline and the BAE Socet system. Such DTMs will be produced for all available stereo-pairs and be displayed as WMS layers within the iMars Web GIS.
NASA Astrophysics Data System (ADS)
Tian, Yu; Xu, Hong; Zhang, Xing-Yang; Wang, Hong-Jun; Guo, Tong-Cui; Zhang, Liang-Jie; Gong, Xing-Lin
2016-12-01
In this study, we used the multi-resolution graph-based clustering (MRGC) method for determining the electrofacies (EF) and lithofacies (LF) from well log data obtained from the intraplatform bank gas fields located in the Amu Darya Basin. The MRGC could automatically determine the optimal number of clusters without prior knowledge about the structure or cluster numbers of the analyzed data set and allowed the users to control the level of detail actually needed to define the EF. Based on the LF identification and successful EF calibration using core data, an MRGC EF partition model including five clusters and a quantitative LF interpretation chart were constructed. The EF clusters 1 to 5 were interpreted as lagoon, anhydrite flat, interbank, low-energy bank, and high-energy bank, respectively; the coincidence rate in the cored interval could reach 85%. We concluded that the MRGC could be accurately applied to predict the LF in non-cored but logged wells. Therefore, continuous EF clusters were partitioned and corresponding LF were interpreted, and the distribution and petrophysical characteristics of different LF were analyzed in the framework of sequence stratigraphy.
Glickman, Matthew R.; Tang, Akaysha
2009-02-01
The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.
LDEF Satellite Radiation Analyses
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
1996-01-01
This report covers work performed by Science Applications International Corporation (SAIC) under contract NAS8-39386 from the NASA Marshall Space Flight Center entitled LDEF Satellite Radiation Analyses. The basic objective of the study was to evaluate the accuracy of present models and computational methods for defining the ionizing radiation environment for spacecraft in Low Earth Orbit (LEO) by making comparisons with radiation measurements made on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The emphasis of the work here is on predictions and comparisons with LDEF measurements of induced radioactivity and Linear Energy Transfer (LET) measurements. These model/data comparisons have been used to evaluate the accuracy of current models for predicting the flux and directionality of trapped protons for LEO missions.
Network Class Superposition Analyses
Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul
2013-01-01
Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141
Network class superposition analyses.
Pearson, Carl A B; Zeng, Chen; Simha, Rahul
2013-01-01
Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
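The construction of T can be sketched for a toy class of deterministic Boolean networks: each member contributes its state-transition matrix, and T is their uniform superposition. The two-network class below is invented for illustration, not the yeast model itself:

```python
import numpy as np
from itertools import product

def transition_matrix(update, n):
    """Column-stochastic transition matrix of one deterministic Boolean network."""
    size = 2 ** n
    m = np.zeros((size, size))
    for i, state in enumerate(product((0, 1), repeat=n)):
        j = int("".join(map(str, update(state))), 2)  # successor state's index
        m[j, i] = 1.0
    return m

def superposition(updates, n):
    """T: uniform superposition of the dynamics of every class member."""
    return sum(transition_matrix(u, n) for u in updates) / len(updates)

# Toy class of two 2-node networks: identity dynamics and all-flip dynamics.
members = [lambda s: s, lambda s: tuple(1 - v for v in s)]
T = superposition(members, 2)
# trace(T) is the mean number of point attractors across the class (here 2:
# the identity has 4 fixed points, the flip has 0).
```

Quantities like the point-attractor distribution or a T-based entropy are then read off the single matrix T instead of enumerating every network in the class.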
Dendrochronological analyses of art objects
NASA Astrophysics Data System (ADS)
Klein, Peter
1998-05-01
Dendrochronology is a discipline of the biological sciences which makes it possible to determine the age of wooden objects. Dendrochronological analyses are used in art history as an important means of dating wooden panels, sculptures and musical instruments. This method of dating allows us to ascertain at least a 'terminus post quem' for an art object by determining the felling date of the tree from which the object was cut, in other words the date after which the wood for the object could have been sawn. The method involves measuring the width of the annual rings on the panels and comparing the growth ring curve resulting from this measurement with dated master chronologies. Since the characteristics of the growth ring curve over several centuries are unique and specific to differing geographical origins of wood, it is possible to obtain a relatively precise dating of art objects. Since dendrochronology is year-specific, it is more accurate than other scientific methods. But like other methods it has limitations. The method is limited to trees from temperate zones. And even among these, some woods are better than others. A dating is possible for oak, beech, fir, pine and spruce. Linden and poplar are not datable.
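The comparison of a measured ring-width curve against a dated master chronology amounts to a sliding-correlation search. A minimal sketch, using synthetic ring widths rather than real chronologies:

```python
import numpy as np

def crossdate(sample, master):
    """Slide the sample along the master chronology; the offset with the
    highest correlation is the candidate dating position."""
    n = len(sample)
    best_r, best_off = -2.0, None
    for off in range(len(master) - n + 1):
        r = np.corrcoef(sample, master[off:off + n])[0, 1]
        if r > best_r:
            best_r, best_off = r, off
    return best_r, best_off

rng = np.random.default_rng(0)
master = rng.normal(size=200)        # synthetic dated master chronology
sample = master[37:87].copy()        # a panel whose outermost rings start at index 37
r, off = crossdate(sample, master)
```

In practice the offset, combined with the calendar year of the master's first ring, yields the felling date and hence the 'terminus post quem' described above.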
Genetic analyses of captive Alala (Corvus hawaiiensis) using AFLP analyses