Science.gov

Sample records for limits multiresolution analyses

  1. Evaluating Climate Causation of Conflict in Darfur Using Multi-temporal, Multi-resolution Satellite Image Datasets With Novel Analyses

    NASA Astrophysics Data System (ADS)

    Brown, I.; Wennbom, M.

    2013-12-01

    Climate change, population growth and changes in traditional lifestyles have led to instabilities in traditional demarcations between neighboring ethnic and religious groups in the Sahel region. This has resulted in a number of conflicts as groups resort to arms to settle disputes. Such disputes often centre on, or are justified by, competition for resources. The conflict in Darfur has been controversially explained by resource scarcity resulting from climate change. Here we analyse established methods of using satellite imagery to assess vegetation health in Darfur. Multi-decadal time series of observations are available using low spatial resolution visible-near infrared imagery. Typically, normalized difference vegetation index (NDVI) analyses are produced to describe changes in vegetation 'greenness' or 'health'. Such approaches have been widely used to evaluate the long-term development of vegetation in relation to climate variations across a wide range of environments from the Arctic to the Sahel. These datasets typically measure peak NDVI observed over a given interval and may introduce bias. It is furthermore unclear how the spatial organization of sparse vegetation may affect low resolution NDVI products. We develop and assess alternative measures of vegetation including descriptors of the growing season, wetness and resource availability. Expanding the range of parameters used in the analysis reduces our dependence on peak NDVI. Furthermore, these descriptors provide a better characterization of the growing season than the single NDVI measure. Using multi-sensor data we combine high temporal/moderate spatial resolution data with low temporal/high spatial resolution data to improve the spatial representativity of the observations and to provide improved spatial analysis of vegetation patterns. The approach places the high resolution observations in the NDVI context space using a longer time series of lower resolution imagery. The vegetation descriptors
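    The peak-NDVI compositing criticized above is easy to state concretely. Below is a minimal sketch of NDVI and a per-pixel peak composite, using synthetic reflectance arrays in place of the authors' satellite data:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red + eps)

# Synthetic stand-in for a season of co-registered scenes (time, y, x).
rng = np.random.default_rng(0)
nir = rng.uniform(0.2, 0.6, size=(12, 64, 64))   # near-infrared reflectance
red = rng.uniform(0.05, 0.3, size=(12, 64, 64))  # red reflectance

series = ndvi(nir, red)      # per-scene NDVI
peak = series.max(axis=0)    # peak-NDVI composite (the measure said to bias)
mean = series.mean(axis=0)   # one alternative seasonal descriptor
print(peak.shape, float(mean.mean()))
```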

  2. Multiresolution training of Kohonen neural networks

    NASA Astrophysics Data System (ADS)

    Tamir, Dan E.

    2007-09-01

    This paper analyses a trade-off between convergence rate and distortion obtained through multi-resolution training of a Kohonen Competitive Neural Network. Empirical results show that a multi-resolution approach can improve the training stage of several unsupervised pattern classification algorithms including K-means clustering, LBG vector quantization, and competitive neural networks. While previous research concentrated on the convergence rate of on-line unsupervised training, the new results reported in this paper show that the multi-resolution approach can be used to improve training quality (measured as a derivative of the rate distortion function) at the expense of convergence speed. The probability of achieving a desired point in the quality/convergence-rate space of Kohonen Competitive Neural Networks (KCNN) is evaluated using a detailed Monte Carlo set of experiments. It is shown that multi-resolution can reduce the distortion by a factor of 1.5 to 6 while maintaining the convergence rate of traditional KCNN. Alternatively, the convergence rate can be improved without loss of quality. The experiments include a controlled set of synthetic data as well as image data. Experimental results are reported and evaluated.
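    The coarse-to-fine training schedule discussed above can be sketched with a winner-take-all codebook update. This is a generic illustration of the idea, not the paper's exact KCNN configuration; treating "lower resolution" as a subsample of the data is an assumption:

```python
import numpy as np

def competitive_train(data, codebook, lr, epochs=5, rng=None):
    """On-line winner-take-all (Kohonen-style) codebook update."""
    rng = rng or np.random.default_rng(0)
    for _ in range(epochs):
        for x in rng.permutation(data):
            w = np.argmin(((codebook - x) ** 2).sum(axis=1))  # winning unit
            codebook[w] += lr * (x - codebook[w])
    return codebook

rng = np.random.default_rng(1)
data = rng.normal(size=(2000, 2))
codebook = rng.normal(size=(16, 2))
# Multi-resolution schedule: train on a coarse subsample first,
# then refine the same codebook on the full-resolution data.
codebook = competitive_train(data[::8], codebook, lr=0.1)   # coarse pass
codebook = competitive_train(data, codebook, lr=0.02)       # fine pass
```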

  3. Hair analyses: worthless for vitamins, limited for minerals

    SciTech Connect

    Hambridge, K.M.

    1982-11-01

    Despite many major and minor problems with interpretation of analytical data, chemical analyses of human hair have some potential value. Extensive research will be necessary to define this value, including correlation of hair concentrations of specific elements with those in other tissues and metabolic pools and definition of normal physiological concentration ranges. Many factors that may compromise the correct interpretation of analytical data require detailed evaluation for each specific element. Meanwhile, hair analyses are of some value in the comparison of different populations and, for example, in public health community surveys of environmental exposure to heavy metals. On an individual basis, their established usefulness is much more restricted and the limitations are especially notable for evaluation of mineral nutritional status. There is a wide gulf between the limited and mainly tentative scientific justification for their use on an individual basis and the current exploitation of multielement chemical analyses of human hair.

  4. Linking properties to microstructure through multiresolution mechanics

    NASA Astrophysics Data System (ADS)

    McVeigh, Cahal James

    The macroscale mechanical and physical properties of materials are inherently linked to the underlying microstructure. Traditional continuum mechanics theories have focused on approximating the heterogeneous microstructure as a continuum, which is conducive to a partial differential equation mathematical description. Although this makes large scale simulation of material much more efficient than modeling the detailed microstructure, the relationship between microstructure and macroscale properties becomes unclear. In order to perform computational materials design, material models must clearly relate the key underlying microstructural parameters (cause) to macroscale properties (effect). In this thesis, microstructure evolution and instability events are related to macroscale mechanical properties through a new multiresolution continuum analysis approach. The multiresolution nature of this theory allows prediction of the evolving magnitude and scale of deformation as a direct function of the changing microstructure. This is achieved via a two-pronged approach: (a) Constitutive models which track evolving microstructure are developed and calibrated to direct numerical simulations (DNS) of the microstructure. (b) The conventional homogenized continuum equations of motion are extended via a virtual power approach to include extra coupled microscale stresses and stress couples which are active at each characteristic length scale within the microstructure. The multiresolution approach is applied to model the fracture toughness of a cemented carbide, failure of a steel alloy under quasi-static loading conditions and the initiation and velocity of adiabatic shear bands under high speed dynamic loading. In each case the multiresolution analysis predicts the important scale effects which control the macroscale material response. The strain fields predicted in the multiresolution continuum analyses compare well to those observed in direct numerical simulations of the

  5. Research potential and limitations of trace analyses of cremated remains.

    PubMed

    Harbeck, Michaela; Schleuder, Ramona; Schneider, Julius; Wiechmann, Ingrid; Schmahl, Wolfgang W; Grupe, Gisela

    2011-01-30

    Human cremation is a common funeral practice all over the world and will presumably become an even more popular choice for interment in the future. Mainly for purposes of identification, there is presently a growing need to perform trace analyses such as DNA or stable isotope analyses on human remains after cremation in order to clarify pending questions in civil or criminal court cases. The aim of this study was to experimentally test the potential and limitations of DNA and stable isotope analyses when conducted on cremated remains. For this purpose, tibiae from modern cattle were experimentally cremated by incinerating the bones in increments of 100°C until a maximum of 1000°C was reached. In addition, cremated human remains were collected from a modern crematory. The samples were investigated to determine the level of DNA preservation and stable isotope values (C and N in collagen, C and O in the structural carbonate, and Sr in apatite). Furthermore, we assessed the integrity of microstructural organization, appearance under UV light, collagen content, as well as the mineral and crystalline organization. This was conducted in order to provide a general background with which to explain observed changes in the trace analysis data sets. The goal is to develop an efficacious screening method for determining the degree of burning up to which bone still retains its original biological signals. We found that stable isotope analysis of the tested light elements in bone is only possible up to a heat exposure of 300°C, while the isotopic signal from strontium remains unaltered even in bones exposed to very high temperatures. DNA analyses seem theoretically possible up to a heat exposure of 600°C but cannot be advised in every case because of the increased risk of contamination. While the macroscopic colour and UV fluorescence of cremated bone give hints to the temperature exposure of the bone's outer surface, its histological appearance can be used as a reliable indicator for the

  6. [Network meta-analyses: Interest and limits in oncology].

    PubMed

    Ribassin-Majed, Laureen; Pignon, Jean-Pierre; Michiels, Stefan; Blanchard, Pierre

    2016-03-01

    In the last decade, a new method called 'network meta-analysis' has emerged to take into account all randomized trials in a given clinical setting and provide estimates of the relative effectiveness of different treatments, whether or not they have been compared head-to-head (pairwise) in randomized controlled trials. Network meta-analyses combine the results of direct comparisons from randomized trials with indirect comparisons between trials (i.e., when two treatments were not compared with each other, but have been studied in relation to a common comparator). The purpose of this note is to explain this method, its relevance and its limitations. A worked example in non-metastatic head and neck cancer is presented as an illustration. PMID:26917469
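    The indirect comparison described above is commonly computed with the Bucher method: subtract the two direct effects against the common comparator and add their variances. A minimal sketch with hypothetical log hazard ratios (not values from any cited trial):

```python
import math

# Direct estimates against a common comparator C (hypothetical numbers):
d_AC, se_AC = -0.25, 0.10   # log HR, treatment A vs C
d_BC, se_BC = -0.10, 0.12   # log HR, treatment B vs C

# Bucher indirect comparison of A vs B through C.
d_AB = d_AC - d_BC
se_AB = math.sqrt(se_AC**2 + se_BC**2)
lo, hi = d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB
print(f"HR(A vs B) = {math.exp(d_AB):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```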

  7. The Limited Informativeness of Meta-Analyses of Media Effects.

    PubMed

    Valkenburg, Patti M

    2015-09-01

    In this issue of Perspectives on Psychological Science, Christopher Ferguson reports on a meta-analysis examining the relationship between children's video game use and several outcome variables, including aggression and attention deficit symptoms (Ferguson, 2015, this issue). In this commentary, I compare Ferguson's nonsignificant effect sizes with earlier meta-analyses on the same topics that yielded larger, significant effect sizes. I argue that Ferguson's choice of partial effect sizes is unjustified on both methodological and theoretical grounds. I then plead for a more constructive debate on the effects of violent video games on children and adolescents. Until now, this debate has been dominated by two camps with diametrically opposed views on the effects of violent media on children. However, even the earliest media effects studies tell us that children can react quite differently to the same media content. Thus, if researchers truly want to understand how media affect children, rather than fight for the presence or absence of effects, they need to adopt a perspective that takes differential susceptibility to media effects more seriously. PMID:26386007

  8. Multiresolution image gathering and restoration

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
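    The record above describes a Wiener-matrix filter tuned to the full acquisition chain; the scalar frequency-domain Wiener deconvolution below is a much simplified sketch of the same restoration principle, with a synthetic image and Gaussian blur standing in for the image-gathering model:

```python
import numpy as np

def wiener_restore(observed, psf, nsr):
    """Scalar Wiener deconvolution: W = H* / (|H|^2 + NSR)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=observed.shape)
    G = np.fft.fft2(observed)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Synthetic test: blur a square with a Gaussian PSF, add noise, restore.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / 8.0); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
noisy = blurred + 0.01 * rng.normal(size=img.shape)
restored = wiener_restore(noisy, psf, nsr=1e-3)
```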

  9. Multiresolution Simulations of Photoinjectors

    SciTech Connect

    Mihalcea, D.; Bohn, C. L.; Terzic, B.

    2006-11-27

    We report a successful implementation of a three-dimensional wavelet-based solver for Poisson's equation with Dirichlet boundary conditions, optimized for use in particle-in-cell beam dynamics simulations. We explain how the new algorithm works and the advantages it brings to accelerator simulations. The solver is integrated into a full photoinjector-simulation code (Impact-T), and the code is then benchmarked by comparing its output against that of other codes (verification) and against laboratory measurements (validation). We also simulated the AES/JLab photoinjector using a suite of codes. This activity revealed certain performance limitations and their causes.

  10. Multiresolution Simulations of Photoinjectors

    NASA Astrophysics Data System (ADS)

    Mihalcea, D.; Bohn, C. L.; Terzić, B.

    2006-11-01

    We report a successful implementation of a three-dimensional wavelet-based solver for Poisson's equation with Dirichlet boundary conditions, optimized for use in particle-in-cell beam dynamics simulations. We explain how the new algorithm works and the advantages it brings to accelerator simulations. The solver is integrated into a full photoinjector-simulation code (Impact-T), and the code is then benchmarked by comparing its output against that of other codes (verification) and against laboratory measurements (validation). We also simulated the AES/JLab photoinjector using a suite of codes. This activity revealed certain performance limitations and their causes.
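    The paper's solver is wavelet-based; as a loose illustration of why a multiresolution strategy helps a Poisson solve at all, the sketch below uses a plain two-grid relaxation (coarse solve, prolongation, fine relaxation) on -∇²u = f with zero Dirichlet boundaries. It is a stand-in for the idea, not the authors' algorithm:

```python
import numpy as np

def jacobi(u, f, h, iters):
    """Jacobi relaxation for -laplacian(u) = f, zero Dirichlet boundaries."""
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] +
                                h * h * f[1:-1, 1:-1])
    return u

n = 129
f_fine = np.ones((n, n))
# Coarse solve on a 65x65 grid, crude prolongation, then fine relaxation.
u_coarse = jacobi(np.zeros((65, 65)), f_fine[::2, ::2], h=1/64, iters=200)
u0 = np.kron(u_coarse, np.ones((2, 2)))[:n, :n]
u = jacobi(u0, f_fine, h=1/128, iters=100)
print(float(u.max()))
```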

  11. MOSES Inversions using Multiresolution SMART

    NASA Astrophysics Data System (ADS)

    Rust, Thomas; Fox, Lewis; Kankelborg, Charles; Courrier, Hans; Plovanic, Jacob

    2014-06-01

    We present improvements to the SMART inversion algorithm for the MOSES imaging spectrograph. MOSES, the Multi-Order Solar EUV Spectrograph, is a slitless extreme ultraviolet spectrograph designed to measure cotemporal narrowband spectra over a wide field of view via tomographic inversion of images taken at three orders of a concave diffraction grating. SMART, the Smooth Multiplicative Algebraic Reconstruction Technique, relies on a global chi-squared goodness-of-fit criterion, which enables overfit and underfit regions to "balance out" when judging fit quality: "good" reconstructions show poor fits at some positions and length scales. Here we take a multiresolution approach to SMART, applying corrections to the reconstruction at positions and scales where correction is warranted based on the noise. The result is improved fit residuals that more closely resemble the expected noise in the images. Within the multiresolution framework it is also easy to include a regularized deconvolution of the instrument point spread functions, which we do. Differing point spread functions among MOSES spectral orders result in spurious Doppler shifts in the reconstructions, most notably near bright compact emission. We estimate the point spread functions from the data. Deconvolution is done using the Richardson-Lucy method, which is algorithmically similar to SMART. Regularization results from only correcting the reconstruction at positions and scales where correction is warranted based on the noise. We expect the point spread function deconvolution to increase signal to noise and reduce systematic error in MOSES reconstructions.
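    Richardson-Lucy, named above, is a short multiplicative iteration. A minimal sketch with periodic FFT convolutions and a synthetic Gaussian PSF (the MOSES PSFs are estimated from data, which is not shown here):

```python
import numpy as np

def conv2(a, kernel):
    """Periodic FFT convolution with an origin-centred kernel."""
    K = np.fft.fft2(np.fft.ifftshift(kernel), s=a.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * K))

def richardson_lucy(image, psf, iters=25, eps=1e-12):
    """Multiplicative RL update; algorithmically akin to SMART corrections."""
    psf = psf / psf.sum()
    estimate = np.full_like(image, image.mean())
    for _ in range(iters):
        ratio = image / (conv2(estimate, psf) + eps)
        estimate *= conv2(ratio, psf[::-1, ::-1])
    return estimate

# Tiny usage: blur a synthetic scene, then deconvolve it.
y, x = np.mgrid[-16:16, -16:16]
psf = np.exp(-(x**2 + y**2) / 4.0)
img = np.zeros((32, 32)); img[12:20, 12:20] = 1.0
sharp = richardson_lucy(conv2(img, psf / psf.sum()), psf, iters=30)
```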

  12. A multiresolution model for small-body gravity estimation

    NASA Astrophysics Data System (ADS)

    Jones, Brandon A.; Beylkin, Gregory; Born, George H.; Provence, Robert S.

    2011-11-01

    A new model, dubbed the MRQSphere, provides a multiresolution representation of the gravity field designed for its estimation. The multiresolution representation uses an approximation via Gaussians of the solution of Laplace's equation in the exterior of a sphere. Also, instead of spherical harmonics, variations in the angular variables are modeled by a set of functions constructed using quadratures for the sphere invariant under the icosahedral group. When combined, these tools specify the spatial resolution of the gravity field as a function of altitude and required accuracy. We define this model and apply it to representing and estimating the gravity field of the asteroid 433 Eros. We verified that an MRQSphere model derived directly from the true spherical harmonics gravity model satisfies the user-defined precision. We also use the MRQSphere model to estimate the gravity field of Eros for a simulated satellite mission, yielding a solution with accuracy limited only by measurement errors and their spatial distribution.

  13. Multiresolution foveated laparoscope with high resolvability

    PubMed Central

    Qin, Yi; Hua, Hong; Nguyen, Mike

    2016-01-01

    A key limitation of state-of-the-art laparoscopes for minimally invasive surgery is the tradeoff between field of view and spatial resolution in a single-view camera system. As such, surgical procedures are usually performed at a zoomed-in view, which limits the surgeon’s ability to see much outside the immediate focus of interest and causes a situational awareness challenge. We proposed a multiresolution foveated laparoscope (MRFL) aiming to address this limitation. The MRFL is able to simultaneously capture a wide-angle overview and high-resolution images in real time; it can scan and engage the high-resolution view across any subregion of the entire surgical field, in analogy to the fovea of the human eye. The MRFL is able to render the equivalent of 10-million-pixel resolution with a low data bandwidth requirement. The system has a large working distance (WD) from 80 to 180 mm. The spatial resolvability is about 45 μm in the object space at an 80 mm WD, while the resolvability of a conventional laparoscope is about 250 μm at a typical 50 mm surgical distance. PMID:23811873

  14. Multiresolution foveated laparoscope with high resolvability.

    PubMed

    Qin, Yi; Hua, Hong; Nguyen, Mike

    2013-07-01

    A key limitation of state-of-the-art laparoscopes for minimally invasive surgery is the tradeoff between field of view and spatial resolution in a single-view camera system. As such, surgical procedures are usually performed at a zoomed-in view, which limits the surgeon's ability to see much outside the immediate focus of interest and causes a situational awareness challenge. We proposed a multiresolution foveated laparoscope (MRFL) aiming to address this limitation. The MRFL is able to simultaneously capture a wide-angle overview and high-resolution images in real time; it can scan and engage the high-resolution view across any subregion of the entire surgical field, in analogy to the fovea of the human eye. The MRFL is able to render the equivalent of 10-million-pixel resolution with a low data bandwidth requirement. The system has a large working distance (WD) from 80 to 180 mm. The spatial resolvability is about 45 μm in the object space at an 80 mm WD, while the resolvability of a conventional laparoscope is about 250 μm at a typical 50 mm surgical distance. PMID:23811873

  15. Quantum Mechanical Operators in Multiresolution Hilbert Spaces

    NASA Astrophysics Data System (ADS)

    Pipek, János

    2007-12-01

    Wavelet analysis, which is a shorthand notation for the concept of multiresolution analysis (MRA), is becoming increasingly popular in high-efficiency storage algorithms for complex spatial distributions. This approach is applied here to describing the wave functions of quantum systems. At any resolution level of an MRA expansion, a physical observable is represented by an infinite matrix which is "canonically" chosen as the projection of its operator in the Schrödinger picture onto the subspace of the given resolution. It is shown that this canonical choice is only one particular member of the possible operator representations. Among these, there exists an optimal choice, usually different from the canonical one, which gives the best numerical values in eigenvalue problems. This construction works even in those cases where the canonical definition is unusable. The commutation relations of physical operators are also studied in MRA subspaces. It is shown that the required commutation rules are satisfied in the fine resolution limit, whereas in coarse-grained spaces a correction appears that depends only on the representation of the momentum operator.

  16. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    SciTech Connect

    Milani, Gabriele; Valente, Marco

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is considerably lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for considerably reducing the seismic vulnerability of this kind of historical structure.

  17. Analysing the capabilities and limitations of tracer tests in stream-aquifer systems

    USGS Publications Warehouse

    Wagner, B.J.; Harvey, J.W.

    2001-01-01

    The goal of this study was to identify the limitations that apply when we couple conservative-tracer injection with reactive solute sampling to identify the transport and reaction processes active in a stream. Our methodology applies Monte Carlo uncertainty analysis to assess the ability of the tracer approach to identify the governing transport and reaction processes for a wide range of stream-solute transport and reaction scenarios likely to be encountered in high-gradient streams. Our analyses identified dimensionless factors that define the capabilities and limitations of the tracer approach. These factors provide a framework for comparing and contrasting alternative tracer test designs.

  18. Large Deformation Multiresolution Diffeomorphic Metric Mapping for Multiresolution Cortical Surfaces: A Coarse-to-Fine Approach.

    PubMed

    Tan, Mingzhen; Qiu, Anqi

    2016-09-01

    Brain surface registration is an important tool for characterizing cortical anatomical variations and understanding their roles in normal cortical development and psychiatric diseases. However, surface registration remains challenging due to complicated cortical anatomy and its large differences across individuals. In this paper, we propose a fast coarse-to-fine algorithm for surface registration by adapting the large deformation diffeomorphic metric mapping (LDDMM) framework for surface mapping and show improvements in speed and accuracy via a multiresolution analysis of surface meshes and the construction of multiresolution diffeomorphic transformations. The proposed method constructs a family of multiresolution meshes that are used as natural sparse priors of the cortical morphology. At varying resolutions, these meshes act as anchor points where the parameterization of multiresolution deformation vector fields can be supported, allowing the construction of a bundle of multiresolution deformation fields, each originating from a different resolution. Using a coarse-to-fine approach, we show a potential reduction in computation cost along with improvements in sulcal alignment when compared with LDDMM surface mapping. PMID:27254865

  19. Optical design and system engineering of a multiresolution foveated laparoscope

    PubMed Central

    Qin, Yi; Hua, Hong

    2016-01-01

    The trade-off between the spatial resolution and field of view is one major limitation of state-of-the-art laparoscopes. In order to address this limitation, we demonstrated a multiresolution foveated laparoscope (MRFL) which is capable of simultaneously capturing both a wide-angle overview for situational awareness and a high-resolution zoomed-in view for accurate surgical operation. In this paper, we focus on presenting the optical design and system engineering process for developing the MRFL prototype. More specifically, the first-order specifications and properties of the optical system are discussed, followed by a detailed discussion on the optical design strategy and procedures of each subsystem. The optical performance of the final system, including diffraction efficiency, tolerance analysis, stray light and ghost image, is fully analyzed. Finally, the prototype assembly process and the final prototype are demonstrated. PMID:27139875

  20. Optical design and system engineering of a multiresolution foveated laparoscope.

    PubMed

    Qin, Yi; Hua, Hong

    2016-04-10

    The trade-off between the spatial resolution and field of view is one major limitation of state-of-the-art laparoscopes. In order to address this limitation, we demonstrated a multiresolution foveated laparoscope (MRFL) which is capable of simultaneously capturing both a wide-angle overview for situational awareness and a high-resolution zoomed-in view for accurate surgical operation. In this paper, we focus on presenting the optical design and system engineering process for developing the MRFL prototype. More specifically, the first-order specifications and properties of the optical system are discussed, followed by a detailed discussion on the optical design strategy and procedures of each subsystem. The optical performance of the final system, including diffraction efficiency, tolerance analysis, stray light and ghost image, is fully analyzed. Finally, the prototype assembly process and the final prototype are demonstrated. PMID:27139875

  1. Multiresolution approach based on projection matrices

    SciTech Connect

    Vargas, Javier; Quiroga, Juan Antonio

    2009-03-01

    Active triangulation measurement systems with a rigid geometric configuration are inappropriate for scanning large objects with low measuring tolerances. The reason is that the ratio between the depth recovery error and the lateral extension is a constant that depends on the geometric setup. As a consequence, measuring large areas with low depth recovery error requires the use of multiresolution techniques. We propose a multiresolution technique based on a previously calibrated camera-projector system. The method consists of changing the camera's or projector's parameters in order to increase the system's depth sensitivity. A subpixel reprojection error in the self-calibration process and a decrease of approximately one order of magnitude in the depth recovery error can be achieved using the proposed method.

  2. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: the multiresolution bilateral filter, where bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
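    The denoising framework described above is straightforward to sketch: decompose, bilateral-filter the approximation band, threshold the details, reconstruct. The sketch below substitutes a single-level Haar decomposition and a brute-force bilateral filter for the paper's wavelet filter bank; parameter values are arbitrary:

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Brute-force bilateral filter: spatial x range Gaussian weights."""
    out = np.zeros_like(img); norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(img, (dy, dx), axis=(0, 1))
            w = np.exp(-(dy*dy + dx*dx) / (2*sigma_s**2)
                       - (shifted - img)**2 / (2*sigma_r**2))
            out += w * shifted; norm += w
    return out / norm

def haar2(img):
    """One level of 2-D Haar analysis: approximation + detail residual."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] +
         img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    return a, img - np.kron(a, np.ones((2, 2)))

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = clean + 0.05 * rng.normal(size=clean.shape)

approx, detail = haar2(noisy)
approx = bilateral(approx)                          # filter low-freq band
detail = np.where(np.abs(detail) > 0.1, detail, 0)  # threshold details
denoised = np.kron(approx, np.ones((2, 2))) + detail
```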

  3. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large-scale simulations are sometimes too big for main memory, for local disks, and possibly even for remote storage disks, creating tremendous processing times as well as technical difficulties in analyzing the data. The excessive storage demands a correspondingly huge penalty in I/O time, rendering time and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing for the purposes of data compression and feature extraction. Unlike photographic image data, which have rather simple settings, computational field simulation data need more careful treatment when applying the multiresolution technique. While image data sit on a regularly spaced grid, simulation data usually reside on a structured curvilinear grid or an unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. These data characteristics demand more restrictive conditions. In general, photographic images have very little inherent smoothness, with discontinuities almost everywhere. On the other hand, numerical solutions have smoothness almost everywhere and discontinuities in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of
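    The compression idea above (drop small-scale coefficients where the field is smooth) can be seen with plain Haar multiresolution of cell-averaged data; the supercompact multiwavelets of the paper are a higher-order generalization not reproduced here:

```python
import numpy as np

def haar_decompose(v, levels):
    """Multi-level 1-D Haar transform of cell-averaged data."""
    coeffs = []
    for _ in range(levels):
        coeffs.append(0.5 * (v[0::2] - v[1::2]))  # detail at this scale
        v = 0.5 * (v[0::2] + v[1::2])             # coarser cell averages
    coeffs.append(v)
    return coeffs

# Smooth field with one localized discontinuity ("shock"): detail
# coefficients are negligible away from it and can be discarded.
x = np.linspace(0, 1, 1024)
u = np.sin(2 * np.pi * x) + (x > 0.6)
coeffs = haar_decompose(u, levels=5)
kept = sum(int((np.abs(c) > 1e-2).sum()) for c in coeffs)
print(f"kept {kept} of {u.size} coefficients")
```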

  4. Incorporation of concentration data below the limit of quantification in population pharmacokinetic analyses.

    PubMed

    Keizer, Ron J; Jansen, Robert S; Rosing, Hilde; Thijssen, Bas; Beijnen, Jos H; Schellens, Jan H M; Huitema, Alwin D R

    2015-03-01

    Handling of data below the lower limit of quantification (LLOQ), i.e., below the limit of quantification (BLOQ), in population pharmacokinetic (PopPK) analyses is important for reducing bias and imprecision in parameter estimation. We aimed to evaluate whether using the concentration data below the LLOQ has superior performance over several established methods. The performance of this approach ("All data") was evaluated and compared to other methods: "Discard," "LLOQ/2," and "LIKE" (likelihood-based). An analytical and residual error model was constructed on the basis of in-house analytical method validations and analyses from the literature, with additional variability included to account for model misspecification. Simulation analyses were performed for various levels of BLOQ censoring, several structural PopPK models, and additional influences. Performance was evaluated by relative root mean squared error (RMSE) and run success for the various BLOQ approaches. Performance was also evaluated for a real PopPK data set. For all PopPK models and levels of censoring, RMSE values were lowest using "All data." Performance of the "LIKE" method was better than the "LLOQ/2" or "Discard" methods. Differences between all methods were small at the lowest level of BLOQ censoring. The "LIKE" method resulted in low successful minimization (<50%) and covariance step success (<30%), although estimates were obtained in most runs (∼90%). For the real PK data set (7.4% BLOQ), similar parameter estimates were obtained using all methods. Incorporation of BLOQ concentrations showed superior performance in terms of bias and precision over established BLOQ methods, and was shown to be feasible in a real PopPK analysis. PMID:26038706

  5. Incorporation of concentration data below the limit of quantification in population pharmacokinetic analyses

    PubMed Central

    Keizer, Ron J; Jansen, Robert S; Rosing, Hilde; Thijssen, Bas; Beijnen, Jos H; Schellens, Jan H M; Huitema, Alwin D R

    2015-01-01

    Handling of data below the lower limit of quantification (LLOQ), i.e., below the limit of quantification (BLOQ), in population pharmacokinetic (PopPK) analyses is important for reducing bias and imprecision in parameter estimation. We aimed to evaluate whether using the concentration data below the LLOQ has superior performance over several established methods. The performance of this approach (“All data”) was evaluated and compared to other methods: “Discard,” “LLOQ/2,” and “LIKE” (likelihood-based). An analytical and residual error model was constructed on the basis of in-house analytical method validations and analyses from the literature, with additional variability included to account for model misspecification. Simulation analyses were performed for various levels of BLOQ censoring, several structural PopPK models, and additional influences. Performance was evaluated by relative root mean squared error (RMSE) and run success for the various BLOQ approaches. Performance was also evaluated for a real PopPK data set. For all PopPK models and levels of censoring, RMSE values were lowest using “All data.” Performance of the “LIKE” method was better than the “LLOQ/2” or “Discard” methods. Differences between all methods were small at the lowest level of BLOQ censoring. The “LIKE” method resulted in low successful minimization (<50%) and covariance step success (<30%), although estimates were obtained in most runs (∼90%). For the real PK data set (7.4% BLOQ), similar parameter estimates were obtained using all methods. Incorporation of BLOQ concentrations showed superior performance in terms of bias and precision over established BLOQ methods, and was shown to be feasible in a real PopPK analysis. PMID:26038706
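    The likelihood-based ("LIKE") method has a compact form: observed concentrations contribute a normal density, BLOQ records contribute the censored probability P(y < LLOQ). A sketch with hypothetical predictions and observations (this mirrors an M3-style likelihood; it is not the authors' estimation code):

```python
import numpy as np
from scipy.stats import norm

def neg_log_lik(pred, obs, sigma, lloq):
    """Censored normal likelihood for BLOQ handling."""
    below = obs < lloq
    ll = norm.logpdf(obs[~below], loc=pred[~below], scale=sigma).sum()
    ll += norm.logcdf((lloq - pred[below]) / sigma).sum()
    return -ll

# Hypothetical mono-exponential predictions vs. observations.
t = np.array([0.5, 1, 2, 4, 8, 12, 24])
pred = 10 * np.exp(-0.3 * t)
obs = np.array([8.4, 7.1, 5.8, 3.2, 1.1, 0.4, 0.05])
print(neg_log_lik(pred, obs, sigma=0.5, lloq=0.1))
```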

  6. Exploring a Multi-resolution Approach Using AMIP Simulations

    SciTech Connect

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun; Yang, Qing; Lu, Jian; Hagos, Samson M.; Rauscher, Sara; Dong, Li; Ringler, Todd; Lauritzen, P. H.

    2015-07-31

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  7. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

    High quality volume visualization through ray casting on graphics processing units (GPUs) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy, which are generated by down-sampling eight neighboring nodes on a finer level. Due to the limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory thereby acts as a second-level cache for these sources, from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture atlas in GPU memory. This texture atlas contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the volume ray casting algorithm to access the whole working set of bricks through a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations. We encode this information into a small 3D index texture which represents the current octree

  8. Multiresolutional models of uncertainty generation and reduction

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

    Kolmogorov's axiomatic principles of probability theory are reconsidered in the scope of their applicability to the processes of knowledge acquisition and interpretation. The model of uncertainty generation is modified in order to reflect the reality of engineering problems, particularly in the area of intelligent control. This model implies algorithms of learning which are organized in three groups reflecting the degree of conceptualization of the knowledge the system is dealing with. It is essential that these algorithms are motivated by, and consistent with, the multiresolutional model of knowledge representation which is reflected in the structure of models and the algorithms of learning.

  9. An adapted multi-resolution representation of regional VTEC

    NASA Astrophysics Data System (ADS)

    Liang, Wenjing; Dettmering, Denise; Schmidt, Michael

    2014-05-01

    The resolution of ionosphere models is mainly limited by inhomogeneously distributed input data. The International GNSS Service (IGS) provides global ionosphere maps (GIMs) of vertical total electron content (VTEC) values with a spatial resolution of 2.5° in latitude and 5° in longitude. In order to resolve local ionospheric structures and support high-precision GPS positioning, different high-resolution regional ionosphere models have been developed using dense observation networks. However, there is no model available with a spatial resolution adapted to the data distribution. In this study we present a regional multi-resolution VTEC model which adapts the model resolution to the data distribution. In our approach, VTEC consists of a given background model such as the International Reference Ionosphere (IRI) and an unknown correction part modeled as a series expansion in terms of B-spline scaling functions. The resolution level of the B-spline functions has to be determined by the distribution of the input data. With a sufficient number of observations, a higher level can be chosen, i.e., finer structures of VTEC can be modeled. The input data are heterogeneously distributed; specifically, the observations are dense over the continents whereas large data gaps exist over the oceans. Furthermore, the GPS stations are unevenly distributed over the continents. A data-adapted VTEC model is achieved by combining a regional VTEC part with some local densification areas, each represented by a B-spline expansion. The unknown scaling coefficients of all these parts are then estimated by parameter estimation. In this contribution, our model approach is introduced, including the method of multi-resolution representation (MRR) and of combining the regional and local model parts. Furthermore, we show an example based on GNSS observations from selected permanent stations in South America.
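    The correction part described above is linear in its coefficients, so its estimation reduces to least squares once the basis level is fixed. A 1-D toy sketch with linear B-spline (hat) scaling functions standing in for the 2-D basis, and synthetic residuals in place of GNSS-derived VTEC:

```python
import numpy as np

def hat_basis(x, knots):
    """Linear B-spline (hat) functions on a uniform 1-D knot grid."""
    h = knots[1] - knots[0]
    return np.clip(1 - np.abs(x[:, None] - knots[None, :]) / h, 0, None)

rng = np.random.default_rng(0)
lat = rng.uniform(-60, 20, 300)                    # station latitudes (deg)
resid = 2 * np.sin(np.radians(3 * lat)) + 0.3 * rng.normal(size=lat.size)

# Resolution level chosen to match data density (here: fixed a priori).
knots = np.linspace(-60, 20, 17)
coeff, *_ = np.linalg.lstsq(hat_basis(lat, knots), resid, rcond=None)
correction = hat_basis(np.linspace(-60, 20, 200), knots) @ coeff
```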

  10. Using Controlled Landslide Initiation Experiments to Test Limit-Equilibrium Analyses of Slope Stability

    NASA Astrophysics Data System (ADS)

    Reid, M. E.; Iverson, R. M.; Brien, D. L.; Iverson, N. R.; Lahusen, R. G.; Logan, M.

    2004-12-01

    Most studies of landslide initiation employ limit equilibrium analyses of slope stability. Owing to a lack of detailed data, however, few studies have tested limit-equilibrium predictions against physical measurements of slope failure. We have conducted a series of field-scale, highly controlled landslide initiation experiments at the USGS debris-flow flume in Oregon; these experiments provide exceptional data to test limit equilibrium methods. In each of seven experiments, we attempted to induce failure in a 0.65 m thick, 2 m wide, 6 m³ prism of loamy sand placed behind a retaining wall in the 31° sloping flume. We systematically investigated triggering of sliding by groundwater injection, by prolonged moderate-intensity sprinkling, and by bursts of high intensity sprinkling. We also used vibratory compaction to control soil porosity and thereby investigate differences in failure behavior of dense and loose soils. About 50 sensors were monitored at 20 Hz during the experiments, including nests of tiltmeters buried at 7 cm spacing to define subsurface failure geometry, and nests of tensiometers and pore-pressure sensors to define evolving pore-pressure fields. In addition, we performed ancillary laboratory tests to measure soil porosity, shear strength, hydraulic conductivity, and compressibility. In loose soils (porosity of 0.52 to 0.55), abrupt failure typically occurred along the flume bed after substantial soil deformation. In denser soils (porosity of 0.41 to 0.44), gradual failure occurred within the soil prism. All failure surfaces had a maximum length to depth ratio of about 7. In even denser soil (porosity of 0.39), we could not induce failure by sprinkling. The internal friction angle of the soils varied from 28° to 40° with decreasing porosity. We analyzed stability at failure, given the observed pore-pressure conditions just prior to large movement, using a 1-D infinite-slope method and a more complete 2-D Janbu method. Each method provides a static
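    The 1-D infinite-slope method mentioned in the closing sentence has a closed form worth stating: FS = [c' + (γ z cos²θ − u) tanφ'] / (γ z sinθ cosθ). A sketch with hypothetical soil parameters in the ranges reported above (not the measured experiment values):

```python
import numpy as np

def infinite_slope_fs(c, phi_deg, gamma, z, theta_deg, u):
    """Infinite-slope factor of safety; c,u in kPa, gamma in kN/m^3, z in m."""
    th, phi = np.radians(theta_deg), np.radians(phi_deg)
    resist = c + (gamma * z * np.cos(th) ** 2 - u) * np.tan(phi)
    drive = gamma * z * np.sin(th) * np.cos(th)
    return resist / drive

# 31 degree flume slope, 0.65 m soil depth, rising basal pore pressure.
for u in (0.0, 2.0, 4.0):
    fs = infinite_slope_fs(c=0.5, phi_deg=35, gamma=16.0,
                           z=0.65, theta_deg=31, u=u)
    print(f"u = {u:.1f} kPa -> FS = {fs:.2f}")
```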

  11. Multiresolution simulated annealing for brain image analysis

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Majcenic, Zoran

    1999-05-01

    Analysis of biomedical images is an important step in quantification of various diseases such as human spontaneous intracerebral brain hemorrhage (ICH). In particular, the study of outcome in patients having ICH requires measurements of various ICH parameters such as hemorrhage volume and their change over time. A multiresolution probabilistic approach for segmentation of CT head images is presented in this work. This method views the segmentation problem as a pixel labeling problem. In this application the labels are: background, skull, brain tissue, and ICH. The proposed method is based on the maximum a posteriori (MAP) estimation of the unknown pixel labels. The MAP method maximizes the a posteriori probability of the segmented image given the observed (input) image. A Markov random field (MRF) model has been used for the posterior distribution. The MAP estimate of the segmented image has been determined using the simulated annealing (SA) algorithm. The SA algorithm is used to minimize the energy function associated with the MRF posterior distribution. A multiresolution SA (MSA) has been developed to speed up the annealing process and is presented in detail in this work. A knowledge-based classification based on the brightness, size, shape and relative position toward other regions is performed at the end of the procedure. The regions are identified as background, skull, brain, ICH and calcifications.
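    The MAP-by-annealing scheme has a compact single-resolution core: propose a label flip, compute the energy change from the data term plus a Potts smoothness prior, and accept with the Metropolis rule under a cooling schedule. The sketch below shows that core (the paper's multiresolution speed-up and knowledge-based post-classification are omitted); class means and parameters are illustrative:

```python
import numpy as np

def sa_segment(img, means, beta=1.5, t0=4.0, sweeps=60, rng=None):
    """Pixel labeling by simulated annealing on a Potts-MRF posterior."""
    rng = rng or np.random.default_rng(0)
    labels = np.argmin([(img - m) ** 2 for m in means], axis=0)
    H, W = img.shape
    for s in range(sweeps):
        T = t0 * 0.95 ** s                           # cooling schedule
        for _ in range(H * W):
            yy, xx = rng.integers(H), rng.integers(W)
            old, new = labels[yy, xx], rng.integers(len(means))
            dE = (img[yy, xx] - means[new]) ** 2 \
               - (img[yy, xx] - means[old]) ** 2
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = yy + dy, xx + dx
                if 0 <= ny < H and 0 <= nx < W:
                    dE += beta * (int(labels[ny, nx] != new) -
                                  int(labels[ny, nx] != old))
            if dE < 0 or rng.random() < np.exp(-dE / T):
                labels[yy, xx] = new
    return labels

rng = np.random.default_rng(0)
truth = (np.indices((48, 48)).sum(axis=0) > 48).astype(float)
seg = sa_segment(truth + 0.4 * rng.normal(size=truth.shape), means=[0.0, 1.0])
```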

  12. Active pixel sensor array with multiresolution readout

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)

    1999-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.

  13. Liver fibrosis grading using multiresolution histogram information in real-time elastography

    NASA Astrophysics Data System (ADS)

    Albouy-Kissi, A.; Sarry, L.; Massoulier, S.; Bonny, C.; Randl, K.; Abergel, A.

    2010-03-01

    Despite many limitations, liver biopsy remains the gold standard method for grading and staging liver fibrosis. Several modalities have been developed for non-invasive assessment of liver diseases. Real-time elastography may constitute a true alternative to liver biopsy by providing an image of tissular elasticity distribution correlated with the fibrosis grade. In this paper, we investigate a new approach to the assessment of liver fibrosis by the classification of fibrosis morphometry. Multiresolution histograms, based on a combination of intensity and texture features, have been tested as the feature space, and the ability of such multiresolution histograms to discriminate fibrosis grade has been demonstrated. The results have been tested on seventeen patients who underwent real-time elastography and FibroScan examinations.

  14. Multiresolution local tomography in dental radiology using wavelets.

    PubMed

    Niinimäki, K; Siltanen, S; Kolehmainen, V

    2007-01-01

    A Bayesian multiresolution model for local tomography in dental radiology is proposed. In this model a wavelet basis is used to represent dental structures and the prior information is modeled in terms of a Besov norm penalty. The proposed wavelet-based multiresolution method is used to reduce the number of unknowns in the reconstruction problem by abandoning fine-scale wavelets outside the region of interest (ROI). This multiresolution model allows a significant reduction in the number of unknowns without loss of reconstruction accuracy inside the ROI. The feasibility of the proposed method is tested with two-dimensional (2D) examples using simulated and experimental projection data from dental specimens. PMID:18002604

  15. MULTIRESOLUTION REPRESENTATION OF OPERATORS WITH BOUNDARY CONDITIONS ON SIMPLE DOMAINS

    SciTech Connect

    Beylkin, Gregory; Fann, George I; Harrison, Robert J; Kurcz, Christopher E; Monzon, Lucas A

    2011-01-01

    We develop a multiresolution representation of a class of integral operators satisfying boundary conditions on simple domains in order to construct fast algorithms for their application. We also elucidate some delicate theoretical issues related to the construction of periodic Green's functions for Poisson's equation. By applying the method of images to the non-standard form of the free space operator, we obtain lattice sums that converge absolutely on all scales, except possibly on the coarsest scale. On the coarsest scale the lattice sums may be only conditionally convergent and, thus, allow for some freedom in their definition. We use the limit of square partial sums as a definition of the limit and obtain a systematic, simple approach to the construction (in any dimension) of periodized operators with sparse non-standard forms. We illustrate the results on several examples in dimensions one and three: the Hilbert transform, the projector on divergence-free functions, the non-oscillatory Helmholtz Green's function and the Poisson operator. Remarkably, the limit of square partial sums yields a periodic Poisson Green's function which is not a convolution. Using a short sum of decaying Gaussians to approximate periodic Green's functions, we arrive at fast algorithms for their application. We further show that the results obtained for operators with periodic boundary conditions extend to operators with Dirichlet, Neumann, or mixed boundary conditions.

  16. A qualitative multiresolution model for counterterrorism

    NASA Astrophysics Data System (ADS)

    Davis, Paul K.

    2006-05-01

    This paper describes a prototype model for exploring counterterrorism issues related to the recruiting effectiveness of organizations such as al Qaeda. The prototype demonstrates how a model can be built using qualitative input variables appropriate to representation of social-science knowledge, and how a multiresolution design can allow a user to think and operate at several levels - such as first conducting low-resolution exploratory analysis and then zooming into several layers of detail. The prototype also motivates and introduces a variety of nonlinear mathematical methods for representing how certain influences combine. This has value for, e.g., representing collapse phenomena underlying some theories of victory, and for explanations of historical results. The methodology is believed to be suitable for more extensive system modeling of terrorism and counterterrorism.

  17. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero-set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  18. Multi-resolution analysis for ENO schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1991-01-01

    Given a function u(x), represented by its cell averages on cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis was applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality at the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.

  19. Hanging-wall deformation above a normal fault: sequential limit analyses

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaoping; Leroy, Yves M.; Maillot, Bertrand

    2015-04-01

    The deformation in the hanging wall above a segmented normal fault is analysed with sequential limit analysis (SLA). The method combines predictions of the dip and position of the active fault and axial surface with geometrical evolution à la Suppe (Groshong, 1989). Two problems are considered. The first follows the prototype proposed by Patton (2005) with a pre-defined convex, segmented fault. The orientation of the upper segment of the normal fault is an unknown in the second problem. The loading in both problems consists of the retreat of the back wall and sedimentation. This sedimentation starts from the lowest point of the topography and acts at the rate rs relative to the wall retreat rate. For the first problem, the normal fault either has zero friction or a friction value set to 25° or 30° to fit the experimental results (Patton, 2005). In the zero-friction case, a hanging-wall anticline develops much like in the experiments. In the 25° friction case, slip on the upper segment is accompanied by rotation of the axial plane, producing a broad shear zone rooted at the fault bend. The same observation is made in the 30° case, but without slip on the upper segment. Experimental outcomes show a behaviour in between these two latter cases. For the second problem, mechanics predicts a concave fault bend with an upper segment dip decreasing during extension. The axial surface rooting at the normal fault bend sees its dip increase during extension, resulting in a curved roll-over. Softening on the normal fault leads to a stepwise rotation responsible for strain partitioning into small blocks in the hanging wall. The rotation is due to the subsidence of the topography above the hanging wall. Sedimentation in the lowest region thus reduces the rotations. Note that these rotations predicted by mechanics are not accounted for in most geometrical approaches (Xiao and Suppe, 1992) and are observed in sand box experiments (Egholm et al., 2007, referring

  20. A fast multi-resolution approach to tomographic PIV

    NASA Astrophysics Data System (ADS)

    Discetti, Stefano; Astarita, Tommaso

    2012-03-01

    Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional anemometric non-intrusive measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed in the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, which are well suited to handling the problem of a limited number of views but are computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimation of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a considerable reduction in memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line-of-sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same performance in terms of accuracy.
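
    The two ingredients named above, a multiplicative update and a cheap coarse-grid first guess, can be sketched generically. The snippet below is a made-up 1-D toy (a real Tomo-PIV solver reconstructs 3-D voxel intensities from camera line-of-sight weights); the weight matrix and densities are synthetic assumptions.

    ```python
    import numpy as np

    def mart(W, I, f0, n_iter=5, mu=1.0):
        """MART: f_j <- f_j * (I_i / (W f)_i)^(mu * w_ij), ray by ray."""
        f = f0.copy()
        for _ in range(n_iter):
            for i in range(W.shape[0]):
                proj = W[i] @ f
                if proj > 0:
                    f *= (I[i] / proj) ** (mu * W[i])
        return f

    rng = np.random.default_rng(0)
    n_vox, n_rays = 64, 40
    W = rng.random((n_rays, n_vox)) * (rng.random((n_rays, n_vox)) < 0.2)
    f_true = rng.random(n_vox)
    I = W @ f_true

    # First guess on a 2x coarser grid, then upsample to the fine grid.
    W_c = W[:, 0::2] + W[:, 1::2]               # rays see merged voxel pairs
    f0 = np.repeat(mart(W_c, I, np.ones(n_vox // 2), 3), 2) / 2.0
    print("coarse-start error :", np.linalg.norm(mart(W, I, f0) - f_true))
    print("uniform-start error:", np.linalg.norm(mart(W, I, np.ones(n_vox)) - f_true))
    ```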

  1. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay rather than the entire sample process. Our objective was to develop a method to determine the 95% LOD (lowest co...

  2. Multiresolution spectrotemporal analysis of complex sounds

    NASA Astrophysics Data System (ADS)

    Chi, Taishih; Ru, Powen; Shamma, Shihab A.

    2005-08-01

    A computational model of auditory analysis is described that is inspired by psychoacoustical and neurophysiological findings in early and central stages of the auditory system. The model provides a unified multiresolution representation of the spectral and temporal features likely critical in the perception of sound. Simplified, more specifically tailored versions of this model have already been validated by successful application in the assessment of speech intelligibility [Elhilali et al., Speech Commun. 41(2-3), 331-348 (2003); Chi et al., J. Acoust. Soc. Am. 106, 2719-2732 (1999)] and in explaining the perception of monaural phase sensitivity [R. Carlyon and S. Shamma, J. Acoust. Soc. Am. 114, 333-348 (2003)]. Here we provide a more complete mathematical formulation of the model, illustrating how complex signals are transformed through various stages of the model, and relating it to comparable existing models of auditory processing. Furthermore, we outline several reconstruction algorithms to resynthesize the sound from the model output so as to evaluate the fidelity of the representation and contribution of different features and cues to the sound percept.

  3. Multiresolution segmentation technique for spine MRI images

    NASA Astrophysics Data System (ADS)

    Li, Haiyun; Yan, Chye H.; Ong, Sim Heng; Chui, Cheekong K.; Teoh, Swee H.

    2002-05-01

    In this paper, we describe a hybrid method for the segmentation of spinal magnetic resonance images that was developed based on the natural phenomenon of stones appearing as water recedes. The candidate segmentation regions correspond to the stones, with characteristics such as intensity extrema, edges, intensity ridges and grey-level blobs. The segmentation method is implemented as a combination of wavelet multiresolution decomposition and fuzzy clustering. First, thresholding is performed dynamically according to local characteristics to detect possible target areas. We then use fuzzy c-means clustering in concert with wavelet multiscale edge detection to identify the maximum-likelihood anatomical and functional target areas. Fuzzy c-means uses iterative optimization of an objective function based on a weighted similarity measure between the pixels in the image and each of c cluster centers. Local extrema of this objective function are indicative of an optimal clustering of the input data. The multiscale edges can be detected and characterized from the local maxima of the modulus of the wavelet transform, while noise can be reduced to some extent by applying thresholds. The method provides an efficient and robust algorithm for spinal image segmentation. Examples are presented to demonstrate the efficiency of the technique on several spinal MRI images.
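
    The fuzzy c-means step admits a compact sketch. The snippet below implements standard FCM on 1-D intensities (our generic illustration, not the authors' full hybrid pipeline); the cluster count, fuzzifier and synthetic intensity modes are assumptions.

    ```python
    import numpy as np

    def fuzzy_c_means(x, c=3, m=2.0, n_iter=50, seed=0):
        """Iteratively optimize the weighted FCM objective on samples x."""
        rng = np.random.default_rng(seed)
        centers = rng.choice(x, size=c, replace=False).astype(float)
        for _ in range(n_iter):
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)     # fuzzy memberships in [0,1]
            w = u ** m
            centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        return centers, u

    rng = np.random.default_rng(1)
    pixels = np.concatenate([rng.normal(mu, 5.0, 300) for mu in (40, 120, 200)])
    centers, u = fuzzy_c_means(pixels)
    print("cluster centers:", np.sort(centers))   # near 40, 120, 200
    ```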

  4. Multi-resolution analysis for ENO schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1993-01-01

    Given a function u(x) which is represented by its cell averages on cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. We apply this multi-resolution analysis to essentially non-oscillatory (ENO) schemes in order to reduce the number of numerical flux computations needed to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality on the appropriate coarser grid. We present an efficient algorithm for implementing this program in the one-dimensional case; this algorithm can be extended to the multi-dimensional case with Cartesian grids.

  5. Multi-Resolution Dynamic Meshes with Arbitrary Deformations

    SciTech Connect

    Shamir, A.; Pascucci, V.; Bajaj, C.

    2000-07-10

    Multi-resolution techniques and models have been shown to be effective for the display and transmission of large static geometric objects. Dynamic environments with internally deforming models and scientific simulations using dynamic meshes pose greater challenges in terms of time and space, and require the development of similar solutions. In this paper we introduce the T-DAG, an adaptive multi-resolution representation for dynamic meshes with arbitrary deformations including attribute, position, connectivity and topology changes. T-DAG stands for Time-dependent Directed Acyclic Graph, which defines the structure supporting this representation. We also provide an incremental algorithm (in time) for constructing the T-DAG representation of a given input mesh. This enables the traversal and use of the multi-resolution dynamic model for partial playback while still constructing new time-steps.

  6. A new study on mammographic image denoising using multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for the early detection of breast cancer. However, breast lesion areas are difficult to detect because mammograms are contaminated with noise. This work discusses various multiresolution denoising techniques, including the classical methods based on wavelets and contourlets; the emerging multiresolution methods are also investigated. A new denoising method based on the dual-tree contourlet transform (DCT) is proposed; the DCT possesses the advantages of approximate shift invariance, directionality and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, and obtains better performance than the other methods both in visual effects and in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) values.
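
    For reference, the three reported quality measures are easy to compute. The sketch below assumes NumPy for MSE/PSNR and scikit-image for SSIM, on synthetic 8-bit images; it illustrates the metrics only, not the proposed denoiser.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    def mse(a, b):
        return np.mean((a.astype(float) - b.astype(float)) ** 2)

    def psnr(a, b, peak=255.0):
        return 10.0 * np.log10(peak ** 2 / mse(a, b))   # higher is better

    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, (128, 128)).astype(np.uint8)
    noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255).astype(np.uint8)
    print(f"MSE={mse(clean, noisy):.1f}  PSNR={psnr(clean, noisy):.2f} dB  "
          f"SSIM={structural_similarity(clean, noisy, data_range=255):.3f}")
    ```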

  7. Numerical analyses on optical limiting performances of chloroindium phthalocyanines with different substituent positions

    NASA Astrophysics Data System (ADS)

    Yu-Jin, Zhang; Xing-Zhe, Li; Ji-Cai, Liu; Chuan-Kui, Wang

    2016-01-01

    The optical limiting properties of two soluble chloroindium phthalocyanines with α- and β-alkoxyl substituents in a nanosecond laser field have been studied by numerically solving the coupled singlet-triplet rate equations together with the paraxial wave field equation under the Crank-Nicolson scheme. Both transverse and longitudinal effects of the laser field on the photophysical properties of the compounds are considered. An effective transfer time between the ground state and the lowest triplet state is defined in the reformulated rate equations to characterize the dynamics of singlet-triplet population transfer. It is found that both phthalocyanines exhibit good nonlinear optical absorption, while the compound with the α-substituent shows enhanced optical limiting performance. Our ab initio calculations reveal that the phthalocyanine with the α-substituent has more pronounced electron delocalization and lower frontier orbital transfer energies, which are responsible for its preferable photophysical properties. Project supported by the National Basic Research Program of China (Grant No. 2011CB808100), the National Natural Science Foundation of China (Grant Nos. 11204078 and 11574082), and the Fundamental Research Funds for the Central Universities of China (Grant No. 2015MS54).

  8. Telescopic multi-resolution augmented reality

    NASA Astrophysics Data System (ADS)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information but also to conduct thought experiments through AR. As a result of the consistency, this framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Towards this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interaction requires the identification of observable image features that can indicate the presence of information at multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, to be pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations.

  9. Examination of metabolic responses to phosphorus limitation via proteomic analyses in the marine diatom Phaeodactylum tricornutum.

    PubMed

    Feng, Tian-Ya; Yang, Zhi-Kai; Zheng, Jian-Wei; Xie, Ying; Li, Da-Wei; Murugan, Shanmugaraj Bala; Yang, Wei-Dong; Liu, Jie-Sheng; Li, Hong-Ye

    2015-01-01

    Phosphorus (P) is an essential macronutrient for the survival of marine phytoplankton. In the present study, the phytoplankton response to phosphorus limitation was studied by proteomic profiling in the diatom Phaeodactylum tricornutum at both the cellular and molecular levels. A total of 42 non-redundant proteins were identified, among which 8 were upregulated and 34 were downregulated. The results also showed that proteins associated with inorganic phosphate uptake were downregulated, whereas proteins involved in organic phosphorus uptake, such as alkaline phosphatase, were upregulated. Proteins involved in metabolic responses such as protein degradation, lipid accumulation and photorespiration were upregulated, whereas energy metabolism, photosynthesis, amino acid and nucleic acid metabolism tended to be downregulated. Overall, our results show the changes in protein levels of P. tricornutum during phosphorus stress. This study is a prelude to understanding the role of phosphorus in marine biogeochemical cycles and the phytoplankton response to phosphorus scarcity in the ocean. It also provides insight into the succession of phytoplankton communities, providing a scientific basis for elucidating the mechanism of algal blooms. PMID:26020491

  10. Examination of metabolic responses to phosphorus limitation via proteomic analyses in the marine diatom Phaeodactylum tricornutum

    PubMed Central

    Feng, Tian-Ya; Yang, Zhi-Kai; Zheng, Jian-Wei; Xie, Ying; Li, Da-Wei; Murugan, Shanmugaraj Bala; Yang, Wei-Dong; Liu, Jie-Sheng; Li, Hong-Ye

    2015-01-01

    Phosphorus (P) is an essential macronutrient for the survival of marine phytoplankton. In the present study, the phytoplankton response to phosphorus limitation was studied by proteomic profiling in the diatom Phaeodactylum tricornutum at both the cellular and molecular levels. A total of 42 non-redundant proteins were identified, among which 8 were upregulated and 34 were downregulated. The results also showed that proteins associated with inorganic phosphate uptake were downregulated, whereas proteins involved in organic phosphorus uptake, such as alkaline phosphatase, were upregulated. Proteins involved in metabolic responses such as protein degradation, lipid accumulation and photorespiration were upregulated, whereas energy metabolism, photosynthesis, amino acid and nucleic acid metabolism tended to be downregulated. Overall, our results show the changes in protein levels of P. tricornutum during phosphorus stress. This study is a prelude to understanding the role of phosphorus in marine biogeochemical cycles and the phytoplankton response to phosphorus scarcity in the ocean. It also provides insight into the succession of phytoplankton communities, providing a scientific basis for elucidating the mechanism of algal blooms. PMID:26020491

  11. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique for enabling interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms: one error value must be computed for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
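
    The frequency-table trick is simple to reproduce. The sketch below, assuming NumPy and byte data, tabulates the unique (original, approximation) value pairs once and then evaluates any new error function as a weighted sum over at most 256 x 256 pairs instead of over every voxel.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    orig = rng.integers(0, 256, 1_000_000)        # original voxel values
    approx = np.clip(orig + rng.integers(-3, 4, orig.size), 0, 255)

    # One-time pass: histogram of all occurring (orig, approx) pairs.
    freq = np.bincount(orig * 256 + approx, minlength=256 * 256)

    def total_error(err_fn):
        """Sum err_fn over all voxels using only the unique-pair table."""
        o, a = np.divmod(np.arange(256 * 256), 256)
        return np.dot(freq, err_fn(o, a))         # 65536 terms, not 10^6

    # After a transfer-function change, re-evaluation is now cheap:
    tf = rng.random(256)                          # hypothetical transfer function
    print(total_error(lambda o, a: np.abs(tf[o] - tf[a])))
    ```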

  12. Preliminary scoping safety analyses of the limiting design basis protected accidents for the Fast Flux Test Facility tritium production core

    SciTech Connect

    Heard, F.J.

    1997-11-19

    The SAS4A/SASSYS-1 computer code is used to perform a series of analyses for the limiting protected design basis transient events given a representative tritium and medical isotope production core design proposed for the Fast Flux Test Facility. The FFTF tritium and isotope production mission will require a different core loading which features higher-enrichment fuel, tritium targets, and medical isotope production assemblies. Changes in several key core parameters, such as the Doppler coefficient and delayed neutron fraction, will affect the transient response of the reactor. Both reactivity insertion and reduction-of-heat-removal events were analyzed. The analysis methods and modeling assumptions are described. Results of the analyses and comparisons against fuel pin performance criteria are presented to provide quantification that the plant protection system is adequate to maintain the necessary safety margins and assure cladding integrity.

  13. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of Gabor wavelet frequencies. High precision in detecting the needle voxels leads to robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a 40% gain), and better robustness and confidence were confirmed in practical experiments.

  14. a DTM Multi-Resolution Compressed Model for Efficient Data Storage and Network Transfer

    NASA Astrophysics Data System (ADS)

    Biagi, L.; Brovelli, M.; Zamboni, G.

    2011-08-01

    In recent years the technological evolution of terrestrial, aerial and satellite surveying has considerably increased measurement accuracy and, consequently, the quality of the derived information. At the same time, the progressively weaker limitations of data storage devices, in terms of capacity and cost, have allowed the storage and elaboration of a greater number of instrumental observations. A significant example is terrain height surveyed by LIDAR (LIght Detection And Ranging) technology, where several height measurements can be obtained for each square meter of land. The availability of such a large quantity of observations is an essential requisite for in-depth knowledge of the phenomena under study. At the same time, however, the most common Geographical Information Systems (GISs) show latency in visualizing and analyzing these kinds of data. The problem becomes more evident for Internet GIS. These systems are based on very frequent flows of geographical information over the internet and, for this reason, the bandwidth of the network and the size of the data to be transmitted are two fundamental factors to consider in order to guarantee the actual usability of these technologies. In this paper we focus our attention on digital terrain models (DTMs) and briefly analyse the problem of defining the minimal information necessary to store and transmit DTMs over a network, with a fixed tolerance, starting from a huge number of observations. We then propose an innovative compression approach for sparse observations by means of multi-resolution spline function approximation. The method provides metrical accuracy at least comparable to that of the most common deterministic interpolation algorithms (inverse distance weighting, local polynomials, radial basis functions). At the same time it dramatically reduces the amount of information required for storing or transmitting and rebuilding a digital terrain model.

  15. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh and proceeding to increasingly fine control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  16. Periodic Density Functional Theory Solver using Multiresolution Analysis with MADNESS

    NASA Astrophysics Data System (ADS)

    Harrison, Robert; Thornton, William

    2011-03-01

    We describe the first implementation of an all-electron Kohn-Sham density functional theory (DFT) periodic solver using multi-wavelets and fast integral equations in MADNESS (multiresolution adaptive numerical environment for scientific simulation; http://code.google.com/p/m-a-d-n-e-s-s). The multiresolution nature of a multi-wavelet basis allows for fast computation with guaranteed precision. By reformulating the Kohn-Sham eigenvalue equation into the Lippmann-Schwinger equation, we can avoid using the derivative operator, which allows better control of overall precision for the all-electron problem. Other highlights include the development of periodic integral operators with low-rank separation, an adaptable model potential for the nuclear potential, and an implementation of Hartree-Fock exchange. This work was supported by NSF project OCI-0904972 and made use of resources at the Center for Computational Sciences at Oak Ridge National Laboratory under contract DE-AC05-00OR22725.
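
    For concreteness, the reformulation mentioned above takes the following form for a bound state in atomic units (E < 0); this is our gloss on the standard bound-state Helmholtz approach used with multiwavelets, not text from the abstract:

    ```latex
    % Kohn-Sham eigenproblem:  (-\tfrac{1}{2}\nabla^2 + V)\,\psi = E\,\psi .
    % Rearranged into integral (Lippmann-Schwinger) form, with no
    % derivative operator applied to \psi:
    \[
      \psi \;=\; -2\left(-\nabla^2 + \mu^2\right)^{-1} V \psi,
      \qquad \mu = \sqrt{-2E},
    \]
    % where the inverse acts by convolution with the bound-state
    % Helmholtz Green's function:
    \[
      \psi(\mathbf{r}) \;=\; -2 \int
      \frac{e^{-\mu \lVert \mathbf{r}-\mathbf{r}' \rVert}}
           {4\pi \lVert \mathbf{r}-\mathbf{r}' \rVert}\,
      V(\mathbf{r}')\,\psi(\mathbf{r}')\,\mathrm{d}\mathbf{r}' .
    \]
    ```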

  17. A Multiresolution Method for Parameter Estimation of Diffusion Processes

    PubMed Central

    Kou, S. C.; Olding, Benjamin P.; Lysy, Martin; Liu, Jun S.

    2014-01-01

    Diffusion process models are widely used in science, engineering and finance. Most diffusion processes are described by stochastic differential equations in continuous time. In practice, however, data is typically only observed at discrete time points. Except for a few very special cases, no analytic form exists for the likelihood of such discretely observed data. For this reason, parametric inference is often achieved by using discrete-time approximations, with accuracy controlled through the introduction of missing data. We present a new multiresolution Bayesian framework to address the inference difficulty. The methodology relies on the use of multiple approximations and extrapolation, and is significantly faster and more accurate than known strategies based on Gibbs sampling. We apply the multiresolution approach to three data-driven inference problems – one in biophysics and two in finance – one of which features a multivariate diffusion model with an entirely unobserved component. PMID:25328259

  18. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report first on our work on the development of numerical methods for tangent curve computation.

  19. Gaze-contingent multiresolutional displays: an integrative review.

    PubMed

    Reingold, Eyal M; Loschky, Lester C; McConkie, George W; Stampe, David M

    2003-01-01

    Gaze-contingent multiresolutional displays (GCMRDs) center high-resolution information on the user's gaze position, matching the user's area of interest (AOI). Image resolution and details outside the AOI are reduced, lowering the requirements for processing resources and transmission bandwidth in demanding display and imaging applications. This review provides a general framework within which GCMRD research can be integrated, evaluated, and guided. GCMRDs (or "moving windows") are analyzed in terms of (a) the nature of their images (i.e., "multiresolution," "variable resolution," "space variant," or "level of detail"), and (b) the movement of the AOI (i.e., "gaze contingent," "foveated," or "eye slaved"). We also synthesize the known human factors research on GCMRDs and point out important questions for future research and development. Actual or potential applications of this research include flight, medical, and driving simulators; virtual reality; remote piloting and teleoperation; infrared and indirect vision; image transmission and retrieval; telemedicine; video teleconferencing; and artificial vision systems. PMID:14529201

  20. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    USGS Publications Warehouse

    Stokdyk, Joel P.; Firnstahl, Aaron; Spencer, Susan K.; Burch, Tucker R; Borchardt, Mark A.

    2016-01-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then a third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L−1 assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation.
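
    The probit step can be sketched directly. The snippet below, with made-up replicate data and assuming statsmodels, fits detection (0/1) against log10 concentration and inverts the fit for the concentration detected 95% of the time.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    # Hypothetical spiking experiment: four concentrations, n = 10 each.
    conc = np.repeat([2.0, 5.0, 12.0, 30.0], 10)
    detected = np.concatenate([
        [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],   # 3/10 positive at 2 gc/rxn
        [1, 1, 0, 1, 1, 0, 1, 1, 0, 1],   # 7/10 at 5 gc/rxn
        [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],   # 9/10 at 12 gc/rxn
        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],   # 10/10 at 30 gc/rxn
    ])

    X = sm.add_constant(np.log10(conc))
    fit = sm.Probit(detected, X).fit(disp=False)
    b0, b1 = fit.params
    # P(detect) = Phi(b0 + b1*log10(c)) = 0.95  =>  solve for c.
    lod95 = 10 ** ((norm.ppf(0.95) - b0) / b1)
    print(f"95% LOD ~ {lod95:.1f} gc/rxn")
    ```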

  1. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR.

    PubMed

    Stokdyk, Joel P; Firnstahl, Aaron D; Spencer, Susan K; Burch, Tucker R; Borchardt, Mark A

    2016-06-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then a third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L−1 assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation. PMID:27023926

  2. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGESBeta

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.

  3. Multiple multiresolution representation of functions and calculus for fast computation

    SciTech Connect

    Fann, George I; Harrison, Robert J; Hill, Judith C; Jia, Jun; Galindo, Diego A

    2010-01-01

    We describe the mathematical representations, data structures and the implementation of the numerical calculus of functions in MADNESS, the multiresolution analysis environment for scientific simulations. In MADNESS, each smooth function is represented by an adaptive pseudo-spectral expansion in the multiwavelet basis to arbitrary but finite precision. This is an extension of the capabilities of most existing net-, mesh- and spectral-based methods, where the discretization is based on a single adaptive mesh or expansion.

  4. Multi-scale, multi-resolution brain cancer modeling.

    PubMed

    Zhang, Le; Chen, L Leon; Deisboeck, Thomas S

    2009-03-01

    In advancing discrete-based computational cancer models towards clinical applications, one faces the dilemma of how to deal with an ever-growing amount of biomedical data that ought to be incorporated eventually in one form or another. Model scalability becomes of paramount interest. In an effort to start addressing this critical issue, we present here a novel multi-scale and multi-resolution agent-based in silico glioma model. While 'multi-scale' refers to employing an epidermal growth factor receptor (EGFR)-driven molecular network to process cellular phenotypic decisions within the micro-macroscopic environment, 'multi-resolution' is achieved through algorithms that classify cells into either active or inactive spatial clusters, which determine the resolution at which they are simulated. The aim is to assign computational resources where and when they matter most for maintaining or improving the predictive power of the algorithm: onto specific tumor areas and at particular times. Using a previously described 2D brain tumor model, we have developed four different computational methods for achieving the multi-resolution scheme, three of which are designed to dynamically train on the high-resolution simulation that serves as control. To quantify the algorithms' performance, we rank them by weighing the distinct computational time savings of the simulation runs against the methods' ability to accurately reproduce the high-resolution results of the control. Finally, to demonstrate the flexibility of the underlying concept, we show the added value of combining the two highest-ranked methods. The main finding of this work is that by pursuing a multi-resolution approach, one can reduce the computation time of a discrete-based model substantially while still maintaining comparably high predictive power. This hints at even greater computational savings in the more realistic 3D setting over time, and thus appears to outline a possible path to achieve scalability.

  5. Survey and analysis of multiresolution methods for turbulence data

    SciTech Connect

    Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; Ahrens, James; Hamann, Bernd

    2015-11-10

    This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, in capturing the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy-driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both the velocity and active scalar (density) fields and their derivatives, spectra, and the properties of constant-density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for the handling of large datasets, as well as simulation algorithms based on multi-resolution methods. The final section provides recommendations for the best decomposition algorithms based on several metrics related to computational efficiency and the preservation of turbulence properties using a reduced set of coefficients.

  6. Survey and analysis of multiresolution methods for turbulence data

    DOE PAGESBeta

    Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; Ahrens, James; Hamann, Bernd

    2015-11-10

    This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, in capturing the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy-driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both the velocity and active scalar (density) fields and their derivatives, spectra, and the properties of constant-density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for the handling of large datasets, as well as simulation algorithms based on multi-resolution methods. The final section provides recommendations for the best decomposition algorithms based on several metrics related to computational efficiency and the preservation of turbulence properties using a reduced set of coefficients.

  7. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
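
    A toy calculation shows why pdf-based downsampling stays consistent with the transfer function: applying the transfer function to an averaged voxel value is not the same as averaging the transfer function over the voxel's value distribution. The block, transfer function and histogram below are illustrative assumptions, using NumPy.

    ```python
    import numpy as np

    block = np.repeat([20, 220], 32)       # fine voxels across a sharp edge
    tf = np.zeros(256)
    tf[100:140] = 1.0                      # TF highlights mid intensities only

    naive = tf[int(block.mean())]          # TF applied to the down-sampled value
    hist = np.bincount(block, minlength=256) / block.size
    consistent = np.dot(hist, tf)          # TF averaged against the voxel pdf

    print(f"TF(mean) = {naive:.2f}  vs  E[TF] = {consistent:.2f}")
    # TF(mean) = 1.00 falsely highlights the coarse voxel; the pdf-based
    # value is 0.00, matching what the full-resolution rendering shows.
    ```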

  8. Multiresolution persistent homology for excessively large biomolecular datasets

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
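
    The rigidity density driving the filtration has a simple generic form: a sum of decaying kernels centered at the atoms, with the kernel scale acting as the resolution knob. The sketch below uses a Gaussian kernel on a toy point cloud; the parameters are illustrative, not the paper's.

    ```python
    import numpy as np

    def rigidity_density(grid_pts, atoms, eta):
        """rho(x) = sum_j exp(-||x - x_j||^2 / eta^2) on the given grid."""
        d2 = ((grid_pts[:, None, :] - atoms[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / eta**2).sum(axis=1)

    rng = np.random.default_rng(0)
    atoms = rng.uniform(0, 10, size=(50, 3))              # toy "molecule"
    ax = np.linspace(0, 10, 16)
    grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), -1).reshape(-1, 3)

    # Small eta resolves individual atoms; large eta merges them into
    # large-scale features -- this is the "topological lens" being tuned.
    for eta in (0.5, 2.0):
        rho = rigidity_density(grid, atoms, eta)
        print(f"eta={eta}: density range [{rho.min():.3f}, {rho.max():.3f}]")
    ```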

  9. Multiresolution persistent homology for excessively large biomolecular datasets.

    PubMed

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs. PMID:26450288

  10. Multiresolution persistent homology for excessively large biomolecular datasets

    SciTech Connect

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-07

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.

  11. Limiter

    DOEpatents

    Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.

    1984-10-19

    A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles.

  12. Limiter

    DOEpatents

    Cohen, Samuel A.; Hosea, Joel C.; Timberlake, John R.

    1986-01-01

    A limiter with a specially contoured front face accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution.

  13. Multiresolution Techniques for Interactive Texture-Based Rendering of Arbitrarily Oriented Cutting Planes

    SciTech Connect

    LaMar, E; Duchaineau, M A; Hamann, B; Joy, K I

    2001-10-03

    We present a multiresolution technique for interactive texture-based rendering of arbitrarily oriented cutting planes for very large data sets. The method uses an adaptive scheme that renders the data along a cutting plane at different resolutions: higher resolution near the point of interest and lower resolution away from it. The algorithm is based on the segmentation of texture space into an octree, where the leaves of the tree define the original data and the internal nodes define lower-resolution versions. Rendering is done adaptively by selecting high-resolution cells close to a center of attention and low-resolution cells away from it. We limit the artifacts introduced by this method by blending between different levels of resolution to produce a smooth image. This technique can be used to produce viewpoint-dependent renderings.
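
    The selection logic can be sketched independently of the rendering. The recursive walk below (our simplification; the thresholds and cube geometry are assumptions) keeps cells coarse when they are far from the point of interest and refines them near it.

    ```python
    import numpy as np

    def select_cells(center, size, poi, max_level, level=0, out=None):
        """Refine a cubic cell near poi; emit it coarse otherwise."""
        if out is None:
            out = []
        dist = np.linalg.norm(np.asarray(center) - np.asarray(poi))
        if level == max_level or dist > 2.0 * size:   # far away: keep coarse
            out.append((tuple(center), size, level))
            return out
        h = size / 2.0
        for dx in (-h / 2, h / 2):                    # recurse into 8 children
            for dy in (-h / 2, h / 2):
                for dz in (-h / 2, h / 2):
                    select_cells((center[0] + dx, center[1] + dy, center[2] + dz),
                                 h, poi, max_level, level + 1, out)
        return out

    cells = select_cells((0.5, 0.5, 0.5), 1.0, poi=(0.2, 0.2, 0.5), max_level=4)
    print(len(cells), "cells; levels used:", sorted({c[2] for c in cells}))
    ```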

  14. Multi-parametric cytometry from a complex cellular sample: Improvements and limits of manual versus computational-based interactive analyses.

    PubMed

    Gondois-Rey, F; Granjeaud, S; Rouillier, P; Rioualen, C; Bidaut, G; Olive, D

    2016-05-01

    The wide possibilities opened by the development of multi-parametric cytometry are limited by the inadequacy of classical analysis methods to the multi-dimensional characteristics of the data. While new computational tools seem ideally adapted and have been applied successfully, their adoption is still low among flow cytometrists. With the aim of integrating unsupervised computational tools for the management of multi-stained samples, we investigated their advantages and limits by comparison with manual gating on a typical sample analyzed in routine immunomonitoring. A single tube of PBMC, containing 11 populations of different sizes and stained with 9 fluorescent markers, was used. We investigated the impact of the strategy choice on manual-gating variability, an undocumented pitfall of the analysis process, and we identified rules to optimize it. While assessing automatic gating as an alternative, we introduced the Multi-Experiment Viewer software (MeV) and validated it for merging clusters and interactively annotating populations. This procedure allowed the identification of both targeted and unexpected populations. However, careful examination of the computed clusters in standard dot plots revealed some heterogeneity, often below 10%, that was overcome by increasing the number of clusters to be computed. MeV facilitated the identification of populations by displaying both the MFI and the marker signature of the dataset simultaneously. The procedure described here appears fully adapted to manage a high number of multi-stained samples homogeneously and allows improving multi-parametric analyses in a way close to the classic approach. © 2016 International Society for Advancement of Cytometry. PMID:27059253

  15. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks, with a multi-thread task queue scheduler on each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver

  16. Geometric multi-resolution analysis and data-driven convolutions

    NASA Astrophysics Data System (ADS)

    Strawn, Nate

    2015-09-01

    We introduce a procedure for learning discrete convolutional operators for generic datasets which recovers the standard block convolutional operators when applied to sets of natural images. The key observation is that the standard block convolutional operators on images are intuitive because humans naturally understand the grid structure of the self-evident functions over image spaces (pixels). This procedure first constructs a Geometric Multi-Resolution Analysis (GMRA) on the set of variables giving rise to a dataset, and then leverages the details of this data structure to identify subsets of variables upon which convolutional operators are supported, as well as a space of functions that can be shared coherently amongst these supports.

  17. Geometric multi-resolution analysis for dictionary learning

    NASA Astrophysics Data System (ADS)

    Maggioni, Mauro; Minsker, Stanislav; Strawn, Nate

    2015-09-01

    We present an efficient algorithm and theory for Geometric Multi-Resolution Analysis (GMRA), a procedure for dictionary learning. Sparse dictionary learning provides the necessary complexity reduction for the critical applications of compression, regression, and classification in high-dimensional data analysis. As such, it is a critical technique in data science and it is important to have techniques that admit both efficient implementation and strong theory for large classes of theoretical models. By construction, GMRA is computationally efficient and in this paper we describe how the GMRA correctly approximates a large class of plausible models (namely, the noisy manifolds).

  18. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.

  19. MR-CDF: Managing multi-resolution scientific data

    NASA Technical Reports Server (NTRS)

    Salem, Kenneth

    1993-01-01

    MR-CDF is a system for managing multi-resolution scientific data sets. It is an extension of the popular CDF (Common Data Format) system. MR-CDF provides a simple functional interface to client programs for storage and retrieval of data. Data is stored so that low resolution versions of the data can be provided quickly. Higher resolutions are also available, but not as quickly. By managing data with MR-CDF, an application can be relieved of the low-level details of data management, and can easily trade data resolution for improved access time.
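
    The interface idea can be mocked up in a few lines. The class below is entirely hypothetical (invented names, not the actual MR-CDF API): it stores a field as a resolution pyramid so that a coarse version can be served immediately while finer ones cost more to read.

    ```python
    import numpy as np

    class MRDataset:
        """Toy multi-resolution store: coarse levels are small and fast."""
        def __init__(self, data, levels=4):
            self.pyramid = [np.asarray(data, dtype=float)]
            for _ in range(levels - 1):               # 2x2 block averaging
                f = self.pyramid[-1]
                self.pyramid.append(0.25 * (f[0::2, 0::2] + f[1::2, 0::2]
                                            + f[0::2, 1::2] + f[1::2, 1::2]))

        def read(self, resolution=0):
            """resolution 0 = coarsest (fastest); higher = finer (slower)."""
            return self.pyramid[len(self.pyramid) - 1 - resolution]

    field = np.random.default_rng(0).random((256, 256))
    ds = MRDataset(field)
    print([lvl.shape for lvl in ds.pyramid])   # (256,256) down to (32,32)
    quicklook = ds.read(0)                     # low-res version, served fast
    ```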

  20. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multi-resolution presentation of the heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely related to their understood physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been considered as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across

  1. A Global, Multi-Resolution Approach to Regional Ocean Modeling

    SciTech Connect

    Du, Qiang

    2013-11-08

    In this collaborative research project between Pennsylvania State University, Colorado State University and Florida State University, we mainly focused on developing multi-resolution algorithms suitable for regional ocean modeling. We developed a hybrid implicit and explicit adaptive multirate time integration method to solve systems of time-dependent equations that present two significantly different scales. We studied the effects of spatial simplicial meshes on the stability and the conditioning of fully discrete approximations. We also studied an adaptive finite element method (AFEM) based upon the Centroidal Voronoi Tessellation (CVT) and superconvergent gradient recovery. Some of these techniques are now being used by geoscientists (such as those at LANL).
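
    The project's hybrid implicit-explicit scheme is specialized; the sketch below shows only the core multirate idea under simple assumptions (explicit Euler everywhere, an invented linear two-scale system): the fast variable is sub-stepped inside each slow step so the slow equation never runs at the fast time scale.

    ```python
    import numpy as np

    def multirate_step(y_slow, y_fast, dt, substeps=20):
        """One multirate step: a single explicit step for the slow variable,
        many small explicit steps for the fast one (toy dynamics)."""
        f_slow = lambda ys, yf: -0.5 * ys + 0.1 * yf    # slow time scale
        f_fast = lambda ys, yf: -50.0 * yf + ys          # fast time scale
        y_slow_new = y_slow + dt * f_slow(y_slow, y_fast)
        h = dt / substeps
        for _ in range(substeps):                        # sub-cycling the fast part
            y_fast = y_fast + h * f_fast(y_slow, y_fast)
        return y_slow_new, y_fast

    ys, yf = 1.0, 1.0
    for _ in range(100):
        ys, yf = multirate_step(ys, yf, dt=0.01)
    print(ys, yf)
    ```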

  2. A multiresolution analysis for detection of abnormal lung sounds

    PubMed Central

    Emmanouilidou, Dimitra; Patil, Kailash; West, James; Elhilali, Mounya

    2014-01-01

    Automated analysis and detection of abnormal lung sound patterns has great potential for improving access to standardized diagnosis of pulmonary diseases, especially in low-resource settings. In the current study, we develop signal processing tools for analysis of paediatric auscultations recorded under non-ideal noisy conditions. The proposed model is based on a biomimetic multi-resolution analysis of the spectro-temporal modulation details in lung sounds. The methodology provides a detailed description of joint spectral and temporal variations in the signal and proves to be more robust than frequency-based techniques in distinguishing crackles and wheezes from normal breathing sounds. PMID:23366591

  3. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
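
    The patent claims a parallel, partitioned pipeline; the following is a hedged serial sketch of just the wavelet-denoising core, using the PyWavelets package (assumed available) and a standard universal soft threshold rather than whatever threshold rule the invention specifies.

    ```python
    import numpy as np
    import pywt  # PyWavelets; the parallel partitioning of the patent is omitted

    def wavelet_denoise(image, wavelet="db2", level=3):
        """Transform, soft-threshold the detail coefficients, transform back."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        # universal threshold estimated from the finest diagonal details
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(image.size))
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)

    noisy = np.random.rand(128, 128) + 0.25 * np.random.randn(128, 128)
    print(wavelet_denoise(noisy).shape)
    ```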

  4. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  5. Multiresolution fusion of remotely sensed images with the Hermite transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Lopez-Caloca, Alejandra A.; Zambrano-Gallardo, Cira F.

    2004-02-01

    The Hermite Transform is an image representation model that incorporates some important properties of visual perception such as the analysis through overlapping receptive fields and the Gaussian derivative model of early vision. It also allows the construction of pyramidal multiresolution analysis-synthesis schemes. We show how the Hermite Transform can be used to build image fusion schemes that take advantage of the fact that Gaussian derivatives are good operators for the detection of relevant image patterns at different spatial scales. These patterns are later combined in the transform coefficient domain. Applications of this fusion algorithm are shown with remote sensing images, namely LANDSAT, IKONOS, RADARSAT and SAR AeS-1 images.

  6. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    NASA Astrophysics Data System (ADS)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.

  7. Physiological, biomass elemental composition and proteomic analyses of Escherichia coli ammonium-limited chemostat growth, and comparison with iron- and glucose-limited chemostat growth

    PubMed Central

    Folsom, James Patrick

    2015-01-01

    Escherichia coli physiological, biomass elemental composition and proteome acclimations to ammonium-limited chemostat growth were measured at four levels of nutrient scarcity controlled via chemostat dilution rate. These data were compared with published iron- and glucose-limited growth data collected from the same strain and at the same dilution rates to quantify general and nutrient-specific responses. Severe nutrient scarcity resulted in an overflow metabolism with differing organic byproduct profiles based on limiting nutrient and dilution rate. Ammonium-limited cultures secreted up to 35% of the metabolized glucose carbon as organic byproducts with acetate representing the largest fraction; in comparison, iron-limited cultures secreted up to 70% of the metabolized glucose carbon as lactate, and glucose-limited cultures secreted up to 4% of the metabolized glucose carbon as formate. Biomass elemental composition differed with nutrient limitation; biomass from ammonium-limited cultures had a lower nitrogen content than biomass from either iron- or glucose-limited cultures. Proteomic analysis of central metabolism enzymes revealed that ammonium- and iron-limited cultures had a lower abundance of key tricarboxylic acid (TCA) cycle enzymes and higher abundance of key glycolysis enzymes compared with glucose-limited cultures. The overall results are largely consistent with cellular economics concepts, including metabolic tradeoff theory where the limiting nutrient is invested into essential pathways such as glycolysis instead of higher ATP-yielding, but non-essential, pathways such as the TCA cycle. The data provide a detailed insight into ecologically competitive metabolic strategies selected by evolution, templates for controlling metabolism for bioprocesses and a comprehensive dataset for validating in silico representations of metabolism. PMID:26018546

  8. Physiological, biomass elemental composition and proteomic analyses of Escherichia coli ammonium-limited chemostat growth, and comparison with iron- and glucose-limited chemostat growth.

    PubMed

    Folsom, James Patrick; Carlson, Ross P

    2015-08-01

    Escherichia coli physiological, biomass elemental composition and proteome acclimations to ammonium-limited chemostat growth were measured at four levels of nutrient scarcity controlled via chemostat dilution rate. These data were compared with published iron- and glucose-limited growth data collected from the same strain and at the same dilution rates to quantify general and nutrient-specific responses. Severe nutrient scarcity resulted in an overflow metabolism with differing organic byproduct profiles based on limiting nutrient and dilution rate. Ammonium-limited cultures secreted up to 35% of the metabolized glucose carbon as organic byproducts with acetate representing the largest fraction; in comparison, iron-limited cultures secreted up to 70% of the metabolized glucose carbon as lactate, and glucose-limited cultures secreted up to 4% of the metabolized glucose carbon as formate. Biomass elemental composition differed with nutrient limitation; biomass from ammonium-limited cultures had a lower nitrogen content than biomass from either iron- or glucose-limited cultures. Proteomic analysis of central metabolism enzymes revealed that ammonium- and iron-limited cultures had a lower abundance of key tricarboxylic acid (TCA) cycle enzymes and higher abundance of key glycolysis enzymes compared with glucose-limited cultures. The overall results are largely consistent with cellular economics concepts, including metabolic tradeoff theory where the limiting nutrient is invested into essential pathways such as glycolysis instead of higher ATP-yielding, but non-essential, pathways such as the TCA cycle. The data provide a detailed insight into ecologically competitive metabolic strategies selected by evolution, templates for controlling metabolism for bioprocesses and a comprehensive dataset for validating in silico representations of metabolism. PMID:26018546

  9. Attosecond electron dynamics: A multiresolution approach

    NASA Astrophysics Data System (ADS)

    Vence, Nicholas; Harrison, Robert; Krstić, Predrag

    2012-03-01

    We establish a numerical solution to the time-dependent Schrödinger equation employing an adaptive, discontinuous spectral element basis that automatically adjusts to the requested precision. The explicit time evolution is accomplished by a band-limited, gradient-corrected, symplectic propagator and uses separated representations of operators for efficient computation in multiple dimensions. We illustrate the method by calculating accurate bound and continuum transition probabilities along with the photoelectron spectra for H(1s), He^+(1s), and Li^2+(2s) in three dimensions and H2^+ in three and four dimensions under a two-cycle attosecond laser pulse with a driving frequency of 36 eV and an intensity of 1×10^15 W/cm^2.

  10. Global multi-resolution terrain elevation data 2010 (GMTED2010)

    USGS Publications Warehouse

    Danielson, Jeffrey J.; Gesch, Dean B.

    2011-01-01

    In 1996, the U.S. Geological Survey (USGS) developed a global topographic elevation model designated as GTOPO30 at a horizontal resolution of 30 arc-seconds for the entire Earth. Because no single source of topographic information covered the entire land surface, GTOPO30 was derived from eight raster and vector sources that included a substantial amount of U.S. Defense Mapping Agency data. The quality of the elevation data in GTOPO30 varies widely; there are no spatially-referenced metadata, and the major topographic features such as ridgelines and valleys are not well represented. Despite its coarse resolution and limited attributes, GTOPO30 has been widely used for a variety of hydrological, climatological, and geomorphological applications as well as military applications, where a regional, continental, or global scale topographic model is required. These applications have ranged from delineating drainage networks and watersheds to using digital elevation data for the extraction of topographic structure and three-dimensional (3D) visualization exercises (Jenson and Domingue, 1988; Verdin and Greenlee, 1996; Lehner and others, 2008). Many of the fundamental geophysical processes active at the Earth's surface are controlled or strongly influenced by topography, thus the critical need for high-quality terrain data (Gesch, 1994). U.S. Department of Defense requirements for mission planning, geographic registration of remotely sensed imagery, terrain visualization, and map production are similarly dependent on global topographic data. Since the time GTOPO30 was completed, the availability of higher-quality elevation data over large geographic areas has improved markedly. New data sources include global Digital Terrain Elevation Data (DTED®) from the Shuttle Radar Topography Mission (SRTM), Canadian elevation data, and data from the Ice, Cloud, and land Elevation Satellite (ICESat). Given the widespread use of GTOPO30 and the equivalent 30-arc

  11. Fresnelets: new multiresolution wavelet bases for digital holography.

    PubMed

    Liebling, Michael; Blu, Thierry; Unser, Michael

    2003-01-01

    We propose a construction of new wavelet-like bases that are well suited for the reconstruction and processing of optically generated Fresnel holograms recorded on CCD-arrays. The starting point is a wavelet basis of L2 to which we apply a unitary Fresnel transform. The transformed basis functions are shift-invariant on a level-by-level basis but their multiresolution properties are governed by the special form that the dilation operator takes in the Fresnel domain. We derive a Heisenberg-like uncertainty relation that relates the localization of Fresnelets with that of their associated wavelet basis. According to this criterion, the optimal functions for digital hologram processing turn out to be Gabor functions, bringing together two separate aspects of the holography inventor's work. We give the explicit expression of orthogonal and semi-orthogonal Fresnelet bases corresponding to polynomial spline wavelets. This special choice of Fresnelets is motivated by their near-optimal localization properties and their approximation characteristics. We then present an efficient multiresolution Fresnel transform algorithm, the Fresnelet transform. This algorithm allows for the reconstruction (backpropagation) of complex scalar waves at several user-defined, wavelength-independent resolutions. Furthermore, when reconstructing numerical holograms, the subband decomposition of the Fresnelet transform naturally separates the image to reconstruct from the unwanted zero-order and twin image terms. This greatly facilitates their suppression. We show results of experiments carried out on both synthetic (simulated) data sets as well as on digitally acquired holograms. PMID:18237877
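
    Constructing Fresnelets themselves takes more machinery, but the unitary Fresnel transform at their heart is easy to demonstrate; the sketch below numerically Fresnel-propagates a 1D field with the standard transfer-function (FFT) method, with all physical parameters chosen purely for illustration.

    ```python
    import numpy as np

    def fresnel_propagate(field, dx, wavelength, z):
        """Toy 1D Fresnel propagation via the transfer-function method;
        Fresnelets are this unitary transform applied to wavelet bases."""
        n = field.size
        f = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
        H = np.exp(-1j * np.pi * wavelength * z * f**2)  # Fresnel transfer function
        return np.fft.ifft(np.fft.fft(field) * H)

    # illustrative numbers: 633 nm light, 10 um pixels, 5 cm propagation
    x = (np.arange(1024) - 512) * 10e-6
    aperture = (np.abs(x) < 0.5e-3).astype(complex)      # 1 mm slit
    u = fresnel_propagate(aperture, dx=10e-6, wavelength=633e-9, z=0.05)
    print(np.abs(u).max())
    ```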

  12. Using fuzzy logic to enhance stereo matching in multiresolution images.

    PubMed

    Medeiros, Marcos D; Gonçalves, Luiz Marcos G; Frery, Alejandro C

    2010-01-01

    Stereo matching is an open problem in computer vision, for which local features are extracted to identify corresponding points in pairs of images. The results are heavily dependent on the initial steps. We apply image decomposition in multiresolution levels, for reducing the search space, computational time, and errors. We propose a solution to the problem of how deep (coarse) should the stereo measures start, trading between error minimization and time consumption, by starting stereo calculation at varying resolution levels, for each pixel, according to fuzzy decisions. Our heuristic enhances the overall execution time since it only employs deeper resolution levels when strictly necessary. It also reduces errors because it measures similarity between windows with enough details. We also compare our algorithm with a very fast multi-resolution approach, and one based on fuzzy logic. Our algorithm performs faster and/or better than all those approaches, becoming, thus, a good candidate for robotic vision applications. We also discuss the system architecture that efficiently implements our solution. PMID:22205859

  13. Using Fuzzy Logic to Enhance Stereo Matching in Multiresolution Images

    PubMed Central

    Medeiros, Marcos D.; Gonçalves, Luiz Marcos G.; Frery, Alejandro C.

    2010-01-01

    Stereo matching is an open problem in Computer Vision, for which local features are extracted to identify corresponding points in pairs of images. The results are heavily dependent on the initial steps. We apply image decomposition in multiresolution levels, for reducing the search space, computational time, and errors. We propose a solution to the problem of how deep (coarse) should the stereo measures start, trading between error minimization and time consumption, by starting stereo calculation at varying resolution levels, for each pixel, according to fuzzy decisions. Our heuristic enhances the overall execution time since it only employs deeper resolution levels when strictly necessary. It also reduces errors because it measures similarity between windows with enough details. We also compare our algorithm with a very fast multi-resolution approach, and one based on fuzzy logic. Our algorithm performs faster and/or better than all those approaches, becoming, thus, a good candidate for robotic vision applications. We also discuss the system architecture that efficiently implements our solution. PMID:22205859

  14. Multiresolution in CROCO (Coastal and Regional Ocean Community model)

    NASA Astrophysics Data System (ADS)

    Debreu, Laurent; Auclair, Francis; Benshila, Rachid; Capet, Xavier; Dumas, Franck; Julien, Swen; Marchesiello, Patrick

    2016-04-01

    CROCO (Coastal and Regional Ocean Community model [1]) is a new oceanic modeling system built upon ROMS_AGRIF and the non-hydrostatic kernel of SNH, gradually including algorithms from MARS3D (sediments) and HYCOM (vertical coordinates). An important objective of CROCO is to provide the possibility of running truly multiresolution simulations. Our previous work on structured mesh refinement [2] allowed us to run two-way nesting with the following major features: conservation, spatial and temporal refinement, coupling at the barotropic level. In this presentation, we will expose the current developments in CROCO towards multiresolution simulations: connection between neighboring grids at the same level of resolution and load balancing on parallel computers. Results of preliminary experiments will be given both on an idealized test case and on a realistic simulation of the Bay of Biscay with high resolution along the coast. References: [1]: CROCO: http://www.croco-ocean.org [2]: Debreu, L., P. Marchesiello, P. Penven, and G. Cambon, 2012: Two-way nesting in split-explicit ocean models: algorithms, implementation and validation. Ocean Modelling, 49-50, 1-21.

  15. Automated transformation-invariant shape recognition through wavelet multiresolution

    NASA Astrophysics Data System (ADS)

    Brault, Patrice; Mounier, Hugues

    2001-12-01

    We present here new results in Wavelet Multi-Resolution Analysis (W-MRA) applied to shape recognition in automatic vehicle driving applications. Different types of shapes have to be recognized in this framework. They pertain to most of the objects entering the sensor field of a car. These objects can be road signs, lane separation lines, moving or static obstacles, other automotive vehicles, or visual beacons. The recognition process must be invariant to global transformations, affine or not, which are: rotation, translation and scaling. It also has to be invariant to more local, elastic deformations like perspective (in particular with wide-angle camera lenses), and to deformations due to environmental conditions (weather: rain, mist, light reverberation) or optical and electrical signal noise. To demonstrate our method, an initial shape, with a known contour, is compared to the same contour altered by rotation, translation, scaling and perspective. The curvature computed for each contour point is used as the main criterion in the shape matching process. The original part of this work is to use wavelet descriptors, generated with a fast orthonormal W-MRA, rather than Fourier descriptors, in order to provide a multi-resolution description of the contour to be analyzed. In this way, the intrinsic spatial localization property of wavelet descriptors can be exploited and the recognition process can be sped up. The most important part of this work is to demonstrate the potential performance of wavelet MRA in this application of shape recognition.

  16. Review of the EPA's radionuclide release analyses from LLW disposal trenches used in support of proposed dose limits in 40 CFR 193

    SciTech Connect

    Pescatore, C.; Sullivan, T.M.

    1991-11-01

    The April 1989 draft EPA standard for low-level waste (LLW) disposal, 40 CFR 193, would require disposal site performance to satisfy very stringent dose-limit criteria. The EPA suggests that these limits can be achieved by relying extensively on waste solidification before disposal. The EPA justifies the achievability of the proposed criteria based on performance assessment analyses in the general context of trench burial of the LLW. The core models implemented in those analyses are codified in the EPA's PRESTO family of codes. Because a key set of models for predicting potential releases are the leach-and-transport models from a disposal trench, these have been reviewed for completeness and applicability to trench disposal methods. The overall conclusion of this review is that the generic analyses performed by the EPA are not sufficiently comprehensive to support the proposed version of 40 CFR 193. More rigorous analyses may find the draft standard criteria to be unattainable.

  18. A numerical evaluation of TIROS-N and NOAA-6 analyses in a high resolution limited area model

    NASA Technical Reports Server (NTRS)

    Derber, J. C.; Koehler, T. L.; Horn, L. H.

    1981-01-01

    Vertical temperature profiles derived from TIROS-N and NOAA-6 radiance measurements were used to create separate analyses for the period 0000 GMT 6 January to 0000 GMT 7 January 1980. The 0000 GMT 6 January satellite analyses and a conventional analysis were used to initialize and run the University of Wisconsin's version of the Australian Region Primitive Equations model. Forecasts based on conventional analyses were used to evaluate the forecasts based only on satellite upper air data. The forecasts based only on TIROS-N or NOAA-6 data did reasonably well in locating the main trough and ridge positions. The satellite initial analyses and forecasts revealed errors correlated with the synoptic situation. The trough in both the TIROS-N and NOAA-6 forecasts, which was initially too warm, remained too warm as it propagated eastward during the forecast period. Thus, it is unlikely that the operational satellite data will improve forecasts in a data-dense region. However, in regions of poor data coverage, the satellite data should have a beneficial effect on numerical forecasts.

  19. Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme

    NASA Astrophysics Data System (ADS)

    Hickmann, K. S.; Godinez, H. C.

    2015-12-01

    When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.
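
    A minimal sketch of the balance the abstract describes, under assumed simplifications (diagonal observation-error covariance, one global factor rather than a per-scale vector): choose the inflation so that the squared innovation matches its expected value tr(H P H^T) + tr(R). All names and toy sizes below are invented for the example.

    ```python
    import numpy as np

    def inflation_factor(ens_obs, y_obs, obs_var):
        """Estimate a covariance inflation factor lambda from the balance
        E[d^T d] = lambda * tr(H P H^T) + tr(R), with d the innovation.
        ens_obs: (n_ens, n_obs) ensemble mapped into observation space."""
        d = y_obs - ens_obs.mean(axis=0)              # innovation vector
        hpht = ens_obs.var(axis=0, ddof=1).sum()      # tr(H P H^T) from the ensemble
        tr_r = obs_var * y_obs.size                   # tr(R), diagonal R assumed
        lam = (d @ d - tr_r) / hpht
        return max(lam, 1.0)                          # never deflate below 1

    rng = np.random.default_rng(1)
    truth = np.zeros(40)
    ens = rng.normal(0, 0.5, size=(20, 40))           # under-dispersive ensemble
    y = truth + rng.normal(0, 1.0, size=40)           # noisy observations
    print(inflation_factor(ens, y, obs_var=1.0))      # > 1: inflate the spread
    ```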

  20. Multiresolution strategies for the numerical solution of optimal control problems

    NASA Astrophysics Data System (ADS)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems but less work has been done in the field of making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high resolution (dense) uniform grid. This requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using a few computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points in the grid compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage has been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a
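
    As a hedged sketch of the data-compression idea (not the thesis' exact scheme): on a dyadic grid, a point is significant when its value differs from the interpolation of its coarser neighbors by more than a threshold, so retained points pile up around discontinuities such as control switchings.

    ```python
    import numpy as np

    def significant_points(t, u, tol):
        """Keep a dyadic grid point only if its interpolation detail (difference
        from the midpoint average of its coarser neighbors) exceeds tol."""
        n = len(t)                      # assume n = 2**J + 1
        keep = np.zeros(n, dtype=bool)
        keep[0] = keep[-1] = True
        step = (n - 1) // 2
        while step >= 1:
            for i in range(step, n - 1, 2 * step):
                detail = abs(u[i] - 0.5 * (u[i - step] + u[i + step]))
                if detail > tol:
                    keep[i - step:i + step + 1:step] = True
            step //= 2
        return keep

    t = np.linspace(0, 1, 257)
    u = np.where(t < 0.37, 1.0, -1.0)       # bang-bang style control switch
    mask = significant_points(t, u, tol=1e-6)
    print(mask.sum(), "of", len(t), "points kept")  # points cluster near t = 0.37
    ```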

  1. Towards Online Multiresolution Community Detection in Large-Scale Networks

    PubMed Central

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

    The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods tend to be accurate depending on a priori assumptions of network properties and predefined parameters. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
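
    The paper's quality function is its own contribution; the sketch below uses a generic stand-in (internal edges over internal-plus-boundary edges) to show the greedy local-expansion pattern itself: repeatedly add the frontier vertex that most improves the score, and stop when no neighbor helps.

    ```python
    def local_community(adj, seed, max_size=20):
        """Greedy local expansion from a seed vertex (generic quality function)."""
        community = {seed}
        def quality(comm):
            internal = sum(1 for v in comm for w in adj[v] if w in comm) // 2
            boundary = sum(1 for v in comm for w in adj[v] if w not in comm)
            return internal / (internal + boundary) if internal + boundary else 0.0
        while len(community) < max_size:
            frontier = {w for v in community for w in adj[v]} - community
            best, best_q = None, quality(community)
            for w in frontier:
                q = quality(community | {w})
                if q > best_q:
                    best, best_q = w, q
            if best is None:          # no neighbor improves the quality: stop
                break
            community.add(best)
        return community

    # two triangles joined by one edge; expansion from vertex 0 stays in its triangle
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
    print(local_community(adj, seed=0))   # {0, 1, 2}
    ```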

  2. Automatic image segmentation by dynamic region growth and multiresolution merging.

    PubMed

    Ugarriza, Luis Garcia; Saber, Eli; Vantaram, Sreenath Rao; Amuso, Vincent; Shaw, Mark; Bhaskar, Ranjit

    2009-10-01

    Image segmentation is a fundamental task in many computer vision applications. In this paper, we propose a new unsupervised color image segmentation algorithm, which exploits the information obtained from detecting edges in color images in the CIE L*a*b* color space. To this effect, by using a color gradient detection technique, pixels without edges are clustered and labeled individually to identify some initial portion of the input image content. Elements that contain higher gradient densities are included by the dynamic generation of clusters as the algorithm progresses. Texture modeling is performed by color quantization and local entropy computation of the quantized image. The obtained texture and color information along with a region growth map consisting of all fully grown regions are used to perform a unique multiresolution merging procedure to blend regions with similar characteristics. Experimental results obtained in comparison to published segmentation techniques demonstrate the performance advantages of the proposed method. PMID:19535323

  3. Towards online multiresolution community detection in large-scale networks.

    PubMed

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

    The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods tend to be accurate depending on a priori assumptions of network properties and predefined parameters. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325

  4. Out-of-Core Construction and Visualization of Multiresolution Surfaces

    SciTech Connect

    Lindstrom, P

    2003-02-03

    We present a method for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time system pages in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a rectilinear octree grid to coarsen and create a hierarchy for the mesh, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The run-time component of our system uses asynchronous rendering and view-dependent refinement driven by screen-space error and visibility. The system exploits frame-to-frame coherence and has been designed to allow preemptive refinement at the granularity of individual vertices to support refinement on a time budget. Our results indicate a significant improvement in processing speed over previous methods for out-of-core multiresolution surface construction. Meanwhile, all phases of the method are disk and memory efficient, and are fairly straightforward to implement.
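
    A hedged sketch of the clustering step only: vertices are hashed to cells of a uniform grid and each cell collapses to one representative. The paper positions representatives by minimizing a quadric error metric; for brevity this sketch just averages each cluster, a common simplification.

    ```python
    import numpy as np

    def cluster_simplify(vertices, cell_size):
        """Grid-based vertex clustering: vertices in the same cell collapse to
        one representative (cluster mean here; quadrics in the paper)."""
        keys = np.floor(vertices / cell_size).astype(np.int64)   # cell per vertex
        _, inverse = np.unique(keys, axis=0, return_inverse=True)
        n_cells = inverse.max() + 1
        reps = np.zeros((n_cells, vertices.shape[1]))
        counts = np.bincount(inverse).astype(float)
        for d in range(vertices.shape[1]):                        # mean per cell
            reps[:, d] = np.bincount(inverse, weights=vertices[:, d]) / counts
        return reps, inverse   # inverse remaps old vertex ids to coarse ids

    verts = np.random.rand(10000, 3)
    coarse, remap = cluster_simplify(verts, cell_size=0.1)
    print(len(verts), "->", len(coarse), "vertices")
    ```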

  5. Out-of-Core Construction and Visualization of Multiresolution Surfaces

    SciTech Connect

    Lindstrom, P

    2002-11-04

    We present a method for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time system pages in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a uniform octree grid to coarsen a mesh and create a hierarchy, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The run-time component of our system uses asynchronous rendering and view-dependent refinement driven by screen-space error and visibility. The system exploits frame-to-frame coherence and has been designed to allow preemptive refinement at the granularity of individual vertices to support refinement on a time budget. Our results indicate a significant improvement in processing speed over previous methods for out-of-core multiresolution surface construction. Meanwhile, all phases of the method are both disk and memory efficient, and are fairly straightforward to implement.

  6. Multiresolution Analysis and Prediction of Solar Magnetic Flux

    NASA Astrophysics Data System (ADS)

    Wik, Magnus

    Synoptic maps of the solar magnetic field provide an important visualization of the global transport and evolution of the large-scale magnetic flux. The solar dynamo picture is dependent on both the spatial and time resolution. It is therefore interesting to study the solar magnetic activity at many resolutions at the same time, and a multi-resolution analysis gives us this possibility for the synoptic solar magnetic fields. In this study we first carried out a wavelet-based multiresolution analysis (MRA) of the longitudinally averaged photospheric synoptic magnetograms. Magnetograms from the Wilcox Solar Observatory (WSO) at Stanford and from the Michelson Doppler Imager (MDI) onboard the ESA/NASA SOHO spacecraft were used. The WSO data enabled a study of cycles 21, 22 and 23, and the MDI data a more detailed study of cycle 23. The results reveal a complex picture of the solar magnetic activity on different scales. For resolutions around 1-2 years and 6-7 years we observe strong transport of flux to the polar regions. Around 11 years we observe a very regular pattern which resembles a wave from the polar to the sunspot regions. We also see that a large range of latitudes vary in phase. A large asymmetry between the solar northern and southern hemispheres is also seen. We have also developed a multilayer back-propagation neural network for prediction of the solar magnetic flux. The inputs to the model are the polar and sunspot magnetic fields from the WSO longitudinally averaged solar magnetic field data.
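
    A hedged sketch of the wavelet MRA step on a 1D series (parameters illustrative, PyWavelets assumed): decompose, then reconstruct each scale separately by zeroing all other coefficients, so the per-scale components sum back to the original signal.

    ```python
    import numpy as np
    import pywt  # PyWavelets, assumed available

    def mra_components(signal, wavelet="db4", level=5):
        """Split a signal into additive per-scale components by reconstructing
        each coefficient set with all the others zeroed (a standard MRA)."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        parts = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            parts.append(pywt.waverec(kept, wavelet)[: len(signal)])
        return parts  # parts[0] = smooth trend, parts[-1] = finest detail

    t = np.arange(1024)
    x = np.sin(2 * np.pi * t / 128) + 0.3 * np.sin(2 * np.pi * t / 11)
    components = mra_components(x)
    print(len(components), np.allclose(sum(components), x, atol=1e-8))
    ```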

  7. Static multiresolution grids with inline hierarchy information for cosmic ray propagation

    NASA Astrophysics Data System (ADS)

    Müller, Gero

    2016-08-01

    For numerical simulations of cosmic-ray propagation fast access to static magnetic field data is required. We present a data structure for multiresolution vector grids which is optimized for fast access, low overhead and shared memory use. The hierarchy information is encoded into the grid itself, reducing the memory overhead. Benchmarks show that in certain scenarios the differences in deflections introduced by sampling the magnetic field model can be significantly reduced when using the multiresolution approach.
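
    A minimal sketch (1D, invented layout) of the "hierarchy encoded into the grid itself" idea: a single flat node array where each entry is either a leaf value or the index of its first child, so lookup walks the array without a separate tree structure.

    ```python
    # node = (flag, payload): flag 0 -> leaf value, flag 1 -> first-child index;
    # a refined node's two children are stored contiguously in the same array.
    nodes = [
        (1, 1),      # 0: root cell, children at indices 1 and 2
        (1, 3),      # 1: left half, refined further, children at 3 and 4
        (0, 5.0),    # 2: right half, leaf value
        (0, 1.0),    # 3: left-left leaf
        (0, 2.0),    # 4: left-right leaf
    ]

    def lookup(x, lo=0.0, hi=1.0, i=0):
        """Descend the inline hierarchy to the leaf containing coordinate x."""
        flag, payload = nodes[i]
        while flag == 1:                  # refined cell: pick the child half
            mid = 0.5 * (lo + hi)
            if x < mid:
                i, hi = payload, mid
            else:
                i, lo = payload + 1, mid
            flag, payload = nodes[i]
        return payload

    print(lookup(0.1), lookup(0.4), lookup(0.9))   # 1.0 2.0 5.0
    ```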

  8. A Multi-Resolution Data Structure for Two-Dimensional Morse Functions

    SciTech Connect

    Bremer, P-T; Edelsbrunner, H; Hamann, B; Pascucci, V

    2003-07-30

    The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.

  9. Homogeneous hierarchies: A discrete analogue to the wavelet-based multiresolution approximation

    SciTech Connect

    Mirkin, B.

    1996-12-31

    A correspondence between discrete binary hierarchies and some orthonormal bases of the n-dimensional Euclidean space can be applied to such problems as clustering, ordering, identifying/testing in very large data bases, or multiresolution image/signal processing. The latter issue is considered in the paper. The binary hierarchy based multiresolution theory is expected to lead to effective methods for data processing because of relaxing the regularity restrictions of the classical theory.

  10. Multi-resolution community detection based on generalized self-loop rescaling strategy

    NASA Astrophysics Data System (ADS)

    Xiang, Ju; Tang, Yan-Ni; Gao, Yuan-Yuan; Zhang, Yan; Deng, Ke; Xu, Xiao-Ke; Hu, Ke

    2015-08-01

    Community detection is of considerable importance for analyzing the structure and function of complex networks. Many real-world networks may possess community structures at multiple scales, and recently, various multi-resolution methods were proposed to identify the community structures at different scales. In this paper, we present a type of multi-resolution methods by using the generalized self-loop rescaling strategy. The self-loop rescaling strategy provides one uniform ansatz for the design of multi-resolution community detection methods. Many quality functions for community detection can be unified in the framework of the self-loop rescaling. The resulting multi-resolution quality functions can be optimized directly using the existing modularity-optimization algorithms. Several derived multi-resolution methods are applied to the analysis of community structures in several synthetic and real-world networks. The results show that these methods can find the pre-defined substructures in synthetic networks and real splits observed in real-world networks. Finally, we give a discussion on the methods themselves and their relationship. We hope that the study in the paper can be helpful for the understanding of the multi-resolution methods and provide useful insight into designing new community detection methods.
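
    A hedged sketch of the self-loop rescaling ansatz on a toy graph (graph and partitions invented for the example): adding a weight r to every diagonal entry of the adjacency matrix and re-evaluating modularity sweeps the resolution, so different partitions win at different r.

    ```python
    import numpy as np

    def modularity(A, labels):
        """Newman modularity of a partition for a (possibly self-loop-
        augmented) adjacency matrix A."""
        k = A.sum(axis=1); two_m = A.sum()
        same = labels[:, None] == labels[None, :]
        return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

    # two 4-cliques joined by one edge
    A = np.zeros((8, 8))
    A[:4, :4] = 1; A[4:, 4:] = 1; np.fill_diagonal(A, 0); A[3, 4] = A[4, 3] = 1

    two  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # the two cliques
    four = np.array([0, 0, 1, 1, 2, 2, 3, 3])   # each clique split in half
    for r in (0.0, 20.0):
        Ar = A + r * np.eye(8)                   # self-loop rescaling
        print(f"r={r}: Q(two)={modularity(Ar, two):.3f}, "
              f"Q(four)={modularity(Ar, four):.3f}")
    # r = 0 favors the two cliques; large positive r makes the finer
    # four-way split score higher, i.e. r sweeps the resolution.
    ```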

  11. Continuously zoom imaging probe for the multi-resolution foveated laparoscope.

    PubMed

    Qin, Yi; Hua, Hong

    2016-04-01

    In modern minimally invasive surgeries (MIS), standard laparoscopes suffer from the tradeoff between the spatial resolution and field of view (FOV). The inability to simultaneously acquire high-resolution images for accurate operation and wide-angle overviews for situational awareness limits the efficiency and outcome of the MIS. A dual-view multi-resolution foveated laparoscope (MRFL) which can simultaneously provide the surgeon with a high-resolution view as well as a wide-angle overview was proposed and demonstrated to have great potential for improving the MIS. Although experimental results demonstrated that the high-magnification probe has an adequate magnification for viewing surgical details, the dual-view MRFL is limited to two fixed levels of magnification. A fine adjustment of the magnification is highly desired for obtaining high-resolution images with the desired field coverage. In this paper, a high-magnification probe with continuous zooming capability without any mechanical moving parts is demonstrated. By taking advantage of two electrically tunable lenses, one for optical zoom and the other for image focus compensation, the optical magnification of the high-magnification probe varies from 2× to 3× compared with that of the wide-angle probe, while the focused object position stays the same as for the wide-angle probe. The optical design and the tunable lens analysis are presented, followed by a prototype demonstration. PMID:27446645
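
    A hedged paraxial sketch (geometry and numbers invented, not the MRFL prescription) of the two-tunable-lens trick: for each requested magnification, solve the system ray-transfer matrix for the pair of lens powers that keeps the fixed object and image planes conjugate (B = 0) while setting the magnification (A = m), i.e. zooming with no moving parts.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    def system_matrix(p1, p2, d_obj=0.10, t=0.02, d_img=0.05):
        """Ray matrix: object plane -> lens 1 -> lens 2 -> image plane."""
        T = lambda d: np.array([[1.0, d], [0.0, 1.0]])   # free-space propagation
        L = lambda p: np.array([[1.0, 0.0], [-p, 1.0]])  # thin lens of power p
        return T(d_img) @ L(p2) @ T(t) @ L(p1) @ T(d_obj)

    def lens_powers(mag):
        """Powers (diopters) giving magnification `mag` with fixed conjugates."""
        eqs = lambda p: (lambda M: [M[0, 1], M[0, 0] - mag])(system_matrix(*p))
        return fsolve(eqs, x0=[30.0, -20.0])

    for m in (-1.0, -0.8, -0.667):   # continuous zoom, nothing moves
        p1, p2 = lens_powers(m)
        print(f"magnification {m:+.3f}: p1 = {p1:6.1f} D, p2 = {p2:6.1f} D")
    ```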

  12. Continuously zoom imaging probe for the multi-resolution foveated laparoscope

    PubMed Central

    Qin, Yi; Hua, Hong

    2016-01-01

    In modern minimally invasive surgeries (MIS), standard laparoscopes suffer from the tradeoff between the spatial resolution and field of view (FOV). The inability to simultaneously acquire high-resolution images for accurate operation and wide-angle overviews for situational awareness limits the efficiency and outcome of the MIS. A dual-view multi-resolution foveated laparoscope (MRFL) which can simultaneously provide the surgeon with a high-resolution view as well as a wide-angle overview was proposed and demonstrated to have great potential for improving the MIS. Although experimental results demonstrated that the high-magnification probe has an adequate magnification for viewing surgical details, the dual-view MRFL is limited to two fixed levels of magnification. A fine adjustment of the magnification is highly desired for obtaining high-resolution images with the desired field coverage. In this paper, a high-magnification probe with continuous zooming capability without any mechanical moving parts is demonstrated. By taking advantage of two electrically tunable lenses, one for optical zoom and the other for image focus compensation, the optical magnification of the high-magnification probe varies from 2× to 3× compared with that of the wide-angle probe, while the focused object position stays the same as for the wide-angle probe. The optical design and the tunable lens analysis are presented, followed by a prototype demonstration. PMID:27446645

  13. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-09-01

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method are compared against existing computational phantoms.

  14. X-ray Crystallographic Analyses of Pig Pancreatic α-Amylase with Limit Dextrin, Oligosaccharide and α-Cyclodextrin

    PubMed Central

    Larson, Steven B.; Day, John S.; McPherson, Alexander

    2010-01-01

    Further refinement of the model using maximum likelihood procedures and re-evaluation of the native electron density map has shown that crystals of pig pancreatic α-amylase, whose structure we reported more than fifteen years ago, in fact contain a substantial amount of carbohydrate. The carbohydrate fragments are the products of glycogen digestion carried out as an essential step of the protein's purification procedure. In particular, the substrate-binding cleft contains a limit dextrin of six glucose residues, one of which contains both α-(1,4) and α-(1,6) linkages to contiguous residues. The disaccharide in the original model, shared between two amylase molecules in the crystal lattice, but also occupying a portion of the substrate-binding cleft, is now seen to be a tetrasaccharide. There are, in addition, several other probable monosaccharide binding sites. In addition to these results, we have further reviewed our X-ray diffraction analysis of α-amylase complexed with α-cyclodextrin. α-Amylase binds three cyclodextrin molecules. Glucose residues of two of the rings superimpose upon the limit dextrin and the tetrasaccharide. The limit dextrin superimposes in large part upon linear oligosaccharide inhibitors visualized by other investigators. By comprehensive integration of these complexes we have constructed a model for the binding of polysaccharides having the helical character known to be present in natural substrates such as starch and glycogen. PMID:20222716

  15. Multi-Resolution Clustering Analysis and Visualization of Around One Million Synthetic Earthquake Events

    NASA Astrophysics Data System (ADS)

    Kaneko, J. Y.; Yuen, D. A.; Dzwinel, W.; Boryszko, K.; Ben-Zion, Y.; Sevre, E. O.

    2002-12-01

    The study of seismic patterns with synthetic data is important for analyzing the seismic hazard of faults because one can precisely control the spatial and temporal domains. Using modern clustering analysis from statistics and the recently introduced visualization software AMIRA, we have examined the multi-resolution nature of a total assemblage of 922,672 earthquake events in 4 numerically simulated models, which have different constitutive parameters, with 2 disparately different time intervals in a 3D spatial domain. The evolution of stress and slip on the fault plane was simulated with the 3D elastic dislocation theory for a configuration representing the central San Andreas Fault (Ben-Zion, J. Geophys. Res., 101, 5677-5706, 1996). The 4 different models represent various levels of fault zone disorder and have the following brittle properties and names: uniform properties (model U), a Parkfield-type asperity (A), fractal properties (F), and multi-size heterogeneities (model M). We employed the MNN (mutual nearest neighbor) clustering method and developed a C program that simultaneously calculates a number of parameters related to the location of the earthquakes and their magnitude values. Visualization was then used to look at the geometrical locations of the hypocenters and the evolution of seismic patterns. We wrote an AmiraScript that allows us to pass the parameters in an interactive format. With data sets consisting of 150-year time intervals, we have unveiled the distinctly multi-resolutional nature of the spatial-temporal pattern of small and large earthquake correlations shown previously by Eneva and Ben-Zion (J. Geophys. Res., 102, 24513-24528, 1997). In order to search for clearer possible stationary patterns and substructures within the clusters, we have also carried out the same analysis for corresponding data sets with time extending to several thousand years. The larger data sets were studied with finer and finer time intervals and multi
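
    A hedged sketch of the MNN (mutual nearest neighbor) linking rule on synthetic points (sizes and data invented): two points are connected only if each lies among the other's k nearest neighbors, and clusters are the connected components of those links.

    ```python
    import numpy as np

    def mnn_clusters(points, k=3):
        """Mutual-nearest-neighbor clustering: link i and j only if each is in
        the other's k-NN list, then merge links with union-find."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        knn = np.argsort(d, axis=1)[:, :k]             # k nearest per point
        parent = list(range(len(points)))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]; i = parent[i]
            return i
        for i in range(len(points)):
            for j in knn[i]:
                if i in knn[j]:                        # mutual neighbors: merge
                    parent[find(i)] = find(j)
        return np.array([find(i) for i in range(len(points))])

    rng = np.random.default_rng(2)
    pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
    labels = mnn_clusters(pts)
    print(len(np.unique(labels)), "clusters")  # mutual links never bridge the two blobs
    ```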

  16. Multi-Resolution Assimilative Analysis of High-Latitude Ionospheric Convection in both Hemispheres

    NASA Astrophysics Data System (ADS)

    Thomas, Z. M.; Matsuo, T.; Nychka, D. W.; Cousins, E. D. P.; Wiltberger, M. J.

    2014-12-01

    Assimilative techniques for obtaining complete maps of ionospheric electric potential (and related parameters) from sparse radar and satellite observations greatly facilitate studies of magnetosphere/ionosphere coupling. While there is much scientific interest in studying interhemispheric asymmetry in ionospheric convection at both large and small scales, current mapping procedures rely on spherical harmonic expansion techniques, which produce inherently large-scale analyses. Due to the global nature of the spherical harmonics, such techniques are also subject to various instabilities arising from sparsity/error in the observations, which can introduce non-physical patterns in the inferred convection. We present a novel technique for spatial mapping of ionospheric electric potential via a multi-resolution basis function expansion procedure, making use of compactly supported radial basis functions which are flexibly located over geodesic grids; the coefficients are modeled via a Markov random field construction. The technique is applied to radar observations from the Super Dual Auroral Radar Network (SuperDARN), whereupon careful comparison of interhemispheric differences in mapped potential is made at various scales.

  17. Multiresolution modeling with a JMASS-JWARS high-level architecture (HLA) federation

    NASA Astrophysics Data System (ADS)

    Plotz, Gary A.; Prince, John

    2003-09-01

    Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission and theater/campaign measures of performance, measures of effectiveness and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between each model are both time consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis. However, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting "extremely large" simulation. One viable alternative is to "integrate" the current hierarchical suite of simulation models using the DoD's High Level Architecture (HLA) in order to support multi-resolution modeling. An HLA integration -- called a federation -- eliminates the problem of "extremely large" models, provides a well-defined and manageable mixed resolution simulation and minimizes Verification, Validation, and Accreditation (VV&A) issues. This paper describes the process and results of integrating the Joint Modeling and Simulation System (JMASS) and the Joint Warfare System (JWARS) simulations -- two of the Department of Defense's (DoD) next-generation simulations -- using a HLA federation.

  18. Multi-Resolution Seismic Tomography Based on Recursive Tessellation Hierarchy

    SciTech Connect

    Simmons, N A; Myers, S C; Ramirez, A

    2009-07-01

    tomographic problems. They also apply the progressive inversion approach with Pn waves traveling within the Middle East region and compare the results to simple tomographic inversions. As expected from synthetic testing, the progressive approach results in detailed structure where there is high data density and broader regional anomalies where seismic information is sparse. The ultimate goal is to use these methods to produce a seamless, multi-resolution global tomographic model with local model resolution determined by the constraints afforded by available data. They envisage this new technique as the general approach to be employed for future multi-resolution model development with complex arrangements of regional and teleseismic information.

  19. Multi-resolution adaptive data collection prioritisation for multi-risk assessment

    NASA Astrophysics Data System (ADS)

    Pittore, M.; Wieland, M.; Bindi, D.; Fleming, K.; Parolai, S.

    2012-04-01

    The distribution and amount of potential losses due to natural hazards are continuously, and sometimes abruptly, varying in space and time. Changes in damage distribution depend both on the specific natural hazard (for instance, flood hazard can depend on the season and on the weather) and on the evolution of vulnerability (in terms of variation in size and composition of the exposed assets). Considering space and time, moreover, the most appropriate scales at which the changes occur have to be taken into account. Furthermore, the spatio-temporal variability of multi-risk assessment depends on the distribution and quality of the information upon which the assessment is made. This information is subject to uncertainties that also vary over time, for instance as new data are collected and integrated. Multi-risk assessment is therefore a dynamical process aiming for a continuous monitoring of the expected consequences of the occurrence of one or more natural events, given an uncertain and incomplete description of both the involved hazards and the composition and vulnerability of the exposed assets. A novel multi-resolution, adaptive data collection approach is explored, which is of particular interest in countries where multi-scale, multi-risk assessment is sought but limited resources are available for intensive exposure and vulnerability data collection. In this case a suitable prioritisation of data collection is proposed as an adaptive sampling scheme optimized to trade off between data collection cost and loss estimation uncertainty. Preliminary test cases will be presented and discussed.

  20. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    SciTech Connect

    Hoa T. Nguyen; Stone, Daithi; E. Wes Bethel

    2016-01-01

    An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have limitations, such as high computational cost or susceptibility to errors. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in the data, namely variation. We use two different case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in the data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.
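
    A minimal sketch of the paper's central contrast under assumed details (block-wise reduction, standard deviation as the variation statistic): reducing resolution by averaging flattens a field whose mean is constant, while a variation-preserving statistic retains the signal.

    ```python
    import numpy as np

    def block_reduce(data, factor, stat):
        """Reduce resolution by `factor`, summarizing each block with `stat`."""
        h, w = data.shape[0] // factor, data.shape[1] // factor
        blocks = data[:h * factor, :w * factor].reshape(h, factor, w, factor)
        return stat(blocks, axis=(1, 3))

    # a field whose mean is flat but whose variability grows left to right:
    # averaging hides the signal, a variation statistic (std) preserves it
    x = np.linspace(0, 1, 256)
    field = np.random.randn(256, 256) * x[None, :]
    by_mean = block_reduce(field, 16, np.mean)
    by_std = block_reduce(field, 16, np.std)
    print("mean-based range:", by_mean.min().round(2), by_mean.max().round(2))
    print("std-based ramp:  ", by_std[0, 0].round(2), "->", by_std[0, -1].round(2))
    ```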

  1. Native and nonnative fish populations of the Colorado River are food limited--evidence from new food web analyses

    USGS Publications Warehouse

    Kennedy, Theodore A.; Cross, Wyatt F.; Hall, Robert O., Jr.; Baxter, Colden V.; Rosi-Marshall, Emma J.

    2013-01-01

    Fish populations in the Colorado River downstream from Glen Canyon Dam appear to be limited by the availability of high-quality invertebrate prey. Midge and blackfly production is low and nonnative rainbow trout in Glen Canyon and native fishes in Grand Canyon consume virtually all of the midge and blackfly biomass that is produced annually. In Glen Canyon, the invertebrate assemblage is dominated by nonnative New Zealand mudsnails, the food web has a simple structure, and transfers of energy from the base of the web (algae) to the top of the web (rainbow trout) are inefficient. The food webs in Grand Canyon are more complex relative to Glen Canyon, because, on average, each species in the web is involved in more interactions and feeding connections. Based on theory and on studies from other ecosystems, the structure and organization of Grand Canyon food webs should make them more stable and less susceptible to large changes following perturbations of the flow regime relative to food webs in Glen Canyon. In support of this hypothesis, Grand Canyon food webs were much less affected by a 2008 controlled flood relative to the food web in Glen Canyon.

  2. Leaf Length Tracker: a novel approach to analyse leaf elongation close to the thermal limit of growth in the field

    PubMed Central

    Kirchgessner, Norbert; Yates, Steven; Hiltpold, Maya; Walter, Achim

    2016-01-01

    Leaf growth in monocot crops such as wheat and barley largely follows the daily temperature course, particularly under cold but humid springtime field conditions. Knowledge of the temperature response of leaf extension, particularly variations close to the thermal limit of growth, helps define physiological growth constraints and breeding-related genotypic differences among cultivars. Here, we present a novel method, called ‘Leaf Length Tracker’ (LLT), suitable for measuring leaf elongation rates (LERs) of cereals and other grasses with high precision and high temporal resolution under field conditions. The method is based on image sequence analysis, using a marker tracking approach to calculate LERs. We applied the LLT to several varieties of winter wheat (Triticum aestivum), summer barley (Hordeum vulgare), and ryegrass (Lolium perenne), grown in the field and in growth cabinets under controlled conditions. LLT is easy to use and we demonstrate its reliability and precision under changing weather conditions that include temperature, wind, and rain. We found that leaf growth stopped at a base temperature of 0°C for all studied species and we detected significant genotype-specific differences in LER with rising temperature. The data obtained were statistically robust and were reproducible in the tested environments. Using LLT, we were able to detect subtle differences (sub-millimeter) in leaf growth patterns. This method will allow the collection of leaf growth data in a wide range of future field experiments on different graminoid species or varieties under varying environmental or treatment conditions. PMID:26818912

  3. Leaf Length Tracker: a novel approach to analyse leaf elongation close to the thermal limit of growth in the field.

    PubMed

    Nagelmüller, Sebastian; Kirchgessner, Norbert; Yates, Steven; Hiltpold, Maya; Walter, Achim

    2016-04-01

    Leaf growth in monocot crops such as wheat and barley largely follows the daily temperature course, particularly under cold but humid springtime field conditions. Knowledge of the temperature response of leaf extension, particularly variations close to the thermal limit of growth, helps define physiological growth constraints and breeding-related genotypic differences among cultivars. Here, we present a novel method, called 'Leaf Length Tracker' (LLT), suitable for measuring leaf elongation rates (LERs) of cereals and other grasses with high precision and high temporal resolution under field conditions. The method is based on image sequence analysis, using a marker tracking approach to calculate LERs. We applied the LLT to several varieties of winter wheat (Triticum aestivum), summer barley (Hordeum vulgare), and ryegrass (Lolium perenne), grown in the field and in growth cabinets under controlled conditions. LLT is easy to use and we demonstrate its reliability and precision under changing weather conditions that include temperature, wind, and rain. We found that leaf growth stopped at a base temperature of 0°C for all studied species and we detected significant genotype-specific differences in LER with rising temperature. The data obtained were statistically robust and were reproducible in the tested environments. Using LLT, we were able to detect subtle differences (sub-millimeter) in leaf growth patterns. This method will allow the collection of leaf growth data in a wide range of future field experiments on different graminoid species or varieties under varying environmental or treatment conditions. PMID:26818912

  4. Analysing the Spectrum of Andesitic Plinian Eruptions: Approaching the Uppermost Hazard Limits Expected from MT. Ruapehu, New Zealand

    NASA Astrophysics Data System (ADS)

    Pardo, N.; Cronin, S. J.; Palmer, A. S.; Procter, J.; Smith, I. E.; Nemeth, K.

    2011-12-01

    parameters by comparing different methodologies, in order to best estimate realistic uppermost hazard limits. We found the method of Sulpizio (2005), k1 vs. √Aip integrated over multiple segments, to be the best approach for quantifying past eruptions where the exposures are limited to proximal-intermediate locations and isopachs thinner than 5 cm cannot be constructed. The bilobate nature of both the isopach and isopleth maps reflects the complexity of tephra dispersion, in the form of non-elliptical isopleth shapes showing strong contour distortion and lobe-axis bending, indicating important shifts in wind direction over a short time interval. Calculated eruptive parameters, such as minimum erupted volumes (0.3 to 0.6 km(3)), break-in-slope distances (√Aip: 31.4-80.8 km), column heights (22-37 km), volume discharge rates (~10(4)-10(5) m(3)/s), and mass discharge rates (~10(7)-10(8) kg/s), are all consistent with Plinian-style eruptions significantly larger than those that have occurred over the past 5000 yr (VEI = 3). These new data could yield the "worst-case" eruption scenario for Ruapehu, similar to the Plinian phases of the Askja 1875 and Chaitén 2008 eruptions.

  5. Multi-tissue analyses reveal limited inter-annual and seasonal variation in mercury exposure in an Antarctic penguin community.

    PubMed

    Brasso, Rebecka L; Polito, Michael J; Emslie, Steven D

    2014-10-01

    Inter-annual variation in tissue mercury concentrations in birds can result from annual changes in the bioavailability of mercury or shifts in dietary composition and/or trophic level. We investigated potential annual variability in mercury dynamics in the Antarctic marine food web using Pygoscelis penguins as biomonitors. Eggshell membrane, chick down, and adult feathers were collected from three species of sympatrically breeding Pygoscelis penguins during the austral summers of 2006/2007-2010/2011. To evaluate the hypothesis that mercury concentrations in penguins exhibit significant inter-annual variation and to determine the potential source of such variation (dietary or environmental), we compared tissue mercury concentrations with trophic levels as indicated by δ(15)N values from all species and tissues. Overall, no inter-annual variation in mercury was observed in adult feathers suggesting that mercury exposure, on an annual scale, was consistent for Pygoscelis penguins. However, when examining tissues that reflected more discrete time periods (chick down and eggshell membrane) relative to adult feathers, we found some evidence of inter-annual variation in mercury exposure during penguins' pre-breeding and chick rearing periods. Evidence of inter-annual variation in penguin trophic level was also limited suggesting that foraging ecology and environmental factors related to the bioavailability of mercury may provide more explanatory power for mercury exposure compared to trophic level alone. Even so, the variable strength of relationships observed between trophic level and tissue mercury concentrations across and within Pygoscelis penguin species suggest that caution is required when selecting appropriate species and tissue combinations for environmental biomonitoring studies in Antarctica. PMID:25085270

  6. Wavelet multiresolution analyses adapted for the fast solution of boundary value ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Jawerth, Bjoern; Sweldens, Wim

    1993-01-01

    We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
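
    For reference, the idea can be stated compactly: choose wavelets that are (bi)orthogonal in the energy inner product induced by the operator L, so that the Galerkin stiffness matrix is diagonal and the solution can be read off coefficient by coefficient. This is a sketch of the standard formulation in our own notation, not the paper's exact construction:

    ```latex
    a(\psi_{j,k}, \psi_{j',k'}) = \langle L\,\psi_{j,k},\, \psi_{j',k'} \rangle
      = d_{j,k}\,\delta_{jj'}\,\delta_{kk'}
    \quad\Longrightarrow\quad
    u = \sum_{j,k} \frac{\langle f, \psi_{j,k}\rangle}{d_{j,k}}\,\psi_{j,k}
    \ \text{ solves } \ Lu = f.
    ```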

  7. Multiresolution generalized N dimension PCA for ultrasound image denoising

    PubMed Central

    2014-01-01

    Background: Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining better diagnoses. Method: In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and the multiscale image stacks on each level are built first. GND-PCA, a multilinear subspace learning method, is used for denoising. The levels are then combined to achieve the final denoised image based on Laplacian pyramids. Results: The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion: Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for images with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
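
    A minimal sketch of the pyramid machinery such a method is built on: build a Gaussian pyramid, denoise the Laplacian (detail) levels, and recombine. Plain soft-thresholding stands in for the GND-PCA step here, and the speckle model and parameters are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def gaussian_down(img):
        return ndimage.gaussian_filter(img, 1.0)[::2, ::2]

    def up(img, shape):
        return ndimage.zoom(img, (shape[0] / img.shape[0],
                                  shape[1] / img.shape[1]), order=1)

    rng = np.random.default_rng(12)
    clean = np.zeros((128, 128))
    clean[32:96, 32:96] = 1.0
    noisy = clean * rng.gamma(16.0, 1 / 16.0, clean.shape)  # multiplicative speckle

    # Gaussian pyramid and its Laplacian (detail) levels.
    g = [noisy]
    for _ in range(3):
        g.append(gaussian_down(g[-1]))
    lap = [g[i] - up(g[i + 1], g[i].shape) for i in range(3)]

    # Denoise each detail level (soft-thresholding stands in for GND-PCA).
    shrink = lambda d, t: np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
    lap_dn = [shrink(d, d.std()) for d in lap]

    # Recombine from the coarsest level upward (Laplacian reconstruction).
    out = g[-1]
    for i in reversed(range(3)):
        out = up(out, lap_dn[i].shape) + lap_dn[i]

    print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
    print("denoised MSE:", np.mean((out - clean) ** 2))
    ```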

  8. On analysis of electroencephalogram by multiresolution-based energetic approach

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt; Siddiqi, A. H.; Aslan, Zafer

    2013-10-01

    Epilepsy is a common brain disorder in which normal neuronal activity is affected. Electroencephalography (EEG) is the recording of electrical activity along the scalp produced by the firing of neurons within the brain, and its main application is in the case of epilepsy: on a standard EEG, certain abnormalities indicate epileptic activity. EEG signals, like many biomedical signals, are highly non-stationary by nature. For the investigation of biomedical signals, in particular EEG signals, wavelet analysis has found a prominent position owing to its ability to analyze such signals. The wavelet transform is capable of separating the signal energy among different frequency scales, and a good compromise between temporal and frequency resolution is obtained. The present study attempts a better understanding of the mechanism causing the epileptic disorder and an accurate prediction of the occurrence of seizures. In the present paper, following Magosso's work [12], we identify typical patterns of energy redistribution before and during the seizure using multiresolution wavelet analysis on data from Kocaeli University's Medical School.
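
    A minimal sketch of the kind of multiresolution energy computation described, using PyWavelets; the wavelet choice, decomposition depth, and the synthetic stand-in signal are assumptions.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    eeg = rng.normal(size=4096)        # stand-in for one EEG epoch
    eeg[2048:] += 2 * np.sin(2 * np.pi * 3 * np.arange(2048) / 256)  # slow rhythm

    # Discrete wavelet decomposition; level 5 with db4 is a common EEG choice.
    coeffs = pywt.wavedec(eeg, "db4", level=5)

    # Relative energy per scale: how the total signal energy is distributed
    # across the approximation (A5) and the details (D5..D1).
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    for label, e in zip(["A5", "D5", "D4", "D3", "D2", "D1"],
                        energies / energies.sum()):
        print(f"{label}: {e:.3f}")
    ```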

  9. Cointegration and Nonstationarity in the Context of Multiresolution Analysis

    NASA Astrophysics Data System (ADS)

    Worden, K.; Cross, E. J.; Kyprianou, A.

    2011-07-01

    Cointegration has established itself as a powerful means of projecting out long-term trends from time-series data in the context of econometrics. Recent work by the current authors has further established that cointegration can be applied profitably in the context of structural health monitoring (SHM), where it is desirable to project out the effects of environmental and operational variations from data in order that they do not generate false positives in diagnostic tests. The concept of cointegration is partly built on a clear understanding of the ideas of stationarity and nonstationarity for time-series. Nonstationarity in this context is 'traditionally' established through the use of statistical tests, e.g. the hypothesis test based on the augmented Dickey-Fuller statistic. However, it is important to understand the distinction in this case between 'trend' stationarity and stationarity of the AR models typically fitted as part of the analysis process. The current paper will discuss this distinction in the context of SHM data and will extend the discussion by the introduction of multi-resolution (discrete wavelet) analysis as a means of characterising the time-scales on which nonstationarity manifests itself. The discussion will be based on synthetic data and also on experimental data for the guided-wave SHM of a composite plate.
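
    A minimal sketch of the combination described: an augmented Dickey-Fuller test applied both to a raw series and to its discrete-wavelet detail reconstructions, to ask at which time-scales nonstationarity manifests. The library choices (statsmodels, PyWavelets), wavelet, and synthetic trend are assumptions.

    ```python
    import numpy as np
    import pywt
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(2)
    n = 1024
    y = 0.01 * np.arange(n) + rng.normal(size=n)   # slow trend + noise

    def detail(y, level, wavelet="db4", maxlevel=6):
        """Reconstruct the signal keeping only the detail coefficients at `level`."""
        coeffs = pywt.wavedec(y, wavelet, level=maxlevel)
        kept = [np.zeros_like(c) for c in coeffs]
        idx = maxlevel - level + 1                 # position of cD_level
        kept[idx] = coeffs[idx]
        return pywt.waverec(kept, wavelet)[: len(y)]

    # The raw series carries the trend (nonstationary); the detail
    # reconstructions are trend-free, so the ADF p-values drop.
    print("raw series ADF p-value:", round(adfuller(y)[1], 4))
    for lev in (1, 3, 6):
        print(f"detail level {lev} ADF p-value:", round(adfuller(detail(y, lev))[1], 4))
    ```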

  10. Eagle II: A prototype for multi-resolution combat modeling

    SciTech Connect

    Powell, D.R.; Hutchinson, J.L.

    1993-02-01

    Eagle II is a prototype analytic model derived from the integration of the low resolution Eagle model with the high resolution SIMNET model. This integration promises a new capability to allow for a more effective examination of proposed or existing combat systems that could not be easily evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high and low fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment being conducted goes beyond system performance and extends to questions of force structure balance and sustainment, then SIMNET results can be used to "calibrate" the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the world-wide threat, requirements for flexible response, declining defense budgets, and down-sizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness and low overall cost.

  11. A novel adaptive multi-resolution combined watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, QiWei

    2008-04-01

    The rapid development of IT and WWW techniques means that people are frequently confronted with various kinds of authorization and identification problems, especially the copyright problem of digital products, and the digital watermarking technique emerged as one kind of solution. The balance between robustness and imperceptibility is always the objective sought by researchers in this area. In order to address both robustness and imperceptibility, a novel adaptive multi-resolution combined digital image watermarking algorithm is proposed in this paper. In the proposed algorithm, we first decompose the watermark into several sub-bands and embed each sub-band, according to its significance, into different DWT coefficients of the carrier image, as sketched below. The human visual system (HVS) is considered while embedding, so a larger watermark capacity can be embedded while preserving image quality. The experimental results show that the proposed algorithm performs better in terms of robustness and security, and that, at the same visual quality, the technique has a larger capacity. The unification of robustness and imperceptibility is thus achieved.
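
    A minimal sketch of additive watermark embedding in a DWT sub-band, not the authors' exact adaptive scheme; the wavelet, embedding strength, band assignment, and non-blind extraction are assumptions.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(3)
    carrier = rng.uniform(0, 255, (256, 256))      # stand-in for the host image
    watermark = rng.choice([-1.0, 1.0], (64, 64))  # binary mark, +/-1

    # Two-level 2D DWT of the carrier; for a 256x256 image the level-2
    # sub-bands are 64x64, matching the watermark size.
    cA2, (cH2, cV2, cD2), lvl1 = pywt.wavedec2(carrier, "haar", level=2)

    alpha = 2.0                         # strength: robustness vs. visibility
    marked = pywt.waverec2(
        [cA2, (cH2 + alpha * watermark, cV2, cD2), lvl1], "haar")

    # Non-blind extraction: difference against the original sub-band.
    cH2_rx = pywt.wavedec2(marked, "haar", level=2)[1][0]
    recovered = np.sign(cH2_rx - cH2)
    print("bit agreement:", np.mean(recovered == watermark))  # ~1.0
    ```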

  12. Eagle II: A prototype for multi-resolution combat modeling

    SciTech Connect

    Powell, D.R.; Hutchinson, J.L.

    1993-01-01

    Eagle II is a prototype analytic model derived from the integration of the low resolution Eagle model with the high resolution SIMNET model. This integration promises a new capability to allow for a more effective examination of proposed or existing combat systems that could not be easily evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high and low fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment being conducted goes beyond system performance and extends to questions of force structure balance and sustainment, then SIMNET results can be used to "calibrate" the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the world-wide threat, requirements for flexible response, declining defense budgets, and down-sizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness and low overall cost.

  13. Face Recognition with Multi-Resolution Spectral Feature Images

    PubMed Central

    Sun, Zhan-Li; Lam, Kin-Man; Dong, Zhao-Yang; Wang, Han; Gao, Qing-Wei; Zheng, Chun-Hou

    2013-01-01

    The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and its significance for real-world applications. However, achieving relatively high recognition accuracy remains difficult, usually because too few training samples are available and because of variations of illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method. PMID:23418451

  14. Wavelet-based multiresolution analysis of Wivenhoe Dam water temperatures

    NASA Astrophysics Data System (ADS)

    Percival, D. B.; Lennox, S. M.; Wang, Y.-G.; Darnell, R. E.

    2011-05-01

    Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicates a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable to studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
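
    A minimal sketch of an additive wavelet decomposition of an hourly temperature series into daily-scale, subannual, and annual-scale components, in the spirit of the DSA decomposition (the original uses a maximal-overlap DWT; the wavelet, level groupings, and synthetic series here are assumptions).

    ```python
    import numpy as np
    import pywt

    # Hourly temperatures, ~3.7 years (synthetic): annual + daily cycles + noise.
    n = 2 ** 15
    t = np.arange(n)
    temp = (10 * np.sin(2 * np.pi * t / (24 * 365.25))
            + 2 * np.sin(2 * np.pi * t / 24)
            + np.random.default_rng(4).normal(0, 0.5, n))

    wav, mode, J = "db4", "periodization", 12
    coeffs = pywt.wavedec(temp, wav, mode=mode, level=J)  # [A12, D12, ..., D1]

    def recon(keep):
        """Reconstruct keeping only the coefficient arrays at positions in `keep`."""
        sel = [c if i in keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
        return pywt.waverec(sel, wav, mode=mode)

    # Detail level j captures periods of roughly 2**j .. 2**(j+1) hours.
    daily = recon(range(7, 13))       # D6..D1: periods below ~5 days
    subannual = recon(range(1, 7))    # D12..D7: ~5 days to ~1 year
    annual = recon({0})               # A12: annual cycle and long-term trend

    # Additivity: the three components sum back to the original series.
    assert np.allclose(daily + subannual + annual, temp)
    print("component variances:", daily.var(), subannual.var(), annual.var())
    ```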

  15. Extended generalized Lagrangian multipliers for magnetohydrodynamics using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Domingues, Margarete O.; Gomes, Anna Karina F.; Mendes, Odim; Schneider, Kai

    2013-10-01

    We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and numerical divergences of the magnetic field are reported and the accuracy of the adaptive computations is assessed by comparing with the available exact solution. This work was supported by the contract SiCoMHD (ANR-Blanc 2011-045).

  16. MRI data driven partial volume effects correction in PET imaging using 3D local multi-resolution analysis

    NASA Astrophysics Data System (ADS)

    Le Pogam, Adrien; Lamare, Frederic; Hatt, Mathieu; Fernandez, Philippe; Le Rest, Catherine Cheze; Visvikis, Dimitris

    2013-02-01

    PET partial volume effects (PVE), resulting from the limited resolution of PET scanners, are still a quantitative issue that PET/MRI scanners do not solve by themselves. A recently proposed voxel-based locally adaptive 3D multi-resolution PVE correction based on the mutual analysis of wavelet decompositions was applied to 12 clinical 18F-FLT PET/T1 MRI images of glial tumors, and compared to a PET-only voxel-wise iterative deconvolution approach. Quantitative and qualitative results demonstrated the interest of exploiting PET/MRI information, with higher uptake increases (19±8% vs. 11±7%, p=0.02), as well as more convincing visual restoration of details within tumors with respect to deconvolution of the PET uptake only. Further studies are now required to demonstrate the accuracy of this restoration with histopathological validation of the uptake in tumors.

  17. Fast pseudo-semantic segmentation for joint region-based hierarchical and multiresolution representation

    NASA Astrophysics Data System (ADS)

    Sekkal, Rafiq; Strauss, Clement; Pasteau, François; Babel, Marie; Deforges, Olivier

    2012-01-01

    In this paper, we present a new scalable segmentation algorithm called JHMS (Joint Hierarchical and Multiresolution Segmentation), characterized by region-based hierarchy and resolution scalability. Most previously proposed algorithms apply either a multiresolution segmentation or a hierarchical segmentation; the proposed approach combines both. Indeed, the image is considered as a set of images at different levels of resolution, where at each level a hierarchical segmentation is performed. Multiresolution implies that the segmentation of a given level is reused in the segmentation processes operated at the next levels, so as to ensure contour consistency between different resolutions. Each level of resolution provides a Region Adjacency Graph (RAG) that describes the neighborhood relationships between regions within a given level of the multiresolution representation. Region label consistency is preserved thanks to a dedicated projection algorithm based on inter-level relationships. Moreover, a preprocess based on a quadtree partitioning reduces the amount of input data, leading to a lower overall complexity of the segmentation framework. Experiments show that we obtain effective results compared to the state of the art, with a lower complexity.

  18. A multiresolution spatial parametrization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions.

    SciTech Connect

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet; Michalak, Anna M.; van Bloemen Waanders, Bart Gustaaf; McKenna, Sean Andrew

    2013-04-01

    The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a roughly tenfold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.

  19. Techniques and potential capabilities of multi-resolutional information (knowledge) processing

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

    A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Definite conditions are analyzed whose satisfaction is required for the organization and processing of redundant information (knowledge) in multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.

  20. Multiresolution phase retrieval in the fresnel region by use of wavelet transform.

    PubMed

    Souvorov, Alexei; Ishikawa, Tetsuya; Kuyumchyan, Armen

    2006-02-01

    A multiresolution (multiscale) analysis based on wavelet transform is applied to the problem of optical phase retrieval from the intensity measured in the in-line geometry (lens-free). The transport-of-intensity equation and the Fresnel diffraction integral are approximated in terms of a wavelet basis. A solution to the phase retrieval problem can be efficiently found in both cases using the multiresolution concept. Due to the hierarchical nature of wavelet spaces, wavelets are well suited to multiresolution methods that contain multigrid algorithms. Appropriate wavelet bases for the best solution approximation are discussed. The proposed approach reduces the computational complexity and accelerates the convergence of the solution. It is robust and reliable, and successful on both simulated and experimental images obtained with hard x rays. PMID:16477833
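
    For reference, the transport-of-intensity equation that the paper approximates in a wavelet basis relates the measured axial intensity derivative to the transverse phase gradient (standard form, in our notation):

    ```latex
    \frac{\partial I}{\partial z}
      = -\frac{\lambda}{2\pi}\,
        \nabla_{\perp} \cdot \left( I \, \nabla_{\perp} \varphi \right)
    ```

    where I is the intensity, φ the phase, λ the wavelength, and ∇⊥ the gradient in the plane transverse to propagation.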

  1. A one-time truncate and encode multiresolution stochastic framework

    SciTech Connect

    Abgrall, R.; Congedo, P.M.; Geraci, G.

    2014-01-15

    In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme, and handles a large class of problems, from unsteady to discontinuous solutions. Its formulation permits recovering the same results concerning the interpolation theory of the classical multiresolution approach, but with an extension to uncertainty quantification problems. The present strategy permits building numerical schemes with higher accuracy than other classical uncertainty quantification techniques, but with a strong reduction of the numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time-varying ones, without introducing further complications in the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distribution are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan–Orszag problem are reported in terms of accuracy and convergence. Finally, a two degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model allows recovering some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injected uncertainty is chosen so as to obtain a complete parameterization of the mass matrix. All the numerical results are compared against a classical Monte Carlo solution and a non-intrusive Polynomial Chaos method.
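
    A minimal sketch of the classical Harten-style truncate-and-encode idea the paper adapts: coarsen cell averages, predict the fine level from the coarse one, and store only the prediction details that exceed a threshold. The piecewise-constant prediction operator, threshold, and test function are assumptions, and this sketches the deterministic setting rather than the stochastic extension.

    ```python
    import numpy as np

    def encode(u, levels=4, eps=1e-3):
        """Harten-style truncate-and-encode of fine-grid cell averages `u`."""
        details, cur = [], u
        for _ in range(levels):
            coarse = 0.5 * (cur[0::2] + cur[1::2])   # cell-average coarsening
            pred = np.repeat(coarse, 2)              # simplest prediction: copy
            d = cur - pred                           # prediction details
            d[np.abs(d) < eps] = 0.0                 # truncate small details
            details.append(d)
            cur = coarse
        return cur, details                          # coarsest averages + details

    def decode(coarse, details):
        cur = coarse
        for d in reversed(details):
            cur = np.repeat(cur, 2) + d              # invert prediction + details
        return cur

    x = np.linspace(0, 1, 256, endpoint=False)
    u = np.where(x < 0.5, np.sin(2 * np.pi * x), 0.2)  # smooth + discontinuity
    coarse, details = encode(u)
    kept = sum(int(np.count_nonzero(d)) for d in details)
    print("nonzero details kept:", kept, "of", sum(d.size for d in details))
    print("max reconstruction error:", np.abs(decode(coarse, details) - u).max())
    ```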

  2. Multiresolution graph Fourier transform for compression of piecewise smooth images.

    PubMed

    Hu, Wei; Cheung, Gene; Ortega, Antonio; Au, Oscar C

    2015-01-01

    Piecewise smooth (PWS) images (e.g., depth maps or animation images) contain unique signal characteristics such as sharp object boundaries and slowly varying interior surfaces. Leveraging on recent advances in graph signal processing, in this paper, we propose to compress the PWS images using suitable graph Fourier transforms (GFTs) to minimize the total signal representation cost of each pixel block, considering both the sparsity of the signal's transform coefficients and the compactness of transform description. Unlike fixed transforms, such as the discrete cosine transform, we can adapt GFT to a particular class of pixel blocks. In particular, we select one among a defined search space of GFTs to minimize total representation cost via our proposed algorithms, leveraging on graph optimization techniques, such as spectral clustering and minimum graph cuts. Furthermore, for practical implementation of GFT, we introduce two techniques to reduce computation complexity. First, at the encoder, we low-pass filter and downsample a high-resolution (HR) pixel block to obtain a low-resolution (LR) one, so that a LR-GFT can be employed. At the decoder, upsampling and interpolation are performed adaptively along HR boundaries coded using arithmetic edge coding, so that sharp object boundaries can be well preserved. Second, instead of computing GFT from a graph in real-time via eigen-decomposition, the most popular LR-GFTs are pre-computed and stored in a table for lookup during encoding and decoding. Using depth maps and computer-graphics images as examples of the PWS images, experimental results show that our proposed multiresolution-GFT scheme outperforms H.264 intra by 6.8 dB on average in peak signal-to-noise ratio at the same bit rate. PMID:25494508
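
    A minimal numpy sketch of a graph Fourier transform for one pixel block: build a graph over the pixels, take the Laplacian eigenvectors as the transform basis, and project the block onto them. The 4-connected grid with one weakened link across the image edge is an assumption for illustration, not the paper's optimized GFT search.

    ```python
    import numpy as np

    B = 8
    block = np.zeros((B, B))
    block[:, B // 2:] = 100.0            # piecewise-smooth: sharp vertical edge

    # 4-connected grid graph over the pixels; weaken links across the edge.
    n = B * B
    W = np.zeros((n, n))
    for i in range(B):
        for j in range(B):
            p = i * B + j
            if j + 1 < B:                # horizontal link
                w = 0.01 if j + 1 == B // 2 else 1.0
                W[p, p + 1] = W[p + 1, p] = w
            if i + 1 < B:                # vertical link
                W[p, p + B] = W[p + B, p] = 1.0

    L = np.diag(W.sum(axis=1)) - W       # combinatorial graph Laplacian
    _, U = np.linalg.eigh(L)             # GFT basis = Laplacian eigenvectors

    coef = U.T @ block.ravel()           # forward GFT
    top2 = np.sort(coef ** 2)[::-1][:2].sum() / np.sum(coef ** 2)
    print(f"energy in the 2 largest GFT coefficients: {top2:.4f}")
    ```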

  3. On-the-Fly Decompression and Rendering of Multiresolution Terrain

    SciTech Connect

    Lindstrom, P; Cohen, J D

    2009-04-02

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
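
    A minimal sketch of the linear-prediction-plus-residual idea behind such lossless height compression; the specific causal predictor (the parallelogram rule) and the toy integer grid are assumptions, not the paper's codec.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    x, y = np.meshgrid(np.arange(64), np.arange(64))
    H = 40 * np.sin(x / 9.0) + 0.3 * y ** 1.2 + rng.normal(0, 0.2, x.shape)
    H = np.round(H).astype(np.int32)          # integer heights, as stored

    # Predict each sample from causal neighbors: west + north - northwest.
    P = np.zeros_like(H)
    P[1:, 1:] = H[1:, :-1] + H[:-1, 1:] - H[:-1, :-1]
    P[0, 1:] = H[0, :-1]                      # first row: predict from the west
    P[1:, 0] = H[:-1, 0]                      # first column: predict from north
    R = H - P                                 # residuals, entropy-coded in practice

    print("height range:  ", H.min(), "..", H.max())
    print("residual range:", R.min(), "..", R.max())  # far narrower -> cheap bits
    ```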

  4. Using sparse regularization for multi-resolution tomography of the ionosphere

    NASA Astrophysics Data System (ADS)

    Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.

    2015-10-01

    Computerized ionospheric tomography (CIT) is a technique for reconstructing the state of the ionosphere, in terms of electron content, from a set of slant total electron content (STEC) measurements; it is usually posed as an inverse problem. In this experiment the measurements come from the phase of the GPS signal and are therefore affected by bias, so the STEC cannot be considered in absolute terms but rather in relative terms. Measurements are collected from receivers that are not evenly distributed in space; together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes change continuously in time and space, which can limit the accuracy with which CIT resolves structures and the processes that describe the ionosphere. Some inversion techniques are based on ℓ2 minimization algorithms (i.e. Tikhonov regularization), and a standard approach using spherical harmonics as a reference is implemented here for comparison with the new method. A new approach is proposed for CIT that permits sparsity in the reconstruction coefficients by using wavelet basis functions. It is based on the ℓ1 minimization technique and on wavelet basis functions, chosen for their compact-representation properties. The ℓ1 minimization is selected because it can optimize the result under an uneven distribution of observations by exploiting the localization property of wavelets. Also illustrated is how the inter-frequency biases on the STEC are calibrated within the inversion, which is used as a way of evaluating the accuracy of the method. The technique is demonstrated using a simulation, showing the advantage of ℓ1 minimization over ℓ2 minimization for estimating the coefficients. This is particularly true for an uneven observation geometry, and especially for multi-resolution CIT.
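
    A minimal sketch contrasting an ℓ2 (Tikhonov) solution with an ℓ1 (sparsity-promoting) solution of a small underdetermined linear inverse problem of the kind described; the iterative soft-thresholding solver, problem sizes, and regularization weights are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    m, n, k = 40, 128, 5                      # few measurements, sparse truth
    A = rng.normal(size=(m, n)) / np.sqrt(m)  # stand-in for the geometry matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 3, k)
    y = A @ x_true + rng.normal(0, 0.01, m)

    # l2 (Tikhonov): closed form, tends to smear energy over many coefficients.
    lam = 0.1
    x_l2 = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

    # l1 via ISTA: gradient step on the data term, then soft-threshold.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x_l1, tau = np.zeros(n), 0.05
    for _ in range(2000):
        g = x_l1 - step * (A.T @ (A @ x_l1 - y))
        x_l1 = np.sign(g) * np.maximum(np.abs(g) - step * tau, 0.0)

    for name, x in (("l2", x_l2), ("l1", x_l1)):
        err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
        print(f"{name}: relative error {err:.3f}, "
              f"nonzeros {int(np.sum(np.abs(x) > 1e-3))}")
    ```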

  5. Multi-resolution statistical analysis of brain connectivity graphs in preclinical Alzheimer's disease.

    PubMed

    Kim, Won Hwa; Adluru, Nagesh; Chung, Moo K; Okonkwo, Ozioma C; Johnson, Sterling C; B Bendlin, Barbara; Singh, Vikas

    2015-09-01

    There is significant interest, both from basic and applied research perspectives, in understanding how structural/functional connectivity changes can explain behavioral symptoms and predict decline in neurodegenerative diseases such as Alzheimer's disease (AD). The first step in most such analyses is to encode the connectivity information as a graph; then, one may perform statistical inference on various 'global' graph theoretic summary measures (e.g., modularity, graph diameter) and/or at the level of individual edges (or connections). For AD in particular, clear differences in connectivity at the dementia stage of the disease (relative to healthy controls) have been identified. Despite such findings, AD-related connectivity changes in preclinical disease remain poorly characterized. Such preclinical datasets are typically smaller and group differences are weaker. In this paper, we propose a new multi-resolution method for performing statistical analysis of connectivity networks/graphs derived from neuroimaging data. At a high level, the method occupies the middle ground between the two extremes, that is, analyzing global graph summary measures (global) or connectivity strengths or correlations for individual edges, similar to voxel-based analysis (local). Instead, our strategy derives a wavelet representation at each primitive (connection edge) which captures the graph context at multiple resolutions. We provide extensive empirical evidence of how this framework offers improved statistical power by analyzing two distinct AD datasets. Here, connectivity is derived from diffusion tensor magnetic resonance images by running a tractography routine. We first present results showing significant connectivity differences between AD patients and controls that were not evident using standard approaches. Later, we show results on populations that are not diagnosed with AD but have a positive family history risk of AD where our algorithm helps in identifying potentially

  6. A hexahedron element formulation with a new multi-resolution analysis

    NASA Astrophysics Data System (ADS)

    Xia, YiMing; Chen, ShaoLin

    2015-01-01

    A multiresolution hexahedron element is presented within a new multiresolution analysis (MRA) framework. The MRA framework is formulated from a mutually nested displacement subspace sequence whose basis functions are constructed by scaling and shifting, over the element domain, a basic node shape function. The basic node shape function is constructed by shifting to the other seven quadrants around a specific node of a basic isoparametric element in one quadrant and joining the corresponding node shape functions of the eight elements at that node. The MRA endows the proposed element with a resolution level (RL) that adjusts the structural analysis accuracy. As a result, the traditional 8-node hexahedron element is a mono-resolution element and a special case of the proposed element. Meshing for the mono-resolution finite element model is based on empiricism, while RL adjustment for the multiresolution model rests on a solid mathematical basis. The simplicity and clarity of the shape function construction, with the Kronecker delta property, and the rational MRA make the proposed element method more rational, easier, and more efficient to implement than the conventional mono-resolution solid element method or other MRA methods. The multiresolution hexahedron element method is better suited to the accurate computation of structural problems.

  7. Multiresolution stroke sketch adaptive representation and neural network processing system for gray-level image recognition

    NASA Astrophysics Data System (ADS)

    Meystel, Alexander M.; Rybak, Ilya A.; Bhasin, Sanjay

    1992-11-01

    This paper describes a method for multiresolutional representation of gray-level images as hierarchical sets of strokes characterizing the forms of objects with different degrees of generalization depending on the context of the image. The method transforms the original image into a hierarchical graph which allows for efficient coding in order to store, retrieve, and recognize the image. It is based upon finding the resolution levels for each image that minimize the computation required. This becomes possible through a special image representation technique called Multiresolutional Attentional Representation for Recognition (MARR), based upon a feature which the authors call a stroke. This feature turns out to be efficient in the process of finding the appropriate system of resolutions and constructing the relational graph. MARR is formed by a multi-layer neural network with recurrent inhibitory connections between neurons, whose receptive fields are selectively tuned to detect the orientation of local contrasts in parts of the image with the appropriate degree of generalization. This method simulates the 'coarse-to-fine' algorithm an artist typically uses when making an attentional sketch of real images. The method, algorithms, and neural network architecture in this system can be used in many machine-vision systems with AI properties, in particular robotic vision. We expect that systems with MARR can become a component of intelligent control systems for autonomous robots, whose architectures are mostly multiresolutional and match well with the multiple resolutions of the MARR structure.

  8. Spatial heterogeneity of dechlorinating bacteria and limiting factors for in situ trichloroethene dechlorination revealed by analyses of sediment cores from a polluted field site.

    PubMed

    Dowideit, Kerstin; Scholz-Muramatsu, Heidrun; Miethling-Graff, Rona; Vigelahn, Lothar; Freygang, Martina; Dohrmann, Anja B; Tebbe, Christoph C

    2010-03-01

    Microbiological analyses of sediment samples were conducted to explore potentials and limitations for bioremediation of field sites polluted with chlorinated ethenes. Intact sediment cores, collected by direct push probing from a 35-ha contaminated area, were analyzed in horizontal layers. Cultivation-independent PCR revealed Dehalococcoides to be the most abundant 16S rRNA gene phylotype with a suspected potential for reductive dechlorination of the major contaminant trichloroethene (TCE). In declining abundances, Desulfitobacterium, Desulfuromonas and Dehalobacter were also detected. In TCE-amended sediment slurry incubations, 66% of 121 sediment samples were dechlorinating, among them one-third completely and the rest incompletely (end product cis-1,2-dichloroethene; cDCE). Both PCR and slurry analyses revealed highly heterogeneous horizontal and vertical distributions of the dechlorination potentials in the sediments. Complete reductive TCE dechlorination correlated with the presence of Dehalococcoides, accompanied by Acetobacterium and a relative of Trichococcus pasteurii. Sediment incubations under close to in situ conditions showed that a low TCE dechlorination activity could be stimulated by 7 mg L(-1) dissolved carbon for cDCE formation and by an additional 36 mg carbon (lactate) L(-1) for further dechlorination. The study demonstrates that the highly heterogeneous distribution of TCE degraders and their specific requirements for carbon and electrons are key issues for TCE degradation in contaminated sites. PMID:20041951

  9. The Global Multi-Resolution Topography (GMRT) Synthesis

    NASA Astrophysics Data System (ADS)

    Arko, R.; Ryan, W.; Carbotte, S.; Melkonian, A.; Coplan, J.; O'Hara, S.; Chayes, D.; Weissel, R.; Goodwillie, A.; Ferrini, V.; Stroker, K.; Virden, W.

    2007-12-01

    Topographic maps provide a backdrop for research in nearly every earth science discipline. There is particular demand for bathymetry data in the ocean basins, where existing coverage is sparse. Ships and submersibles worldwide are rapidly acquiring large volumes of new data with modern swath mapping systems. The science community is best served by a global topography compilation that is easily accessible, up-to-date, and delivers data in the highest possible (i.e. native) resolution. To meet this need, the NSF-supported Marine Geoscience Data System (MGDS; www.marine-geo.org) has partnered with the National Geophysical Data Center (NGDC; www.ngdc.noaa.gov) to produce the Global Multi-Resolution Topography (GMRT) synthesis - a continuously updated digital elevation model that is accessible through Open Geospatial Consortium (OGC; www.opengeospatial.org) Web services. GMRT had its genesis in 1992 with the NSF RIDGE Multibeam Synthesis (RMBS); later grew to include the Antarctic Multibeam Synthesis (AMBS); expanded again to include the NSF Ridge 2000 and MARGINS programs; and finally emerged as a global compilation in 2005 with the NSF Legacy of Ocean Exploration (LOE) project. The LOE project forged a permanent partnership between MGDS and NGDC, in which swath bathymetry data sets are routinely published and exchanged via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH; www.openarchives.org). GMRT includes both color-shaded relief images and underlying elevation values at ten different resolutions as high as 100m. New data are edited, gridded, and tiled using tools originally developed by William Haxby at Lamont-Doherty Earth Observatory. Global and regional data sources include the NASA Shuttle Radar Topography Mission (SRTM; http://www.jpl.nasa.gov/srtm/); Smith & Sandwell Satellite Predicted Bathymetry (http://topex.ucsd.edu/marine_topo/); SCAR Subglacial Topographic Model of the Antarctic (BEDMAP; http://www.antarctica.ac.uk/bedmap/); and

  10. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
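
    A minimal sketch of a matching-pursuit-style sparse reconstruction with a non-negativity constraint, in the spirit of the adaptation described (this uses the simpler OMP variant with a clipped refit, not StOMP itself; the toy sizes and data are assumptions).

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    m, n = 60, 200
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in sensing/basis matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, 6, replace=False)] = rng.uniform(1, 4, 6)  # non-negative
    y = A @ x_true + rng.normal(0, 0.01, m)

    support, coef, r = [], np.zeros(0), y.copy()
    for _ in range(12):                        # greedy stagewise selection
        corr = A.T @ r
        j = int(np.argmax(corr))               # most positively correlated atom
        if corr[j] <= 0:
            break
        support = sorted(set(support) | {j})
        # Refit on the current support, then clip to enforce non-negativity.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        coef = np.maximum(coef, 0.0)
        r = y - A[:, support] @ coef

    x_hat = np.zeros(n)
    x_hat[support] = coef
    print("recovered support:", support)
    print("true support:     ", sorted(np.flatnonzero(x_true).tolist()))
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```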

  11. A multi-resolution method for climate system modeling: Application of Spherical Centroidal Voronoi Tessellations

    SciTech Connect

    Ringler, Todd D; Gunzburger, Max; Ju, Lili

    2008-01-01

    During the next decade and beyond, climate system models will be challenged to resolve scales and processes far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multi-resolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical Centroidal Voronoi Tessellations (SCVTs) offer one potential path toward the development of robust, multi-resolution climate system component models. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean-ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow-water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multi-resolution method and the challenges ahead.
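
    A minimal planar sketch of the centroidal Voronoi idea: Lloyd iterations with a user-defined density, approximated by moving each generator to the centroid of the density-weighted samples in its Voronoi region. The density function and sizes are assumptions, and a true SCVT does this on the sphere.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def density(p):
        """User-defined density: refine the mesh near the point (0.7, 0.7)."""
        return 1.0 + 50.0 * np.exp(-20.0 * np.sum((p - [0.7, 0.7]) ** 2, axis=1))

    # Density-weighted sample cloud via rejection sampling on the unit square.
    samples = rng.uniform(0, 1, (40000, 2))
    w = density(samples)
    pts = samples[rng.uniform(0, w.max(), len(w)) < w]

    gen = rng.uniform(0, 1, (64, 2))                  # 64 initial generators
    for _ in range(30):                               # Lloyd iterations
        # Assign each sample to its nearest generator (its Voronoi region).
        lab = ((pts[:, None, :] - gen[None, :, :]) ** 2).sum(-1).argmin(1)
        # Move each generator to the centroid of its region.
        for k in range(len(gen)):
            if np.any(lab == k):
                gen[k] = pts[lab == k].mean(0)

    # Generators crowd where the density is high -> graded, multi-resolution mesh.
    near = int(np.sum(((gen - [0.7, 0.7]) ** 2).sum(1) < 0.05))
    print(f"{near} of {len(gen)} generators lie within r≈0.22 of the refined spot")
    ```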

  12. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
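
    A minimal sketch of the rigid-alignment core of ICP (nearest-neighbour matching plus the SVD-based best-fit rotation), the kind of fitting step the fusion pipeline relies on; the toy point clouds and ground-truth misalignment are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    P = rng.uniform(-1, 1, (200, 3))                  # "David" high-res object
    theta = 0.3                                        # ground-truth misalignment
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    Q = P @ Rz.T + np.array([0.2, -0.1, 0.05])        # "Kinect" counterpart

    src = P.copy()
    for _ in range(20):                                # ICP iterations
        # 1. Match: nearest neighbour in Q for every point of src.
        d2 = ((src[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        match = Q[d2.argmin(1)]
        # 2. Fit: best rigid transform via the Kabsch/SVD solution.
        mu_s, mu_m = src.mean(0), match.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (match - mu_m))
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        # 3. Apply and repeat until the clouds coincide.
        src = src @ R.T + t

    print("RMS alignment error:", np.sqrt(((src - Q) ** 2).sum(1).mean()))
    ```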

  13. Noise-induced systematic errors in ratio imaging: serious artefacts and correction with multi-resolution denoising.

    PubMed

    Wang, Yu-Li

    2007-11-01

    Ratio imaging is playing an increasingly important role in modern cell biology. Combined with ratiometric dyes or fluorescence resonance energy transfer (FRET) biosensors, the approach allows the detection of conformational changes and molecular interactions in living cells. However, the approach is conducted increasingly under limited signal-to-noise ratio (SNR), where noise from multiple images can easily accumulate and lead to substantial uncertainty in ratio values. This study demonstrates that a far more serious concern is systematic errors that generate artificially high ratio values at low SNR. Thus, uneven SNR alone may lead to significant variations in ratios among different regions of a cell. Although correct average ratios may be obtained by applying conventional noise reduction filters, such as a Gaussian filter before calculating the ratio, these filters have a limited performance at low SNR and are prone to artefacts such as generating discrete domains not found in the correct ratio image. Much more reliable restoration may be achieved with multi-resolution denoising filters that take into account the actual noise characteristics of the detector. These filters are also capable of restoring structural details and photometric accuracy, and may serve as a general tool for retrieving reliable information from low-light live cell images. PMID:17970912
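
    A minimal Monte Carlo illustration of the systematic error described: at low SNR the expected value of a pixelwise ratio of noisy intensities is biased high even when both channels have the correct mean. The Gaussian noise model and clipping rule are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    true_num, true_den = 100.0, 100.0        # true ratio = 1.0 everywhere

    for snr in (50, 5, 2):
        sigma = true_den / snr
        num = true_num + rng.normal(0, sigma, 1_000_000)
        den = true_den + rng.normal(0, sigma, 1_000_000)
        den = np.clip(den, 1e-3, None)       # mimic clipping non-positive pixels
        r = num / den
        print(f"SNR {snr:>2}: mean ratio = {r.mean():.3f} (true 1.0)")
    # The bias grows as SNR falls, roughly as
    # E[N/D] ~ (mu_N/mu_D) * (1 + sigma_D**2 / mu_D**2 + ...).
    ```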

  14. An Optimised System for Generating Multi-Resolution DTMs Using NASA MRO Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  15. Multi-resolution imaging with an optimized number and distribution of sampling points.

    PubMed

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo

    2014-05-01

    We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis. PMID:24921717
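
    A hedged sketch of the core criterion: the scanning configuration is judged by the singular-value behaviour of the discretized forward operator. The single-frequency 1D kernel and all numbers below are assumptions, not the paper's model.

```python
# Counts "usable" singular values of a toy single-frequency sensing matrix
# as the number of sampling points on the scanning line grows.
import numpy as np

wavelength = 0.03                         # assumed 10 GHz, metres
k = 2 * np.pi / wavelength
targets = np.linspace(-0.5, 0.5, 64)      # reflectivity sample points

def significant_modes(n_samples, aperture, threshold=1e-3):
    """Singular values above threshold * s_max for n_samples probes
    spanning `aperture` metres."""
    probes = np.linspace(-aperture / 2, aperture / 2, n_samples)
    A = np.exp(1j * k * np.outer(probes, targets))   # phase kernel
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > threshold * s[0]))

for n in (8, 16, 32, 64, 128):
    print(n, "samples ->", significant_modes(n, aperture=1.0), "usable modes")
```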

  16. High throughput VLSI architecture for multiresolution integer motion estimation in high definition AVS video encoder

    NASA Astrophysics Data System (ADS)

    Yin, HaiBing; Qi, Honggang; Xu, Hao; Xie, Xiaodong; Gao, Wen

    2010-07-01

    This paper proposes a hardware-friendly multi-resolution motion estimation algorithm and VLSI architecture for high-definition MPEG-like video encoder hardware implementation. By searching in parallel and exploiting the high correlation among multi-resolution reference pixels, the huge throughput and computational load caused by the large search window are alleviated considerably. Sixteen-way parallel processing element arrays with configurable multiplying technologies achieve fast search with regular data access and efficient data reuse, and the parallel arrays can be efficiently reused at three hierarchical levels for sequential motion vector refinement. The modified algorithm reaches a good balance between implementation complexity and search performance, and the logic circuit and on-chip SRAM consumption of the VLSI architecture are moderate.

  17. Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.

    PubMed

    Cowlagi, Raghvendra V; Tsiotras, Panagiotis

    2012-10-01

    We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy. PMID:22581136
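
    A hedged sketch of the ingredient the planner relies on: a Haar wavelet transform of an occupancy grid yields coarse cells everywhere plus detail coefficients that mark where finer cells are needed. The cell-decomposition bookkeeping of the paper is not shown.

```python
# Haar-based view of an occupancy grid: coarse cells + refinement flags.
import numpy as np
import pywt

occupancy = np.zeros((64, 64))
occupancy[20:30, 35:50] = 1.0            # an obstacle

coeffs = pywt.wavedec2(occupancy, "haar", level=3)
coarse = coeffs[0] / (2 ** 3)            # block means at the coarsest level
h, v, d = coeffs[1]                      # detail at the coarsest level

# Cells with non-zero detail energy straddle an obstacle boundary and
# would be refined to finer resolution near the vehicle.
needs_refinement = (h ** 2 + v ** 2 + d ** 2) > 1e-9
print("coarse grid:", coarse.shape, "cells flagged:", int(needs_refinement.sum()))
```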

  18. Bayesian multiresolution method for local tomography in dental x-ray imaging.

    PubMed

    Niinimäki, K; Siltanen, S; Kolehmainen, V

    2007-11-21

    Dental tomographic cone-beam x-ray imaging devices record truncated projections and reconstruct a region of interest (ROI) inside the head. Image reconstruction from the resulting local tomography data is an ill-posed inverse problem. A new Bayesian multiresolution method is proposed for local tomography reconstruction. The inverse problem is formulated in a well-posed statistical form where a prior model of the target tissues compensates for the incomplete x-ray projection data. Tissues are represented in a wavelet basis, and prior information is modeled in terms of a Besov norm penalty. The number of unknowns in the reconstruction problem is reduced by abandoning fine-scale wavelets outside the ROI. Compared to traditional voxel-based models, this multiresolution approach allows significant reduction of degrees of freedom without loss of accuracy inside the ROI, as shown by 2D examples using simulated and in vitro local tomography data. PMID:17975290

  19. Efficient Human Action and Gait Analysis Using Multiresolution Motion Energy Histogram

    NASA Astrophysics Data System (ADS)

    Yu, Chih-Chang; Cheng, Hsu-Yung; Cheng, Chien-Hung; Fan, Kuo-Chin

    2010-12-01

    The Average Motion Energy (AME) image is a good way to describe human motions. However, it faces a computational efficiency problem as the number of database templates increases. In this paper, we propose a histogram-based approach to improve computational efficiency, converting the human action/gait recognition problem into a histogram matching problem. To speed up the recognition process, we adopt a multiresolution structure on the Motion Energy Histogram (MEH). To utilize the multiresolution structure more efficiently, we propose an automated uneven partitioning method based on the quadtree decomposition of the MEH. As a result, the computation time depends only on the number of partitioned histogram bins, which is much smaller than in the AME method. Two applications, action recognition and gait classification, are conducted in the experiments to demonstrate the feasibility and validity of the proposed approach.
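
    A minimal sketch of the uneven-partitioning idea, assuming a simple variance criterion: near-uniform regions of the histogram stay as single bins, busy regions are split recursively. This is illustrative, not the paper's implementation.

```python
# Quadtree partition of a motion-energy histogram into uneven bins.
import numpy as np

def quadtree_bins(meh, x0, y0, size, tol, min_size=2):
    """Return (x, y, size) blocks covering meh via quadtree splitting."""
    block = meh[y0:y0 + size, x0:x0 + size]
    if size <= min_size or block.std() <= tol:
        return [(x0, y0, size)]
    half = size // 2
    bins = []
    for dy in (0, half):
        for dx in (0, half):
            bins += quadtree_bins(meh, x0 + dx, y0 + dy, half, tol, min_size)
    return bins

rng = np.random.default_rng(2)
meh = np.zeros((64, 64))
meh[10:40, 10:30] = rng.uniform(0.5, 1.0, (30, 20))   # active region
bins = quadtree_bins(meh, 0, 0, 64, tol=0.05)
print(len(bins), "histogram bins instead of", 64 * 64)
```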

  20. Multiresolution-fractal feature extraction and tumor detection: analytical modeling and implementation

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Parra, Carlos

    2003-11-01

    We propose formal analytical models for the identification of tumors in medical images based on the hypothesis that tumors exhibit fractal (self-similar) growth behavior. The images of these tumors may therefore be characterized as fractional Brownian motion (fBm) processes with a fractal dimension (D) that is distinctly different from that of the image of the surrounding tissue. In order to extract the desired features that delineate different tissues in an MR image, we study multiresolution signal decomposition and its relation to fBm. fBm has proven successful in modeling a variety of physical phenomena and non-stationary processes, such as medical images, that share essential properties such as self-similarity, scale invariance and fractal dimension (D). We have developed a theoretical framework that combines wavelet analysis with multiresolution fBm to compute D.
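
    A hedged sketch of the D-estimation idea: for 1D fBm the variance of wavelet detail coefficients grows as 2^{j(2H+1)} with scale j, so a log-log regression gives the Hurst exponent H and hence D = 2 - H for a profile (for 2D image surfaces, D = 3 - H). The signal and wavelet choice below are assumptions.

```python
# Wavelet log-variance estimate of the fractal dimension of a toy fBm.
import numpy as np
import pywt

rng = np.random.default_rng(3)
signal = np.cumsum(rng.normal(size=2 ** 14))   # Brownian motion, H = 0.5

coeffs = pywt.wavedec(signal, "db2", level=8)
details = coeffs[1:][::-1]                     # reorder: fine to coarse
logvar = [np.log2(np.var(d)) for d in details]
levels = np.arange(1, len(logvar) + 1)
slope = np.polyfit(levels, logvar, 1)[0]       # slope = 2H + 1
H = (slope - 1) / 2
print(f"estimated H = {H:.2f}, fractal dimension D = {2 - H:.2f}")
```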

  1. Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.

    2016-06-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications, and the object-based image analysis (OBIA) approach has proved to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. The first step of OBIA, referred to as segmentation, is thus to delineate the objects of interest, and determining an optimal segmentation is crucial for good performance of the second stage of OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by the eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality based on the discrepancy between reference polygons and corresponding image segments was carried out to identify the optimal setting of the multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
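
    A short sketch of how the three indices combine, assuming the common definition ED2 = sqrt(PSE² + NSR²), with PSE the normalized under-segmented area and NSR the normalized mismatch in object counts; the counts and areas below are hypothetical.

```python
# Segmentation-quality score for one (Scale, Shape, Compactness) trial.
import math

def ed2(pse, nsr):
    """Euclidean Distance 2 combining area and count discrepancies."""
    return math.hypot(pse, nsr)

n_reference, n_segments = 120, 138            # hypothetical object counts
under_segmented_area, reference_area = 4500.0, 96000.0
pse = under_segmented_area / reference_area
nsr = abs(n_reference - n_segments) / n_reference
print(f"PSE={pse:.3f}  NSR={nsr:.3f}  ED2={ed2(pse, nsr):.3f}")
```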

  2. Multi-resolution simulation of biomolecular systems: a review of methodological issues.

    PubMed

    Meier, Katharina; Choutko, Alexandra; Dolenc, Jozica; Eichenberger, Andreas P; Riniker, Sereina; van Gunsteren, Wilfred F

    2013-03-01

    Theoretical-computational modeling with an eye to explaining experimental observations in regard to a particular chemical phenomenon or process requires choices concerning the essential degrees of freedom, the types of interactions, and the generation of a Boltzmann ensemble or trajectories of configurations. Depending on the degrees of freedom that are essential to the process of interest, for example, electronic or nuclear versus atomic, molecular or supra-molecular, quantum- or classical-mechanical equations of motion are to be used. In multi-resolution simulation, various levels of resolution, for example, electronic, atomic, supra-atomic or supra-molecular, are combined in one model. This allows an enhancement of computational efficiency while maintaining sufficient detail with respect to particular degrees of freedom. The basic challenges and choices with respect to multi-resolution modeling are reviewed and, as an illustration, the differential catalytic properties of two enzymes with similar folds but different substrates are explored using multi-resolution simulation at the electronic, atomic and supra-molecular levels of resolution. PMID:23417997

  3. Development of a multi-resolution measurement system based on light sectioning method

    NASA Astrophysics Data System (ADS)

    Zhang, Weiguang; Zhao, Hong; Zhou, Xiang; Zhang, Lu

    2008-09-01

    With the rapid development of shape measurement techniques, the multi-resolution approach has become a valid way to enhance accuracy. However, some key techniques, such as the simultaneous calibration and data fusion of several sensors, still require further study. A multi-resolution system based on the light sectioning method has been developed and successfully applied in many areas, for example to aviation engine blades. It can measure the shape of a blade at high speed and with high accuracy. The system is composed of four laser line light sources, four or five cameras and three high-precision mechanical movement devices. Two cameras have relatively low magnification ratios and focus on the basin or back of the blade, where the radius of curvature is large. The other cameras have high magnification ratios and are fixed on the leading or trailing edge of the blade, where the radius of curvature is small. The system thus has a 360° measurement range and can carry out multi-resolution 3D shape measurement with greatly differing camera magnifications. One measurement is completed when the blade, mounted on a mechanical movement device, moves up or down once. The model building and principle of the measurement system are also presented, together with an algorithm for the calibration and data fusion of several cameras that calculates the 3D coordinates of one blade section. The results show that the accuracy of the system is about 0.05 mm for sectional circumradii of approximately 50 mm, and prove that the system is feasible and efficient.

  4. Combining nonlinear multiresolution system and vector quantization for still image compression

    SciTech Connect

    Wong, Y.

    1993-12-17

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are mathematically rigorous, they cannot exploit nonlinear features in the signals for compression. Linear filters are known to blur edges; thus, the low-resolution images are typically blurred and carry little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system based on the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
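
    A hedged sketch of the pyramid construction described above: the lowpass step is a median filter, so coarse levels keep sharp edges and the detail images stay small and edge-localized. The VQ coding of the details is not shown, and the filter sizes are assumptions.

```python
# Laplacian-style pyramid with an edge-preserving (median) reduce step.
import numpy as np
from scipy.ndimage import median_filter, zoom

def median_pyramid(img, levels=3):
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        low = median_filter(current, size=3)[::2, ::2]     # edge-preserving reduce
        up = zoom(low, 2, order=1)[: current.shape[0], : current.shape[1]]
        pyramid.append(current - up)                       # detail (to be VQ-coded)
        current = low
    pyramid.append(current)                                # coarse residual
    return pyramid

img = np.zeros((64, 64))
img[:, 32:] = 255.0                                        # a step edge
pyr = median_pyramid(img)
print([f"{p.std():.1f}" for p in pyr])                     # details concentrate at the edge
```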

  5. Combined adjustment of multi-resolution satellite imagery for improved geo-positioning accuracy

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Wu, Bo; Zhu, Qing

    2016-04-01

    Due to the widespread availability of satellite imagery nowadays, it is common for regions to be covered by satellite imagery from multiple sources with multiple resolutions. This paper presents a combined adjustment approach to integrate multi-source, multi-resolution satellite imagery for improved geo-positioning accuracy without the use of ground control points (GCPs). Instead of using all the rational polynomial coefficients (RPCs) of the images for processing, only those dominating the geo-positioning accuracy are used in the combined adjustment. They, together with tie points identified in the images, are used as observations in the adjustment model. Proper weights are determined for each observation, and ridge parameters are determined for better convergence of the adjustment solution. The outputs from the combined adjustment are the improved dominating RPCs of the images, from which improved geo-positioning accuracy can be obtained. Experiments using ZY-3, SPOT-7 and Pleiades-1 imagery in Hong Kong, and Cartosat-1 and Worldview-1 imagery in Catalonia, Spain, demonstrate that the proposed method is able to effectively improve the geo-positioning accuracy of satellite images. The combined adjustment approach thus offers an alternative means of improving geo-positioning accuracy, enabling the integration of multi-source, multi-resolution satellite imagery to generate more precise and consistent 3D spatial information, and permitting the comparative and synergistic use of multi-resolution satellite images from multiple sources.

  6. Multi-resolution description of three-dimensional anthropometric data for design simplification.

    PubMed

    Niu, Jianwei; Li, Zhizhong; Salvendy, Gavriel

    2009-07-01

    Three-dimensional (3D) anthropometry can provide rich information for ergonomic product design with better safety and health considerations. To reduce the computational load and model complexity in product design when using 3D anthropometric data, wavelet analysis is adopted in this paper to establish a multi-resolution mathematical description of 3D anthropometric data. A proper resolution can be selected for design reference according to the application purpose. To examine the approximation errors under different resolutions, 510 upper-head, whole-head, and face samples of Chinese young men have been analyzed. Descriptive statistics of the approximation errors under different resolutions are presented; these data can be used as a resolution selection guide. The application of the multi-resolution method in product design is illustrated by two examples. RELEVANCE TO INDUSTRY: Multi-resolution description of 3D anthropometric data facilitates the analysis of, and design with, 3D anthropometric data to improve fitting comfort. The error data under different resolutions provide an important reference for resolution selection. PMID:18639863

  7. Deconstructing a polygenetic landscape using LiDAR and multi-resolution analysis

    NASA Astrophysics Data System (ADS)

    Barrineau, Patrick; Dobreva, Iliyana; Bishop, Michael P.; Houser, Chris

    2016-04-01

    It is difficult to deconstruct a complex polygenetic landscape into distinct process-form regimes using digital elevation models (DEMs) and fundamental land-surface parameters. This study describes a multi-resolution analysis approach for extracting geomorphological information from a LiDAR-derived DEM over a stabilized aeolian landscape in south Texas that exhibits distinct process-form regimes associated with different stages in landscape evolution. Multi-resolution analysis was used to generate average altitudes using a Gaussian filter with a maximum radius of 1 km at 20 m intervals, resulting in 50 generated DEMs. This multi-resolution dataset was analyzed using Principal Components Analysis (PCA) to identify the dominant variance structure in the dataset. The first 4 principal components (PC) account for 99.9% of the variation, and classification of the variance structure reveals distinct multi-scale topographic variation associated with different process-form regimes and evolutionary stages. Our results suggest that this approach can be used to generate quantitatively rigorous morphometric maps to guide field-based sedimentological and geophysical investigations, which tend to use purposive sampling techniques resulting in bias and error.
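
    A hedged sketch of the multi-resolution analysis described above: smooth the DEM with Gaussians of increasing radius, stack the results, and run PCA on the per-pixel scale profiles to expose the dominant multi-scale variance structure. The toy terrain and scale ladder are assumptions.

```python
# Multi-scale Gaussian stack of a DEM followed by PCA via SVD.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
dem = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # toy terrain

sigmas = np.arange(1, 51, 2)                       # assumed scale ladder
stack = np.stack([gaussian_filter(dem, s) for s in sigmas])      # (scales, y, x)
X = stack.reshape(len(sigmas), -1).T               # one scale-profile per pixel
X = X - X.mean(axis=0)                             # centre before PCA

_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
print("variance explained by first 4 PCs:", np.round(explained[:4], 4))
```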

  8. Characterization and in-vivo evaluation of a multi-resolution foveated laparoscope for minimally invasive surgery

    PubMed Central

    Qin, Yi; Hua, Hong; Nguyen, Mike

    2014-01-01

    The state-of-the-art laparoscope lacks the ability to capture high-magnification and wide-angle images simultaneously, which introduces challenges when both close-up views for detail and wide-angle overviews for orientation are required in clinical practice. A multi-resolution foveated laparoscope (MRFL), which can provide the surgeon with both high-magnification close-up and wide-angle images, was proposed to address the limitations of state-of-the-art surgical laparoscopes. In this paper, we present the overall system design from both clinical and optical system perspectives, along with a set of experiments characterizing the optical performance of our prototype system, and describe our preliminary in-vivo evaluation of the prototype with a pig model. The experimental results demonstrate that at the optimum working distance of 120 mm, the high-magnification probe has a resolution of 6.35 lp/mm and images a surgical area of 53 × 40 mm²; the wide-angle probe covers a surgical area of 160 × 120 mm² with a resolution of 2.83 lp/mm. The in-vivo evaluation demonstrates that the MRFL has great potential in clinical applications for improving the safety and efficiency of laparoscopic surgery. PMID:25136485

  9. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is employed for training and prediction, with the particle swarm optimization technique adopted to optimize its initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. It is therefore advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.
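
    A hedged sketch of the hybrid scheme: decompose the variation series into additive wavelet components, train a small feedforward network on lagged values of each component, and sum the one-step component forecasts. The series is synthetic, PSO initialization is replaced by scikit-learn's default, and `pywt.mra` (PyWavelets >= 1.2) is assumed for the additive decomposition.

```python
# Wavelet-MRA + feedforward-network forecast of next-day variation (toy).
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
rates = np.cumsum(rng.normal(0.0, 0.02, 513))     # toy daily rate series
dy = np.diff(rates)                               # next-day variations (512)

def lagged(x, p=5):
    """Design matrix of p lags and aligned one-step-ahead targets."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

forecast = 0.0
for comp in pywt.mra(dy, "db4", levels=2, transform="dwt"):  # components sum to dy
    X, y = lagged(comp)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                       random_state=0).fit(X[:-1], y[:-1])
    forecast += net.predict(X[-1:])[0]            # one-step-ahead, per component
print(f"predicted next-day variation: {forecast:+.5f}")
```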

  10. A Web-Based Interactive Tool for Multi-Resolution 3D Models of a Maya Archaeological Site

    NASA Astrophysics Data System (ADS)

    Agugiaro, G.; Remondino, F.; Girardi, G.; von Schwerin, J.; Richards-Rissetto, H.; De Amicis, R.

    2011-09-01

    Continuous technological advances in surveying, computing and digital-content delivery are strongly contributing to a change in the way Cultural Heritage is "perceived": new tools and methodologies for documentation, reconstruction and research are being created to assist not only scholars, but also to reach more potential users (e.g. students and tourists) willing to access more detailed information about art history and archaeology. 3D computer-simulated models, sometimes set in virtual landscapes, offer, for example, the chance to explore possible hypothetical reconstructions, while on-line GIS resources can support interactive analyses of relationships and change over space and time. While for some research purposes a traditional 2D approach may suffice, this is not the case for more complex analyses concerning spatial and temporal features of architecture, such as the relationship of architecture and landscape, visibility studies, etc. The project therefore aims at creating a tool, called "QueryArch3D", which enables the web-based visualisation and querying of an interactive, multi-resolution 3D model in the framework of Cultural Heritage. More specifically, a complete Maya archaeological site, located in Copan (Honduras), has been chosen as a case study to test and demonstrate the platform's capabilities. Much of the site has been surveyed and modelled at different levels of detail (LoD) and the geometric model has been semantically segmented and integrated with attribute data gathered from several external data sources. The paper describes the characteristics of the research work, along with its implementation issues and the initial results of the developed prototype.

  11. Cost tradeoffs in consequence management at nuclear power plants: A risk based approach to setting optimal long-term interdiction limits for regulatory analyses

    SciTech Connect

    Mubayi, V.

    1995-05-01

    The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the "long-term interdiction limit," is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related, and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit: the health-effect costs increase as the limit is relaxed while the protective-action costs decrease. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants which were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
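
    A toy version of the trade-off just described: as the interdiction limit L is relaxed, protective-action cost falls and monetized health-effect cost rises, and the optimum sits at the minimum of the total cost curve. All functional forms and constants below are assumptions for illustration only.

```python
# Locates the minimum of an assumed total-cost curve over the limit L.
import numpy as np

L = np.linspace(0.1, 10.0, 200)        # interdiction limit (assumed units)
protect = 500.0 / L                    # $M, decreasing as the limit is relaxed
health = 12.0 * L                      # $M, monetized latent-cancer cost
total = protect + health

i = np.argmin(total)
print(f"optimal limit ~ {L[i]:.2f}, total cost ~ {total[i]:.0f} $M")
# Analytically, d/dL (500/L + 12 L) = 0  ->  L* = sqrt(500/12) ~ 6.45.
```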

  12. W-transform method for feature-oriented multiresolution image retrieval

    SciTech Connect

    Kwong, M.K.; Lin, B.

    1995-07-01

    Image database management is important in the development of multimedia technology, since an enormous number of digital images is likely to be generated within the next few decades as computers, television, VCRs, cable, telephony and various imaging devices become integrated. Effective image indexing and retrieval systems are urgently needed so that images can be easily organized, searched, transmitted, and presented. Here, the authors present a local-feature-oriented image indexing and retrieval method based on Kwong and Tang's W-transform. Multiresolution histogram comparison is an effective method for content-based image indexing and retrieval. However, most recent approaches perform multiresolution analysis for whole images but do not exploit the local features present in the images. Since the W-transform is featured by its ability to handle images of arbitrary size, with no periodicity assumptions, it provides a natural tool for analyzing local image features and building indexing systems based on such features. In this approach, the histograms of the local features of images are used in the indexing system. The system not only can retrieve images that are similar or identical to the query images but also can retrieve images that contain features specified in the query images, even if the retrieved images as a whole might be very different from the query images. The local-feature-oriented method also provides a speed advantage over the global multiresolution histogram comparison method. The feature-oriented approach is expected to be applicable in managing large-scale image systems such as video databases and medical image databases.

  13. A multi-resolution method for climate system modeling: application of spherical centroidal Voronoi tessellations

    SciTech Connect

    Ringler, Todd; Ju, Lili; Gunzburger, Max

    2008-01-01

    During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multiresolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical centroidal Voronoi tessellations (SCVTs) offer one potential path toward the development of robust, multiresolution climate system model components. SCVTs allow the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean–ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing, and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multiresolution method and the challenges ahead.
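
    A hedged planar analogue of centroidal Voronoi generation: Lloyd's algorithm moves each generator to the density-weighted centroid of its Voronoi region, here approximated on a cloud of sample points. The paper works on the sphere; the spherical projection step and its density function are omitted.

```python
# Lloyd iteration toward a (planar) centroidal Voronoi tessellation.
import numpy as np

rng = np.random.default_rng(5)
generators = rng.uniform(0, 1, (32, 2))

pts = rng.uniform(0, 1, (20000, 2))                   # dense sample points
rho = np.exp(-8 * np.sum((pts - 0.5) ** 2, axis=1))   # user-defined density

for _ in range(50):                                   # Lloyd iterations
    d2 = ((pts[:, None, :] - generators[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)                         # nearest generator
    for k in range(len(generators)):
        w = rho[owner == k]
        if w.size:                                    # density-weighted centroid
            generators[k] = np.average(pts[owner == k], axis=0, weights=w)

print("first generators after relaxation:\n", np.round(generators[:3], 3))
```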

  14. Proof-of-concept demonstration of a miniaturized three-channel multiresolution imaging system

    NASA Astrophysics Data System (ADS)

    Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Vervaeke, Michael; Van Erps, Jürgen; Thienpont, Hugo

    2014-05-01

    Multichannel imaging systems have several potential applications, such as multimedia, surveillance, medical imaging and machine vision, and have therefore been a hot research topic in recent years. Such imaging systems, inspired by natural compound eyes, have many channels, each covering only a portion of the total field-of-view (FOV) of the system. As a result, these systems provide a wide FOV while having a small volume and a low weight. Different approaches have been employed to realize a multichannel imaging system. We demonstrated that the different channels of the imaging system can be designed so that each has different imaging properties (angular resolution, FOV, focal length). Using optical ray-tracing software (CODE V), we designed a miniaturized multiresolution imaging system that contains three channels, each consisting of four aspherical lens surfaces fabricated from PMMA material through ultra-precision diamond tooling. The first channel possesses the finest angular resolution (0.0096°) and the narrowest FOV (7°), whereas the third channel has the widest FOV (80°) and the coarsest angular resolution (0.078°). The second channel has intermediate properties. Such a multiresolution capability allows different image processing algorithms to be implemented on different segments of an image sensor. This paper presents the experimental proof-of-concept demonstration of the imaging system using a commercial CMOS sensor and gives an in-depth analysis of the obtained results. Experimental images captured with the three channels are compared with the corresponding simulated images. The experimental MTFs of the channels have also been calculated from the captured images of a slanted-edge target test. This multichannel multiresolution approach opens the opportunity for low-cost compact imaging systems that can be equipped with smart imaging capabilities.

  15. Proof-of-concept demonstration of a miniaturized multi-resolution refocusing imaging system using an electrically tunable lens

    NASA Astrophysics Data System (ADS)

    Smeesters, L.; Belay, G. Y.; Ottevaere, H.; Meuret, Y.; Vervaeke, Michael; Van Erps, J.; Thienpont, H.

    2014-09-01

    Refocusing multi-channel imaging systems are nowadays commercially available only in bulky and expensive designs. Compact wafer-level multi-channel imaging systems have until now only been published without refocusing mechanisms, since classical refocusing concepts could not be integrated in a miniaturized configuration. This lack of refocusing capability limits the depth-of-field of these imaging designs and therefore their application in practical systems. We designed and characterized a wafer-level two-channel multi-resolution refocusing imaging system, based on an electrically tunable liquid lens and a design that can be realized with wafer-level mass-manufacturing techniques. One wide field-of-view channel (2×40°) gives a general image of the surroundings with a lower angular resolution (0.078°), whereas the high angular resolution channel (0.0098°) provides a detailed image of a small region of interest with a much narrower field-of-view (2×7.57°). The latter high-resolution imaging channel contains the tunable lens and therefore the refocusing capability. The performance of this high-resolution imaging channel was experimentally characterized in a proof-of-concept demonstrator, and the experimental and simulated depth-of-field and resolving power correspond well. Moreover, we are able to obtain a depth-of-field from 0.25 m to infinity, a significant improvement over current state-of-the-art static multi-channel imaging systems, which show a depth-of-field from 9 m to infinity. Both the high-resolution and wide field-of-view imaging channels show diffraction-limited image quality. The designed wafer-level two-channel imaging system can form the basis of an advanced three-dimensional stacked image sensor, where different image processing algorithms can be simultaneously applied to the different images on the image sensor.

  16. Accelerated single-beam wavefront reconstruction techniques based on relaxation and multiresolution strategies.

    PubMed

    Falaggis, Konstantinos; Kozacki, Tomasz; Kujawinska, Malgorzata

    2013-05-15

    A previous Letter by Pedrini et al. [Opt. Lett. 30, 833 (2005)] proposed an iterative single-beam wavefront reconstruction algorithm that uses a sequence of interferograms recorded at different planes. In this Letter, the use of relaxation and multiresolution strategies is investigated in terms of accuracy and computational effort. It is shown that the convergence rate of the conventional iterative algorithm can be significantly improved with the use of relaxation techniques combined with a hierarchy of downsampled intensities that are used within a preconditioner. These techniques prove to be more robust, to achieve a higher accuracy, and to overcome the stagnation problem met in the iterative wavefront reconstruction. PMID:23938902

  17. Multiresolution Wavelet Analysis of Heartbeat Intervals Discriminates Healthy Patients from Those with Cardiac Pathology

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.

    1998-02-01

    We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeat intervals, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as belonging either to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of the presence of heart failure from the R-R intervals alone. Comparison is made with previous approaches, which have provided only statistically significant measures.
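
    A hedged sketch of the discriminant: the standard deviation ("width") of the wavelet coefficients of the R-R series at the 16-beat scale. Synthetic series stand in for the clinical data, and the Daubechies-10 wavelet choice is an assumption made for illustration.

```python
# Wavelet-coefficient width of R-R series at the 16-beat scale.
import numpy as np
import pywt

def rr_wavelet_width(rr, level):
    """Std of detail coefficients at the 2**level-beat scale."""
    coeffs = pywt.wavedec(rr, "db10", level=level)
    return np.std(coeffs[1])          # detail at the coarsest requested scale

rng = np.random.default_rng(6)
healthy = 0.8 + 0.05 * rng.standard_normal(4096).cumsum() / 50   # variable
failure = 0.8 + 0.005 * rng.standard_normal(4096)                # reduced variability
for name, rr in (("healthy", healthy), ("heart failure", failure)):
    print(name, f"width at 16-beat scale: {rr_wavelet_width(rr, 4):.4f}")
```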

  18. A Conceptual Framework for SAHRA Integrated Multi-resolution Modeling in the Rio Grande Basin

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Gupta, H.; Springer, E.; Wagener, T.; Brookshire, D.; Duffy, C.

    2004-12-01

    The sustainable management of water resources in a river basin requires an integrated analysis of the social, economic, environmental and institutional dimensions of the problem. Numerical models are commonly used for integration of these dimensions and for communication of the analysis results to stakeholders and policy makers. The National Science Foundation Science and Technology Center for Sustainability of semi-Arid Hydrology and Riparian Areas (SAHRA) has been developing integrated multi-resolution models to assess impacts of climate variability and land use change on water resources in the Rio Grande Basin. These models not only couple natural systems such as surface and ground waters, but will also include engineering, economic and social components that may be involved in water resources decision-making processes. This presentation will describe the conceptual framework being developed by SAHRA to guide and focus the multiple modeling efforts and to assist the modeling team in planning, data collection and interpretation, communication, evaluation, etc. One of the major components of this conceptual framework is a Conceptual Site Model (CSM), which describes the basin and its environment based on existing knowledge and identifies what additional information must be collected to develop technically sound models at various resolutions. The initial CSM is based on analyses of basin profile information that has been collected, including a physical profile (e.g., topographic and vegetative features), a man-made facility profile (e.g., dams, diversions, and pumping stations), and a land use and ecological profile (e.g., demographics, natural habitats, and endangered species). Based on the initial CSM, a Conceptual Physical Model (CPM) is developed to guide and evaluate the selection of a model code (or numerical model) for each resolution to conduct simulations and predictions. A CPM identifies, conceptually, all the physical processes and engineering and socio

  19. A multiresolution wavelet analysis and Gaussian Markov random field algorithm for breast cancer screening of digital mammography

    SciTech Connect

    Lee, C.G.; Chen, C.H.

    1996-12-31

    In this paper a novel multiresolution wavelet analysis (MWA) and non-stationary Gaussian Markov random field (GMRF) technique is introduced for the identification of microcalcifications with high accuracy. The hierarchical multiresolution wavelet information, in conjunction with the contextual information of the images extracted from the GMRF, provides a highly efficient technique for microcalcification detection. A Bayesian learning paradigm, realized via the expectation maximization (EM) algorithm, was also introduced for edge detection or segmentation of larger lesions recorded on the mammograms. The effectiveness of the approach has been extensively tested with a number of mammographic images provided by a local hospital.

  20. Multiresolution modeling with a JMASS-JWARS HLA Federation

    NASA Astrophysics Data System (ADS)

    Prince, John D.; Painter, Ron D.; Pendell, Brian; Richert, Walt; Wolcott, Christopher

    2002-07-01

    CACI, Inc.-Federal has built, tested, and demonstrated the use of a JMASS-JWARS HLA Federation that supports multi-resolution modeling of a weapon system and its subsystems in a JMASS engineering and engagement model environment, while providing a realistic JWARS theater campaign-level synthetic battle space and operational context to assess the weapon system's value added and deployment/employment supportability in a multi-day, combined force-on-force scenario. Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission and theater/campaign measures of performance, measures of effectiveness and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between each model is both time consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis. However, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting extremely large simulation. One viable alternative is to integrate the current hierarchical suite of simulation models using the DoD's High Level Architecture in order to support multi-resolution modeling. An HLA integration eliminates the extremely large model problem, provides a well-defined and manageable mixed resolution simulation and minimizes VV&A issues.

  1. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analysis of tera-scale data sets. We discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We also discuss some of the shortcomings of our implementation and how to address them.

  2. Concordance of Results from Randomized and Observational Analyses within the Same Study: A Re-Analysis of the Women’s Health Initiative Limited-Access Dataset

    PubMed Central

    Bolland, Mark J.; Grey, Andrew; Gamble, Greg D.; Reid, Ian R.

    2015-01-01

    Background Observational studies (OS) and randomized controlled trials (RCTs) often report discordant results. In the Women’s Health Initiative Calcium and Vitamin D (WHI CaD) RCT, women were randomly assigned to CaD or placebo, but were permitted to use personal calcium and vitamin D supplements, creating a unique opportunity to compare results from randomized and observational analyses within the same study. Methods WHI CaD was a 7-year RCT of 1 g calcium/400 IU vitamin D daily in 36,282 post-menopausal women. We assessed the effects of CaD on cardiovascular events, death, cancer and fracture in a randomized design, comparing CaD with placebo in the 43% of women not using personal calcium or vitamin D supplements, and in an observational design, comparing women in the placebo group (44%) who used personal calcium and vitamin D supplements with non-users. Incidence was assessed using Cox proportional hazards models, and results from the two study designs were deemed concordant if the absolute difference in hazard ratios was ≤0.15. We also compared results from WHI CaD to those from the WHI Observational Study (WHI OS), which used similar methodology for analyses and recruited from the same population. Results In WHI CaD, for myocardial infarction and stroke, the results of unadjusted and 6/8 covariate-controlled observational analyses (age-adjusted, multivariate-adjusted, propensity-adjusted, propensity-matched) were not concordant with the randomized design results. For death, hip and total fracture, colorectal and total cancer, unadjusted and covariate-controlled observational results were concordant with randomized results. For breast cancer, unadjusted and age-adjusted observational results were concordant with randomized results, but only 1/3 of the other covariate-controlled observational results were concordant with randomized results. Multivariate-adjusted results from WHI OS were concordant with randomized WHI CaD results for only 4/8 endpoints. Conclusions Results of

  3. Using sparse regularization for multiresolution tomography of the ionosphere

    NASA Astrophysics Data System (ADS)

    Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.

    2015-03-01

    Computerized ionospheric tomography (CIT) is a technique that reconstructs the state of the ionosphere, in terms of electron content, from a set of slant total electron content (STEC) measurements; it is usually formulated as an inverse problem. In this experiment, the measurements are considered to come from the phase of the GPS signal and are therefore affected by bias, so the STEC cannot be considered in absolute terms but only in relative terms. Measurements are collected from receivers that are not evenly distributed in space, and, together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes are continuously changing in time and space, which can limit the accuracy with which CIT resolves structures and the processes that describe the ionosphere. Some inversion techniques are based on l2 minimization algorithms (e.g., Tikhonov regularization), and a standard approach using spherical harmonics is implemented here as a reference against which to compare the new method. A new approach is proposed for CIT that permits sparsity in the reconstruction coefficients by using wavelet basis functions, chosen for their compact-representation properties, together with the l1 minimization technique. The l1 minimization is selected because it can optimize the result for an uneven distribution of observations by exploiting the localization property of wavelets. Also illustrated is how the inter-frequency biases on the STEC are calibrated within the inversion, which is used as a way of evaluating the accuracy of the method. The technique is demonstrated using a simulation, showing the advantage of l1 minimization over l2 minimization for estimating the coefficients. This is particularly true for an uneven observation geometry and especially for multi-resolution CIT.
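
    A hedged sketch of the l1 route: solve min_c ||y - A W(c)||² + lam ||c||_1 with ISTA, where W maps wavelet coefficients to the image. The toy matrix A stands in for the STEC ray-geometry operator, and all sizes, the Haar basis, and the step/threshold values are assumptions.

```python
# ISTA with wavelet-domain soft thresholding on a toy linear inverse problem.
import numpy as np
import pywt

rng = np.random.default_rng(7)
n = 64
x_true = np.zeros(n)
x_true[20:28] = 1.0                              # one localized structure
A = rng.normal(size=(40, n)) / np.sqrt(n)        # underdetermined geometry
y = A @ x_true

wav, lvl = "haar", 3
_, slices = pywt.coeffs_to_array(pywt.wavedec(x_true, wav, level=lvl))

def W(c):                                        # coefficients -> image
    return pywt.waverec(pywt.array_to_coeffs(c, slices, output_format="wavedec"), wav)

c, step, lam = np.zeros(n), 0.2, 0.05
for _ in range(300):                             # ISTA iterations
    grad = A.T @ (A @ W(c) - y)                  # data-fit gradient (image space)
    arr, _ = pywt.coeffs_to_array(pywt.wavedec(W(c) - step * grad, wav, level=lvl))
    c = np.sign(arr) * np.maximum(np.abs(arr) - step * lam, 0.0)  # soft threshold
print("reconstruction error:", round(float(np.linalg.norm(W(c) - x_true)), 3))
```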

   4. Classification of mammographic lesions based on Completed Local Binary Pattern and using multiresolution representation

    NASA Astrophysics Data System (ADS)

    Duarte, Y. A. S.; Nascimento, M. Z.; Oliveira, D. L. L.

    2014-03-01

    This paper presents a comparison of two methods for feature extraction from mammograms based on the completed local binary pattern (CLBP) and the wavelet transform. In the first part, CLBP was applied to the digitized mammograms. In the second part, we applied CLBP to the sub-bands obtained from the wavelet multi-resolution representation of the mammograms. In this study, we therefore evaluated CLBP both on the image in the spatial domain and on the sub-bands obtained with the wavelet transform. The statistical technique of analysis of variance (ANOVA) was then used to reduce the number of features, and finally a support vector machine (SVM) classifier was applied to the samples. The proposed methods were tested on 720 mammograms, of which 240 were diagnosed as normal, 240 as benign lesions and 240 as malignant lesions. The images were obtained randomly from the Digital Database for Screening Mammography (DDSM). The effectiveness of the system was evaluated using the area under the ROC curve (AUC). The experiments demonstrate that textural feature extraction from the multi-resolution representation was more relevant, with a value of AUC=1.0, whereas CLBP in the spatial domain resulted in AUC=0.89. The proposed method demonstrated promising results in the classification of different classes of mammographic lesions.

  5. A multi-resolution image analysis system for computer-assisted grading of neuroblastoma differentiation

    NASA Astrophysics Data System (ADS)

    Kong, Jun; Sertel, Olcay; Shimada, Hiroyuki; Boyer, Kim L.; Saltz, Joel H.; Gurcan, Metin N.

    2008-03-01

    Neuroblastic Tumor (NT) is one of the most commonly occurring tumors in children. Of all types of NTs, neuroblastoma is the most malignant tumor, and it can be further categorized into undifferentiated (UD), poorly-differentiated (PD) and differentiating (D) types according to the grade of pathological differentiation. Currently, pathologists determine the grade of differentiation by visual examination of tissue samples under the microscope. However, this process is subjective and, hence, may lead to intra- and inter-reader variability. In this paper, we propose a multi-resolution image analysis system that helps pathologists classify tissue samples according to their grades of differentiation. The inputs to this system are color images of haematoxylin and eosin (H&E) stained tissue samples. The complete image analysis system has five stages: segmentation, feature construction, feature extraction, classification and confidence evaluation. Due to the large number of input images, both parallel processing and multi-resolution analysis were employed to reduce the execution time of the algorithm. Our training dataset consists of 387 image tiles of size 512×512 pixels from three whole-slide images. We tested the developed system with an independent set of 24 whole-slide images, eight from each grade. The developed system has an accuracy of 83.3% in correctly identifying the grade of differentiation, and it takes about two hours, on average, to process each whole-slide image.

  6. A multi-resolution approach to retrospectively-gated cardiac micro-CT reconstruction

    NASA Astrophysics Data System (ADS)

    Clark, D. P.; Johnson, G. A.; Badea, C. T.

    2014-03-01

    In preclinical research, micro-CT is commonly used to provide anatomical information; however, there is significant interest in using this technology to obtain functional information in cardiac studies. The fastest acquisition in 4D cardiac micro-CT imaging is achieved via retrospective gating, which results in irregular angular projections after binning the projections into phases of the cardiac cycle. Under these conditions, analytical reconstruction algorithms, such as filtered back projection, suffer from streaking artifacts. Here, we propose a novel multi-resolution, iterative reconstruction algorithm inspired by robust principal component analysis which prevents the introduction of streaking artifacts while attempting to recover the highest temporal resolution supported by the projection data. The algorithm achieves these results through a unique combination of the split Bregman method and joint bilateral filtration. We illustrate the algorithm's performance using a contrast-enhanced 2D slice through the MOBY mouse phantom and realistic projection acquisition and reconstruction parameters. Our results indicate that the algorithm is robust at undersampling levels of only 34 projections per cardiac phase and, therefore, has high potential for reducing both acquisition time and radiation dose. Another potential advantage of the multi-resolution scheme is the natural division of the reconstruction problem into a large number of independent sub-problems which can be solved in parallel. In future work, we will investigate the performance of this algorithm with retrospectively-gated cardiac micro-CT data.

  7. Multi-resolution model-based traffic sign detection and tracking

    NASA Astrophysics Data System (ADS)

    Marinas, Javier; Salgado, Luis; Camplani, Massimo

    2012-06-01

    In this paper we propose an innovative approach to the problem of traffic sign detection using a computer vision algorithm that takes real-time operation constraints into account, establishing strategies to simplify the algorithm as much as possible and to speed up the process. Firstly, a set of candidates is generated by a color segmentation stage, followed by a region analysis strategy in which the spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed using a Kalman filter for each potential candidate. Given the time constraints, efficiency is achieved in two ways. On the one hand, a multi-resolution strategy is adopted for segmentation, where global operations are applied only to low-resolution images, increasing the resolution to the maximum only when a potential road sign is being tracked. On the other hand, we take advantage of the expected spacing between traffic signs: tracking the objects of interest allows the generation of inhibition areas, i.e., regions where no new traffic signs are expected to appear because of the existence of a sign in the neighborhood. The proposed solution has been tested with real sequences in both urban areas and highways, and proved to achieve high computational efficiency, especially as a result of the multi-resolution approach.
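
    A hedged sketch of the tracking stage: one constant-velocity Kalman filter per candidate, predicting the sign's image position so that an inhibition area can be placed around each tracked sign. The noise levels, frame interval, and measurements below are assumptions.

```python
# Constant-velocity Kalman filter over a sign's detected centroids.
import numpy as np

dt = 1.0                                         # frame interval (assumed)
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])     # position is observed
Q, R = np.eye(4) * 1e-2, np.eye(2) * 4.0         # assumed noise covariances

x, P = np.array([100.0, 50.0, 0.0, 0.0]), np.eye(4) * 10.0
for z in ([102, 51], [105, 52], [108, 54]):      # detected centroids
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (np.array(z, float) - H @ x)     # update
    P = (np.eye(4) - K @ H) @ P
print("estimated position/velocity:", np.round(x, 2))
```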

  8. Wavelet-based multiresolution with n-th-root-of-2 Subdivision

    SciTech Connect

    Linsen, L; Pascucci, V; Duchaineau, M A; Hamann, B; Joy, K I

    2004-12-16

    Multiresolution methods are a common technique used for dealing with large-scale data and representing it at multiple levels of detail. The authors present a multiresolution hierarchy construction based on ⁿ√2 subdivision, which has all the advantages of a regular data organization scheme while reducing the drawback of coarse granularity. The ⁿ√2-subdivision scheme only doubles the number of vertices in each subdivision step regardless of dimension n. They describe the construction of 2D, 3D, and 4D hierarchies representing surfaces, volume data, and time-varying volume data, respectively. The 4D approach supports spatial and temporal scalability. For high-quality data approximation on each level of detail, they use downsampling filters based on n-variate B-spline wavelets. They present a B-spline wavelet lifting scheme for ⁿ√2-subdivision steps to obtain small or narrow filters. Narrow filters support adaptive refinement and out-of-core data exploration techniques.

  9. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kishan, Harini; Seelamantula, Chandra Sekhar

    2015-09-01

    We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean-squared error (MSE). Unlike the MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed the SURE-optimal bilateral filter (SOBF), whose optimal parameters are selected using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely, the SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and the SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance when compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.

  10. Multi-resolution and wavelet representations for identifying signatures of disease.

    PubMed

    Sajda, Paul; Laine, Andrew; Zeevi, Yehoshua

    2002-01-01

    Identifying physiological and anatomical signatures of disease in signals and images is one of the fundamental challenges in biomedical engineering. The challenge is most apparent given that such signatures must be identified in spite of tremendous inter- and intra-subject variability and noise. Crucial for uncovering these signatures has been the development of methods that exploit general statistical properties of natural signals. The signal processing and applied mathematics communities have developed, in recent years, signal representations which take advantage of Gabor-type and wavelet-type functions that localize signal energy in a joint time-frequency and/or space-frequency domain. These techniques can be expressed as multi-resolution transformations, of which perhaps the best known is the wavelet transform. In this paper we review wavelets, and other related multi-resolution transforms, within the context of identifying signatures for disease. These transforms construct a general representation of signals which can be used in detection, diagnosis and treatment monitoring. We present several examples where these transforms are applied to biomedical signal and image processing, including computer-aided diagnosis in mammography, real-time mosaicking of ophthalmic slit-lamp imagery, characterization of heart disease via ultrasound, prediction of epileptic seizures and signature analysis of the electroencephalogram, and reconstruction of positron emission tomography data. PMID:14646044

  11. Long-range force and moment calculations in multiresolution simulations of molecular systems

    SciTech Connect

    Poursina, Mohammad; Anderson, Kurt S.

    2012-08-30

    Multiresolution simulations of molecular systems such as DNAs, RNAs, and proteins are implemented using models with different resolutions ranging from a fully atomistic model to coarse-grained molecules, or even to continuum level system descriptions. For such simulations, pairwise force calculation is a serious bottleneck which can impose a prohibitive amount of computational load on the simulation if not performed wisely. Herein, we approximate the resultant force due to long-range particle-body and body-body interactions applicable to multiresolution simulations. Since the resultant force does not necessarily act through the center of mass of the body, it creates a moment about the mass center. Although this potentially important torque is neglected in many coarse-grained models which only use particle dynamics to formulate the dynamics of the system, it should be calculated and used when coarse-grained simulations are performed in a multibody scheme. Herein, the approximation for this moment due to far-field particle-body and body-body interactions is also provided.
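
    A minimal sketch of the quantities discussed above: the resultant long-range force on a coarse-grained body and the accompanying moment about its centre of mass, here computed from a toy inverse-square particle-body pairing with assumed charges and positions.

```python
# Resultant force and moment about the mass centre of a coarse-grained body.
import numpy as np

def resultant_force_and_moment(body_pts, charges, src, src_charge, k=1.0):
    """Sum pairwise forces on body particles from one far source and
    reduce them to a force plus a torque about the body's mass centre."""
    com = body_pts.mean(axis=0)
    r = body_pts - src                                 # source -> particle vectors
    d = np.linalg.norm(r, axis=1, keepdims=True)
    f = k * src_charge * charges[:, None] * r / d**3   # inverse-square pairs
    F = f.sum(axis=0)                                  # resultant force
    M = np.cross(body_pts - com, f).sum(axis=0)        # moment about the COM
    return F, M

rng = np.random.default_rng(8)
pts = rng.normal(size=(50, 3)) * 0.5                   # coarse-grained body
F, M = resultant_force_and_moment(pts, np.ones(50), np.array([10.0, 0, 0]), 1.0)
print("F =", np.round(F, 4), " M_about_com =", np.round(M, 6))
```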

  12. GPU-based multi-resolution direct numerical simulation of multiphase flows with phase change

    NASA Astrophysics Data System (ADS)

    Forster, Christopher J.; Smith, Marc K.

    2014-11-01

    Nucleate pool boiling heat transfer can be enhanced in several ways to increase the critical heat flux (CHF) and delay the transition to film boiling. Changes to the heated surface geometry using open microchannels and direct forcing of the vapor bubbles using acoustic interfacial excitation are being investigated for their effects on the CHF. The numerical simulation of boiling with these effects lends itself to multi-resolution techniques due to the multiple length and time scales present during evolution of the bubbles from initial nucleation in the microchannels to forming a bubble cloud above the heated surface. To this end, a wavelet multi-resolution boiling simulation based on a parallel GPU architecture is being developed to solve the compressible Navier-Stokes equations using a dual time stepping method with preconditioning to alleviate the stiffness problems associated with the liquid phase. Interface tracking is handled by the level-set method with a prescribed interface thickness based on the maximum amount of local grid refinement desired, which can approach the physical interface thickness. Initial cases to validate the simulation will be demonstrated, including the rising bubble test problem.

  13. A Virtual Globe-Based Multi-Resolution TIN Surface Modeling and Visualization Method

    NASA Astrophysics Data System (ADS)

    Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei

    2016-06-01

    The integration and visualization of geospatial data on a virtual globe play a significant role in the understanding and analysis of Earth surface processes. However, current virtual globes always sacrifice accuracy to ensure efficiency for global data processing and visualization, which devalues their functionality for scientific applications. In this article, we propose a high-accuracy multi-resolution TIN pyramid construction and visualization method for virtual globes. Firstly, we introduce cartographic principles to formalize the level of detail (LOD) generation so that the TIN model in each layer is controlled with a data quality standard. A maximum z-tolerance algorithm is then used to iteratively construct the multi-resolution TIN pyramid. Moreover, the extracted landscape features are incorporated into the TIN at each layer, thus preserving the topological structure of the terrain surface at different levels. In the proposed framework, a virtual node (VN)-based approach is developed to seamlessly partition and discretize each triangulation layer into tiles, which can be organized and stored with a global quad-tree index. Finally, real-time out-of-core spherical terrain rendering is realized on the virtual globe system VirtualWorld1.0. The experimental results showed that the proposed method can achieve a high-fidelity terrain representation, while producing high-quality underlying data that satisfy the demands of scientific analysis.

  14. A hardware implementation of multiresolution filtering for broadband instrumentation

    SciTech Connect

    Kercel, S.W.; Dress, W.B.

    1995-12-01

    The authors have constructed a wavelet processing board that implements a 14-level wavelet transform. The board uses a high-speed, analog-to-digital (A/D) converter, a hardware queue, and five fixed-point digital signal processing (DSP) chips in a parallel pipeline architecture. All five processors are independently programmable. The board is designed as a general purpose engine for instrumentation applications requiring near real-time wavelet processing or multiscale filtering. The present application is the processing engine of a magnetic field monitor that covers 305 Hz through 5 MHz. The monitor is used for the detection of peak values of magnetic fields in nuclear power plants. This paper describes the design, development, simulation, and testing of the system. Specific issues include the conditioning of real-world signals for wavelet processing, practical trade-offs between queue length and filter length, selection of filter coefficients, simulation of a 14-octave filter bank, and limitations imposed by a fixed-point processor. Test results from the completed wavelet board are included.
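
    A minimal software analogue of the board's multi-octave filtering is sketched below: each stage splits the band with a low-pass/high-pass pair and decimates by two, so 14 stages span roughly 14 octaves. The Haar taps are illustrative stand-ins; the board's actual filter coefficients, fixed-point arithmetic, and queue handling are hardware-specific.

    ```python
    # Minimal sketch: a 14-stage octave analysis cascade with per-octave peak
    # detection, mirroring the monitor's multiscale peak measurement.
    import numpy as np

    LO = np.array([0.5, 0.5])     # Haar low-pass (illustrative coefficients)
    HI = np.array([0.5, -0.5])    # Haar high-pass

    def octave_bank(x, levels=14):
        """Return per-octave detail bands; stops early if the signal runs out."""
        bands = []
        approx = np.asarray(x, dtype=float)
        for _ in range(levels):
            if approx.size < 2:
                break
            detail = np.convolve(approx, HI, mode='valid')[::2]   # high band
            approx = np.convolve(approx, LO, mode='valid')[::2]   # half-rate low band
            bands.append(detail)
        return bands

    x = np.random.randn(1 << 15)
    peaks = [np.abs(b).max() for b in octave_bank(x)]
    print([f'{p:.3f}' for p in peaks])   # per-octave peak values
    ```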

  15. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales

    NASA Astrophysics Data System (ADS)

    Sangireddy, H.; Passalacqua, P.; Stark, C. P.

    2013-12-01

    Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability distributions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We show this correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges, within the GeoNet framework developed by Passalacqua et al. [2010], and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic processes.
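
    The resolution sweep at the heart of such an MRA can be mimicked very simply: coarsen the grid by factors of two and track the modal value and inter-quartile range of a topographic variable per resolution. The sketch below does this for slope on a synthetic surface; the 2x2 block-mean coarsening and the synthetic DEM are crude stand-ins for lidar data and the MRA of Stark and Stark [2001].

    ```python
    # Minimal sketch: slope statistics as a function of grid resolution.
    import numpy as np

    def coarsen(z):
        """2x2 block mean; assumes even dimensions."""
        return 0.25 * (z[::2, ::2] + z[1::2, ::2] + z[::2, 1::2] + z[1::2, 1::2])

    def slope(z, dx):
        gy, gx = np.gradient(z, dx)
        return np.hypot(gx, gy)

    z = np.cumsum(np.random.randn(512, 512), axis=0)   # synthetic rough surface
    dx = 1.0
    for level in range(5):
        s = slope(z, dx).ravel()
        q1, q2, q3 = np.percentile(s, [25, 50, 75])
        print(f'dx={dx:5.1f} m  median slope={q2:.3f}  IQR={q3 - q1:.3f}')
        z, dx = coarsen(z), dx * 2
    ```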

  16. Inferring Species Richness and Turnover by Statistical Multiresolution Texture Analysis of Satellite Imagery

    PubMed Central

    Convertino, Matteo; Mangoubi, Rami S.; Linkov, Igor; Lowry, Nathan C.; Desai, Mukund

    2012-01-01

    Background: The quantification of species-richness and species-turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel-intensity-based Shannon entropy for estimating species-richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. Methodology/Principal Findings: We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected, and its parameters estimated, or a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdfs via the Kullback-Leibler divergence (KL). Species turnover, or diversity, is estimated using both this KL divergence and the difference in Shannon entropy. Additionally, we predict species richness, or diversity, based on the Shannon entropy of pixel intensities.
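
    The subband-by-subband texture distance described above can be sketched compactly: fit a generalized Gaussian to each wavelet subband and sum closed-form KL divergences across subbands (the summation encodes the independence approximation). PyWavelets and SciPy are assumed available, the closed-form KL is the Do and Vetterli (2002) expression rather than anything specific to this paper, and the random images are placeholders for satellite textures.

    ```python
    # Minimal sketch: generalized-Gaussian subband models + closed-form KL.
    import numpy as np
    import pywt
    from scipy.stats import gennorm
    from scipy.special import gammaln

    def kl_ggd(b1, a1, b2, a2):
        """KL( GGD(scale a1, shape b1) || GGD(scale a2, shape b2) )."""
        return (np.log(b1 / a1) - np.log(b2 / a2)
                + gammaln(1 / b2) - gammaln(1 / b1)
                + (a1 / a2) ** b2 * np.exp(gammaln((b2 + 1) / b1) - gammaln(1 / b1))
                - 1 / b1)

    def texture_distance(img1, img2, wavelet='db2', level=3):
        d = 0.0
        for c1, c2 in zip(pywt.wavedec2(img1, wavelet, level=level)[1:],
                          pywt.wavedec2(img2, wavelet, level=level)[1:]):
            for s1, s2 in zip(c1, c2):                 # LH, HL, HH subbands
                b1, _, a1 = gennorm.fit(s1.ravel(), floc=0)
                b2, _, a2 = gennorm.fit(s2.ravel(), floc=0)
                d += kl_ggd(b1, a1, b2, a2)
        return d

    rng = np.random.default_rng(0)
    print(texture_distance(rng.standard_normal((128, 128)),
                           rng.standard_normal((128, 128)) * 2))
    ```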

  17. Proteomic and Transcriptomic Analyses of “Candidatus Pelagibacter ubique” Describe the First PII-Independent Response to Nitrogen Limitation in a Free-Living Alphaproteobacterium

    PubMed Central

    Smith, Daniel P.; Thrash, J. Cameron; Nicora, Carrie D.; Lipton, Mary S.; Burnum-Johnson, Kristin E.; Carini, Paul; Smith, Richard D.; Giovannoni, Stephen J.

    2013-01-01

    Nitrogen is one of the major nutrients limiting microbial productivity in the ocean, and as a result, most marine microorganisms have evolved systems for responding to nitrogen stress. The highly abundant alphaproteobacterium “Candidatus Pelagibacter ubique,” a cultured member of the order Pelagibacterales (SAR11), lacks the canonical GlnB, GlnD, GlnK, and NtrB/NtrC genes for regulating nitrogen assimilation, raising questions about how these organisms respond to nitrogen limitation. A survey of 266 Alphaproteobacteria genomes found these five regulatory genes nearly universally conserved, absent only in intracellular parasites and members of the order Pelagibacterales, including “Ca. Pelagibacter ubique.” Global differences in mRNA and protein expression between nitrogen-limited and nitrogen-replete cultures were measured to identify nitrogen stress responses in “Ca. Pelagibacter ubique” strain HTCC1062. Transporters for ammonium (AmtB), taurine (TauA), amino acids (YhdW), and opines (OccT) were all elevated in nitrogen-limited cells, indicating that they devote increased resources to the assimilation of nitrogenous organic compounds. Enzymes for assimilating amine into glutamine (GlnA), glutamate (GltBD), and glycine (AspC) were similarly upregulated. Differential regulation of the transcriptional regulator NtrX in the two-component signaling system NtrY/NtrX was also observed, implicating it in control of the nitrogen starvation response. Comparisons of the transcriptome and proteome supported previous observations of uncoupling between transcription and translation in nutrient-deprived “Ca. Pelagibacter ubique” cells. Overall, these data reveal a streamlined, PII-independent response to nitrogen stress in “Ca. Pelagibacter ubique,” and likely other Pelagibacterales, and show that they respond to nitrogen stress by allocating more resources to the assimilation of nitrogen-rich organic compounds. PMID:24281717

  18. Multiresolution pattern recognition of small volcanos in Magellan data

    NASA Technical Reports Server (NTRS)

    Smyth, P.; Anderson, C. H.; Aubele, J. C.; Crumpler, L. S.

    1992-01-01

    The Magellan data is a treasure-trove for scientific analysis of venusian geology, providing far more detail than was previously available from Pioneer Venus, Venera 15/16, or ground-based radar observations. However, at this point, planetary scientists are being overwhelmed by the sheer quantities of data collected; data analysis technology has not kept pace with our ability to collect and store it. In particular, 'small-shield' volcanos (less than 20 km in diameter) are the most abundant visible geologic feature on the planet. It is estimated, based on extrapolating from previous studies and knowledge of the underlying geologic processes, that there should be on the order of 10^5 to 10^6 of these volcanos visible in the Magellan data. Identifying and studying these volcanos is fundamental to a proper understanding of the geologic evolution of Venus. However, locating and parameterizing them in a manual manner is very time-consuming. Hence, we have undertaken the development of techniques to partially automate this task. The goal is not the unrealistic one of total automation, but rather the development of a useful tool to aid the project scientists. The primary constraints for this particular problem are as follows: (1) the method must be reasonably robust; and (2) the method must be reasonably fast. Unlike most geological features, the small volcanos of Venus can be ascribed to a basic process that produces features with a short list of readily defined characteristics differing significantly from other surface features on Venus. For pattern recognition purposes the relevant criteria include the following: (1) a circular planimetric outline; (2) known diameter frequency distribution from preliminary studies; (3) a limited number of basic morphological shapes; and (4) the common occurrence of a single, circular summit pit at the center of the edifice.

  19. Origin of mobility enhancement by chemical treatment of gate-dielectric surface in organic thin-film transistors: Quantitative analyses of various limiting factors in pentacene thin films

    NASA Astrophysics Data System (ADS)

    Matsubara, R.; Sakai, Y.; Nomura, T.; Sakai, M.; Kudo, K.; Majima, Y.; Knipp, D.; Nakamura, M.

    2015-11-01

    For better performance of organic thin-film transistors (TFTs), gate-insulator surface treatments are often applied. However, the origin of the resulting mobility increase has not been well understood, because the mobility-limiting factors have not been compared quantitatively. In this work, we clarify the influence of gate-insulator surface treatments in pentacene thin-film transistors on the limiting factors of mobility, i.e., the size of the crystal-growth domains, the crystallite size, the HOMO-band-edge fluctuation, and the carrier transport barrier at domain boundaries. We quantitatively investigated these factors for pentacene TFTs with bare, hexamethyldisilazane-treated, and polyimide-coated SiO2 layers as gate dielectrics. By applying these surface treatments, the size of the crystal-growth domains increases, but both the crystallite size and the HOMO-band-edge fluctuation remain unchanged. Analyzing the experimental results, we also show that the barrier height at the boundary between crystal-growth domains is not sensitive to the treatments. The results imply that the essential increase in mobility by these surface treatments is due only to the increase in the size of the crystal-growth domains, or equivalently the decrease in the number of energy barriers at domain boundaries in the TFT channel.

  20. Testing the limits of micro-scale analyses of Si stable isotopes by femtosecond laser ablation multicollector inductively coupled plasma mass spectrometry with application to rock weathering

    NASA Astrophysics Data System (ADS)

    Schuessler, Jan A.; von Blanckenburg, Friedhelm

    2014-08-01

    An analytical protocol for accurate in-situ Si stable isotope analysis has been established on a new second-generation custom-built femtosecond laser ablation system. The laser was coupled to a multicollector inductively coupled plasma mass spectrometer (fsLA-MC-ICP-MS). We investigated the influence of laser parameters such as spot size, laser focussing, energy density and repetition rate, and ICP-MS operating conditions such as ICP mass load, spectral and non-spectral matrix effects, signal intensities, and data processing on the precision and accuracy of Si isotope ratios. We found that stable and reproducible ICP conditions were obtained by using He as aerosol carrier gas mixed with Ar/H2O before entering the plasma. Precise δ29Si and δ30Si values (better than ± 0.23‰, 2SD) can be obtained if the area ablated is at least 50 × 50 μm, or, alternatively, for the analysis of geometric features down to the width of the laser spot (about 20 μm) if an equivalent area is covered. Larger areas can be analysed by rastering the laser beam, whereas small single spot analyses reduce the attainable precision of δ30Si to ca. ± 0.6‰, 2SD, for < 30 μm diameter spots. It was found that focussing the laser beam beneath the sample surface with energy densities between 1 and 3.8 J/cm2 yields optimal analytical conditions for all materials investigated here. Using pure quartz (NIST 8546, a.k.a. NBS-28) as the measurement standard for calibration (standard-sample bracketing) resulted in accurate and precise data for international reference materials and samples covering a wide range of chemical compositions (Si single crystal IRMM-017, basaltic glasses KL2-G, BHVO-2G and BHVO-2, andesitic glass ML3B-G, rhyolitic glass ATHO-G, diopside glass JER, soda-lime glasses NIST SRM 612 and 610, San Carlos olivine). No composition-dependent matrix effect was discernible within the uncertainties of the method. The method was applied to investigate the Si isotope signature of rock weathering.

  1. IMFIT Integrated Modeling Applications Supporting Experimental Analysis: Multiple Time-Slice Kinetic EFIT Reconstructions, MHD Stability Limits, and Energy and Momentum Flux Analyses

    NASA Astrophysics Data System (ADS)

    Collier, A.; Lao, L. L.; Abla, G.; Chu, M. S.; Prater, R.; Smith, S. P.; St. John, H. E.; Guo, W.; Li, G.; Pan, C.; Ren, Q.; Park, J. M.; Bisai, N.; Srinivasan, R.; Sun, A. P.; Liu, Y.; Worrall, M.

    2010-11-01

    This presentation summarizes several useful applications provided by the IMFIT integrated modeling framework to support DIII-D and EAST research. IMFIT is based on Python and utilizes a modular task-flow architecture with a central manager and extensive GUI support to coordinate tasks among component modules. The kinetic-EFIT application allows multiple time-slice reconstructions by fetching pressure profile data directly from MDS+ or from ONETWO or PTRANSP. The stability application analyzes a given reference equilibrium for stability limits by performing parameter perturbation studies with MHD codes such as DCON, GATO, ELITE, or PEST3. The transport task includes construction of experimental energy and momentum fluxes from profile analysis and comparison against theoretical models such as MMM95, GLF23, or TGLF.

  2. Experimental and numerical analyses of high voltage 4H-SiC junction barrier Schottky rectifiers with linearly graded field limiting ring

    NASA Astrophysics Data System (ADS)

    Wang, Xiang-Dong; Deng, Xiao-Chuan; Wang, Yong-Wei; Wang, Yong; Wen, Yi; Zhang, Bo

    2014-05-01

    This paper describes the successful fabrication of 4H-SiC junction barrier Schottky (JBS) rectifiers with a linearly graded field limiting ring (LG-FLR). Linearly variable ring spacings for the FLR termination are applied to improve the blocking voltage by reducing the peak surface electric field at the edge termination region; the termination acts like a variable lateral doping profile, resulting in a gradual field distribution. The experimental results demonstrate a breakdown voltage of 5 kV at a reverse leakage current density of 2 mA/cm2 (about 80% of the theoretical value). Detailed numerical simulations show that the proposed termination structure provides a more uniform electric field profile than the conventional FLR termination, which is responsible for a 45% improvement in the reverse blocking voltage despite a 3.7% longer total termination length.

  3. DTMs: discussion of a new multi-resolution function based model

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Biagi, L.; Zamboni, G.

    2012-04-01

    The diffusion of new technologies based on WebGIS and virtual globes allows the distribution of DTMs and their three-dimensional representation to the Web users' community. In the Web distribution of geographical information, the database storage size represents a critical point: given a specific interest area, typically the server needs to perform some preprocessing, and the data then have to be sent to the client, which applies some additional processing. The efficiency of all these actions is crucial to guarantee near-real-time availability of the information. DTMs are obtained from the raw observations by some sampling or interpolation technique and typically are stored and distributed as Triangular Irregular Networks (TIN) or regular grids. A new approach to store and transmit DTMs has been studied and implemented. The basic idea is to use multi-resolution bilinear spline functions to interpolate the raw observations and to represent the terrain. More in detail, the algorithm performs the following actions. 1) The spatial distribution of the raw observations is investigated. In areas where few data are available, few levels of splines are activated, while more levels are activated where the raw observations are denser: each new level corresponds to a halving of the spline support with respect to the previous level. 2) After the selection of the spline functions to be activated, the relevant coefficients are estimated by interpolating the raw observations. The interpolation is computed by batch least squares. 3) Finally, the estimated coefficients of the splines are stored. The algorithm guarantees a local resolution consistent with the data density, exploiting all the available information provided by the sample. The model can be defined "function based" because the coefficients of a given function are stored instead of a set of heights: in particular, the resolution level, the position and the coefficient of each activated spline function are stored by the server and transmitted to the client.
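
    A one-dimensional analogue of this scheme is easy to sketch: hat (piecewise-linear B-spline) basis functions at dyadic levels, with the support halving at each level, are fitted to scattered observations by batch least squares, and only the coefficients would be stored. The 1-D hat basis stands in for the bilinear 2-D splines, and the density-driven level activation of step 1 is omitted for brevity; all names are illustrative.

    ```python
    # Minimal sketch: multi-resolution hat-spline fit by batch least squares.
    import numpy as np

    def hat(x, center, half_width):
        return np.clip(1.0 - np.abs(x - center) / half_width, 0.0, None)

    def design_matrix(x, levels):
        cols = []
        for lev in range(levels):
            n = 2 ** lev + 1                       # knots double each level
            hw = 1.0 / 2 ** lev                    # support halves each level
            cols += [hat(x, c, hw) for c in np.linspace(0.0, 1.0, n)]
        return np.column_stack(cols)

    rng = np.random.default_rng(1)
    x_obs = rng.uniform(0, 1, 200)
    y_obs = np.sin(2 * np.pi * x_obs) + 0.05 * rng.standard_normal(200)

    A = design_matrix(x_obs, levels=4)
    coef, *_ = np.linalg.lstsq(A, y_obs, rcond=None)   # batch least squares
    x_new = np.linspace(0, 1, 5)
    print(design_matrix(x_new, 4) @ coef)              # reconstructed profile
    ```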

  4. Comparison of and limits of accuracy for statistical analyses of vibrational and electronic circular dichroism spectra in terms of correlations to and predictions of protein secondary structure.

    PubMed Central

    Pancoska, P.; Bitto, E.; Janota, V.; Urbanova, M.; Gupta, V. P.; Keiderling, T. A.

    1995-01-01

    This work provides a systematic comparison of vibrational CD (VCD) and electronic CD (ECD) methods for spectral prediction of secondary structure. The VCD and ECD data are simplified to a small set of spectral parameters using the principal component method of factor analysis (PC/FA). Regression fits of these parameters are made to the X-ray-determined fractional components (FC) of secondary structure. Predictive capability is determined by computing structures for proteins sequentially left out of the regression. All possible combinations of PC/FA spectral parameters (coefficients) were used to form a full set of restricted multiple regressions with the FC values, both independently for each spectral data set as well as for the two VCD sets and all the data grouped together. The complete search over all possible combinations of spectral parameters for different types of spectral data is a new feature of this study, and the focus on prediction is the strength of this approach. The PC/FA method was found to be stable in detail to expansion of the training set. Coupling amide II to amide I' parameters reduced the standard deviations of the VCD regression relationships, and combining VCD and ECD data led to the best fits. Prediction results had a minimum error when dependent on relatively few spectral coefficients. Such a limited dependence on spectral variation is the key finding of this work, which has ramifications for previous studies as well as suggesting future directions for spectral analysis of structure. The best ECD prediction for helix and sheet uses only one parameter, the coefficient of the first subspectrum. With VCD, the best predictions sample coefficients of both the amide I' and II bands, but error is optimized using only a few coefficients. In this respect, ECD is more accurate than VCD for the alpha-helix, and the combined VCD (amide I' + II) predicts the beta-sheet component better than does ECD. Combining VCD and ECD data sets yields exceptionally good predictions.
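
    The PC/FA-plus-regression pipeline has a simple generic skeleton: reduce each spectrum to a few principal-component coefficients, regress those against the fractional structure content, and score predictions leave-one-out. The sketch below uses synthetic data and scikit-learn's PCA in place of the paper's factor analysis; everything here is illustrative, not the authors' dataset or exact procedure.

    ```python
    # Minimal sketch: spectra -> PC coefficients -> regression -> LOO scoring.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(0)
    spectra = rng.standard_normal((23, 120))     # 23 proteins x 120 spectral points
    helix_fc = rng.uniform(0, 0.8, 23)           # helix fractions (synthetic)

    coeffs = PCA(n_components=3).fit_transform(spectra)   # PC/FA-style coefficients

    errors = []
    for train, test in LeaveOneOut().split(coeffs):
        model = LinearRegression().fit(coeffs[train], helix_fc[train])
        errors.append(model.predict(coeffs[test])[0] - helix_fc[test][0])
    print(f'LOO RMS error: {np.sqrt(np.mean(np.square(errors))):.3f}')
    ```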

  5. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  6. Coherent Vortex Simulation of weakly compressible turbulent mixing layers using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Roussel, Olivier; Schneider, Kai

    2010-03-01

    An adaptive multiresolution method based on a second-order finite volume discretization is presented for solving the three-dimensional compressible Navier-Stokes equations in Cartesian geometry. The explicit time discretization is of second order, and a 2-4 MacCormack scheme is used for flux evaluation. Coherent Vortex Simulations (CVS) are performed by decomposing the flow variables into coherent and incoherent contributions. The coherent part is computed deterministically on a locally refined grid using the adaptive multiresolution method, while the influence of the incoherent part is neglected to model turbulent dissipation. The computational efficiency of this approach in terms of memory and CPU time compression is illustrated for turbulent mixing layers in the weakly compressible regime and for Reynolds numbers based on the mixing layer thickness between 50 and 200. Comparisons with direct numerical simulations allow us to assess the precision and efficiency of CVS.

  7. MULTI-RESOLUTION STATISTICAL ANALYSIS ON GRAPH STRUCTURED DATA IN NEUROIMAGING

    PubMed Central

    Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Adluru, Nagesh; Bendlin, Barbara B.; Johnson, Sterling C.

    2016-01-01

    Statistical data analysis plays a major role in discovering structural and functional imaging phenotypes for mental disorders such as Alzheimer’s disease (AD). The goal here is to identify, ideally early on, which regions in the brain show abnormal variations with a disorder. To make the method more sensitive, we rely on a multi-resolutional perspective of the given data. Since the underlying imaging data (such as cortical surfaces and connectomes) are naturally represented in the form of weighted graphs which lie in a non-Euclidean space, we introduce recent work from the harmonics literature to derive an effective multi-scale descriptor using wavelets on graphs that characterize the local context at each data point. Using this descriptor, we demonstrate experiments where we identify significant differences between AD and control populations using cortical surface data and tractography derived graphs/networks.
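
    A toy version of such a multi-scale graph descriptor can be built directly from the Laplacian spectrum, in the spirit of Hammond et al.'s spectral graph wavelet transform on which this line of work builds: for each vertex, collect the diagonal wavelet coefficients at several scales. The kernel choice, scales, and tiny graph below are illustrative assumptions, not the descriptor of the paper.

    ```python
    # Minimal sketch: per-vertex multi-scale descriptor from wavelets on a graph.
    import numpy as np

    def sgw_descriptor(W, scales=(1.0, 2.0, 4.0)):
        """Descriptor rows per vertex from a weighted adjacency matrix W."""
        L = np.diag(W.sum(axis=1)) - W                 # combinatorial Laplacian
        lam, U = np.linalg.eigh(L)
        feats = []
        for s in scales:
            g = s * lam * np.exp(-s * lam)             # band-pass wavelet kernel
            # diagonal of U g(s*Lambda) U^T, i.e. <psi_{s,n}, delta_n> per vertex n
            feats.append(np.einsum('ik,k,ik->i', U, g, U))
        return np.stack(feats, axis=1)                  # (n_vertices, n_scales)

    W = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(sgw_descriptor(W))
    ```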

  8. Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation

    NASA Technical Reports Server (NTRS)

    Drewry, Darren T.; Reynolds, Paul F., Jr.; Emanuel, William R.

    2006-01-01

    The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.

  9. Multiresolution Analysis Using Wavelet, Ridgelet, and Curvelet Transforms for Medical Image Segmentation

    PubMed Central

    AlZubi, Shadi; Islam, Naveed; Abbod, Maysam

    2011-01-01

    The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROI) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. It is a particularly challenging task to classify cancers in human organs in scanner output using shape or gray-level information: organ shapes change through different slices in a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms which aims to deal with interesting phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988

  10. Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation

    NASA Technical Reports Server (NTRS)

    Lacaze, Alberto; Meystel, Michael; Meystel, Alex

    1994-01-01

    This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR) which treats the ASR as a 'baby' -- that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experiences of actions within a particular environment (we call it an Astro-baby). The learning techniques are rooted in the recursive algorithm for inductive generation of nested schemata molded from processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. This system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experiences, while still maintaining minimal computational complexity, thanks to the system's multiresolutional nature.

  11. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    PubMed

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting lung fields in a chest radiograph is essential for automatically analyzing an image. We present an unsupervised method based on multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour. The final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergoes additional extensive tests before a treatment plan is finalized. PMID:26530048
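
    The feature side of such a pipeline can be gestured at with a crude multi-resolution fractal-flavored feature: local intensity ranges at several window sizes (a differential box-counting flavor), clustered to obtain an initial labeling. In the sketch below, KMeans stands in for the fuzzy c-means step to keep dependencies standard, the deformable-model refinement is omitted, and the image and names are illustrative.

    ```python
    # Minimal sketch: per-pixel multi-scale range features + clustering for an
    # initial foreground/background labeling.
    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter
    from sklearn.cluster import KMeans

    def fractal_features(img, sizes=(3, 5, 9, 17)):
        feats = []
        for s in sizes:
            local_range = maximum_filter(img, s) - minimum_filter(img, s)
            feats.append(np.log1p(local_range))   # range growth across scales
        return np.stack([f.ravel() for f in feats], axis=1)

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    img[16:48, 8:28] *= 0.2                       # darker "lung field" region

    X = fractal_features(img)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels.reshape(img.shape)[::16, ::16])  # coarse view of initial mask
    ```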

  12. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  13. Coherent Vortex Simulation (CVS) of compressible turbulent mixing layers using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Roussel, Olivier; Farge, Marie

    2007-11-01

    Coherent Vortex Simulation is based on the wavelet decomposition of the flow into coherent and incoherent components. An adaptive multiresolution method using second-order finite volumes with explicit time discretization, a 2-4 MacCormack scheme, allows an efficient computation of the coherent flow on a dynamically adapted grid. Neglecting the influence of the incoherent background models turbulent dissipation. We present CVS computations of a three-dimensional compressible temporally developing mixing layer. We show the speed-up in CPU time with respect to DNS and the memory reduction obtained thanks to dynamic octree data structures. The impact of different filtering strategies is discussed, and it is found that isotropic wavelet thresholding of the Favre-averaged gradient of the momentum yields the most effective results.

  14. Multiresolution analysis using wavelet, ridgelet, and curvelet transforms for medical image segmentation.

    PubMed

    Alzubi, Shadi; Islam, Naveed; Abbod, Maysam

    2011-01-01

    The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROI) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. It is a particularly challenging task to classify cancers in human organs in scanner output using shape or gray-level information: organ shapes change through different slices in a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms which aims to deal with interesting phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988

  15. Accessing the Global Multi-Resolution Topography (GMRT) Synthesis through Gmrt Maptool

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Barg, B.; Carbotte, S. M.

    2014-12-01

    The Global Multi-Resolution Topography (GMRT) Synthesis (http://gmrt.marine-geo.org) is a dynamically maintained global multi-resolution synthesis of terrestrial and seafloor elevation data, maintained as both images and gridded data values as part of the IEDA Marine Geoscience Data System. GMRT seamlessly brings together a variety of elevation sources, and includes ship-based multibeam sonar collected throughout the global oceans that is processed by the GMRT Team and gridded to 100-m resolution. New versions of GMRT are released twice each year, typically adding processed multibeam data from ~80 cruises per year. GMRT grids and images can be accessed through a variety of tools and interfaces including GeoMapApp (http://www.geomapapp.org) and the GMRT MapTool (http://www.marine-geo.org/tools/maps_grids.php), and images can also be accessed through a Web Map Service. We have recently launched a new version of our web-based GMRT MapTool interface, which provides custom access to the gridded data values in standard formats including GeoTIFF, ArcASCII and GMT NetCDF. Several resolution options are provided for these gridded data, and corresponding images can also be generated. Coupled with this new interface is an XML metadata service that provides attribution information and detailed metadata about source data components (cruise metadata, sensor metadata, and the full list of source data files) for any region of interest. Metadata from the attribution service is returned to the user along with the requested data, and is also combined with the data itself in new Bathymetry Attributed Grid (BAG) formatted files.

  16. Exploring a multi-resolution modeling approach within the shallow-water equations

    SciTech Connect

    Ringler, Todd; Jacobsen, Doug; Gunzburger, Max; Ju, Lili; Duda, Michael; Skamarock, William

    2011-01-01

    The ability to solve the global shallow-water equations with a conforming, variable-resolution mesh is evaluated using standard shallow-water test cases. While the long-term motivation for this study is the creation of a global climate modeling framework capable of resolving different spatial and temporal scales in different regions, the process begins with an analysis of the shallow-water system in order to better understand the strengths and weaknesses of the approach developed herein. The multiresolution meshes are spherical centroidal Voronoi tessellations where a single, user-supplied density function determines the region(s) of fine- and coarse-mesh resolution. The shallow-water system is explored with a suite of meshes ranging from quasi-uniform resolution meshes, where the grid spacing is globally uniform, to highly variable resolution meshes, where the grid spacing varies by a factor of 16 between the fine and coarse regions. The potential vorticity is found to be conserved to within machine precision and the total available energy is conserved to within a time-truncation error. This result holds for the full suite of meshes, from quasi-uniform to highly variable resolution. Based on shallow-water test cases 2 and 5, the primary conclusion of this study is that solution error is controlled primarily by the grid resolution in the coarsest part of the model domain. This conclusion is consistent with results obtained by others. When these variable-resolution meshes are used for the simulation of an unstable zonal jet, the core features of the growing instability are found to be largely unchanged as the variation in the mesh resolution increases. The main differences between the simulations occur outside the region of mesh refinement, and these differences are attributed to the additional truncation error that accompanies increases in grid spacing. Overall, the results demonstrate support for this approach as a path toward multi-resolution global climate modeling.

  17. a New Multi-Resolution Algorithm to Store and Transmit Compressed DTM

    NASA Astrophysics Data System (ADS)

    Biagi, L.; Brovelli, M.; Zamboni, G.

    2012-07-01

    WebGIS and virtual globes allow the distribution of DTMs and their three-dimensional representation to the Web users' community. In these applications, the database storage size represents a critical point. DTMs are obtained by some sampling or interpolation on the raw observations and typically are stored and distributed by data-based models, for example regular grids. A new approach to store and transmit DTMs is presented. The idea is to use multi-resolution bilinear spline functions to interpolate the observations and to model the terrain. More in detail, the algorithm performs the following actions. 1) The spatial distribution of the observations is investigated. Where few data are available, few levels of splines are activated, while more levels are activated where the raw observations are denser: each new level corresponds to a halving of the spline support with respect to the previous level. 2) After the selection of the spline functions to be activated, the relevant coefficients are estimated by interpolating the observations. The interpolation is computed by batch least squares. 3) Finally, the estimated coefficients of the splines are stored. The model guarantees a local resolution consistent with the data density and can be called analytical, because the coefficients of a given function are stored instead of a set of heights. The approach is discussed and compared with the traditional techniques to interpolate, store and transmit DTMs, considering accuracy and storage requirements. It is also compared with another multi-resolution technique. The research has been funded by the INTERREG HELI-DEM (Helvetia Italy Digital Elevation Model) project.

  18. Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals.

    PubMed

    Verma, Gyanendra K; Tiwary, Uma Shanker

    2014-11-15

    The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are: electroencephalogram (EEG) (32 channels) and peripheral signals (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We discuss the theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach and (iii) the dimensional approach, and propose a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. The experiments are performed to classify different emotions with four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for the SVM, MLP, KNN and MMC classifiers, respectively. The best accuracy is for 'Depressing' with 85.46% using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance. Some of the channel data may be correlated, but they may still contain supplementary information. In comparison with results given by others, the high accuracy of 85% with 13 emotions and 32 subjects from our proposed method clearly demonstrates the potential of our multimodal fusion approach. PMID:24269801
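
    The core classification pipeline, DWT features per channel concatenated across channels and fed to a classifier, can be sketched generically as below. The trials, labels, feature choices (log energy and spectral entropy per subband), and classifier settings are synthetic illustrative stand-ins for the DEAP data and the paper's exact configuration; PyWavelets and scikit-learn are assumed available.

    ```python
    # Minimal sketch: multiresolution (DWT) features + SVM classification.
    import numpy as np
    import pywt
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def dwt_features(trial, wavelet='db4', level=4):
        """trial: (n_channels, n_samples) -> per-subband energy/entropy features."""
        feats = []
        for ch in trial:
            for c in pywt.wavedec(ch, wavelet, level=level):
                p = c ** 2 / (np.sum(c ** 2) + 1e-12)
                feats += [np.log(np.sum(c ** 2) + 1e-12),      # log energy
                          -np.sum(p * np.log(p + 1e-12))]       # subband entropy
        return np.array(feats)

    rng = np.random.default_rng(0)
    trials = rng.standard_normal((60, 8, 512))       # 60 trials, 8 channels
    labels = rng.integers(0, 4, 60)                  # 4 emotion classes, synthetic

    X = np.array([dwt_features(t) for t in trials])
    print(cross_val_score(SVC(kernel='rbf', C=1.0), X, labels, cv=5).mean())
    ```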

  19. Approaches to optimal aquifer management and intelligent control in a multiresolutional decision support system

    NASA Astrophysics Data System (ADS)

    Orr, Shlomo; Meystel, Alexander M.

    2005-03-01

    Despite remarkable new developments in stochastic hydrology and adaptations of advanced methods from operations research, stochastic control, and artificial intelligence, solutions of complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, distributed-parameter partial differential equations (PDE) on a given scale. While the addition of uncertainty, and hence stochasticity or randomness, has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that results in prohibitive computer power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when including all uncertain parameters. A paradigm shift is introduced: an adaptation of new methods of intelligent control that will relax the dependency on rigid, computer-intensive, stochastic PDE, and will shift the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions of real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems.

  1. Implementation of the multiconfiguration time-dependent Hartree-Fock method for general molecules on a multiresolution Cartesian grid

    NASA Astrophysics Data System (ADS)

    Sawada, Ryohto; Sato, Takeshi; Ishikawa, Kenichi L.

    2016-02-01

    We report a three-dimensional numerical implementation of the multiconfiguration time-dependent Hartree-Fock method based on a multiresolution Cartesian grid, with no need to assume any symmetry of molecular structure. We successfully compute high-harmonic generation of H2 and H2O. The present implementation will open the way to first-principles theoretical studies of intense-field- and attosecond-pulse-induced ultrafast phenomena in general molecules.

  2. A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression.

    PubMed

    Guo, Chenlei; Zhang, Liming

    2010-01-01

    Salient areas in natural scenes are generally regarded as areas which the human eye will typically focus on, and finding these areas is the key step in object detection. In computer vision, many models have been proposed to simulate the behavior of eyes, such as SaliencyToolBox (STB), Neuromorphic Vision Toolkit (NVT), and others, but they demand high computational cost, and obtaining useful results mostly relies on their choice of parameters. Although some region-based approaches were proposed to reduce the computational complexity of feature maps, these approaches still were not able to work in real time. Recently, a simple and fast approach called spectral residual (SR) was proposed, which uses the SR of the amplitude spectrum to calculate the image's saliency map. However, in our previous work, we pointed out that it is the phase spectrum, not the amplitude spectrum, of an image's Fourier transform that is key to calculating the location of salient areas, and proposed the phase spectrum of Fourier transform (PFT) model. In this paper, we present a quaternion representation of an image which is composed of intensity, color, and motion features. Based on the principle of PFT, a novel multiresolution spatiotemporal saliency detection model called phase spectrum of quaternion Fourier transform (PQFT) is proposed to calculate the spatiotemporal saliency map of an image from its quaternion representation. Distinct from other models, the added motion dimension allows the phase spectrum to represent spatiotemporal saliency in order to perform attention selection not only for images but also for videos. In addition, the PQFT model can compute the saliency map of an image under various resolutions from coarse to fine. Therefore, the hierarchical selectivity (HS) framework based on the PQFT model is introduced here to construct the tree structure representation of an image. With the help of HS, a model called multiresolution wavelet domain foveation (MWDF) is proposed for image and video compression applications.
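
    The phase-spectrum idea that PQFT generalizes is compact enough to sketch directly: keep only the phase of the image's Fourier transform, invert, square, and smooth. The sketch below implements the simpler single-channel PFT flavor, not the quaternion, motion, or multiresolution extensions; SciPy is assumed available and the test image is illustrative.

    ```python
    # Minimal sketch: phase-spectrum (PFT-style) saliency map.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def pft_saliency(img, smooth_sigma=3.0):
        F = np.fft.fft2(img)
        phase_only = np.exp(1j * np.angle(F))          # unit amplitude, same phase
        recon = np.fft.ifft2(phase_only)
        return gaussian_filter(np.abs(recon) ** 2, smooth_sigma)

    img = np.zeros((128, 128))
    img[50:60, 70:90] = 1.0                            # a salient block
    sal = pft_saliency(img)
    print(np.unravel_index(np.argmax(sal), sal.shape)) # peak lands near the block
    ```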

  3. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

    NASA Astrophysics Data System (ADS)

    Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

    2016-06-01

    This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in the context of variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse ~1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to ~0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model (`hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree also to PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistent with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent

  4. Sociopolitical Analyses.

    ERIC Educational Resources Information Center

    Van Galen, Jane, Ed.; And Others

    1992-01-01

    This theme issue of the serial "Educational Foundations" contains four articles devoted to the topic of "Sociopolitical Analyses." In "An Interview with Peter L. McLaren," Mary Leach presented the views of Peter L. McLaren on topics of local and national discourses, values, and the politics of difference. Landon E. Beyer's "Educational Studies and…

  5. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech.

    NASA Astrophysics Data System (ADS)

    Campo, D.; Quintero, O. L.; Bastidas, M.

    2016-04-01

    We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1/Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. ANNs proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract emotional content in audio signals, yielding a high accuracy rate in the classification of emotional states without the need to use other kinds of classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.

  6. Fast numerical algorithms for fitting multiresolution hybrid shape models to brain MRI.

    PubMed

    Vemuri, B C; Guo, Y; Lai, S H; Leonard, C M

    1997-09-01

    In this paper, we present new and fast numerical algorithms for shape recovery from brain MRI using multiresolution hybrid shape models. In this modeling framework, shapes are represented by a core rigid shape characterized by a superquadric function and a superimposed displacement function which is characterized by a membrane spline discretized using the finite-element method. Fitting the model to brain MRI data is cast as an energy minimization problem which is solved numerically. We present three new computational methods for model fitting to data. These methods involve novel mathematical derivations that lead to efficient numerical solutions of the model fitting problem. The first method involves using the nonlinear conjugate gradient technique with a diagonal Hessian preconditioner. The second method involves the nonlinear conjugate gradient in the outer loop for solving global parameters of the model and a preconditioned conjugate gradient scheme for solving the local parameters of the model. The third method involves the nonlinear conjugate gradient in the outer loop for solving the global parameters and a combination of the Schur complement formula and the alternating direction-implicit method for solving the local parameters of the model. We demonstrate the efficiency of our model fitting methods via experiments on several MR brain scans. PMID:9873915
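
    The second numerical strategy, an outer nonlinear conjugate gradient loop with a diagonal (Jacobi-style) preconditioner, has a generic skeleton that is easy to sketch. The toy quadratic energy, backtracking line search, and all names below are illustrative assumptions; the paper's actual model-fitting energy and finite-element machinery are not reproduced.

    ```python
    # Minimal sketch: diagonally preconditioned Polak-Ribiere nonlinear CG.
    import numpy as np

    def precond_ncg(grad, diag_hess, x0, iters=50, tol=1e-8):
        x = x0.copy()
        g = grad(x)
        z = g / diag_hess(x)                   # preconditioned gradient
        d = -z
        for _ in range(iters):
            # crude backtracking line search on the step length
            t = 1.0
            while np.linalg.norm(grad(x + t * d)) > np.linalg.norm(g) and t > 1e-12:
                t *= 0.5
            x = x + t * d
            g_new = grad(x)
            z_new = g_new / diag_hess(x)
            beta = max(0.0, z_new @ (g_new - g) / (z @ g))   # Polak-Ribiere+
            d = -z_new + beta * d
            g, z = g_new, z_new
            if np.linalg.norm(g) < tol:
                break
        return x

    A = np.diag([1.0, 10.0, 100.0])            # ill-conditioned toy Hessian
    b = np.array([1.0, 2.0, 3.0])
    x = precond_ncg(lambda x: A @ x - b, lambda x: np.diag(A), np.zeros(3))
    print(x, np.linalg.solve(A, b))            # NCG result vs direct solve
    ```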

  7. Interactive, Internet Delivery of Scientific Visualization viaStructured, Prerendered Multiresolution Imagery

    SciTech Connect

    Chen, Jerry; Yoon, Ilmi; Bethel, E. Wes

    2005-04-20

    We present a novel approach for highly interactive remote delivery of visualization results. Instead of rendering in real time across the internet, our approach, inspired by QuickTime VR's Object Movie concept, delivers pre-rendered images corresponding to different viewpoints and different time steps to provide the experience of 3D and temporal navigation. We use tiled, multiresolution image streaming to consume minimum bandwidth while providing the maximum resolution that a user can perceive from a given viewpoint. Since image data, a viewpoint and time stamps are the only required inputs, our approach is generally applicable to all visualization and graphics rendering applications capable of generating image files in an ordered fashion. Our design is a form of latency-tolerant remote visualization, where visualization and rendering time is effectively decoupled from interactive exploration. Our approach trades increased interactivity, flexible resolution (for individual clients), reduced load and effective reuse of coherent frames between multiple users (from the server's perspective) for the loss of unconstrained exploration. A normal web server provides on-demand images to the remote client application, which uses client-pull to obtain and cache only those images required to fulfill the interaction needs. This paper presents an architectural description of the system along with a performance characterization of each stage of the production, delivery and viewing pipeline.
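
    A sketch of the client-pull pattern the architecture describes; the server address and the tile URL layout are hypothetical, and only the Python standard library is assumed:

      from urllib.request import urlopen

      BASE = "http://example.org/vis"   # hypothetical image server
      cache = {}                        # (t, view, level, row, col) -> bytes

      def fetch_tile(t, view, level, row, col):
          """Return one pre-rendered tile, pulling it only on a cache miss."""
          key = (t, view, level, row, col)
          if key not in cache:
              url = f"{BASE}/tile/{t}/{view}/{level}/{row}_{col}.jpg"
              with urlopen(url) as resp:
                  cache[key] = resp.read()
          return cache[key]

      def tiles_for_viewport(t, view, level, rows, cols):
          """Fetch only the tiles the current viewport needs."""
          return [fetch_tile(t, view, level, r, c) for r in rows for c in cols]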

  8. Wavelet multiresolution based multifractal analysis of electric fields by lightning return strokes

    NASA Astrophysics Data System (ADS)

    Gou, Xueqiang; Chen, Mingli; Zhang, Yijun; Dong, Wansheng; Qie, Xiushu

    2009-02-01

    Lightning can be seen as a large-scale cooperative phenomenon, which may evolve in a self-similar, cascaded way. Using the electric field waveforms recorded by a slow antenna system, the mono- and multifractal behaviors of 115 first return strokes in negative cloud-to-ground discharges have been investigated with a wavelet-multiresolution-based multifractal method. The results show that the return stroke process, in terms of its electric field waveform, exhibits clear fractality and a strong degree of multifractality. The multifractal spectra obtained for the 115 cases are all well fitted by a modified version of the binomial cascade multifractal model. The width of the multifractal spectrum, which measures the strength of the multifractality, is 1.6 on average. The fractal dimension of the electric field waveforms ranges from 1.2 to 1.5 with an average of 1.3, a value similar to the fractal dimension of the lightning channel obtained by others. This suggests that lightning-produced electric fields may have the same fractal dimension as the channel itself. The relationship between the peak current of a return stroke and the charge deposition in its channel is also discussed. The results suggest that wavelet and scaling analysis may be a powerful tool for interpreting lightning-produced electric fields and therefore for understanding lightning.
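
    A simplified stand-in for the scaling analysis: estimating a single (mono)fractal scaling exponent from the slope of log2 subband energy versus wavelet level, assuming PyWavelets; a full multifractal spectrum requires considerably more machinery:

      import numpy as np
      import pywt

      def scaling_exponent(waveform, wavelet="db4", level=8):
          details = pywt.wavedec(waveform, wavelet, level=level)[1:]  # coarse..fine
          scales = np.arange(level, 0, -1)          # level index per detail band
          log_energy = [np.log2(np.mean(d ** 2)) for d in details]
          slope, _ = np.polyfit(scales, log_energy, 1)
          return slope

      rng = np.random.default_rng(1)
      # Brownian-motion-like test signal: slope should be near 2 (H = 0.5).
      print(scaling_exponent(np.cumsum(rng.standard_normal(4096))))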

  9. Comparison of various texture classification methods using multiresolution analysis and linear regression modelling.

    PubMed

    Dhanya, S; Kumari Roshni, V S

    2016-01-01

    Textures play an important role in image classification. This paper proposes a high-performance texture classification method using a combination of a multiresolution analysis tool and linear regression modelling by channel elimination. The correlation between different frequency regions has been validated as an effective texture characteristic. The method is motivated by the observation that there exists a distinctive correlation between image samples belonging to the same kind of texture at the different frequency regions obtained by a wavelet transform. Experimentally, it is observed that this correlation differs across textures. Linear regression modelling is employed to analyze this correlation and extract texture features that characterize the samples; the method thus considers not only the frequency regions but also the correlation between these regions. The paper primarily focuses on applying the Dual-Tree Complex Wavelet Packet Transform and the linear regression model for classification of the obtained texture features. Additionally, it presents a comparative assessment of the classification results obtained from the above method against two other wavelet transform methods, namely the Discrete Wavelet Transform and the Discrete Wavelet Packet Transform. PMID:26835234
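
    A loose sketch of the correlation idea, regressing the coefficients of one subband against another within each decomposition level; a plain 2-D DWT (PyWavelets) stands in here for the paper's dual-tree complex wavelet packet transform:

      import numpy as np
      import pywt

      def subband_regression_features(img, wavelet="db1", level=2):
          feats = []
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          for ch, cv, cd in coeffs[1:]:              # per level: H, V, D subbands
              for a, b in [(ch, cv), (ch, cd), (cv, cd)]:
                  x, y = np.abs(a).ravel(), np.abs(b).ravel()
                  slope, intercept = np.polyfit(x, y, 1)
                  resid = y - (slope * x + intercept)
                  feats += [slope, np.mean(resid ** 2)]  # fit + residual energy
          return np.array(feats)

      img = np.random.default_rng(2).random((64, 64))
      print(subband_regression_features(img).shape)    # 2 levels * 3 pairs * 2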

  10. Combination of geodetic measurements by means of a multi-resolution representation

    NASA Astrophysics Data System (ADS)

    Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.

    2010-12-01

    Recent and in particular current satellite gravity missions provide important contributions to global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model, in terms of spherical harmonics, has the disadvantages that small spatial details are difficult to represent and data gaps cannot be handled appropriately. Adequate modeling using a multi-resolution representation (MRP) is necessary in order to extract the maximum information from all of these measurements. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs a decomposition into frequency-dependent detail signals by applying a pyramidal algorithm, and allows for data compression and filtering, i.e. data manipulation. Since the different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it is reasonable to calculate the detail signals of the lower levels mainly from satellite data, the detail signals of the medium levels mainly from airborne data, and the detail signals of the higher levels mainly from terrestrial data. A concept is presented for how these different measurement types can be combined within the MRP. We present the basic principles, strategies and concepts for the generation of MRPs, together with examples of regional gravity field determination.
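
    A toy illustration of the combination concept, assuming PyWavelets: three co-located signals stand in for satellite, airborne and terrestrial data, and each detail level of the merged model is taken from the source assumed to resolve it best; the level split is an assumption for illustration only:

      import numpy as np
      import pywt

      rng = np.random.default_rng(3)
      truth = np.cumsum(rng.standard_normal(1024))
      satellite = truth + 0.05 * rng.standard_normal(1024)    # long wavelengths
      airborne = truth + 0.05 * rng.standard_normal(1024)     # medium wavelengths
      terrestrial = truth + 0.05 * rng.standard_normal(1024)  # short wavelengths

      level = 6
      c_sat = pywt.wavedec(satellite, "db4", level=level)
      c_air = pywt.wavedec(airborne, "db4", level=level)
      c_ter = pywt.wavedec(terrestrial, "db4", level=level)

      combined = [c_sat[0]]                 # approximation from satellite
      for j in range(1, level + 1):         # j = 1 is the coarsest detail level
          if j <= 2:
              combined.append(c_sat[j])     # coarse details: satellite
          elif j <= 4:
              combined.append(c_air[j])     # medium details: airborne
          else:
              combined.append(c_ter[j])     # fine details: terrestrial
      merged = pywt.waverec(combined, "db4")
      print(np.abs(merged - truth).mean())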

  11. Adaptation of a multi-resolution adversarial model for asymmetric warfare

    NASA Astrophysics Data System (ADS)

    Rosenberg, Brad; Gonsalves, Paul G.

    2006-05-01

    Recent military operations have demonstrated the use by adversaries of non-traditional or asymmetric military tactics to offset US military might. Rogue nations with links to trans-national terrorists have created a highly unpredictable and potentially dangerous environment for US military operations. These threats are characterized by extremist beliefs, global reach, non-state orientation, and highly networked, adaptive organization, making such adversaries less vulnerable to conventional military approaches. Additionally, US forces must contend with more traditional state-based threats that continue to evolve their military fighting strategies and capabilities. What is needed are solutions to assist our forces in the prosecution of operations against these diverse threat types and their atypical strategies and tactics. To address this issue, we present a system that allows for the adaptation of a multi-resolution adversarial model. The developed model can then be used to support both training and simulation-based acquisition requirements, to respond effectively to such an adversary. The described system produces a combined adversarial model by merging behavior modeling at the individual level with aspects at the group and organizational level via network analysis. Adaptation of this adversarial model is performed by means of an evolutionary algorithm to build a suitable model for the chosen adversary.
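
    A generic sketch of the adaptation step: a (mu + lambda) evolutionary loop over a real-valued parameter vector of an adversary model, with a placeholder fitness function; none of the names or settings come from the paper:

      import numpy as np

      def evolve(fitness, dim, pop_size=30, parents=10, gens=100, sigma=0.1, seed=0):
          rng = np.random.default_rng(seed)
          pop = rng.standard_normal((pop_size, dim))
          for _ in range(gens):
              scores = np.array([fitness(p) for p in pop])
              elite = pop[np.argsort(scores)[-parents:]]        # keep the fittest
              children = elite[rng.integers(parents, size=pop_size - parents)]
              children = children + sigma * rng.standard_normal(children.shape)
              pop = np.vstack([elite, children])                # mu + lambda
          return pop[np.argmax([fitness(p) for p in pop])]

      # Placeholder fitness: match a "true" behavioral parameter vector.
      target = np.array([0.3, -1.2, 0.7])
      best = evolve(lambda p: -np.sum((p - target) ** 2), dim=3)
      print(np.round(best, 2))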

  12. Multiresolution analysis over graphs for a motor imagery based online BCI game.

    PubMed

    Asensio-Cubero, Javier; Gan, John Q; Palaniappan, Ramaswamy

    2016-01-01

    Multiresolution analysis (MRA) over graph representations of EEG data has proved to be a promising method for offline brain-computer interfacing (BCI) data analysis. For the first time, we aim to prove the feasibility of the graph lifting transform in an online BCI system. Instead of developing a pointer device or a wheelchair controller as a test bed for human-machine interaction, we have designed and developed an engaging game which can be controlled by means of imaginary limb movements. Some modifications to the existing MRA analysis over graphs for BCI are also proposed, such as the use of common spatial patterns for feature extraction at the different levels of decomposition, and sequential floating forward search as a best-basis selection technique. In the online game experiment we obtained an average three-class classification rate of 63.0% across fourteen naive subjects. The application of a best-basis selection method significantly decreases the computing resources needed. The present study allows us to further understand and assess the benefits of tailored wavelet analysis for processing motor imagery data, and contributes to the further development of BCI for gaming purposes. PMID:26599827
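
    A sketch of the common-spatial-patterns step mentioned above, assuming NumPy/SciPy: CSP filters come from a generalized eigendecomposition of the two class covariance matrices, and the log-variance of the filtered trials gives the features; shapes and settings are illustrative:

      import numpy as np
      from scipy.linalg import eigh

      def csp_filters(trials_a, trials_b, n_filters=4):
          """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
          cov = lambda T: np.mean([x @ x.T / np.trace(x @ x.T) for x in T], axis=0)
          ca, cb = cov(trials_a), cov(trials_b)
          _, w = eigh(ca, ca + cb)              # generalized eigenvectors
          # Filters from both ends of the spectrum are the most discriminative.
          idx = np.r_[:n_filters // 2, -n_filters // 2:0]
          return w[:, idx].T

      def csp_features(trials, filters):
          z = np.einsum("fc,ncs->nfs", filters, trials)
          var = z.var(axis=2)
          return np.log(var / var.sum(axis=1, keepdims=True))

      rng = np.random.default_rng(4)
      a = rng.standard_normal((20, 8, 256))
      b = 1.5 * rng.standard_normal((20, 8, 256))
      f = csp_filters(a, b)
      print(csp_features(a, f).shape)  # (20, 4)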

  13. A three-channel miniaturized optical system for multi-resolution imaging

    NASA Astrophysics Data System (ADS)

    Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Thienpont, Hugo

    2013-09-01

    Inspired by the natural compound eyes of insects, multichannel imaging systems comprise many channels that together sample the entire Field-Of-View (FOV). Our aim in this work was to attain multi-resolution capability in a multi-channel imaging system by designing the available channels to possess different imaging properties (focal length, angular resolution). We have designed a three-channel imaging system in which the first and third channels have the highest and lowest angular resolutions of 0.0096° and 0.078°, and the narrowest and widest FOVs of 7° and 80°, respectively. The design of the channels was done for a single wavelength of 587.6 nm using CODE V. Each of the three channels consists of four aspherical lens surfaces and an absorbing baffle that avoids crosstalk between neighbouring channels. The aspherical lens surfaces were fabricated in PMMA by ultra-precision diamond tooling, and the baffles by metal additive manufacturing. The profiles of the fabricated lens surfaces were measured with an accurate multi-sensor coordinate measuring machine and compared with the corresponding profiles of the designed lens surfaces. The fabricated lens profiles were then incorporated into CODE V to model the three channels realistically and to compare their performances with those of the nominal design. We conclude that the performances of the as-built model and the nominal design are in good agreement.

  14. Automated detection of landslides with a hierarchical multi-resolution image analysis approach

    NASA Astrophysics Data System (ADS)

    Kurtz, Camille; Stumpf, André; Malet, Jean-Philippe; Puissant, Anne; Gançarski, Pierre; Passat, Nicolas

    2015-04-01

    The mapping of landslides from Very High Resolution (VHR) satellite optical images presents several challenges related to the heterogeneity of landslide sizes, shapes and ground surface properties. However, a common geomorphological characteristic of landslides is that they are organized as a series of embedded, scaled features. These properties motivated the use of a multiresolution image analysis approach based on a hybrid segmentation/classification region-based method. The method, which uses satellite optical images of the same area at various spatial resolutions (medium to very high resolution), relies on a top-down hierarchical framework. In the specific context of landslide analysis, two main novelties are introduced to enrich this framework. The first is the use of non-spectral information, obtained from a Digital Surface Model (DSM), as a priori knowledge to guide the segmentation/classification process. The second is a new domain adaptation strategy that reduces the expert interaction required when handling large image datasets. Experiments performed on satellite images acquired over terrains affected by landslides in the French Alps demonstrate the efficiency of the proposed method, with different hierarchical levels of detail addressing various operational needs.

  15. Multiresolution Approach for Noncontact Measurements of Arterial Pulse Using Thermal Imaging

    NASA Astrophysics Data System (ADS)

    Chekmenev, Sergey Y.; Farag, Aly A.; Miller, William M.; Essock, Edward A.; Bhatnagar, Aruni

    This chapter presents a novel computer vision methodology for noncontact and nonintrusive measurement of the arterial pulse. This is the only investigation that links knowledge of human physiology and anatomy with advances in thermal infrared (IR) imaging and computer vision to produce noncontact and nonintrusive measurements of the arterial pulse in both the time and frequency domains. The proposed approach has a physical and physiological basis and as such is of a fundamental nature. A thermal IR camera was used to capture the heat pattern from superficial arteries, and a blood vessel model was proposed to describe the pulsatile nature of the blood flow. A multiresolution wavelet-based signal analysis approach was applied to extract the arterial pulse waveform, which lends itself to various physiological measurements. We validated our results against a traditional contact vital-signs monitor as ground truth. Eight people of different ages, races and genders were tested in our study, consistent with Health Insurance Portability and Accountability Act (HIPAA) regulations and institutional review board approval. The resultant arterial pulse waveforms exactly matched the ground-truth oximetry readings. The essence of our approach is the automatic detection of the region of measurement (ROM) of the arterial pulse, from which the arterial pulse waveform is extracted. To the best of our knowledge, the correspondence between noncontact thermal IR imaging-based measurements of the arterial pulse in the time domain and traditional contact approaches has never before been reported in the literature.
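
    A sketch of the waveform-extraction idea, assuming PyWavelets: the mean ROM signal is decomposed and only the detail levels whose frequency bands plausibly contain heart rates (~0.7-3 Hz) are kept; the 30 Hz camera rate and the level choices are assumptions for illustration, not the chapter's settings:

      import numpy as np
      import pywt

      def extract_pulse(roi_series, wavelet="db6", level=5, keep=(3, 4, 5)):
          """roi_series: mean ROM temperature per frame, shape (n_frames,)."""
          coeffs = pywt.wavedec(roi_series, wavelet, level=level)
          # coeffs = [a_L, d_L, ..., d_1]; at fs = 30 Hz, detail level j
          # spans roughly [30 / 2**(j + 1), 30 / 2**j] Hz.
          for j in range(1, level + 1):
              if j not in keep:
                  coeffs[-j] = np.zeros_like(coeffs[-j])
          coeffs[0] = np.zeros_like(coeffs[0])      # drop the slow thermal drift
          return pywt.waverec(coeffs, wavelet)

      t = np.arange(0, 20, 1 / 30.0)                # 20 s at 30 frames/s
      signal = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.5 * t / 20  # pulse + drift
      pulse = extract_pulse(signal)
      print(pulse.shape)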

  16. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis.

    PubMed

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called Multiresolution Monogenic Signal Analysis (MMSA) is applied to GPR images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image; the amplitude component enhances the target reflection and suppresses the direct wave and the reflected wave to a large extent. Second, a region-of-interest extraction method separates genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. Third, to find the apexes of the targets, a Hough transform is applied in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146
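
    A sketch of the amplitude-component computation: the monogenic signal pairs an image with its two Riesz-transform components, computed here with FFT-domain multipliers in NumPy; the multiresolution band-pass prefilter that the full method applies beforehand is omitted for brevity:

      import numpy as np

      def monogenic_amplitude(img):
          rows, cols = img.shape
          u = np.fft.fftfreq(rows)[:, None]        # vertical frequency
          v = np.fft.fftfreq(cols)[None, :]        # horizontal frequency
          mag = np.sqrt(u ** 2 + v ** 2)
          mag[0, 0] = 1.0                          # avoid division by zero at DC
          F = np.fft.fft2(img)
          r1 = np.real(np.fft.ifft2(-1j * u / mag * F))  # first Riesz component
          r2 = np.real(np.fft.ifft2(-1j * v / mag * F))  # second Riesz component
          return np.sqrt(img ** 2 + r1 ** 2 + r2 ** 2)

      bscan = np.random.default_rng(5).random((128, 512))  # stand-in B-scan
      print(monogenic_amplitude(bscan).shape)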

  17. A multiresolution approach to image enhancement via histogram shaping and adaptive Wiener filtering

    NASA Astrophysics Data System (ADS)

    Pace, T.; Manville, D.; Lee, H.; Cloud, G.; Puritz, J.

    2008-04-01

    It is critical in military applications to be able to extract features of interest in imagery at any time of the day or night, and infrared (IR) imagery is ideally suited for producing such images. However, even under the best of circumstances, the traditional approach of applying a global automatic gain control (AGC) to the digital image may not provide the user with the local-area details of interest. Processing the imagery locally can enhance additional features and characteristics in the image, giving the viewer an improved understanding of the scene being observed. This paper describes a multi-resolution pyramid approach that decomposes an image, enhances its contrast by remapping the subband histograms to desired probability density functions, filters the subbands, and recombines them to create an output image with much more visible detail than the input. The technique improves local-area image contrast in light and dark areas, providing the warfighter with significantly improved situational awareness.
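
    A minimal sketch of the decompose / shape / filter / recombine flow, using a single Gaussian/Laplacian split in place of a full pyramid; scipy.signal.wiener plays the adaptive Wiener role, and a uniform target histogram stands in for the desired probability density functions:

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from scipy.signal import wiener

      def equalize(img):
          """Remap intensities toward a uniform histogram (chosen target pdf)."""
          flat = img.ravel()
          ranks = np.argsort(np.argsort(flat))
          return (ranks / (flat.size - 1)).reshape(img.shape)

      def enhance(img, sigma=4.0):
          base = gaussian_filter(img, sigma)       # low-pass pyramid level
          detail = img - base                      # high-pass (Laplacian) level
          base = equalize(base)                    # histogram shaping on the base
          detail = wiener(detail, mysize=5)        # adaptive Wiener on the detail
          return base + 2.0 * detail               # recombine with detail boost

      ir = np.random.default_rng(6).random((128, 128))
      print(enhance(ir).shape)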

  18. Automatic multiresolution age-related macular degeneration detection from fundus images

    NASA Astrophysics Data System (ADS)

    Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly; early diagnosis is therefore required for timely treatment. Automatic, fast and robust screening for this widespread disease should allow early detection. Most automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using multiresolution texture analysis. We analyze the texture at several scales using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described by the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis both for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality, captured with different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
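
    A sketch of the descriptor/classifier pairing, assuming PyWavelets, scikit-image and scikit-learn; the standard uniform LBP stands in for the completed (sign + magnitude) LBP model used in the paper, and the random data is a placeholder for real fundus images:

      import numpy as np
      import pywt
      from skimage.feature import local_binary_pattern
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def lbp_hist(img, P=8, R=1.0):
          codes = local_binary_pattern(img, P, R, method="uniform")
          hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
          return hist

      def describe(img, wavelet="haar", level=2):
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          bands = [coeffs[0]] + [b for lvl in coeffs[1:] for b in lvl]
          return np.concatenate([lbp_hist(b) for b in bands])

      rng = np.random.default_rng(7)
      X = np.array([describe(rng.random((64, 64))) for _ in range(20)])
      y = np.repeat([0, 1], 10)                 # healthy vs diseased labels
      clf = LinearDiscriminantAnalysis().fit(X, y)
      print(clf.score(X, y))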

  19. Multi-resolutional brain network filtering and analysis via wavelets on non-Euclidean space.

    PubMed

    Kim, Won Hwa; Adluru, Nagesh; Chung, Moo K; Charchut, Sylvia; GadElkarim, Johnson J; Altshuler, Lori; Moody, Teena; Kumar, Anand; Singh, Vikas; Leow, Alex D

    2013-01-01

    Advances in resting-state fMRI and diffusion weighted imaging (DWI) have led to much interest in studies that evaluate hypotheses focused on how brain connectivity networks vary across clinically disparate groups. However, various sources of error (e.g., tractography errors, magnetic field distortion, and motion artifacts) leak into the data and make downstream statistical analysis problematic. In small-sample-size studies, such noise has the unfortunate effect that the differential signal may not be identifiable, and so the null hypothesis cannot be rejected. Traditionally, smoothing is often used to filter out noise, but convolution with a Gaussian kernel is not well understood on arbitrarily connected graphs. Furthermore, there are no direct analogues of scale-space theory for graphs, i.e., constructions that allow viewing the signal at multiple resolutions. We provide rigorous frameworks for performing 'multi-resolutional' analysis on brain connectivity graphs, based on the recent theory of non-Euclidean wavelets. We provide strong evidence, on brain connectivity data from a network analysis study (structural connectivity differences in adult euthymic bipolar subjects), that the proposed algorithm identifies statistically significant network variations which are clinically meaningful, where classical statistical tests, applied directly, fail. PMID:24505816

  20. Practical operating points of multi-resolution frame compatible (MFC) stereo coding

    NASA Astrophysics Data System (ADS)

    Lu, Taoran; Ganapathy, Hariharan; Lakshminarayanan, Gopi; Chen, Tao; Yin, Peng; Brooks, David; Husak, Walt

    2013-09-01

    3D content is gaining popularity, and the production and delivery of 3D video is now an active work item among video compression experts, content providers and the consumer electronics (CE) industry. Frame compatible stereo coding was initially adopted for the first generation of 3DTV broadcasting services for its compatibility with existing 2D decoders. However, the frame compatible solution sacrifices half of the original video resolution. In 2012, the Moving Picture Experts Group (MPEG) issued a call for proposals (CfP) for solutions that improve the resolution of the frame compatible stereo 3D video signal while maintaining backward compatibility with legacy decoders. The standardization of multi-resolution frame compatible (MFC) stereo coding then started. In this paper, the solution submitted in response to the CfP, Orthogonal Muxing Frame Compatible Full Resolution (OM-FCFR), is introduced. In addition, this paper provides experimental results to guide broadcasters in selecting operating points for MFC. It is observed that for typical broadcast bitrates, more than 0.5 dB PSNR improvement can be achieved by MFC over the frame compatible solution, with only 15-20% overhead.

  1. Multiscale and multiresolution modeling of shales and their flow and morphological properties

    PubMed Central

    Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad

    2015-01-01

    The need for more accessible energy resources makes shale formations increasingly important. Characterization of such low-permeability formations is complicated, due to the presence of multiscale features, and defies conventional methods. High-quality 3D imaging may be an ultimate solution for revealing the complexities of such porous media, but acquiring such images is costly and time consuming. High-quality 2D images, on the other hand, are widely available. A novel three-step, multiscale, multiresolution reconstruction method is presented that directly uses 2D images to develop 3D models of shales. It uses a high-resolution 2D image representing the small-scale features to reproduce the nanopores and their network, uses a large-scale, low-resolution 2D image to create the larger-scale characteristics, and generates stochastic realizations of the porous formation. The method is used to develop a model of a shale system for which the full 3D image is available, so that its properties can be computed and compared. The predictions of the reconstructed models are in excellent agreement with the data. The method is, however, quite general and can be used for reconstructing models of other important heterogeneous materials and media; two further examples, from biology and materials science, are also reconstructed to demonstrate its generality. PMID:26560178

  2. Noise reduction in small-animal PET images using a multiresolution transform.

    PubMed

    Mejia, Jose M; Ochoa Domínguez, Humberto de Jesús; Vergara Villegas, Osslan Osiris; Ortega Máynez, Leticia; Mederos, Boris

    2014-10-01

    In this paper, we address the problem of denoising reconstructed small-animal positron emission tomography (PET) images, based on a multiresolution approach which can be implemented with any transform, such as the contourlet, shearlet, curvelet, or wavelet. The PET images are analyzed and processed in the transform domain by modeling each subband as a set of different regions separated by boundaries. Homogeneous and heterogeneous regions are considered, and each region is processed independently using different filters: a linear estimator for homogeneous regions and a surface polynomial estimator for heterogeneous regions. The boundaries between the different regions are estimated using a modified edge-focusing filter. The proposed approach was validated by a series of experiments: our method achieved an overall reduction of up to 26% in the %STD of the reconstructed image of a small-animal NEMA phantom. Additionally, a test on a simulated lesion showed that our method yields better contrast preservation than other state-of-the-art techniques used for noise reduction. Thus, the proposed method provides a significant reduction of noise while preserving contrast and important structures such as lesions. PMID:24951682
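
    A simplified stand-in for the subband processing, assuming PyWavelets: soft-thresholding of the detail coefficients replaces the paper's region-dependent linear and polynomial estimators, and a universal threshold is derived from the finest diagonal subband:

      import numpy as np
      import pywt

      def denoise(img, wavelet="db2", level=3):
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          # Noise estimate from the finest diagonal subband (robust median rule).
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          t = sigma * np.sqrt(2 * np.log(img.size))
          out = [coeffs[0]] + [
              tuple(pywt.threshold(b, t, mode="soft") for b in lvl)
              for lvl in coeffs[1:]
          ]
          return pywt.waverec2(out, wavelet)

      pet = np.random.default_rng(8).random((128, 128))  # stand-in PET slice
      print(denoise(pet).shape)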

  3. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis

    PubMed Central

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called Multiresolution Monogenic Signal Analysis (MMSA) is applied to GPR images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image; the amplitude component enhances the target reflection and suppresses the direct wave and the reflected wave to a large extent. Second, a region-of-interest extraction method separates genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. Third, to find the apexes of the targets, a Hough transform is applied in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146

  4. Developing a real-time emulation of multiresolutional control architectures for complex, discrete-event systems

    SciTech Connect

    Davis, W.J.; Macro, J.G.; Brook, A.L.

    1996-12-31

    This paper first discusses an object-oriented control architecture and then applies the architecture to produce a real-time software emulator for the Rapid Acquisition of Manufactured Parts (RAMP) flexible manufacturing system (FMS). In specifying the control architecture, the coordinated object is first defined as the primary modeling element. These coordinated objects are then integrated into a Recursive, Object-Oriented Coordination Hierarchy. A new simulation methodology, the Hierarchical Object-Oriented Programmable Logic Simulator, is then employed to model the interactions among the coordinated objects. The final step in implementing the emulator is to distribute the models of the coordinated objects over a network of computers and to synchronize their operation to a real-time clock. The paper then introduces the Hierarchical Subsystem Controller as an intelligent controller for the coordinated object. The proposed approach to intelligent control is compared to the concept of multiresolutional semiosis developed by Dr. Alex Meystel. Finally, plans for implementing an intelligent controller for the RAMP FMS are discussed.

  5. Fully automated analysis of multi-resolution four-channel micro-array genotyping data

    NASA Astrophysics Data System (ADS)

    Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.

    2006-03-01

    We present a fully automated and robust microarray image analysis system for handling multi-resolution images (down to 3-micron resolution, with sizes up to 80 MB per channel). The system was developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining the genotypes of multiple genetic markers in individuals. It plays an important role in the trend toward personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turnaround time compatible with clinical decision-making. In this paper we develop a fully automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.

  6. A method of image multi-resolution processing based on FPGA + DSP architecture

    NASA Astrophysics Data System (ADS)

    Peng, Xiaohan; Zhong, Sheng; Lu, Hongqiang

    2015-10-01

    In real-time image processing, as the resolution and frame rate of camera imaging improve, the required processing capacity grows, and so does the need to optimize the processing pipeline. For an FPGA + DSP image processing system, there are three common ways to meet this challenge. The first is using a higher-performance DSP, for example one with a higher core frequency or with more cores. The second is optimizing the processing method so that the algorithm achieves the same results in less time. The third is pre-processing in the FPGA to make the image processing more efficient. A method of multi-resolution pre-processing in the FPGA, based on the FPGA + DSP architecture, is proposed here. It takes advantage of the built-in first-in-first-out (FIFO) buffers and external synchronous dynamic random access memory (SDRAM) to buffer the images coming from the image detector, and it provides down-sampled or cropped images to the DSP flexibly and efficiently according to the request parameters sent by the DSP. The DSP can thus process a reduced image instead of the whole image, greatly shortening both processing and transmission time. The method alleviates the DSP's image processing burden and also overcomes the limitation that a single, fixed method of resolution reduction cannot meet the requirements of the DSP's image processing tasks.
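
    A behavioral sketch of the pre-processing contract in Python (not RTL): given the request parameters sent by the DSP, the FPGA stage returns either a down-sampled or a cropped view of the buffered frame; the request format is hypothetical:

      import numpy as np

      def preprocess(frame, request):
          """request: {'mode': 'downsample'|'crop', ...} as sent by the DSP."""
          if request["mode"] == "downsample":
              f = request["factor"]
              return frame[::f, ::f]                 # decimation without filtering
          r0, c0, h, w = request["roi"]              # cut-down window
          return frame[r0:r0 + h, c0:c0 + w]

      frame = np.arange(1024 * 1280).reshape(1024, 1280)
      print(preprocess(frame, {"mode": "downsample", "factor": 4}).shape)
      print(preprocess(frame, {"mode": "crop", "roi": (100, 200, 256, 256)}).shape)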

  7. Interslice interpolation of anisotropic 3D images using multiresolution contour correlation

    NASA Astrophysics Data System (ADS)

    Lee, Jiann-Der; Wan, Shu-Yen; Ma, Cherng-Min

    2002-05-01

    To visualize, manipulate and analyze the geometrical structure of anatomical changes, it is often necessary to perform three-dimensional (3-D) interpolation of the organ shape of interest from a series of cross-sectional images obtained from imaging modalities such as ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI). In this paper, a novel wavelet-based interpolation scheme consisting of four algorithms is proposed for 3-D image reconstruction. The multi-resolution property of the wavelet transform (WT) is fully exploited in this approach, which consists of two stages: boundary extraction and contour interpolation. More specifically, a wavelet-based radial search method is first designed to extract the boundary of the target object. Next, the global information of the extracted boundary is analyzed for interpolation using the WT with various bases and scales. Using six performance measures to evaluate effectiveness, experimental results show that all the proposed algorithms outperform traditional contour-based methods, linear interpolation and B-spline interpolation. The satisfactory outcome demonstrates the scheme's capability to serve as an essential part of image processing systems developed for medical applications.
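
    An illustrative interslice-interpolation baseline, not the authors' wavelet scheme: an intermediate contour between two binary slices is obtained by blending their signed distance transforms (SciPy) and thresholding at zero:

      import numpy as np
      from scipy.ndimage import distance_transform_edt as edt

      def signed_distance(mask):
          """Positive inside the shape, negative outside."""
          return edt(mask) - edt(~mask)

      def interp_slice(mask_a, mask_b, alpha=0.5):
          """alpha in [0, 1]: 0 returns slice A's shape, 1 returns slice B's."""
          sd = (1 - alpha) * signed_distance(mask_a) + alpha * signed_distance(mask_b)
          return sd > 0

      a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
      b = np.zeros((64, 64), bool); b[25:50, 25:50] = True
      mid = interp_slice(a, b)   # intermediate slice between a and b
      print(mid.sum())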

  8. Multi-resolution analysis of high density spatial and temporal cloud inhomogeneity fields from HOPE campaign

    NASA Astrophysics Data System (ADS)

    Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Macke, Andreas

    2015-04-01

    Clouds are the most complex structures, across both spatial and temporal scales, in the Earth's atmosphere; they affect the downward surface-reaching fluxes and thus contribute large uncertainty to the global radiation budget. Within the framework of the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2) Observational Prototype Experiment (HOPE), a high-density network of 99 pyranometer stations was set up around Jülich, Germany (~10 × 12 km2 area) from April to July 2013 to capture the small-scale variability of cloud-induced radiation fields at the surface. In this study, we perform a multi-resolution analysis of the downward solar irradiance variability at the surface from the pyranometer network, to investigate how the temporal and spatial averaging scales affect the variance and spatial correlation for different cloud regimes. Preliminary results indicate that the correlation is strongly scale-dependent, whereas the variance depends on the length of the averaging period. Our findings will be useful for quantifying the effect of spatial collocation when validating satellite-inferred solar irradiance estimates, and for exploring the link between cloud structure and radiation. We will present the details of our analysis and results.
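
    A sketch of the temporal half of such an analysis: the variance of an irradiance series after block-averaging at dyadic scales, a Haar-like multiresolution view; the series here is synthetic, not HOPE data:

      import numpy as np

      def variance_by_scale(series, max_level=8):
          out = {}
          x = np.asarray(series, float)
          for level in range(max_level + 1):
              width = 2 ** level                      # averaging length in samples
              n = x.size // width * width
              blocks = x[:n].reshape(-1, width).mean(axis=1)
              out[width] = blocks.var()
          return out  # {averaging length: variance of the averaged series}

      rng = np.random.default_rng(9)
      ghi = 500 + 200 * rng.standard_normal(2 ** 14)  # toy 1 Hz irradiance series
      for scale, var in variance_by_scale(ghi).items():
          print(scale, round(var, 1))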

  9. Multiresolution image registration in digital x-ray angiography with intensity variation modeling.

    PubMed

    Nejati, Mansour; Pourghassem, Hossein

    2014-02-01

    Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motion, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for digital X-ray angiography images, particularly of the coronary region, is proposed. The algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local searches in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework which allows us to capture both large- and small-scale transformations. The local registration transformation also explicitly accounts for local variations in image intensity, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction. PMID:24469684
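
    A sketch of the final interpolation stage, assuming SciPy: hypothetical per-block displacement estimates are smoothed into a dense field with a thin-plate-spline interpolator (scipy's RBFInterpolator with the matching kernel):

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Hypothetical per-block motion estimates: (row, col) centers -> (dy, dx).
      centers = np.array([[32, 32], [32, 96], [96, 32], [96, 96]], float)
      disp = np.array([[1.0, 0.5], [0.8, 0.2], [1.2, 0.7], [0.9, 0.4]])

      tps = RBFInterpolator(centers, disp, kernel="thin_plate_spline")
      yy, xx = np.mgrid[0:128, 0:128]
      dense = tps(np.column_stack([yy.ravel(), xx.ravel()]))  # (128 * 128, 2)
      field = dense.reshape(128, 128, 2)
      print(field[64, 64])  # interpolated (dy, dx) at the image center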

  10. Current limiters

    SciTech Connect

    Loescher, D.H.; Noren, K.

    1996-09-01

    The current that flows between electrical test equipment and a nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiters, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiter, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters, and two other types of limiter were also analyzed. It was found that there is no single best type of limiter for all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low; however, it has many failure modes that can lead to the loss of overcurrent protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits whose nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high-current circuits, but it has a number of single-point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.