Science.gov

Sample records for limits multiresolution analyses

  1. Evaluating Climate Causation of Conflict in Darfur Using Multi-temporal, Multi-resolution Satellite Image Datasets With Novel Analyses

    NASA Astrophysics Data System (ADS)

    Brown, I.; Wennbom, M.

    2013-12-01

    Climate change, population growth and changes in traditional lifestyles have led to instabilities in traditional demarcations between neighboring ethnic and religious groups in the Sahel region. This has resulted in a number of conflicts as groups resort to arms to settle disputes. Such disputes often centre on or are justified by competition for resources. The conflict in Darfur has been controversially explained by resource scarcity resulting from climate change. Here we analyse established methods of using satellite imagery to assess vegetation health in Darfur. Multi-decadal time series of observations are available using low spatial resolution visible-near infrared imagery. Typically, normalized difference vegetation index (NDVI) analyses are produced to describe changes in vegetation 'greenness' or 'health'. Such approaches have been widely used to evaluate the long-term development of vegetation in relation to climate variations across a wide range of environments from the Arctic to the Sahel. These datasets typically measure peak NDVI observed over a given interval and may introduce bias. It is furthermore unclear how the spatial organization of sparse vegetation may affect low resolution NDVI products. We develop and assess alternative measures of vegetation including descriptors of the growing season, wetness and resource availability. Expanding the range of parameters used in the analysis reduces our dependence on peak NDVI. Furthermore, these descriptors provide a better characterization of the growing season than the single NDVI measure. Using multi-sensor data we combine high temporal/moderate spatial resolution data with low temporal/high spatial resolution data to improve the spatial representativity of the observations and to provide improved spatial analysis of vegetation patterns. The approach places the high resolution observations in the NDVI context space using a longer time series of lower resolution imagery. The vegetation descriptors
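
    The NDVI referenced throughout this record has a standard per-pixel definition. As a minimal sketch (Python/NumPy; an illustration of the index and of the peak-NDVI seasonal summary the abstract critiques, not the authors' processing chain — the function names are ours):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(float), red.astype(float)
    denom = nir + red
    # Avoid division by zero over water/shadow pixels.
    return np.where(denom > 0, (nir - red) / denom, np.nan)

def peak_ndvi(nir_stack: np.ndarray, red_stack: np.ndarray) -> np.ndarray:
    """Seasonal-peak summary (time on axis 0) that the abstract argues can bias trend analyses."""
    return np.nanmax(ndvi(nir_stack, red_stack), axis=0)
```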

  2. Multiresolution training of Kohonen neural networks

    NASA Astrophysics Data System (ADS)

    Tamir, Dan E.

    2007-09-01

    This paper analyses a trade-off between convergence rate and distortion obtained through multi-resolution training of a Kohonen Competitive Neural Network. Empirical results show that a multi-resolution approach can improve the training stage of several unsupervised pattern classification algorithms including K-means clustering, LBG vector quantization, and competitive neural networks. While previous research concentrated on the convergence rate of on-line unsupervised training, new results reported in this paper show that the multi-resolution approach can be used to improve training quality (measured as a derivative of the rate distortion function) at the expense of convergence speed. The probability of achieving a desired point in the quality/convergence-rate space of Kohonen Competitive Neural Networks (KCNN) is evaluated using a detailed Monte Carlo set of experiments. It is shown that multi-resolution can reduce the distortion by a factor of 1.5 to 6 while maintaining the convergence rate of traditional KCNN. Alternatively, the convergence rate can be improved without loss of quality. The experiments include a controlled set of synthetic data as well as image data. Experimental results are reported and evaluated.
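
    The abstract does not spell out the training schedule, so the following is only a plausible coarse-to-fine sketch of on-line competitive (Kohonen) training, with a hypothetical subsampling factor standing in for the paper's resolution pyramid:

```python
import numpy as np

def competitive_train(data, codebook, epochs, lr0=0.1, seed=0):
    """One on-line winner-take-all (Kohonen competitive) training pass."""
    rng = np.random.default_rng(seed)
    cb = codebook.copy()
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                    # decaying learning rate
        for x in rng.permutation(data, axis=0):
            w = np.argmin(((cb - x) ** 2).sum(axis=1))   # winning unit
            cb[w] += lr * (x - cb[w])                    # move winner toward sample
    return cb

def multiresolution_train(data, k=16, factor=4):
    """Cheap coarse pass on subsampled data, then a short fine pass on all of it."""
    rng = np.random.default_rng(0)
    cb = rng.standard_normal((k, data.shape[1]))
    cb = competitive_train(data[::factor], cb, epochs=5)  # coarse resolution
    return competitive_train(data, cb, epochs=2)          # fine resolution
```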

  3. Hair analyses: worthless for vitamins, limited for minerals

    SciTech Connect

    Hambridge, K.M.

    1982-11-01

    Despite many major and minor problems with interpretation of analytical data, chemical analyses of human hair have some potential value. Extensive research will be necessary to define this value, including correlation of hair concentrations of specific elements with those in other tissues and metabolic pools and definition of normal physiological concentration ranges. Many factors that may compromise the correct interpretation of analytical data require detailed evaluation for each specific element. Meanwhile, hair analyses are of some value in the comparison of different populations and, for example, in public health community surveys of environmental exposure to heavy metals. On an individual basis, their established usefulness is much more restricted and the limitations are especially notable for evaluation of mineral nutritional status. There is a wide gulf between the limited and mainly tentative scientific justification for their use on an individual basis and the current exploitation of multielement chemical analyses of human hair.

  4. Linking properties to microstructure through multiresolution mechanics

    NASA Astrophysics Data System (ADS)

    McVeigh, Cahal James

    The macroscale mechanical and physical properties of materials are inherently linked to the underlying microstructure. Traditional continuum mechanics theories have focused on approximating the heterogeneous microstructure as a continuum, which is conducive to a partial differential equation mathematical description. Although this makes large scale simulation of material much more efficient than modeling the detailed microstructure, the relationship between microstructure and macroscale properties becomes unclear. In order to perform computational materials design, material models must clearly relate the key underlying microstructural parameters (cause) to macroscale properties (effect). In this thesis, microstructure evolution and instability events are related to macroscale mechanical properties through a new multiresolution continuum analysis approach. The multiresolution nature of this theory allows prediction of the evolving magnitude and scale of deformation as a direct function of the changing microstructure. This is achieved via a two-pronged approach: (a) Constitutive models which track evolving microstructure are developed and calibrated to direct numerical simulations (DNS) of the microstructure. (b) The conventional homogenized continuum equations of motion are extended via a virtual power approach to include extra coupled microscale stresses and stress couples which are active at each characteristic length scale within the microstructure. The multiresolution approach is applied to model the fracture toughness of a cemented carbide, failure of a steel alloy under quasi-static loading conditions and the initiation and velocity of adiabatic shear bands under high speed dynamic loading. In each case the multiresolution analysis predicts the important scale effects which control the macroscale material response. The strain fields predicted in the multiresolution continuum analyses compare well to those observed in direct numerical simulations of the

  5. Research potential and limitations of trace analyses of cremated remains.

    PubMed

    Harbeck, Michaela; Schleuder, Ramona; Schneider, Julius; Wiechmann, Ingrid; Schmahl, Wolfgang W; Grupe, Gisela

    2011-01-30

    Human cremation is a common funeral practice all over the world and will presumably become an even more popular choice for interment in the future. Mainly for purposes of identification, there is presently a growing need to perform trace analyses such as DNA or stable isotope analyses on human remains after cremation in order to clarify pending questions in civil or criminal court cases. The aim of this study was to experimentally test the potential and limitations of DNA and stable isotope analyses when conducted on cremated remains. For this purpose, tibiae from modern cattle were experimentally cremated by incinerating the bones in increments of 100°C until a maximum of 1000°C was reached. In addition, cremated human remains were collected from a modern crematory. The samples were investigated to determine the level of DNA preservation and stable isotope values (C and N in collagen, C and O in the structural carbonate, and Sr in apatite). Furthermore, we assessed the integrity of microstructural organization, appearance under UV light, collagen content, as well as the mineral and crystalline organization. This was conducted in order to provide a general background with which to explain observed changes in the trace analysis data sets. The goal is to develop an efficacious screening method for determining at which degree of burning bone still retains its original biological signals. We found that stable isotope analysis of the tested light elements in bone is only possible up to a heat exposure of 300°C, while the isotopic signal from strontium remains unaltered even in bones exposed to very high temperatures. DNA analyses seem theoretically possible up to a heat exposure of 600°C but cannot be advised in every case because of the increased risk of contamination. While the macroscopic colour and UV fluorescence of cremated bone give hints about the temperature exposure of the bone's outer surface, its histological appearance can be used as a reliable indicator for the

  6. [Network meta-analyses: Interest and limits in oncology].

    PubMed

    Ribassin-Majed, Laureen; Pignon, Jean-Pierre; Michiels, Stefan; Blanchard, Pierre

    2016-03-01

    In the last decade, a new method has emerged called 'network meta-analysis' to take into account all randomized trials in a given clinical setting to provide relative effectiveness between different treatments, whether or not they have been compared (pairwise) in randomized controlled trials. Network meta-analyses combine the results of direct comparisons from randomized trials with indirect comparisons between trials (i.e. when two treatments were not compared with each other, but have been studied in relation to a common comparator). The purpose of this note is to explain this method, its relevance and its limitations. A worked example in non-metastatic head and neck cancer is presented as illustration. PMID:26917469
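
    For readers unfamiliar with the mechanics, the elementary indirect comparison that network meta-analysis builds on (the Bucher method, under the consistency assumption; standard notation, not quoted from the paper) combines two direct estimates against a common comparator C:

```latex
\hat{d}_{AB}^{\mathrm{ind}} = \hat{d}_{AC} - \hat{d}_{BC},
\qquad
\operatorname{Var}\bigl(\hat{d}_{AB}^{\mathrm{ind}}\bigr)
  = \operatorname{Var}\bigl(\hat{d}_{AC}\bigr) + \operatorname{Var}\bigl(\hat{d}_{BC}\bigr).
```

    Network meta-analysis generalizes this to a whole graph of treatments, weighting direct and indirect evidence together; the variance sum also shows why indirect estimates are less precise than direct ones.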

  7. The Limited Informativeness of Meta-Analyses of Media Effects.

    PubMed

    Valkenburg, Patti M

    2015-09-01

    In this issue of Perspectives on Psychological Science, Christopher Ferguson reports on a meta-analysis examining the relationship between children's video game use and several outcome variables, including aggression and attention deficit symptoms (Ferguson, 2015, this issue). In this commentary, I compare Ferguson's nonsignificant effect sizes with earlier meta-analyses on the same topics that yielded larger, significant effect sizes. I argue that Ferguson's choice of partial effect sizes is unjustified on both methodological and theoretical grounds. I then plead for a more constructive debate on the effects of violent video games on children and adolescents. Until now, this debate has been dominated by two camps with diametrically opposed views on the effects of violent media on children. However, even the earliest media effects studies tell us that children can react quite differently to the same media content. Thus, if researchers truly want to understand how media affect children, rather than fight for the presence or absence of effects, they need to adopt a perspective that takes differential susceptibility to media effects more seriously. PMID:26386007

  8. Multiresolution image gathering and restoration

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.

  9. Multiresolution Simulations of Photoinjectors

    SciTech Connect

    Mihalcea, D.; Bohn, C. L.; Terzic, B.

    2006-11-27

    We report a successful implementation of a three-dimensional wavelet-based solver for Poisson's equation with Dirichlet boundary conditions, optimized for use in particle-in-cell beam dynamics simulations. We explain how the new algorithm works and the advantages it brings to accelerator simulations. The solver is integrated into a full photoinjector-simulation code (Impact-T), and the code is then benchmarked by comparing its output against that of other codes (verification) and against laboratory measurements (validation). We also simulated the AES/JLab photoinjector using a suite of codes. This activity revealed certain performance limitations and their causes.

  10. Multiresolution Simulations of Photoinjectors

    NASA Astrophysics Data System (ADS)

    Mihalcea, D.; Bohn, C. L.; Terzić, B.

    2006-11-01

    We report a successful implementation of a three-dimensional wavelet-based solver for Poisson's equation with Dirichlet boundary conditions, optimized for use in particle-in-cell beam dynamics simulations. We explain how the new algorithm works and the advantages it brings to accelerator simulations. The solver is integrated into a full photoinjector-simulation code (Impact-T), and the code is then benchmarked by comparing its output against that of other codes (verification) and against laboratory measurements (validation). We also simulated the AES/JLab photoinjector using a suite of codes. This activity revealed certain performance limitations and their causes.

  11. MOSES Inversions using Multiresolution SMART

    NASA Astrophysics Data System (ADS)

    Rust, Thomas; Fox, Lewis; Kankelborg, Charles; Courrier, Hans; Plovanic, Jacob

    2014-06-01

    We present improvements to the SMART inversion algorithm for the MOSES imaging spectrograph. MOSES, the Multi-Order Solar EUV Spectrograph, is a slitless extreme ultraviolet spectrograph designed to measure cotemporal narrowband spectra over a wide field of view via tomographic inversion of images taken at three orders of a concave diffraction grating. SMART, the Smooth Multiplicative Algebraic Reconstruction Technique, relies on a global chi-squared goodness-of-fit criterion, which enables overfit and underfit regions to "balance out" when judging fit quality. "Good" reconstructions show poor fits at some positions and length scales. Here we take a multiresolution approach to SMART, applying corrections to the reconstruction at positions and scales where correction is warranted based on the noise. The result is improved fit residuals that more closely resemble the expected noise in the images. Within the multiresolution framework it is also easy to include a regularized deconvolution of the instrument point spread functions, which we do. Differing point spread functions among MOSES spectral orders result in spurious Doppler shifts in the reconstructions, most notably near bright compact emission. We estimate the point spread functions from the data. Deconvolution is done using the Richardson-Lucy method, which is algorithmically similar to SMART. Regularization results from only correcting the reconstruction at positions and scales where correction is warranted based on the noise. We expect the point spread function deconvolution to increase signal to noise and reduce systematic error in MOSES reconstructions.
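
    The Richardson-Lucy step named above has a standard multiplicative form; a minimal 2-D sketch (Python/SciPy), without the MOSES-specific multiresolution regularization described in the abstract:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=20, eps=1e-12):
    """Multiplicative Richardson-Lucy deconvolution (algorithmically akin to SMART)."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]                      # adjoint of the blur
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)   # data / current prediction
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```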

  12. A multiresolution model for small-body gravity estimation

    NASA Astrophysics Data System (ADS)

    Jones, Brandon A.; Beylkin, Gregory; Born, George H.; Provence, Robert S.

    2011-11-01

    A new model, dubbed the MRQSphere, provides a multiresolution representation of the gravity field designed for its estimation. The multiresolution representation uses an approximation via Gaussians of the solution of Laplace's equation in the exterior of a sphere. Also, instead of spherical harmonics, variations in the angular variables are modeled by a set of functions constructed using quadratures for the sphere invariant under the icosahedral group. When combined, these tools specify the spatial resolution of the gravity field as a function of altitude and required accuracy. We define this model and apply it to representing and estimating the gravity field of the asteroid 433 Eros. We verified that an MRQSphere model derived directly from the true spherical harmonics gravity model satisfies the user-defined precision. We also use the MRQSphere model to estimate the gravity field of Eros for a simulated satellite mission, yielding a solution with accuracy limited only by measurement errors and their spatial distribution.
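
    The "approximation via Gaussians" presumably refers to a separated representation of the Laplace kernel of the following form (a sketch in standard notation, not quoted from the paper; the weights and exponents come from discretizing an integral representation, and the accuracy holds on a chosen range of distances):

```latex
\frac{1}{r} \;=\; \frac{2}{\sqrt{\pi}} \int_0^{\infty} e^{-r^2 t^2}\, dt
\;\approx\; \sum_{m=1}^{M} w_m\, e^{-p_m r^2},
\qquad \delta \le r \le R,
```

    so that the number of terms M, and hence the spatial resolution retained, can be tied to the altitude and accuracy requirements mentioned above.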

  13. Multiresolution foveated laparoscope with high resolvability.

    PubMed

    Qin, Yi; Hua, Hong; Nguyen, Mike

    2013-07-01

    A key limitation of the state-of-the-art laparoscopes for minimally invasive surgery is the tradeoff between the field of view and spatial resolution in a single-view camera system. As such, surgical procedures are usually performed at a zoomed-in view, which limits the surgeon's ability to see much outside the immediate focus of interest and causes a situational awareness challenge. We proposed a multiresolution foveated laparoscope (MRFL) aiming to address this limitation. The MRFL is able to simultaneously capture wide-angle overview and high-resolution images in real time; it can scan and engage the high-resolution view at any subregion of the entire surgical field, in analogy to the fovea of the human eye. The MRFL is able to render an equivalent resolution of 10 million pixels with a low data bandwidth requirement. The system has a large working distance (WD) from 80 to 180 mm. The spatial resolvability is about 45 μm in the object space at an 80 mm WD, while the resolvability of a conventional laparoscope is about 250 μm at a typical 50 mm surgical distance. PMID:23811873

  14. Multiresolution foveated laparoscope with high resolvability

    PubMed Central

    Qin, Yi; Hua, Hong; Nguyen, Mike

    2016-01-01

    A key limitation of the state-of-the-art laparoscopes for minimally invasive surgery is the tradeoff between the field of view and spatial resolution in a single-view camera system. As such, surgical procedures are usually performed at a zoomed-in view, which limits the surgeon’s ability to see much outside the immediate focus of interest and causes a situational awareness challenge. We proposed a multiresolution foveated laparoscope (MRFL) aiming to address this limitation. The MRFL is able to simultaneously capture wide-angle overview and high-resolution images in real time; it can scan and engage the high-resolution view at any subregion of the entire surgical field, in analogy to the fovea of the human eye. The MRFL is able to render an equivalent resolution of 10 million pixels with a low data bandwidth requirement. The system has a large working distance (WD) from 80 to 180 mm. The spatial resolvability is about 45 μm in the object space at an 80 mm WD, while the resolvability of a conventional laparoscope is about 250 μm at a typical 50 mm surgical distance. PMID:23811873

  15. Quantum Mechanical Operators in Multiresolution Hilbert Spaces

    NASA Astrophysics Data System (ADS)

    Pipek, János

    2007-12-01

    Wavelet analysis, which is a shorthand notation for the concept of multiresolution analysis (MRA), is becoming increasingly popular in high-efficiency storage algorithms for complex spatial distributions. This approach is applied here to describing wave functions of quantum systems. At any resolution level of MRA expansions a physical observable is represented by an infinite matrix which is "canonically" chosen as the projection of its operator in the Schrödinger picture onto the subspace of the given resolution. It is shown that this canonical choice is only a particular member of possible operator representations. Among these, there exists an optimal choice, usually different from the canonical one, which gives the best numerical values in eigenvalue problems. This construction works even in those cases where the canonical definition is unusable. The commutation relation of physical operators is also studied in MRA subspaces. It is shown that the required commutation rules are satisfied in the fine resolution limit, whereas in coarse-grained spaces a correction appears depending only on the representation of the momentum operator.
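
    In symbols (standard projection notation assumed here, not quoted from the paper): with P_j the orthogonal projector onto the resolution-j subspace V_j, the canonical representation of an observable O and the fine-resolution behavior of the position-momentum commutator can be sketched as

```latex
O_j \;=\; P_j\, O\, P_j \quad \text{on } V_j,
\qquad
[\,\hat{x}_j,\,\hat{p}_j\,] \;\longrightarrow\; i\hbar\, P_j
\quad (j \to \infty),
```

    with a correction term at coarse levels that, per the abstract, depends only on how the momentum operator is represented.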

  16. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    SciTech Connect

    Milani, Gabriele; Valente, Marco

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is considerably lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial in reducing the seismic vulnerability of this kind of historical structure.

  17. Analysing the capabilities and limitations of tracer tests in stream-aquifer systems

    USGS Publications Warehouse

    Wagner, B.J.; Harvey, J.W.

    2001-01-01

    The goal of this study was to identify the limitations that apply when we couple conservative-tracer injection with reactive solute sampling to identify the transport and reaction processes active in a stream. Our methodology applies Monte Carlo uncertainty analysis to assess the ability of the tracer approach to identify the governing transport and reaction processes for a wide range of stream-solute transport and reaction scenarios likely to be encountered in high-gradient streams. Our analyses identified dimensionless factors that define the capabilities and limitations of the tracer approach. These factors provide a framework for comparing and contrasting alternative tracer test designs.

  18. Large Deformation Multiresolution Diffeomorphic Metric Mapping for Multiresolution Cortical Surfaces: A Coarse-to-Fine Approach.

    PubMed

    Tan, Mingzhen; Qiu, Anqi

    2016-09-01

    Brain surface registration is an important tool for characterizing cortical anatomical variations and understanding their roles in normal cortical development and psychiatric diseases. However, surface registration remains challenging due to complicated cortical anatomy and its large differences across individuals. In this paper, we propose a fast coarse-to-fine algorithm for surface registration by adapting the large deformation diffeomorphic metric mapping (LDDMM) framework for surface mapping, and show improvements in speed and accuracy via a multiresolution analysis of surface meshes and the construction of multiresolution diffeomorphic transformations. The proposed method constructs a family of multiresolution meshes that are used as natural sparse priors of the cortical morphology. At varying resolutions, these meshes act as anchor points where the parameterization of multiresolution deformation vector fields can be supported, allowing the construction of a bundle of multiresolution deformation fields, each originating from a different resolution. Using a coarse-to-fine approach, we show a potential reduction in computation cost along with improvements in sulcal alignment when compared with LDDMM surface mapping. PMID:27254865

  19. Optical design and system engineering of a multiresolution foveated laparoscope.

    PubMed

    Qin, Yi; Hua, Hong

    2016-04-10

    The trade-off between the spatial resolution and field of view is one major limitation of state-of-the-art laparoscopes. In order to address this limitation, we demonstrated a multiresolution foveated laparoscope (MRFL) which is capable of simultaneously capturing both a wide-angle overview for situational awareness and a high-resolution zoomed-in view for accurate surgical operation. In this paper, we focus on presenting the optical design and system engineering process for developing the MRFL prototype. More specifically, the first-order specifications and properties of the optical system are discussed, followed by a detailed discussion on the optical design strategy and procedures of each subsystem. The optical performance of the final system, including diffraction efficiency, tolerance analysis, stray light and ghost image, is fully analyzed. Finally, the prototype assembly process and the final prototype are demonstrated. PMID:27139875

  20. Optical design and system engineering of a multiresolution foveated laparoscope

    PubMed Central

    Qin, Yi; Hua, Hong

    2016-01-01

    The trade-off between the spatial resolution and field of view is one major limitation of state-of-the-art laparoscopes. In order to address this limitation, we demonstrated a multiresolution foveated laparoscope (MRFL) which is capable of simultaneously capturing both a wide-angle overview for situational awareness and a high-resolution zoomed-in view for accurate surgical operation. In this paper, we focus on presenting the optical design and system engineering process for developing the MRFL prototype. More specifically, the first-order specifications and properties of the optical system are discussed, followed by a detailed discussion on the optical design strategy and procedures of each subsystem. The optical performance of the final system, including diffraction efficiency, tolerance analysis, stray light and ghost image, is fully analyzed. Finally, the prototype assembly process and the final prototype are demonstrated. PMID:27139875

  1. Multiresolution approach based on projection matrices

    SciTech Connect

    Vargas, Javier; Quiroga, Juan Antonio

    2009-03-01

    Active triangulation measurement systems with a rigid geometric configuration are inappropriate for scanning large objects with low measuring tolerances. The reason is that the ratio between the depth recovery error and the lateral extension is a constant that depends on the geometric setup. As a consequence, measuring large areas with low depth recovery error requires the use of multiresolution techniques. We propose a multiresolution technique based on a previously calibrated camera-projector system. The method consists of changing the camera's or projector's parameters in order to increase the system's depth sensitivity. A subpixel retroprojection error in the self-calibration process and a decrease of approximately one order of magnitude in the depth recovery error can be achieved using the proposed method.

  2. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of the optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: the multiresolution bilateral filter, where bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
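
    A simplified sketch of that decomposition, assuming PyWavelets and a brute-force bilateral filter (the paper filters the approximation subband at each level and also thresholds the detail subbands; only a single bilateral pass on the coarsest approximation is shown here):

```python
import numpy as np
import pywt  # PyWavelets

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Brute-force bilateral filter for a small grayscale float image."""
    spatial = np.exp(-(np.mgrid[-radius:radius + 1, -radius:radius + 1] ** 2)
                     .sum(axis=0) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Combined spatial and range (intensity-difference) weights.
            w = spatial * np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def multiresolution_bilateral(img, wavelet="db4", levels=2):
    """Bilateral-filter the low-frequency (approximation) subband, then reconstruct."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    coeffs[0] = bilateral(coeffs[0])
    return pywt.waverec2(coeffs, wavelet)
```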

  3. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage demands a corresponding huge penalty in I/O time, rendering time and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet based multiresolution scheme was introduced in image processing, for the purposes of data compression and feature extraction. Unlike photographic image data which has rather simple settings, computational field simulation data needs more careful treatment in applying the multiresolution technique. While the image data sits on a regular spaced grid, the simulation data usually resides on a structured curvilinear grid or unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. The data characteristics demand more restrictive conditions. In general, the photographic images have very little inherent smoothness with discontinuities almost everywhere. On the other hand, the numerical solutions have smoothness almost everywhere and discontinuities in local areas (shock, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of

  4. Incorporation of concentration data below the limit of quantification in population pharmacokinetic analyses.

    PubMed

    Keizer, Ron J; Jansen, Robert S; Rosing, Hilde; Thijssen, Bas; Beijnen, Jos H; Schellens, Jan H M; Huitema, Alwin D R

    2015-03-01

    Handling of data below the lower limit of quantification (LLOQ), i.e., below the limit of quantification (BLOQ), in population pharmacokinetic (PopPK) analyses is important for reducing bias and imprecision in parameter estimation. We aimed to evaluate whether using the concentration data below the LLOQ has superior performance over several established methods. The performance of this approach ("All data") was evaluated and compared to other methods: "Discard," "LLOQ/2," and "LIKE" (likelihood-based). An analytical and residual error model was constructed on the basis of in-house analytical method validations and analyses from literature, with additional variability included to account for model misspecification. Simulation analyses were performed for various levels of BLOQ, several structural PopPK models, and additional influences. Performance was evaluated by relative root mean squared error (RMSE) and run success for the various BLOQ approaches. Performance was also evaluated for a real PopPK data set. For all PopPK models and levels of censoring, RMSE values were lowest using "All data." Performance of the "LIKE" method was better than the "LLOQ/2" or "Discard" method. Differences between all methods were small at the lowest level of BLOQ censoring. The "LIKE" method resulted in low successful minimization (<50%) and covariance step success (<30%), although estimates were obtained in most runs (∼90%). For the real PK data set (7.4% BLOQ), similar parameter estimates were obtained using all methods. Incorporation of BLOQ concentrations showed superior performance in terms of bias and precision over established BLOQ methods, and was shown to be feasible in a real PopPK analysis. PMID:26038706
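
    As a deliberately simplified illustration of why the handling choice matters (a toy summary statistic on censored lognormal data, not the paper's PopPK estimation), compare the "Discard" and "LLOQ/2" rules with keeping all reported values:

```python
import numpy as np

rng = np.random.default_rng(1)
conc = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)  # "true" concentrations
lloq = np.quantile(conc, 0.30)                          # ~30% of values are BLOQ

above = conc[conc >= lloq]
n_bloq = (conc < lloq).sum()
mean_discard = above.mean()                             # "Discard": biased upward
mean_half = np.concatenate([above, np.full(n_bloq, lloq / 2)]).mean()  # "LLOQ/2"
mean_all = conc.mean()                                  # "All data": values reported below LLOQ

print(f"discard={mean_discard:.3f}  lloq/2={mean_half:.3f}  all={mean_all:.3f}")
```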

  5. Incorporation of concentration data below the limit of quantification in population pharmacokinetic analyses

    PubMed Central

    Keizer, Ron J; Jansen, Robert S; Rosing, Hilde; Thijssen, Bas; Beijnen, Jos H; Schellens, Jan H M; Huitema, Alwin D R

    2015-01-01

    Handling of data below the lower limit of quantification (LLOQ), i.e., below the limit of quantification (BLOQ), in population pharmacokinetic (PopPK) analyses is important for reducing bias and imprecision in parameter estimation. We aimed to evaluate whether using the concentration data below the LLOQ has superior performance over several established methods. The performance of this approach (“All data”) was evaluated and compared to other methods: “Discard,” “LLOQ/2,” and “LIKE” (likelihood-based). An analytical and residual error model was constructed on the basis of in-house analytical method validations and analyses from literature, with additional variability included to account for model misspecification. Simulation analyses were performed for various levels of BLOQ, several structural PopPK models, and additional influences. Performance was evaluated by relative root mean squared error (RMSE) and run success for the various BLOQ approaches. Performance was also evaluated for a real PopPK data set. For all PopPK models and levels of censoring, RMSE values were lowest using “All data.” Performance of the “LIKE” method was better than the “LLOQ/2” or “Discard” method. Differences between all methods were small at the lowest level of BLOQ censoring. The “LIKE” method resulted in low successful minimization (<50%) and covariance step success (<30%), although estimates were obtained in most runs (∼90%). For the real PK data set (7.4% BLOQ), similar parameter estimates were obtained using all methods. Incorporation of BLOQ concentrations showed superior performance in terms of bias and precision over established BLOQ methods, and was shown to be feasible in a real PopPK analysis. PMID:26038706

  6. Exploring a Multi-resolution Approach Using AMIP Simulations

    SciTech Connect

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun; Yang, Qing; Lu, Jian; Hagos, Samson M.; Rauscher, Sara; Dong, Li; Ringler, Todd; Lauritzen, P. H.

    2015-07-31

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  7. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

    High quality volume visualization through ray casting on graphics processing units (GPU) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy, which are generated by down-sampling eight neighboring nodes on a finer level. Due to the limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be maintained locally for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory thus acts as a second-level cache for these sources, from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture atlas in GPU memory. This texture atlas contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the volume ray casting algorithm to access the whole working set of bricks through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations. We encode this information into a small 3D index texture which represents the current octree

  8. Multiresolutional models of uncertainty generation and reduction

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

    Kolmogorov's axiomatic principles of probability theory are reconsidered with respect to their applicability to the processes of knowledge acquisition and interpretation. The model of uncertainty generation is modified in order to reflect the reality of engineering problems, particularly in the area of intelligent control. This model implies algorithms of learning which are organized in three groups reflecting the degree of conceptualization of the knowledge the system is dealing with. It is essential that these algorithms are motivated by, and consistent with, the multiresolutional model of knowledge representation which is reflected in the structure of models and the algorithms of learning.

  9. An adapted multi-resolution representation of regional VTEC

    NASA Astrophysics Data System (ADS)

    Liang, Wenjing; Dettmering, Denise; Schmidt, Michael

    2014-05-01

    The resolution of ionosphere models is mainly limited by inhomogeneously distributed input data. The International GNSS Service (IGS) provides global ionosphere maps (GIMs) of vertical total electron content (VTEC) values with a spatial resolution of 2.5° in latitude and 5° in longitude. In order to resolve local ionospheric structures and support high-precision GPS positioning, different high-resolution regional ionosphere models have been developed using dense observation networks. However, there is no model available with a spatial resolution adapted to the data distribution. In this study we present a regional multi-resolution VTEC model which adapts the model resolution to the data distribution. In our approach, VTEC consists of a given background model such as the International Reference Ionosphere (IRI) and an unknown correction part modeled as a series expansion in terms of B-spline scaling functions. The resolution level of the B-spline functions has to be determined by the distribution of the input data. With a sufficient number of observations, a higher level can be chosen, i.e., finer structures of VTEC can be modeled. The input data are heterogeneously distributed; specifically, the observations are dense over the continent whereas large data gaps exist over the oceans. Furthermore, the GPS stations are unevenly distributed over the continent. A data-adapted VTEC model is achieved by combining a regional VTEC part with some local densification areas, each represented by a B-spline expansion. The unknown scaling coefficients of all these parts are then determined by parameter estimation. In this contribution, our model approach is introduced, including the method of multi-resolution representation (MRR) and of combining the regional and local model parts. Furthermore, we show an example based on GNSS observations from selected permanent stations in South America.
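
    A sketch of the model structure in formulas (notation assumed here, not taken from the paper): the correction to the background model is a tensor-product B-spline expansion whose levels are set by the data density,

```latex
\mathrm{VTEC}(\varphi, \lambda) \;=\; \mathrm{VTEC}_{\mathrm{IRI}}(\varphi, \lambda)
\;+\; \sum_{k_1} \sum_{k_2} d_{k_1,k_2}\,
\phi^{J_1}_{k_1}(\varphi)\, \phi^{J_2}_{k_2}(\lambda),
```

    with additional expansions of the same form at higher levels added over the local densification areas; the coefficients of all parts are estimated jointly.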

  10. Using Controlled Landslide Initiation Experiments to Test Limit-Equilibrium Analyses of Slope Stability

    NASA Astrophysics Data System (ADS)

    Reid, M. E.; Iverson, R. M.; Brien, D. L.; Iverson, N. R.; Lahusen, R. G.; Logan, M.

    2004-12-01

    Most studies of landslide initiation employ limit equilibrium analyses of slope stability. Owing to a lack of detailed data, however, few studies have tested limit-equilibrium predictions against physical measurements of slope failure. We have conducted a series of field-scale, highly controlled landslide initiation experiments at the USGS debris-flow flume in Oregon; these experiments provide exceptional data to test limit equilibrium methods. In each of seven experiments, we attempted to induce failure in a 0.65 m thick, 2 m wide, 6 m³ prism of loamy sand placed behind a retaining wall in the 31° sloping flume. We systematically investigated triggering of sliding by groundwater injection, by prolonged moderate-intensity sprinkling, and by bursts of high intensity sprinkling. We also used vibratory compaction to control soil porosity and thereby investigate differences in failure behavior of dense and loose soils. About 50 sensors were monitored at 20 Hz during the experiments, including nests of tiltmeters buried at 7 cm spacing to define subsurface failure geometry, and nests of tensiometers and pore-pressure sensors to define evolving pore-pressure fields. In addition, we performed ancillary laboratory tests to measure soil porosity, shear strength, hydraulic conductivity, and compressibility. In loose soils (porosity of 0.52 to 0.55), abrupt failure typically occurred along the flume bed after substantial soil deformation. In denser soils (porosity of 0.41 to 0.44), gradual failure occurred within the soil prism. All failure surfaces had a maximum length to depth ratio of about 7. In even denser soil (porosity of 0.39), we could not induce failure by sprinkling. The internal friction angle of the soils varied from 28° to 40° with decreasing porosity. We analyzed stability at failure, given the observed pore-pressure conditions just prior to large movement, using a 1-D infinite-slope method and a more complete 2-D Janbu method. Each method provides a static
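
    The 1-D infinite-slope method mentioned at the end of the abstract evaluates a factor of safety; in a standard form (notation defined here, not quoted from the authors),

```latex
\mathrm{FS} \;=\; \frac{c' + \left(\gamma z \cos^2\beta - u\right)\tan\varphi'}
                       {\gamma z \sin\beta \cos\beta},
```

    where c' is the effective cohesion, φ' the internal friction angle (28° to 40° here), γ the soil unit weight, z the slip-surface depth, β the slope angle (31° for the flume bed), and u the pore-water pressure on the slip surface. Failure is predicted as FS falls to 1, which is why the evolving pore-pressure field is the key measured input.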

  11. Active pixel sensor array with multiresolution readout

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)

    1999-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.

  12. Multiresolution simulated annealing for brain image analysis

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Majcenic, Zoran

    1999-05-01

    Analysis of biomedical images is an important step in quantification of various diseases such as spontaneous human intracerebral hemorrhage (ICH). In particular, the study of outcome in patients having ICH requires measurements of various ICH parameters, such as hemorrhage volume, and their change over time. A multiresolution probabilistic approach for segmentation of CT head images is presented in this work. This method views the segmentation problem as a pixel labeling problem. In this application the labels are: background, skull, brain tissue, and ICH. The proposed method is based on the Maximum A-Posteriori (MAP) estimation of the unknown pixel labels. The MAP method maximizes the a-posteriori probability of the segmented image given the observed (input) image. A Markov random field (MRF) model has been used for the posterior distribution. The MAP estimate of the segmented image has been determined using the simulated annealing (SA) algorithm. The SA algorithm is used to minimize the energy function associated with the MRF posterior distribution function. A multiresolution SA (MSA) has been developed to speed up the annealing process and is presented in detail in this work. A knowledge-based classification based on the brightness, size, shape and relative position toward other regions is performed at the end of the procedure. The regions are identified as background, skull, brain, ICH and calcifications.

  13. Liver fibrosis grading using multiresolution histogram information in real-time elastography

    NASA Astrophysics Data System (ADS)

    Albouy-Kissi, A.; Sarry, L.; Massoulier, S.; Bonny, C.; Randl, K.; Abergel, A.

    2010-03-01

    Despite many limitations, liver biopsy remains the gold standard method for grading and staging liver fibrosis. Several modalities have been developed for a non-invasive assessment of liver diseases. Real-time elastography may constitute a true alternative to liver biopsy by providing an image of tissue elasticity distribution correlated to the fibrosis grade. In this paper, we investigate a new approach for the assessment of liver fibrosis by the classification of fibrosis morphometry. Multiresolution histograms, based on a combination of intensity and texture features, have been tested as the feature space. Thus, the ability of such multiresolution histograms to discriminate fibrosis grades has been proven. The results have been tested on seventeen patients who underwent real-time elastography and FibroScan examinations.

  14. Multiresolution local tomography in dental radiology using wavelets.

    PubMed

    Niinimäki, K; Siltanen, S; Kolehmainen, V

    2007-01-01

    A Bayesian multiresolution model for local tomography in dental radiology is proposed. In this model a wavelet basis is used to represent dental structures and the prior information is modeled in terms of a Besov norm penalty. The proposed wavelet-based multiresolution method is used to reduce the number of unknowns in the reconstruction problem by abandoning fine-scale wavelets outside the region of interest (ROI). This multiresolution model allows a significant reduction in the number of unknowns without loss of reconstruction accuracy inside the ROI. The feasibility of the proposed method is tested with two-dimensional (2D) examples using simulated and experimental projection data from dental specimens. PMID:18002604

  15. MULTIRESOLUTION REPRESENTATION OF OPERATORS WITH BOUNDARY CONDITIONS ON SIMPLE DOMAINS

    SciTech Connect

    Beylkin, Gregory; Fann, George I; Harrison, Robert J; Kurcz, Christopher E; Monzon, Lucas A

    2011-01-01

    We develop a multiresolution representation of a class of integral operators satisfying boundary conditions on simple domains in order to construct fast algorithms for their application. We also elucidate some delicate theoretical issues related to the construction of periodic Green's functions for Poisson's equation. By applying the method of images to the non-standard form of the free space operator, we obtain lattice sums that converge absolutely on all scales, except possibly on the coarsest scale. On the coarsest scale the lattice sums may be only conditionally convergent and, thus, allow for some freedom in their definition. We use the limit of square partial sums as the definition and obtain a systematic, simple approach to the construction (in any dimension) of periodized operators with sparse non-standard forms. We illustrate the results on several examples in dimensions one and three: the Hilbert transform, the projector on divergence-free functions, the non-oscillatory Helmholtz Green's function and the Poisson operator. Remarkably, the limit of square partial sums yields a periodic Poisson Green's function which is not a convolution. Using a short sum of decaying Gaussians to approximate periodic Green's functions, we arrive at fast algorithms for their application. We further show that the results obtained for operators with periodic boundary conditions extend to operators with Dirichlet, Neumann, or mixed boundary conditions.

  16. A qualitative multiresolution model for counterterrorism

    NASA Astrophysics Data System (ADS)

    Davis, Paul K.

    2006-05-01

    This paper describes a prototype model for exploring counterterrorism issues related to the recruiting effectiveness of organizations such as al Qaeda. The prototype demonstrates how a model can be built using qualitative input variables appropriate to representation of social-science knowledge, and how a multiresolution design can allow a user to think and operate at several levels - such as first conducting low-resolution exploratory analysis and then zooming into several layers of detail. The prototype also motivates and introduces a variety of nonlinear mathematical methods for representing how certain influences combine. This has value for, e.g., representing collapse phenomena underlying some theories of victory, and for explanations of historical results. The methodology is believed to be suitable for more extensive system modeling of terrorism and counterterrorism.

  17. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  18. Multi-resolution analysis for ENO schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1991-01-01

    Given a function, u(x), which is represented by its cell-averages in cells which are formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis was applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality at the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
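
    A minimal sketch of this decomposition for 1-D dyadic grids (Python/NumPy), using a piecewise-constant prediction where the actual ENO framework uses higher-order interpolation and unstructured grids:

```python
import numpy as np

def mra_decompose(cell_avgs, levels=3):
    """Split 1-D cell averages into coarsest averages plus per-scale details.
    len(cell_avgs) must be divisible by 2**levels."""
    u = np.asarray(cell_avgs, dtype=float)
    details = []
    for _ in range(levels):
        coarse = 0.5 * (u[0::2] + u[1::2])        # exact averages on the coarser grid
        details.append(u - np.repeat(coarse, 2))  # small where u is locally smooth
        u = coarse
    return u, details

def mra_reconstruct(coarse, details):
    u = coarse
    for d in reversed(details):                   # coarse-to-fine synthesis
        u = np.repeat(u, 2) + d
    return u
```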

  19. Hanging-wall deformation above a normal fault: sequential limit analyses

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaoping; Leroy, Yves M.; Maillot, Bertrand

    2015-04-01

    The deformation in the hanging wall above a segmented normal fault is analysed with the sequential limit analysis (SLA). The method combines predictions of the dip and position of the active fault and axial surface with geometrical evolution à la Suppe (Groshong, 1989). Two problems are considered. The first follows the prototype proposed by Patton (2005) with a pre-defined convex, segmented fault. The orientation of the upper segment of the normal fault is an unknown in the second problem. The loading in both problems consists of the retreat of the back wall and of sedimentation. This sedimentation starts from the lowest point of the topography and acts at the rate rs relative to the wall retreat rate. For the first problem, the normal fault either has zero friction or a friction value set to 25° or 30° to fit the experimental results (Patton, 2005). In the zero friction case, a hanging-wall anticline develops much like in the experiments. In the 25° friction case, slip on the upper segment is accompanied by rotation of the axial plane producing a broad shear zone rooted at the fault bend. The same observation is made in the 30° case, but without slip on the upper segment. Experimental outcomes show a behaviour in between these two latter cases. For the second problem, mechanics predicts a concave fault bend with an upper segment dip decreasing during extension. The axial surface rooting at the normal fault bend sees its dip increasing during extension, resulting in a curved roll-over. Softening on the normal fault leads to a stepwise rotation responsible for strain partitioning into small blocks in the hanging wall. The rotation is due to the subsidence of the topography above the hanging wall. Sedimentation in the lowest region thus reduces the rotations. Note that these rotations predicted by mechanics are not accounted for in most geometrical approaches (Xiao and Suppe, 1992) and are observed in sand box experiments (Egholm et al., 2007, referring

  20. A fast multi-resolution approach to tomographic PIV

    NASA Astrophysics Data System (ADS)

    Discetti, Stefano; Astarita, Tommaso

    2012-03-01

    Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional, non-intrusive anemometric measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed in the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, which are well suited to handling the problem of a limited number of views, but are computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimate of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated particle distributions, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a considerable reduction in memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line-of-sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same accuracy.
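    The MART step at the core of this family of methods applies a multiplicative correction to every voxel along each line of sight. A minimal dense-matrix sketch follows (our illustration, not the paper's code; real Tomo-PIV implementations use sparse camera-model weights, and the relaxation parameter mu is an assumption):

```python
import numpy as np

def mart(W, I, n_iter=5, mu=1.0, eps=1e-12):
    """Multiplicative algebraic reconstruction technique (MART).
    W: (n_pixels, n_voxels) weighting matrix; I: recorded pixel
    intensities. Returns non-negative voxel intensities E."""
    E = np.ones(W.shape[1])                    # uniform first guess
    for _ in range(n_iter):
        for i in range(W.shape[0]):            # one pixel (ray) at a time
            proj = W[i] @ E + eps              # current projection of E
            E *= (I[i] / proj) ** (mu * W[i])  # multiplicative update
    return E
```

    The multi-resolution idea in the paper amounts to running such iterations first on a coarser voxel grid and using the interpolated result as the first guess on the fine grid.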

  1. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay rather than the entire sample process. Our objective was to develop a method to determine the 95% LOD (lowest co...

  2. Multi-resolution analysis for ENO schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1993-01-01

    Given a function u(x) which is represented by its cell averages on cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. We apply this multi-resolution analysis to essentially non-oscillatory (ENO) schemes in order to reduce the number of numerical flux computations needed to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality on the appropriate coarser grid. We present an efficient algorithm for implementing this program in the one-dimensional case; this algorithm can be extended to the multi-dimensional case with Cartesian grids.

  3. Multiresolution spectrotemporal analysis of complex sounds

    NASA Astrophysics Data System (ADS)

    Chi, Taishih; Ru, Powen; Shamma, Shihab A.

    2005-08-01

    A computational model of auditory analysis is described that is inspired by psychoacoustical and neurophysiological findings in early and central stages of the auditory system. The model provides a unified multiresolution representation of the spectral and temporal features likely critical in the perception of sound. Simplified, more specifically tailored versions of this model have already been validated by successful application in the assessment of speech intelligibility [Elhilali et al., Speech Commun. 41(2-3), 331-348 (2003); Chi et al., J. Acoust. Soc. Am. 106, 2719-2732 (1999)] and in explaining the perception of monaural phase sensitivity [R. Carlyon and S. Shamma, J. Acoust. Soc. Am. 114, 333-348 (2003)]. Here we provide a more complete mathematical formulation of the model, illustrating how complex signals are transformed through various stages of the model, and relating it to comparable existing models of auditory processing. Furthermore, we outline several reconstruction algorithms to resynthesize the sound from the model output so as to evaluate the fidelity of the representation and contribution of different features and cues to the sound percept.

  4. Multiresolution segmentation technique for spine MRI images

    NASA Astrophysics Data System (ADS)

    Li, Haiyun; Yan, Chye H.; Ong, Sim Heng; Chui, Cheekong K.; Teoh, Swee H.

    2002-05-01

    In this paper, we describe a hybrid method for segmentation of spinal magnetic resonance images, developed by analogy with the natural phenomenon of stones appearing as water recedes. The candidate segmentation regions correspond to the stones, with characteristics similar to those of intensity extrema, edges, intensity ridges and grey-level blobs. The segmentation method is implemented as a combination of wavelet multiresolution decomposition and fuzzy clustering. First, thresholding is performed dynamically according to local characteristics to detect possible target areas. We then use fuzzy c-means clustering in concert with wavelet multiscale edge detection to identify the maximum-likelihood anatomical and functional target areas. Fuzzy c-means uses iterative optimization of an objective function based on a weighted similarity measure between the pixels in the image and each of the c cluster centers. Local extrema of this objective function are indicative of an optimal clustering of the input data. The multiscale edges can be detected and characterized from local maxima of the modulus of the wavelet transform, while noise can be reduced to some extent by applying thresholds. The method provides an efficient and robust algorithm for spinal image segmentation. Examples are presented to demonstrate the efficiency of the technique on spinal MRI images.
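    The fuzzy c-means step alternates membership and center updates until the objective stabilizes. A generic sketch of that component alone (the wavelet edge-detection half of the hybrid pipeline is omitted; the values of c, m and the iteration count are assumptions):

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on feature vectors X of shape (n, dim).
    Returns memberships u of shape (n, c) and centers of shape (c, dim)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]
    for _ in range(n_iter):
        # Distances of every point to every center, kept strictly positive.
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
        # Center update: weighted mean of the points with weights u_ij^m.
        um = u.T ** m
        centers = (um @ X) / np.sum(um, axis=1, keepdims=True)
    return u, centers
```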

  5. A new study on mammographic image denoising using multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for early detection of breast cancer. However, lesion areas of the breast are difficult to detect because mammograms are contaminated by noise. This work discusses various multiresolution denoising techniques, including the classical methods based on wavelets and contourlets; emerging multiresolution methods are also investigated. A new denoising method based on the dual-tree contourlet transform (DCT) is proposed; the DCT possesses the advantages of approximate shift invariance, directionality and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, and obtains better performance than the other methods both in visual effect and in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) values.
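    The classical wavelet baseline against which such methods are compared can be stated compactly. A sketch of soft-threshold (VisuShrink-style) wavelet denoising, assuming the PyWavelets package and a db8 wavelet; this is the generic baseline, not the paper's dual-tree contourlet method:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(img, wavelet="db8", level=3):
    """Soft-threshold wavelet denoising of a 2-D image array (VisuShrink)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal subband (MAD).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in band)
        for band in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```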

  6. Multi-Resolution Dynamic Meshes with Arbitrary Deformations

    SciTech Connect

    Shamir, A.; Pascucci, V.; Bajaj, C.

    2000-07-10

    Multi-resolution techniques and models have been shown to be effective for the display and transmission of large static geometric objects. Dynamic environments with internally deforming models and scientific simulations using dynamic meshes pose greater challenges in terms of time and space, and require the development of similar solutions. In this paper we introduce the T-DAG, an adaptive multi-resolution representation for dynamic meshes with arbitrary deformations, including attribute, position, connectivity and topology changes. T-DAG stands for Time-dependent Directed Acyclic Graph, which defines the structure supporting this representation. We also provide an incremental (in time) algorithm for constructing the T-DAG representation of a given input mesh. This enables traversal and use of the multi-resolution dynamic model for partial playback while new time-steps are still being constructed.

  7. Telescopic multi-resolution augmented reality

    NASA Astrophysics Data System (ADS)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known principles of physics, biology, and chemistry. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system, driven by real-world measurements. Such an interdisciplinary approach enables not only multi-scale, telescopic visualization of real and virtual information but also the conduct of thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Towards this goal, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information at multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as `make believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, visualized pedagogically through consistent, zoom-in-and-out, multi-scale approximations.

  8. Numerical analyses on optical limiting performances of chloroindium phthalocyanines with different substituent positions

    NASA Astrophysics Data System (ADS)

    Yu-Jin, Zhang; Xing-Zhe, Li; Ji-Cai, Liu; Chuan-Kui, Wang

    2016-01-01

    The optical limiting properties of two soluble chloroindium phthalocyanines with α- and β-alkoxyl substituents in a nanosecond laser field have been studied by numerically solving the coupled singlet-triplet rate equations together with the paraxial wave equation under the Crank-Nicolson scheme. Both transverse and longitudinal effects of the laser field on the photophysical properties of the compounds are considered. An effective transfer time between the ground state and the lowest triplet state is defined in the reformulated rate equations to characterize the dynamics of singlet-triplet population transfer. It is found that both phthalocyanines exhibit good nonlinear optical absorption, while the compound with the α-substituent shows enhanced optical limiting performance. Our ab initio calculations reveal that the phthalocyanine with the α-substituent has more pronounced electron delocalization and lower frontier-orbital transition energies, which are responsible for its preferable photophysical properties. Project supported by the National Basic Research Program of China (Grant No. 2011CB808100), the National Natural Science Foundation of China (Grant Nos. 11204078 and 11574082), and the Fundamental Research Funds for the Central Universities of China (Grant No. 2015MS54).

  9. Examination of metabolic responses to phosphorus limitation via proteomic analyses in the marine diatom Phaeodactylum tricornutum

    PubMed Central

    Feng, Tian-Ya; Yang, Zhi-Kai; Zheng, Jian-Wei; Xie, Ying; Li, Da-Wei; Murugan, Shanmugaraj Bala; Yang, Wei-Dong; Liu, Jie-Sheng; Li, Hong-Ye

    2015-01-01

    Phosphorus (P) is an essential macronutrient for the survival of marine phytoplankton. In the present study, the phytoplankton response to phosphorus limitation was studied by proteomic profiling in the diatom Phaeodactylum tricornutum at both the cellular and molecular levels. A total of 42 non-redundant proteins were identified, among which 8 proteins were upregulated and 34 proteins were downregulated. The results also showed that the proteins associated with inorganic phosphate uptake were downregulated, whereas the proteins involved in organic phosphorus uptake, such as alkaline phosphatase, were upregulated. The proteins involved in metabolic responses such as protein degradation, lipid accumulation and photorespiration were upregulated, whereas energy metabolism, photosynthesis, amino acid and nucleic acid metabolism tended to be downregulated. Overall, our results show the changes in protein levels of P. tricornutum during phosphorus stress. This study is a prelude to understanding the role of phosphorus in marine biogeochemical cycles and the phytoplankton response to phosphorus scarcity in the ocean. It also provides insight into the succession of phytoplankton communities, offering a scientific basis for elucidating the mechanism of algal blooms. PMID:26020491

  10. Examination of metabolic responses to phosphorus limitation via proteomic analyses in the marine diatom Phaeodactylum tricornutum.

    PubMed

    Feng, Tian-Ya; Yang, Zhi-Kai; Zheng, Jian-Wei; Xie, Ying; Li, Da-Wei; Murugan, Shanmugaraj Bala; Yang, Wei-Dong; Liu, Jie-Sheng; Li, Hong-Ye

    2015-01-01

    Phosphorus (P) is an essential macronutrient for the survival of marine phytoplankton. In the present study, the phytoplankton response to phosphorus limitation was studied by proteomic profiling in the diatom Phaeodactylum tricornutum at both the cellular and molecular levels. A total of 42 non-redundant proteins were identified, among which 8 proteins were upregulated and 34 proteins were downregulated. The results also showed that the proteins associated with inorganic phosphate uptake were downregulated, whereas the proteins involved in organic phosphorus uptake, such as alkaline phosphatase, were upregulated. The proteins involved in metabolic responses such as protein degradation, lipid accumulation and photorespiration were upregulated, whereas energy metabolism, photosynthesis, amino acid and nucleic acid metabolism tended to be downregulated. Overall, our results show the changes in protein levels of P. tricornutum during phosphorus stress. This study is a prelude to understanding the role of phosphorus in marine biogeochemical cycles and the phytoplankton response to phosphorus scarcity in the ocean. It also provides insight into the succession of phytoplankton communities, offering a scientific basis for elucidating the mechanism of algal blooms. PMID:26020491

  11. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique for enabling interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms: one error value must be computed for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of that pair. This approach dramatically reduces the computation time involved and allows the error associated with a new transfer function to be re-computed quickly.
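    The unique-pair table amounts to a 256 x 256 joint histogram of (original, approximated) byte values, built once per approximation level; changing the transfer function then costs a fixed 256^2 evaluations rather than one per voxel. A sketch of the idea (our illustration; the function and variable names are hypothetical):

```python
import numpy as np

def pair_table(original, approx):
    """Joint histogram of (original, approximated) uint8 values: counts
    of each unique error pair; built once per approximation level."""
    hist = np.zeros((256, 256), dtype=np.int64)
    np.add.at(hist, (original.ravel(), approx.ravel()), 1)
    return hist

def total_error(hist, err_fn):
    """Re-evaluate the total error for a new transfer function in
    O(256^2), independent of the number of voxels."""
    a, b = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
    return float(np.sum(hist * err_fn(a, b)))

# Example with a hypothetical length-256 opacity transfer function tf:
# hist = pair_table(vol, vol_approx)
# err = total_error(hist, lambda a, b: (tf[a] - tf[b]) ** 2)
```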

  12. Preliminary scoping safety analyses of the limiting design basis protected accidents for the Fast Flux Test Facility tritium production core

    SciTech Connect

    Heard, F.J.

    1997-11-19

    The SAS4A/SASSYS-1 computer code is used to perform a series of analyses for the limiting protected design basis transient events given a representative tritium and medical isotope production core design proposed for the Fast Flux Test Facility. The FFTF tritium and isotope production mission will require a different core loading which features higher enrichment fuel, tritium targets, and medical isotope production assemblies. Changes in several key core parameters, such as the Doppler coefficient and delayed neutron fraction, will affect the transient response of the reactor. Both reactivity insertion and reduction-of-heat-removal events were analyzed. The analysis methods and modeling assumptions are described. Results of the analyses and comparison against fuel pin performance criteria are presented to provide quantification that the plant protection system is adequate to maintain the necessary safety margins and assure cladding integrity.

  13. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of Gabor wavelet frequencies. High precision in detecting needle voxels leads to robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of needle-voxel detection from 0.23 to 0.32 (a 40% gain) at a high recall rate of 0.75; better robustness and confidence were confirmed in the practical experiments.

  14. a DTM Multi-Resolution Compressed Model for Efficient Data Storage and Network Transfer

    NASA Astrophysics Data System (ADS)

    Biagi, L.; Brovelli, M.; Zamboni, G.

    2011-08-01

    In recent years the technological evolution of terrestrial, aerial and satellite surveying has considerably increased measurement accuracy and, consequently, the quality of the derived information. At the same time, ever smaller limitations on data storage devices, in terms of capacity and cost, have allowed the storage and processing of a greater number of instrumental observations. A significant example is the terrain height surveyed by LIDAR (LIght Detection And Ranging) technology, where several height measurements can be obtained for each square meter of land. The availability of such a large quantity of observations is an essential requisite for in-depth knowledge of the phenomena under study. But, at the same time, the most common Geographical Information Systems (GISs) show latency in visualizing and analyzing these kinds of data. This problem becomes more evident in the case of Internet GIS. These systems are based on very frequent flows of geographical information over the internet and, for this reason, the bandwidth of the network and the size of the data to be transmitted are two fundamental factors to be considered in order to guarantee the actual usability of these technologies. In this paper we focus our attention on digital terrain models (DTMs) and we briefly analyse the problem of defining the minimum information necessary to store and transmit DTMs over a network, with a fixed tolerance, starting from a huge number of observations. We then propose an innovative compression approach for sparse observations by means of multi-resolution spline-function approximation. The method is able to provide metric accuracy at least comparable to that provided by the most common deterministic interpolation algorithms (inverse distance weighting, local polynomial, radial basis functions). At the same time it dramatically reduces the amount of information required for storing or for transmitting and rebuilding a digital terrain

  15. Periodic Density Functional Theory Solver using Multiresolution Analysis with MADNESS

    NASA Astrophysics Data System (ADS)

    Harrison, Robert; Thornton, William

    2011-03-01

    We describe the first implementation of an all-electron Kohn-Sham density functional theory (DFT) periodic solver using multi-wavelets and fast integral equations in MADNESS (multiresolution adaptive numerical environment for scientific simulation; http://code.google.com/p/m-a-d-n-e-s-s). The multiresolution nature of a multi-wavelet basis allows for fast computation with guaranteed precision. By reformulating the Kohn-Sham eigenvalue equation into the Lippmann-Schwinger equation, we can avoid using the derivative operator, which allows better control of overall precision for the all-electron problem. Other highlights include the development of periodic integral operators with low-rank separation, an adaptable model potential for the nuclear potential, and an implementation of Hartree-Fock exchange. This work was supported by NSF project OCI-0904972 and made use of resources at the Center for Computational Sciences at Oak Ridge National Laboratory under contract DE-AC05-00OR22725.
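    The reformulation that avoids the derivative operator can be written out explicitly. For a bound state (E < 0), the eigenproblem is recast as an integral equation by applying the bound-state Helmholtz Green's function; a standard statement of this step (our own rendering, not quoted from the paper) is:

```latex
% Kohn-Sham eigenproblem recast as a Lippmann-Schwinger integral equation:
\left(-\tfrac{1}{2}\nabla^{2} + V\right)\psi = E\psi
\quad\Longleftrightarrow\quad
\psi = -2\left(-\nabla^{2} - 2E\right)^{-1} V\psi,
% where the inverse operator is applied as a convolution with the
% bound-state Helmholtz kernel (valid for E < 0):
\left[(-\nabla^{2} + \mu^{2})^{-1} f\right](\mathbf{r})
  = \int \frac{e^{-\mu|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|}
    \, f(\mathbf{r}')\,\mathrm{d}\mathbf{r}',
\qquad \mu = \sqrt{-2E}.
```

    Because only integral operators with smooth kernels appear, the multiwavelet representation can retain the guaranteed-precision property mentioned in the abstract.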

  16. A Multiresolution Method for Parameter Estimation of Diffusion Processes

    PubMed Central

    Kou, S. C.; Olding, Benjamin P.; Lysy, Martin; Liu, Jun S.

    2014-01-01

    Diffusion process models are widely used in science, engineering and finance. Most diffusion processes are described by stochastic differential equations in continuous time. In practice, however, data is typically only observed at discrete time points. Except for a few very special cases, no analytic form exists for the likelihood of such discretely observed data. For this reason, parametric inference is often achieved by using discrete-time approximations, with accuracy controlled through the introduction of missing data. We present a new multiresolution Bayesian framework to address the inference difficulty. The methodology relies on the use of multiple approximations and extrapolation, and is significantly faster and more accurate than known strategies based on Gibbs sampling. We apply the multiresolution approach to three data-driven inference problems – one in biophysics and two in finance – one of which features a multivariate diffusion model with an entirely unobserved component. PMID:25328259
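    The discrete-time approximation the abstract refers to is typically an Euler-Maruyama step, and "introducing missing data" means simulating the process on a grid finer than the observation times. A generic sketch of that building block (not the authors' Bayesian sampler; the drift and diffusion functions below are placeholders):

```python
import numpy as np

def euler_maruyama(drift, diff, x0, dt, n_steps, rng):
    """Simulate dX_t = drift(X_t) dt + diff(X_t) dW_t on a fine grid.
    Finer grids give more accurate discrete-time likelihoods, at the
    price of more latent 'missing' points between observations."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))    # Brownian increment
        x[k + 1] = x[k] + drift(x[k]) * dt + diff(x[k]) * dW
    return x

# Example: Ornstein-Uhlenbeck-like path with hypothetical parameters.
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: -0.5 * x, lambda x: 0.3, 1.0, 0.01, 1000, rng)
```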

  17. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  18. Gaze-contingent multiresolutional displays: an integrative review.

    PubMed

    Reingold, Eyal M; Loschky, Lester C; McConkie, George W; Stampe, David M

    2003-01-01

    Gaze-contingent multiresolutional displays (GCMRDs) center high-resolution information on the user's gaze position, matching the user's area of interest (AOI). Image resolution and details outside the AOI are reduced, lowering the requirements for processing resources and transmission bandwidth in demanding display and imaging applications. This review provides a general framework within which GCMRD research can be integrated, evaluated, and guided. GCMRDs (or "moving windows") are analyzed in terms of (a) the nature of their images (i.e., "multiresolution," "variable resolution," "space variant," or "level of detail"), and (b) the movement of the AOI (i.e., "gaze contingent," "foveated," or "eye slaved"). We also synthesize the known human factors research on GCMRDs and point out important questions for future research and development. Actual or potential applications of this research include flight, medical, and driving simulators; virtual reality; remote piloting and teleoperation; infrared and indirect vision; image transmission and retrieval; telemedicine; video teleconferencing; and artificial vision systems. PMID:14529201

  19. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We first report on our work on the development of numerical methods for tangent curve computation.

  20. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR.

    PubMed

    Stokdyk, Joel P; Firnstahl, Aaron D; Spencer, Susan K; Burch, Tucker R; Borchardt, Mark A

    2016-06-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L−1 assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation. PMID:27023926
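    The final probit step can be illustrated compactly. A sketch assuming hypothetical dilution data and a least-squares fit of a probit curve (published analyses typically use maximum-likelihood probit regression on the replicate counts; scipy is an assumed dependency):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical spiking data: concentration (gc/rxn) and the fraction of
# n = 10 replicates that tested positive at each concentration.
conc = np.array([2.0, 5.0, 10.0, 20.0])
frac_pos = np.array([0.2, 0.6, 0.9, 1.0])

def probit_curve(log10_c, a, b):
    """Detection probability as a probit (normal CDF) in log10 concentration."""
    return norm.cdf(a + b * log10_c)

(a, b), _ = curve_fit(probit_curve, np.log10(conc), frac_pos, p0=(0.0, 1.0))
lod95 = 10.0 ** ((norm.ppf(0.95) - a) / b)  # concentration detected 95% of the time
print(f"95% LOD ~ {lod95:.1f} gc/rxn")
```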

  1. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    USGS Publications Warehouse

    Stokdyk, Joel P.; Firnstahl, Aaron; Spencer, Susan K.; Burch, Tucker R; Borchardt, Mark A.

    2016-01-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L−1 assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation.

  2. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGESBeta

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.

  3. Multiple multiresolution representation of functions and calculus for fast computation

    SciTech Connect

    Fann, George I; Harrison, Robert J; Hill, Judith C; Jia, Jun; Galindo, Diego A

    2010-01-01

    We describe the mathematical representations, data structures and the implementation of the numerical calculus of functions in MADNESS, the multiresolution analysis environment for scientific simulations. In MADNESS, each smooth function is represented using an adaptive pseudo-spectral expansion in a multiwavelet basis to an arbitrary but finite precision. This extends the capabilities of most existing net-, mesh- and spectral-based methods, where the discretization is based on a single adaptive mesh or expansion.

  4. Multi-scale, multi-resolution brain cancer modeling.

    PubMed

    Zhang, Le; Chen, L Leon; Deisboeck, Thomas S

    2009-03-01

    In advancing discrete-based computational cancer models towards clinical applications, one faces the dilemma of how to deal with an ever-growing amount of biomedical data that ought to be incorporated eventually in one form or another. Model scalability becomes of paramount interest. In an effort to start addressing this critical issue, we present here a novel multi-scale and multi-resolution agent-based in silico glioma model. While 'multi-scale' refers to employing an epidermal growth factor receptor (EGFR)-driven molecular network to process cellular phenotypic decisions within the micro-macroscopic environment, 'multi-resolution' is achieved through algorithms that classify cells into either active or inactive spatial clusters, which determine the resolution at which they are simulated. The aim is to assign computational resources where and when they matter most for maintaining or improving the predictive power of the algorithm, onto specific tumor areas and at particular times. Using a previously described 2D brain tumor model, we have developed four different computational methods for achieving the multi-resolution scheme, three of which are designed to dynamically train on the high-resolution simulation that serves as control. To quantify the algorithms' performance, we rank them by weighing the distinct computational time savings of the simulation runs against the methods' ability to accurately reproduce the high-resolution results of the control. Finally, to demonstrate the flexibility of the underlying concept, we show the added value of combining the two highest-ranked methods. The main finding of this work is that by pursuing a multi-resolution approach, one can reduce the computation time of a discrete-based model substantially while still maintaining a comparably high predictive power. This hints at even more computational savings in the more realistic 3D setting over time, and thus appears to outline a possible path to achieve scalability for the all

  5. Multiresolution persistent homology for excessively large biomolecular datasets.

    PubMed

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs. PMID:26450288

  6. Survey and analysis of multiresolution methods for turbulence data

    DOE PAGESBeta

    Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; Ahrens, James; Hamann, Bernd

    2015-11-10

    This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, at capturing the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy-driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both the velocities and the active scalar (density) field and their derivatives, spectra, and the properties of constant-density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for handling large datasets, as well as simulation algorithms based on multi-resolution methods. The final section provides recommendations for the best decomposition algorithms, based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.
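    The "truncated set of coefficients" evaluation in such surveys reduces to keeping the largest-magnitude coefficients of a chosen decomposition and reconstructing. A generic sketch for a wavelet decomposition (our illustration, assuming the PyWavelets package; db4 and the 5% retention rate are arbitrary choices):

```python
import numpy as np
import pywt  # PyWavelets

def truncate_reconstruct(field, keep=0.05, wavelet="db4", level=3):
    """Keep only the largest `keep` fraction of wavelet coefficients of
    an n-D field, zero the rest, and reconstruct."""
    coeffs = pywt.wavedecn(field, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)   # flatten coefficient tree
    k = max(1, int(keep * arr.size))
    thresh = np.partition(np.abs(arr).ravel(), -k)[-k]
    arr[np.abs(arr) < thresh] = 0.0              # hard truncation
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet)
```

    Comparing spectra and derivative statistics of a field against its truncated reconstruction is then the accuracy side of such a comparison.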

  7. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475

  8. Multiresolution persistent homology for excessively large biomolecular datasets

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.

  9. Multiresolution persistent homology for excessively large biomolecular datasets

    SciTech Connect

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-07

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.

  10. Survey and analysis of multiresolution methods for turbulence data

    SciTech Connect

    Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; Ahrens, James; Hamann, Bernd

    2015-11-10

    This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, at capturing the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy-driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both the velocities and the active scalar (density) field and their derivatives, spectra, and the properties of constant-density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for handling large datasets, as well as simulation algorithms based on multi-resolution methods. The final section provides recommendations for the best decomposition algorithms, based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.

  11. Limiter

    DOEpatents

    Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.

    1984-10-19

    A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles.

  12. Limiter

    DOEpatents

    Cohen, Samuel A.; Hosea, Joel C.; Timberlake, John R.

    1986-01-01

    A limiter with a specially contoured front face accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution.

  13. Multiresolution Techniques for Interactive Texture-Based Rendering of Arbitrarily Oriented Cutting Planes

    SciTech Connect

    LaMar, E; Duchaineau, M A; Hamann, B; Joy, K I

    2001-10-03

    We present a multiresolution technique for interactive texture-based rendering of arbitrarily oriented cutting planes for very large data sets. This method uses an adaptive scheme that renders the data along a cutting plane at different resolutions: higher resolution near the point of interest and lower resolution away from it. The algorithm is based on the segmentation of texture space into an octree, where the leaves of the tree define the original data and the internal nodes define lower-resolution versions. Rendering is done adaptively by selecting high-resolution cells close to a center of attention and low-resolution cells away from it. We limit the artifacts introduced by this method by blending between different levels of resolution to produce a smooth image. This technique can be used to produce viewpoint-dependent renderings.
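    The adaptive selection reduces to a small rule: use the finest level near the point of interest and drop levels as distance grows, with blending hiding the seams between levels. A toy sketch of such a rule (our own illustration; the falloff parameter and function name are hypothetical):

```python
import math

def select_level(cell_center, poi, max_level, falloff=2.0):
    """Resolution level for an octree cell: max_level (finest) at the
    point of interest, one level coarser each time distance doubles."""
    d = math.dist(cell_center, poi)          # distance to point of interest
    drop = int(math.log2(1.0 + d / falloff)) # coarsen with distance
    return max(0, max_level - drop)

# Example: a cell 6 units from the point of interest in an 8-level octree.
print(select_level((10.0, 4.0, 2.0), (10.0, 4.0, 8.0), max_level=8))
```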

  14. Multi-parametric cytometry from a complex cellular sample: Improvements and limits of manual versus computational-based interactive analyses.

    PubMed

    Gondois-Rey, F; Granjeaud, S; Rouillier, P; Rioualen, C; Bidaut, G; Olive, D

    2016-05-01

    The wide possibilities opened by the development of multi-parametric cytometry are limited by the inadequacy of classical analysis methods for the multi-dimensional character of the data. While new computational tools seem ideally adapted and have been applied successfully, their adoption is still low among flow cytometrists. With the aim of integrating unsupervised computational tools for the management of multi-stained samples, we investigated their advantages and limits by comparison with manual gating on a typical sample analyzed in routine immunomonitoring. A single tube of PBMC, containing 11 populations characterized by different sizes and stained with 9 fluorescent markers, was used. We investigated the impact of the strategy choice on manual gating variability, an undocumented pitfall of the analysis process, and we identified rules to optimize it. While assessing automatic gating as an alternative, we introduced the Multi-Experiment Viewer software (MeV) and validated it for merging clusters and interactively annotating populations. This procedure allowed both targeted and unexpected populations to be found. However, careful examination of computed clusters in standard dot plots revealed some heterogeneity, often below 10%, that was overcome by increasing the number of clusters to be computed. MeV facilitated the identification of populations by displaying both the MFI and the marker signature of the dataset simultaneously. The procedure described here appears fully adapted to manage homogeneously a high number of multi-stained samples and allows improving multi-parametric analyses in a way close to the classic approach. PMID:27059253

  15. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task-queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver

  16. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multi-resolution presentation of the heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely related to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where the solution changes rapidly. The application of Fup basis functions enables continuous time approximation and simple interpolation calculations across

  17. A Global, Multi-Resolution Approach to Regional Ocean Modeling

    SciTech Connect

    Du, Qiang

    2013-11-08

    In this collaborative research project between Pennsylvania State University, Colorado State University and Florida State University, we mainly focused on developing multi-resolution algorithms suitable for regional ocean modeling. We developed a hybrid implicit and explicit adaptive multirate time integration method to solve systems of time-dependent equations that present two significantly different scales. We studied the effects of spatial simplicial meshes on the stability and the conditioning of fully discrete approximations. We also studied the adaptive finite element method (AFEM) based upon the Centroidal Voronoi Tessellation (CVT) and superconvergent gradient recovery. Some of these techniques are now being used by geoscientists (such as those at LANL).

  18. Geometric multi-resolution analysis and data-driven convolutions

    NASA Astrophysics Data System (ADS)

    Strawn, Nate

    2015-09-01

    We introduce a procedure for learning discrete convolutional operators for generic datasets which recovers the standard block convolutional operators when applied to sets of natural images. The key observation is that the standard block convolutional operators on images are intuitive because humans naturally understand the grid structure of the self-evident functions over image spaces (pixels). This procedure first constructs a Geometric Multi-Resolution Analysis (GMRA) on the set of variables giving rise to a dataset, and then leverages the details of this data structure to identify subsets of variables upon which convolutional operators are supported, as well as a space of functions that can be shared coherently amongst these supports.

  19. Geometric multi-resolution analysis for dictionary learning

    NASA Astrophysics Data System (ADS)

    Maggioni, Mauro; Minsker, Stanislav; Strawn, Nate

    2015-09-01

    We present an efficient algorithm and theory for Geometric Multi-Resolution Analysis (GMRA), a procedure for dictionary learning. Sparse dictionary learning provides the necessary complexity reduction for the critical applications of compression, regression, and classification in high-dimensional data analysis. As such, it is a critical technique in data science and it is important to have techniques that admit both efficient implementation and strong theory for large classes of theoretical models. By construction, GMRA is computationally efficient and in this paper we describe how the GMRA correctly approximates a large class of plausible models (namely, the noisy manifolds).

  20. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
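
    As a concrete anchor, in one dimension the optimal L2 transport map has the closed form T = F_target^{-1} ∘ F_source, which can be realized empirically by matching sorted samples; the sketch below does just that. The paper's multiresolution gradient-descent scheme, needed in higher dimensions, is not reproduced here.

      import numpy as np

      def ot_map_1d(source_samples, target_samples):
          """Empirical optimal L2 transport map in 1D:
          T = F_target^{-1} o F_source, realized by matching sorted samples."""
          xs = np.sort(source_samples)
          ys = np.sort(target_samples)
          # interpolate so T can be evaluated at arbitrary points
          return lambda x: np.interp(x, xs, ys)

      rng = np.random.default_rng(0)
      T = ot_map_1d(rng.normal(0, 1, 5000), rng.normal(3, 2, 5000))
      print(T(0.0))   # approximately 3, the target median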

  1. MR-CDF: Managing multi-resolution scientific data

    NASA Technical Reports Server (NTRS)

    Salem, Kenneth

    1993-01-01

    MR-CDF is a system for managing multi-resolution scientific data sets. It is an extension of the popular CDF (Common Data Format) system. MR-CDF provides a simple functional interface to client programs for storage and retrieval of data. Data is stored so that low resolution versions of the data can be provided quickly. Higher resolutions are also available, but not as quickly. By managing data with MR-CDF, an application can be relieved of the low-level details of data management, and can easily trade data resolution for improved access time.
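
    The record states the design goal (fast coarse reads, slower full-resolution reads) without giving the interface, so the toy analogue below is entirely hypothetical: a precomputed pyramid of coarser copies, served by level.

      import numpy as np

      class MultiResStore:
          """Toy stand-in for an MR-CDF-like store: keeps a pyramid of
          progressively coarser copies so low-resolution reads are cheap."""
          def __init__(self, data, levels=4):
              self.pyramid = [np.asarray(data, dtype=float)]
              for _ in range(levels - 1):
                  d = self.pyramid[-1]
                  n = (len(d) // 2) * 2
                  # simple 2:1 averaging; a real system could store deltas
                  self.pyramid.append(d[:n].reshape(-1, 2).mean(axis=1))

          def read(self, level=0):
              """level 0 = full resolution; higher levels are coarser/faster."""
              return self.pyramid[min(level, len(self.pyramid) - 1)]

      store = MultiResStore(np.sin(np.linspace(0, 20, 1_000_000)))
      print(len(store.read(level=3)))   # quick overview: 125000 samples
      print(len(store.read(level=0)))   # full detail: 1000000 samples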

  2. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  3. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data denoising system utilizing multiple processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing the wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
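
    The serial core that such a system parallelizes, wavelet decomposition, coefficient thresholding and reconstruction, can be sketched with the PyWavelets package (an assumption of this illustration; the patent's partitioning and inter-processor communication are omitted).

      import numpy as np
      import pywt  # PyWavelets, assumed available

      def wavelet_denoise(signal, wavelet="db4", level=4):
          """Soft-threshold the detail coefficients (universal threshold),
          then reconstruct: the serial core of a wavelet denoising pipeline."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          # noise scale estimated from the finest detail band
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thresh = sigma * np.sqrt(2 * np.log(len(signal)))
          coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)

      rng = np.random.default_rng(1)
      t = np.linspace(0, 1, 2048)
      truth = np.sin(6 * np.pi * t)
      noisy = truth + 0.3 * rng.standard_normal(t.size)
      clean = wavelet_denoise(noisy)
      print(np.std(noisy - truth), np.std(clean - truth))  # error shrinks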

  4. Multiresolution fusion of remotely sensed images with the Hermite transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Lopez-Caloca, Alejandra A.; Zambrano-Gallardo, Cira F.

    2004-02-01

    The Hermite Transform is an image representation model that incorporates some important properties of visual perception such as the analysis through overlapping receptive fields and the Gaussian derivative model of early vision. It also allows the construction of pyramidal multiresolution analysis-synthesis schemes. We show how the Hermite Transform can be used to build image fusion schemes that take advantage of the fact that Gaussian derivatives are good operators for the detection of relevant image patterns at different spatial scales. These patterns are later combined in the transform coefficient domain. Applications of this fusion algorithm are shown with remote sensing images, namely LANDSAT, IKONOS, RADARSAT and SAR AeS-1 images.

  5. A multiresolution analysis for detection of abnormal lung sounds

    PubMed Central

    Emmanouilidou, Dimitra; Patil, Kailash; West, James; Elhilali, Mounya

    2014-01-01

    Automated analysis and detection of abnormal lung sound patterns has great potential for improving access to standardized diagnosis of pulmonary diseases, especially in low-resource settings. In the current study, we develop signal processing tools for analysis of paediatric auscultations recorded under non-ideal noisy conditions. The proposed model is based on a biomimetic multi-resolution analysis of the spectro-temporal modulation details in lung sounds. The methodology provides a detailed description of joint spectral and temporal variations in the signal and proves to be more robust than frequency-based techniques in distinguishing crackles and wheezes from normal breathing sounds. PMID:23366591

  6. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    NASA Astrophysics Data System (ADS)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.

  7. Physiological, biomass elemental composition and proteomic analyses of Escherichia coli ammonium-limited chemostat growth, and comparison with iron- and glucose-limited chemostat growth

    PubMed Central

    Folsom, James Patrick

    2015-01-01

    Escherichia coli physiological, biomass elemental composition and proteome acclimations to ammonium-limited chemostat growth were measured at four levels of nutrient scarcity controlled via chemostat dilution rate. These data were compared with published iron- and glucose-limited growth data collected from the same strain and at the same dilution rates to quantify general and nutrient-specific responses. Severe nutrient scarcity resulted in an overflow metabolism with differing organic byproduct profiles based on limiting nutrient and dilution rate. Ammonium-limited cultures secreted up to 35% of the metabolized glucose carbon as organic byproducts with acetate representing the largest fraction; in comparison, iron-limited cultures secreted up to 70% of the metabolized glucose carbon as lactate, and glucose-limited cultures secreted up to 4% of the metabolized glucose carbon as formate. Biomass elemental composition differed with nutrient limitation; biomass from ammonium-limited cultures had a lower nitrogen content than biomass from either iron- or glucose-limited cultures. Proteomic analysis of central metabolism enzymes revealed that ammonium- and iron-limited cultures had a lower abundance of key tricarboxylic acid (TCA) cycle enzymes and higher abundance of key glycolysis enzymes compared with glucose-limited cultures. The overall results are largely consistent with cellular economics concepts, including metabolic tradeoff theory where the limiting nutrient is invested into essential pathways such as glycolysis instead of higher ATP-yielding, but non-essential, pathways such as the TCA cycle. The data provide a detailed insight into ecologically competitive metabolic strategies selected by evolution, templates for controlling metabolism for bioprocesses and a comprehensive dataset for validating in silico representations of metabolism. PMID:26018546

  8. Physiological, biomass elemental composition and proteomic analyses of Escherichia coli ammonium-limited chemostat growth, and comparison with iron- and glucose-limited chemostat growth.

    PubMed

    Folsom, James Patrick; Carlson, Ross P

    2015-08-01

    Escherichia coli physiological, biomass elemental composition and proteome acclimations to ammonium-limited chemostat growth were measured at four levels of nutrient scarcity controlled via chemostat dilution rate. These data were compared with published iron- and glucose-limited growth data collected from the same strain and at the same dilution rates to quantify general and nutrient-specific responses. Severe nutrient scarcity resulted in an overflow metabolism with differing organic byproduct profiles based on limiting nutrient and dilution rate. Ammonium-limited cultures secreted up to 35% of the metabolized glucose carbon as organic byproducts with acetate representing the largest fraction; in comparison, iron-limited cultures secreted up to 70% of the metabolized glucose carbon as lactate, and glucose-limited cultures secreted up to 4% of the metabolized glucose carbon as formate. Biomass elemental composition differed with nutrient limitation; biomass from ammonium-limited cultures had a lower nitrogen content than biomass from either iron- or glucose-limited cultures. Proteomic analysis of central metabolism enzymes revealed that ammonium- and iron-limited cultures had a lower abundance of key tricarboxylic acid (TCA) cycle enzymes and higher abundance of key glycolysis enzymes compared with glucose-limited cultures. The overall results are largely consistent with cellular economics concepts, including metabolic tradeoff theory where the limiting nutrient is invested into essential pathways such as glycolysis instead of higher ATP-yielding, but non-essential, pathways such as the TCA cycle. The data provide a detailed insight into ecologically competitive metabolic strategies selected by evolution, templates for controlling metabolism for bioprocesses and a comprehensive dataset for validating in silico representations of metabolism. PMID:26018546

  9. Attosecond electron dynamics: A multiresolution approach

    NASA Astrophysics Data System (ADS)

    Vence, Nicholas; Harrison, Robert; Krstić, Predrag

    2012-03-01

    We establish a numerical solution to the time-dependent Schrödinger equation employing an adaptive, discontinuous spectral element basis that automatically adjusts to the requested precision. The explicit time evolution is accomplished by a band-limited, gradient-corrected, symplectic propagator and uses separated representations of operators for efficient computation in multiple dimensions. We illustrate the method by calculating accurate bound and continuum transition probabilities along with the photoelectron spectra for H(1s), He⁺(1s), and Li²⁺(2s) in three dimensions and H₂⁺ in three and four dimensions under a two-cycle attosecond laser pulse with a driving frequency of 36 eV and an intensity of 1×10¹⁵ W/cm².

  10. Global multi-resolution terrain elevation data 2010 (GMTED2010)

    USGS Publications Warehouse

    Danielson, Jeffrey J.; Gesch, Dean B.

    2011-01-01

    In 1996, the U.S. Geological Survey (USGS) developed a global topographic elevation model designated as GTOPO30 at a horizontal resolution of 30 arc-seconds for the entire Earth. Because no single source of topographic information covered the entire land surface, GTOPO30 was derived from eight raster and vector sources that included a substantial amount of U.S. Defense Mapping Agency data. The quality of the elevation data in GTOPO30 varies widely; there are no spatially-referenced metadata, and the major topographic features such as ridgelines and valleys are not well represented. Despite its coarse resolution and limited attributes, GTOPO30 has been widely used for a variety of hydrological, climatological, and geomorphological applications as well as military applications, where a regional, continental, or global scale topographic model is required. These applications have ranged from delineating drainage networks and watersheds to using digital elevation data for the extraction of topographic structure and three-dimensional (3D) visualization exercises (Jenson and Domingue, 1988; Verdin and Greenlee, 1996; Lehner and others, 2008). Many of the fundamental geophysical processes active at the Earth's surface are controlled or strongly influenced by topography, thus the critical need for high-quality terrain data (Gesch, 1994). U.S. Department of Defense requirements for mission planning, geographic registration of remotely sensed imagery, terrain visualization, and map production are similarly dependent on global topographic data. Since the time GTOPO30 was completed, the availability of higher-quality elevation data over large geographic areas has improved markedly. New data sources include global Digital Terrain Elevation Data (DTED®) from the Shuttle Radar Topography Mission (SRTM), Canadian elevation data, and data from the Ice, Cloud, and land Elevation Satellite (ICESat). Given the widespread use of GTOPO30 and the equivalent 30-arc

  11. Fresnelets: new multiresolution wavelet bases for digital holography.

    PubMed

    Liebling, Michael; Blu, Thierry; Unser, Michael

    2003-01-01

    We propose a construction of new wavelet-like bases that are well suited for the reconstruction and processing of optically generated Fresnel holograms recorded on CCD-arrays. The starting point is a wavelet basis of L2 to which we apply a unitary Fresnel transform. The transformed basis functions are shift-invariant on a level-by-level basis but their multiresolution properties are governed by the special form that the dilation operator takes in the Fresnel domain. We derive a Heisenberg-like uncertainty relation that relates the localization of Fresnelets with that of their associated wavelet basis. According to this criterion, the optimal functions for digital hologram processing turn out to be Gabor functions, bringing together two separate aspects of the holography inventor's work. We give the explicit expression of orthogonal and semi-orthogonal Fresnelet bases corresponding to polynomial spline wavelets. This special choice of Fresnelets is motivated by their near-optimal localization properties and their approximation characteristics. We then present an efficient multiresolution Fresnel transform algorithm, the Fresnelet transform. This algorithm allows for the reconstruction (backpropagation) of complex scalar waves at several user-defined, wavelength-independent resolutions. Furthermore, when reconstructing numerical holograms, the subband decomposition of the Fresnelet transform naturally separates the image to reconstruct from the unwanted zero-order and twin image terms. This greatly facilitates their suppression. We show results of experiments carried out both on synthetic (simulated) data sets and on digitally acquired holograms. PMID:18237877

  12. Using fuzzy logic to enhance stereo matching in multiresolution images.

    PubMed

    Medeiros, Marcos D; Gonçalves, Luiz Marcos G; Frery, Alejandro C

    2010-01-01

    Stereo matching is an open problem in computer vision, for which local features are extracted to identify corresponding points in pairs of images. The results are heavily dependent on the initial steps. We apply image decomposition in multiresolution levels, for reducing the search space, computational time, and errors. We propose a solution to the problem of how deep (coarse) should the stereo measures start, trading between error minimization and time consumption, by starting stereo calculation at varying resolution levels, for each pixel, according to fuzzy decisions. Our heuristic enhances the overall execution time since it only employs deeper resolution levels when strictly necessary. It also reduces errors because it measures similarity between windows with enough details. We also compare our algorithm with a very fast multi-resolution approach, and one based on fuzzy logic. Our algorithm performs faster and/or better than all those approaches, becoming, thus, a good candidate for robotic vision applications. We also discuss the system architecture that efficiently implements our solution. PMID:22205859

  13. Using Fuzzy Logic to Enhance Stereo Matching in Multiresolution Images

    PubMed Central

    Medeiros, Marcos D.; Gonçalves, Luiz Marcos G.; Frery, Alejandro C.

    2010-01-01

    Stereo matching is an open problem in Computer Vision, for which local features are extracted to identify corresponding points in pairs of images. The results are heavily dependent on the initial steps. We apply image decomposition in multiresolution levels, for reducing the search space, computational time, and errors. We propose a solution to the problem of how deep (coarse) should the stereo measures start, trading between error minimization and time consumption, by starting stereo calculation at varying resolution levels, for each pixel, according to fuzzy decisions. Our heuristic enhances the overall execution time since it only employs deeper resolution levels when strictly necessary. It also reduces errors because it measures similarity between windows with enough details. We also compare our algorithm with a very fast multi-resolution approach, and one based on fuzzy logic. Our algorithm performs faster and/or better than all those approaches, becoming, thus, a good candidate for robotic vision applications. We also discuss the system architecture that efficiently implements our solution. PMID:22205859

  14. Multiresolution in CROCO (Coastal and Regional Ocean Community model)

    NASA Astrophysics Data System (ADS)

    Debreu, Laurent; Auclair, Francis; Benshila, Rachid; Capet, Xavier; Dumas, Franck; Julien, Swen; Marchesiello, Patrick

    2016-04-01

    CROCO (Coastal and Regional Ocean Community model [1]) is a new oceanic modeling system built upon ROMS_AGRIF and the non-hydrostatic kernel of SNH, gradually including algorithms from MARS3D (sediments) and HYCOM (vertical coordinates). An important objective of CROCO is to provide the possibility of running truly multiresolution simulations. Our previous work on structured mesh refinement [2] allowed us to run two-way nesting with the following major features: conservation, spatial and temporal refinement, and coupling at the barotropic level. In this presentation, we will present the current developments in CROCO towards multiresolution simulations: connection between neighboring grids at the same level of resolution and load balancing on parallel computers. Results of preliminary experiments will be given both on an idealized test case and on a realistic simulation of the Bay of Biscay with high resolution along the coast. References: [1] CROCO: http://www.croco-ocean.org; [2] Debreu, L., P. Marchesiello, P. Penven, and G. Cambon, 2012: Two-way nesting in split-explicit ocean models: algorithms, implementation and validation. Ocean Modelling, 49-50, 1-21.

  15. Automated transformation-invariant shape recognition through wavelet multiresolution

    NASA Astrophysics Data System (ADS)

    Brault, Patrice; Mounier, Hugues

    2001-12-01

    We present new results in Wavelet Multi-Resolution Analysis (W-MRA) applied to shape recognition in automatic vehicle driving applications. Different types of shapes have to be recognized in this framework. They pertain to most of the objects entering the sensor field of a car. These objects can be road signs, lane separation lines, moving or static obstacles, other automotive vehicles, or visual beacons. The recognition process must be invariant to global transformations, affine or not, namely rotation, translation and scaling. It also has to be invariant to more local, elastic deformations such as perspective (in particular with wide-angle camera lenses), deformations due to environmental conditions (weather: rain, mist, light reverberation), and optical and electrical signal noise. To demonstrate our method, an initial shape with a known contour is compared to the same contour altered by rotation, translation, scaling and perspective. The curvature computed for each contour point is used as the main criterion in the shape matching process. The original part of this work is to use wavelet descriptors, generated with a fast orthonormal W-MRA, rather than Fourier descriptors, in order to provide a multi-resolution description of the contour to be analyzed. In this way, the intrinsic spatial localization property of wavelet descriptors can be exploited and the recognition process can be sped up. The most important part of this work is to demonstrate the potential performance of W-MRA in this application of shape recognition.

  16. Review of the EPA's radionuclide release analyses from LLW disposal trenches used in support of proposed dose limits in 40 CFR 193

    SciTech Connect

    Pescatore, C.; Sullivan, T.M.

    1991-11-01

    The April 1989 draft EPA standard for low-level waste (LLW) disposal, 40 CFR 193, would require disposal site performance to satisfy very stringent dose-limit criteria. The EPA suggests that these limits can be achieved by relying extensively on waste solidification before disposal. The EPA justifies the achievability of the proposed criteria based on performance assessment analyses in the general context of trench burial of the LLW. The core models implemented in those analyses are codified in the EPA's PRESTO family of codes. Because a key set of models for predicting potential releases are the leach-and-transport models from a disposal trench, these have been reviewed for completeness and applicability to trench disposal methods. The overall conclusion of this review is that the generic analyses performed by the EPA are not sufficiently comprehensive to support the proposed version of 40 CFR 193. More rigorous analyses may find the draft standard criteria to be unattainable.

  17. Review of the EPA`s radionuclide release analyses from LLW disposal trenches used in support of proposed dose limits in 40 CFR 193

    SciTech Connect

    Pescatore, C.; Sullivan, T.M.

    1991-11-01

    The April 1989 draft EPA standard for low-level waste (LLW) disposal, 40 CFR 193, would require disposal site performance to satisfy very stringent dose-limit criteria. The EPA suggests that these limits can be achieved by relying extensively on waste solidification before disposal. The EPA justifies the achievability of the proposed criteria based on performance assessment analyses in the general context of trench burial of the LLW. The core models implemented in those analyses are codified in the EPA's PRESTO family of codes. Because a key set of models for predicting potential releases are the leach-and-transport models from a disposal trench, these have been reviewed for completeness and applicability to trench disposal methods. The overall conclusion of this review is that the generic analyses performed by the EPA are not sufficiently comprehensive to support the proposed version of 40 CFR 193. More rigorous analyses may find the draft standard criteria to be unattainable.

  18. A numerical evaluation of TIROS-N and NOAA-6 analyses in a high resolution limited area model

    NASA Technical Reports Server (NTRS)

    Derber, J. C.; Koehler, T. L.; Horn, L. H.

    1981-01-01

    Vertical temperature profiles derived from TIROS-N and NOAA-6 radiance measurements were used to create separate analyses for the period 0000 GMT 6 January to 0000 GMT 7 January 1980. The 0000 GMT 6 January satellite analyses and a conventional analysis were used to initialize and run the University of Wisconsin's version of the Australian Region Primitive Equations model. Forecasts based on conventional analyses were used to evaluate the forecasts based only on satellite upper-air data. The forecasts based only on TIROS-N or NOAA-6 data did reasonably well in locating the main trough and ridge positions. The satellite initial analyses and forecasts revealed errors correlated with the synoptic situation. The trough in both the TIROS-N and NOAA-6 forecasts, which was initially too warm, remained too warm as it propagated eastward during the forecast period. Thus, it is unlikely that the operational satellite data will improve forecasts in data-dense regions. However, in regions of poor data coverage, the satellite data should have a beneficial effect on numerical forecasts.

  19. Towards Online Multiresolution Community Detection in Large-Scale Networks

    PubMed Central

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

    The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods are accurate only under a priori assumptions about network properties and with predefined parameters. In this paper, we introduce a new quality function for local communities and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325

  20. Automatic image segmentation by dynamic region growth and multiresolution merging.

    PubMed

    Ugarriza, Luis Garcia; Saber, Eli; Vantaram, Sreenath Rao; Amuso, Vincent; Shaw, Mark; Bhaskar, Ranjit

    2009-10-01

    Image segmentation is a fundamental task in many computer vision applications. In this paper, we propose a new unsupervised color image segmentation algorithm, which exploits the information obtained from detecting edges in color images in the CIE L*a*b* color space. To this effect, by using a color gradient detection technique, pixels without edges are clustered and labeled individually to identify some initial portion of the input image content. Elements that contain higher gradient densities are included by the dynamic generation of clusters as the algorithm progresses. Texture modeling is performed by color quantization and local entropy computation of the quantized image. The obtained texture and color information along with a region growth map consisting of all fully grown regions are used to perform a unique multiresolution merging procedure to blend regions with similar characteristics. Experimental results obtained in comparison to published segmentation techniques demonstrate the performance advantages of the proposed method. PMID:19535323

  1. Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme

    NASA Astrophysics Data System (ADS)

    Hickmann, K. S.; Godinez, H. C.

    2015-12-01

    When forecasts are performed using modern data assimilation methods, observation and model error can be scale dependent. During data assimilation the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.
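
    The abstract does not give the balancing formula; one common ansatz, assumed here for illustration, matches the expected innovation energy E[dᵀd] ≈ tr(HPHᵀ) + tr(R) and solves for the factor multiplying the ensemble term, per scale band.

      import numpy as np

      def inflation_factor(innovations, HPHt, R):
          """Scalar covariance inflation estimate for one scale band:
          choose lam so that lam*tr(HPH^T) + tr(R) matches the observed
          innovation energy d^T d (statistics-balancing ansatz)."""
          d = np.asarray(innovations)
          lam = (d @ d - np.trace(R)) / np.trace(HPHt)
          return max(lam, 1.0)   # never deflate below the prior spread

      # Example with synthetic per-band statistics (all values invented).
      rng = np.random.default_rng(2)
      n_obs = 50
      R = 0.25 * np.eye(n_obs)          # observation error covariance
      HPHt = 0.5 * np.eye(n_obs)        # ensemble-predicted cov in obs space
      d = rng.normal(0.0, np.sqrt(1.5 * 0.5 + 0.25), n_obs)  # over-dispersed
      print(inflation_factor(d, HPHt, R))   # roughly 1.5 for this draw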

  2. Multiresolution strategies for the numerical solution of optimal control problems

    NASA Astrophysics Data System (ADS)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems, but less work has been done on making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid. This requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using few computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage have been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a

  3. Towards online multiresolution community detection in large-scale networks.

    PubMed

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

    The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods are accurate only under a priori assumptions about network properties and with predefined parameters. In this paper, we introduce a new quality function for local communities and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325

  4. Out-of-Core Construction and Visualization of Multiresolution Surfaces

    SciTech Connect

    Lindstrom, P

    2003-02-03

    We present a method for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time system pages in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a rectilinear octree grid to coarsen and create a hierarchy for the mesh, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The run-time component of our system uses asynchronous rendering and view-dependent refinement driven by screen-space error and visibility. The system exploits frame-to-frame coherence and has been designed to allow preemptive refinement at the granularity of individual vertices to support refinement on a time budget. Our results indicate a significant improvement in processing speed over previous methods for out-of-core multiresolution surface construction. Meanwhile, all phases of the method are disk and memory efficient, and are fairly straightforward to implement.

  5. Out-of-Core Construction and Visualization of Multiresolution Surfaces

    SciTech Connect

    Lindstrom, P

    2002-11-04

    We present a method for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time system pages in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a uniform octree grid to coarsen a mesh and create a hierarchy, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The run-time component of our system uses asynchronous rendering and view-dependent refinement driven by screen-space error and visibility. The system exploits frame-to-frame coherence and has been designed to allow preemptive refinement at the granularity of individual vertices to support refinement on a time budget. Our results indicate a significant improvement in processing speed over previous methods for out-of-core multiresolution surface construction. Meanwhile, all phases of the method are both disk and memory efficient, and are fairly straightforward to implement.
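
    Both records build their hierarchies by vertex clustering on an octree grid with quadric-based vertex placement. A stripped-down sketch of that coarsening step, with a uniform grid and centroid placement instead of quadrics, and all names ours:

      import numpy as np

      def cluster_simplify(vertices, cell_size):
          """Vertex clustering: snap vertices to grid cells and merge each
          cell's vertices into their centroid (one coarsening level)."""
          keys = np.floor(vertices / cell_size).astype(int)
          _, inverse = np.unique(keys, axis=0, return_inverse=True)
          inverse = inverse.ravel()          # robust across numpy versions
          reps = np.array([vertices[inverse == c].mean(axis=0)
                           for c in range(inverse.max() + 1)])
          return reps, inverse   # representatives + old-to-new vertex map

      rng = np.random.default_rng(4)
      v = rng.random((10_000, 3))
      for size in (0.5, 0.25, 0.125):        # successively finer levels
          reps, _ = cluster_simplify(v, size)
          print(size, len(reps))

    The papers' quadric error metric would replace the centroid with the position minimizing the accumulated quadric, which also supplies the normal, error and curvature attributes they describe.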

  6. Multiresolution Analysis and Prediction of Solar Magnetic Flux

    NASA Astrophysics Data System (ADS)

    Wik, Magnus

    Synoptic maps of the solar magnetic field provide an important visualization of the global transport and evolution of the large-scale magnetic flux. The solar dynamo picture depends on both the spatial and the temporal resolution, so it is interesting to study solar magnetic activity at many resolutions simultaneously. A multi-resolution analysis makes it possible to study the synoptic solar magnetic fields at several resolutions at the same time. In this study we first carried out a wavelet-based multiresolution analysis (MRA) of the longitudinally averaged photospheric synoptic magnetograms. Magnetograms from the Wilcox Solar Observatory (WSO), Stanford, and from the Michelson Doppler Imager (MDI) onboard the ESA/NASA SOHO spacecraft were used. The WSO data enabled a study of cycles 21, 22 and 23, and the MDI data a more detailed study of cycle 23. The results reveal a complex picture of solar magnetic activity on different scales. For resolutions around 1-2 years and 6-7 years we observe strong transport of flux to the polar regions. Around 11 years we observe a very regular pattern which resembles a wave propagating from the polar to the sunspot regions. We also see that a large range of latitudes varies in phase, and a large asymmetry between the solar northern and southern hemispheres. We have also developed a multilayer back-propagation neural network for prediction of the solar magnetic flux. The inputs to the model are the polar and sunspot magnetic fields from the WSO longitudinally averaged solar magnetic field data.

  7. A Multi-Resolution Data Structure for Two-Dimensional Morse Functions

    SciTech Connect

    Bremer, P-T; Edelsbrunner, H; Hamann, B; Pascucci, V

    2003-07-30

    The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.

  8. Static multiresolution grids with inline hierarchy information for cosmic ray propagation

    NASA Astrophysics Data System (ADS)

    Müller, Gero

    2016-08-01

    For numerical simulations of cosmic-ray propagation, fast access to static magnetic field data is required. We present a data structure for multiresolution vector grids which is optimized for fast access, low overhead and shared-memory use. The hierarchy information is encoded into the grid itself, reducing the memory overhead. Benchmarks show that in certain scenarios the differences in deflections introduced by sampling the magnetic field model can be significantly reduced when using the multiresolution approach.
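
    The paper's exact layout is not given in the abstract; a minimal one-dimensional analogue of inline hierarchy, where a flat array holds either a (non-negative) value or a negative entry pointing to the cell's two finer subcells, might look like this:

      import numpy as np

      class InlineGrid1D:
          """Multiresolution 1D grid in one flat array: cell >= 0 is a value,
          cell < 0 encodes the index of its two finer subcells. The sketch
          assumes stored field values are non-negative so the sign is free
          to serve as the inline-hierarchy marker."""
          def __init__(self, n_coarse, x0=0.0, x1=1.0):
              self.cells = list(np.zeros(n_coarse))
              self.x0, self.x1, self.n = x0, x1, n_coarse

          def refine(self, i):
              j = len(self.cells)
              self.cells[i] = -j            # negative = pointer to subcells
              self.cells.extend([0.0, 0.0])
              return j

          def lookup(self, x):
              """Descend from the coarse cell containing x to its leaf."""
              w = (self.x1 - self.x0) / self.n
              i = min(int((x - self.x0) / w), self.n - 1)
              lo = self.x0 + i * w
              while self.cells[i] < 0:      # follow inline pointers
                  w *= 0.5
                  j = int(-self.cells[i])
                  if x < lo + w:
                      i = j
                  else:
                      i, lo = j + 1, lo + w
              return i

      g = InlineGrid1D(4)
      sub = g.refine(2)                # refine the third coarse cell
      g.cells[sub] = 7.0
      print(g.cells[g.lookup(0.55)])   # falls in the refined region -> 7.0

    Because no separate tree is traversed and the array is contiguous, such a layout is cheap to share read-only between threads, which matches the shared-memory goal stated above.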

  9. Homogeneous hierarchies: A discrete analogue to the wavelet-based multiresolution approximation

    SciTech Connect

    Mirkin, B.

    1996-12-31

    A correspondence between discrete binary hierarchies and some orthonormal bases of the n-dimensional Euclidean space can be applied to such problems as clustering, ordering, identifying/testing in very large data bases, or multiresolution image/signal processing. The latter issue is considered in the paper. The binary hierarchy based multiresolution theory is expected to lead to effective methods for data processing because of relaxing the regularity restrictions of the classical theory.

  10. Multi-resolution community detection based on generalized self-loop rescaling strategy

    NASA Astrophysics Data System (ADS)

    Xiang, Ju; Tang, Yan-Ni; Gao, Yuan-Yuan; Zhang, Yan; Deng, Ke; Xu, Xiao-Ke; Hu, Ke

    2015-08-01

    Community detection is of considerable importance for analyzing the structure and function of complex networks. Many real-world networks may possess community structures at multiple scales, and recently, various multi-resolution methods were proposed to identify the community structures at different scales. In this paper, we present a type of multi-resolution methods by using the generalized self-loop rescaling strategy. The self-loop rescaling strategy provides one uniform ansatz for the design of multi-resolution community detection methods. Many quality functions for community detection can be unified in the framework of the self-loop rescaling. The resulting multi-resolution quality functions can be optimized directly using the existing modularity-optimization algorithms. Several derived multi-resolution methods are applied to the analysis of community structures in several synthetic and real-world networks. The results show that these methods can find the pre-defined substructures in synthetic networks and real splits observed in real-world networks. Finally, we give a discussion on the methods themselves and their relationship. We hope that the study in the paper can be helpful for the understanding of the multi-resolution methods and provide useful insight into designing new community detection methods.
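
    The gist of the strategy can be sketched as follows, as an illustration rather than the authors' implementation: give every node a self-loop of weight r and hand the rescaled graph to any standard modularity optimizer; sweeping r then traverses resolutions, with larger r favoring smaller communities. The sketch assumes networkx and its Louvain optimizer.

      import networkx as nx
      from networkx.algorithms.community import louvain_communities

      def communities_at_resolution(G, r, seed=0):
          """Self-loop rescaling: add a self-loop of weight r to every node,
          then optimize standard modularity on the rescaled graph."""
          H = nx.Graph()
          H.add_weighted_edges_from((u, v, 1.0) for u, v in G.edges())
          for n in G.nodes():
              H.add_edge(n, n, weight=r)    # the rescaling step
          return louvain_communities(H, weight="weight", seed=seed)

      G = nx.karate_club_graph()
      for r in (0.0, 5.0, 20.0):            # sweep across resolutions
          print(r, len(communities_at_resolution(G, r)))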

  11. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-09-01

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits to the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method are compared against existing computational phantoms.

  12. Continuously zoom imaging probe for the multi-resolution foveated laparoscope.

    PubMed

    Qin, Yi; Hua, Hong

    2016-04-01

    In modern minimally invasive surgeries (MIS), standard laparoscopes suffer from the tradeoff between the spatial resolution and field of view (FOV). The inability to simultaneously acquire high-resolution images for accurate operation and wide-angle overviews for situational awareness limits the efficiency and outcome of the MIS. A dual-view multi-resolution foveated laparoscope (MRFL) which can simultaneously provide the surgeon with a high-resolution view as well as a wide-angle overview was proposed and demonstrated to have great potential for improving the MIS. Although experiment results demonstrated the high-magnification probe has an adequate magnification for viewing surgical details, the dual-view MRFL is limited to two fixed levels of magnification. A fine adjustment of the magnification is highly desired for obtaining high-resolution images with the desired field coverage. In this paper, a high-magnification probe with continuous zooming capability without any mechanical moving parts is demonstrated. By taking advantage of two electrically tunable lenses, one for optical zoom and the other for image focus compensation, the optical magnification of the high-magnification probe varies from 2× to 3× compared with that of the wide-angle probe, while the focused object position stays the same as for the wide-angle probe. The optical design and the tunable lens analysis are presented, followed by prototype demonstration. PMID:27446645

  13. Continuously zoom imaging probe for the multi-resolution foveated laparoscope

    PubMed Central

    Qin, Yi; Hua, Hong

    2016-01-01

    In modern minimally invasive surgeries (MIS), standard laparoscopes suffer from the tradeoff between the spatial resolution and field of view (FOV). The inability to simultaneously acquire high-resolution images for accurate operation and wide-angle overviews for situational awareness limits the efficiency and outcome of the MIS. A dual-view multi-resolution foveated laparoscope (MRFL) which can simultaneously provide the surgeon with a high-resolution view as well as a wide-angle overview was proposed and demonstrated to have great potential for improving the MIS. Although experiment results demonstrated the high-magnification probe has an adequate magnification for viewing surgical details, the dual-view MRFL is limited to two fixed levels of magnification. A fine adjustment of the magnification is highly desired for obtaining high-resolution images with the desired field coverage. In this paper, a high-magnification probe with continuous zooming capability without any mechanical moving parts is demonstrated. By taking advantage of two electrically tunable lenses, one for optical zoom and the other for image focus compensation, the optical magnification of the high-magnification probe varies from 2× to 3× compared with that of the wide-angle probe, while the focused object position stays the same as for the wide-angle probe. The optical design and the tunable lens analysis are presented, followed by prototype demonstration. PMID:27446645

  14. Multi-Resolution Clustering Analysis and Visualization of Around One Million Synthetic Earthquake Events

    NASA Astrophysics Data System (ADS)

    Kaneko, J. Y.; Yuen, D. A.; Dzwinel, W.; Boryszko, K.; Ben-Zion, Y.; Sevre, E. O.

    2002-12-01

    The study of seismic patterns with synthetic data is important for analyzing the seismic hazard of faults because one can precisely control the spatial and temporal domains. Using modern clustering analysis from statistics and recently introduced visualization software, AMIRA, we have examined the multi-resolution nature of a total assemblage involving 922,672 earthquake events in 4 numerically simulated models, which have different constitutive parameters, with 2 disparately different time intervals in a 3D spatial domain. The evolution of stress and slip on the fault plane was simulated with the 3D elastic dislocation theory for a configuration representing the central San Andreas Fault (Ben-Zion, J. Geophys. Res., 101, 5677-5706, 1996). The 4 different models represent various levels of fault zone disorder and have the following brittle properties and names: uniform properties (model U), a Parkfield type Asperity (A), fractal properties (F), and multi-size-heterogeneities (model M). We employed the MNN (mutual nearest neighbor) clustering method and developed a C program that simultaneously calculates a number of parameters related to the location of the earthquakes and their magnitude values. Visualization was then used to look at the geometrical locations of the hypocenters and the evolution of seismic patterns. We wrote an AmiraScript that allows us to pass the parameters in an interactive format. With data sets consisting of 150-year time intervals, we have unveiled the distinctly multi-resolutional nature in the spatial-temporal pattern of small and large earthquake correlations shown previously by Eneva and Ben-Zion (J. Geophys. Res., 102, 24513-24528, 1997). In order to search for clearer possible stationary patterns and substructures within the clusters, we have also carried out the same analysis for corresponding data sets with time extending to several thousand years. The larger data sets were studied with finer and finer time intervals and multi

  15. X-ray Crystallographic Analyses of Pig Pancreatic α-Amylase with Limit Dextrin, Oligosaccharide and α-Cyclodextrin

    PubMed Central

    Larson, Steven B.; Day, John S.; McPherson, Alexander

    2010-01-01

    Further refinement of the model using maximum likelihood procedures and re-evaluation of the native electron density map has shown that crystals of pig pancreatic α-amylase, whose structure we reported more than fifteen years ago, in fact contain a substantial amount of carbohydrate. The carbohydrate fragments are the products of glycogen digestion carried out as an essential step of the protein's purification procedure. In particular, the substrate-binding cleft contains a limit dextrin of six glucose residues, one of which contains both α-(1,4) and α-(1,6) linkages to contiguous residues. The disaccharide in the original model, shared between two amylase molecules in the crystal lattice, but also occupying a portion of the substrate-binding cleft, is now seen to be a tetrasaccharide. There are, in addition, several other probable monosaccharide binding sites. In addition to these results, we have re-examined our X-ray diffraction analysis of α-amylase complexed with α-cyclodextrin. α-Amylase binds three cyclodextrin molecules. Glucose residues of two of the rings superimpose upon the limit dextrin and the tetrasaccharide. The limit dextrin superimposes in large part upon linear oligosaccharide inhibitors visualized by other investigators. By comprehensive integration of these complexes we have constructed a model for the binding of polysaccharides having the helical character known to be present in natural substrates such as starch and glycogen. PMID:20222716

  16. Multi-Resolution Assimilative Analysis of High-Latitude Ionospheric Convection in both Hemispheres

    NASA Astrophysics Data System (ADS)

    Thomas, Z. M.; Matsuo, T.; Nychka, D. W.; Cousins, E. D. P.; Wiltberger, M. J.

    2014-12-01

    Assimilative techniques for obtaining complete maps of ionospheric electric potential (and related parameters) from sparse radar and satellite observations greatly facilitate studies of magnetosphere/ionosphere coupling. While there is much scientific interest in studying interhemispheric asymmetry in ionospheric convection at both large and small scales, current mapping procedures rely on spherical harmonic expansion techniques, which produce inherently large-scale analyses. Due to the global nature of the spherical harmonics, such techniques are also subject to various instabilities arising from sparsity and error in the observations, which can introduce non-physical patterns in the inferred convection. We present a novel technique for spatial mapping of ionospheric electric potential via a multi-resolution basis function expansion procedure, making use of compactly supported radial basis functions which are flexibly located over geodesic grids; the coefficients are modeled via a Markov random field construction. The technique is applied to radar observations from the Super Dual Auroral Radar Network (SuperDARN), whereupon careful comparison of interhemispheric differences in mapped potential is made at various scales.

  17. Multiresolution modeling with a JMASS-JWARS high-level architecture (HLA) federation

    NASA Astrophysics Data System (ADS)

    Plotz, Gary A.; Prince, John

    2003-09-01

    Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission and theater/campaign measures of performance, measures of effectiveness and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between each model are both time consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis. However, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting "extremely large" simulation. One viable alternative is to "integrate" the current hierarchical suite of simulation models using the DoD's High Level Architecture (HLA) in order to support multi-resolution modeling. An HLA integration -- called a federation -- eliminates the problem of "extremely large" models, provides a well-defined and manageable mixed resolution simulation and minimizes Verification, Validation, and Accreditation (VV&A) issues. This paper describes the process and results of integrating the Joint Modeling and Simulation System (JMASS) and the Joint Warfare System (JWARS) simulations -- two of the Department of Defense's (DoD) next-generation simulations -- using a HLA federation.

  18. Multi-Resolution Seismic Tomography Based on Recursive Tessellation Hierarchy

    SciTech Connect

    Simmons, N A; Myers, S C; Ramirez, A

    2009-07-01

    tomographic problems. They also apply the progressive inversion approach with Pn waves traveling within the Middle East region and compare the results to simple tomographic inversions. As expected from synthetic testing, the progressive approach results in detailed structure where there is high data density and broader regional anomalies where seismic information is sparse. The ultimate goal is to use these methods to produce a seamless, multi-resolution global tomographic model with local model resolution determined by the constraints afforded by available data. They envisage this new technique as the general approach to be employed for future multi-resolution model development with complex arrangements of regional and teleseismic information.

  19. Multi-resolution adaptive data collection prioritisation for multi-risk assessment

    NASA Astrophysics Data System (ADS)

    Pittore, M.; Wieland, M.; Bindi, D.; Fleming, K.; Parolai, S.

    2012-04-01

    The distribution and amount of potential losses due to natural hazards vary continuously, and sometimes abruptly, in space and time. Changes in the damage distribution depend both on the specific natural hazard (for instance, flood hazard can depend on the season and the weather) and on the evolution of vulnerability (variations in the size and composition of the exposed assets). Moreover, the most appropriate spatial and temporal scales at which these changes occur have to be taken into account. Furthermore, the spatio-temporal variability of a multi-risk assessment depends on the distribution and quality of the information upon which the assessment is based. This information is subject to uncertainties that also vary over time, for instance as new data are collected and integrated. Multi-risk assessment is therefore a dynamical process aiming at continuous monitoring of the expected consequences of one or more natural events, given an uncertain and incomplete description of both the hazards involved and the composition and vulnerability of the exposed assets. A novel multi-resolution, adaptive data collection approach is explored, which is of particular interest in countries where multi-scale, multi-risk assessment is sought but limited resources are available for intensive exposure and vulnerability data collection. In this case, a suitable prioritisation of data collection is proposed as an adaptive sampling scheme optimized to trade off data collection cost against loss estimation uncertainty. Preliminary test cases will be presented and discussed.

  20. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    SciTech Connect

    Hoa T. Nguyen; Stone, Daithi; E. Wes Bethel

    2016-01-01

    An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have limitations, such as high computational cost or loss of accuracy. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in the data, namely variation. We use two different case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that a statistical measure more faithfully preserves the variation signal inherent in the data, across both multi-dimensional projections and multi-resolution representations, than a methodology based upon averaging.
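
    A tiny sketch of the contrast the paper draws, under the assumption that the signal of interest lives in the spread within each block: block averaging erases it, while a block statistic such as the standard deviation retains it.

      import numpy as np

      def reduce_blocks(data, k, stat):
          """Reduced-resolution representation: collapse blocks of k samples
          with the given statistic (e.g. np.mean vs np.std)."""
          n = (len(data) // k) * k
          return stat(data[:n].reshape(-1, k), axis=1)

      rng = np.random.default_rng(3)
      # A signal whose *variability*, not its mean, carries the information.
      x = rng.standard_normal(10_000) * np.where(np.arange(10_000) < 5_000,
                                                 0.1, 2.0)

      means = reduce_blocks(x, 100, np.mean)
      stds = reduce_blocks(x, 100, np.std)
      print(np.abs(means[:50]).mean(), np.abs(means[50:]).mean())  # both near 0
      print(stds[:50].mean(), stds[50:].mean())                    # ~0.1 vs ~2.0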

  1. Native and nonnative fish populations of the Colorado River are food limited--evidence from new food web analyses

    USGS Publications Warehouse

    Kennedy, Theodore A.; Cross, Wyatt F.; Hall, Robert O., Jr.; Baxter, Colden V.; Rosi-Marshall, Emma J.

    2013-01-01

    Fish populations in the Colorado River downstream from Glen Canyon Dam appear to be limited by the availability of high-quality invertebrate prey. Midge and blackfly production is low and nonnative rainbow trout in Glen Canyon and native fishes in Grand Canyon consume virtually all of the midge and blackfly biomass that is produced annually. In Glen Canyon, the invertebrate assemblage is dominated by nonnative New Zealand mudsnails, the food web has a simple structure, and transfers of energy from the base of the web (algae) to the top of the web (rainbow trout) are inefficient. The food webs in Grand Canyon are more complex relative to Glen Canyon, because, on average, each species in the web is involved in more interactions and feeding connections. Based on theory and on studies from other ecosystems, the structure and organization of Grand Canyon food webs should make them more stable and less susceptible to large changes following perturbations of the flow regime relative to food webs in Glen Canyon. In support of this hypothesis, Grand Canyon food webs were much less affected by a 2008 controlled flood relative to the food web in Glen Canyon.

  2. Leaf Length Tracker: a novel approach to analyse leaf elongation close to the thermal limit of growth in the field

    PubMed Central

    Kirchgessner, Norbert; Yates, Steven; Hiltpold, Maya; Walter, Achim

    2016-01-01

    Leaf growth in monocot crops such as wheat and barley largely follows the daily temperature course, particularly under cold but humid springtime field conditions. Knowledge of the temperature response of leaf extension, particularly variations close to the thermal limit of growth, helps define physiological growth constraints and breeding-related genotypic differences among cultivars. Here, we present a novel method, called ‘Leaf Length Tracker’ (LLT), suitable for measuring leaf elongation rates (LERs) of cereals and other grasses with high precision and high temporal resolution under field conditions. The method is based on image sequence analysis, using a marker tracking approach to calculate LERs. We applied the LLT to several varieties of winter wheat (Triticum aestivum), summer barley (Hordeum vulgare), and ryegrass (Lolium perenne), grown in the field and in growth cabinets under controlled conditions. LLT is easy to use and we demonstrate its reliability and precision under changing weather conditions that include temperature, wind, and rain. We found that leaf growth stopped at a base temperature of 0°C for all studied species and we detected significant genotype-specific differences in LER with rising temperature. The data obtained were statistically robust and were reproducible in the tested environments. Using LLT, we were able to detect subtle differences (sub-millimeter) in leaf growth patterns. This method will allow the collection of leaf growth data in a wide range of future field experiments on different graminoid species or varieties under varying environmental or treatment conditions. PMID:26818912

  3. Leaf Length Tracker: a novel approach to analyse leaf elongation close to the thermal limit of growth in the field.

    PubMed

    Nagelmüller, Sebastian; Kirchgessner, Norbert; Yates, Steven; Hiltpold, Maya; Walter, Achim

    2016-04-01

    Leaf growth in monocot crops such as wheat and barley largely follows the daily temperature course, particularly under cold but humid springtime field conditions. Knowledge of the temperature response of leaf extension, particularly variations close to the thermal limit of growth, helps define physiological growth constraints and breeding-related genotypic differences among cultivars. Here, we present a novel method, called 'Leaf Length Tracker' (LLT), suitable for measuring leaf elongation rates (LERs) of cereals and other grasses with high precision and high temporal resolution under field conditions. The method is based on image sequence analysis, using a marker tracking approach to calculate LERs. We applied the LLT to several varieties of winter wheat (Triticum aestivum), summer barley (Hordeum vulgare), and ryegrass (Lolium perenne), grown in the field and in growth cabinets under controlled conditions. LLT is easy to use and we demonstrate its reliability and precision under changing weather conditions that include temperature, wind, and rain. We found that leaf growth stopped at a base temperature of 0°C for all studied species and we detected significant genotype-specific differences in LER with rising temperature. The data obtained were statistically robust and were reproducible in the tested environments. Using LLT, we were able to detect subtle differences (sub-millimeter) in leaf growth patterns. This method will allow the collection of leaf growth data in a wide range of future field experiments on different graminoid species or varieties under varying environmental or treatment conditions. PMID:26818912

  4. Analysing the Spectrum of Andesitic Plinian Eruptions: Approaching the Uppermost Hazard Limits Expected from MT. Ruapehu, New Zealand

    NASA Astrophysics Data System (ADS)

    Pardo, N.; Cronin, S. J.; Palmer, A. S.; Procter, J.; Smith, I. E.; Nemeth, K.

    2011-12-01

    parameters by comparing different methodologies, in order to best estimate realistic uppermost hazard limits. We found the Sulpizio (2005) method of k1 vs. √Aip, integrating multiple segments, to be the best approach for quantifying past eruptions where exposures are limited to proximal-intermediate locations and isopachs thinner than 5 cm cannot be constructed. The bilobate nature of both the isopach and isopleth maps reflects the complexity of tephra dispersion: non-elliptical isopleth shapes show high contour distortion and lobe-axis bending, indicating important shifts in wind direction over a short time interval. Calculated eruptive parameters, such as minimum erupted volumes (0.3 to 0.6 km³), break-in-slope distances (√Aip: 31.4-80.8 km), column heights (22-37 km), volume discharge rates (~10⁴-10⁵ m³/s), and mass discharge rates (~10⁷-10⁸ kg/s), are all consistent with Plinian-style eruptions significantly larger than those that have occurred over the past 5000 yr (VEI = 3). These new data could yield the "worst-case" eruption scenario for Ruapehu, similar to the Plinian phases of the Askja 1875 and Chaitén 2008 eruptions.

  5. Multi-tissue analyses reveal limited inter-annual and seasonal variation in mercury exposure in an Antarctic penguin community.

    PubMed

    Brasso, Rebecka L; Polito, Michael J; Emslie, Steven D

    2014-10-01

    Inter-annual variation in tissue mercury concentrations in birds can result from annual changes in the bioavailability of mercury or from shifts in dietary composition and/or trophic level. We investigated potential annual variability in mercury dynamics in the Antarctic marine food web using Pygoscelis penguins as biomonitors. Eggshell membrane, chick down, and adult feathers were collected from three species of sympatrically breeding Pygoscelis penguins during the austral summers of 2006/2007-2010/2011. To evaluate the hypothesis that mercury concentrations in penguins exhibit significant inter-annual variation, and to determine the potential source of such variation (dietary or environmental), we compared tissue mercury concentrations with trophic levels as indicated by δ(15)N values from all species and tissues. Overall, no inter-annual variation in mercury was observed in adult feathers, suggesting that mercury exposure on an annual scale was consistent for Pygoscelis penguins. However, when examining tissues that reflected more discrete time periods (chick down and eggshell membrane) relative to adult feathers, we found some evidence of inter-annual variation in mercury exposure during the penguins' pre-breeding and chick-rearing periods. Evidence of inter-annual variation in penguin trophic level was also limited, suggesting that foraging ecology and environmental factors related to the bioavailability of mercury may provide more explanatory power for mercury exposure than trophic level alone. Even so, the variable strength of the relationships observed between trophic level and tissue mercury concentrations across and within Pygoscelis penguin species suggests that caution is required when selecting appropriate species and tissue combinations for environmental biomonitoring studies in Antarctica. PMID:25085270

  6. Wavelet multiresolution analyses adapted for the fast solution of boundary value ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Jawerth, Bjoern; Sweldens, Wim

    1993-01-01

    We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.

  7. A novel adaptive multi-resolution combined watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, QiWei

    2008-04-01

    The rapid development of information technology and the World Wide Web means that people frequently confront various kinds of authentication problems, especially copyright problems for digital products. Digital watermarking has emerged as one kind of solution. The balance between robustness and imperceptibility has always been the objective sought by researchers in this area. To address this trade-off, a novel adaptive multi-resolution combined digital image watermarking algorithm is proposed in this paper. In the proposed algorithm, we first decompose the watermark into several sub-bands and, according to the significance of each sub-band, embed it into different DWT coefficients of the carrier image. The human visual system (HVS) is considered during embedding, so a larger watermark capacity can be embedded while preserving image quality. The experimental results show that the proposed algorithm performs better in terms of robustness and security, and that, at the same visual quality, the technique offers larger capacity. The unification of robustness and imperceptibility is thus achieved.
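
    The abstract does not give implementation details, but the general DWT-embedding idea can be sketched as follows with PyWavelets; the wavelet choice, decomposition level, and the scalar embedding strength alpha are hypothetical stand-ins for the paper's HVS-adaptive, significance-based scheme.

      import numpy as np
      import pywt

      def embed_watermark(carrier, watermark, alpha=0.05, wavelet='haar'):
          # Embed a watermark into the level-2 horizontal detail band of the carrier.
          cA2, (cH2, cV2, cD2), details1 = pywt.wavedec2(carrier, wavelet, level=2)
          wm = np.resize(watermark, cH2.shape)          # fit watermark to sub-band size
          cH2 = cH2 + alpha * np.max(np.abs(cH2)) * wm  # additive embedding (assumed)
          return pywt.waverec2([cA2, (cH2, cV2, cD2), details1], wavelet)

      rng = np.random.default_rng(7)
      carrier = rng.integers(0, 256, size=(256, 256)).astype(float)
      watermark = rng.integers(0, 2, size=(32, 32)) * 2 - 1  # +/-1 bit pattern
      marked = embed_watermark(carrier, watermark)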

  8. Eagle II: A prototype for multi-resolution combat modeling

    SciTech Connect

    Powell, D.R.; Hutchinson, J.L.

    1993-01-01

    Eagle II is a prototype analytic model derived from the integration of the low-resolution Eagle model with the high-resolution SIMNET model. This integration promises a new capability to allow for a more effective examination of proposed or existing combat systems that could not be easily evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high- and low-fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment being conducted goes beyond system performance and extends to questions of force structure balance and sustainment, then SIMNET results can be used to "calibrate" the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the worldwide threat, requirements for flexible response, declining defense budgets, and down-sizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness, and lower overall cost.

  9. Face Recognition with Multi-Resolution Spectral Feature Images

    PubMed Central

    Sun, Zhan-Li; Lam, Kin-Man; Dong, Zhao-Yang; Wang, Han; Gao, Qing-Wei; Zheng, Chun-Hou

    2013-01-01

    The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and its significance for real-world applications. However, achieving relatively high recognition accuracy remains difficult, usually because too few training samples are available and because of variations in illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, a classifier committee learning (CCL) strategy is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on standard databases demonstrate the feasibility and efficiency of the proposed method. PMID:23418451

  10. On analysis of electroencephalogram by multiresolution-based energetic approach

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt; Siddiqi, A. H.; Aslan, Zafer

    2013-10-01

    Epilepsy is a common brain disorder in which normal neuronal activity is disrupted. Electroencephalography (EEG) is the recording of electrical activity along the scalp produced by the firing of neurons within the brain. The main application of EEG is in epilepsy, where certain abnormalities on a standard EEG indicate epileptic activity. EEG signals, like many biomedical signals, are highly non-stationary by nature. For the investigation of biomedical signals, in particular EEG signals, wavelet analysis has found a prominent position owing to its ability to analyze such signals. The wavelet transform is capable of separating the signal energy among different frequency scales, and a good compromise between temporal and frequency resolution is obtained. The present study is an attempt at a better understanding of the mechanism causing the epileptic disorder and at accurate prediction of the occurrence of seizures. In the present paper, following Magosso's work [12], we identify typical patterns of energy redistribution before and during the seizure using multiresolution wavelet analysis on data from Kocaeli University's Medical School.
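
    A minimal sketch of the energetic approach described above, assuming a standard discrete wavelet decomposition: the relative energy in each frequency band is computed per signal window, so that energy redistribution before and during a seizure can be tracked. The wavelet family, decomposition level, and windowing are illustrative assumptions, not the paper's exact settings.

      import numpy as np
      import pywt

      def band_energies(window, wavelet='db4', level=5):
          # Relative signal energy per wavelet band; order is [A5, D5, D4, D3, D2, D1].
          coeffs = pywt.wavedec(window, wavelet, level=level)
          energies = np.array([np.sum(c ** 2) for c in coeffs])
          return energies / energies.sum()

      # Track energy redistribution across consecutive EEG windows (e.g. 1 s at 256 Hz).
      eeg = np.random.default_rng(0).normal(size=2560)  # placeholder for a real recording
      profiles = [band_energies(eeg[i:i + 256]) for i in range(0, 2560, 256)]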

  11. Cointegration and Nonstationarity in the Context of Multiresolution Analysis

    NASA Astrophysics Data System (ADS)

    Worden, K.; Cross, E. J.; Kyprianou, A.

    2011-07-01

    Cointegration has established itself as a powerful means of projecting out long-term trends from time-series data in the context of econometrics. Recent work by the current authors has further established that cointegration can be applied profitably in the context of structural health monitoring (SHM), where it is desirable to project out the effects of environmental and operational variations from data in order that they do not generate false positives in diagnostic tests. The concept of cointegration is partly built on a clear understanding of the ideas of stationarity and nonstationarity for time-series. Nonstationarity in this context is 'traditionally' established through the use of statistical tests, e.g. the hypothesis test based on the augmented Dickey-Fuller statistic. However, it is important to understand the distinction in this case between 'trend' stationarity and stationarity of the AR models typically fitted as part of the analysis process. The current paper will discuss this distinction in the context of SHM data and will extend the discussion by the introduction of multi-resolution (discrete wavelet) analysis as a means of characterising the time-scales on which nonstationarity manifests itself. The discussion will be based on synthetic data and also on experimental data for the guided-wave SHM of a composite plate.
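
    The augmented Dickey-Fuller test mentioned above, and the distinction between trend stationarity and unit-root nonstationarity, can be illustrated with statsmodels; the synthetic series and the choice of deterministic-term specifications below are assumptions made for the example.

      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      rng = np.random.default_rng(1)
      t = np.arange(500)
      trended = 0.02 * t + rng.normal(size=500)      # stationary around a linear trend
      random_walk = np.cumsum(rng.normal(size=500))  # genuinely unit-root nonstationary

      for name, series in [("trended", trended), ("random walk", random_walk)]:
          for reg in ("c", "ct"):  # constant only vs constant + deterministic trend
              stat, pvalue = adfuller(series, regression=reg)[:2]
              print(f"{name:11s} regression={reg}: ADF={stat:6.2f}, p={pvalue:.3f}")

    Under the constant-only specification the trended series looks nonstationary, while adding a deterministic trend term reveals it is trend-stationary; the random walk fails to reject under both, which is the distinction the abstract draws.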

  12. Eagle II: A prototype for multi-resolution combat modeling

    SciTech Connect

    Powell, D.R.; Hutchinson, J.L.

    1993-02-01

    Eagle II is a prototype analytic model derived from the integration of the low-resolution Eagle model with the high-resolution SIMNET model. This integration promises a new capability to allow for a more effective examination of proposed or existing combat systems that could not be easily evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high- and low-fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment being conducted goes beyond system performance and extends to questions of force structure balance and sustainment, then SIMNET results can be used to "calibrate" the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the worldwide threat, requirements for flexible response, declining defense budgets, and down-sizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness, and lower overall cost.

  13. Extended generalized Lagrangian multipliers for magnetohydrodynamics using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Domingues, Margarete O.; Gomes, Anna Karina F.; Mendes, Odim; Schneider, Kai

    2013-10-01

    We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with a mixed hyperbolic-parabolic correction is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparison with the available exact solution. This work was supported by the contract SiCoMHD (ANR-Blanc 2011-045).

  14. Wavelet-based multiresolution analysis of Wivenhoe Dam water temperatures

    NASA Astrophysics Data System (ADS)

    Percival, D. B.; Lennox, S. M.; Wang, Y.-G.; Darnell, R. E.

    2011-05-01

    Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicate a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable for studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
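
    A minimal sketch of an additive wavelet decomposition in the spirit of the DSA components: each component is the reconstruction from a single scale's coefficients, so by linearity the components sum exactly to the original series. The DWT variant, wavelet family, and synthetic temperature series here are illustrative assumptions rather than the paper's exact formulation.

      import numpy as np
      import pywt

      def mra_components(x, wavelet='db4', level=3):
          # One additive component per scale: zero out all other coefficient
          # arrays and reconstruct; the components then sum exactly to x.
          coeffs = pywt.wavedec(x, wavelet, level=level)
          parts = []
          for i in range(len(coeffs)):
              kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
              parts.append(pywt.waverec(kept, wavelet)[:len(x)])
          return parts  # [smooth, D3, D2, D1] for level=3

      days = np.arange(512)
      temp = 20 + 5 * np.sin(2 * np.pi * days / 365) \
             + np.random.default_rng(2).normal(size=512)
      components = mra_components(temp)
      assert np.allclose(sum(components), temp)  # the additive property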

  15. Multiresolution generalized N dimension PCA for ultrasound image denoising

    PubMed Central

    2014-01-01

    Background: Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining better diagnosis. Method: In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA as a multilinear subspace learning method is used for denoising. Each level is combined to achieve the final denoised image based on Laplacian pyramids. Results: The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion: Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for the image with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917

  16. MRI data driven partial volume effects correction in PET imaging using 3D local multi-resolution analysis

    NASA Astrophysics Data System (ADS)

    Le Pogam, Adrien; Lamare, Frederic; Hatt, Mathieu; Fernandez, Philippe; Le Rest, Catherine Cheze; Visvikis, Dimitris

    2013-02-01

    PET partial volume effects (PVE), resulting from the limited resolution of PET scanners, remain a quantitative issue that PET/MRI scanners do not solve by themselves. A recently proposed voxel-based, locally adaptive 3D multi-resolution PVE correction based on the mutual analysis of wavelet decompositions was applied to 12 clinical 18F-FLT PET/T1 MRI images of glial tumors and compared to a PET-only voxel-wise iterative deconvolution approach. Quantitative and qualitative results demonstrated the interest of exploiting PET/MRI information, with higher uptake increases (19±8% vs. 11±7%, p=0.02) as well as more convincing visual restoration of details within tumors with respect to deconvolution of the PET uptake only. Further studies are now required to demonstrate the accuracy of this restoration through histopathological validation of the uptake in tumors.

  17. Fast pseudo-semantic segmentation for joint region-based hierarchical and multiresolution representation

    NASA Astrophysics Data System (ADS)

    Sekkal, Rafiq; Strauss, Clement; Pasteau, François; Babel, Marie; Deforges, Olivier

    2012-01-01

    In this paper, we present a new scalable segmentation algorithm called JHMS (Joint Hierarchical and Multiresolution Segmentation), characterized by region-based hierarchy and resolution scalability. Most existing algorithms apply either a multiresolution segmentation or a hierarchical segmentation. The proposed approach combines both processes: the image is considered as a set of images at different levels of resolution, and at each level a hierarchical segmentation is performed. Multiresolution implies that the segmentation of a given level is reused in the segmentation processes at subsequent levels, so as to ensure contour consistency between different resolutions. Each level of resolution provides a Region Adjacency Graph (RAG) that describes the neighborhood relationships between regions within that level of the multiresolution representation. Region label consistency is preserved thanks to a dedicated projection algorithm based on inter-level relationships. Moreover, a preprocess based on quadtree partitioning reduces the amount of input data, leading to a lower overall complexity of the segmentation framework. Experiments show that we obtain effective results compared with the state of the art, together with lower complexity.

  18. A multiresolution spatial parametrization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions.

    SciTech Connect

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet; Michalak, Anna M.; van Bloemen Waanders, Bart Gustaaf; McKenna, Sean Andrew

    2013-04-01

    The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights can be estimated from the observations. Further, our method for the imposition of boundary conditions leads to an approximately 10-fold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.

  19. Techniques and potential capabilities of multi-resolutional information (knowledge) processing

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

    A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems, including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Definite conditions are analyzed whose satisfaction is required for the organization and processing of redundant information (knowledge) in multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.

  20. Multiresolution phase retrieval in the fresnel region by use of wavelet transform.

    PubMed

    Souvorov, Alexei; Ishikawa, Tetsuya; Kuyumchyan, Armen

    2006-02-01

    A multiresolution (multiscale) analysis based on wavelet transform is applied to the problem of optical phase retrieval from the intensity measured in the in-line geometry (lens-free). The transport-of-intensity equation and the Fresnel diffraction integral are approximated in terms of a wavelet basis. A solution to the phase retrieval problem can be efficiently found in both cases using the multiresolution concept. Due to the hierarchical nature of wavelet spaces, wavelets are well suited to multiresolution methods that contain multigrid algorithms. Appropriate wavelet bases for the best solution approximation are discussed. The proposed approach reduces the computational complexity and accelerates the convergence of the solution. It is robust and reliable, and successful on both simulated and experimental images obtained with hard x rays. PMID:16477833

  1. On-the-Fly Decompression and Rendering of Multiresolution Terrain

    SciTech Connect

    Lindstrom, P; Cohen, J D

    2009-04-02

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
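
    The codec itself is GPU-oriented and considerably more elaborate, but the core ingredient named above, linear prediction with residual coding, can be sketched in a few lines; the parallelogram predictor and raster-scan ordering are standard choices assumed for illustration, not the paper's exact scheme.

      import numpy as np

      def residuals(height):
          # Parallelogram predictor: pred = left + top - top_left (raster order).
          h = height.astype(np.int64)
          pred = np.zeros_like(h)
          pred[1:, 1:] = h[1:, :-1] + h[:-1, 1:] - h[:-1, :-1]
          pred[0, 1:] = h[0, :-1]  # first row predicted from the left neighbour
          pred[1:, 0] = h[:-1, 0]  # first column predicted from above
          return h - pred          # near-zero residuals entropy-code compactly

      def reconstruct(res):
          # Invert the predictor by scanning in raster order.
          h = np.zeros_like(res)
          for i in range(res.shape[0]):
              for j in range(res.shape[1]):
                  if i == 0 and j == 0:
                      p = 0
                  elif i == 0:
                      p = h[i, j - 1]
                  elif j == 0:
                      p = h[i - 1, j]
                  else:
                      p = h[i, j - 1] + h[i - 1, j] - h[i - 1, j - 1]
                  h[i, j] = p + res[i, j]
          return h

      patch = (np.add.outer(np.arange(16), np.arange(16)) * 3).astype(np.int64)
      assert np.array_equal(reconstruct(residuals(patch)), patch)  # lossless round trip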

  2. A one-time truncate and encode multiresolution stochastic framework

    SciTech Connect

    Abgrall, R.; Congedo, P.M.; Geraci, G.

    2014-01-15

    In this work a novel adaptive strategy for stochastic problems, inspired by Harten's classical framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme, and handles a large class of problems, from unsteady to discontinuous solutions. Its formulation recovers the same results concerning the interpolation theory of the classical multiresolution approach, but extends them to uncertainty quantification problems. The present strategy builds numerical schemes with higher accuracy than other classical uncertainty quantification techniques, with a strong reduction of numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time-varying ones, without introducing further complications into the algorithm. The advantages of the present strategy are demonstrated on several numerical problems in which different forms of uncertainty distribution are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan–Orszag problem are reported in terms of accuracy and convergence. Finally, a two-degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model allows recovering some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. An uncertainty is injected so as to obtain a complete parameterization of the mass matrix. All the numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.

  3. Multiresolution graph Fourier transform for compression of piecewise smooth images.

    PubMed

    Hu, Wei; Cheung, Gene; Ortega, Antonio; Au, Oscar C

    2015-01-01

    Piecewise smooth (PWS) images (e.g., depth maps or animation images) contain unique signal characteristics such as sharp object boundaries and slowly varying interior surfaces. Leveraging recent advances in graph signal processing, in this paper we propose to compress PWS images using suitable graph Fourier transforms (GFTs) to minimize the total signal representation cost of each pixel block, considering both the sparsity of the signal's transform coefficients and the compactness of the transform description. Unlike fixed transforms, such as the discrete cosine transform, we can adapt the GFT to a particular class of pixel blocks. In particular, we select one among a defined search space of GFTs to minimize total representation cost via our proposed algorithms, leveraging graph optimization techniques such as spectral clustering and minimum graph cuts. Furthermore, for practical implementation of the GFT, we introduce two techniques to reduce computational complexity. First, at the encoder, we low-pass filter and downsample a high-resolution (HR) pixel block to obtain a low-resolution (LR) one, so that an LR-GFT can be employed. At the decoder, upsampling and interpolation are performed adaptively along HR boundaries coded using arithmetic edge coding, so that sharp object boundaries are well preserved. Second, instead of computing the GFT from a graph in real time via eigen-decomposition, the most popular LR-GFTs are pre-computed and stored in a table for lookup during encoding and decoding. Using depth maps and computer-graphics images as examples of PWS images, experimental results show that our proposed multiresolution-GFT scheme outperforms H.264 intra by 6.8 dB on average in peak signal-to-noise ratio at the same bit rate. PMID:25494508
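
    A hedged sketch of the basic GFT mechanics described above: the transform basis is the eigenvector set of a graph Laplacian whose edge weights are weak across an object boundary, so a piecewise smooth block compacts into a few significant coefficients. The block size, weights, and boundary placement are illustrative assumptions, not the paper's optimized transform selection.

      import numpy as np

      n = 4  # 4x4 pixel block, 4-connected grid graph
      W = np.zeros((n * n, n * n))
      for i in range(n):
          for j in range(n):
              v = i * n + j
              if j + 1 < n:                        # horizontal edges
                  W[v, v + 1] = W[v + 1, v] = 1.0
              if i + 1 < n:                        # vertical edges; weak across boundary
                  w = 0.01 if i == 1 else 1.0
                  W[v, v + n] = W[v + n, v] = w

      L = np.diag(W.sum(axis=1)) - W               # combinatorial graph Laplacian
      eigvals, U = np.linalg.eigh(L)               # columns of U form the GFT basis

      block = np.vstack([np.full((2, n), 10.0),    # piecewise smooth: two flat regions
                         np.full((2, n), 50.0)])
      coeffs = U.T @ block.ravel()
      print(np.sort(np.abs(coeffs))[::-1][:4])     # energy concentrates in a few coefficients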

  4. Using sparse regularization for multi-resolution tomography of the ionosphere

    NASA Astrophysics Data System (ADS)

    Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.

    2015-10-01

    Computerized ionospheric tomography (CIT) is a technique for reconstructing the state of the ionosphere, in terms of electron content, from a set of slant total electron content (STEC) measurements; it is usually posed as an inverse problem. In this experiment, the measurements are considered to come from the phase of the GPS signal and are therefore affected by bias; for this reason the STEC cannot be considered in absolute terms but rather in relative terms. Measurements are collected from receivers that are not evenly distributed in space, and together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes are continuously changing in time and space, which can limit the accuracy with which CIT resolves structures and the processes that describe the ionosphere. Some inversion techniques are based on ℓ2 minimization algorithms (i.e. Tikhonov regularization), and a standard approach using spherical harmonics is implemented here as a reference against which to compare the new method. A new approach is proposed for CIT that permits sparsity in the reconstruction coefficients by using wavelet basis functions. It is based on the ℓ1 minimization technique and on wavelet basis functions, owing to their compact-representation properties. The ℓ1 minimization is selected because it can optimize the result under an uneven distribution of observations by exploiting the localization property of wavelets. Also illustrated is how the inter-frequency biases on the STEC are calibrated within the inversion, and this is used as a way of evaluating the accuracy of the method. The technique is demonstrated using a simulation, showing the advantage of ℓ1 minimization over ℓ2 minimization in estimating the coefficients. This is particularly true for an uneven observation geometry, and especially for multi-resolution CIT.
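
    The contrast between ℓ2 and ℓ1 inversion can be sketched with a small iterative soft-thresholding (ISTA) solver for the ℓ1-regularized problem; in the paper the unknowns are wavelet coefficients of the electron-content field, whereas here a generic sparse vector and a random geometry matrix stand in as assumptions.

      import numpy as np

      def ista(A, b, lam, steps=1000):
          # Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1.
          L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(steps):
              z = x - A.T @ (A @ x - b) / L  # gradient step on the data-fit term
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # l1 prox
          return x

      rng = np.random.default_rng(3)
      A = rng.normal(size=(40, 200))           # under-determined "geometry" matrix
      x_true = np.zeros(200)
      x_true[[5, 50, 120]] = [2.0, -1.5, 1.0]  # few active (wavelet-like) coefficients
      b = A @ x_true + 0.01 * rng.normal(size=40)

      x_l1 = ista(A, b, lam=0.1)
      x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]  # minimum-norm l2 solution, non-sparse
      print(np.flatnonzero(np.abs(x_l1) > 0.1), np.sum(np.abs(x_l2) > 0.1))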

  5. Multi-resolution statistical analysis of brain connectivity graphs in preclinical Alzheimer's disease.

    PubMed

    Kim, Won Hwa; Adluru, Nagesh; Chung, Moo K; Okonkwo, Ozioma C; Johnson, Sterling C; B Bendlin, Barbara; Singh, Vikas

    2015-09-01

    There is significant interest, from both basic and applied research perspectives, in understanding how structural/functional connectivity changes can explain behavioral symptoms and predict decline in neurodegenerative diseases such as Alzheimer's disease (AD). The first step in most such analyses is to encode the connectivity information as a graph; then, one may perform statistical inference on various 'global' graph-theoretic summary measures (e.g., modularity, graph diameter) and/or at the level of individual edges (or connections). For AD in particular, clear differences in connectivity at the dementia stage of the disease (relative to healthy controls) have been identified. Despite such findings, AD-related connectivity changes in preclinical disease remain poorly characterized. Such preclinical datasets are typically smaller and group differences are weaker. In this paper, we propose a new multi-resolution method for performing statistical analysis of connectivity networks/graphs derived from neuroimaging data. At a high level, the method occupies the middle ground between the two extremes: analyzing global graph summary measures (global) and analyzing connectivity strengths or correlations for individual edges, similar to voxel-based analysis (local). Instead, our strategy derives a wavelet representation at each primitive (connection edge) which captures the graph context at multiple resolutions. We provide extensive empirical evidence of how this framework offers improved statistical power by analyzing two distinct AD datasets. Here, connectivity is derived from diffusion tensor magnetic resonance images by running a tractography routine. We first present results showing significant connectivity differences between AD patients and controls that were not evident using standard approaches. Later, we show results on populations that are not diagnosed with AD but have a positive family history risk of AD where our algorithm helps in identifying potentially

  6. A hexahedron element formulation with a new multi-resolution analysis

    NASA Astrophysics Data System (ADS)

    Xia, YiMing; Chen, ShaoLin

    2015-01-01

    A multiresolution hexahedron element is presented with a new multiresolution analysis (MRA) framework. The MRA framework is formulated from a mutually nested displacement subspace sequence whose basis functions are constructed by scaling and shifting a basic node shape function on the element domain. The basic node shape function is constructed by shifting a basic isoparametric element in one quadrant to the other seven quadrants around a specific node and joining the corresponding node shape functions of the eight elements at that node. The MRA endows the proposed element with a resolution level (RL) that adjusts the accuracy of the structural analysis. As a result, the traditional 8-node hexahedron element is a mono-resolution element and also a special case of the proposed element. Whereas meshing for a mono-resolution finite element model is based on empiricism, RL adjustment for the multiresolution model rests on a solid mathematical basis. The simplicity and clarity of the shape function construction, with its Kronecker delta property, and the rational MRA make the proposed element method more rational, easier, and more efficient to implement than the conventional mono-resolution solid element method or other MRA methods. The multiresolution hexahedron element method is thus better suited to the accurate computation of structural problems.

  7. Multiresolution stroke sketch adaptive representation and neural network processing system for gray-level image recognition

    NASA Astrophysics Data System (ADS)

    Meystel, Alexander M.; Rybak, Ilya A.; Bhasin, Sanjay

    1992-11-01

    This paper describes a method for multiresolutional representation of gray-level images as hierarchical sets of strokes characterizing the forms of objects with different degrees of generalization, depending on the context of the image. This method transforms the original image into a hierarchical graph which allows for efficient coding in order to store, retrieve, and recognize the image. The method is based upon finding the resolution levels for each image which minimize the computations required. This becomes possible through the use of a special image representation technique called Multiresolutional Attentional Representation for Recognition (MARR), based upon a feature which the authors call a stroke. This feature turns out to be efficient in the process of finding the appropriate system of resolutions and constructing the relational graph. MARR is formed by a multi-layer neural network with recurrent inhibitory connections between neurons, whose receptive fields are selectively tuned to detect the orientation of local contrasts in parts of the image with the appropriate degree of generalization. This method simulates the 'coarse-to-fine' algorithm an artist usually uses when making an attentional sketch of real images. The method, algorithms, and neural network architecture in this system can be used in many machine-vision systems with AI properties, in particular robotic vision. We expect that systems with MARR can become a component of intelligent control systems for autonomous robots. Their architectures are mostly multiresolutional and match well with the multiple resolutions of the MARR structure.

  8. Spatial heterogeneity of dechlorinating bacteria and limiting factors for in situ trichloroethene dechlorination revealed by analyses of sediment cores from a polluted field site.

    PubMed

    Dowideit, Kerstin; Scholz-Muramatsu, Heidrun; Miethling-Graff, Rona; Vigelahn, Lothar; Freygang, Martina; Dohrmann, Anja B; Tebbe, Christoph C

    2010-03-01

    Microbiological analyses of sediment samples were conducted to explore potentials and limitations for bioremediation of field sites polluted with chlorinated ethenes. Intact sediment cores, collected by direct push probing from a 35-ha contaminated area, were analyzed in horizontal layers. Cultivation-independent PCR revealed Dehalococcoides to be the most abundant 16S rRNA gene phylotype with a suspected potential for reductive dechlorination of the major contaminant trichloroethene (TCE). In declining abundances, Desulfitobacterium, Desulfuromonas and Dehalobacter were also detected. In TCE-amended sediment slurry incubations, 66% of 121 sediment samples were dechlorinating, among them one-third completely and the rest incompletely (end product cis-1,2-dichloroethene; cDCE). Both PCR and slurry analyses revealed highly heterogeneous horizontal and vertical distributions of the dechlorination potentials in the sediments. Complete reductive TCE dechlorination correlated with the presence of Dehalococcoides, accompanied by Acetobacterium and a relative of Trichococcus pasteurii. Sediment incubations under close to in situ conditions showed that a low TCE dechlorination activity could be stimulated by 7 mg L(-1) dissolved carbon for cDCE formation and by an additional 36 mg carbon (lactate) L(-1) for further dechlorination. The study demonstrates that the highly heterogeneous distribution of TCE degraders and their specific requirements for carbon and electrons are key issues for TCE degradation in contaminated sites. PMID:20041951

  9. The Global Multi-Resolution Topography (GMRT) Synthesis

    NASA Astrophysics Data System (ADS)

    Arko, R.; Ryan, W.; Carbotte, S.; Melkonian, A.; Coplan, J.; O'Hara, S.; Chayes, D.; Weissel, R.; Goodwillie, A.; Ferrini, V.; Stroker, K.; Virden, W.

    2007-12-01

    Topographic maps provide a backdrop for research in nearly every earth science discipline. There is particular demand for bathymetry data in the ocean basins, where existing coverage is sparse. Ships and submersibles worldwide are rapidly acquiring large volumes of new data with modern swath mapping systems. The science community is best served by a global topography compilation that is easily accessible, up-to-date, and delivers data in the highest possible (i.e. native) resolution. To meet this need, the NSF-supported Marine Geoscience Data System (MGDS; www.marine-geo.org) has partnered with the National Geophysical Data Center (NGDC; www.ngdc.noaa.gov) to produce the Global Multi-Resolution Topography (GMRT) synthesis - a continuously updated digital elevation model that is accessible through Open Geospatial Consortium (OGC; www.opengeospatial.org) Web services. GMRT had its genesis in 1992 with the NSF RIDGE Multibeam Synthesis (RMBS); later grew to include the Antarctic Multibeam Synthesis (AMBS); expanded again to include the NSF Ridge 2000 and MARGINS programs; and finally emerged as a global compilation in 2005 with the NSF Legacy of Ocean Exploration (LOE) project. The LOE project forged a permanent partnership between MGDS and NGDC, in which swath bathymetry data sets are routinely published and exchanged via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH; www.openarchives.org). GMRT includes both color-shaded relief images and underlying elevation values at ten different resolutions as high as 100m. New data are edited, gridded, and tiled using tools originally developed by William Haxby at Lamont-Doherty Earth Observatory. Global and regional data sources include the NASA Shuttle Radar Topography Mission (SRTM; http://www.jpl.nasa.gov/srtm/); Smith & Sandwell Satellite Predicted Bathymetry (http://topex.ucsd.edu/marine_topo/); SCAR Subglacial Topographic Model of the Antarctic (BEDMAP; http://www.antarctica.ac.uk/bedmap/); and

  10. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGESBeta

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
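
    StOMP itself is a stagewise variant, but the flavor of greedy sparse reconstruction can be shown with scikit-learn's plain orthogonal matching pursuit; the toy transport matrix and sparse wavelet-weight vector below are assumptions, and the paper's adaptations (prior information, non-negativity, irregular boundaries) are not captured by this sketch.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(4)
      A = rng.normal(size=(60, 300))           # toy observation/footprint matrix
      x_true = np.zeros(300)
      x_true[[10, 80, 220]] = [3.0, 1.2, 2.5]  # sparse, non-negative wavelet weights
      b = A @ x_true + 0.02 * rng.normal(size=60)

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(A, b)
      print(np.flatnonzero(omp.coef_))         # indices of the retained basis functions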

  11. A Multi-Resolution Method for Climate System Modeling: Application of Spherical Centroidal Voronoi Tessellations

    SciTech Connect

    Ringler, Todd D; Gunzburger, Max; Ju, Lili

    2008-01-01

    During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multi-resolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical Centroidal Voronoi Tessellations (SCVTs) offer one potential path toward the development of robust, multi-resolution climate system component models. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean-ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing and exhibiting SCVTs, we pair this mesh-generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow-water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multi-resolution method and the challenges ahead.
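
    A common way to approximate a CVT in the plane, consistent with the density-function idea above, is to draw samples from the user-defined density and run Lloyd-style k-means, whose centroids approximate the CVT generators for that density; the unit-square domain, Gaussian density bump, and parameter values below are illustrative assumptions, not the spherical construction used in the paper.

      import numpy as np
      from scipy.cluster.vq import kmeans

      def cvt_generators(density, n_gen=64, n_samples=100_000, seed=5):
          # Rejection-sample points from `density` on the unit square, then use
          # k-means: its centroids approximate the CVT generators for that density.
          rng = np.random.default_rng(seed)
          pts = np.empty((0, 2))
          while len(pts) < n_samples:
              cand = rng.random((n_samples, 2))
              accepted = cand[rng.random(n_samples) < density(cand)]
              pts = np.vstack([pts, accepted])
          generators, _ = kmeans(pts[:n_samples], n_gen, iter=5)
          return generators

      # High density in a central region of interest -> graded, multi-resolution spacing.
      density = lambda p: 0.05 + 0.95 * np.exp(
          -20 * ((p[:, 0] - 0.5) ** 2 + (p[:, 1] - 0.5) ** 2))
      generators = cvt_generators(density)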

  12. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process has become possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolution scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects by the use of surface feature histograms and SVM classification. The corresponding objects are fitted using an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  13. Noise-induced systematic errors in ratio imaging: serious artefacts and correction with multi-resolution denoising.

    PubMed

    Wang, Yu-Li

    2007-11-01

    Ratio imaging is playing an increasingly important role in modern cell biology. Combined with ratiometric dyes or fluorescence resonance energy transfer (FRET) biosensors, the approach allows the detection of conformational changes and molecular interactions in living cells. However, the approach is conducted increasingly under limited signal-to-noise ratio (SNR), where noise from multiple images can easily accumulate and lead to substantial uncertainty in ratio values. This study demonstrates that a far more serious concern is systematic errors that generate artificially high ratio values at low SNR. Thus, uneven SNR alone may lead to significant variations in ratios among different regions of a cell. Although correct average ratios may be obtained by applying conventional noise reduction filters, such as a Gaussian filter before calculating the ratio, these filters have a limited performance at low SNR and are prone to artefacts such as generating discrete domains not found in the correct ratio image. Much more reliable restoration may be achieved with multi-resolution denoising filters that take into account the actual noise characteristics of the detector. These filters are also capable of restoring structural details and photometric accuracy, and may serve as a general tool for retrieving reliable information from low-light live cell images. PMID:17970912
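
    The systematic upward bias described above follows from the nonlinearity of the ratio (Jensen's inequality applied to the noisy denominator) and is easy to reproduce in a Monte Carlo sketch; the intensity and noise values are arbitrary assumptions chosen to mimic decreasing SNR.

      import numpy as np

      rng = np.random.default_rng(6)
      true_num, true_den = 200.0, 400.0            # true intensities; true ratio = 0.5
      for sigma in (2.0, 20.0, 60.0):              # rising noise = falling SNR
          num = true_num + rng.normal(0.0, sigma, 1_000_000)
          den = true_den + rng.normal(0.0, sigma, 1_000_000)
          ratio = num / den
          print(f"sigma={sigma:5.1f}  mean ratio={ratio.mean():.4f}")
      # The mean ratio drifts upward from 0.5 as SNR drops: a systematic, not
      # random, error, which averaging over more pixels cannot remove.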

  14. An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  15. Multi-resolution imaging with an optimized number and distribution of sampling points.

    PubMed

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo

    2014-05-01

    We propose an approach, of interest in imaging and Synthetic Aperture Radar (SAR) tomography, for optimally determining the dimension of the scanning region, the number of sampling points therein and their spatial distribution, in the case of single-frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region as a finite-dimensional algebraic linear inverse problem. The dimension of the scanning region and the number and positions of the sampling points are determined by optimizing the singular value behavior of the matrix defining the linear operator. The method affords single-resolution, multi-resolution and dynamic multi-resolution imaging, allowing a flexibility not available in previous approaches. The performance has been evaluated via numerical and experimental analysis. PMID:24921717
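
    A toy sketch of the underlying idea, scoring a candidate scanning region by the singular values of the discretized forward operator, is given below. The 2D free-space Green's function model, the grid sizes and the threshold are illustrative assumptions, not the authors' formulation.

        import numpy as np

        k = 2 * np.pi                              # wavenumber, wavelength = 1 (assumed units)

        def ndf(sample_x, source_x, z=20.0, rel_thresh=1e-2):
            """Score a sampling configuration by the effective rank (number of
            degrees of freedom) of the discretised field-to-data operator."""
            r = np.sqrt(z**2 + (sample_x[:, None] - source_x[None, :])**2)
            A = np.exp(-1j * k * r) / r            # toy free-space propagation model
            s = np.linalg.svd(A, compute_uv=False)
            return np.sum(s > rel_thresh * s[0])   # singular values carrying information

        source_x = np.linspace(-5, 5, 64)          # target reflectivity grid
        for half_width in (5.0, 10.0, 20.0):       # candidate scanning-region sizes
            sample_x = np.linspace(-half_width, half_width, 64)
            print(half_width, ndf(sample_x, source_x))

    Sweeping the aperture (and, analogously, the number and spacing of samples) and keeping the configuration that maximizes the count of significant singular values mimics the optimization the abstract describes.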

  16. High throughput VLSI architecture for multiresolution integer motion estimation in high definition AVS video encoder

    NASA Astrophysics Data System (ADS)

    Yin, HaiBing; Qi, Honggang; Xu, Hao; Xie, Xiaodong; Gao, Wen

    2010-07-01

    This paper proposes a hardware-friendly multi-resolution motion estimation algorithm and VLSI architecture for high-definition MPEG-like video encoder hardware implementation. By searching in parallel and exploiting the high correlation among multi-resolution reference pixels, the huge throughput and computational load caused by the large search window are alleviated considerably. Sixteen-way parallel processing element arrays with configurable multiplying technologies achieve a fast search with regular data access and efficient data reuse, and the parallel arrays can be efficiently reused at three hierarchical levels for sequential motion vector refinement. The modified algorithm reaches a good balance between implementation complexity and search performance, and the logic circuit and on-chip SRAM consumption of the VLSI architecture are moderate.

  17. Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.

    PubMed

    Cowlagi, Raghvendra V; Tsiotras, Panagiotis

    2012-10-01

    We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy. PMID:22581136

  18. Bayesian multiresolution method for local tomography in dental x-ray imaging.

    PubMed

    Niinimäki, K; Siltanen, S; Kolehmainen, V

    2007-11-21

    Dental tomographic cone-beam x-ray imaging devices record truncated projections and reconstruct a region of interest (ROI) inside the head. Image reconstruction from the resulting local tomography data is an ill-posed inverse problem. A new Bayesian multiresolution method is proposed for local tomography reconstruction. The inverse problem is formulated in a well-posed statistical form where a prior model of the target tissues compensates for the incomplete x-ray projection data. Tissues are represented in a wavelet basis, and prior information is modeled in terms of a Besov norm penalty. The number of unknowns in the reconstruction problem is reduced by abandoning fine-scale wavelets outside the ROI. Compared to traditional voxel-based models, this multiresolution approach allows significant reduction of degrees of freedom without loss of accuracy inside the ROI, as shown by 2D examples using simulated and in vitro local tomography data. PMID:17975290

  19. Efficient Human Action and Gait Analysis Using Multiresolution Motion Energy Histogram

    NASA Astrophysics Data System (ADS)

    Yu, Chih-Chang; Cheng, Hsu-Yung; Cheng, Chien-Hung; Fan, Kuo-Chin

    2010-12-01

    The Average Motion Energy (AME) image is a good way to describe human motions. However, its computational cost grows with the number of database templates. In this paper, we propose a histogram-based approach to improve computational efficiency, converting the human action/gait recognition problem into a histogram matching problem. To speed up the recognition process, we adopt a multiresolution structure on the Motion Energy Histogram (MEH). To use the multiresolution structure more efficiently, we propose an automated uneven partitioning method based on the quadtree decomposition of the MEH, as sketched below. The computation time then depends only on the number of partitioned histogram bins, which is much smaller than in the AME method. Two applications, action recognition and gait classification, are conducted in the experiments to demonstrate the feasibility and validity of the proposed approach.
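
    A minimal sketch of the uneven partitioning idea, splitting a motion-energy histogram by quadtree decomposition until each block is nearly homogeneous, follows; the variance-based split criterion and threshold are illustrative assumptions, not necessarily the authors' rule.

        import numpy as np

        def quadtree_bins(H, y0, x0, h, w, thresh, min_size=2):
            """Recursively split a 2D motion-energy histogram into blocks whose
            internal variance is below thresh; each leaf becomes one uneven bin."""
            block = H[y0:y0 + h, x0:x0 + w]
            if h <= min_size or w <= min_size or block.var() <= thresh:
                return [(y0, x0, h, w, block.sum())]   # leaf: one histogram bin
            h2, w2 = h // 2, w // 2
            leaves = []
            for dy, dx, hh, ww in ((0, 0, h2, w2), (0, w2, h2, w - w2),
                                   (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)):
                leaves += quadtree_bins(H, y0 + dy, x0 + dx, hh, ww, thresh, min_size)
            return leaves

        H = np.random.rand(64, 64) ** 4                # stand-in for an MEH; peaky energy
        bins = quadtree_bins(H, 0, 0, 64, 64, thresh=0.01)
        print(len(bins), "bins instead of", H.size)    # matching cost scales with len(bins)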

  20. Multiresolution-fractal feature extraction and tumor detection: analytical modeling and implementation

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Parra, Carlos

    2003-11-01

    We propose formal analytical models for the identification of tumors in medical images, based on the hypothesis that tumors exhibit fractal (self-similar) growth behavior. The images of these tumors may therefore be characterized as fractional Brownian motion (fBm) processes with a fractal dimension (D) that is distinctly different from that of the image of the surrounding tissue. In order to extract features that delineate different tissues in an MR image, we study multiresolution signal decomposition and its relation to fBm. fBm has proven successful in modeling a variety of physical phenomena and non-stationary processes, such as medical images, that share essential properties such as self-similarity, scale invariance and fractal dimension. We have developed a theoretical framework that combines wavelet analysis with multiresolution fBm to compute D.
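
    The wavelet route to D can be sketched in a few lines: for fBm, the variance of the detail coefficients grows as 2^(j(2H+1)) across decomposition levels j, so a regression of log2 variance on level yields the Hurst exponent H and, for a 1D profile, D = 2 - H. The toy below, assuming ordinary Brownian motion (H = 0.5) as test data and PyWavelets for the decomposition, is illustrative rather than the authors' implementation.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        signal = np.cumsum(rng.normal(size=2**14))   # ordinary Brownian motion: H = 0.5

        coeffs = pywt.wavedec(signal, "db2", level=8)
        details = coeffs[1:][::-1]                   # reorder so index 0 is the finest level
        levels = np.arange(1, len(details) + 1)
        logvar = np.log2([d.var() for d in details])

        slope = np.polyfit(levels, logvar, 1)[0]     # log2 Var(d_j) grows as (2H+1) j
        H = (slope - 1) / 2
        print(f"estimated H = {H:.2f}, fractal dimension D = {2 - H:.2f}")

    On this test signal the estimate should come out near H = 0.5, i.e. D = 1.5; a tumor region would be flagged by a D value departing from that of surrounding tissue.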

  1. Development of a multi-resolution measurement system based on light sectioning method

    NASA Astrophysics Data System (ADS)

    Zhang, Weiguang; Zhao, Hong; Zhou, Xiang; Zhang, Lu

    2008-09-01

    With the rapid development of shape measurement techniques, the multi-resolution approach has become a valid way to enhance accuracy. However, some key techniques, such as the simultaneous calibration and data fusion of several sensors, still require further study. We developed a multi-resolution system based on the light sectioning method that has been applied successfully in many areas, for example to the blades of aviation engines; it can measure the shape of a blade at high speed and with high accuracy. The system is composed of four linear laser light sources, four or five cameras and three high-precision mechanical movement devices. Two cameras have relatively low magnification ratios and focus on the basin or back of the blade, where the radius of curvature is large. The other cameras have high magnification ratios and are fixed on the leading or trailing edge of the blade, where the radius of curvature is small. The system thus has a 360° measurement range and can carry out multi-resolution 3D shape measurement with greatly different camera magnifications. One measurement is completed when the blade, mounted on a mechanical movement device, moves up or down once. The model building and principle of the measurement system are also presented, along with an algorithm for the calibration and data fusion of several cameras that calculates the 3D coordinates of one section of the blade. The results show that the accuracy of the system is about 0.05 mm for a sectional circumradius of approximately 50 mm, and prove that the system is feasible and efficient.

  2. Combining nonlinear multiresolution system and vector quantization for still image compression

    SciTech Connect

    Wong, Y.

    1993-12-17

    Multiresolution systems are popular for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear: while these systems are rigorous, nonlinear features in the signals cannot be exploited within a single entity for compression, and linear filters are known to blur edges, so the low-resolution images are typically blurred and carry little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ), a tree-structured VQ that allows fast codebook design and encoding/decoding, is used to encode the detail images. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding; when the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
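
    A minimal sketch of the nonlinear pyramid itself (without the PCVQ coding or the error feedback) follows; the median filter size and nearest-neighbour resampling are illustrative choices, not the paper's exact filters.

        import numpy as np
        from scipy.ndimage import median_filter, zoom

        def median_pyramid(img, levels=4, size=3):
            """Nonlinear (median-filter) Laplacian pyramid: each detail image is
            the difference between a level and its median-smoothed, upsampled base."""
            details, base = [], img.astype(float)
            for _ in range(levels):
                low = median_filter(base, size=size)[::2, ::2]   # edge-preserving low-pass + decimate
                up = zoom(low, 2, order=0)[:base.shape[0], :base.shape[1]]
                details.append(base - up)                        # detail: small, localized at edges
                base = low
            return details, base

        def reconstruct(details, base):
            """Exact inverse of median_pyramid (no quantization applied)."""
            for d in reversed(details):
                base = zoom(base, 2, order=0)[:d.shape[0], :d.shape[1]] + d
            return base

    Because the stored details are exact residuals, reconstruction is lossless until the detail images are quantized, which is where the PCVQ stage of the paper enters.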

  3. Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.

    2016-06-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications, and the object-based image analysis (OBIA) approach has proven to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. The first step of OBIA, referred to as segmentation, is to delineate the objects of interest; determining an optimal segmentation is crucial for a good performance of the second stage of OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by the eCognition software for delineating greenhouses in WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality, based on the discrepancy between reference polygons and the corresponding image segments, was carried out to identify the optimal setting of the multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).

  4. Multi-resolution simulation of biomolecular systems: a review of methodological issues.

    PubMed

    Meier, Katharina; Choutko, Alexandra; Dolenc, Jozica; Eichenberger, Andreas P; Riniker, Sereina; van Gunsteren, Wilfred F

    2013-03-01

    Theoretical-computational modeling with an eye to explaining experimental observations in regard to a particular chemical phenomenon or process requires choices concerning the essential degrees of freedom and types of interactions, and the generation of a Boltzmann ensemble or trajectories of configurations. Depending on the degrees of freedom that are essential to the process of interest, for example electronic or nuclear versus atomic, molecular or supra-molecular, quantum- or classical-mechanical equations of motion are to be used. In multi-resolution simulation, various levels of resolution, for example electronic, atomic, supra-atomic or supra-molecular, are combined in one model. This allows an enhancement of computational efficiency while maintaining sufficient detail with respect to particular degrees of freedom. The basic challenges and choices with respect to multi-resolution modeling are reviewed and, as an illustration, the differential catalytic properties of two enzymes with similar folds but different substrates are explored with respect to their substrates using multi-resolution simulation at the electronic, atomic and supra-molecular levels of resolution. PMID:23417997

  5. Deconstructing a polygenetic landscape using LiDAR and multi-resolution analysis

    NASA Astrophysics Data System (ADS)

    Barrineau, Patrick; Dobreva, Iliyana; Bishop, Michael P.; Houser, Chris

    2016-04-01

    It is difficult to deconstruct a complex polygenetic landscape into distinct process-form regimes using digital elevation models (DEMs) and fundamental land-surface parameters. This study describes a multi-resolution analysis approach for extracting geomorphological information from a LiDAR-derived DEM over a stabilized aeolian landscape in south Texas that exhibits distinct process-form regimes associated with different stages in landscape evolution. Multi-resolution analysis was used to generate average altitudes using a Gaussian filter with a maximum radius of 1 km at 20 m intervals, resulting in 50 generated DEMs. This multi-resolution dataset was analyzed using Principal Components Analysis (PCA) to identify the dominant variance structure in the dataset. The first 4 principal components (PCs) account for 99.9% of the variation, and classification of the variance structure reveals distinct multi-scale topographic variation associated with different process-form regimes and evolutionary stages. Our results suggest that this approach can be used to generate quantitatively rigorous morphometric maps to guide field-based sedimentological and geophysical investigations, which tend to use purposive sampling techniques resulting in bias and error.
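
    A compact sketch of this pipeline, smoothing a DEM at a ladder of Gaussian scales and running PCA on the per-pixel multi-scale profiles, is shown below; the random array standing in for the DEM and the filter widths are placeholders for the study's LiDAR data and 20 m steps.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.decomposition import PCA

        dem = np.random.rand(256, 256)                 # placeholder for a LiDAR-derived DEM
        sigmas = np.arange(1, 51)                      # 50 smoothing scales, analogous to
                                                       # 20 m steps up to a 1 km radius

        stack = np.stack([gaussian_filter(dem, s) for s in sigmas], axis=-1)
        X = stack.reshape(-1, len(sigmas))             # one multi-scale altitude profile per pixel

        pca = PCA(n_components=4)                      # first few PCs carry ~all the variance
        scores = pca.fit_transform(X).reshape(256, 256, 4)
        print(pca.explained_variance_ratio_)           # maps of 'scores' can then be classified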

  6. Combined adjustment of multi-resolution satellite imagery for improved geo-positioning accuracy

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Wu, Bo; Zhu, Qing

    2016-04-01

    Due to the widespread availability of satellite imagery nowadays, it is common for regions to be covered by satellite imagery from multiple sources with multiple resolutions. This paper presents a combined adjustment approach to integrate multi-source, multi-resolution satellite imagery for improved geo-positioning accuracy without the use of ground control points (GCPs). Instead of using all the rational polynomial coefficients (RPCs) of the images for processing, only those dominating the geo-positioning accuracy are used in the combined adjustment. They, together with tie points identified in the images, are used as observations in the adjustment model. Proper weights are determined for each observation, and ridge parameters are determined for better convergence of the adjustment solution. The outputs from the combined adjustment are the improved dominating RPCs of the images, from which improved geo-positioning accuracy can be obtained. Experiments using ZY-3, SPOT-7 and Pleiades-1 imagery in Hong Kong, and Cartosat-1 and Worldview-1 imagery in Catalonia, Spain, demonstrate that the proposed method effectively improves the geo-positioning accuracy of satellite images. The combined adjustment approach enables the integration of multi-source, multi-resolution satellite imagery to generate more precise and consistent 3D spatial information, permitting the comparative and synergistic use of satellite images from multiple sources.

  7. Multi-resolution description of three-dimensional anthropometric data for design simplification.

    PubMed

    Niu, Jianwei; Li, Zhizhong; Salvendy, Gavriel

    2009-07-01

    Three-dimensional (3D) anthropometry can provide rich information for ergonomic product design with better safety and health considerations. To reduce the computational load and model complexity of product design using 3D anthropometric data, wavelet analysis is adopted in this paper to establish a multi-resolution mathematical description of 3D anthropometric data; a proper resolution can then be selected as the design reference according to the application purpose. To examine the approximation errors under different resolutions, 510 upper head, whole head and face samples of young Chinese men were analyzed. Descriptive statistics of the approximation errors under different resolutions are presented; these data can be used as a guide to resolution selection. The application of the multi-resolution method in product design is illustrated by two examples. RELEVANCE TO INDUSTRY: Multi-resolution description of 3D anthropometric data facilitates analysis and design with 3D anthropometric data to improve fitting comfort. The error data under different resolutions provide an important reference for resolution selection. PMID:18639863

  8. Characterization and in-vivo evaluation of a multi-resolution foveated laparoscope for minimally invasive surgery

    PubMed Central

    Qin, Yi; Hua, Hong; Nguyen, Mike

    2014-01-01

    The state-of-the-art laparoscope lacks the ability to capture high-magnification and wide-angle images simultaneously, which introduces challenges when both close-up views for detail and wide-angle overviews for orientation are required in clinical practice. A multi-resolution foveated laparoscope (MRFL), which can provide the surgeon with both high-magnification close-up and wide-angle images, was proposed to address the limitations of state-of-the-art surgical laparoscopes. In this paper, we present the overall system design from both clinical and optical-system perspectives, along with a set of experiments to characterize the optical performance of our prototype system, and describe our preliminary in-vivo evaluation of the prototype with a pig model. The experimental results demonstrate that at the optimum working distance of 120 mm, the high-magnification probe has a resolution of 6.35 lp/mm and images a surgical area of 53 × 40 mm2; the wide-angle probe covers a surgical area of 160 × 120 mm2 with a resolution of 2.83 lp/mm. The in-vivo evaluation demonstrates that the MRFL has great potential in clinical applications for improving the safety and efficiency of laparoscopic surgery. PMID:25136485

  9. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition and variational mode decomposition, are tested in the context of predicting next-day interest rate variation. In particular, the multiresolution techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction, with the particle swarm optimization technique adopted to optimize its initial weights. For comparison purposes, the autoregressive moving average model, the random walk process and the naive model are used as reference models. To show the feasibility of the presented hybrid models, which combine multiresolution analysis techniques with a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, the 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation and root mean-squared error. It is therefore advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast daily interest rate variations, as they provide good forecasting performance.

  10. A Web-Based Interactive Tool for Multi-Resolution 3D Models of a Maya Archaeological Site

    NASA Astrophysics Data System (ADS)

    Agugiaro, G.; Remondino, F.; Girardi, G.; von Schwerin, J.; Richards-Rissetto, H.; De Amicis, R.

    2011-09-01

    Continuous technological advances in surveying, computing and digital-content delivery are strongly contributing to a change in the way Cultural Heritage is "perceived": new tools and methodologies for documentation, reconstruction and research are being created to assist not only scholars, but also to reach more potential users (e.g. students and tourists) willing to access more detailed information about art history and archaeology. 3D computer-simulated models, sometimes set in virtual landscapes, offer for example the chance to explore possible hypothetical reconstructions, while on-line GIS resources can help interactive analyses of relationships and change over space and time. While for some research purposes a traditional 2D approach may suffice, this is not the case for more complex analyses concerning spatial and temporal features of architecture, for example the relationship of architecture and landscape, visibility studies etc. The project therefore aims at creating a tool, called "QueryArch3D", which enables web-based visualisation and queries of an interactive, multi-resolution 3D model in the framework of Cultural Heritage. More specifically, a complete Maya archaeological site, located in Copan (Honduras), has been chosen as a case study to test and demonstrate the platform's capabilities. Much of the site has been surveyed and modelled at different levels of detail (LoD) and the geometric model has been semantically segmented and integrated with attribute data gathered from several external data sources. The paper describes the characteristics of the research work, along with its implementation issues and the initial results of the developed prototype.

  11. Cost tradeoffs in consequence management at nuclear power plants: A risk based approach to setting optimal long-term interdiction limits for regulatory analyses

    SciTech Connect

    Mubayi, V.

    1995-05-01

    The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the "long-term interdiction limit," is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related, and the choice of an interdiction limit is, in effect, a trade-off between them. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit: the health effect costs increase as the limit is relaxed while the protective action costs decrease. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit, as sketched below. The calculation of such an optimal limit is presented for each of five US nuclear power plants which were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
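
    Schematically, the optimization reduces to finding the minimum of a total cost curve. The sketch below uses invented monotone cost shapes purely to show the mechanics; it does not reproduce NUREG-1150 values or the report's actual cost models.

        import numpy as np

        limits = np.linspace(0.1, 10.0, 200)         # candidate interdiction limits (dose units illustrative)

        # Invented monotone shapes: monetized health cost rises as the limit is
        # relaxed (more residual dose), while the cost of interdicting land falls.
        health_cost = 50.0 * limits                  # $M, increasing in the allowed dose
        protective_cost = 400.0 / (1.0 + limits)     # $M, decreasing in the allowed dose

        total = health_cost + protective_cost
        opt = limits[np.argmin(total)]               # optimum sits at the minimum of the sum
        print(f"optimal interdiction limit ~ {opt:.2f}")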

  12. W-transform method for feature-oriented multiresolution image retrieval

    SciTech Connect

    Kwong, M.K.; Lin, B.

    1995-07-01

    Image database management is important in the development of multimedia technology, and since an enormous number of digital images is likely to be generated within the next few decades to integrate computers, television, VCRs, cables, telephones and various imaging devices, effective image indexing and retrieval systems are urgently needed so that images can be easily organized, searched, transmitted and presented. Here, the authors present a local-feature-oriented image indexing and retrieval method based on Kwong and Tang's W-transform. Multiresolution histogram comparison is an effective method for content-based image indexing and retrieval; however, most recent approaches perform multiresolution analysis on whole images and do not exploit the local features present in them. Since the W-transform can handle images of arbitrary size with no periodicity assumptions, it provides a natural tool for analyzing local image features and building indexing systems based on such features. In this approach, the histograms of the local features of images are used in the indexing system. The system not only can retrieve images that are similar or identical to the query images but also can retrieve images that contain features specified in the query images, even if the retrieved images as a whole might be very different from the query images. The local-feature-oriented method also provides a speed advantage over the global multiresolution histogram comparison method. The feature-oriented approach is expected to be applicable in managing large-scale image systems such as video databases and medical image databases.

  13. A multi-resolution method for climate system modeling: application of spherical centroidal Voronoi tessellations

    SciTech Connect

    Ringler, Todd; Ju, Lili; Gunzburger, Max

    2008-01-01

    During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multiresolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical centroidal Voronoi tessellations (SCVTs) offer one potential path toward the development of robust, multiresolution climate system model components. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function; in each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean, and idealized examples are developed for ocean–ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing, and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow water equations spanning the entire surface of the sphere, and is used to elucidate both the potential benefits of this multiresolution method and the challenges ahead.
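
    The core of CVT generation is a density-weighted Lloyd iteration. The sketch below shows it on a planar unit square with Monte Carlo centroid estimates; a spherical version would additionally project the generators back onto the sphere after each update. The density function and point counts are illustrative assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(2)

        def density(p):
            """User-defined density: high near the centre -> small cells there."""
            return 1.0 + 9.0 * np.exp(-20 * ((p[:, 0] - 0.5)**2 + (p[:, 1] - 0.5)**2))

        gens = rng.random((100, 2))                  # initial generator points
        samples = rng.random((200_000, 2))           # Monte Carlo quadrature points
        w = density(samples)

        for _ in range(50):                          # Lloyd iteration
            owner = cKDTree(gens).query(samples)[1]  # Voronoi assignment of each sample
            for i in range(len(gens)):               # move each generator to its cell's
                m = owner == i                       # density-weighted mass centroid
                if m.any():
                    gens[i] = np.average(samples[m], axis=0, weights=w[m])
        # 'gens' now approximates a density-weighted CVT: resolution follows density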

  14. Proof-of-concept demonstration of a miniaturized three-channel multiresolution imaging system

    NASA Astrophysics Data System (ADS)

    Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Vervaeke, Michael; Van Erps, Jürgen; Thienpont, Hugo

    2014-05-01

    Multichannel imaging systems have several potential applications, such as multimedia, surveillance, medical imaging and machine vision, and have therefore been a hot research topic in recent years. Such imaging systems, inspired by natural compound eyes, have many channels, each covering only a portion of the total field-of-view (FOV) of the system. As a result, these systems provide a wide FOV while having a small volume and a low weight. Different approaches have been employed to realize a multichannel imaging system. We demonstrated that the different channels of the imaging system can be designed such that each has different imaging properties (angular resolution, FOV, focal length). Using optical ray-tracing software (CODE V), we designed a miniaturized multiresolution imaging system that contains three channels, each consisting of four aspherical lens surfaces fabricated from PMMA material through ultra-precision diamond tooling. The first channel possesses the finest angular resolution (0.0096°) and the narrowest FOV (7°), whereas the third channel has the widest FOV (80°) and the coarsest angular resolution (0.078°); the second channel has intermediate properties. Such a multiresolution capability allows different image processing algorithms to be implemented on different segments of an image sensor. This paper presents the experimental proof-of-concept demonstration of the imaging system using a commercial CMOS sensor and gives an in-depth analysis of the obtained results. Experimental images captured with the three channels are compared with the corresponding simulated images, and the experimental MTF of each channel has been calculated from captured images of a slanted-edge test target. This multichannel multiresolution approach opens the opportunity for low-cost compact imaging systems equipped with smart imaging capabilities.

  15. Proof-of-concept demonstration of a miniaturized multi-resolution refocusing imaging system using an electrically tunable lens

    NASA Astrophysics Data System (ADS)

    Smeesters, L.; Belay, G. Y.; Ottevaere, H.; Meuret, Y.; Vervaeke, Michael; Van Erps, J.; Thienpont, H.

    2014-09-01

    Refocusing multi-channel imaging systems are nowadays commercially available only in bulky and expensive designs. Compact wafer-level multi-channel imaging systems have until now been published only without refocusing mechanisms, since classical refocusing concepts could not be integrated in a miniaturized configuration. This lack of refocusing capability limits the depth-of-field of these imaging designs and therefore their application in practical systems. We designed and characterized a wafer-level two-channel multi-resolution refocusing imaging system, based on an electrically tunable liquid lens and a design that can be realized with wafer-level mass-manufacturing techniques. One wide field-of-view channel (2×40°) gives a general image of the surroundings with a lower angular resolution (0.078°), whereas the high angular resolution channel (0.0098°) provides a detailed image of a small region of interest with a much narrower field-of-view (2×7.57°). The latter channel contains the tunable lens and therefore the refocusing capability, and its performance was experimentally characterized in a proof-of-concept demonstrator. The experimental and simulated depth-of-field and resolving power correspond well. Moreover, we obtain a depth-of-field from 0.25 m to infinity, a significant improvement over current state-of-the-art static multi-channel imaging systems, whose depth-of-field extends from 9 m to infinity. Both the high-resolution and wide field-of-view imaging channels show diffraction-limited image quality. The designed wafer-level two-channel imaging system can form the basis of an advanced three-dimensional stacked image sensor, where different image processing algorithms can be applied simultaneously to the different images on the image sensor.

  16. Multiresolution Wavelet Analysis of Heartbeat Intervals Discriminates Healthy Patients from Those with Cardiac Pathology

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.

    1998-02-01

    We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeat intervals, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as belonging either to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of the presence of heart failure from the R-R intervals alone. Comparison is made with previous approaches, which have provided only statistically significant measures.
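
    A sketch of the discriminating statistic, the standard deviation ("width") of wavelet detail coefficients per dyadic scale of an R-R interval series, follows; the synthetic series and the Daubechies wavelet are assumptions for illustration, and real RR data would be needed to reproduce the 16-32 interval window reported above.

        import numpy as np
        import pywt

        def wavelet_widths(rr, wavelet="db2", max_level=6):
            """Std of detail coefficients per dyadic scale (2, 4, ..., 2**max_level beats)."""
            coeffs = pywt.wavedec(rr, wavelet, level=max_level)
            return {2**j: coeffs[-j].std() for j in range(1, max_level + 1)}

        rng = np.random.default_rng(3)
        rr = 0.8 + 0.05 * rng.standard_normal(4096)    # synthetic RR series (seconds)
        widths = wavelet_widths(rr)
        for m, w in sorted(widths.items()):            # m = 16 and m = 32 are the scales
            print(f"scale {m:3d} beats: width = {w:.4f}")   # reported as discriminating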

  17. Accelerated single-beam wavefront reconstruction techniques based on relaxation and multiresolution strategies.

    PubMed

    Falaggis, Konstantinos; Kozacki, Tomasz; Kujawinska, Malgorzata

    2013-05-15

    A previous Letter by Pedrini et al. [Opt. Lett. 30, 833 (2005)] proposed an iterative single-beam wavefront reconstruction algorithm that uses a sequence of interferograms recorded at different planes. In this Letter, the use of relaxation and multiresolution strategies is investigated in terms of accuracy and computational effort. It is shown that the convergence rate of the conventional iterative algorithm can be significantly improved with the use of relaxation techniques combined with a hierarchy of downsampled intensities that are used within a preconditioner. These techniques prove to be more robust, to achieve a higher accuracy, and to overcome the stagnation problem met in the iterative wavefront reconstruction. PMID:23938902

  18. A Conceptual Framework for SAHRA Integrated Multi-resolution Modeling in the Rio Grande Basin

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Gupta, H.; Springer, E.; Wagener, T.; Brookshire, D.; Duffy, C.

    2004-12-01

    The sustainable management of water resources in a river basin requires an integrated analysis of the social, economic, environmental and institutional dimensions of the problem. Numerical models are commonly used for integration of these dimensions and for communication of the analysis results to stakeholders and policy makers. The National Science Foundation Science and Technology Center for Sustainability of semi-Arid Hydrology and Riparian Areas (SAHRA) has been developing integrated multi-resolution models to assess impacts of climate variability and land use change on water resources in the Rio Grande Basin. These models not only couple natural systems such as surface and ground waters, but will also include engineering, economic and social components that may be involved in water resources decision-making processes. This presentation will describe the conceptual framework being developed by SAHRA to guide and focus the multiple modeling efforts and to assist the modeling team in planning, data collection and interpretation, communication, evaluation, etc. One of the major components of this conceptual framework is a Conceptual Site Model (CSM), which describes the basin and its environment based on existing knowledge and identifies what additional information must be collected to develop technically sound models at various resolutions. The initial CSM is based on analyses of basin profile information that has been collected, including a physical profile (e.g., topographic and vegetative features), a man-made facility profile (e.g., dams, diversions, and pumping stations), and a land use and ecological profile (e.g., demographics, natural habitats, and endangered species). Based on the initial CSM, a Conceptual Physical Model (CPM) is developed to guide and evaluate the selection of a model code (or numerical model) for each resolution to conduct simulations and predictions. A CPM identifies, conceptually, all the physical processes and engineering and socio

  19. A multiresolution wavelet analysis and Gaussian Markov random field algorithm for breast cancer screening of digital mammography

    SciTech Connect

    Lee, C.G.; Chen, C.H.

    1996-12-31

    In this paper a novel multiresolution wavelet analysis (MWA) and non-stationary Gaussian Markov random field (GMRF) technique is introduced for the identification of microcalcifications with high accuracy. The hierarchical multiresolution wavelet information, in conjunction with the contextual information of the images extracted from the GMRF, provides a highly efficient technique for microcalcification detection. A Bayesian learning paradigm, realized via the expectation maximization (EM) algorithm, was also introduced for edge detection or segmentation of larger lesions recorded on the mammograms. The effectiveness of the approach has been extensively tested with a number of mammographic images provided by a local hospital.

  20. Multiresolution modeling with a JMASS-JWARS HLA Federation

    NASA Astrophysics Data System (ADS)

    Prince, John D.; Painter, Ron D.; Pendell, Brian; Richert, Walt; Wolcott, Christopher

    2002-07-01

    CACI, Inc.-Federal has built, tested, and demonstrated the use of a JMASS-JWARS HLA Federation that supports multi-resolution modeling of a weapon system and its subsystems in a JMASS engineering and engagement model environment, while providing a realistic JWARS theater campaign-level synthetic battle space and operational context to assess the weapon system's value added and deployment/employment supportability in a multi-day, combined force-on-force scenario. Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission and theater/campaign measures of performance, measures of effectiveness and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between the models is both time-consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis; however, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting extremely large simulation. One viable alternative is to integrate the current hierarchical suite of simulation models using the DoD's High Level Architecture in order to support multi-resolution modeling. An HLA integration eliminates the extremely-large-model problem, provides a well-defined and manageable mixed-resolution simulation and minimizes VV&A issues.

  1. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time; in other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes: it may take months to generate one set of the targeted data, and because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale data sets. We discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique, as sketched below. We also discuss some of the shortcomings of our implementation and how to address them.
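
    As a schematic of the compress-then-query idea, the toy below thresholds the wavelet detail coefficients of a 1D signal and answers an approximate range-mean query from the compressed representation; the signal, wavelet and keep-fraction are illustrative, not AQSIM's actual scheme.

        import numpy as np
        import pywt

        rng = np.random.default_rng(4)
        n = 2**16
        field = np.sin(np.linspace(0, 30, n)) + 0.05 * rng.standard_normal(n)  # stand-in variable

        coeffs = pywt.wavedec(field, "db4", level=8)
        mags = np.concatenate([np.abs(c) for c in coeffs[1:]])
        thresh = np.quantile(mags, 0.99)               # keep roughly the largest 1% of details
        coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="hard") for c in coeffs[1:]]

        approx = pywt.waverec(coeffs, "db4")[:n]       # reconstruct from compressed coefficients
        # approximate vs. exact answer to a range-mean query
        print(approx[1000:5000].mean(), field[1000:5000].mean())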

  2. Concordance of Results from Randomized and Observational Analyses within the Same Study: A Re-Analysis of the Women’s Health Initiative Limited-Access Dataset

    PubMed Central

    Bolland, Mark J.; Grey, Andrew; Gamble, Greg D.; Reid, Ian R.

    2015-01-01

    Background Observational studies (OS) and randomized controlled trials (RCTs) often report discordant results. In the Women’s Health Initiative Calcium and Vitamin D (WHI CaD) RCT, women were randomly assigned to CaD or placebo, but were permitted to use personal calcium and vitamin D supplements, creating a unique opportunity to compare results from randomized and observational analyses within the same study. Methods WHI CaD was a 7-year RCT of 1 g calcium/400 IU vitamin D daily in 36,282 post-menopausal women. We assessed the effects of CaD on cardiovascular events, death, cancer and fracture in a randomized design (comparing CaD with placebo in the 43% of women not using personal calcium or vitamin D supplements) and in an observational design (comparing women in the placebo group, 44%, who used personal calcium and vitamin D supplements with non-users). Incidence was assessed using Cox proportional hazards models, and results from the two study designs were deemed concordant if the absolute difference in hazard ratios was ≤0.15. We also compared results from WHI CaD with those from the WHI Observational Study (WHI OS), which used similar methodology for analyses and recruited from the same population. Results In WHI CaD, for myocardial infarction and stroke, the results of unadjusted and 6/8 covariate-controlled observational analyses (age-adjusted, multivariate-adjusted, propensity-adjusted, propensity-matched) were not concordant with the randomized design results. For death, hip and total fracture, and colorectal and total cancer, unadjusted and covariate-controlled observational results were concordant with randomized results. For breast cancer, unadjusted and age-adjusted observational results were concordant with randomized results, but only 1/3 of the other covariate-controlled observational results were concordant with randomized results. Multivariate-adjusted results from WHI OS were concordant with randomized WHI CaD results for only 4/8 endpoints. Conclusions Results of

  3. Using sparse regularization for multiresolution tomography of the ionosphere

    NASA Astrophysics Data System (ADS)

    Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.

    2015-03-01

    Computerized ionospheric tomography (CIT) is a technique for reconstructing the state of the ionosphere, in terms of electron content, from a set of Slant Total Electron Content (STEC) measurements; it is usually formulated as an inverse problem. In this experiment, the measurements are taken from the phase of the GPS signal and are therefore affected by bias, so the STEC can be considered only in relative, not absolute, terms. Measurements are collected from receivers that are unevenly distributed in space, and together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes change continuously in time and space, which limits the accuracy with which CIT can resolve ionospheric structures and processes. Some inversion techniques are based on l2 minimization algorithms (i.e. Tikhonov regularization), and such a standard approach, using spherical harmonics, is implemented here as a reference against which to compare the new method. The proposed approach permits sparsity in the reconstruction coefficients through l1 minimization and wavelet basis functions, chosen for their compact-representation properties. The l1 minimization is selected because it can cope with an uneven distribution of observations by exploiting the localization property of wavelets. We also illustrate how the interfrequency biases on the STEC are calibrated within the inversion, and use this to evaluate the accuracy of the method. The technique is demonstrated in simulation, showing the advantage of l1 over l2 minimization in estimating the coefficients, particularly for uneven observation geometries and especially for multi-resolution CIT.
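
    A toy of the l1 route, ISTA (gradient step plus soft thresholding) on a random operator standing in for the ray-geometry matrix expressed in a wavelet basis, is sketched below; the sizes, sparsity level and regularization weight are illustrative assumptions, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(5)
        n, m = 256, 80                         # unknown coefficients vs. (few, uneven) STEC data
        x_true = np.zeros(n)
        x_true[rng.choice(n, 8, replace=False)] = rng.normal(0, 3, 8)  # sparse wavelet coefficients

        A = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in for the ray-geometry matrix
        y = A @ x_true + 0.01 * rng.normal(size=m)

        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
        lam, x = 0.05, np.zeros(n)
        for _ in range(500):                   # ISTA: gradient step, then soft threshold
            g = x - (A.T @ (A @ x - y)) / L
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

    With far fewer measurements than unknowns, the soft-thresholding step is what makes the solution well determined, which is the advantage over plain l2 regularization that the abstract reports.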

  4. Multi-resolution model-based traffic sign detection and tracking

    NASA Astrophysics Data System (ADS)

    Marinas, Javier; Salgado, Luis; Camplani, Massimo

    2012-06-01

    In this paper we propose an innovative computer vision approach to traffic sign detection that takes real-time operation constraints into account, establishing strategies to simplify the algorithm and speed up the process as much as possible. First, a set of candidates is generated by a color segmentation stage, followed by a region analysis strategy in which the spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, implemented with a Kalman filter for each potential candidate. Given the time constraints, efficiency is achieved in two ways. On the one hand, a multi-resolution strategy is adopted for segmentation: global operations are applied only to low-resolution images, and the resolution is increased to the maximum only when a potential road sign is being tracked. On the other hand, we take advantage of the expected spacing between traffic signs: tracking the objects of interest allows inhibition areas to be generated, i.e., areas where no new traffic signs are expected to appear because a sign already exists in the neighborhood. The proposed solution has been tested on real sequences in both urban areas and highways, and proved to achieve higher computational efficiency, especially as a result of the multi-resolution approach.

  5. Wavelet-based multiresolution with n-th-root-of-2 Subdivision

    SciTech Connect

    Linsen, L; Pascucci, V; Duchaineau, M A; Hamann, B; Joy, K I

    2004-12-16

    Multiresolution methods are a common technique used for dealing with large-scale data and representing it at multiple levels of detail. The authors present a multiresolution hierarchy construction based on ⁿ√2 subdivision, which has all the advantages of a regular data organization scheme while reducing the drawback of coarse granularity. The ⁿ√2-subdivision scheme only doubles the number of vertices in each subdivision step regardless of dimension n. They describe the construction of 2D, 3D, and 4D hierarchies representing surfaces, volume data, and time-varying volume data, respectively. The 4D approach supports spatial and temporal scalability. For high-quality data approximation on each level of detail, they use downsampling filters based on n-variate B-spline wavelets. They present a B-spline wavelet lifting scheme for ⁿ√2-subdivision steps to obtain small or narrow filters. Narrow filters support adaptive refinement and out-of-core data exploration techniques.

  6. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kishan, Harini; Seelamantula, Chandra Sekhar

    2015-09-01

    We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean-squared error (MSE). Unlike the MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed the SURE-optimal bilateral filter (SOBF), whose optimal parameters we selected using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely the SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and the SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance when compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.

  7. Classification of mammographic lesions based on Completed Local Binary Patterns and a multiresolution representation

    NASA Astrophysics Data System (ADS)

    Duarte, Y. A. S.; Nascimento, M. Z.; Oliveira, D. L. L.

    2014-03-01

    This paper presents a comparison of two methods for feature extraction from mammograms, based on the completed local binary pattern (CLBP) and the wavelet transform. In the first part, CLBP was applied to the digitized mammograms; in the second part, we applied CLBP to the sub-bands obtained from the wavelet multi-resolution representation of the mammograms. In this study, we thus evaluated the CLBP both on the image in the spatial domain and on the sub-bands obtained with the wavelet transform. The statistical technique of analysis of variance (ANOVA) was then used to reduce the number of features, and finally a Support Vector Machine (SVM) classifier was applied to the samples. The proposed methods were tested on 720 mammograms, of which 240 were diagnosed as normal, 240 as benign lesions and 240 as malignant lesions. The images were obtained randomly from the Digital Database for Screening Mammography (DDSM). The effectiveness of the system was evaluated using the area under the ROC curve (AUC). The experiments demonstrate that textural feature extraction from the multi-resolution representation was more relevant, with a value of AUC=1.0, whereas CLBP in the spatial domain resulted in AUC=0.89. The proposed method demonstrated promising results in the classification of different classes of mammographic lesions.

  8. A multi-resolution image analysis system for computer-assisted grading of neuroblastoma differentiation

    NASA Astrophysics Data System (ADS)

    Kong, Jun; Sertel, Olcay; Shimada, Hiroyuki; Boyer, Kim L.; Saltz, Joel H.; Gurcan, Metin N.

    2008-03-01

    Neuroblastic Tumor (NT) is one of the most commonly occurring tumors in children. Of all types of NTs, neuroblastoma is the most malignant tumor; it can be further categorized into undifferentiated (UD), poorly-differentiated (PD) and differentiating (D) types, in terms of the grade of pathological differentiation. Currently, pathologists determine the grade of differentiation by visual examination of tissue samples under the microscope. However, this process is subjective and, hence, may lead to intra- and inter-reader variability. In this paper, we propose a multi-resolution image analysis system that helps pathologists classify tissue samples according to their grades of differentiation. The inputs to this system are color images of haematoxylin and eosin (H&E) stained tissue samples. The complete image analysis system has five stages: segmentation, feature construction, feature extraction, classification and confidence evaluation. Due to the large number of input images, both parallel processing and multi-resolution analysis were carried out to reduce the execution time of the algorithm. Our training dataset consists of 387 image tiles of size 512×512 pixels from three whole-slide images. We tested the developed system with an independent set of 24 whole-slide images, eight from each grade. The developed system has an accuracy of 83.3% in correctly identifying the grade of differentiation, and it takes about two hours, on average, to process each whole-slide image.

  9. A multi-resolution approach to retrospectively-gated cardiac micro-CT reconstruction

    NASA Astrophysics Data System (ADS)

    Clark, D. P.; Johnson, G. A.; Badea, C. T.

    2014-03-01

    In preclinical research, micro-CT is commonly used to provide anatomical information; however, there is significant interest in using this technology to obtain functional information in cardiac studies. The fastest acquisition in 4D cardiac micro-CT imaging is achieved via retrospective gating, resulting in irregular angular projections after binning the projections into phases of the cardiac cycle. Under these conditions, analytical reconstruction algorithms, such as filtered back projection, suffer from streaking artifacts. Here, we propose a novel, multi-resolution, iterative reconstruction algorithm inspired by robust principal component analysis which prevents the introduction of streaking artifacts, while attempting to recover the highest temporal resolution supported by the projection data. The algorithm achieves these results through a unique combination of the split Bregman method and joint bilateral filtration. We illustrate the algorithm's performance using a contrast-enhanced, 2D slice through the MOBY mouse phantom and realistic projection acquisition and reconstruction parameters. Our results indicate that the algorithm is robust at undersampling levels of only 34 projections per cardiac phase and, therefore, has high potential for reducing both acquisition times and radiation dose. Another potential advantage of the multi-resolution scheme is the natural division of the reconstruction problem into a large number of independent sub-problems which can be solved in parallel. In future work, we will investigate the performance of this algorithm with retrospectively-gated cardiac micro-CT data.

  10. Long-range force and moment calculations in multiresolution simulations of molecular systems

    SciTech Connect

    Poursina, Mohammad; Anderson, Kurt S.

    2012-08-30

    Multiresolution simulations of molecular systems such as DNAs, RNAs, and proteins are implemented using models with different resolutions ranging from a fully atomistic model to coarse-grained molecules, or even to continuum level system descriptions. For such simulations, pairwise force calculation is a serious bottleneck which can impose a prohibitive amount of computational load on the simulation if not performed wisely. Herein, we approximate the resultant force due to long-range particle-body and body-body interactions applicable to multiresolution simulations. Since the resultant force does not necessarily act through the center of mass of the body, it creates a moment about the mass center. Although this potentially important torque is neglected in many coarse-grained models which only use particle dynamics to formulate the dynamics of the system, it should be calculated and used when coarse-grained simulations are performed in a multibody scheme. Herein, the approximation for this moment due to far-field particle-body and body-body interactions is also provided.

  11. GPU-based multi-resolution direct numerical simulation of multiphase flows with phase change

    NASA Astrophysics Data System (ADS)

    Forster, Christopher J.; Smith, Marc K.

    2014-11-01

    Nucleate pool boiling heat transfer can be enhanced in several ways to increase the critical heat flux (CHF) and delay the transition to film boiling. Changes to the heated surface geometry using open microchannels and direct forcing of the vapor bubbles using acoustic interfacial excitation are being investigated for their effects on the CHF. The numerical simulation of boiling with these effects lends itself to multi-resolution techniques due to the multiple length and time scales present during evolution of the bubbles from initial nucleation in the microchannels to forming a bubble cloud above the heated surface. To this end, a wavelet multi-resolution boiling simulation based on a parallel GPU architecture is being developed to solve the compressible Navier-Stokes equations using a dual time stepping method with preconditioning to alleviate the stiffness problems associated with the liquid phase. Interface tracking is handled by the level-set method with a prescribed interface thickness based on the maximum amount of local grid refinement desired, which can approach the physical interface thickness. Initial cases to validate the simulation will be demonstrated, including the rising bubble test problem.

  12. a Virtual Globe-Based Multi-Resolution TIN Surface Modeling and Visualization Method

    NASA Astrophysics Data System (ADS)

    Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei

    2016-06-01

    The integration and visualization of geospatial data on a virtual globe play a significant role in understanding and analyzing Earth surface processes. However, current virtual globes often sacrifice accuracy to ensure efficiency in global data processing and visualization, which devalues them for scientific applications. In this article, we propose a high-accuracy multi-resolution TIN pyramid construction and visualization method for virtual globes. Firstly, we introduce cartographic principles to formalize the level of detail (LOD) generation, so that the TIN model in each layer is controlled with a data quality standard. A maximum z-tolerance algorithm is then used to iteratively construct the multi-resolution TIN pyramid. Moreover, the extracted landscape features are incorporated into the TIN at each layer, thus preserving the topological structure of the terrain surface at different levels. In the proposed framework, a virtual node (VN)-based approach is developed to seamlessly partition and discretize each triangulation layer into tiles, which can be organized and stored with a global quad-tree index. Finally, real-time out-of-core spherical terrain rendering is realized on the virtual globe system VirtualWorld1.0. The experimental results showed that the proposed method achieves a high-fidelity terrain representation while producing high-quality underlying data that satisfy the demands of scientific analysis.

  13. Multi-resolution and wavelet representations for identifying signatures of disease.

    PubMed

    Sajda, Paul; Laine, Andrew; Zeevi, Yehoshua

    2002-01-01

    Identifying physiological and anatomical signatures of disease in signals and images is one of the fundamental challenges in biomedical engineering. The challenge is most apparent given that such signatures must be identified in spite of tremendous inter- and intra-subject variability and noise. Crucial for uncovering these signatures has been the development of methods that exploit general statistical properties of natural signals. The signal processing and applied mathematics communities have developed, in recent years, signal representations which take advantage of Gabor-type and wavelet-type functions that localize signal energy in a joint time-frequency and/or space-frequency domain. These techniques can be expressed as multi-resolution transformations, of which perhaps the best known is the wavelet transform. In this paper we review wavelets, and other related multi-resolution transforms, within the context of identifying signatures for disease. These transforms construct a general representation of signals which can be used in detection, diagnosis and treatment monitoring. We present several examples where these transforms are applied to biomedical signal and image processing. These include computer-aided diagnosis in mammography, real-time mosaicking of ophthalmic slit-lamp imagery, characterization of heart disease via ultrasound, prediction of epileptic seizures and signature analysis of the electroencephalogram, and reconstruction of positron emission tomography data. PMID:14646044
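
    For readers unfamiliar with the transforms being reviewed, the following sketch (using the PyWavelets library; the signal and all parameters are hypothetical) shows a discrete wavelet multi-resolution decomposition of a 1-D biomedical-style signal into one coarse approximation plus detail subbands:

      import numpy as np
      import pywt

      fs = 256                                    # sampling rate (Hz), hypothetical
      t = np.arange(0, 4, 1 / fs)
      x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)

      coeffs = pywt.wavedec(x, 'db4', level=5)    # [cA5, cD5, cD4, cD3, cD2, cD1]
      for i, d in enumerate(coeffs[1:], start=1):
          j = 5 - i + 1                           # decomposition level of this subband
          lo, hi = fs / 2 ** (j + 1), fs / 2 ** j
          print(f"detail level {j}: {d.size} coeffs, ~{lo:.1f}-{hi:.1f} Hz")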

  14. A hardware implementation of multiresolution filtering for broadband instrumentation

    SciTech Connect

    Kercel, S.W.; Dress, W.B.

    1995-12-01

    The authors have constructed a wavelet processing board that implements a 14-level wavelet transform. The board uses a high-speed, analog-to-digital (A/D) converter, a hardware queue, and five fixed-point digital signal processing (DSP) chips in a parallel pipeline architecture. All five processors are independently programmable. The board is designed as a general purpose engine for instrumentation applications requiring near real-time wavelet processing or multiscale filtering. The present application is the processing engine of a magnetic field monitor that covers 305 Hz through 5 MHz. The monitor is used for the detection of peak values of magnetic fields in nuclear power plants. This paper describes the design, development, simulation, and testing of the system. Specific issues include the conditioning of real-world signals for wavelet processing, practical trade-offs between queue length and filter length, selection of filter coefficients, simulation of a 14-octave filter bank, and limitations imposed by a fixed-point processor. Test results from the completed wavelet board are included.

  15. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales

    NASA Astrophysics Data System (ADS)

    Sangireddy, H.; Passalacqua, P.; Stark, C. P.

    2013-12-01

    Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability distributions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We show this correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges, within the GeoNet framework developed by Passalacqua et al. [2010], and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic processes.
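
    The topoindex referred to above is simple to compute per grid cell once slope and contributing-area rasters are available; a minimal sketch (hypothetical arrays, with a guard for flat cells) is:

      import numpy as np

      def topoindex(contributing_area, slope, eps=1e-6):
          # Topographic index log(area/slope) of Beven and Kirkby [1979],
          # evaluated per grid cell; eps guards against division by zero on flats.
          return np.log(contributing_area / np.maximum(slope, eps))

      # pdfs of the index at several DTM resolutions (hypothetical inputs):
      # hist, edges = np.histogram(topoindex(area_10m, slope_10m), bins=100, density=True)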

  16. Inferring Species Richness and Turnover by Statistical Multiresolution Texture Analysis of Satellite Imagery

    PubMed Central

    Convertino, Matteo; Mangoubi, Rami S.; Linkov, Igor; Lowry, Nathan C.; Desai, Mukund

    2012-01-01

    Background The quantification of species-richness and species-turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel-intensity-based Shannon entropy for estimating species-richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. Methodology/Principal Findings We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected, and its parameters estimated, or a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdfs via the Kullback-Leibler (KL) divergence. Species turnover, or diversity, is estimated using both this KL divergence and the difference in Shannon entropy. Additionally, we predict species richness, or diversity, based on the
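
    The paper's parametric route models each subband with a generalized Gaussian and evaluates the KL divergence in closed form; the non-parametric histogram alternative it mentions is easy to sketch (helper names are illustrative):

      import numpy as np

      def shannon_entropy(pixels, bins=256):
          # Entropy (bits) of the pixel-intensity distribution, the proposed
          # species-richness proxy.
          counts, _ = np.histogram(pixels, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def kl_divergence(p, q, eps=1e-12):
          # KL(p || q) between two normalized histograms, e.g. of wavelet
          # subband coefficients from two seasons' imagery.
          p = np.asarray(p, float) + eps
          q = np.asarray(q, float) + eps
          p, q = p / p.sum(), q / q.sum()
          return float((p * np.log(p / q)).sum())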

  17. Multiresolution pattern recognition of small volcanos in Magellan data

    NASA Technical Reports Server (NTRS)

    Smyth, P.; Anderson, C. H.; Aubele, J. C.; Crumpler, L. S.

    1992-01-01

    The Magellan data is a treasure-trove for scientific analysis of venusian geology, providing far more detail than was previously available from Pioneer Venus, Venera 15/16, or ground-based radar observations. However, at this point, planetary scientists are being overwhelmed by the sheer quantities of data collected--data analysis technology has not kept pace with our ability to collect and store it. In particular, 'small-shield' volcanos (less than 20 km in diameter) are the most abundant visible geologic feature on the planet. It is estimated, based on extrapolating from previous studies and knowledge of the underlying geologic processes, that there should be on the order of 10^5 to 10^6 of these volcanos visible in the Magellan data. Identifying and studying these volcanos is fundamental to a proper understanding of the geologic evolution of Venus. However, locating and parameterizing them in a manual manner is very time-consuming. Hence, we have undertaken the development of techniques to partially automate this task. The goal is not the unrealistic one of total automation, but rather the development of a useful tool to aid the project scientists. The primary constraints for this particular problem are as follows: (1) the method must be reasonably robust; and (2) the method must be reasonably fast. Unlike most geological features, the small volcanos of Venus can be ascribed to a basic process that produces features with a short list of readily defined characteristics differing significantly from other surface features on Venus. For pattern recognition purposes the relevant criteria include the following: (1) a circular planimetric outline; (2) known diameter frequency distribution from preliminary studies; (3) a limited number of basic morphological shapes; and (4) the common occurrence of a single, circular summit pit at the center of the edifice.

  18. Proteomic and Transcriptomic Analyses of “Candidatus Pelagibacter ubique” Describe the First PII-Independent Response to Nitrogen Limitation in a Free-Living Alphaproteobacterium

    PubMed Central

    Smith, Daniel P.; Thrash, J. Cameron; Nicora, Carrie D.; Lipton, Mary S.; Burnum-Johnson, Kristin E.; Carini, Paul; Smith, Richard D.; Giovannoni, Stephen J.

    2013-01-01

    ABSTRACT Nitrogen is one of the major nutrients limiting microbial productivity in the ocean, and as a result, most marine microorganisms have evolved systems for responding to nitrogen stress. The highly abundant alphaproteobacterium “Candidatus Pelagibacter ubique,” a cultured member of the order Pelagibacterales (SAR11), lacks the canonical GlnB, GlnD, GlnK, and NtrB/NtrC genes for regulating nitrogen assimilation, raising questions about how these organisms respond to nitrogen limitation. A survey of 266 Alphaproteobacteria genomes found these five regulatory genes nearly universally conserved, absent only in intracellular parasites and members of the order Pelagibacterales, including “Ca. Pelagibacter ubique.” Global differences in mRNA and protein expression between nitrogen-limited and nitrogen-replete cultures were measured to identify nitrogen stress responses in “Ca. Pelagibacter ubique” strain HTCC1062. Transporters for ammonium (AmtB), taurine (TauA), amino acids (YhdW), and opines (OccT) were all elevated in nitrogen-limited cells, indicating that they devote increased resources to the assimilation of nitrogenous organic compounds. Enzymes for assimilating amine into glutamine (GlnA), glutamate (GltBD), and glycine (AspC) were similarly upregulated. Differential regulation of the transcriptional regulator NtrX in the two-component signaling system NtrY/NtrX was also observed, implicating it in control of the nitrogen starvation response. Comparisons of the transcriptome and proteome supported previous observations of uncoupling between transcription and translation in nutrient-deprived “Ca. Pelagibacter ubique” cells. Overall, these data reveal a streamlined, PII-independent response to nitrogen stress in “Ca. Pelagibacter ubique,” and likely other Pelagibacterales, and show that they respond to nitrogen stress by allocating more resources to the assimilation of nitrogen-rich organic compounds. PMID:24281717

  19. Origin of mobility enhancement by chemical treatment of gate-dielectric surface in organic thin-film transistors: Quantitative analyses of various limiting factors in pentacene thin films

    NASA Astrophysics Data System (ADS)

    Matsubara, R.; Sakai, Y.; Nomura, T.; Sakai, M.; Kudo, K.; Majima, Y.; Knipp, D.; Nakamura, M.

    2015-11-01

    For the better performance of organic thin-film transistors (TFTs), gate-insulator surface treatments are often applied. However, the origin of the mobility increase has not been well understood because the mobility-limiting factors have not been compared quantitatively. In this work, we clarify the influence of gate-insulator surface treatments in pentacene thin-film transistors on the limiting factors of mobility, i.e., the size of the crystal-growth domains, the crystallite size, the HOMO-band-edge fluctuation, and the carrier transport barrier at domain boundaries. We quantitatively investigated these factors for pentacene TFTs with bare, hexamethyldisilazane-treated, and polyimide-coated SiO2 layers as gate dielectrics. By applying these surface treatments, the size of the crystal-growth domains increases, but both the crystallite size and the HOMO-band-edge fluctuation remain unchanged. Analyzing the experimental results, we also show that the barrier height at the boundary between crystal-growth domains is not sensitive to the treatments. The results imply that the essential increase in mobility brought about by these surface treatments is due only to the increase in the size of the crystal-growth domains, or equivalently the decrease in the number of energy barriers at domain boundaries in the TFT channel.

  20. Testing the limits of micro-scale analyses of Si stable isotopes by femtosecond laser ablation multicollector inductively coupled plasma mass spectrometry with application to rock weathering

    NASA Astrophysics Data System (ADS)

    Schuessler, Jan A.; von Blanckenburg, Friedhelm

    2014-08-01

    An analytical protocol for accurate in-situ Si stable isotope analysis has been established on a new second-generation custom-built femtosecond laser ablation system. The laser was coupled to a multicollector inductively coupled plasma mass spectrometer (fsLA-MC-ICP-MS). We investigated the influence of laser parameters such as spot size, laser focussing, energy density and repetition rate, and ICP-MS operating conditions such as ICP mass load, spectral and non-spectral matrix effects, signal intensities, and data processing on the precision and accuracy of Si isotope ratios. We found that stable and reproducible ICP conditions were obtained by using He as aerosol carrier gas mixed with Ar/H2O before entering the plasma. Precise δ29Si and δ30Si values (better than ± 0.23‰, 2SD) can be obtained if the area ablated is at least 50 × 50 μm, or, alternatively, for the analysis of geometric features down to the width of the laser spot (about 20 μm) if an equivalent area is covered. Larger areas can be analysed by rastering the laser beam, whereas small single-spot analyses reduce the attainable precision of δ30Si to ca. ± 0.6‰, 2SD, for < 30 μm diameter spots. It was found that focussing the laser beam beneath the sample surface with energy densities between 1 and 3.8 J/cm2 yields optimal analytical conditions for all materials investigated here. Using pure quartz (NIST 8546, a.k.a. NBS-28) as the measurement standard for calibration (standard-sample bracketing) resulted in accurate and precise data for international reference materials and samples covering a wide range of chemical compositions (Si single crystal IRMM-017, basaltic glasses KL2-G, BHVO-2G and BHVO-2, andesitic glass ML3B-G, rhyolitic glass ATHO-G, diopside glass JER, soda-lime glasses NIST SRM 612 and 610, San Carlos olivine). No composition-dependent matrix effect was discernible within the uncertainties of the method. The method was applied to investigate the Si isotope signature of rock weathering at
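
    The delta values quoted above follow the standard per-mil definition with standard-sample bracketing against NBS-28; a minimal sketch (instrument-specific corrections omitted) is:

      def delta_si(ratio_sample, ratio_std_before, ratio_std_after):
          # delta-29Si or delta-30Si in per mil: the sample's 29Si/28Si (or
          # 30Si/28Si) ratio referenced to the mean of the two bracketing
          # NBS-28 standard measurements.
          ratio_std = 0.5 * (ratio_std_before + ratio_std_after)
          return (ratio_sample / ratio_std - 1.0) * 1000.0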

  1. Experimental and numerical analyses of high voltage 4H-SiC junction barrier Schottky rectifiers with linearly graded field limiting ring

    NASA Astrophysics Data System (ADS)

    Wang, Xiang-Dong; Deng, Xiao-Chuan; Wang, Yong-Wei; Wang, Yong; Wen, Yi; Zhang, Bo

    2014-05-01

    This paper describes the successful fabrication of 4H-SiC junction barrier Schottky (JBS) rectifiers with a linearly graded field limiting ring (LG-FLR). Linearly variable ring spacings for the FLR termination are applied to improve the blocking voltage by reducing the peak surface electric field at the edge termination region; the termination acts like a variable lateral doping profile, resulting in a gradual field distribution. The experimental results demonstrate a breakdown voltage of 5 kV at a reverse leakage current density of 2 mA/cm2 (about 80% of the theoretical value). Detailed numerical simulations show that the proposed termination structure provides a more uniform electric field profile than the conventional FLR termination, which is responsible for a 45% improvement in the reverse blocking voltage despite a 3.7% longer total termination length.

  2. IMFIT Integrated Modeling Applications Supporting Experimental Analysis: Multiple Time-Slice Kinetic EFIT Reconstructions, MHD Stability Limits, and Energy and Momentum Flux Analyses

    NASA Astrophysics Data System (ADS)

    Collier, A.; Lao, L. L.; Abla, G.; Chu, M. S.; Prater, R.; Smith, S. P.; St. John, H. E.; Guo, W.; Li, G.; Pan, C.; Ren, Q.; Park, J. M.; Bisai, N.; Srinivasan, R.; Sun, A. P.; Liu, Y.; Worrall, M.

    2010-11-01

    This presentation summarizes several useful applications provided by the IMFIT integrated modeling framework to support DIII-D and EAST research. IMFIT is based on Python and utilizes a modular task-flow architecture with a central manager and extensive GUI support to coordinate tasks among component modules. The kinetic-EFIT application allows multiple time-slice reconstructions by fetching pressure profile data directly from MDS+ or from ONETWO or PTRANSP. The stability application analyzes a given reference equilibrium for stability limits by performing parameter perturbation studies with MHD codes such as DCON, GATO, ELITE, or PEST3. The transport task includes construction of experimental energy and momentum fluxes from profile analysis and comparison against theoretical models such as MMM95, GLF23, or TGLF.

  3. DTMs: discussion of a new multi-resolution function based model

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Biagi, L.; Zamboni, G.

    2012-04-01

    The diffusion of new technologies based on WebGIS and virtual globes allows the distribution of DTMs and their three-dimensional representation to the Web user community. In the Web distribution of geographical information, the database storage size represents a critical point: given a specific interest area, typically the server needs to perform some preprocessing, and the data have to be sent to the client, which applies some additional processing. The efficiency of all these actions is crucial to guarantee near real-time availability of the information. DTMs are obtained from the raw observations by some sampling or interpolation technique and typically are stored and distributed as Triangular Irregular Networks (TINs) or regular grids. A new approach to store and transmit DTMs has been studied and implemented. The basic idea is to use multi-resolution bilinear spline functions to interpolate the raw observations and to represent the terrain. More in detail, the algorithm performs the following actions. 1) The spatial distribution of the raw observations is investigated. In areas where few data are available, few levels of splines are activated, while more levels are activated where the raw observations are denser: each new level corresponds to a halving of the spline support with respect to the previous level. 2) After the selection of the spline functions to be activated, the relevant coefficients are estimated by interpolating the raw observations. The interpolation is computed by batch least squares. 3) Finally, the estimated coefficients of the splines are stored. The algorithm guarantees a local resolution consistent with the data density, exploiting all the available information provided by the sample. The model can be defined "function based" because the coefficients of a given function are stored instead of a set of heights: in particular, the resolution level, the position and the coefficient of each activated spline function are stored by the server and are
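
    A 1-D analogue of steps 1-2 makes the estimation concrete: hat (linear spline) bases whose support halves at each level are assembled into a design matrix and fitted to the raw heights by batch least squares. The sketch below is illustrative only: it activates all levels everywhere rather than following the paper's data-density rule, and uses linear rather than bilinear splines.

      import numpy as np

      def hat(u):
          # Linear B-spline (hat) basis with support [-1, 1].
          return np.maximum(0.0, 1.0 - np.abs(u))

      def fit_multilevel(x, z, levels=4):
          # Least-squares coefficients of a multi-level linear-spline model of
          # heights z at positions x in [0, 1]; each level halves the support.
          cols, meta = [], []
          for lvl in range(levels):
              n = 2 ** lvl + 1                  # knots at spacing 2**-lvl
              h = 1.0 / (n - 1)
              for k in np.linspace(0.0, 1.0, n):
                  cols.append(hat((x - k) / h))
                  meta.append((lvl, k))
          A = np.column_stack(cols)
          coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
          return coeffs, meta                   # store (level, position, coefficient)

      # coeffs, meta = fit_multilevel(x_obs, z_obs, levels=4)   # hypothetical data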

  4. Comparison of and limits of accuracy for statistical analyses of vibrational and electronic circular dichroism spectra in terms of correlations to and predictions of protein secondary structure.

    PubMed Central

    Pancoska, P.; Bitto, E.; Janota, V.; Urbanova, M.; Gupta, V. P.; Keiderling, T. A.

    1995-01-01

    This work provides a systematic comparison of vibrational CD (VCD) and electronic CD (ECD) methods for spectral prediction of secondary structure. The VCD and ECD data are simplified to a small set of spectral parameters using the principal component method of factor analysis (PC/FA). Regression fits of these parameters are made to the X-ray-determined fractional components (FC) of secondary structure. Predictive capability is determined by computing structures for proteins sequentially left out of the regression. All possible combinations of PC/FA spectral parameters (coefficients) were used to form a full set of restricted multiple regressions with the FC values, both independently for each spectral data set as well as for the two VCD sets and all the data grouped together. The complete search over all possible combinations of spectral parameters for different types of spectral data is a new feature of this study, and the focus on prediction is the strength of this approach. The PC/FA method was found to be stable in detail to expansion of the training set. Coupling amide II to amide I' parameters reduced the standard deviations of the VCD regression relationships, and combining VCD and ECD data led to the best fits. Prediction results had a minimum error when dependent on relatively few spectral coefficients. Such a limited dependence on spectral variation is the key finding of this work, which has ramifications for previous studies as well as suggests future directions for spectral analysis of structure. The best ECD prediction for helix and sheet uses only one parameter, the coefficient of the first subspectrum. With VCD, the best predictions sample coefficients of both the amide I' and II bands, but error is optimized using only a few coefficients. In this respect, ECD is more accurate than VCD for alpha-helix, and the combined VCD (amide I' + II) predicts the beta-sheet component better than does ECD. Combining VCD and ECD data sets yields exceptionally good
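
    The PC/FA-plus-regression machinery with leave-one-out prediction can be sketched compactly. The version below uses hypothetical names, and for brevity computes the factor analysis once via an SVD, whereas a strict leave-one-out would refit it without the held-out protein.

      import numpy as np

      def loo_predictions(S, F, n_coeffs=3):
          # S: (n_proteins, n_points) spectra; F: (n_proteins,) fractional
          # content of one secondary-structure class from X-ray data.
          Sc = S - S.mean(axis=0)
          U, s, _vt = np.linalg.svd(Sc, full_matrices=False)
          C = U[:, :n_coeffs] * s[:n_coeffs]       # PC/FA spectral coefficients
          preds = np.empty(len(F))
          for i in range(len(F)):
              keep = np.arange(len(F)) != i        # leave protein i out
              X = np.column_stack([np.ones(keep.sum()), C[keep]])
              beta, *_ = np.linalg.lstsq(X, F[keep], rcond=None)
              preds[i] = np.concatenate(([1.0], C[i])) @ beta
          return preds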

  5. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    PubMed

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting lung fields in a chest radiograph is essential for automatically analyzing an image. We present an unsupervised method based on a multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour. The final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergo additional extensive tests before a treatment plan is finalized. PMID:26530048
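
    Once the lung fields are segmented, the CTR estimate reduces to a ratio of widths. The sketch below is one crude way to read both widths off a binary lung-field mask; it is a simplification for illustration, since clinical CTR uses the maximal cardiac and thoracic diameters, which need not share a row.

      import numpy as np

      def cardiothoracic_ratio(lung_mask):
          # lung_mask: 2-D boolean array, True inside the segmented lung fields.
          best_outer, ctr = 0, None
          for r in np.where(lung_mask.any(axis=1))[0]:
              cols = np.where(lung_mask[r])[0]
              outer = cols.max() - cols.min()      # thoracic span at this row
              if outer > best_outer and cols.size > 1:
                  inner = np.diff(cols).max()      # widest gap ~ cardiac width
                  best_outer, ctr = outer, inner / outer
          return ctr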

  6. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  7. Coherent Vortex Simulation (CVS) of compressible turbulent mixing layers using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Roussel, Olivier; Farge, Marie

    2007-11-01

    Coherent Vortex Simulation is based on the wavelet decomposition of the flow into coherent and incoherent components. An adaptive multiresolution method using second order finite volumes with explicit time discretization, a 2-4 MacCormack scheme, allows an efficient computation of the coherent flow on a dynamically adapted grid. Turbulent dissipation is modeled by neglecting the influence of the incoherent background. We present CVS computations of a three-dimensional compressible time-developing mixing layer. We show the speed-up in CPU time with respect to DNS and the memory reduction obtained thanks to dynamical octree data structures. The impact of different filtering strategies is discussed, and it is found that isotropic wavelet thresholding of the Favre-averaged gradient of the momentum yields the most effective results.

  8. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  9. Coherent Vortex Simulation of weakly compressible turbulent mixing layers using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Roussel, Olivier; Schneider, Kai

    2010-03-01

    An adaptive multiresolution method based on a second-order finite volume discretization is presented for solving the three-dimensional compressible Navier-Stokes equations in Cartesian geometry. The explicit time discretization is of second order, and for flux evaluation a 2-4 MacCormack scheme is used. Coherent Vortex Simulations (CVS) are performed by decomposing the flow variables into coherent and incoherent contributions. The coherent part is computed deterministically on a locally refined grid using the adaptive multiresolution method, while the influence of the incoherent part is neglected to model turbulent dissipation. The computational efficiency of this approach in terms of memory and CPU time compression is illustrated for turbulent mixing layers in the weakly compressible regime and for Reynolds numbers, based on the mixing layer thickness, between 50 and 200. Comparisons with direct numerical simulations allow an assessment of the precision and efficiency of CVS.
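
    The coherent/incoherent split at the heart of CVS amounts to thresholding orthogonal wavelet coefficients. A minimal PyWavelets sketch follows, using Donoho's universal threshold in a single pass; CVS proper iterates the threshold on the variance of the incoherent part, and this example on a generic array stands in for the 3-D flow fields.

      import numpy as np
      import pywt

      def cvs_split(u, wavelet='db4', level=4):
          coeffs = pywt.wavedecn(u, wavelet, level=level)
          arr, slices = pywt.coeffs_to_array(coeffs)
          sigma = np.median(np.abs(arr)) / 0.6745        # robust noise-scale estimate
          thr = sigma * np.sqrt(2.0 * np.log(u.size))    # universal threshold
          mask = np.abs(arr) > thr
          mask[slices[0]] = True                         # keep the coarse approximation
          coherent_coeffs = pywt.array_to_coeffs(np.where(mask, arr, 0.0),
                                                 slices, output_format='wavedecn')
          coherent = pywt.waverecn(coherent_coeffs, wavelet)
          coherent = coherent[tuple(slice(n) for n in u.shape)]
          return coherent, u - coherent                  # coherent, incoherent parts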

  10. MULTI-RESOLUTION STATISTICAL ANALYSIS ON GRAPH STRUCTURED DATA IN NEUROIMAGING

    PubMed Central

    Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Adluru, Nagesh; Bendlin, Barbara B.; Johnson, Sterling C.

    2016-01-01

    Statistical data analysis plays a major role in discovering structural and functional imaging phenotypes for mental disorders such as Alzheimer’s disease (AD). The goal here is to identify, ideally early on, which regions in the brain show abnormal variations with a disorder. To make the method more sensitive, we rely on a multi-resolutional perspective of the given data. Since the underlying imaging data (such as cortical surfaces and connectomes) are naturally represented in the form of weighted graphs which lie in a non-Euclidean space, we introduce recent work from the harmonics literature to derive an effective multi-scale descriptor using wavelets on graphs that characterize the local context at each data point. Using this descriptor, we demonstrate experiments where we identify significant differences between AD and control populations using cortical surface data and tractography derived graphs/networks.
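
    The paper's descriptor is built from spectral graph wavelets; a closely related multi-scale spectral construction, the heat-kernel signature, is easy to sketch and conveys the same idea of characterizing the local context at each node across scales (note the swap: this is not the authors' wavelet descriptor, and the dense eigendecomposition shown here is for clarity only).

      import numpy as np

      def heat_kernel_signature(W, taus=(0.5, 1.0, 2.0, 4.0)):
          # W: (n, n) symmetric weighted adjacency matrix of the brain graph.
          L = np.diag(W.sum(axis=1)) - W                 # combinatorial Laplacian
          lam, U = np.linalg.eigh(L)
          # Descriptor at node i, scale t: sum_k exp(-t * lam_k) * U[i, k]**2
          return np.stack([(np.exp(-t * lam) * U**2).sum(axis=1) for t in taus],
                          axis=1)                        # shape (n, len(taus))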

  11. Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation

    NASA Technical Reports Server (NTRS)

    Drewry, Darren T.; Reynolds, Jr., Paul F.; Emanuel, William R.

    2006-01-01

    The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.

  12. Multiresolution Analysis Using Wavelet, Ridgelet, and Curvelet Transforms for Medical Image Segmentation

    PubMed Central

    AlZubi, Shadi; Islam, Naveed; Abbod, Maysam

    2011-01-01

    The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROIs) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. It is a particularly challenging task to classify cancers in human organs in scanner output using shape or gray-level information; organ shapes change through different slices in a medical stack, and the gray-level intensities overlap in soft tissues. The curvelet transform is a new extension of the wavelet and ridgelet transforms which aims to deal with interesting phenomena occurring along curves. Curvelet transforms have been tested on medical data sets, and results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988

  13. Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation

    NASA Technical Reports Server (NTRS)

    Lacaze, Alberto; Meystel, Michael; Meystel, Alex

    1994-01-01

    This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR) which presents the ASR as a 'baby' -- that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experiences of actions within a particular environment (we will call it an Astro-baby). The learning techniques are rooted in the recursive algorithm for inductive generation of nested schemata molded from processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. This system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experiences, while still maintaining minimal computational complexity thanks to the system's multiresolutional nature.

  14. Multiresolution analysis using wavelet, ridgelet, and curvelet transforms for medical image segmentation.

    PubMed

    Alzubi, Shadi; Islam, Naveed; Abbod, Maysam

    2011-01-01

    The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROIs) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. It is a particularly challenging task to classify cancers in human organs in scanner output using shape or gray-level information; organ shapes change through different slices in a medical stack, and the gray-level intensities overlap in soft tissues. The curvelet transform is a new extension of the wavelet and ridgelet transforms which aims to deal with interesting phenomena occurring along curves. Curvelet transforms have been tested on medical data sets, and results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise. PMID:21960988

  15. Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals.

    PubMed

    Verma, Gyanendra K; Tiwary, Uma Shanker

    2014-11-15

    The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are: electroencephalogram (EEG) (32 channels) and peripheral (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)), as given in the DEAP database. We have discussed the theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach and (iii) the dimensional approach, and proposed a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions was done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. Experiments were performed to classify different emotions using four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for the SVM, MLP, KNN and MMC classifiers, respectively. The best accuracy is for 'Depressing', with 85.46% using SVM. The 32 EEG channels are considered as independent modes, and features from each channel are considered with equal importance. Some of the channel data may be correlated, but they may also contain supplementary information. In comparison with the results given by others, the high accuracy of 85% with 13 emotions and 32 subjects from our proposed method clearly proves the potential of our multimodal fusion approach. PMID:24269801
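
    As an illustration of the DWT-based pipeline described above (the feature set and parameters are hypothetical, not the authors' exact configuration), per-subband energies and entropies from one EEG channel can feed a standard classifier:

      import numpy as np
      import pywt
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def dwt_features(sig, wavelet='db4', level=4):
          # Log-energy and entropy of each wavelet subband of a 1-D signal.
          feats = []
          for c in pywt.wavedec(sig, wavelet, level=level):
              e = c**2
              p = e / (e.sum() + 1e-12)
              feats += [np.log(e.sum() + 1e-12), -(p * np.log(p + 1e-12)).sum()]
          return np.array(feats)

      # X: (n_trials, n_samples) single-channel EEG; y: emotion labels (hypothetical)
      # F = np.vstack([dwt_features(x) for x in X])
      # print(cross_val_score(SVC(kernel='rbf'), F, y, cv=5).mean())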

  16. Accessing the Global Multi-Resolution Topography (GMRT) Synthesis through the GMRT MapTool

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Barg, B.; Carbotte, S. M.

    2014-12-01

    The Global Multi-Resolution Topography (GMRT) Synthesis (http://gmrt.marine-geo.org) is a dynamically maintained global multi-resolution synthesis of terrestrial and seafloor elevation data maintained as both images and gridded data values as part of the IEDA Marine Geoscience Data System. GMRT seamlessly brings together a variety of elevation sources, and includes ship-based multibeam sonar collected throughout the global oceans that is processed by the GMRT Team and is gridded to 100-m resolution. New versions of GMRT are released twice each year, typically adding processed multibeam data from ~80 cruises per year. GMRT grids and images can be accessed through a variety of tools and interfaces including GeoMapApp (http://www.geomapapp.org) the GMRT MapTool (http://www.marine-geo.org/tools/maps_grids.php), and images can also be accessed through a Web Map Service. We have recently launched a new version of our web-based GMRT MapTool interface, which provides custom access to the gridded data values in standard formats including GeoTIFF, ArcASCII and GMT NetCDF. Several resolution options are provided for these gridded data, and corresponding images can also be generated. Coupled with this new interface is an XML metadata service that provides attribution information and detailed metadata about source data components (cruise metadata, sensor metadata, and full list of source data files) for any region of interest. Metadata from the attribution service is returned to the user along with the requested data, and is also combined with the data itself in new Bathymetry Attributed Grid (BAG) formatted files.

  17. Exploring a multi-resolution modeling approach within the shallow-water equations

    SciTech Connect

    Ringler, Todd; Jacobsen, Doug; Gunzburger, Max; Ju, Lili; Duda, Michael; Skamarock, William

    2011-01-01

    The ability to solve the global shallow-water equations with a conforming, variable-resolution mesh is evaluated using standard shallow-water test cases. While the long-term motivation for this study is the creation of a global climate modeling framework capable of resolving different spatial and temporal scales in different regions, the process begins with an analysis of the shallow-water system in order to better understand the strengths and weaknesses of the approach developed herein. The multiresolution meshes are spherical centroidal Voronoi tessellations where a single, user-supplied density function determines the region(s) of fine- and coarse-mesh resolution. The shallow-water system is explored with a suite of meshes ranging from quasi-uniform resolution meshes, where the grid spacing is globally uniform, to highly variable resolution meshes, where the grid spacing varies by a factor of 16 between the fine and coarse regions. The potential vorticity is found to be conserved to within machine precision and the total available energy is conserved to within a time-truncation error. This result holds for the full suite of meshes, ranging from quasi-uniform resolution to highly variable resolution meshes. Based on shallow-water test cases 2 and 5, the primary conclusion of this study is that solution error is controlled primarily by the grid resolution in the coarsest part of the model domain. This conclusion is consistent with results obtained by others. When these variable-resolution meshes are used for the simulation of an unstable zonal jet, the core features of the growing instability are found to be largely unchanged as the variation in the mesh resolution increases. The main differences between the simulations occur outside the region of mesh refinement and these differences are attributed to the additional truncation error that accompanies increases in grid spacing. Overall, the results demonstrate support for this approach as a path toward

  18. a New Multi-Resolution Algorithm to Store and Transmit Compressed DTM

    NASA Astrophysics Data System (ADS)

    Biagi, L.; Brovelli, M.; Zamboni, G.

    2012-07-01

    WebGIS and virtual globes allow the distribution of DTMs and their three-dimensional representation to the Web user community. In these applications, the database storage size represents a critical point. DTMs are obtained by some sampling or interpolation on the raw observations and typically are stored and distributed by data-based models, like for example regular grids. A new approach to store and transmit DTMs is presented. The idea is to use multi-resolution bilinear spline functions to interpolate the observations and to model the terrain. More in detail, the algorithm performs the following actions. 1) The spatial distribution of the observations is investigated. Where few data are available, few levels of splines are activated, while more levels are activated where the raw observations are denser: each new level corresponds to a halving of the spline support with respect to the previous level. 2) After the selection of the spline functions to be activated, the relevant coefficients are estimated by interpolating the observations. The interpolation is computed by batch least squares. 3) Finally, the estimated coefficients of the splines are stored. The model guarantees a local resolution consistent with the data density and can be considered analytical, because the coefficients of a given function are stored instead of a set of heights. The approach is discussed and compared with the traditional techniques to interpolate, store and transmit DTMs, considering accuracy and storage requirements. It is also compared with another multi-resolution technique. The research has been funded by the INTERREG HELI-DEM (Helvetia Italy Digital Elevation Model) project.

  19. Approaches to optimal aquifer management and intelligent control in a multiresolutional decision support system

    NASA Astrophysics Data System (ADS)

    Orr, Shlomo; Meystel, Alexander M.

    2005-03-01

    Despite remarkable new developments in stochastic hydrology and adaptations of advanced methods from operations research, stochastic control, and artificial intelligence, solutions of complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, distributed-parameter partial differential equations (PDE) on a given scale. While the addition of uncertainty, and hence, stochasticity or randomness has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that results in prohibitive computer power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when including all uncertain parameters. A paradigm shift is introduced: an adaptation of new methods of intelligent control that will relax the dependency on rigid, computer-intensive, stochastic PDE, and will shift the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions of real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems.

  1. Implementation of the multiconfiguration time-dependent Hartree-Fock method for general molecules on a multiresolution Cartesian grid

    NASA Astrophysics Data System (ADS)

    Sawada, Ryohto; Sato, Takeshi; Ishikawa, Kenichi L.

    2016-02-01

    We report a three-dimensional numerical implementation of the multiconfiguration time-dependent Hartree-Fock method based on a multiresolution Cartesian grid, with no need to assume any symmetry of molecular structure. We successfully compute high-harmonic generation of H2 and H2O. The present implementation will open a way to the first-principles theoretical study of intense-field- and attosecond-pulse-induced ultrafast phenomena in general molecules.

  2. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

    NASA Astrophysics Data System (ADS)

    Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

    2016-06-01

    This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in the context of variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse ~1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to ~0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model ('hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree also with PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistently with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent

  3. A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression.

    PubMed

    Guo, Chenlei; Zhang, Liming

    2010-01-01

    Salient areas in natural scenes are generally regarded as areas which the human eye will typically focus on, and finding these areas is the key step in object detection. In computer vision, many models have been proposed to simulate the behavior of eyes, such as SaliencyToolBox (STB), Neuromorphic Vision Toolkit (NVT), and others, but they demand high computational cost, and whether they compute useful results relies mostly on the choice of parameters. Although some region-based approaches were proposed to reduce the computational complexity of feature maps, these approaches still were not able to work in real time. Recently, a simple and fast approach called spectral residual (SR) was proposed, which uses the SR of the amplitude spectrum to calculate the image's saliency map. However, in our previous work, we pointed out that it is the phase spectrum, not the amplitude spectrum, of an image's Fourier transform that is key to calculating the location of salient areas, and proposed the phase spectrum of Fourier transform (PFT) model. In this paper, we present a quaternion representation of an image which is composed of intensity, color, and motion features. Based on the principle of PFT, a novel multiresolution spatiotemporal saliency detection model called phase spectrum of quaternion Fourier transform (PQFT) is proposed in this paper to calculate the spatiotemporal saliency map of an image by its quaternion representation. Distinct from other models, the added motion dimension allows the phase spectrum to represent spatiotemporal saliency in order to perform attention selection not only for images but also for videos. In addition, the PQFT model can compute the saliency map of an image under various resolutions from coarse to fine. Therefore, the hierarchical selectivity (HS) framework based on the PQFT model is introduced here to construct the tree structure representation of an image. With the help of HS, a model called multiresolution wavelet domain foveation (MWDF) is
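
    The scalar precursor of PQFT, the PFT model mentioned above, is remarkably compact: keep only the phase of the Fourier spectrum, invert, square, and smooth. A minimal sketch follows (the smoothing scale is illustrative; the full PQFT replaces the complex FFT with a quaternion Fourier transform over intensity, color, and motion channels):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def pft_saliency(img):
          # img: 2-D grayscale array; returns a saliency map scaled to [0, 1].
          spectrum = np.fft.fft2(img)
          phase_only = np.exp(1j * np.angle(spectrum))   # discard amplitude
          recon = np.fft.ifft2(phase_only)
          sal = gaussian_filter(np.abs(recon)**2, sigma=3)
          return sal / sal.max()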

  4. Sociopolitical Analyses.

    ERIC Educational Resources Information Center

    Van Galen, Jane, Ed.; And Others

    1992-01-01

    This theme issue of the serial "Educational Foundations" contains four articles devoted to the topic of "Sociopolitical Analyses." In "An Interview with Peter L. McLaren," Mary Leach presented the views of Peter L. McLaren on topics of local and national discourses, values, and the politics of difference. Landon E. Beyer's "Educational Studies and…

  5. Multiresolution Approach for Noncontact Measurements of Arterial Pulse Using Thermal Imaging

    NASA Astrophysics Data System (ADS)

    Chekmenev, Sergey Y.; Farag, Aly A.; Miller, William M.; Essock, Edward A.; Bhatnagar, Aruni

    This chapter presents a novel computer vision methodology for noncontact and nonintrusive measurements of the arterial pulse. This is the only investigation that links the knowledge of human physiology and anatomy, advances in thermal infrared (IR) imaging and computer vision to produce noncontact and nonintrusive measurements of the arterial pulse in both the time and frequency domains. The proposed approach has a physical and physiological basis and as such is of a fundamental nature. A thermal IR camera was used to capture the heat pattern from superficial arteries, and a blood vessel model was proposed to describe the pulsatile nature of the blood flow. A multiresolution wavelet-based signal analysis approach was applied to extract the arterial pulse waveform, which lends itself to various physiological measurements. We validated our results using a traditional contact vital signs monitor as ground truth. Eight people of different ages, races and genders were tested in our study, consistent with Health Insurance Portability and Accountability Act (HIPAA) regulations and internal review board approval. The resultant arterial pulse waveforms exactly matched the ground truth oximetry readings. The essence of our approach is the automatic detection of the region of measurement (ROM) of the arterial pulse, from which the arterial pulse waveform is extracted. To the best of our knowledge, the correspondence between noncontact thermal IR imaging-based measurements of the arterial pulse in the time domain and traditional contact approaches has never been reported in the literature.
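
    The wavelet step of the approach can be sketched as band selection on the mean skin-temperature trace over the detected ROM (the ROM detection itself, the core of the chapter, is not reproduced; the sampling rate, wavelet, and cardiac band are illustrative). Detail level j spans roughly fs/2^(j+1) to fs/2^j Hz, so levels overlapping about 0.7-3 Hz carry the pulse:

      import numpy as np
      import pywt

      def pulse_waveform(mean_temp, fs, wavelet='db4', band=(0.7, 3.0)):
          # Keep only the wavelet detail levels overlapping the cardiac band.
          level = pywt.dwt_max_level(len(mean_temp), pywt.Wavelet(wavelet).dec_len)
          coeffs = pywt.wavedec(mean_temp, wavelet, level=level)
          out = [np.zeros_like(coeffs[0])]          # drop approximation (slow drift)
          for i, d in enumerate(coeffs[1:], start=1):
              j = level - i + 1                     # this detail's decomposition level
              lo, hi = fs / 2 ** (j + 1), fs / 2 ** j
              keep = (hi > band[0]) and (lo < band[1])
              out.append(d if keep else np.zeros_like(d))
          return pywt.waverec(out, wavelet)[: len(mean_temp)]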

  6. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis.

    PubMed

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect the target reflections in ground penetrating radar (GPR) images, so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called Multiresolution Monogenic Signal Analysis (MMSA) is applied to ground penetrating radar (GPR) images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflected wave to a large extent. Then we use a region-of-interest extraction method to separate genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146
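
    The amplitude component at the core of MMSA comes from the monogenic signal, i.e., the image paired with its two Riesz-transform components. A single-scale frequency-domain sketch follows; MMSA applies this within a multiresolution decomposition of the B-scan.

      import numpy as np

      def monogenic_amplitude(img):
          # Amplitude (local energy) of the monogenic signal of a 2-D image,
          # computed via the Riesz transform in the frequency domain.
          f = np.fft.fft2(img)
          u = np.fft.fftfreq(img.shape[0])[:, None]
          v = np.fft.fftfreq(img.shape[1])[None, :]
          q = np.sqrt(u**2 + v**2)
          q[0, 0] = 1.0                                    # avoid dividing by zero at DC
          r1 = np.real(np.fft.ifft2(f * (-1j * u / q)))    # first Riesz component
          r2 = np.real(np.fft.ifft2(f * (-1j * v / q)))    # second Riesz component
          return np.sqrt(img**2 + r1**2 + r2**2)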

  7. A multiresolution approach to image enhancement via histogram shaping and adaptive Wiener filtering

    NASA Astrophysics Data System (ADS)

    Pace, T.; Manville, D.; Lee, H.; Cloud, G.; Puritz, J.

    2008-04-01

    It is critical in military applications to be able to extract features in imagery that may be of interest to the viewer at any time of the day or night. Infrared (IR) imagery is ideally suited to producing these types of images. However, even under the best of circumstances, the traditional approach of applying a global automatic gain control (AGC) to the digital image may not provide the user with the local area details that may be of interest. Processing the imagery locally can enhance additional features and characteristics in the image, providing the viewer with an improved understanding of the scene being observed. This paper describes a multi-resolution pyramid approach for decomposing an image, enhancing its contrast by remapping the histograms to desired pdfs, filtering the subbands and recombining them to create an output image with much more visible detail than the input image. The technique improves the local image contrast in light and dark areas, providing the warfighter with significantly improved situational awareness.
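
    A minimal Laplacian-pyramid version of this decompose/remap/recombine loop is sketched below, assuming NumPy and SciPy. Rank-based remapping of each detail band onto a Gaussian pdf stands in for the paper's histogram shaping, and the adaptive Wiener filtering step is omitted.

```python
import numpy as np
from scipy import ndimage, stats

def laplacian_pyramid(img, levels=4):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = ndimage.zoom(ndimage.gaussian_filter(cur, 1.0), 0.5)
        up = ndimage.zoom(low, (cur.shape[0] / low.shape[0],
                                cur.shape[1] / low.shape[1]))
        pyr.append(cur - up)               # band-pass detail at this scale
        cur = low
    return pyr + [cur]                     # last element: low-pass residual

def collapse(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = ndimage.zoom(cur, (lap.shape[0] / cur.shape[0],
                                 lap.shape[1] / cur.shape[1])) + lap
    return cur

def remap_to_gaussian(band, sigma):
    # rank-based histogram shaping onto a zero-mean Gaussian pdf
    q = (stats.rankdata(band).reshape(band.shape) - 0.5) / band.size
    return sigma * stats.norm.ppf(q)

ir = np.random.rand(256, 256) ** 3 * 255   # stand-in IR frame
pyr = laplacian_pyramid(ir)
pyr = [remap_to_gaussian(b, b.std() * 1.5) for b in pyr[:-1]] + [pyr[-1]]
enhanced = collapse(pyr)
```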

  8. Automatic multiresolution age-related macular degeneration detection from fundus images

    NASA Astrophysics Data System (ADS)

    Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly, therefore early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow early detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described with the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality and captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. The approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
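
    A simplified version of this descriptor pipeline is sketched below, assuming PyWavelets, scikit-image and scikit-learn. It uses only the basic (sign) LBP rather than the completed sign-and-magnitude model the paper describes, and the Haar wavelet and parameter choices are illustrative.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def texture_descriptor(img, levels=3, P=8, R=1.0):
    """Concatenated uniform-LBP histograms of all wavelet subbands."""
    coeffs = pywt.wavedec2(img, 'haar', level=levels)
    subbands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
    feats = []
    for band in subbands:
        lbp = local_binary_pattern(band, P, R, method='uniform')
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2),
                               density=True)
        feats.append(hist)
    return np.concatenate(feats)

# classification: stack descriptors into X, labels into y, then e.g.
#   LinearDiscriminantAnalysis().fit(X_train, y_train).score(X_test, y_test)
```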

  9. Multi-resolutional brain network filtering and analysis via wavelets on non-Euclidean space.

    PubMed

    Kim, Won Hwa; Adluru, Nagesh; Chung, Moo K; Charchut, Sylvia; GadElkarim, Johnson J; Altshuler, Lori; Moody, Teena; Kumar, Anand; Singh, Vikas; Leow, Alex D

    2013-01-01

    Advances in resting state fMRI and diffusion weighted imaging (DWI) have led to much interest in studies that evaluate hypotheses focused on how brain connectivity networks vary across clinically disparate groups. However, various sources of error (e.g., tractography errors, magnetic field distortion, and motion artifacts) leak into the data and make downstream statistical analysis problematic. In small sample size studies, such noise has the unfortunate effect that the differential signal may not be identifiable, and so the null hypothesis cannot be rejected. Traditionally, smoothing is often used to filter out noise. But convolution with a Gaussian kernel is not well understood on arbitrarily connected graphs. Furthermore, there are no direct analogues of scale-space theory for graphs--ones which allow viewing the signal at multiple resolutions. We provide rigorous frameworks for performing 'multi-resolutional' analysis on brain connectivity graphs. These are based on the recent theory of non-Euclidean wavelets. We provide strong evidence, on brain connectivity data from a network analysis study (structural connectivity differences in adult euthymic bipolar subjects), that the proposed algorithm allows identifying statistically significant network variations, which are clinically meaningful, where classical statistical tests, if applied directly, fail. PMID:24505816
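
    For intuition, a naive spectral construction of wavelets on a graph (not the paper's specific non-Euclidean wavelet design) can be written in a few lines with NumPy. The band-pass kernel g and the scales below are illustrative assumptions, and full eigendecomposition only suits small networks.

```python
import numpy as np

def graph_wavelets(W, f, scales=(1.0, 2.5, 5.0)):
    """Naive spectral graph wavelet filtering of a signal f on a graph with
    symmetric, non-negative adjacency matrix W; large graphs would need a
    Chebyshev polynomial approximation instead."""
    L = np.diag(W.sum(axis=1)) - W         # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)             # graph Fourier basis
    fhat = U.T @ f                         # graph Fourier transform of f
    g = lambda x: x * np.exp(-x)           # band-pass kernel g(lambda), g(0)=0
    return [U @ (g(s * lam) * fhat) for s in scales]

# toy example: ring graph with 20 nodes, delta signal at node 0
n = 20
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
bands = graph_wavelets(W, np.eye(n)[0])
```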

  10. Practical operating points of multi-resolution frame compatible (MFC) stereo coding

    NASA Astrophysics Data System (ADS)

    Lu, Taoran; Ganapathy, Hariharan; Lakshminarayanan, Gopi; Chen, Tao; Yin, Peng; Brooks, David; Husak, Walt

    2013-09-01

    3D content is gaining popularity, and the production and delivery of 3D video is now an active work item among video compression experts, content providers and the consumer electronics (CE) industry. Frame compatible stereo coding was initially adopted for the first generation of 3DTV broadcasting services for its compatibility with existing 2D decoders. However, the frame compatible solution sacrifices half of the original video resolution. In 2012, the Moving Picture Experts Group (MPEG) issued a call for proposals (CfP) for solutions that improve the resolution of the frame compatible stereo 3D video signal while maintaining backward compatibility with legacy decoders. The standardization process for multiresolution frame compatible (MFC) stereo coding then started. In this paper, the solution submitted in response to the CfP - Orthogonal Muxing Frame Compatible Full Resolution (OM-FCFR) - is introduced. In addition, this paper provides experimental results to guide broadcasters in selecting operating points for MFC. It is observed that for typical broadcast bitrates, more than 0.5 dB PSNR improvement can be achieved by MFC over the frame compatible solution with only 15%~20% overhead.

  11. Interactive, Internet Delivery of Scientific Visualization viaStructured, Prerendered Multiresolution Imagery

    SciTech Connect

    Chen, Jerry; Yoon, Ilmi; Bethel, E. Wes

    2005-04-20

    We present a novel approach for highly interactive remote delivery of visualization results. Instead of rendering in real time across the internet, our approach, inspired by QuickTime VR's Object Movie concept, delivers pre-rendered images corresponding to different viewpoints and different time steps to provide the experience of 3D and temporal navigation. We use tiled, multiresolution image streaming to consume minimum bandwidth while providing the maximum resolution that a user can perceive from a given viewpoint. Since image data, a viewpoint and time stamps are the only required inputs, our approach is generally applicable to all visualization and graphics rendering applications capable of generating image files in an ordered fashion. Our design is a form of latency-tolerant remote visualization, where visualization and rendering time is effectively decoupled from interactive exploration. Our approach trades unconstrained exploration for increased interactivity, flexible resolution (for individual clients), and reduced load and effective reuse of coherent frames between multiple users (from the server's perspective). A normal web server provides on-demand images to the remote client application, which uses client-pull to obtain and cache only those images required to fulfill the interaction needs. This paper presents an architectural description of the system along with a performance characterization of the production, delivery and viewing pipeline.
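
    The client-pull logic can be sketched as follows; the pyramid layout, tile size and file naming below are hypothetical, not the paper's actual scheme.

```python
import math

def tiles_to_fetch(x0, y0, x1, y1, zoom, tile=256, max_level=6):
    """Pick the coarsest pyramid level that still meets the display zoom,
    then enumerate the (level, row, col) tiles covering the viewport.
    (x0, y0, x1, y1) is the viewport in full-resolution image coordinates;
    zoom is displayed pixels per full-resolution image pixel; level k is
    downsampled by 2**k."""
    level = min(max_level, max(0, int(math.log2(1.0 / max(zoom, 1e-9)))))
    s = 2 ** level
    cols = range(int(x0 / s) // tile, math.ceil(x1 / s / tile))
    rows = range(int(y0 / s) // tile, math.ceil(y1 / s / tile))
    return [(level, r, c) for r in rows for c in cols]

# each (level, row, col) maps to a pre-rendered image on the web server,
# e.g. a hypothetical layout like "l{level}/r{row}_c{col}.jpg"
print(tiles_to_fetch(0, 0, 2048, 1024, zoom=0.25))
```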

  12. Wavelet multiresolution based multifractal analysis of electric fields by lightning return strokes

    NASA Astrophysics Data System (ADS)

    Gou, Xueqiang; Chen, Mingli; Zhang, Yijun; Dong, Wansheng; Qie, Xiushu

    2009-02-01

    Lightning can be seen as a large-scale cooperative phenomenon, which may evolve in a self-similar, cascaded way. Using the electric field waveforms recorded by a slow antenna system, the mono- and multifractal behaviors of 115 first return strokes in negative cloud-to-ground discharges have been investigated with a wavelet multiresolution based multifractal method. The results show that the return stroke process, in terms of its electric field waveform, has apparent fractality and a strong degree of multifractality. The multifractal spectra obtained for the 115 cases are all well fitted by a modified version of the binomial cascade multifractal model. The width of the multifractal spectra, which measures the strength of multifractality, is 1.6 on average. The fractal dimension of the electric field waveforms ranges from 1.2 to 1.5 with an average of 1.3, a value similar to the fractal dimension of the lightning channel obtained by others. This suggests that lightning-produced electric fields may have the same fractal dimension as the channel itself. The relationship between the peak current of a return stroke and the charge deposition in its channel is also discussed. The results suggest that wavelet and scaling analysis may be a powerful tool for interpreting lightning-produced electric fields and therefore for understanding lightning.
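
    A crude wavelet partition-function estimate of the scaling exponents τ(q), from which the multifractal spectrum follows by a Legendre transform, can be sketched with PyWavelets. Practical analyses of this kind usually rely on wavelet leaders or modulus maxima, and negative q is numerically fragile in the simple version below.

```python
import numpy as np
import pywt

def multifractal_tau(sig, qs=np.linspace(-4, 4, 17), wavelet='db3'):
    """Scaling exponents tau(q) from wavelet partition functions
    Z(q, scale) = sum_k |d_{scale,k}|^q, fitted as Z ~ scale^tau(q)."""
    details = pywt.wavedec(sig, wavelet)[1:]       # coarsest detail first
    scales = np.array([2.0 ** (len(details) - j)
                       for j in range(len(details))])
    tau = []
    for q in qs:
        Z = np.array([np.sum(np.abs(c) ** q) for c in details])
        tau.append(np.polyfit(np.log(scales), np.log(Z), 1)[0])
    return qs, np.array(tau)

# toy waveform: a random walk stands in for a recorded field change
qs, tau = multifractal_tau(np.random.randn(4096).cumsum())
```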

  13. Comparison of various texture classification methods using multiresolution analysis and linear regression modelling.

    PubMed

    Dhanya, S; Kumari Roshni, V S

    2016-01-01

    Textures play an important role in image classification. This paper proposes a high-performance texture classification method using a combination of a multiresolution analysis tool and linear regression modelling by channel elimination. The correlation between different frequency regions has been validated as an effective texture characteristic. The method is motivated by the observation that there exists a distinctive correlation between image samples belonging to the same kind of texture at the different frequency regions obtained by a wavelet transform. Experimentally, it is observed that this correlation differs across textures. Linear regression modelling is employed to analyze this correlation and extract texture features that characterize the samples. Our method considers not only the frequency regions but also the correlation between these regions. This paper primarily focuses on applying the Dual Tree Complex Wavelet Packet Transform and the linear regression model for classification of the obtained texture features. Additionally, the paper presents a comparative assessment of the classification results obtained from the above method with two more wavelet transform methods, namely the Discrete Wavelet Transform and the Discrete Wavelet Packet Transform. PMID:26835234

  14. Combination of geodetic measurements by means of a multi-resolution representation

    NASA Astrophysics Data System (ADS)

    Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.

    2010-12-01

    Recent and in particular current satellite gravity missions provide important contributions to global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model, in terms of spherical harmonics, has the disadvantages that it is difficult to represent small spatial details and that data gaps cannot be handled appropriately. Adequate modeling using a multi-resolution representation (MRP) is necessary in order to exploit the highest degree of information from all of the measurements mentioned. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs a decomposition into frequency-dependent detail signals by applying a pyramidal algorithm, and allows for data compression and filtering, i.e. data manipulations. Since different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it seems reasonable to calculate the detail signals of the lower levels mainly from satellite data, the detail signals of the medium levels mainly from airborne data, and the detail signals of the higher levels mainly from terrestrial data. A concept is presented for how these different measurement types can be combined within the MRP. In this presentation the basic principles, strategies and concepts for the generation of MRPs are shown. Examples of regional gravity field determination are presented.
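
    The level-wise combination idea can be illustrated with an ordinary 2-D wavelet decomposition, assuming PyWavelets and three co-registered grids. The level-to-sensor assignment below is a hypothetical reading of the scheme, not the authors' algorithm, which works with spherical representations rather than planar wavelets.

```python
import numpy as np
import pywt

# Stand-in gridded gravity anomaly fields on a common regular grid; in
# reality these would be satellite-, airborne- and terrestrially derived.
rng = np.random.default_rng(0)
sat_grid = rng.normal(size=(128, 128))
air_grid = rng.normal(size=(128, 128))
ter_grid = rng.normal(size=(128, 128))

level = 4
cs = pywt.wavedec2(sat_grid, 'db4', level=level)   # [cA4, d4, d3, d2, d1]
ca = pywt.wavedec2(air_grid, 'db4', level=level)
ct = pywt.wavedec2(ter_grid, 'db4', level=level)

# Coarse detail levels from satellite data, medium from airborne, finest
# from terrestrial observations, then synthesis back to a combined field.
merged = [cs[0], cs[1], ca[2], ca[3], ct[4]]
combined = pywt.waverec2(merged, 'db4')
```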

  15. Adaptation of a multi-resolution adversarial model for asymmetric warfare

    NASA Astrophysics Data System (ADS)

    Rosenberg, Brad; Gonsalves, Paul G.

    2006-05-01

    Recent military operations have demonstrated the use by adversaries of non-traditional or asymmetric military tactics to offset US military might. Rogue nations with links to trans-national terrorists have created a highly unpredictable and potentially dangerous environment for US military operations. These threats are characterized by extremist beliefs, global reach, non-state orientation, and highly networked, adaptive organizations, making such adversaries less vulnerable to conventional military approaches. Additionally, US forces must also contend with more traditional state-based threats that are further evolving their military fighting strategies and capabilities. What are needed are solutions to assist our forces in the prosecution of operations against these diverse threat types and their atypical strategies and tactics. To address this issue, we present a system that allows for the adaptation of a multi-resolution adversarial model. The developed model can then be used to support both training and simulation-based acquisition requirements to respond effectively to such an adversary. The described system produces a combined adversarial model by merging behavior modeling at the individual level with aspects at the group and organizational level via network analysis. Adaptation of this adversarial model is performed by means of an evolutionary algorithm to build a suitable model for the chosen adversary.

  16. Multiresolution analysis over graphs for a motor imagery based online BCI game.

    PubMed

    Asensio-Cubero, Javier; Gan, John Q; Palaniappan, Ramaswamy

    2016-01-01

    Multiresolution analysis (MRA) over graph representations of EEG data has proved to be a promising method for offline brain-computer interfacing (BCI) data analysis. For the first time, we aim to prove the feasibility of the graph lifting transform in an online BCI system. Instead of developing a pointer device or a wheelchair controller as a test bed for human-machine interaction, we have designed and developed an engaging game which can be controlled by means of imaginary limb movements. Some modifications to the existing MRA analysis over graphs for BCI have also been proposed, such as the use of common spatial patterns for feature extraction at the different levels of decomposition, and sequential floating forward search as a best basis selection technique. In the online game experiment with fourteen naive subjects, we obtained an average three-class classification rate of 63.0%. The application of a best basis selection method helps significantly decrease the computing resources needed. The present study allows us to further understand and assess the benefits of tailored wavelet analysis for processing motor imagery data and contributes to the further development of BCI for gaming purposes. PMID:26599827
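
    The common spatial patterns step mentioned above can be sketched as a generalized eigenproblem on class-wise covariance matrices, assuming NumPy and SciPy; the trial shapes and number of filter pairs are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=2):
    """Common spatial patterns for two-class EEG. X1 and X2 are lists (or
    arrays) of (channels, samples) trials; returns spatial filters that
    maximize the variance ratio between the classes."""
    C1 = np.mean([x @ x.T / np.trace(x @ x.T) for x in X1], axis=0)
    C2 = np.mean([x @ x.T / np.trace(x @ x.T) for x in X2], axis=0)
    vals, vecs = eigh(C1, C1 + C2)         # generalized eigenproblem
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T                # (2*n_pairs, channels)

# features per trial: log-variance of the filtered signals, e.g.
#   np.log(np.var(w @ trial, axis=1)) with w = csp_filters(X1, X2)
```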

  17. A three-channel miniaturized optical system for multi-resolution imaging

    NASA Astrophysics Data System (ADS)

    Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Thienpont, Hugo

    2013-09-01

    Inspired by the natural compound eyes of insects, multichannel imaging systems comprise many channels that together sample the entire Field-Of-View (FOV). Our aim in this work was to introduce multi-resolution capability into a multi-channel imaging system by designing the available channels with different imaging properties (focal length, angular resolution). We have designed a three-channel imaging system in which the first and third channels have the highest and lowest angular resolutions (0.0096° and 0.078°) and the narrowest and widest FOVs (7° and 80°), respectively. The design of the channels was done for a single wavelength of 587.6 nm using CODE V. The three channels each consist of 4 aspherical lens surfaces and an absorbing baffle that avoids crosstalk among neighbouring channels. The aspherical lens surfaces have been fabricated in PMMA by ultra-precision diamond tooling and the baffles by metal additive manufacturing. The profiles of the fabricated lens surfaces have been measured with an accurate multi-sensor coordinate measuring machine and compared with the corresponding profiles of the designed lens surfaces. The fabricated lens profiles were then incorporated into CODE V to realistically model the three channels and compare their performances with those of the nominal design. We can conclude that the performances of the two models are in good agreement.

  18. Automated detection of landslides with a hierarchical multi-resolution image analysis approach

    NASA Astrophysics Data System (ADS)

    Kurtz, Camille; Stumpf, André; Malet, Jean-Philippe; Puissant, Anne; Gançarski, Pierre; Passat, Nicolas

    2015-04-01

    The mapping of landslides from Very High Resolution (VHR) satellite optical images presents several challenges related to the heterogeneity of landslide sizes, shapes and ground surface properties. However, a common geomorphological characteristic of landslides is that they are organized as a series of embedded, scaled features. These properties motivated the use of a multiresolution image analysis approach based on a hybrid segmentation/classification region-based method. The method, which uses satellite optical images of the same area at various spatial resolutions (Medium to Very High Resolution), relies on a top-down hierarchical framework. In the specific context of landslide analysis, two main novelties are introduced to enrich this framework. The first consists of using non-spectral information, obtained from a Digital Surface Model (DSM), as a priori knowledge to guide the segmentation/classification process. The second consists of using a new domain adaptation strategy that reduces the expert interaction required when handling large image datasets. Experiments performed on satellite images acquired over terrains affected by landslides in the French Alps demonstrate the efficiency of the proposed method, with different hierarchical levels of detail addressing various operational needs.

  19. Multi-resolution analysis of high density spatial and temporal cloud inhomogeneity fields from HOPE campaign

    NASA Astrophysics Data System (ADS)

    Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Macke, Andreas

    2015-04-01

    Clouds are the most complex structures, in both spatial and temporal scales, in the Earth's atmosphere; they affect the downward surface-reaching fluxes and thus contribute a large uncertainty to the global radiation budget. Within the framework of the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2) Observational Prototype Experiment (HOPE), a high density network of 99 pyranometer stations was set up around Jülich, Germany (~ 10 × 12 km2 area) during April to July 2013 to capture the small-scale variability of cloud-induced radiation fields at the surface. In this study, we perform a multi-resolution analysis of the downward solar irradiance variability at the surface from the pyranometer network to investigate how the temporal and spatial averaging scales affect the variance and spatial correlation for different cloud regimes. Preliminary results indicate that the correlation is strongly scale-dependent, whereas the variance depends on the length of the averaging period. Our findings will be useful for quantifying the effect of spatial collocation when validating satellite-inferred solar irradiance estimates, and for exploring the link between cloud structure and radiation. We will present the details of our analysis and results.
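
    A minimal version of this kind of scale-dependence analysis, for a single station's time series, is sketched below with NumPy; the toy irradiance series and dyadic window scheme are illustrative assumptions.

```python
import numpy as np

def variance_by_scale(series, max_level=8):
    """Variance of non-overlapping block means as the averaging window
    doubles; a simple Haar-style multi-resolution summary of irradiance
    variability at one pyranometer station."""
    out = []
    for lev in range(max_level + 1):
        w = 2 ** lev                       # averaging window in samples
        n = len(series) // w
        blocks = series[: n * w].reshape(n, w).mean(axis=1)
        out.append((w, blocks.var()))
    return out

# e.g. a toy irradiance series sampled at 1 Hz for one hour
ghi = 600 + 150 * np.random.randn(3600).cumsum() / 60
for w, v in variance_by_scale(ghi):
    print(f"window {w:4d} s  variance {v:10.2f}")
```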

  20. Multiresolution image registration in digital x-ray angiography with intensity variation modeling.

    PubMed

    Nejati, Mansour; Pourghassem, Hossein

    2014-02-01

    Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motion, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for registration of digital X-ray angiography images, particularly for the coronary location, is proposed. The algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local searches in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework which allows us to capture both large- and small-scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction. PMID:24469684
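
    A toy coarse-to-fine registration loop in this spirit is sketched below, assuming SciPy and scikit-image. It estimates only a global translation by phase correlation, standing in for the paper's local block search, intensity-variation model and thin-plate spline interpolation.

```python
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def coarse_to_fine_shift(fixed, moving, levels=3):
    """Estimate the global translation registering `moving` to `fixed`,
    refined from the coarsest pyramid level down to full resolution."""
    shift = np.zeros(2)
    for lev in reversed(range(levels + 1)):
        f = ndimage.zoom(ndimage.gaussian_filter(fixed, 2 ** lev / 2),
                         1 / 2 ** lev)
        warped = ndimage.shift(moving, shift)    # apply current estimate
        m = ndimage.zoom(ndimage.gaussian_filter(warped, 2 ** lev / 2),
                         1 / 2 ** lev)
        d, _, _ = phase_cross_correlation(f, m)
        shift += d * 2 ** lev                    # residual at this level
    return shift

fixed = np.zeros((128, 128)); fixed[40:70, 50:90] = 1.0
moving = ndimage.shift(fixed, (6.0, -9.0))
print(coarse_to_fine_shift(fixed, moving))       # approx. (-6, 9)
```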

  1. Fully automated analysis of multi-resolution four-channel micro-array genotyping data

    NASA Astrophysics Data System (ADS)

    Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.

    2006-03-01

    We present a fully automated and robust microarray image analysis system for handling multi-resolution images (down to 3 microns, with sizes up to 80 MB per channel). The system was developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining the genotypes of multiple genetic markers in individuals. It plays an important role in the trend toward replacing traditional medical treatments with personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the practical use of microarray-based genetic testing in predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around time compatible with clinical decision-making. In this paper we develop a fully automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.

  2. A method of image multi-resolution processing based on FPGA + DSP architecture

    NASA Astrophysics Data System (ADS)

    Peng, Xiaohan; Zhong, Sheng; Lu, Hongqiang

    2015-10-01

    In real-time image processing, as the resolution and frame rate of camera imaging improve, not only do the requirements on processing capacity grow, but so does the need to optimize the processing itself. For an image processing system with an FPGA + DSP architecture, there are three common ways to meet this challenge. The first is using a higher performance DSP, for example one with a higher core frequency or with more cores. The second is optimizing the processing method, making the algorithm accomplish the same results in less time. Last but not least, pre-processing in the FPGA can make the image processing more efficient. A method of multi-resolution pre-processing in the FPGA, based on the FPGA + DSP architecture, is proposed here. It takes advantage of a built-in first-in first-out (FIFO) buffer and external synchronous dynamic random access memory (SDRAM) to buffer the images that come from the image detector, and provides down-sampled or cropped images to the DSP flexibly and efficiently according to the request parameters sent by the DSP. The DSP can thus process a reduced image instead of the whole image, greatly shortening processing and transmission time. The method alleviates the DSP's image processing burden and also overcomes the limitation that a single fixed method of resolution reduction cannot meet the requirements of the DSP's image processing tasks.

  3. Developing a real-time emulation of multiresolutional control architectures for complex, discrete-event systems

    SciTech Connect

    Davis, W.J.; Macro, J.G.; Brook, A.L.

    1996-12-31

    This paper first discusses an object-oriented control architecture and then applies the architecture to produce a real-time software emulator for the Rapid Acquisition of Manufactured Parts (RAMP) flexible manufacturing system (FMS). In specifying the control architecture, the coordinated object is first defined as the primary modeling element. These coordinated objects are then integrated into a Recursive, Object-Oriented Coordination Hierarchy. A new simulation methodology, the Hierarchical Object-Oriented Programmable Logic Simulator, is then employed to model the interactions among the coordinated objects. The final step in implementing the emulator is to distribute the models of the coordinated objects over a network of computers and to synchronize their operation to a real-time clock. The paper then introduces the Hierarchical Subsystem Controller as an intelligent controller for the coordinated object. The proposed approach to intelligent control is then compared to the concept of multiresolutional semiosis developed by Dr. Alex Meystel. Finally, plans for implementing an intelligent controller for the RAMP FMS are discussed.

  4. Multiscale and multiresolution modeling of shales and their flow and morphological properties

    PubMed Central

    Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad

    2015-01-01

    The need for more accessible energy resources makes shale formations increasingly important. Characterization of such low-permeability formations is complicated, due to the presence of multiscale features, and defies conventional methods. High-quality 3D imaging may be an ultimate solution for revealing the complexities of such porous media, but acquiring it is costly and time consuming. High-quality 2D images, on the other hand, are widely available. A novel three-step, multiscale, multiresolution reconstruction method is presented that directly uses 2D images in order to develop 3D models of shales. It uses a high-resolution 2D image representing the small-scale features to reproduce the nanopores and their network, and a large-scale, low-resolution 2D image to create the larger-scale characteristics, and it generates stochastic realizations of the porous formation. The method is used to develop a model of a shale system for which the full 3D image is available and its properties can be computed. The predictions of the reconstructed models are in excellent agreement with the data. The method is, however, quite general and can be used for reconstructing models of other important heterogeneous materials and media. Further examples, from biology and from materials science, are also reconstructed to demonstrate the generality of the method. PMID:26560178

  5. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech.

    NASA Astrophysics Data System (ADS)

    Campo, D.; Quintero, O. L.; Bastidas, M.

    2016-04-01

    We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. Artificial neural networks (ANNs) proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract emotional content in audio signals, yielding a high accuracy rate in the classification of emotional states without the need for other kinds of classical time-frequency features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.
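
    A sketch of per-subband feature extraction in this spirit, assuming PyWavelets; the energy, entropy and standard-deviation statistics are plausible choices, not necessarily the exact feature set of the paper.

```python
import numpy as np
import pywt

def wavelet_features(audio, wavelet='db6', level=5):
    """Energy, entropy and standard deviation of every subband of a
    Daubechies decomposition, concatenated into one feature vector."""
    feats = []
    for c in pywt.wavedec(audio, wavelet, level=level):
        energy = np.sum(c ** 2)
        p = c ** 2 / max(energy, 1e-12)            # normalized energies
        entropy = -np.sum(p * np.log2(p + 1e-12))
        feats += [energy, entropy, np.std(c)]
    return np.array(feats)

# such vectors would feed an ANN classifier, e.g.
# sklearn.neural_network.MLPClassifier, over the seven emotional states
```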

  6. Fast numerical algorithms for fitting multiresolution hybrid shape models to brain MRI.

    PubMed

    Vemuri, B C; Guo, Y; Lai, S H; Leonard, C M

    1997-09-01

    In this paper, we present new and fast numerical algorithms for shape recovery from brain MRI using multiresolution hybrid shape models. In this modeling framework, shapes are represented by a core rigid shape characterized by a superquadric function and a superimposed displacement function which is characterized by a membrane spline discretized using the finite-element method. Fitting the model to brain MRI data is cast as an energy minimization problem which is solved numerically. We present three new computational methods for model fitting to data. These methods involve novel mathematical derivations that lead to efficient numerical solutions of the model fitting problem. The first method involves using the nonlinear conjugate gradient technique with a diagonal Hessian preconditioner. The second method involves the nonlinear conjugate gradient in the outer loop for solving global parameters of the model and a preconditioned conjugate gradient scheme for solving the local parameters of the model. The third method involves the nonlinear conjugate gradient in the outer loop for solving the global parameters and a combination of the Schur complement formula and the alternating direction-implicit method for solving the local parameters of the model. We demonstrate the efficiency of our model fitting methods via experiments on several MR brain scans. PMID:9873915
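
    The first of the three fitting methods can be illustrated generically. The sketch below is a preconditioned Polak-Ribiere nonlinear conjugate gradient with an Armijo backtracking line search, not the authors' exact energy functional or preconditioner.

```python
import numpy as np

def pncg(f, grad, x0, precond, iters=200, tol=1e-8):
    """Preconditioned Polak-Ribiere nonlinear conjugate gradient;
    precond(x) returns a positive diagonal approximation of the Hessian."""
    x = x0.astype(float)
    g = grad(x)
    s = g / precond(x)                   # preconditioned gradient
    d = -s
    for _ in range(iters):
        fx, slope, t = f(x), g @ d, 1.0
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5                     # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        s_new = g_new / precond(x_new)
        beta = max(0.0, g_new @ (s_new - s) / (g @ s))   # PR+ formula
        d = -s_new + beta * d
        x, g, s = x_new, g_new, s_new
    return x

# toy check on an ill-conditioned quadratic f(x) = 0.5 x^T A x
A = np.diag([1.0, 100.0])
x_min = pncg(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
             np.array([1.0, 1.0]), lambda x: np.diag(A))
```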

  7. Interslice interpolation of anisotropic 3D images using multiresolution contour correlation

    NASA Astrophysics Data System (ADS)

    Lee, Jiann-Der; Wan, Shu-Yen; Ma, Cherng-Min

    2002-05-01

    To visualize, manipulate and analyze the geometrical structure of anatomical changes, it is often necessary to perform three-dimensional (3-D) interpolation of the organ shape of interest from a series of cross-sectional images obtained from various imaging modalities, such as ultrasound, computed tomography (CT), magnetic resonance imaging (MRI), etc. In this paper, a novel wavelet-based interpolation scheme consisting of four algorithms is proposed for 3-D image reconstruction. The multi-resolution characteristics of the wavelet transform (WT) are fully exploited in this approach, which consists of two stages: boundary extraction and contour interpolation. More specifically, a wavelet-based radial search method is first designed to extract the boundary of the target object. Next, the global information of the extracted boundary is analyzed for interpolation using the WT with various bases and scales. Using six performance measures to evaluate the effectiveness of the proposed scheme, experimental results show that all the proposed algorithms outperform traditional contour-based methods, linear interpolation and B-spline interpolation. The satisfactory outcome of the proposed scheme demonstrates its capability to serve as an essential part of image processing systems developed for medical applications.

  8. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis

    PubMed Central

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect the target reflections in ground penetrating radar (GPR) images, so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called Multiresolution Monogenic Signal Analysis (MMSA) is applied to ground penetrating radar (GPR) images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflected wave to a large extent. Then we use a region-of-interest extraction method to separate genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146

  9. Noise reduction in small-animal PET images using a multiresolution transform.

    PubMed

    Mejia, Jose M; Ochoa Domínguez, Humberto de Jesús; Vergara Villegas, Osslan Osiris; Ortega Máynez, Leticia; Mederos, Boris

    2014-10-01

    In this paper, we address the problem of denoising reconstructed small-animal positron emission tomography (PET) images, based on a multiresolution approach which can be implemented with any transform, such as the contourlet, shearlet, curvelet, or wavelet. The PET images are analyzed and processed in the transform domain by modeling each subband as a set of different regions separated by boundaries. Homogeneous and heterogeneous regions are considered. Each region is processed independently using different filters: a linear estimator for homogeneous regions and a surface polynomial estimator for heterogeneous regions. The boundaries between the different regions are estimated using a modified edge-focusing filter. The proposed approach was validated by a series of experiments. Our method achieved an overall reduction of up to 26% in the %STD of the reconstructed image of a small-animal NEMA phantom. Additionally, a test on a simulated lesion showed that our method yields better contrast preservation than other state-of-the-art techniques used for noise reduction. Thus, the proposed method provides a significant reduction of noise while preserving contrast and important structures such as lesions. PMID:24951682

  10. Current limiters

    SciTech Connect

    Loescher, D.H.; Noren, K.

    1996-09-01

    The current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiters, the fuse blower, the resistor limiter, and the MOSFET pass-transistor limiter, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiters were also analyzed. It was found that there is no single best type of limiter for all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of overcurrent protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high-current circuits, but it has a number of single-point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.

  11. Bayesian Hierarchical Multiresolution Hazard Model for the Study of Time-Dependent Failure Patterns in Early Stage Breast Cancer

    PubMed Central

    Dukić, Vanja; Dignam, James

    2011-01-01

    The multiresolution estimator, developed originally in engineering applications as a wavelet-based method for density estimation, has been recently extended and adapted for estimation of hazard functions (Bouman et al. 2005, 2007). Using the multiresolution hazard (MRH) estimator in the Bayesian framework, we are able to incorporate any a priori desired shape and amount of smoothness in the hazard function. The MRH method’s main appeal is in its relatively simple estimation and inference procedures, making it possible to obtain simultaneous confidence bands on the hazard function over the entire time span of interest. Moreover, these confidence bands properly reflect the multiple sources of uncertainty, such as multiple centers or heterogeneity in the patient population. Also, rather than the commonly employed approach of estimating covariate effects and the hazard function separately, the Bayesian MRH method estimates all of these parameters jointly, thus resulting in properly adjusted inference about any of the quantities. In this paper, we extend the previously proposed MRH methods (Bouman et al. 2005, 2007) into the hierarchical multiresolution hazard setting (HMRH), to accommodate the case of separate hazard rate functions within each of several strata as well as some common covariate effects across all strata while accounting for within-stratum correlation. We apply this method to examine patterns of tumor recurrence after treatment for early stage breast cancer, using data from two large-scale randomized clinical trials that have substantially influenced breast cancer treatment standards. We implement the proposed model to estimate the recurrence hazard and explore how the shape differs between patients grouped by a key tumor characteristic (estrogen receptor status) and treatment types, after adjusting for other important patient characteristics such as age, tumor size and progesterone level. We also comment on whether the hazards exhibit nonmonotonic

  12. Novel multiresolution mammographic density segmentation using pseudo 3D features and adaptive cluster merging

    NASA Astrophysics Data System (ADS)

    He, Wenda; Juette, Arne; Denton, Erica R. E.; Zwiggelaar, Reyer

    2015-03-01

    Breast cancer is the most frequently diagnosed cancer in women. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective ways to overcome the disease. Successful mammographic density segmentation is a key aspect in deriving correct tissue composition, ensuring an accurate mammographic risk assessment. However, mammographic densities have not yet been fully incorporated into non-image-based risk prediction models (e.g. the Gail and the Tyrer-Cuzick models), because of unreliable segmentation consistency and accuracy. This paper presents a novel multiresolution mammographic density segmentation: a concept of stack representation is proposed, and 3D texture features are extracted by adapting techniques based on classic 2D first-order statistics. An unsupervised clustering technique was employed to achieve mammographic segmentation, in which two improvements were made: 1) consistent segmentation by incorporating an optimal centroid initialisation step, and 2) a significantly reduced number of missegmentations by using an adaptive cluster merging technique. A set of full-field digital mammograms was used in the evaluation. Visual assessment indicated substantial improvement in segmented anatomical structures and tissue-specific areas, especially in low mammographic density categories. The developed method demonstrated an ability to improve the quality of mammographic segmentation via clustering, and results indicated a 26% improvement in good-quality segmentations when compared with the standard clustering approach. This in turn can be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.

  13. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed, essentially fourth-order or higher, low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed the artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor depends on two tuning parameters and is highly problem dependent. To minimize the tuning of parameters and the problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converted into a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these…
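
    A rough one-dimensional analogue of such a wavelet regularity sensor is sketched below, assuming PyWavelets. The slope-threshold relation to the Lipschitz exponent depends on the wavelet normalization, so the constants here are purely illustrative.

```python
import numpy as np
import pywt

def regularity_sensor(u, wavelet='haar', levels=3, thresh=0.5):
    """Fit the growth of log2|d_j| across undecimated wavelet scales as a
    rough proxy for local regularity, and flag cells whose slope falls
    below `thresh` (where extra dissipation would be switched on).
    len(u) must be a multiple of 2**levels."""
    details = pywt.swt(u, wavelet, level=levels, trim_approx=True)[1:]
    # details[0] is the coarsest scale; reorder rows finest -> coarsest
    mags = np.log2(np.abs(np.array(details[::-1])) + 1e-12)
    j = np.arange(1, levels + 1)           # scale index per row
    slope = np.polyfit(j, mags, 1)[0]      # per-cell growth with scale
    return slope < thresh

# e.g. a step in an otherwise smooth field is flagged, the sine is not
x = np.linspace(0, 1, 256)
u = np.sin(2 * np.pi * x) + (x > 0.5)
flags = regularity_sensor(u)
```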

  14. Multiresolution internal template cleaning: an application to the Wilkinson Microwave Anisotropy Probe 7-yr polarization data

    NASA Astrophysics Data System (ADS)

    Fernández-Cobos, R.; Vielva, P.; Barreiro, R. B.; Martínez-González, E.

    2012-03-01

    The cosmic microwave background (CMB) radiation data obtained by different experiments contain, besides the desired signal, a superposition of microwave sky contributions. Using a wavelet decomposition on the sphere, we present a fast and robust method to recover the CMB signal from microwave maps. We present an application to the Wilkinson Microwave Anisotropy Probe (WMAP) polarization data, which shows its good performance, particularly in very polluted regions of the sky. The applied wavelet has the advantages that it requires little computational time in its calculations, it is adapted to the HEALPix pixelization scheme and it offers the possibility of multiresolution analysis. The decomposition is implemented as part of a fully internal template fitting method, minimizing the variance of the resulting map at each scale. Using a χ2 characterization of the noise, we find that the residuals of the cleaned maps are compatible with those expected from the instrumental noise. The maps are also comparable to those obtained by the WMAP team, but in our case we do not make use of external data sets. In addition, at low resolution, our cleaned maps present a lower level of noise. The E-mode power spectrum C_ℓ^EE is computed at high and low resolutions, and a cross-power spectrum C_ℓ^TE is also calculated from the foreground-reduced maps of temperature given by WMAP and our cleaned maps of polarization at high resolution. These spectra are consistent with the power spectra supplied by the WMAP team. We detect the E-mode acoustic peak at ℓ ~ 400, as predicted by the standard ΛCDM model. The B-mode power spectrum C_ℓ^BB is compatible with zero.

  15. Retrieval of Precipitation Profiles from Multiresolution, Multifrequency, Active and Passive Microwave Observations

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Anagnostou, Emmanouil N.; Olson, William S.; Starr, David OC. (Technical Monitor)

    2002-01-01

    In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated using both simulated and airborne data. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) satellite's multi-frequency active and passive observations. These observations are characterized by various spatial and sampling resolutions. This makes the retrieval problem mathematically more difficult and ill-determined, because the quality of information decreases with decreasing resolution. A model is used that, given reflectivity profiles and a small set of parameters (including the cloud water content, the drop size distribution intercept, and a variable describing the frozen hydrometeor properties), simulates high-resolution brightness temperatures. The high-resolution simulated brightness temperatures are convolved to the real sensor resolution. An optimal estimation procedure is used to minimize the differences between simulated and observed brightness temperatures. The retrieval technique is investigated using cloud model synthetic data and airborne data from the Fourth Convection And Moisture Experiment. Simulated high-resolution brightness temperatures and reflectivities, as well as the airborne observations, are convolved to the resolution of the TRMM instruments, and retrievals are performed and analyzed relative to the reference data used in the observation synthesis. An illustration of the possible use of the technique in satellite rainfall estimation is presented through an application to TRMM data. The study suggests improvements in combined active and passive retrievals even when the instruments' resolutions are significantly different. Future work needs to better quantify the retrievals' performance, especially in connection with satellite applications, and the uncertainty of the models used in retrieval.
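
    The optimal estimation step can be sketched generically as Gauss-Newton minimization of an observation-space misfit, assuming NumPy. The forward model, covariance and starting point below are toy stand-ins for the brightness-temperature simulator and drop-size-distribution parameter of the paper.

```python
import numpy as np

def optimal_estimation(y_obs, forward, x0, Sy_inv, iters=10):
    """Gauss-Newton minimization of (y - F(x))^T Sy^-1 (y - F(x)) with a
    finite-difference Jacobian."""
    x = x0.astype(float)
    for _ in range(iters):
        F = forward(x)
        J = np.empty((F.size, x.size))
        for k in range(x.size):              # finite-difference Jacobian
            dx = np.zeros_like(x)
            dx[k] = 1e-4 * max(abs(x[k]), 1.0)
            J[:, k] = (forward(x + dx) - F) / dx[k]
        A = J.T @ Sy_inv @ J + 1e-9 * np.eye(x.size)   # ridge for safety
        x = x + np.linalg.solve(A, J.T @ Sy_inv @ (y_obs - F))
    return x

# toy forward model standing in for the brightness-temperature simulator
fwd = lambda x: np.array([x[0] + x[1], x[0] * x[1]])
x_hat = optimal_estimation(np.array([3.0, 2.0]), fwd,
                           np.array([0.5, 1.5]), np.eye(2))
print(x_hat)     # converges to a solution of x0 + x1 = 3, x0 * x1 = 2
```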

  16. Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators.

    PubMed

    Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo

    2016-07-21

    A multi-resolution multi-sensitivity (MRMS) collimator, offering an adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. We propose in this paper a new idea for MRMS design based, for the first time, on parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator's inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design can obtain point response characteristics similar to parallel-hole collimators, and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest resolution state provides 6.9 mm full width at half maximum (FWHM) with a nearly minimum sensitivity of about 96.2 cps MBq^-1, while the lowest resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq^-1. Further comparisons of the states on image quality are conducted through Monte Carlo simulation of a hot-spot phantom containing five hot spots of varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs with the higher-sensitivity states, while the smaller spots prefer the higher-resolution states. In conclusion, the proposed idea can be an effective approach to MRMS design for parallel-hole SPECT collimators. PMID:27359049

  17. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-06-01

    Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the

  18. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models

    PubMed Central

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-01-01

    Mapping or “delimiting” landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the

  19. Quantum simulations of nuclei and nuclear pasta with the multiresolution adaptive numerical environment for scientific simulations

    NASA Astrophysics Data System (ADS)

    Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.

    2016-05-01

    Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for the thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of ^16O, ^208Pb, and ^238U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm with an average density of about ρ = 0.05 fm^-3, proton fraction of Y_p = 0.3, and temperature of T = 0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.

  20. W-matrices, nonorthogonal multiresolution analysis, and finite signals of arbitrary length

    SciTech Connect

    Kwong, M.K.; Tang, P.T.P.

    1994-12-31

    Wavelet theory and discrete wavelet transforms have had great impact on the field of signal and image processing. In this paper the authors propose a new class of discrete transforms. It "includes" the classical Haar and Daubechies transforms. These transforms treat the endpoints of a signal in a different manner from that of conventional techniques. This new approach allows the authors to efficiently handle signals of any length; thus, one is not restricted to working with signal or image sizes that are multiples of a power of 2. Their method does not lengthen the output signal and does not require an additional bookkeeping vector. An exciting result is the uncovering of a new and simple transform that performs very well for compression purposes. It has compact support of length 4, and so does its inverse. The coefficients are symmetric, and the associated scaling function is fairly smooth. The associated dual wavelet has vanishing moments up to order 2. Numerical results comparing the performance of this transform with that of the Daubechies D_4 transform are given. The multiresolution decomposition, however, is not orthogonal. The authors show why this apparent defect is not a real problem in practice. Furthermore, they give a method to compute an orthogonal compensation that yields the best approximation possible within the given scaling space. The transform can be described completely within the context of matrix theory and linear algebra. Thus, even without prior knowledge of wavelet theory, one can easily grasp the concrete algorithm and apply it to specific problems within a very short time, without having to master complex functional analysis. At the end of the paper, they make the connection to wavelet theory.
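
    As a toy illustration of endpoint handling for arbitrary lengths (not the authors' W-matrix construction), an orthonormal Haar step can simply carry an odd trailing sample through unchanged, keeping the output exactly as long as the input:

```python
import numpy as np

def haar_step(x):
    """One analysis step of an orthonormal Haar transform for any length:
    an odd trailing sample is carried through unchanged, so no padding or
    bookkeeping vector is needed (toy endpoint handling only)."""
    n = len(x) // 2 * 2
    s = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)    # smooth coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)    # detail coefficients
    return np.concatenate([s, x[n:]]), d

def haar_inverse_step(s_tail, d):
    s, tail = s_tail[: d.size], s_tail[d.size:]
    x = np.empty(2 * d.size + tail.size)
    x[0 : 2 * d.size : 2] = (s + d) / np.sqrt(2)
    x[1 : 2 * d.size : 2] = (s - d) / np.sqrt(2)
    x[2 * d.size :] = tail
    return x

x = np.arange(7.0)                  # odd length on purpose
approx, detail = haar_step(x)
assert np.allclose(haar_inverse_step(approx, detail), x)
```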

  1. Retrieval of Precipitation Profiles from Multiresolution, Multifrequency Active and Passive Microwave Observations.

    NASA Astrophysics Data System (ADS)

    Grecu, Mircea; Olson, William S.; Anagnostou, Emmanouil N.

    2004-04-01

    In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) observations, and it is based on models that simulate high-resolution brightness temperatures as functions of observed reflectivity profiles and a parameter related to the raindrop size distribution. The modeled high-resolution brightness temperatures are used to determine normalized brightness temperature polarizations at the microwave radiometer resolution. An optimal estimation procedure is employed to minimize the differences between the simulated and observed normalized polarizations by adjusting the drop size distribution parameter. The impact of other unknowns that are not independent variables in the optimal estimation, but affect the retrievals, is minimized through statistical parameterizations derived from cloud model simulations. The retrieval technique is investigated using TRMM observations collected during the Kwajalein Experiment (KWAJEX). These observations cover an area extending from 5° to 12°N latitude and from 166° to 172°E longitude from July to September 1999 and are coincident with various ground-based observations, facilitating a detailed analysis of the retrieved precipitation. Using the method developed in this study, precipitation estimates consistent with both the passive and active TRMM observations are obtained. Various parameters characterizing these estimates, that is, the rain rate, precipitation water content, drop size distribution intercept, and the mass-weighted mean drop diameter, are in good qualitative agreement with independent experimental and theoretical estimates. Combined rain estimates are, in general, higher than the official TRMM precipitation radar (PR)-only estimates for the area and the period considered in the study. Ground-based precipitation estimates, derived
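
    The optimal estimation step described above can be sketched as a one-parameter least-squares fit; the forward model below is a toy stand-in for the paper's radiative transfer simulation, and all numbers are illustrative:

        import numpy as np
        from scipy.optimize import minimize_scalar

        observed_pol = np.array([0.62, 0.55, 0.48])    # hypothetical normalized polarizations

        def simulate_pol(dsd_param):
            # Placeholder forward model: polarization falls off with the DSD parameter.
            channels = np.array([1.0, 1.2, 1.4])
            return 0.9 * np.exp(-dsd_param * channels)

        def cost(dsd_param):
            return np.sum((simulate_pol(dsd_param) - observed_pol) ** 2)

        result = minimize_scalar(cost, bounds=(0.01, 2.0), method='bounded')
        print('estimated DSD parameter:', result.x)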

  2. Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators

    NASA Astrophysics Data System (ADS)

    Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo

    2016-07-01

    A multi-resolution multi-sensitivity (MRMS) collimator, offering an adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. In this paper we propose, for the first time, an MRMS design based on parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator's inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design can obtain point response characteristics similar to parallel-hole collimators, and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest resolution state provides 6.9 mm full width at half maximum (FWHM) with a nearly minimum sensitivity of about 96.2 cps MBq-1, while the lowest resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq-1. Further comparisons of the states on image quality are conducted through Monte Carlo simulation of a hot-spot phantom which contains five hot spots of varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs by using the higher sensitivity states, and the smaller spots prefer the higher resolution states. In conclusion, the proposed idea can be an effective approach for MRMS design for parallel-hole SPECT collimators.
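
    The CNR comparison can be sketched as follows; the definition used here (spot mean minus background mean, over background standard deviation) is an assumption, as the abstract does not spell it out:

        import numpy as np

        def cnr(image, spot_mask, bkg_mask):
            """Contrast-to-noise ratio of one hot spot against the background."""
            spot, bkg = image[spot_mask], image[bkg_mask]
            return (spot.mean() - bkg.mean()) / bkg.std()

        # Toy usage: a bright disc on a noisy background.
        rng = np.random.default_rng(0)
        img = rng.normal(100.0, 10.0, size=(64, 64))
        yy, xx = np.mgrid[:64, :64]
        spot = (yy - 32) ** 2 + (xx - 32) ** 2 < 36
        img[spot] += 40.0
        print(cnr(img, spot, ~spot))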

  3. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models.

    PubMed

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-06-01

    Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSPs) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the
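
    One plausible sketch of a segmentation accuracy metric of the kind used in step (3) is the intersection-over-union between a drumlin segment and its reference, both given as boolean rasters; the paper's four specific metrics are not reproduced here:

        import numpy as np

        def iou(segment, reference):
            """Spatial overlap between a terrain segment and a reference drumlin mask."""
            inter = np.logical_and(segment, reference).sum()
            union = np.logical_or(segment, reference).sum()
            return inter / union if union else 0.0

        # Toy usage with two offset rectangles.
        a = np.zeros((50, 50), bool)
        a[10:30, 10:30] = True
        b = np.zeros((50, 50), bool)
        b[15:35, 12:32] = True
        print(iou(a, b))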

  4. Multi-sensor multi-resolution image fusion for improved vegetation and urban area classification

    NASA Astrophysics Data System (ADS)

    Kumar, U.; Milesi, C.; Nemani, R. R.; Basu, S.

    2015-06-01

    In this paper, we perform multi-sensor multi-resolution data fusion of Landsat-5 TM bands (at 30 m spatial resolution) and multispectral bands of WorldView-2 (WV-2, at 2 m spatial resolution) through a linear spectral unmixing model. The advantages of fusing Landsat and WV-2 data are twofold: first, the spatial resolution of the Landsat bands increases to the WV-2 resolution. Second, integration of data from the two sensors adds the two SWIR bands from the Landsat data to the fused product, which offer advantages such as improved atmospheric transparency and material identification, for example, of urban features, construction materials, moisture contents of soil and vegetation, etc. In 150 separate experiments, WV-2 data were clustered into 5, 10, 15, 20 and 25 spectral classes and data fusion was performed with 3x3, 5x5, 7x7, 9x9 and 11x11 kernel sizes for each Landsat band. The optimal fused bands were selected based on the Pearson product-moment correlation coefficient, RMSE (root mean square error) and the ERGAS index, and were subsequently used for vegetation, urban area and dark object (deep water, shadows) classification using a Random Forest classifier for a test site near the Golden Gate Bridge, San Francisco, California, USA. Accuracy assessment of the classified images through error matrices before and after fusion showed that the overall accuracy and Kappa for the fused data classification (93.74%, 0.91) were much higher than for the Landsat data classification (72.71%, 0.70) and the WV-2 data classification (74.99%, 0.71). This approach increased the spatial resolution of the Landsat data to the WV-2 spatial resolution while retaining the original Landsat spectral bands, with significant improvement in classification.
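
    Of the three band-selection criteria, the ERGAS index is the least standard; a sketch under its usual definition (scaled by the pixel-size ratio of the high- and low-resolution images) might look like this:

        import numpy as np

        def ergas(fused, reference, pixel_hi, pixel_lo):
            """ERGAS: lower values indicate better spectral fidelity of the fusion."""
            nbands = fused.shape[0]
            acc = 0.0
            for k in range(nbands):
                rmse = np.sqrt(np.mean((fused[k] - reference[k]) ** 2))
                acc += (rmse / reference[k].mean()) ** 2
            return 100.0 * (pixel_hi / pixel_lo) * np.sqrt(acc / nbands)

        # Toy usage with two bands; for WV-2/Landsat the pixel-size ratio would be 2/30.
        rng = np.random.default_rng(4)
        ref = rng.uniform(50, 200, size=(2, 64, 64))
        fus = ref + rng.normal(scale=2.0, size=ref.shape)
        print(ergas(fus, ref, 2.0, 30.0))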

  5. Multiresolution terrain depiction and airport navigation function on an embedded SVS

    NASA Astrophysics Data System (ADS)

    Wiesemann, Thorsten; Schiefele, Jens; Bader, Joachim

    2002-07-01

    Many of today's and tomorrow's aviation applications demand accurate and reliable digital terrain elevation databases. Particularly, to enhance a pilot's situational awareness with future 3D synthetic vision systems, accurate, reliable, and high-resolution terrain databases are required to offer a realistic and reliable terrain depiction. On the other hand, optimized or reduced terrain models are necessary to ensure real-time rendering and computing performance. In this paper a method for adaptive terrain meshing and depiction for SVS is presented. The initial data set is decomposed by using the wavelet transform. By examining the wavelet coefficients, an adaptive surface approximation for various levels-of-detail is determined at runtime. Additionally, the dyadic scaling of the wavelet transform is used to build a hierarchical quad-tree representation of the terrain data. This representation enables fast interactive computations and real-time rendering methods. For the integrated airport navigation function an airport mapping database compliant with the new DO-272 standard is processed and integrated in the realized system. The airport database used contains precise airport vector geometries with additional object attributes as background information. In conjunction, these data sets can be used for various airport navigation functions like automatic taxi guidance. Both the multi-resolution terrain concept and the airport navigation function are integrated into a high-level certifiable 2D/3D scene graph rendering system. It runs on an aviation-certifiable embedded rendering graphics board. The optimized combination of multi-resolution terrain, scene graph, and graphics board makes it possible to handle terrain models dynamically at up to 1 arc second resolution. The system and data processing conform to certification rules based on DO-178B, DO-254, DO-200A, DO-272, and DO-276.
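
    A hedged sketch of the level-of-detail idea, using PyWavelets rather than the certified avionics pipeline: decompose the terrain grid, discard small detail coefficients, and reconstruct a simplified surface. Wavelet, level, and threshold are assumptions:

        import numpy as np
        import pywt

        dem = np.random.default_rng(1).normal(size=(128, 128))   # stand-in elevation grid

        # Three-level 2D decomposition, then hard-threshold the detail coefficients.
        coeffs = pywt.wavedec2(dem, 'db2', level=3)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, value=0.5, mode='hard') for d in detail)
            for detail in coeffs[1:]
        ]
        simplified = pywt.waverec2(coeffs, 'db2')   # reduced level-of-detail surface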

  6. A gear rattle metric based on the wavelet multi-resolution analysis: Experimental investigation

    NASA Astrophysics Data System (ADS)

    Brancati, Renato; Rocca, Ernesto; Savino, Sergio

    2015-01-01

    This article investigates the feasibility of a wavelet-based metric for gear rattle in transmission gears, caused by tooth impacts under unloaded conditions. The technique adopts the discrete wavelet transform (DWT), following multi-resolution analysis, to decompose an experimental signal of the relative angular motion of the gears into an approximation vector and several detail vectors. The described procedure, previously developed by the authors, permits a qualitative evaluation of the impacts occurring between the teeth by examining, in particular, the detail vectors coming out of the wavelet decomposition. The technique enables discriminating between impacts occurring on the two different sides of the tooth. This situation is typical of the double-sided gear rattle produced in automotive gear boxes. This paper considers the influence of the oil lubricant, inserted between the teeth, in reducing the impacts. The analysis is performed by comparing three different lubrication conditions, and some of the classical wavelet functions adopted in the literature are tested as the "mother" wavelet. Moreover, comparisons with a metric based on harmonic analysis by means of the Fast Fourier Transform (FFT), often adopted in this field, are conducted to highlight the advantages of the wavelet technique with reference to the influence of some fundamental operating parameters. The experimental signals of the relative angular rotation of the gears are acquired by two high-resolution incremental encoders on a specific test rig for lightly loaded gears. The results of the proposed method also appear promising for the detection of defects that produce small variations in the dynamic behavior of unloaded gears.
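
    The decomposition step can be sketched as follows; the wavelet choice ('db4'), the decomposition depth, and the synthetic signal are assumptions, since the paper tests several mother wavelets on measured encoder data:

        import numpy as np
        import pywt

        # Synthetic stand-in for the measured relative angular motion: a smooth
        # oscillation plus sparse spikes mimicking tooth impacts.
        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 1.0, 2048)
        signal = np.sin(2 * np.pi * 25 * t) + 0.01 * rng.normal(size=t.size)
        signal[::256] += 0.5

        coeffs = pywt.wavedec(signal, 'db4', level=4)    # [cA4, cD4, cD3, cD2, cD1]
        detail_energy = [float(np.sum(d ** 2)) for d in coeffs[1:]]
        print(detail_energy)   # impact energy concentrates in the fine-scale details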

  7. Flight assessment of a real time multi-resolution image fusion system for use in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Smith, M. I.; Sadler, J. R. E.

    2007-04-01

    Military helicopter operations are often constrained by environmental conditions, including low light levels and poor weather. Recent experience has also shown the difficulty presented by certain terrain when operating at low altitude by day and night: for example, poor pilot cues over featureless terrain with low scene contrast, together with obscuration of vision due to wind-blown and re-circulated dust at low level (brown-out). These sorts of conditions can result in loss of spatial awareness and precise control of the aircraft. Atmospheric obscurants such as fog, cloud, rain and snow can similarly lead to hazardous situations and reduced situational awareness. Day Night All Weather (DNAW) systems applied research sponsored by the UK Ministry of Defence (MoD) has developed a multi-resolution real-time image fusion system that has been flown as part of a wider flight trials programme investigating increased situational awareness. Dual-band multi-resolution adaptive image fusion was performed in real time using imagery from a Thermal Imager and a Low Light TV, both co-boresighted on a rotary-wing trials aircraft. A number of sorties were flown in a range of climatic and environmental conditions during both day and night. (Neutral density filters were used on the Low Light TV during daytime sorties.) This paper reports on the results of the flight trial evaluation and discusses the benefits offered by the use of image fusion in degraded visual environments.

  8. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery.

    PubMed

    Belgiu, Mariana; Drăguţ, Lucian

    2014-10-01

    Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea

  9. Radar Image and Rain-gauge Alignment using the Multi-resolution Viscous Alignment (MVA) Algorithm

    NASA Astrophysics Data System (ADS)

    Chatdarong, V.

    2007-12-01

    Rainfall is a complex environmental variable that is difficult to describe either deterministically or statistically. To understand rainfall behaviors, many types of instruments are employed to detect and collect rainfall information. Among them, radar seems to provide the most comprehensive rainfall measurement at fine spatial and temporal resolution and over a relatively wide area. Nevertheless, it does not detect surface rainfall directly as a rain gauge does. The accuracy of radar rainfall therefore depends greatly on the Z-R relationship, which converts radar reflectivity (Z) to surface rain rate (R). This calibration is usually done by fitting the rain-gauge data to the corresponding radar reflectivity using regression analysis. To best fit the data, the radar reflectivity at neighboring pixels is usually used to match the rain-gauge data. However, when applying the Z-R relationship to the radar image, no position adjustment is made, regardless of the calibration technique. Hence, it is desirable to adjust the position of the radar reflectivity images prior to applying the Z-R relationship to improve the accuracy of the rainfall estimation. In this research, the Multi-resolution Viscous Alignment (MVA) algorithm is applied to best align radar reflectivity images to rain-gauge data in order to improve rainfall estimation from the Z-R relationship. The MVA algorithm solves the motion estimation problem using a Bayesian formulation to minimize misfits between two data sets. In general, the problem is ill-posed; therefore, regularizations and constraints based on smoothness and non-divergence assumptions are employed. The algorithm is superior to conventional and correlation-based techniques: it is fast, robust, easy to implement, and does not require data training. In addition, it can handle higher-order deformations, missing data, and small-scale deformations. The algorithm provides spatially dense, consistent, and smooth transition vectors. The

  10. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    SciTech Connect

    Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios; Skouroliakou, Aikaterini; Hazle, John D.; Kagadis, George C. E-mail: George.Kagadis@med.upatras.gr

    2014-07-15

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify wavelet local maxima distributions at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A new wavelet
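
    For reference, the two quantitative figures of merit can be sketched as below; the SSI definition (coefficient of variation of the filtered image relative to that of the noisy image) is assumed, as the abstract does not state it:

        import numpy as np

        def psnr(denoised, reference):
            """Peak signal-to-noise ratio in dB."""
            mse = np.mean((denoised.astype(float) - reference.astype(float)) ** 2)
            return 10.0 * np.log10(float(reference.max()) ** 2 / mse)

        def ssi(denoised, noisy):
            """Speckle Suppression Index; values below 1 indicate suppression."""
            return (denoised.std() / denoised.mean()) / (noisy.std() / noisy.mean())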

  11. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery

    PubMed Central

    Belgiu, Mariana; Drǎguţ, Lucian

    2014-01-01

    Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea

  12. A new, multi-resolution bedrock elevation map of the Greenland ice sheet

    NASA Astrophysics Data System (ADS)

    Griggs, J. A.; Bamber, J. L.; Grisbed Consortium

    2010-12-01

    Gridded bedrock elevation for the Greenland ice sheet has previously been constructed with a 5 km posting. The true resolution of the data set was, in places, however, considerably coarser than this due to the across-track spacing of ice-penetrating radar transects. Errors were estimated to be on the order of a few percent in the centre of the ice sheet, increasing markedly in relative magnitude near the margins, where accurate thickness is particularly critical for numerical modelling and other applications. We use new airborne and satellite estimates of ice thickness and surface elevation to determine the bed topography for the whole of Greenland. This is a dynamic product, which will be updated frequently as new data, such as those from NASA's Operation IceBridge, become available. The University of Kansas has, in recent years, flown an airborne ice-penetrating radar system with close flightline spacing over several key outlet glacier systems. This allows us to produce a multi-resolution bedrock elevation dataset with the high spatial resolution needed for ice dynamic modelling over these key outlet glaciers and coarser resolution over the more sparsely sampled interior. Airborne ice thickness and elevation data from CReSIS obtained between 1993 and 2009 are combined with JPL/UCI/Iowa data collected by WISE (Warm Ice Sounding Experiment) from 2009, covering the marginal areas along the south-west coast. Data collected in the 1970s by the Technical University of Denmark were also used in interior areas with sparse coverage from other sources. Marginal elevation data from the ICESat laser altimeter and the Greenland Ice Mapping Program were used to help constrain the ice thickness and bed topography close to the ice sheet margin where, typically, the terrestrial observations have poor sampling between flight tracks. The GRISBed consortium currently consists of: W. Blake, S. Gogineni, A. Hoch, C. M. Laird, C. Leuschen, J. Meisel, J. Paden, J. Plummer, F

  13. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    PubMed Central

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT), taking anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE
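
    The joint entropy at the core of the prior can be sketched from a joint intensity histogram; the paper's subband JE applies the same quantity to wavelet coefficients rather than raw intensities, and the bin count here is an assumption:

        import numpy as np

        def joint_entropy(img_a, img_b, bins=64):
            """Joint entropy (nats) of two images from their joint intensity histogram."""
            hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-np.sum(p * np.log(p)))

        rng = np.random.default_rng(5)
        anat = rng.normal(size=(128, 128))                           # stand-in MR image
        pet = 0.7 * anat + rng.normal(scale=0.5, size=anat.shape)    # correlated PET estimate
        print(joint_entropy(pet, anat))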

  14. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery

    NASA Astrophysics Data System (ADS)

    Belgiu, Mariana; Drăguţ, Lucian

    2014-10-01

    Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea

  15. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT), taking anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE

  16. Applicability of Multi-Seasonal X-Band SAR Imagery for Multiresolution Segmentation: a Case Study in a Riparian Mixed Forest

    NASA Astrophysics Data System (ADS)

    Dabiri, Z.; Hölbling, D.; Lang, S.; Bartsch, A.

    2015-12-01

    The increasing availability of synthetic aperture radar (SAR) data from a range of different sensors necessitates efficient methods for semi-automated information extraction at multiple spatial scales for different fields of application. The focus of the presented study is twofold: 1) to evaluate the applicability of multi-temporal TerraSAR-X imagery for multiresolution segmentation, and 2) to identify suitable scale parameters through different weightings of the homogeneity criteria, mainly colour variance. Multiresolution segmentation was used for the segmentation of multi-temporal TerraSAR-X imagery, and the ESP (Estimation of Scale Parameter) tool was used to identify suitable scale parameters for image segmentation. The validation of the segmentation results was performed using very high resolution WorldView-2 imagery and a reference map created by an ecological expert. The results of multiresolution segmentation revealed that, in the context of object-based image analysis, TerraSAR-X images are suitable for generating optimal image objects. Furthermore, the ESP tool can be used as an indicator for estimating the scale parameter for multiresolution segmentation of TerraSAR-X imagery. Additionally, for more reliable results, this study suggests that the homogeneity criterion of colour, in a variance-based segmentation algorithm, needs to be set to high values. Setting the shape/colour criteria to 0.005/0.995 or 0.00/1 led to the best results and to the creation of adequate image objects.

  17. The Multi-Resolution Land Characteristics (MRLC) Consortium - 20 Years of Development and Integration of U.S. National Land Cover Data

    EPA Science Inventory

    The Multi-Resolution Land Characteristics (MRLC) Consortium is a good example of the national benefits of federal collaboration. It started in the mid-1990s as a small group of federal agencies with the straightforward goal of compiling a comprehensive national Landsat dataset t...

  18. The planetary hydraulics analysis based on a multi-resolution stereo DTMs and LISFLOOD-FP model: Case study in Mars

    NASA Astrophysics Data System (ADS)

    Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.

    2013-12-01

    Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to those on Earth, it became clear that some solid planets and satellites once had water flows or pseudo-hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activity, the valleys and channels on the martian surface were investigated by a number of remote sensing and in-situ measurements. Among all available data sets, the stereo DTMs and orthoimages from various successful orbital sensors, such as the High Resolution Stereo Camera (HRSC), the Context Camera (CTX), and the High Resolution Imaging Science Experiment (HiRISE), are the most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and ortho images over fluvial areas has some limitations, so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus, in this study we tested the application of hydraulic analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct a high-accuracy simulation together with 150-1.2 m resolution DTMs over test sites including the Athabasca and Bahram valles. For application to the martian surface, the acceleration due to gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2, and Manning's n (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP was superior to other studies that used a single DTM source for hydraulic analysis. HRSC DTMs, covering 50-150 m resolutions, were used to trace rough

  19. Multi-Resolution Analysis of LiDAR data for Characterizing a Stabilized Aeolian Landscape in South Texas

    NASA Astrophysics Data System (ADS)

    Barrineau, C. P.; Dobreva, I. D.; Bishop, M. P.; Houser, C.

    2014-12-01

    Aeolian systems are ideal natural laboratories for examining self-organization in patterned landscapes, as certain wind regimes generate certain morphologies. Topographic information and scale-dependent analysis offer the opportunity to study such systems and characterize process-form relationships. A statistically based methodology for differentiating aeolian features would enable the quantitative association of certain surface characteristics with certain morphodynamic regimes. We conducted a multi-resolution analysis of LiDAR elevation data to assess scale-dependent morphometric variations in an aeolian landscape in South Texas. For each pixel, mean elevation values are calculated along concentric circles moving outward at 100-meter intervals (e.g., 500 m, 600 m, 700 m from the pixel). The average elevation values, plotted as curves against distance from the pixel of interest, are used to differentiate multi-scalar variations in elevation across the landscape. It is hypothesized that these curves may be used to quantitatively differentiate certain morphometries from others, much as a spectral signature may be used to distinguish paved surfaces from natural vegetation, for example. After generating multi-resolution curves for all the pixels in a selected area of interest (AOI), a Principal Components Analysis is used to highlight commonalities and singularities among the curves generated from pixels across the AOI. Our findings suggest that the resulting components could be used for the identification of discrete aeolian features like open sands, trailing ridges and active dune crests, and, in particular, zones of deflation. This new approach to landscape characterization not only mitigates the bias introduced when researchers must select training pixels for morphometric investigations, but can also reveal patterning in aeolian landscapes that would not be as obvious without quantitative characterization.
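
    A hedged sketch of the elevation-signature idea: mean elevation in concentric rings around each sampled pixel, followed by PCA across pixels. The ring radii, the sampling grid, and the use of scikit-learn are assumptions made for illustration:

        import numpy as np
        from sklearn.decomposition import PCA

        def ring_signature(dem, row, col, radii, cell_size=1.0):
            """Mean elevation inside successive rings centered on (row, col)."""
            rows, cols = np.indices(dem.shape)
            dist = np.hypot(rows - row, cols - col) * cell_size
            sig = []
            for r_in, r_out in zip(radii[:-1], radii[1:]):
                ring = (dist >= r_in) & (dist < r_out)
                sig.append(dem[ring].mean())
            return np.array(sig)

        dem = np.random.default_rng(3).normal(size=(200, 200))   # stand-in LiDAR grid
        radii = np.arange(0, 60, 10)                             # rings every 10 cells
        samples = [(r, c) for r in range(50, 150, 20) for c in range(50, 150, 20)]
        curves = np.array([ring_signature(dem, r, c, radii) for r, c in samples])
        components = PCA(n_components=3).fit_transform(curves)   # per-pixel "signatures"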

  20. Landscape pattern analysis for assessing ecosystem condition: Development of a multi-resolution method and application to watershed-delineated landscapes in Pennsylvania

    NASA Astrophysics Data System (ADS)

    Johnson, Glen D.

    Protection of ecological resources requires the study and management of whole landscape-level ecosystems. The subsequent need for characterizing landscape structure has led to a variety of measurements for assessing different aspects of spatial patterns; however, most of these measurements are known to depend on both the spatial extent of a specified landscape and the measurement grain; therefore, multi-scale measurements would be more informative. In response, a new method is developed for obtaining a multi-resolution characterization of fragmentation patterns in land cover raster maps within a fixed geographic extent. The concept of conditional entropy is applied to quantify landscape fragmentation as one moves from larger "parent" land cover pixels to smaller "child" pixels that are hierarchically nested within the parent pixels. When applied over a range of resolutions, one obtains a "conditional entropy profile" that can be defined by three parameters. A method for stochastically simulating landscapes is also developed which allows evaluation of the expected behavior of conditional entropy profiles under known landscape generating mechanisms. This modeling approach also allows for determining sample distributions of different landscape measurements via Monte Carlo simulations. Using an eight-category raster map that was based on 30-meter resolution LANDSAT TM images, a suite of landscape measurements was obtained for each of 102 Pennsylvania watersheds (a complete tessellation of the state). This included conditional entropy profiles based on the random filter for degrading raster map resolutions. For these watersheds, the conditional entropy profiles are quite sensitive to changing pattern, and together with the readily-available marginal land cover proportions, appear to be very valuable for categorizing landscapes with respect to common types. These profiles have the further appeal of presenting multi-scale fragmentation patterns in a way that can be easily
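
    The parent-to-child conditional entropy can be sketched for a categorical raster as follows; labeling each 2x2 block by one of its pixels is a simple stand-in for the paper's random filter:

        import numpy as np

        def conditional_entropy(raster):
            """H(child | parent) for a categorical raster with 2x2 parent blocks."""
            rows, cols = raster.shape
            pair_counts = {}
            for i in range(0, rows - 1, 2):
                for j in range(0, cols - 1, 2):
                    parent = raster[i, j]              # stand-in parent label
                    for child in raster[i:i + 2, j:j + 2].ravel():
                        pair_counts[(parent, child)] = pair_counts.get((parent, child), 0) + 1
            total = sum(pair_counts.values())
            h_joint = -sum(n / total * np.log2(n / total) for n in pair_counts.values())
            parent_counts = {}
            for (p, _), n in pair_counts.items():
                parent_counts[p] = parent_counts.get(p, 0) + n
            h_parent = -sum(n / total * np.log2(n / total) for n in parent_counts.values())
            return h_joint - h_parent                  # H(parent, child) - H(parent)

        rng = np.random.default_rng(8)
        land_cover = rng.integers(0, 8, size=(64, 64))   # eight-category toy raster
        print(conditional_entropy(land_cover))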

  1. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    PubMed Central

    Wang, Kun-Ching

    2015-01-01

    The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture property of the multi-resolution spectrogram of emotional speech should be a good feature set for emotion classification in speech. Furthermore, multi-resolution analysis of texture can give a clearer discrimination between emotions than uniform-resolution analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be incorporated into the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features can also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification power for real-life emotion recognition in speech. PMID:25594590

  2. Application of sub-image multiresolution analysis of Ground-penetrating radar data in a study of shallow structures

    NASA Astrophysics Data System (ADS)

    Jeng, Yih; Lin, Chun-Hung; Li, Yi-Wei; Chen, Chih-Sung; Yu, Hung-Ming

    2011-03-01

    Fourier-based algorithms originally developed for the processing of seismic data are routinely applied in ground-penetrating radar (GPR) data processing, but these conventional methods may result in an abundance of spurious harmonics without any geological meaning. In this study we propose a new approach, based essentially on multiresolution wavelet analysis (MRA), for GPR noise suppression. A 2D GPR section is similar to an image in all respects if each data point of the section is considered an image pixel. The technique is an image analysis with sub-image decomposition. We start from the basic image decomposition procedure using the conventional MRA approach and establish the filter bank accordingly. With reasonable knowledge of the data and noise and a basic assumption about the target, it is possible to determine the components with a high S/N ratio and eliminate noisy components. The MRA procedure is performed further for the components containing both signal and noise. We treat the selected component as an original image and apply the MRA procedure again to that single component with a mother wavelet of higher resolution. This recursive procedure with finer input allows us to extract features or noise events from GPR data more effectively than conventional processing. To assess the performance of the MRA filtering method, we first test it on a simple synthetic model and then on experimental data acquired from a control site using a 400 MHz GPR system. A comparison of results from our method and from conventional filtering techniques demonstrates the effectiveness of the sub-image MRA method, particularly in removing ringing noise and scattering events. A field study was carried out in a trenched fault zone, where a faulting structure was present at shallow depths, to assess the feasibility of improving the data S/N ratio by applying the sub-image multiresolution analysis. In contrast to the conventional
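
    A minimal sketch of one level of the sub-image decomposition with PyWavelets; which sub-images carry noise is data-dependent, and zeroing the horizontal detail here is purely illustrative:

        import numpy as np
        import pywt

        section = np.random.default_rng(3).normal(size=(256, 512))  # stand-in GPR section

        cA, (cH, cV, cD) = pywt.dwt2(section, 'db3')  # approximation + 3 detail sub-images
        cH[:] = 0.0                                   # e.g., drop horizontally coherent ringing
        filtered = pywt.idwt2((cA, (cH, cV, cD)), 'db3')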

  3. Computerized mappings of the cerebral cortex: a multiresolution flattening method and a surface-based coordinate system

    NASA Technical Reports Server (NTRS)

    Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.

    1996-01-01

    We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.

  4. A Multiresolution Hazard Model for Multicenter Survival Studies: Application to Tamoxifen Treatment in Early Stage Breast Cancer

    PubMed Central

    BOUMAN, Peter; MENG, Xiao-Li; DIGNAM, James; DUKIĆ, Vanja

    2014-01-01

    In multicenter studies, one often needs to make inference about a population survival curve based on multiple, possibly heterogeneous survival data from individual centers. We investigate a flexible Bayesian method for estimating a population survival curve based on a semiparametric multiresolution hazard model that can incorporate covariates and account for center heterogeneity. The method yields a smooth estimate of the survival curve for “multiple resolutions” or time scales of interest. The Bayesian model used has the capability to accommodate general forms of censoring and a priori smoothness assumptions. We develop a model checking and diagnostic technique based on the posterior predictive distribution and use it to identify departures from the model assumptions. The hazard estimator is used to analyze data from 110 centers that participated in a multicenter randomized clinical trial to evaluate tamoxifen in the treatment of early stage breast cancer. Of particular interest are the estimates of center heterogeneity in the baseline hazard curves and in the treatment effects, after adjustment for a few key clinical covariates. Our analysis suggests that the treatment effect estimates are rather robust, even for a collection of small trial centers, despite variations in center characteristics. PMID:25620824

  5. Assessment of carotid diameter and wall thickness in ultrasound images using active contours improved by a multiresolution technique

    NASA Astrophysics Data System (ADS)

    Gutierrez, Marco A.; Pilon, Paulo E.; Lage, Silvia G.; Kopel, Liliane; Carvalho, Ricardo T.; Furuie, Sergio S.

    2002-04-01

    Carotid vessel ultrasound imaging is a reliable non-invasive technique to measure the arterial morphology. Vessel diameter, intima-media thickness (IMT) of the far wall and plaque presence can be reliably determined using B-mode ultrasound. In this paper we describe a semi-automatic approach to measure artery diameter and IMT based on an active contour technique improved by a multiresolution analysis. The operator selects a region-of-interest (ROI) in a series of carotid images obtained from B-mode ultrasound. This set of images is convolved with the corresponding partial derivatives of the Gaussian filter. The filter response is used to compute a 2D gradient magnitude image in order to refine the vessel's boundaries. Using an active contour technique the vessel's border is determined automatically. The near wall media-adventitia (NWMA), far wall media-adventitia (FWMA) and far wall lumen-intima (FWLI) borders are obtained by a least-square fitting of the active contours result. The distance between NWMA and FWLI (vessel diameter) and between FWLI and FWMA (far wall intima-media thickness) are obtained for all images and the mean value is computed during systole and diastole. The proposed method is a reliable and reproducible way of assessing the vessel diameter and far wall intima-media thickness of the carotid artery.
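
    The gradient-magnitude step maps directly onto a standard SciPy call; the sigma value below is an assumption:

        import numpy as np
        from scipy import ndimage

        frame = np.random.default_rng(6).normal(size=(240, 320))    # stand-in B-mode ROI
        # Gradient magnitude of the Gaussian-smoothed frame; the active contour
        # is then attracted to the ridges of this edge map.
        edges = ndimage.gaussian_gradient_magnitude(frame, sigma=2.0)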

  6. Multiresolution fusion of radar sounder and altimeter data for the generation of high resolution DEMs of ice sheets

    NASA Astrophysics Data System (ADS)

    Ilisei, Ana-Maria; Bruzzone, Lorenzo

    2015-10-01

    Understanding the dynamics and processes of the ice sheets is crucial for predicting the behavior of climate change. A potential approach to achieve this is by using high resolution (HR) digital elevation models (DEMs) of the ice surface derived from remote sensing radar or laser altimeters. Unfortunately, at present HR DEMs of large portions of the ice sheets are not available. To address this issue, in this paper we propose a multisensor data fusion technique for the generation of a HR DEM of the ice sheets, which fuses two types of data, i.e., radargrams acquired by radar sounder (RS) instruments and ice surface elevation data measured by altimeter (ALT) instruments. The aim of the technique is to generate a DEM of the ice surface at the best possible horizontal resolution by exploiting the complementary characteristics of the RS and ALT data. This is done by defining a novel processing scheme that involves image processing techniques based on data rescaling, geostatistical interpolation and multiresolution analysis (MRA). The method has been applied to a subset of RS and ALT data acquired over a portion of the Byrd Glacier in Antarctica. Experimental results confirm the effectiveness of the proposed method.

  7. Using three-dimensional multigrid-based snake and multiresolution image registration for reconstruction of cranial defect.

    PubMed

    Liao, Yuan-Lin; Lu, Chia-Feng; Wu, Chieh-Tsai; Lee, Jiann-Der; Lee, Shih-Tseng; Sun, Yung-Nien; Wu, Yu-Te

    2013-02-01

    In cranioplasty, neurosurgeons use bone grafts to repair skull defects. To ensure the protection of intracranial tissues and recover the original head shape for aesthetic purposes, a custom-made pre-fabricated prosthesis must match the cranial incision as closely as possible. In our previous study (Liao et al. in Med Biol Eng Comput 49:203-211, 2011), we proposed an algorithm consisting of the 2D snake and image registration using the patient's own diagnostic low-resolution and defective high-resolution computed tomography (CT) images to repair the impaired skull. In this study, we developed a 3D multigrid snake and employed multiresolution image registration to improve the computational efficiency. After extracting the defect portion images, we designed an image-trimming process to remove the bumped inner margin that can facilitate the placement of skull implants without manual trimming during surgery. To evaluate the performance of the proposed algorithm, a set of skull phantoms were manufactured to simulate six different conditions of cranial defects, namely, unilateral, bilateral, and cross-midline defects with 20 or 40% skull defects. The overall image processing time in reconstructing the defect portion images can be reduced from 3 h to 20 min, as compared with our previous method. Furthermore, the reconstruction accuracies using the 3D multigrid snake were superior to those using the 2D snake. PMID:23076880

  8. Multi-resolutional shape features via non-Euclidean wavelets: Applications to statistical analysis of cortical thickness

    PubMed Central

    Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Hinrichs, Chris; Pachauri, Deepti; Okonkwo, Ozioma C.; Johnson, Sterling C.

    2014-01-01

    Statistical analysis on arbitrary surface meshes such as the cortical surface is an important approach to understanding brain diseases such as Alzheimer’s disease (AD). Surface analysis may be able to identify specific cortical patterns that relate to certain disease characteristics or exhibit differences between groups. Our goal in this paper is to make group analysis of signals on surfaces more sensitive. To do this, we derive multi-scale shape descriptors that characterize the signal around each mesh vertex, i.e., its local context, at varying levels of resolution. In order to define such a shape descriptor, we make use of recent results from harmonic analysis that extend traditional continuous wavelet theory from the Euclidean to a non-Euclidean setting (i.e., a graph, mesh or network). Using this descriptor, we conduct experiments on two different datasets, the Alzheimer’s Disease NeuroImaging Initiative (ADNI) data and images acquired at the Wisconsin Alzheimer’s Disease Research Center (W-ADRC), focusing on individuals labeled as having Alzheimer’s disease (AD), mild cognitive impairment (MCI) and healthy controls. In particular, we contrast traditional univariate methods with our multi-resolution approach, which shows increased sensitivity and improved statistical power to detect group-level effects. We also provide an open source implementation. PMID:24614060

  9. MCA: Multiresolution Correlation Analysis, a graphical tool for subpopulation identification in single-cell gene expression data

    PubMed Central

    2014-01-01

    Background Biological data often originate from samples containing mixtures of subpopulations, corresponding e.g. to distinct cellular phenotypes. However, identification of distinct subpopulations may be difficult if biological measurements yield distributions that are not easily separable. Results We present Multiresolution Correlation Analysis (MCA), a method for visually identifying subpopulations based on the local pairwise correlation between covariates, without needing to define an a priori interaction scale. We demonstrate that MCA facilitates the identification of differentially regulated subpopulations in simulated data from a small gene regulatory network, followed by application to previously published single-cell qPCR data from mouse embryonic stem cells. We show that MCA recovers previously identified subpopulations, provides additional insight into the underlying correlation structure, reveals potentially spurious compartmentalizations, and provides insight into novel subpopulations. Conclusions MCA is a useful method for the identification of subpopulations in low-dimensional expression data, as emerging from qPCR or FACS measurements. With MCA it is possible to investigate the robustness of covariate correlations with respect to subpopulations, graphically identify outliers, and identify factors contributing to differential regulation between pairs of covariates. MCA thus provides a framework for investigation of expression correlations for genes of interest and biological hypothesis generation. PMID:25015590
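
    The core MCA quantity, local pairwise correlation evaluated at several window sizes, can be sketched with pandas; sorting the cells along one covariate before windowing is an assumption made for illustration:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(9)
        x = rng.normal(size=500)                         # covariate 1 (e.g., gene A expression)
        y = 0.5 * x + rng.normal(scale=0.8, size=500)    # covariate 2, partially correlated
        cells = pd.DataFrame({'x': x, 'y': y}).sort_values('x').reset_index(drop=True)

        for window in (25, 50, 100):                     # three "resolutions"
            local_corr = cells['x'].rolling(window).corr(cells['y'])
            print(window, round(float(local_corr.mean()), 3))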

  10. Classification of glioblastoma and metastasis for neuropathology intraoperative diagnosis: a multi-resolution textural approach to model the background

    NASA Astrophysics Data System (ADS)

    Ahmad Fauzi, Mohammad Faizal; Gokozan, Hamza Numan; Elder, Brad; Puduvalli, Vinay K.; Otero, Jose J.; Gurcan, Metin N.

    2014-03-01

    Brain cancer surgery requires intraoperative consultation by neuropathology to guide surgical decisions regarding the extent to which the tumor undergoes gross total resection. In this context, the differential diagnosis between glioblastoma and metastatic cancer is challenging, as the decision must be made during surgery in a short time-frame (typically 30 minutes). We propose a method to classify glioblastoma versus metastatic cancer based on extracting textural features from the non-nuclei regions of cytologic preparations. For glioblastoma, these regions of interest are filled with glial processes between the nuclei, which appear as anisotropic thin linear structures. For metastasis, these regions have a more homogeneous appearance; thus suitable texture features can be extracted from these regions to distinguish between the two tissue types. In our work, we use Discrete Wavelet Frames to characterize the underlying texture, owing to their multi-resolution capability in modeling texture. The textural characterization is carried out primarily in the non-nuclei regions, after the nuclei regions are segmented by adapting our visually meaningful decomposition segmentation algorithm to this problem. The k-nearest neighbor method was then used to classify the features into the glioblastoma or metastasis class. Experiments on 53 images (29 glioblastomas and 24 metastases) resulted in average accuracies as high as 89.7% for glioblastoma, 87.5% for metastasis and 88.7% overall. Further studies are underway to incorporate nuclei region features into the classification on an expanded dataset, as well as to expand the classification to more types of cancers.

  11. A multi-resolution filtered-x LMS algorithm based on discrete wavelet transform for active noise control

    NASA Astrophysics Data System (ADS)

    Qiu, Z.; Lee, C.-M.; Xu, Z. H.; Sui, L. N.

    2016-01-01

    We have developed a new active noise control algorithm based on the discrete wavelet transform (DWT) for both stationary and non-stationary noise. First, the Mallat pyramidal algorithm is introduced to implement the DWT, which decomposes the reference signal into several sub-bands with multi-resolution and provides a perfect reconstruction (PR) procedure. To reduce the extra computational complexity introduced by the DWT, an efficient strategy is proposed that updates the adaptive filter coefficients in the frequency domain using a fast Fourier transform (FFT). Based on the reference noise source, a 'Haar' wavelet is employed and, by decomposing the noise signal into sub-bands (a three-band decomposition), the proposed DWT-FFT-based FXLMS (DWT-FFT-FXLMS) algorithm has greatly reduced complexity and better convergence performance compared to a time-domain filtered-x least mean square (TD-FXLMS) algorithm. Owing to the outstanding time-frequency characteristics of wavelet analysis, the proposed DWT-FFT-FXLMS algorithm can effectively cancel both stationary and non-stationary noise, whereas the frequency-domain FXLMS (FD-FXLMS) algorithm cannot.
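
    For orientation, a minimal time-domain filtered-x LMS loop (the TD-FXLMS baseline the paper compares against, not its DWT-FFT variant) might look as follows; the secondary-path estimate s_hat, tap count, and step size are assumptions.

      import numpy as np

      def fxlms(x, d, s_hat, n_taps=64, mu=1e-3):
          """x: reference noise; d: disturbance at error mic; s_hat: secondary path."""
          w = np.zeros(n_taps)                        # adaptive control filter
          x_f = np.convolve(x, s_hat)[: len(x)]       # reference filtered by s_hat
          y = np.zeros(len(x))
          e = np.zeros(len(x))
          for n in range(max(n_taps, len(s_hat)), len(x)):
              x_vec = x[n - n_taps + 1: n + 1][::-1]
              y[n] = w @ x_vec                        # anti-noise sample
              y_sec = s_hat @ y[n - len(s_hat) + 1: n + 1][::-1]  # through plant
              e[n] = d[n] + y_sec                     # residual at the sensor
              w -= mu * e[n] * x_f[n - n_taps + 1: n + 1][::-1]   # filtered-x step
          return w, e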

  12. Physics-based Multi-resolution Radar-Radiometer Soil Moisture Estimation within the SMAP Mission Framework

    NASA Astrophysics Data System (ADS)

    Akbar, R.; Moghaddam, M.

    2014-12-01

    To further develop our understanding of global carbon and water cycles, and to support the NASA Soil Moisture Active-Passive (SMAP) mission, efforts have been made to develop joint and combined radar and radiometer soil moisture estimation algorithms. Taking advantage of the complementary sensitivities of radar backscatter and brightness temperature to soil moisture and vegetation has the potential to greatly improve global soil moisture estimates. With the advent of SMAP, not only is combining radar and radiometer information of interest, but combining multi-resolution data becomes critical. The work presented here will discuss methods to estimate soil moisture within the SMAP framework via a global optimization technique. Fine-resolution radar backscatter measurements (3 km for SMAP) are combined with coarse-resolution radiometer data (36 km for SMAP) in a joint cost function. Brightness temperature disaggregation and soil moisture estimation are then performed at the radar resolution. Furthermore, to capture the underlying physics of emission and scattering within the cost function, physics-based forward models which link emission and scattering from first principles are employed. The resulting effect is the ability to define a parameter kernel shared between the emission and scattering models. Preliminary investigation yields improved soil moisture estimation when radar and radiometer information are used jointly. Furthermore, over a wide range of soil moisture (0.04-0.4 cm3/cm3) and vegetation (0-5 kg/m2), physics-based joint estimation yields the lowest retrieval errors.
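
    The shape of such a joint cost function can be sketched as below; the linear forward models, weights, and bounds are illustrative stand-ins for the physics-based emission and scattering models, not the SMAP algorithms.

      import numpy as np
      from scipy.optimize import minimize

      def radar_fwd(sm):          # toy backscatter model (dB vs soil moisture)
          return -25.0 + 30.0 * sm

      def radiom_fwd(sm):         # toy brightness-temperature model (K)
          return 300.0 - 200.0 * sm

      def joint_cost(sm_fine, sigma_obs, tb_coarse, w_r=1.0, w_t=1.0):
          """sm_fine: soil moisture on the fine radar grid in one coarse cell."""
          r_term = np.sum((sigma_obs - radar_fwd(sm_fine)) ** 2)
          # the coarse radiometer pixel constrains the average fine-scale emission
          t_term = (tb_coarse - radiom_fwd(sm_fine).mean()) ** 2
          return w_r * r_term + w_t * t_term

      # sigma_obs: 3 km backscatter pixels in one 36 km cell; tb_coarse: its TB
      # res = minimize(joint_cost, np.full(len(sigma_obs), 0.2),
      #                args=(sigma_obs, tb_coarse),
      #                bounds=[(0.02, 0.5)] * len(sigma_obs))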

  13. The Multi-Resolution Land Characteristics (MRLC) Consortium: 20 years of development and integration of USA national land cover data

    USGS Publications Warehouse

    Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coluston, John

    2014-01-01

    The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans.

  14. Evaluation of the Global Multi-Resolution Terrain Elevation Data 2010 (GMTED2010) Using ICESat Geodetic Control

    NASA Technical Reports Server (NTRS)

    Carabajal, Claudia C.; Harding, David J.; Boy, Jean-Paul; Danielson, Jeffrey J.; Gesch, Dean B.; Suchdeo, Vijay P.

    2011-01-01

    Supported by NASA's Earth Surface and Interior (ESI) Program, we are producing a global set of Ground Control Points (GCPs) derived from the Ice, Cloud and land Elevation Satellite (ICESat) altimetry data. From February 2003 to October 2009, ICESat obtained nearly global measurements of land topography (±86° latitude) with unprecedented accuracy, sampling the Earth's surface at discrete, approximately 50 m diameter laser footprints spaced 170 m along the altimetry profiles. We apply stringent editing to select the highest quality elevations, and use these GCPs to characterize and quantify spatially varying elevation biases in Digital Elevation Models (DEMs). In this paper, we present an evaluation of the soon to be released Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). Elevation biases and error statistics have been analyzed as a function of land cover and relief. The GMTED2010 products are a large improvement over previous sources of elevation data at comparable resolutions. RMSEs for all products and terrain conditions are below 7 m and typically about 4 m. The GMTED2010 products are biased upward with respect to the ICESat GCPs on average by approximately 3 m.

  15. Evaluation of the Global Multi-Resolution Terrain Elevation Data 2010 (GMTED2010) using ICESat geodetic control

    USGS Publications Warehouse

    Carabajal, C.C.; Harding, D.J.; Boy, J.-P.; Danielson, J.J.; Gesch, D.B.; Suchdeo, V.P.

    2011-01-01

    Supported by NASA's Earth Surface and Interior (ESI) Program, we are producing a global set of Ground Control Points (GCPs) derived from the Ice, Cloud and land Elevation Satellite (ICESat) altimetry data. From February 2003 to October 2009, ICESat obtained nearly global measurements of land topography (±86° latitude) with unprecedented accuracy, sampling the Earth's surface at discrete ~50 m diameter laser footprints spaced 170 m along the altimetry profiles. We apply stringent editing to select the highest quality elevations, and use these GCPs to characterize and quantify spatially varying elevation biases in Digital Elevation Models (DEMs). In this paper, we present an evaluation of the soon to be released Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). Elevation biases and error statistics have been analyzed as a function of land cover and relief. The GMTED2010 products are a large improvement over previous sources of elevation data at comparable resolutions. RMSEs for all products and terrain conditions are below 7 m and typically about 4 m. The GMTED2010 products are biased upward with respect to the ICESat GCPs on average by approximately 3 m. © 2011 Society of Photo-Optical Instrumentation Engineers (SPIE).

  16. Recent advances in wavelet analyses: Part 1. A review of concepts

    NASA Astrophysics Data System (ADS)

    Labat, David

    2005-11-01

    This contribution provides a review of the most recent wavelet applications in the field of earth sciences and is devoted to introducing and illustrating new wavelet analysis methods in the field of hydrology. Wavelet analysis remains little known in the field of hydrology even though it clearly overcomes the well-known limits of classical Fourier analysis. New wavelet-based tools are proposed to hydrologists in order to make wavelet analysis more attractive. First, a multiresolution continuous wavelet analysis method is shown to significantly improve the determination of the temporal-scale structure of a given signal. Second, the concept of wavelet entropy is introduced in both continuous and multiresolution frameworks, allowing for an estimation of the temporal evolution of the complexity of a given hydrological or climatological signal. New insights into the scale-dependence of relationships between signals are exposed by introducing wavelet cross-correlation and wavelet coherence. Continuous wavelet cross-correlation provides a time-scale distribution of the correlation between two signals, whereas continuous wavelet coherence provides a qualitative estimator of the temporal evolution of the degree of linearity of the relationship between two signals at a given scale. These methods are applied to four large river runoffs and two global climatic indexes in a companion paper.
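
    Of the tools above, the discrete form of wavelet entropy is compact enough to sketch: the Shannon entropy of the relative energy across detail scales. Wavelet and level choices here are illustrative.

      import numpy as np
      import pywt

      def wavelet_entropy(signal, wavelet="db4", level=6):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail scales
          p = energies / energies.sum()         # relative energy per scale
          p = p[p > 0]                          # guard against log(0)
          return -np.sum(p * np.log(p))         # low: ordered; high: complex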

  17. Analyses to improve operational flexibility

    SciTech Connect

    Trikouros, N.G.

    1986-01-01

    Operational flexibility is greatly enhanced if the technical bases for plant limits and design margins are fully understood, and if the analyses necessary to evaluate the effect of plant modifications or changes in operating modes on these parameters can be performed as required. If a condition should arise that might jeopardize a plant limit or reduce operational flexibility, it would be necessary to understand the basis for the limit, or for the specific condition limiting operational flexibility, and to be capable of performing a reanalysis either to demonstrate that the limit will not be violated or to change the limit. This paper provides examples of GPU Nuclear efforts in this regard. Examples from Oyster Creek and Three Mile Island operating experience are discussed.

  18. Possibilities and limitations of electrothermal atomization in atomic absorption spectrometry for the direct analysis of heavy metals in sea water

    NASA Astrophysics Data System (ADS)

    Hoenig, M.; Wollast, R.

    This work shows the analytical possibilities of an electrothermal atomizer for the direct determination of trace metals in sea water. The high background signals generated by the matrix particularly perturb volatile elements, because of the low decomposition temperature allowed. In the case of cadmium, addition of ascorbic acid to the sample permits modification of the atomization mechanism and reduction of the optimum temperature. Under these conditions, the absorption peak of cadmium precedes the background absorption, and consequently the analysis is no longer limited by the magnitude of the matrix signal: the determination of cadmium concentrations far below the μg l-1 level is easily possible. Although the direct determination of the other elements should in principle be less disturbed by the background, the analytical performance is poorer than for cadmium. Limits of determination of the order of 0.1 to 1 μg l-1 can be reached for chromium, copper and manganese. Lead and nickel appeared to be the most difficult elements; their direct determination is only possible in polluted coastal or estuarine waters. Injection of the sample as an aerosol into the hot graphite tube proved to be well adapted to this kind of investigation. The simultaneous visualization of specific and background signals allows interpretations which until now were impossible with commercially available apparatus.

  19. Optimally combined confidence limits

    NASA Astrophysics Data System (ADS)

    Janot, P.; Le Diberder, F.

    1998-02-01

    An analytical and optimal procedure to combine statistically independent sets of confidence levels on a quantity is presented. This procedure does not impose any constraint on the methods followed by each analysis to derive its own limit. It incorporates the a priori statistical power of each of the analyses to be combined, in order to optimize the overall sensitivity. It can, in particular, be used to combine the mass limits obtained by several analyses searching for the Higgs boson in different decay channels, with different selection efficiencies, mass resolutions and expected backgrounds. It can also be used to combine the mass limits obtained by several experiments (e.g. ALEPH, DELPHI, L3 and OPAL, at LEP 2) independently of the method followed by each of these experiments to derive its own limit. A method to derive the limit set by one analysis is also presented, along with an unbiased prescription to optimize the expected mass limit under the no-signal hypothesis.
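
    The unweighted core of such a combination is easy to state: if n independent analyses each report a confidence level that is uniformly distributed under the hypothesis, the combined confidence level of their product x is x multiplied by the sum over k < n of (-ln x)^k / k!. The sketch below implements that rule; the paper's optimal weighting by a priori power is omitted.

      import math

      def combined_cl(cls):
          """Combine independent per-analysis confidence levels (unweighted)."""
          x = math.prod(cls)                # product of the individual CLs
          return x * sum((-math.log(x)) ** k / math.factorial(k)
                         for k in range(len(cls)))

      # combined_cl([0.1, 0.1, 0.1])  # ~0.032: a stronger joint exclusion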

  20. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    SciTech Connect

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; Harrison, Robert J.

    2015-02-25

    We present a fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation, formulated within a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited-state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.

  1. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    DOE PAGESBeta

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; Harrison, Robert J.

    2015-02-25

    We present a fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation, formulated within a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited-state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.

  2. Geomorphometric multi-scale analysis for the recognition of Moon surface features using multi-resolution DTMs

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Jianping; Sofia, Giulia; Tarolli, Paolo

    2014-05-01

    Moon surface features have great significance for understanding and reconstructing lunar geological evolution. Linear structures like rilles and ridges are closely related to internally forced tectonic movement. The craters widely distributed on the Moon are also key research targets for externally forced geological evolution. The scarcity of samples and the difficulty of field work make remote sensing the most important approach for planetary studies. New and advanced lunar probes launched by China, the U.S., Japan and India now provide a wealth of high-quality data, especially in the form of high-resolution Digital Terrain Models (DTMs), bringing new opportunities and challenges for feature extraction on the Moon. The aim of this study is to recognize and extract lunar features using geomorphometric analysis based on multi-scale parameters and multi-resolution DTMs. The considered digital datasets include CE1-LAM (Chang'E One, Laser AltiMeter) data with a resolution of 500 m/pix, LRO-WAC (Lunar Reconnaissance Orbiter, Wide Angle Camera) data with a resolution of 100 m/pix, LRO-LOLA (Lunar Reconnaissance Orbiter, Lunar Orbiter Laser Altimeter) data with a resolution of 60 m/pix, and LRO-NAC (Lunar Reconnaissance Orbiter, Narrow Angle Camera) data with a resolution of 2-5 m/pix. We considered surface derivatives to recognize linear structures, including rilles and ridges. Different window scales and thresholds are considered for feature extraction. We also calculated a roughness index to identify erosion/deposition areas within craters. The results underline the suitability of the adopted methods for feature recognition on the Moon's surface. The roughness index is found to be a useful tool to distinguish new craters, with higher roughness, from old craters, which present a smoother surface.

  3. Anisotropic multi-resolution analysis in 2D, application to long-range correlations in cloud mm-radar fields

    SciTech Connect

    Davis, A.B.; Clothiaux, E.

    1999-03-01

    Because of Earth's gravitational field, its atmosphere is strongly anisotropic with respect to the vertical; the effect of the Earth's rotation on synoptic wind patterns also causes a more subtle form of anisotropy in the horizontal plane. The authors survey various approaches to statistically robust anisotropy from a wavelet perspective and present a new one adapted to strongly non-isotropic fields that are sampled on a rectangular grid with a large aspect ratio. This novel technique uses an anisotropic version of Multi-Resolution Analysis (MRA) in image analysis; the authors form a tensor product of the standard dyadic Haar basis, where the dividing ratio is λ_z = 2, and a nonstandard triadic counterpart, where the dividing ratio is λ_x = 3. The natural support of the field is therefore 2^n pixels (vertically) by 3^n pixels (horizontally), where n is the number of levels in the MRA. The natural triadic basis includes the French top-hat wavelet, which resonates with bumps in the field, whereas the Haar wavelet responds to ramps or steps. The complete 2D basis has one scaling function and five wavelets. The resulting anisotropic MRA is designed for application to the liquid water content (LWC) field in boundary-layer clouds, as the prevailing wind advects them by a vertically pointing mm-radar system. Spatial correlations are notoriously long-range in cloud structure, and the authors use the wavelet coefficients from the new MRA to characterize these correlations in a multifractal analysis scheme. In the present study, the MRA is used (in synthesis mode) to generate fields that mimic cloud structure quite realistically, although only a few parameters are used to control the randomness of the LWC's wavelet coefficients.
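
    One analysis level of such a 2 (vertical) x 3 (horizontal) scheme can be sketched in numpy; the field shape (2^n by 3^n) follows the abstract, while the two detail channels shown (of the five in the full basis) and their normalizations are illustrative.

      import numpy as np

      def aniso_level(f):
          """One level: 2x3 block approximation plus two example detail channels."""
          z2 = f.reshape(f.shape[0] // 2, 2, f.shape[1] // 3, 3)
          approx = z2.mean(axis=(1, 3))                   # mean over 2x3 blocks
          haar_z = (z2[:, 0] - z2[:, 1]).mean(axis=2)     # dyadic Haar: steps/ramps
          # triadic 'French top-hat': centre minus flanks, resonates with bumps
          tophat_x = (2 * z2[..., 1] - z2[..., 0] - z2[..., 2]).mean(axis=1)
          return approx, haar_z, tophat_x

      # lwc = np.random.rand(2**5, 3**5)   # synthetic 32 x 243 LWC-like field
      # a, dz, dx = aniso_level(lwc)       # each output has shape (16, 81)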

  4. Detecting hidden spatial and spatio-temporal structures in glasses and complex physical systems by multiresolution network clustering.

    PubMed

    Ronhovde, P; Chakrabarty, S; Hu, D; Sahu, M; Sahu, K K; Kelton, K F; Mauro, N A; Nussinov, Z

    2011-09-01

    We elaborate on a general method that we recently introduced for characterizing the "natural" structures in complex physical systems via multi-scale network analysis. The method is based on "community detection" wherein interacting particles are partitioned into an "ideal gas" of optimally decoupled groups of particles. Specifically, we construct a set of network representations ("replicas") of the physical system based on interatomic potentials and apply a multiscale clustering ("multiresolution community detection") analysis using information-based correlations among the replicas. Replicas may i) be different representations of an identical static system, ii) embody dynamics by considering replicas to be time-separated snapshots of the system (with a tunable time separation), or iii) encode general correlations when different replicas correspond to different representations of the entire history of the system as it evolves in space-time. Inputs for our method are the inter-particle potentials or experimentally measured two (or higher order) particle correlations. We apply our method to computer simulations of a binary Kob-Andersen Lennard-Jones system in a mixture ratio of A80B20, a ternary model system with components "A", "B", and "C" in ratios of A88B7C5 (as in Al88Y7Fe5), and to atomic coordinates in a Zr80Pt20 system as gleaned by reverse Monte Carlo analysis of experimentally determined structure factors. We identify the dominant structures (disjoint or overlapping) and general length scales by analyzing extrema of the information theory measures. We speculate on possible links between i) physical transitions or crossovers and ii) changes in structures found by this method as well as phase transitions associated with the computational complexity of the community detection problem. We also briefly consider continuum approaches and discuss rigidity and the shear penetration depth in amorphous systems; this latter length scale increases as

  5. Hierarchical progressive surveys. Multi-resolution HEALPix data structures for astronomical images, catalogues, and 3-dimensional data cubes

    NASA Astrophysics Data System (ADS)

    Fernique, P.; Allen, M. G.; Boch, T.; Oberto, A.; Pineau, F.-X.; Durand, D.; Bot, C.; Cambrésy, L.; Derriere, S.; Genova, F.; Bonnarel, F.

    2015-06-01

    Context. Scientific exploitation of the ever increasing volumes of astronomical data requires efficient and practical methods for data access, visualisation, and analysis. Hierarchical sky tessellation techniques enable a multi-resolution approach to organising data on angular scales from the full sky down to the individual image pixels. Aims: We aim to show that the hierarchical progressive survey (HiPS) scheme for describing astronomical images, source catalogues, and three-dimensional data cubes is a practical solution to managing large volumes of heterogeneous data and that it enables a new level of scientific interoperability across large collections of data of these different data types. Methods: HiPS uses the HEALPix tessellation of the sphere to define a hierarchical tile and pixel structure to describe and organise astronomical data. HiPS is designed to conserve the scientific properties of the data alongside both visualisation considerations and emphasis on the ease of implementation. We describe the development of HiPS to manage a large number of diverse image surveys, as well as the extension of hierarchical image systems to cube and catalogue data. We demonstrate the interoperability of HiPS and multi-order coverage (MOC) maps and highlight the HiPS mechanism to provide links to the original data. Results: Hierarchical progressive surveys have been generated by various data centres and groups for ~200 data collections including many wide area sky surveys, and archives of pointed observations. These can be accessed and visualised in Aladin, Aladin Lite, and other applications. HiPS provides a basis for further innovations in the use of hierarchical data structures to facilitate the description and statistical analysis of large astronomical data sets.
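
    The indexing that underlies HiPS tiles can be sketched with healpy (assumed installed): a tile at order k is a HEALPix pixel at nside = 2**k in the NESTED scheme, and each tile refines into four children at the next order. The coordinates below are illustrative.

      import healpy as hp

      def hips_tile(ra_deg, dec_deg, order):
          """NESTED HEALPix pixel index of the tile containing (ra, dec)."""
          return hp.ang2pix(2 ** order, ra_deg, dec_deg, nest=True, lonlat=True)

      tile = hips_tile(10.68, 41.27, order=3)      # a point near M31
      parent = tile // 4                           # enclosing order-2 tile
      children = [4 * tile + i for i in range(4)]  # its order-4 sub-tiles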

  6. Multiresolution analysis of precipitation teleconnections with large-scale climate signals: A case study in South Australia

    NASA Astrophysics Data System (ADS)

    He, Xinguang; Guan, Huade

    2013-10-01

    Climatic teleconnections are often used to interpret, and sometimes to predict, precipitation temporal variability at various time scales. However, the teleconnections are intertwined between the effects of multiple large-scale climate signals, which are often interdependent. Each climate signal is composed of multitemporal components, which may result in different teleconnection patterns. The time lags of the precipitation response may vary with climate signals and their multitemporal components. In order to effectively address these problems, a multiresolution analysis (MRA) with a discrete wavelet transform is utilized, and a stepwise linear regression model based on MRA and cross-correlation analysis is developed in this study. The method is applied to examine monthly precipitation teleconnections in South Australia (SA) with five large-scale climate signals. The MRA first decomposes each of the original monthly precipitation anomaly and climate signal series into several component series at different temporal scales. Then the hierarchical lag relationships between them are determined for regression modeling using cross-correlation analysis. The results indicate that the MRA-based method is able to reveal at which time scale(s) and with what time lag(s) the teleconnections occur, as well as their spatial patterns. The method is also useful for examining the time-scale patterns of the interdependence between climate signals. Altogether, these make the MRA-based method a promising tool for addressing the difficulties in climate teleconnection studies. The multiple linear regression based on MRA-decomposed climate signals is expected to better interpret monthly precipitation temporal variability than that based on the original climate signals.
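
    The decompose-then-scan-lags idea can be sketched as follows; the wavelet, level, and lag range are illustrative, and the stepwise regression step is omitted.

      import numpy as np
      import pywt

      def mra_components(x, wavelet="db4", level=4):
          """Additive components via wavedec with all other coefficients zeroed."""
          coeffs = pywt.wavedec(x, wavelet, level=level, mode="periodization")
          comps = []
          for i in range(len(coeffs)):
              kept = [c if j == i else np.zeros_like(c)
                      for j, c in enumerate(coeffs)]
              comps.append(pywt.waverec(kept, wavelet,
                                        mode="periodization")[: len(x)])
          return comps  # components sum back to x (approximation first)

      def best_lag(climate_comp, precip_comp, max_lag=12):
          """Lag (months) at which a climate component best correlates with precip."""
          n = len(precip_comp)
          corrs = [abs(np.corrcoef(climate_comp[: n - k], precip_comp[k:])[0, 1])
                   for k in range(max_lag + 1)]
          return int(np.argmax(corrs))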

  7. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic

    USGS Publications Warehouse

    Chavez, P.S., Jr.; Sides, S.C.; Anderson, J.A.

    1991-01-01

    The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thematic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results. The three methods used to merge the information contents of the Landsat TM and SPOT panchromatic data were the Hue-Intensity-Saturation (HIS), Principal Component Analysis (PCA), and High-Pass Filter (HPF) procedures. The HIS method distorted the spectral characteristics of the data the most. The HPF method distorted the spectral characteristics the least; the distortions were minimal and difficult to detect. -Authors
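
    Of the three, the HPF idea is the most compact to sketch: add the pan band's high-pass spatial detail to the upsampled multispectral band. The kernel size, gain, and 3:1 scale (30 m TM vs 10 m SPOT pan) below are illustrative, not the paper's exact procedure.

      import numpy as np
      from scipy.ndimage import uniform_filter, zoom

      def hpf_merge(ms_band, pan, scale=3, gain=1.0):
          """ms_band: low-res multispectral; pan: co-registered high-res pan."""
          ms_up = zoom(ms_band, scale, order=1)    # resample MS to the pan grid
          detail = pan - uniform_filter(pan, size=2 * scale + 1)  # high-pass pan
          return ms_up + gain * detail             # inject detail, keep MS spectra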

  8. VizieR Online Data Catalog: Multi-resolution images of M33 (Boquien+, 2015)

    NASA Astrophysics Data System (ADS)

    Boquien, M.; Calzetti, D.; Aalto, S.; Boselli, A.; Braine, J.; Buat, V.; Combes, F.; Israel, F.; Kramer, C.; Lord, S.; Relano, M.; Rosolowsky, E.; Stacey, G.; Tabatabaei, F.; van der Tak, F.; van der Werf, P.; Verley, S.; Xilouris, M.

    2015-02-01

    The FITS file contains maps of the flux in star formation tracing bands, maps of the SFR, maps of the attenuation in star formation tracing bands, and a map of the stellar mass of M33, each at resolutions from 8"/pixel to 512"/pixel. The FUV GALEX data from NGS were obtained directly from the GALEX website through GALEXVIEW. The observation was carried out on 25 November 2003 for a total exposure time of 3334 s. Hα+[NII] observations were carried out in November 1995 on the Burrel Schmidt telescope at Kitt Peak National Observatory. The observations and the data processing are analysed in detail in Hoopes & Walterbos (2000ApJ...541..597H). The Spitzer IRAC 8um image, sensitive to the emission of Polycyclic Aromatic Hydrocarbons (PAH), and the MIPS 24um image, sensitive to the emission of Very Small Grains (VSG), were obtained from the NASA Extragalactic Database and have been analysed by Hinz et al. (2004ApJS..154..259H) and Verley et al. (2007A&A...476.1161V, Cat. J/A+A/476/1161). The PACS data at 70um and 100um, which are sensitive to the warm dust heated by massive stars, come from two different programmes. The 100um image was obtained in the context of the Herschel HerM33es open time key project (Kramer et al., 2010A&A...518L..67K, observation ID 1342189079 and 1342189080). The observation was carried out in parallel mode on 7 January 2010 for a duration of 6.3 h. It consisted of 2 orthogonal scans at a speed of 20"/s, with a leg length of 7'. The 70um image was obtained as a follow-up open time cycle 2 programme (OT2mboquien4, observation ID 1342247408 and 1342247409). M33 was scanned on 25 June 2012 at a speed of 20"/s in 2 orthogonal directions over 50' with 5 repetitions of this scheme in order to match the depth of the 100um image. The total duration of the observation was 9.9 h. The cube file, cube.fits, contains 16 extensions: * FUV * HALPHA * 8 * 24 * 70 * 100 * SFR_FUV * SFR_HALPHA * SFR_24 * SFR_70 * SFR_100 * SFRFUV24 * SFRHALPHA24 * A_FUV * A

  9. Land cover characterization and mapping of continental southeast Asia using multi-resolution satellite sensor data

    USGS Publications Warehouse

    Giri, Chandra; Defourny, P.; Shrestha, Surendra

    2003-01-01

    Land use/land cover change, particularly that of tropical deforestation and forest degradation, has been occurring at an unprecedented rate and scale in Southeast Asia. The rapid rate of economic development, demographics and poverty are believed to be the underlying forces responsible for the change. Accurate and up-to-date information to support the above statement is, however, not available. The available data, if any, are outdated and are not comparable for various technical reasons. Time series analysis of land cover change and the identification of the driving forces responsible for these changes are needed for the sustainable management of natural resources and also for projecting future land cover trajectories. We analysed the multi-temporal and multi-seasonal NOAA Advanced Very High Resolution Radiometer (AVHRR) satellite data of 1985/86 and 1992 to (1) prepare historical land cover maps and (2) to identify areas undergoing major land cover transformations (called ‘hot spots’). The identified ‘hot spot’ areas were investigated in detail using high-resolution satellite sensor data such as Landsat and SPOT supplemented by intensive field surveys. Shifting cultivation, intensification of agricultural activities and change of cropping patterns, and conversion of forest to agricultural land were found to be the principal reasons for land use/land cover change in the Oudomxay province of Lao PDR, the Mekong Delta of Vietnam and the Loei province of Thailand, respectively. Moreover, typical land use/land cover change patterns of the ‘hot spot’ areas were also examined. In addition, we developed an operational methodology for land use/land cover change analysis at the national level with the help of national remote sensing institutions.

  10. Deconstructing a Polygenetic Landscape Using LiDAR and Multi-Resolution Analysis

    NASA Astrophysics Data System (ADS)

    Houser, C.; Barrineau, C. P.; Dobreva, I. D.; Bishop, M. P.

    2015-12-01

    In many earth surface systems, characteristic morphologies are associated with various regimes, both past and present. Aeolian systems contain a variety of features differentiated largely by morphometric differences, which in turn reflect age and divergent process regimes. Using quantitative analysis of high-resolution elevation data to generate detailed information regarding these characteristic morphometries enables geomorphologists to effectively map process regimes from a distance. Combined with satellite imagery and other types of remotely sensed data, the outputs can even help to delineate phases of activity within aeolian systems. The differentiation of regimes and the identification of relict features together enable greater rigor in analyses leading to field-based investigations, which are highly dependent on site-specific historical contexts that often obscure distinctions between separate process-form regimes. We present results from a Principal Components Analysis (PCA) performed on a LiDAR-derived elevation model of a largely stabilized aeolian system in South Texas. The resulting components are layered and classified to generate a map of aeolian morphometric signatures for a portion of the landscape. Several of these areas do not immediately appear to be aeolian in nature in satellite imagery or LiDAR-derived models, yet field observations and historical imagery reveal that the PCA did in fact identify stabilized and relict dune features. This methodology enables researchers to generate a morphometric classification of the land surface. We believe this method is a valuable and innovative tool for researchers identifying process regimes within a study area, particularly in field-based investigations that rely heavily on site-specific context.
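
    The PCA step can be schematized as below (the choice of terrain derivatives and the standardization are assumptions): stack per-pixel metrics from the lidar DEM, run PCA, and reshape the leading components back into map layers for classification.

      import numpy as np
      from sklearn.decomposition import PCA

      def terrain_pca(layers, n_components=3):
          """layers: list of 2-D arrays (e.g. slope, curvature, residual relief)."""
          stack = np.stack([l.ravel() for l in layers], axis=1)  # pixels x metrics
          stack = (stack - stack.mean(0)) / stack.std(0)         # standardize
          scores = PCA(n_components=n_components).fit_transform(stack)
          return [scores[:, i].reshape(layers[0].shape)
                  for i in range(n_components)]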

  11. Introduction of wavelet analyses to rainfall/runoffs relationship for a karstic basin: the case of Licq-Atherey karstic system (France).

    PubMed

    Labat, D; Ababou, R; Mangin, A

    2001-01-01

    Karstic systems are highly heterogeneous geological formations characterized by multiscale temporal and spatial hydrologic behavior, with more or less localized temporal and spatial structures. Classical correlation and spectral analyses cannot take these properties into account. Therefore, it is proposed to introduce a new kind of transformation: the wavelet transform. Here we focus particularly on the use of wavelets to study the temporal behavior of local precipitation and watershed runoffs from a part of the karstic system. In the first part of the paper, a brief mathematical overview of the continuous Morlet wavelet transform and of the multiresolution analysis is presented. An analogy with spectral analyses allows the introduction of concepts such as the wavelet spectrum and cross-spectrum. In the second part, classical methods (spectral and correlation analyses) and wavelet transforms are applied and compared for daily rainfall rates and runoffs measured on a French karstic watershed (Pyrénées) over a period of 30 years. Different characteristic time scales of the rainfall and runoff processes are determined. These time scales are typically on the order of a few days for floods, but they also include significant half-year and one-year components and multi-annual components. The multiresolution cross-analysis also provides a new interpretation of the impulse response of the system. To conclude, wavelet transforms provide a valuable amount of information, which may now be taken into account in both temporal and spatially distributed karst modeling of precipitation and runoff. PMID:11447860
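
    The continuous Morlet analysis of a daily series reduces to a few lines with PyWavelets; the scale range below (roughly days to beyond a year) is illustrative.

      import numpy as np
      import pywt

      def morlet_spectrum(runoff, dt_days=1.0):
          scales = np.geomspace(2, 512, num=60)
          coeffs, freqs = pywt.cwt(runoff, scales, "morl", sampling_period=dt_days)
          power = np.abs(coeffs) ** 2        # time-scale distribution of energy
          return power, 1.0 / freqs          # power map and periods in days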

  12. Multi-resolution Analysis of the slip history of 1999 Chi-Chi, Taiwan earthquake

    NASA Astrophysics Data System (ADS)

    Ji, C.; Helmberger, D. V.

    2001-05-01

    Studies of large earthquakes have revealed strong heterogeneity in faulting slip distributions at mid-crustal depths. These results are inferred from modeling local GPS and strong motion records but are usually limited by the lack of data density. Here we report on the fault complexity of the large (magnitude 7.6) Chi-Chi earthquake obtained by inverting densely and well distributed static measurements consisting of 119 GPS and 23 doubly integrated strong motion records, which is the best static data set yet recorded for a large earthquake. We show that the slip of the Chi-Chi earthquake was concentrated on the surface of a "wedge shaped" block. Furthermore, similar to our previous study of the 1999 Hector Mine earthquake (Ji et al., 2001), the static data, teleseismic body waves and local strong motion data are used to constrain the rupture process. A simulated annealing method combined with a wavelet transform approach is employed to solve for the slip histories on subfault elements with variable sizes. The sizes are adjusted iteratively based on data type and distribution to produce an optimal balance between resolution and reliability. Results indicate strong local variations in rupture characteristics, with relatively rapid changes in the middle and southern portions producing relatively strong accelerations.

  13. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
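
    A hedged sketch of the shift-invariant fusion baseline (pixel-based maximum-absolute-coefficient selection over an undecimated 2-D transform, without the paper's edge-correlation refinement): wavelet, level, and the averaging of approximations are assumptions, and image sides must be divisible by 2**level.

      import numpy as np
      import pywt

      def sidwt_fuse(nir_up, pan, wavelet="haar", level=2):
          """Fuse two co-registered, same-shape bands in the undecimated domain."""
          ca = pywt.swt2(nir_up, wavelet, level)
          cb = pywt.swt2(pan, wavelet, level)
          fused = []
          for (a_lo, a_hi), (b_lo, b_hi) in zip(ca, cb):
              lo = 0.5 * (a_lo + b_lo)                 # average approximations
              hi = tuple(np.where(np.abs(ah) >= np.abs(bh), ah, bh)
                         for ah, bh in zip(a_hi, b_hi))  # keep stronger detail
              fused.append((lo, hi))
          return pywt.iswt2(fused, wavelet)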

  14. Data Collection Methods for Validation of Advanced Multi-Resolution Fast Reactor Simulations

    SciTech Connect

    Tokuhiro, Akiro; Ruggles, Art; Pointer, David

    2015-01-22

    In pool-type Sodium Fast Reactors (SFR), the regions most susceptible to thermal striping are the upper instrumentation structure (UIS) and the intermediate heat exchanger (IHX). This project experimentally and computationally (CFD) investigated the thermal mixing in the region exiting the reactor core to the UIS. The thermal mixing phenomenon was simulated using two vertical jets at different velocities and temperatures, as prototypic of two adjacent channels out of the core. Thermal jet mixing of anticipated flows at different temperatures and velocities was investigated. Velocity profiles were measured throughout the flow region using Ultrasonic Doppler Velocimetry (UDV), and temperatures along the geometric centerline between the jets were recorded using a thermocouple array. CFD simulations, using COMSOL, were used initially to understand the flow, then to design the experimental apparatus, and finally to compare simulation results and measurements characterizing the flows. The experimental results and CFD simulations show that the flow field is characterized by three regions with respective transitions, namely convective mixing, (flow-direction) transitional, and post-mixing. Both experiments and CFD simulations support this observation. For the anticipated SFR conditions the flow is momentum dominated, and thus thermal mixing is limited due to the short flow length from the exit of the core to the bottom of the UIS. This means that there will be thermal striping at any surface where poorly mixed streams impinge; unless lateral mixing is actively promoted out of the core, thermal striping will prevail. Furthermore, we note that CFD can be considered a 'separate effects (computational) test' and is recommended as part of any integral analysis. To this effect, poorly mixed streams then have potential impact on the rest of the SFR design and scaling, especially placement of internal components, such as the IHX that may see poorly mixed

  15. Incorporating multiresolution analysis with multiclassifiers and decision fusion for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    West, Terrance R.

    The ongoing development and increased affordability of hyperspectral sensors are increasing their utilization in a variety of applications, such as agricultural monitoring and decision making. Hyperspectral Automated Target Recognition (ATR) systems typically rely heavily on dimensionality reduction methods, and particularly on intelligent reduction methods referred to as feature extraction techniques. This dissertation reports on the development, implementation, and testing of new hyperspectral analysis techniques for ATR systems, including their use in agricultural applications where ground-truthed observations available for training the ATR system are typically very limited. This dissertation reports the design of effective methods for grouping and down-selecting Discrete Wavelet Transform (DWT) coefficients and the design of automated Wavelet Packet Decomposition (WPD) filter tree pruning methods for use within the framework of a Multiclassifiers and Decision Fusion (MCDF) ATR system. The efficacy of the DWT MCDF and WPD MCDF systems is compared to existing ATR methods commonly used in hyperspectral remote sensing applications. The newly developed methods' sensitivity to operating conditions, such as mother wavelet selection, decomposition level, and quantity and quality of available training data, is also investigated. The newly developed ATR systems are applied to the problem of hyperspectral remote sensing of agricultural food crop contamination, either by airborne chemical application (specifically Glufosinate herbicide at varying concentrations applied to corn crops) or by biological infestation (specifically soybean rust disease in soybean crops). The DWT MCDF and WPD MCDF methods significantly outperform conventional hyperspectral ATR methods. For example, when detecting and classifying varying levels of soybean rust infestation, stepwise linear discriminant analysis results in accuracies of approximately 30%-40%, but WPD MCDF methods result in accuracies
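
    A hedged illustration of wavelet packet feature generation for a single hyperspectral pixel, with simple energy ranking standing in for the dissertation's grouping, pruning, and MCDF fusion steps:

      import numpy as np
      import pywt

      def wpd_features(spectrum, wavelet="db3", level=3, n_keep=8):
          """Energies of the strongest terminal wavelet packet subbands."""
          wp = pywt.WaveletPacket(spectrum, wavelet, maxlevel=level)
          nodes = wp.get_level(level, order="freq")     # terminal subbands
          energies = np.array([np.sum(n.data ** 2) for n in nodes])
          keep = np.argsort(energies)[::-1][:n_keep]    # down-select by energy
          return energies[keep]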

  16. Multi-resolution processing for fractal analysis of airborne remotely sensed data

    NASA Technical Reports Server (NTRS)

    Jaggi, S.; Quattrochi, D.; Lam, N.

    1992-01-01

    Fractal geometry is increasingly becoming a useful tool for modeling natural phenomena. As an alternative to Euclidean concepts, fractals allow for a more accurate representation of the nature of complexity in natural boundaries and surfaces. Since they are characterized by self-similarity, an ideal fractal surface is scale-independent; i.e., at different scales a fractal surface looks the same. This is not exactly true for natural surfaces. When viewed at different spatial resolutions, parts of natural surfaces look alike in a statistical manner, and only for a limited range of scales. Images acquired by NASA's Thermal Infrared Multispectral Scanner are used to compute the fractal dimension as a function of spatial resolution. Three methods are used to determine the fractal dimension: Schelberg's line-divider method, the variogram method, and the triangular prism method. A description of these methods and the results of applying them to a remotely sensed image are also presented. Five flights were flown in succession at altitudes of 2 km (low), 6 km (mid), 12 km (high), and then back again at 6 km and 2 km, corresponding to three different pixel sizes: 5 m, 15 m, and 30 m. The area selected was the Ross Barnett reservoir near Jackson, Mississippi. The mission was flown during the predawn hours of 1 Feb. 1992. Radiosonde data were collected for that duration to profile the characteristics of the atmosphere. After simulating different spatial sampling intervals within the same image for each of the three image sets, the results are cross-correlated to compare the extent of detail and complexity obtained when data are taken at lower spatial intervals.
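
    The variogram method is simple to sketch for a one-dimensional transect: the log-log slope of the variogram gives 2H, with fractal dimension D = 2 - H for a profile (D = 3 - H for the surface). Lag range and regression details are illustrative.

      import numpy as np

      def variogram_fd(z, max_lag=64):
          """z: 1-D transect of image values; returns the profile fractal dimension."""
          lags = np.arange(1, max_lag + 1)
          gamma = np.array([np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
          slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)  # slope = 2H
          return 2.0 - slope / 2.0          # D = 2 - H (use 3 - H for surfaces)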

  17. Multi-resolution integrated modeling for basin-scale water resources management and policy analysis

    SciTech Connect

    Gupta, Hoshin V. ,; Brookshire, David S.; Springer, E. P.; Wagener, Thorsten

    2004-01-01

    Approximately one-third of the land surface of the Earth is considered to be arid or semi-arid, with an annual average of less than 12-14 inches of rainfall. The availability of water in such regions is, of course, particularly sensitive to climate variability, while the demand for water grows with explosive population growth. The competition for available water is exerting considerable pressure on water resources management. Policy and decision makers in the southwestern U.S. increasingly have to cope with over-stressed rivers and aquifers as population and water demands grow. Other factors such as endangered species and Native American water rights further complicate the management problems. Further, as groundwater tables are drawn down due to pumping in excess of natural recharge, considerable (potentially irreversible) environmental impacts begin to be felt as, for example, rivers run dry for significant portions of the year, riparian habitats disappear (with consequent effects on the bio-diversity of the region), aquifers compact, resulting in large-scale subsidence, and water quality begins to suffer. The current drought (1999-2002) in the southwestern U.S. is raising new concerns about how to sustain the combination of agricultural, urban and in-stream uses of water that underlie the socio-economic and ecological structure in the region. The water-stressed nature of arid and semi-arid environments means that competing water uses of various kinds vie for access to a highly limited resource. If basin-scale water sustainability is to be achieved, managers must somehow achieve a balance between supply and demand throughout the basin, not just for the surface water or stream. The need to move water around a basin such as the Rio Grande or Colorado River to achieve this balance has created the stimulus for water transfers and water markets, and for accurate hydrologic information to sustain such institutions [Matthews et al. 2002; Brookshire et al 2003

  18. Benefits of an ultra large and multiresolution ensemble for estimating available wind power

    NASA Astrophysics Data System (ADS)

    Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik

    2016-04-01

    In this study we investigate the benefits of an ultra-large ensemble with up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble shall serve as a basis for detecting events of extreme errors in wind power forecasting. The forecast value is the wind vector at wind-turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions have served as input so far, with limited spatial resolution (~10-80 km) and member number (~50). Perturbations related to the specific merits of wind power production are still missing. Thus, single extreme error events which are not detected by such ensemble power forecasts occur infrequently. The numerical forecast model used in this study is the Weather Research and Forecasting Model (WRF). Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies, in conjunction with the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations, using a sequential importance resampling filter to improve the model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations which are connected to extreme error events are located, and corresponding perturbation techniques are applied. The demanding computational effort is overcome by utilising the supercomputer JUQUEEN at the Forschungszentrum Juelich.
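
    The sequential importance resampling update reduces to a few lines; the Gaussian observation weights and single-site observation in this sketch are assumptions, not the operational configuration.

      import numpy as np

      def sir_step(member_winds, obs, obs_err=1.0, rng=np.random.default_rng(0)):
          """member_winds: (n_members,) hub-height forecasts at an observed site."""
          w = np.exp(-0.5 * ((member_winds - obs) / obs_err) ** 2)
          w = w + 1e-300                              # avoid all-zero weights
          w /= w.sum()                                # importance weights
          return rng.choice(len(w), size=len(w), p=w) # resampled member indices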

  19. The multi-module multi-resolution SPECT system: A tool for variable-pinhole small-animal imaging

    NASA Astrophysics Data System (ADS)

    Hesterman, Jacob Yost

    The multi-module, multi-resolution SPECT system (M3R) was developed and evaluated at the University of Arizona's Center for Gamma-Ray Imaging (CGRI). The system consists of four modular gamma cameras stationed around a Cerrobend shielding assembly. Slots machined into the shielding allow for the easy interchange of pinhole apertures, providing M3R with excellent hardware flexibility. Motivation for the system included serving as a prototype for a tabletop, small-animal SPECT system, acting as a testbed for image quality by enabling the experimental validation of imaging theory, and aiding in the development of techniques for the emerging field of adaptive SPECT imaging. Development of the system included design and construction of the shielding assembly and pinhole apertures. The issue of pinhole design and evaluation represents a recurring theme of the presented work. Existing calibration methods were adapted for use with M3R. A new algorithm, the contracting-grid search algorithm, that is capable of being executed in hardware was developed for use in position estimation. The algorithm was successfully applied in software and progress was made in hardware implementation. Special equipment and interpolation techniques were also developed to deal with M3R's unique system design and calibration requirements. A code library was created to simplify the many image processing steps required to realize successful analysis of measured image and calibration data and to achieve reconstruction. Observer studies were performed using both projection data and reconstructed images. These observer studies sought to explore signal detection and activity estimation for various pinhole apertures. Special attention was paid to object variability, including the development and statistical analysis of a phantom capable of generating multiple realizations of a random, textured background. The results of these studies indicate potential for multiple-pinhole, multiplexed apertures but

  20. Tools for Automated Quality Assurance of Multibeam Bathymetry Data for the Global Multi-Resolution Topography (GMRT) Synthesis

    NASA Astrophysics Data System (ADS)

    O'Hara, S. H.; Ferrini, V.; Coplan, J.; Morton, J. J.

    2010-12-01

    The preservation and sharing of oceanographic data collected aboard diverse research cruises throughout the world’s ocean enables the creation of global compilations and syntheses. With an increase in the availability of data comes a need for developing tools and protocols that can be used to rapidly reduce data to produce high quality data products. Quality evaluation and cleaning of bathymetric data are the key components of multibeam data assembly necessary to produce maps and grids of seafloor topography. The intended use for a particular data product largely determines the level of processing, such that no single approach can fully exploit the richness of a particular data set. The Global Multi-Resolution Topography (GMRT) Synthesis is a global compilation of seafloor topography to >100 m resolution that makes use of sonar data in the public domain. A new procedure for routinely handling large volumes of swath bathymetry data for inclusion in version 2.0 of the GMRT Synthesis was developed to ensure overall data quality and rapidly identify and address correctable problems in the data. Data quality assessment (QA) includes the use of automated scripts, manual inspection of data and processing of the files to address problems. The process was designed around the MBSystem (http://www.ldeo.columbia.edu/res/pi/MB-System/) software suite, leverages existing tools to open each multibeam file and extract relevant information, and uses QA criteria based specifically on the needs of the GMRT Synthesis. Parameters that are assessed relate to system settings, navigation, depth values, and sound velocity. The QA process generates a file set summary, and a detailed file-based listing of problems, both of which are intended to inform subsequent manual inspection of data. A custom version of GeoMapApp (www.geomapapp.org) is the primary interface used for inspecting and interacting with the data, providing a rapid means for identifying problems within the context of the

  1. Change of spatial information under rescaling: A case study using multi-resolution image series

    NASA Astrophysics Data System (ADS)

    Chen, Weirong; Henebry, Geoffrey M.

    Spatial structure in imagery depends on a complicated interaction between the observational regime and the types and arrangements of entities within the scene that the image portrays. Although block averaging of pixels has commonly been used to simulate coarser-resolution imagery, relatively little attention has been focused on the effects of simple rescaling on spatial structure, their explanation, and possible solutions. Yet, if there are significant differences in spatial variance between rescaled and observed images, the reliability of retrieved biogeophysical quantities may be affected. To investigate these issues, a nested series of high-spatial-resolution digital imagery was collected at a research site in eastern Nebraska in 2001. An airborne Kodak DCS420IR camera acquired imagery at three altitudes, yielding nominal spatial resolutions ranging from 0.187 m to 1 m. The red and near-infrared (NIR) bands of the co-registered image series were normalized using pseudo-invariant features, and the normalized difference vegetation index (NDVI) was calculated. Plots of grain sorghum planted in orthogonal crop-row orientations were extracted from the image series. The finest-resolution data were then rescaled by averaging blocks of pixels to produce a rescaled image series that closely matched the spatial resolution of the observed image series. Spatial structures of the observed and rescaled image series were characterized using semivariogram analysis. Results for NDVI and its component bands show, as expected, that decreasing spatial resolution leads to decreasing spatial variability and increasing spatial dependence. However, compared to the observed data, the rescaled images contain more persistent spatial structure that exhibits limited variation in both spatial dependence and spatial heterogeneity. Rescaling via simple block averaging fails to consider the effect of scene-object shape and extent on spatial information. As the features
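
    The rescaling experiment itself is compact to sketch: block average the fine NDVI field, then compare one-direction semivariograms. The block factor and lag range are illustrative.

      import numpy as np

      def block_average(ndvi, k):
          """Aggregate a 2-D field by k x k block means (edges trimmed)."""
          h, w = (ndvi.shape[0] // k) * k, (ndvi.shape[1] // k) * k
          return ndvi[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

      def semivariogram(field, max_lag=30):
          """Row-direction semivariance: 0.5 * mean squared increment at lag h."""
          return np.array([0.5 * np.mean((field[:, h:] - field[:, :-h]) ** 2)
                           for h in range(1, max_lag + 1)])

      # coarse = block_average(ndvi_fine, k=5)
      # g_fine, g_coarse = semivariogram(ndvi_fine), semivariogram(coarse)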

  2. Confusion-limited galaxy fields. II - Classical analyses

    NASA Technical Reports Server (NTRS)

    Chokshi, Arati; Wright, Edward L.

    1989-01-01

    Chokshi and Wright presented a detailed model for simulating the angular distribution of galaxy images in fields that extended to very high redshifts. Standard tools are used to analyze these simulated galaxy fields for the Omega_0 = 0 and Omega_0 = 1 cases in order to test the discriminatory power of these tools. Classical number-magnitude diagrams and surface brightness-color-color diagrams are employed to study crowded galaxy fields. An attempt is made to separate the effects due to stellar evolution in galaxies from those due to the space-time geometry. The results show that this discrimination is maximized at near-infrared wavelengths where the stellar photospheres are still visible but stellar evolution effects are less severe than those observed at optical wavelengths. Rapid evolution of the stars on the asymptotic giant branch is easily recognized in the simulated data for both cosmologies and serves to discriminate between the two extreme values of Omega_0. Measurements of total magnitudes of individual galaxies are not essential for studying light distribution in galaxies as a function of redshift. Calculations for the extragalactic background radiation are carried out using the simulated data, and compared to integrals over the evolutionary models used.

  3. On limit and limit setting.

    PubMed

    Gorney, J E

    1994-01-01

    This article investigates the role of limit and limit setting within the psychoanalytic situation. Limit is understood to be a boundary between self and others, established as an interactional dimension of experience. Disorders of limit are here understood within the context of Winnicott's conception of the "anti-social tendency." Limit setting is proposed as a necessary and authentic response to the patient's acting out via holding and empathic responsiveness, viewed here as a form of boundary delineation. It is proposed that the patient attempts to repair his or her boundary problem through a seeking of secure limits within the analyst. The setting of secure and appropriate limits must arise from a working through of the analyst's own countertransference response to the patient. It is critical that this response be evoked by, and arise from, the immediate therapeutic interaction so that the patient can experience limit setting as simultaneously personal and authentic. PMID:7972580

  4. A Scale-Adaptive Approach for Spatially-Varying Urban Morphology Characterization in Boundary Layer Parametrization Using Multi-Resolution Analysis

    NASA Astrophysics Data System (ADS)

    Mouzourides, P.; Kyprianou, A.; Neophytou, M. K.-A.

    2013-12-01

    Urban morphology characterization is crucial for the parametrization of boundary-layer development over urban areas. One complexity in such a characterization is the three-dimensional variation of the urban canopies and textures, which are customarily reduced to and represented by one-dimensionally varying parametrizations such as the aerodynamic roughness length and zero-plane displacement. The scope of the paper is to provide novel means for a scale-adaptive, spatially-varying parametrization of the boundary layer by addressing this 3-D variation. Specifically, the 3-D variation of urban geometries often poses questions in the multi-scale modelling of air pollution dispersion and other climate or weather-related modelling applications that have not been addressed yet, such as: (a) how we represent urban attributes (parameters) appropriately for the multi-scale nature and multi-resolution basis of weather numerical models, (b) how we quantify the uniqueness of an urban database in the context of modelling urban effects in large-scale weather numerical models, and (c) how we derive the impact and influence of a particular building in pre-specified sub-domain areas of the urban database. We illustrate how multi-resolution analysis (MRA) addresses and answers the afore-mentioned questions by taking as an example the Central Business District of Oklahoma City. The selection of MRA is motivated by its capacity for multi-scale sampling; in the MRA the "urban" signal depicting a city is decomposed into an approximation, a representation at a higher scale, and a detail, the part removed at lower scales to yield the approximation. Different levels of approximation were deduced for the building height and planar packing density. A spatially-varying characterization with a scale-adaptive capacity is obtained for the boundary-layer parameters (aerodynamic roughness length and zero-plane displacement) using the MRA-deduced results for the building height and the planar packing
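
    As a toy illustration of the MRA decomposition described above (our sketch, not the authors' code), a gridded building-height field can be split into a coarse approximation plus the details removed at finer scales using PyWavelets:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    heights = np.random.rand(128, 128) * 60.0    # hypothetical height grid (m)
    coeffs = pywt.wavedec2(heights, wavelet="haar", level=3)
    approx = coeffs[0]      # representation at a higher (coarser) scale
    details = coeffs[1:]    # (horizontal, vertical, diagonal) details per level
    print(approx.shape)     # (16, 16): the scale-adapted approximation
    ```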

  5. Development of RESTful services and map-based user interface tools for access to the Global Multi-Resolution Topography (GMRT) Synthesis

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Barg, B.

    2015-12-01

    The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of quality controlled multibeam sonar data, collected by scientists and institutions worldwide, that is merged with gridded terrestrial and marine elevation data. The multi-resolution elevation components of GMRT are delivered to the user through a variety of interfaces as both images and grids. The GMRT provides quantitative access to gridded data and images to the full native resolution of the sonar, as well as attribution information and access to source data files. To construct the GMRT, multibeam sonar data are evaluated, cleaned and gridded by the MGDS Team and are then merged with gridded global and regional elevation data that are available at a variety of scales from 1 km resolution to sub-meter resolution. As of June 2015, GMRT included processed swath data from nearly 850 research cruises with over 2.7 million ship-track miles of coverage. Several new services were developed over the past year to improve access to the GMRT Synthesis. In addition to our long-standing Web Map Services, we now offer RESTful services to provide programmatic access to gridded data in standard formats including ArcASCII, GeoTIFF, COARDS/CF-compliant NetCDF, and GMT NetCDF, as well as access to custom images of the GMRT in JPEG format. An attribution metadata XML service was also developed to return all relevant information about component data in an area, including cruise names, multibeam file names, and gridded data components. These new services are compliant with the EarthCube GeoWS Building Blocks specifications. Supplemental services include the release of data processing reports for each cruise included in the GMRT and data querying services that return elevation values at a point and great circle arc profiles using the highest available resolution data. Our new and improved map-based web application, GMRT MapTool, provides user access to the GMRT
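
    A sketch of what programmatic access to such a gridded-data service might look like; the endpoint and parameter names below are placeholders rather than the documented GMRT API, so consult the service documentation for the real interface:

    ```python
    import requests

    BASE = "https://www.gmrt.org/services/GridServer"   # assumed endpoint name
    params = {                                          # assumed parameter names
        "west": -74.5, "east": -73.5, "south": 39.5, "north": 40.5,
        "format": "geotiff",    # cf. the ArcASCII, GeoTIFF, NetCDF options above
    }
    resp = requests.get(BASE, params=params, timeout=60)
    resp.raise_for_status()
    with open("gmrt_subset.tif", "wb") as f:
        f.write(resp.content)
    ```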

  6. DATA AND ANALYSES

    EPA Science Inventory

    In order to promote transparency and clarity of the analyses performed in support of EPA's Supplemental Guidance for Assessing Susceptibility from Early-Life Exposure to Carcinogens, the data and the analyses are now available on this web site. The data is presented in two diffe...

  7. SNS shielding analyses overview

    SciTech Connect

    Popova, Irina; Gallmeier, Franz; Iverson, Erik B; Lu, Wei; Remec, Igor

    2015-01-01

    This paper gives an overview of on-going shielding analyses for the Spallation Neutron Source. Presently, most of the shielding work is concentrated on the beam lines and instrument enclosures, to prepare for commissioning, safe operation, and an adequate radiation background in the future. There is also on-going work for the accelerator facility. This includes radiation-protection analyses for radiation monitor placement, designing shielding for additional facilities to test accelerator structures, redesigning some parts of the facility, and designing component-testing facilities for the main accelerator structure. Neutronics analyses are required as well to support spent-structure management, including waste characterisation analyses, choice of a proper transport/storage package, and shielding enhancement for the package if required.

  8. NOAA's National Snow Analyses

    NASA Astrophysics Data System (ADS)

    Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.

    2005-12-01

    NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based, snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km2 spatial resolution to a 1 km2 resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km2 spatial resolution and at a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary and available to produce a "best estimate" of real-time snow cover conditions at 1 km2 spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily, operational products that characterize real-time snowpack conditions including: snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. The products are generated and distributed in a variety of formats including: interactive maps, time-series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products
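
    The generic form of a Newtonian nudging update, shown here as an illustrative sketch (the specific NOHRSC formulation is not given in the abstract), relaxes a modeled state x toward an observation y:

    ```latex
    % F(x): model physics;  G: dimensionless nudging weight;
    % tau: relaxation timescale;  y - x: observation-minus-model innovation
    \frac{\partial x}{\partial t} = F(x) + \frac{G\,(y - x)}{\tau}
    ```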

  9. On Limits

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerard J.

    2008-01-01

    In the last 3 decades or so, the size of systems we have been able to verify formally with automated tools has increased dramatically. At each point in this development, we encountered a different set of limits -- many of which we were eventually able to overcome. Today, we may have reached some limits that may be much harder to conquer. The problem I will discuss is the following: given a hypothetical machine with infinite memory that is seamlessly shared among infinitely many CPUs (or CPU cores), what is the largest problem size that we could solve?

  10. Spacelab Charcoal Analyses

    NASA Technical Reports Server (NTRS)

    Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.

    1995-01-01

    This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.

  11. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…
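
    A small self-contained example (ours, not the article's) of a continuous wavelet transform applied to a nonstationary signal, using PyWavelets:

    ```python
    import numpy as np
    import pywt

    t = np.linspace(0.0, 1.0, 1024)
    signal = np.sin(2.0 * np.pi * (5.0 + 45.0 * t) * t)   # chirp: ~5 Hz -> ~50 Hz
    scales = np.arange(1, 128)
    coeffs, freqs = pywt.cwt(signal, scales, "morl",
                             sampling_period=t[1] - t[0])
    print(coeffs.shape)   # (127, 1024): signal strength at each scale and time
    ```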

  12. Apollo 14 microbial analyses

    NASA Technical Reports Server (NTRS)

    Taylor, G. R.

    1972-01-01

    Extensive microbiological analyses that were performed on the Apollo 14 prime and backup crewmembers and ancillary personnel are discussed. The crewmembers were subjected to four separate and quite different environments during the 137-day monitoring period. The relation between each of these environments and observed changes in the microflora of each astronaut are presented.

  13. 3-D Cavern Enlargement Analyses

    SciTech Connect

    EHGARTNER, BRIAN L.; SOBOLIK, STEVEN R.

    2002-03-01

    Three-dimensional finite element analyses simulate the mechanical response of enlarging existing caverns at the Strategic Petroleum Reserve (SPR). The caverns are located in Gulf Coast salt domes and are enlarged by leaching during oil drawdowns as fresh water is injected to displace the crude oil from the caverns. The current criteria adopted by the SPR limit cavern usage to 5 drawdowns (leaches). As a base case, 5 leaches were modeled over a 25 year period to roughly double the volume of a 19 cavern field. Thirteen additional leaches were then simulated until caverns approached coalescence. The cavern field approximated the geometries and geologic properties found at the West Hackberry site. This enabled comparison of data collected over nearly 20 years with analysis predictions. The analyses closely predicted the measured surface subsidence and cavern closure rates as inferred from historic well head pressures. This provided the necessary assurance that the model displacements, strains, and stresses are accurate. However, the cavern field has not yet experienced the large scale drawdowns being simulated. Should they occur in the future, code predictions should be validated with actual field behavior at that time. The simulations were performed using JAS3D, a three dimensional finite element analysis code for nonlinear quasi-static solids. The results examine the impacts of leaching and cavern workovers, where internal cavern pressures are reduced, on surface subsidence, well integrity, and cavern stability. The results suggest that the current limit of 5 oil drawdowns may be extended, with some mitigative action required on the wells and, later on, for surface structures affected by subsidence strains. The predicted stress state in the salt shows damage starting to occur after 15 drawdowns, with significant failure occurring at the 16th drawdown, well beyond the current limit of 5 drawdowns.

  14. Information Omitted From Analyses.

    PubMed

    2015-08-01

    In the Original Article titled “Higher-Order Genetic and Environmental Structure of Prevalent Forms of Child and Adolescent Psychopathology” published in the February 2011 issue of JAMA Psychiatry (then Archives of General Psychiatry) (2011;68[2]:181-189), there were 2 errors. Although the article stated that the dimensions of psychopathology were measured using parent informants for inattention, hyperactivity-impulsivity, and oppositional defiant disorder, and a combination of parent and youth informants for conduct disorder, major depression, generalized anxiety disorder, separation anxiety disorder, social phobia, specific phobia, agoraphobia, and obsessive-compulsive disorder, all dimensional scores used in the reported analyses were actually based on parent reports of symptoms; youth reports were not used. In addition, whereas the article stated that each symptom dimension was residualized on age, sex, age-squared, and age by sex, the dimensions actually were only residualized on age, sex, and age-squared. All analyses were repeated using parent informants for inattention, hyperactivity-impulsivity, and oppositional defiant disorder, and a combination of parent and youth informants for conduct disorder, major depression, generalized anxiety disorder, separation anxiety disorder, social phobia, specific phobia, agoraphobia, and obsessive-compulsive disorder; these dimensional scores were residualized on age, age-squared, sex, sex by age, and sex by age-squared. The results of the new analyses were qualitatively the same as those reported in the article, with no substantial changes in conclusions. The only notable small difference was that major depression and generalized anxiety disorder dimensions had small but significant loadings on the internalizing factor in addition to their substantial loadings on the general factor in the analyses of both genetic and non-shared covariances in the selected models in the new analyses. Corrections were made to the

  15. Family Limitation

    PubMed Central

    Smith, Robert

    1966-01-01

    Dr Robert Smith surveys the history of birth control and sounds a warning for the future of mankind, if the population explosion is allowed to continue unchecked. He stresses the importance of the role of the general practitioner in the limitation of births. Sir Theodore Fox describes the work of the Family Planning Association and stresses that, increasingly, this is a specialist service covering all aspects of fertility. He also feels that the general practitioner has a role in family planning. PMID:5954261

  16. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    Model calculations and analyses have been carried out to compare with several sets of data (dose, induced radioactivity in various experiment samples and spacecraft components, fission foil measurements, and LET spectra) from passive radiation dosimetry on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The calculations and data comparisons are used to estimate the accuracy of current models and methods for predicting the ionizing radiation environment in low earth orbit. The emphasis is on checking the accuracy of trapped proton flux and anisotropy models.

  17. Broadband rotor noise analyses

    NASA Technical Reports Server (NTRS)

    George, A. R.; Chou, S. T.

    1984-01-01

    The various mechanisms which generate broadband noise on a range of rotors studied include load fluctuations due to inflow turbulence, due to turbulent boundary layers passing the blades' trailing edges, and due to tip vortex formation. Existing analyses are used and extensions to them are developed to make more accurate predictions of rotor noise spectra and to determine which mechanisms are important in which circumstances. Calculations based on the various prediction methods were compared with existing experiments. The present analyses are adequate to predict the spectra from a wide variety of experiments on fans, full scale and model scale helicopter rotors, wind turbines, and propellers to within about 5 to 10 dB. Better knowledge of the inflow turbulence improves the accuracy of the predictions. Results indicate that inflow turbulence noise depends strongly on ambient conditions and dominates at low frequencies. Trailing edge noise and tip vortex noise are important at higher frequencies if inflow turbulence is weak. Boundary layer trailing edge noise, important for large rotors, increases slowly with angle of attack but not as rapidly as tip vortex noise.

  18. Broadband rotor noise analyses

    NASA Astrophysics Data System (ADS)

    George, A. R.; Chou, S. T.

    1984-04-01

    The various mechanisms which generate broadband noise on a range of rotors studied include load fluctuations due to inflow turbulence, due to turbulent boundary layers passing the blades' trailing edges, and due to tip vortex formation. Existing analyses are used and extensions to them are developed to make more accurate predictions of rotor noise spectra and to determine which mechanisms are important in which circumstances. Calculations based on the various prediction methods were compared with existing experiments. The present analyses are adequate to predict the spectra from a wide variety of experiments on fans, full scale and model scale helicopter rotors, wind turbines, and propellers to within about 5 to 10 dB. Better knowledge of the inflow turbulence improves the accuracy of the predictions. Results indicate that inflow turbulence noise depends strongly on ambient conditions and dominates at low frequencies. Trailing edge noise and tip vortex noise are important at higher frequencies if inflow turbulence is weak. Boundary layer trailing edge noise, important for large rotors, increases slowly with angle of attack but not as rapidly as tip vortex noise.

  19. Amino acid analyses of Apollo 14 samples.

    NASA Technical Reports Server (NTRS)

    Gehrke, C. W.; Zumwalt, R. W.; Kuo, K.; Aue, W. A.; Stalling, D. L.; Kvenvolden, K. A.; Ponnamperuma, C.

    1972-01-01

    Detection limits were between 300 pg and 1 ng for different amino acids in an analysis by gas-liquid chromatography of water extracts from Apollo 14 lunar fines, in which amino acids were converted to their N-trifluoroacetyl-n-butyl esters. Initial analyses of water and HCl extracts of samples 14240 and 14298 showed no amino acids above background levels.

  20. Force Limited Vibration Testing

    NASA Technical Reports Server (NTRS)

    Scharton, Terry; Chang, Kurng Y.

    2005-01-01

    This slide presentation reviews the concept and applications of Force Limited Vibration Testing. The goal of vibration testing of aerospace hardware is to identify problems that would result in flight failures. The commonly used aerospace vibration tests use artificially high shaker forces and responses at the resonance frequencies of the test item. It has become common to limit the acceleration responses in the test to those predicted for flight. This requires an analysis of the acceleration response and requires placing accelerometers on the test item. With the advent of piezoelectric gages it has become possible to improve vibration testing. The basic equations are reviewed. Force limits are analogous and complementary to the acceleration specifications used in conventional vibration testing. Just as the acceleration specification is the frequency spectrum envelope of the in-flight acceleration at the interface between the test item and the flight mounting structure, the force limit is the envelope of the in-flight force at the interface. In force limited vibration tests, both the acceleration and force specifications are needed, and the force specification is generally based on and proportional to the acceleration specification. Therefore, force limiting does not compensate for errors in the development of the acceleration specification, e.g., too much conservatism or the lack thereof. These errors will carry over into the force specification. Since in-flight vibratory force data are scarce, force limits are often derived from coupled system analyses and impedance information obtained from measurements or finite element models (FEM). Fortunately, data on the interface forces between systems and components are now available from system acoustic and vibration tests of development test models and from a few flight experiments. Semi-empirical methods of predicting force limits are currently being developed on the basis of the limited flight and system test
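
    A common statement of the semi-empirical force limit, sketched here from the general force-limited vibration literature rather than from this particular presentation:

    ```latex
    % S_AA: acceleration specification; S_FF: force limit; M_0: load mass;
    % C: dimensionless semi-empirical constant; f_0: first resonance frequency
    S_{FF}(f) = C^{2} M_{0}^{2}\, S_{AA}(f), \quad f \le f_{0};
    \qquad
    S_{FF}(f) = C^{2} M_{0}^{2}\, S_{AA}(f) \left(\frac{f_{0}}{f}\right)^{\!n}, \quad f > f_{0}
    ```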

  1. Top-down and bottom-up inventory approach for above ground forest biomass and carbon monitoring in REDD framework using multi-resolution satellite data.

    PubMed

    Sharma, Laxmi Kant; Nathawat, Mahendra Singh; Sinha, Suman

    2013-10-01

    This study deals with the future scope of REDD (Reduced Emissions from Deforestation and forest Degradation) and REDD+ regimes for measuring and monitoring the current state and dynamics of carbon stocks over time with an integrated geospatial and field-based biomass inventory approach. A multi-temporal, multi-resolution geospatial approach incorporating satellite sensors from moderate to high resolution with a stratified random sampling design is used. The inventory process involves a continuous forest inventory to facilitate the quantification of possible CO2 reductions over time using statistical up-scaling procedures on various levels. The combined approach was applied on a regional scale taking Himachal Pradesh (India) as a case study, with a hierarchy of forest strata representing the forest structure found in India. Biophysical modeling revealed a power regression model as the best fit (R^2 = 0.82) for the relationship between the Normalized Difference Vegetation Index and biomass, which was then used to calculate multi-temporal above-ground biomass and carbon sequestration. Net carbon sequestered by the forests totaled 11.52 million tons (Mt) over the period of 20 years, at a rate of 0.58 Mt per year since 1990, while the CO2 equivalent removed from the environment by the forests under study over those 20 years comes to 42.26 Mt in the study area. PMID:23604728
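
    An illustrative fit of the reported power-law form, biomass = a * NDVI^b, via least squares in log-log space; the data arrays below are synthetic stand-ins, not the study's field inventory:

    ```python
    import numpy as np

    ndvi = np.array([0.35, 0.42, 0.51, 0.58, 0.66, 0.72])        # made up
    biomass = np.array([40.0, 62.0, 95.0, 130.0, 185.0, 240.0])  # t/ha, made up

    b, log_a = np.polyfit(np.log(ndvi), np.log(biomass), 1)      # slope, intercept
    a = np.exp(log_a)
    pred = a * ndvi ** b
    r2 = 1.0 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
    print(f"biomass ~ {a:.1f} * NDVI^{b:.2f}, R^2 = {r2:.2f}")
    ```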

  2. Multiresolution wavelets and natural time analysis before the January-February 2014 Cephalonia (Mw6.1 & 6.0) sequence of strong earthquake events

    NASA Astrophysics Data System (ADS)

    Vallianatos, Filippos; Michas, Georgios; Hloupis, George

    On January 26 and February 3, 2014, Cephalonia Island (Ionian Sea, Greece) was struck by two strong, shallow earthquakes (moment magnitudes Mw6.1 and Mw6.0, respectively) that ruptured two sub-parallel strike-slip faults with right-lateral kinematics. The scope of the present work is to investigate the complex correlations of the earthquake activity that preceded the Mw6.1 event in the broader area of Cephalonia Island and to identify possible indications of critical stages in the evolution of the earthquake generation process. We apply the recently introduced methods of Multiresolution Wavelet Analysis (MRWA) and Natural Time (NT) analysis and for the first time combine their results in a joint approach that may lead to universal principles in describing the evolution of earthquake activity as it approaches a major event. In particular, the initial application of MRWA to the inter-event time series indicates a time marker 12 days prior to the major event. By using this time as the initiation point of the NT analysis, the critical stage of seismicity, where the κ1 parameter reaches the critical value of κ1 = 0.070, is reached a few days before the occurrence of the Mw6.1 earthquake.
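
    For reference, the κ1 parameter follows the standard natural time definition (notation from the natural time literature, not restated in the abstract): events k = 1..N occur at natural times χ_k = k/N and are weighted by their normalized energies p_k, with criticality signalled as κ1 approaches 0.070:

    ```latex
    \chi_k = \frac{k}{N}, \qquad
    p_k = \frac{E_k}{\sum_{j=1}^{N} E_j}, \qquad
    \kappa_1 = \sum_{k=1}^{N} p_k \chi_k^{2} - \Bigl(\sum_{k=1}^{N} p_k \chi_k\Bigr)^{2}
    ```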

  3. A system for generating multi-resolution Digital Terrain Models of Mars based on the ESA Mars Express and NASA Mars Reconnaissance Orbiter data

    NASA Astrophysics Data System (ADS)

    Yershov, V.

    2015-10-01

    We describe a processing system for generating multiresolution digital terrain models (DTMs) of Mars within the iMars project of the European Seventh Framework Programme. This system is based on a non-rigorous sensor model for processing high-resolution stereoscopic images obtained from the High Resolution Imaging Science Experiment (HiRISE) camera and Context Camera (CTX) onboard the NASA Mars Reconnaissance Orbiter (MRO) spacecraft. The system includes geodetic control based on a polynomial fit of the input CTX images with respect to a reference image obtained from the ESA Mars Express High Resolution Stereo Camera (HRSC). The input image processing is based on the Integrated Software for Imagers and Spectrometers (ISIS) and the NASA Ames Stereo Pipeline. The accuracy of the produced CTX DTM is improved by aligning it with the reference HRSC DTM and the altimetry data from the Mars Orbiter Laser Altimeter (MOLA) onboard the Mars Global Surveyor (MGS) spacecraft. The higher-resolution HiRISE imagery data are processed in the same way, except that the reference images and DTMs are taken from the CTX results obtained during the first processing stage. A quality assessment of image photogrammetric registration is demonstrated using data generated by the NASA Ames Stereo Pipeline and the BAE Socet system. Such DTMs will be produced for all available stereo pairs and displayed as WMS layers within the iMars webGIS.

  4. EEG analyses with SOBI.

    SciTech Connect

    Glickman, Matthew R.; Tang, Akaysha

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.

  5. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    This report covers work performed by Science Applications International Corporation (SAIC) under contract NAS8-39386 from the NASA Marshall Space Flight Center entitled LDEF Satellite Radiation Analyses. The basic objective of the study was to evaluate the accuracy of present models and computational methods for defining the ionizing radiation environment for spacecraft in Low Earth Orbit (LEO) by making comparisons with radiation measurements made on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The emphasis of the work here is on predictions and comparisons with LDEF measurements of induced radioactivity and Linear Energy Transfer (LET) measurements. These model/data comparisons have been used to evaluate the accuracy of current models for predicting the flux and directionality of trapped protons for LEO missions.

  6. Network Class Superposition Analyses

    PubMed Central

    Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for this matrix derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying it to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with the ensemble matrix. We show how to generate Derrida plots based on it. We show that Shannon entropy based on the ensemble matrix outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on it. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141
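
    A toy sketch (ours, under the simplifying assumption of uniform weighting across class members, which the paper does not necessarily use) of the transition-by-transition superposition idea:

    ```python
    import numpy as np

    def superpose(transition_matrices):
        """Average row-stochastic matrices into one ensemble matrix."""
        T = np.mean(transition_matrices, axis=0)
        return T / T.sum(axis=1, keepdims=True)   # keep rows stochastic

    # Two 4-state deterministic dynamics encoded as 0/1 transition matrices:
    T1 = np.eye(4)[[1, 2, 3, 0]]    # cycle 0 -> 1 -> 2 -> 3 -> 0
    T2 = np.eye(4)[[1, 0, 3, 2]]    # two 2-cycles
    ensemble = superpose([T1, T2])  # superposed dynamics of the 2-member class
    print(ensemble)
    ```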

  7. US ITER limiter module design

    SciTech Connect

    Mattas, R.F.; Billone, M.; Hassanein, A.

    1996-08-01

    The recent U.S. effort on the ITER (International Thermonuclear Experimental Reactor) shield has been focused on the limiter module design. This is a multi-disciplinary effort that covers design layout, fabrication, thermal hydraulics, materials evaluation, thermo-mechanical response, and predicted response during off-normal events. The results of design analyses are presented. Conclusions and recommendations are also presented concerning the capability of the limiter modules to meet performance goals and to be fabricated within design specifications using existing technology.

  8. AFR-100 safety analyses

    SciTech Connect

    Sumner, T.; Moisseytsev, A.; Wei, T. Y. C.

    2012-07-01

    The Advanced Fast Reactor-100 (AFR-100) is Argonne National Laboratory's 250 MWth metal-fueled modular sodium-cooled pool-type fast reactor concept. [1] A series of accident sequences was examined, focusing on the AFR-100's ability to provide protection against reactor damage during low-probability accident sequences resulting from multiple equipment failures. Protected and Unprotected Loss of Flow (PLOF and ULOF) and Unprotected Transient Over-Power (UTOP) accidents were simulated using the SAS4A/SASSYS-1 safety analysis code. The large heat capacity of the sodium in the pool-type reactor allows the AFR-100 to absorb large amounts of energy during a PLOF with relatively small temperature increases throughout the system. During a ULOF with a 25-second flow halving time, coolant and cladding temperatures peak around 720 deg. C within the first minute before reactivity feedback effects decrease power to match the flow. Core radial expansion and fuel Doppler provide the necessary feedback during the UTOP to bring the system back to critical before system temperatures exceed allowable limits. Simulation results indicate that adequate ULOF safety margins exist for the AFR-100 design with flow halving times of twenty-five seconds. Significant safety margins are maintained for PLOF accidents, as well as for UTOP accidents if a rod stop is used. (authors)

  9. Analyses and characterization of double shell tank

    SciTech Connect

    Not Available

    1994-10-04

    Evaporator candidate feed from tank 241-AP-108 (108-AP) was sampled under prescribed protocol. Physical, inorganic, and radiochemical analyses were performed on tank 108-AP. Characterization of evaporator feed tank waste is needed primarily for an evaluation of its suitability to be safely processed through the evaporator. Such analyses should provide sufficient information regarding the waste composition to confidently determine whether constituent concentrations are not only within safe operating limits but also within functional limits for operation of the evaporator. Characterization of tank constituent concentrations should provide data which enable a prediction of where the types and amounts of environmentally hazardous waste are likely to occur in the evaporator product streams.

  10. An electrochemical calibration unit for hydrogen analysers.

    PubMed

    Merzlikin, Sergiy V; Mingers, Andrea M; Kurz, Daniel; Hassel, Achim Walter

    2014-07-01

    Determination of hydrogen in solids such as high strength steels or other metals in the ppb or ppm range requires hot extraction or melt extraction. Calibration of commercially available hydrogen analysers is performed either with certified reference materials (CRMs), which often have limited availability and reliability, or by gas dosing, for which the determined value depends significantly on atmospheric pressure and the construction of the gas dosing valve. The sharp and sudden appearance of very high gas concentrations from gas dosing is very different from real effusion transients and is therefore another source of errors. To overcome these limitations, an electrochemical calibration method for hydrogen analysers was developed and employed in this work. Exactly quantifiable, faradaic amounts of hydrogen can be produced in an electrochemical reaction and detected by the hydrogen analyser. The amount of hydrogen is known exactly from the charge transferred in the reaction, following Faraday's law, and the current-time program determines the apparent hydrogen effusion transient. Arbitrary shaping of the effusion transient thus becomes possible, to fully mimic real samples. Evolution time and current were varied to determine a quantitative relationship. The device was used to produce either diprotium (H2) or dideuterium (D2) from the corresponding electrolytes. The functional principle is electrochemical in nature, so automation is straightforward and can be implemented at an affordable price of 1-5% of the hydrogen analyser's price. PMID:24840442
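
    A worked example of the calibration principle via Faraday's law (two electrons per H2 molecule); the current and evolution time are illustrative values, not the paper's settings:

    ```python
    F = 96485.332          # Faraday constant, C/mol
    current = 1.0e-3       # electrolysis current, A (assumed)
    seconds = 60.0         # evolution time, s (assumed)

    charge = current * seconds          # Q = I * t  (coulombs)
    mol_h2 = charge / (2.0 * F)         # n = Q / (z F), z = 2 for H2
    mass_ng = mol_h2 * 2.016 * 1e9      # molar mass 2.016 g/mol -> nanograms
    print(f"{mass_ng:.0f} ng H2 from {charge:.3f} C")   # ~627 ng here
    ```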

  11. Time-series analysis of multi-resolution optical imagery for quantifying forest cover loss in Sumatra and Kalimantan, Indonesia

    NASA Astrophysics Data System (ADS)

    Broich, Mark; Hansen, Matthew C.; Potapov, Peter; Adusei, Bernard; Lindquist, Erik; Stehman, Stephen V.

    2011-04-01

    Indonesia than change maps based on image composites. Unlike other time-series analyses employing observations with a consistent periodicity, our study area was characterized by highly unequal observation counts and frequencies due to persistent cloud cover, scan line corrector off (SLC-off) gaps, and the absence of a complete archive. Our method accounts for this variation by generating a generic variable space. We evaluated our results against an independent probability sample-based estimate of gross forest cover loss and expert mapped gross forest cover loss at 64 sample sites. The mapped gross forest cover loss for Sumatra and Kalimantan was 2.86% of the land area, or 2.86 Mha from 2000 to 2005, with the highest concentration having occurred in Riau and Kalimantan Tengah provinces.

  12. Recent Trends in Conducting School-Based Experimental Functional Analyses

    ERIC Educational Resources Information Center

    Carter, Stacy L.

    2009-01-01

    Demonstrations of school-based experimental functional analyses have received limited attention within the literature. School settings present unique practical and ethical concerns related to the implementation of experimental analyses which were originally developed within clinical settings. Recent examples have made definite contributions toward…

  13. Optimizing multi-resolution segmentation scale using empirical methods: Exploring the sensitivity of the supervised discrepancy measure Euclidean distance 2 (ED2)

    NASA Astrophysics Data System (ADS)

    Witharana, Chandi; Civco, Daniel L.

    2014-01-01

    Multiresolution segmentation (MRS) has proven to be one of the most successful image segmentation algorithms in the geographic object-based image analysis (GEOBIA) framework. This algorithm is relatively complex and user-dependent; scale, shape, and compactness are the main parameters available to users for controlling the algorithm. Plurality of segmentation results is common because each parameter may take a range of values within its parameter space, and values may be combined differently across parameters. Finding optimal parameter values through a trial-and-error process is commonly practiced at the expense of time and labor; thus, several alternative supervised and unsupervised methods for automatic parameter setting have been proposed and tested. In the case of supervised empirical assessments, discrepancy measures are employed to compute the dissimilarity between a reference polygon and an image object candidate. Evidently, the reliability of the optimal-parameter prediction relies heavily on the sensitivity of the segmentation quality metric. The idea behind pursuing an optimal parameter setting is that a given scale setting provides image object candidates different from those of another scale setting; thus, by design, the supervised quality metric should capture this difference. In this exploratory study, we selected the Euclidean distance 2 (ED2) metric, a recently proposed supervised metric whose main design goal is to optimize the geometric discrepancy (potential segmentation error, PSE) and the arithmetic discrepancy between image objects and reference polygons (number-of-segmentation ratio, NSR) in two-dimensional Euclidean space, as a candidate for investigating the validity and efficacy of empirical discrepancy measures for finding the optimal scale parameter setting of the MRS algorithm. We chose test image scenes from four different space-borne sensors with varying spatial resolutions and scene contents and systematically
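
    The two-dimensional Euclidean combination underlying ED2, as the design goal above describes it (our notation, sketched from that description rather than quoted from the metric's original definition):

    ```latex
    % PSE: potential segmentation error (geometric discrepancy)
    % NSR: number-of-segmentation ratio (arithmetic discrepancy)
    \mathrm{ED2} = \sqrt{\mathrm{PSE}^{2} + \mathrm{NSR}^{2}}
    ```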

  14. Multiresolution analysis and classification of river discharges in France and their climate forcing over the Euro-Atlantic area using Wavelet transforms and Composite analysis

    NASA Astrophysics Data System (ADS)

    Fossa, Manuel; Nicolle, Marie; Massei, Nicolas; Fournier, Matthieu; Laignel, Benoit

    2016-04-01

    heights and meridional and zonal winds in the Euro-Atlantic area, both for the winter and summer seasons, for each station. The links are studied at different time scales of variability using multiresolution analysis. This allows assessing the large-scale pattern that partly explains each scale of variability within the discharges. A cluster analysis is performed on the obtained composite maps. A comparison is then made between this classification and the one established in the first part of this study, in order to test whether stations that have similar time scales of variability also share the same climate forcings.

  15. Scalar limitations of diffractive optical elements

    NASA Technical Reports Server (NTRS)

    Johnson, Eric G.; Hochmuth, Diane; Moharam, M. G.; Pommet, Drew

    1993-01-01

    In this paper, scalar limitations of diffractive optic components are investigated using coupled wave analyses. Results are presented for linear phase gratings and fanout devices. In addition, a parametric curve is given which correlates feature size with scalar performance.

  16. FUEL CASK IMPACT LIMITER VULNERABILITIES

    SciTech Connect

    Leduc, D; Jeffery England, J; Roy Rothermel, R

    2009-02-09

    Cylindrical fuel casks often have impact limiters surrounding just the ends of the cask shaft in a typical 'dumbbell' arrangement. The primary purpose of these impact limiters is to absorb energy to reduce loads on the cask structure during impacts associated with a severe accident. Impact limiters are also credited in many packages with protecting closure seals and maintaining lower peak temperatures during fire events. For this credit to be taken in safety analyses, the impact limiter attachment system must be shown to retain the impact limiter following Normal Conditions of Transport (NCT) and Hypothetical Accident Conditions (HAC) impacts. Large casks are often certified by analysis only because of the costs associated with testing. Therefore, some cask impact limiter attachment systems have not been tested in real impacts. A recent structural analysis of the T-3 Spent Fuel Containment Cask found problems with the design of the impact limiter attachment system. Assumptions in the original Safety Analysis for Packaging (SARP) concerning the loading in the attachment bolts were found to be inaccurate in certain drop orientations. This paper documents the lessons learned and their applicability to impact limiter attachment system designs.

  17. Uncertainty quantification approaches for advanced reactor analyses.

    SciTech Connect

    Briggs, L. L.; Nuclear Engineering Division

    2009-03-24

    The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be equally well applied to analyses for high-temperature gas-cooled reactors and liquid metal reactors, and they can be applied to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selecting an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.
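
    One standard statistical route to the 95/95 criterion mentioned above is Wilks' non-parametric formula, included here for illustration as a textbook result rather than taken from this report: for a first-order one-sided tolerance limit, the number of code runs n must satisfy

    ```latex
    1 - 0.95^{\,n} \ge 0.95
    \quad\Longrightarrow\quad
    n \ge \frac{\ln 0.05}{\ln 0.95} \approx 58.4
    \quad\Longrightarrow\quad
    n = 59 \ \text{runs}
    ```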

  18. Neuronal network analyses: premises, promises and uncertainties

    PubMed Central

    Parker, David

    2010-01-01

    Neuronal networks assemble the cellular components needed for sensory, motor and cognitive functions. Any rational intervention in the nervous system will thus require an understanding of network function. Obtaining this understanding is widely considered to be one of the major tasks facing neuroscience today. Network analyses have been performed for some years in relatively simple systems. In addition to the direct insights these systems have provided, they also illustrate some of the difficulties of understanding network function. Nevertheless, in more complex systems (including human), claims are made that the cellular bases of behaviour are, or will shortly be, understood. While the discussion is necessarily limited, this issue will examine these claims and highlight some traditional and novel aspects of network analyses and their difficulties. This introduction discusses the criteria that need to be satisfied for network understanding, and how they relate to traditional and novel approaches being applied to addressing network function. PMID:20603354

  19. Multiresolution FOPEN SAR image formation

    NASA Astrophysics Data System (ADS)

    DiPietro, Robert C.; Fante, Ronald L.; Perry, Richard P.; Soumekh, Mehrdad; Tromp, Laurens D.

    1999-08-01

    This paper presents a new technique for FOPEN SAR (foliage penetration synthetic aperture radar) image formation from ultra-wideband UHF radar data. Planar Subarray Processing (PSAP) has successfully demonstrated the capability of forming multi-resolution images for X and Ka band radar systems under MITRE IR&D and the DARPA IBC program. We have extended the PSAP algorithm to provide the capability to form strip-map, multi-resolution images for ultra-wideband UHF radar systems. PSAP processing can accommodate very large SAR integration angles and the resulting very large range migration. It can also accommodate long coherent integration times and wide swath coverage. Major PSAP algorithm features include: multiple SAR sub-arrays providing different look angles at the same image area, which can enable man-made target responses to be distinguished from other targets and clutter by their angle-dependent specular characteristics; the capability to provide a full-resolution image in these and other selected areas without the processing penalty of full resolution in areas where it is not required; and the capability to include angle-dependent motion compensation within the image formation process.

  20. Seamless multiresolution isosurfaces using wavelets

    SciTech Connect

    Udeshi, T.; Hudson, R.; Papka, M. E.

    2000-04-11

    Data sets that are being produced by today's simulations, such as the ones generated by DOE's ASCI program, are too large for real-time exploration and visualization. Therefore, new methods of visualizing these data sets need to be investigated. The authors present a method that combines isosurface representations of different resolutions into a seamless solution, virtually free of cracks and overlaps. The solution combines existing isosurface generation algorithms and wavelet theory to produce a real-time solution to multiple-resolution isosurfaces.

  1. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques

    NASA Astrophysics Data System (ADS)

    Ivanov, Anton; Muller, Jan-Peter; Tao, Yu; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis; Fanara, Lida; Waenlish, Marita; Walter, Sebastian; Steinkert, Ralf; Schreiner, Bjorn; Cantini, Federico; Wardlaw, Jessica; Sprinks, James; Giordano, Michele; Marsh, Stuart

    2016-07-01

    Understanding planetary atmosphere-surface and extra-terrestrial-surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to be able to overlay different epochs back in time to the mid 1970s, to examine time-varying changes, such as the recent discovery of mass movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters. Within the EU FP-7 iMars project, UCL have developed a fully automated multi-resolution DTM processing chain, called the Co-registration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP), which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX & HiRISE DTMs & ORIs, DLR have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed and is being applied to level-1 EDR images taken by the 4 NASA orbital cameras since 1976 using the HRSC map products (both mosaics and orbital strips) as a map-base. The project has also included Mars Radar profiles from Mars Express and Mars Reconnaissance Orbiter missions. A webGIS has been developed for displaying this time sequence of imagery and a demonstration will be shown applied to one of the map-sheets. Automated quality control techniques are applied to screen for suitable images and these are extended to detect temporal changes in features on the surface such as mass movements, streaks, spiders, impact craters, CO2 geysers and Swiss Cheese terrain. These data mining techniques are then being employed within a citizen science project within the Zooniverse family

  2. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques

    NASA Astrophysics Data System (ADS)

    Ivanov, Anton; Oberst, Jürgen; Yershov, Vladimir; Muller, Jan-Peter; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back to the mid-1970s and examine time-varying changes (such as the recent discovery of boulder movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters). Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004, the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25 m nadir images), with 87% coverage and more than 65% of it useful for stereo mapping. NASA began imaging the surface of Mars, initially from flybys in the 1960s and then from the first orbiter with image resolution less than 100 m in the late 1970s from Viking Orbiter. The most recent orbiter, NASA MRO, has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20 cm) and ≈5% from CTX (≈6 m) in stereo. Within the iMars project (http://i-Mars.eu), a fully automated large-scale processing (“Big Data”) solution is being developed to generate the best possible multi-resolution DTM of Mars. In addition, HRSC OrthoRectified Images (ORI) will be used as a georeference basis so that all higher-resolution ORIs will be co-registered to the HRSC DTM products (50-100 m grid) generated at DLR; ORIs and DTMs from CTX (6-20 m grid) and HiRISE (1-3 m grid) will be produced on a large-scale Linux cluster based at MSSL. The HRSC products will be employed to provide a geographic reference for all current, future and historical NASA products using automated co-registration based on feature points, and initial results will be shown here. In 2015, many of the NASA and ESA orbital images will be co-registered and the updated georeferencing

  3. Pawnee Nation Energy Option Analyses

    SciTech Connect

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-21

    Pawnee Nation of Oklahoma Energy Option Analyses. In 2003, the Pawnee Nation leadership identified the need for the tribe to comprehensively address its energy issues. During a strategic energy planning workshop a general framework was laid out and the Pawnee Nation Energy Task Force was created to work toward further development of the tribe’s energy vision. The overarching goals of the “first steps” project were to identify the most appropriate focus for its strategic energy initiatives going forward, and to provide information necessary to take the next steps in pursuit of the “best fit” energy options. Description of Activities Performed: The research team reviewed existing data pertaining to the availability of biomass (focusing on woody biomass, agricultural biomass/bio-energy crops, and methane capture), solar, wind and hydropower resources on the Pawnee-owned lands. Using these data, combined with assumptions about costs and revenue streams, the research team performed preliminary feasibility assessments for each resource category. The research team also reviewed available funding resources and made recommendations to Pawnee Nation highlighting those resources with the greatest potential for financially-viable development, both in the near-term and over a longer time horizon. Findings and Recommendations: Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym, and an exterior wood-fired boiler system at the tribe’s main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor

  4. NEXT Ion Thruster Performance Dispersion Analyses

    NASA Technical Reports Server (NTRS)

    Soulas, George C.; Patterson, Michael J.

    2008-01-01

    The NEXT ion thruster is a low specific mass, high performance thruster with a nominal throttling range of 0.5 to 7 kW. Numerous engineering model thrusters and one prototype model thruster have been manufactured and tested. Of significant importance to propulsion system performance are thruster-to-thruster performance dispersions. This type of information can provide a bandwidth of expected performance variations at both the thruster and the component level. Knowledge of these dispersions can be used to more conservatively predict thruster service life capability and thruster performance for mission planning, facilitate future thruster performance comparisons, and verify that power processor capabilities are compatible with the thruster design. This study compiles the test results of five engineering model thrusters and one flight-like thruster to determine unit-to-unit dispersions in thruster performance. Component-level performance dispersion analyses will include discharge chamber voltages, currents, and losses; accelerator currents, electron backstreaming limits, and perveance limits; and neutralizer keeper and coupling voltages and the spot-to-plume mode transition flow rates. Thruster-level performance dispersion analyses will include thrust efficiency.

  5. Nuclear analyses for the ITER ECRH launcher

    NASA Astrophysics Data System (ADS)

    Serikov, A.; Fischer, U.; Heidinger, R.; Spaeh, P.; Stickel, S.; Tsige-Tamirat, H.

    2008-05-01

    Computational results of the nuclear analyses for the ECRH launcher integrated into the ITER upper port are presented. The purpose of the analyses was to provide proof that the launcher design can meet the nuclear requirements specified in the ITER project. The aim was achieved on the basis of 3D neutronics radiation transport calculations using the Monte Carlo code MCNP. In the course of the analyses an adequate shielding configuration against neutron and gamma radiation was developed, while keeping the empty space necessary for mm-wave propagation in accordance with the ECRH physics guidelines. Different variants of the shielding configuration for the extended performance front steering launcher (EPL) were compared in terms of nuclear response functions in the critical positions. Neutron damage (dpa), nuclear heating, helium production rate, and neutron and gamma fluxes have been calculated under the conditions of ITER operation. It has been shown that the radiation shielding criteria are satisfied and the predicted shutdown dose rates are below the ITER nuclear design limits.
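
    Full MCNP models are far beyond a snippet, but the core Monte Carlo idea behind such shielding calculations, tracking particles through a shield by sampling exponentially distributed free paths, can be shown as a pedagogical toy. The slab geometry, mean free path and absorption probability below are invented and have nothing to do with the actual launcher model.

        # Pedagogical 1-D Monte Carlo toy: fraction of particles penetrating a
        # slab shield. Interaction data are invented; scattering direction
        # changes and energy loss are ignored.
        import math
        import random

        def transmitted_fraction(thickness_cm=50.0, mfp_cm=8.0,
                                 p_absorb=0.7, n=100_000):
            escaped = 0
            for _ in range(n):
                depth = 0.0
                while True:
                    # sample an exponentially distributed free path
                    depth += -mfp_cm * math.log(1.0 - random.random())
                    if depth >= thickness_cm:
                        escaped += 1   # particle leaves the far side of the slab
                        break
                    if random.random() < p_absorb:
                        break          # absorbed at the collision site
                    # otherwise scattered: keep tracking
            return escaped / n

        print(transmitted_fraction())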

  6. Analysing Children's Drawings: Applied Imagination

    ERIC Educational Resources Information Center

    Bland, Derek

    2012-01-01

    This article centres on a research project in which freehand drawings provided a richly creative and colourful data source of children's imagined, ideal learning environments. Issues concerning the analysis of the visual data are discussed, in particular, how imaginative content was analysed and how the analytical process was dependent on an…

  7. Feed analyses and their interpretation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Compositional analysis is central to determining the nutritional value of feedstuffs. The utility of the values and how they should be used depends on how representative the feed subsample is, the nutritional relevance of the assays, analytical variability of the analyses, and whether a feed is suit...

  8. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques: One year on with a focus on auto-DTM, auto-coregistration and citizen science.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Sidiropoulos, Panagiotis; Yershov, Vladimir; Gwinner, Klaus; van Gasselt, Stephan; Walter, Sebastian; Ivanov, Anton; Morley, Jeremy; Sprinks, James; Houghton, Robert; Bamford, Stephen; Kim, Jung-Rack

    2015-04-01

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of 10cm) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the ability to overlay different epochs back to the mid-1970s, examine time-varying changes (such as impact craters, RSLs, CO2 geysers, gullies, boulder movements and a host of ice-related phenomena). Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004 the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25m nadir images) with 98% coverage with images ≤100m and more than 70% useful for stereo mapping (e.g. atmosphere sufficiently clear). It has been demonstrated [Gwinner et al., 2010] that HRSC has the highest possible planimetric accuracy of ≤25m and is well co-registered with MOLA, which represents the global 3D reference frame. HRSC 3D and terrain-corrected image products therefore represent the best available 3D reference data for Mars. Recently [Gwinner et al., 2015] have shown the ability to generate mosaiced DTM and BRDF-corrected surface reflectance maps. NASA began imaging the surface of Mars, initially from flybys in the 1960s and then, in the late 1970s, from the Viking Orbiter, the first orbiter with images ≤100m. The most recent orbiter, NASA MRO, began imaging in November 2006 and has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈25cm) and ≈5% from CTX (≈6m) in stereo. Unfortunately, for most of these NASA images (especially MGS, MO, VO and HiRISE), the accuracy of georeferencing is often worse than that of the Mars reference data from HRSC. This reduces their value for analysing changes in time

  9. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques: an overview and a request for scientific inputs.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Gwinner, Klaus; van Gasselt, Stephan; Ivanov, Anton; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Yershov, Vladimir; Sidiropoulos, Panagiotis; Kim, Jungrack

    2014-05-01

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 7 years, especially in 3D imaging of surface shape (down to resolutions of 10cm) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the ability to overlay different epochs back to the mid-1970s, examine time-varying changes (such as the recent discovery of boulder movement [Orloff et al., 2011] or the sublimation of sub-surface ice revealed by meteoritic impact [Byrne et al., 2009]), as well as examine geophysical phenomena, such as surface roughness on different length scales. Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004 the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25m nadir images) with 87% coverage with images ≤25m and more than 65% useful for stereo mapping (e.g. atmosphere sufficiently clear). It has been demonstrated [Gwinner et al., 2010] that HRSC has the highest possible planimetric accuracy of ≤25m and is well co-registered with MOLA, which represents the global 3D reference frame. HRSC 3D and terrain-corrected image products therefore represent the best available 3D reference data for Mars. NASA began imaging the surface of Mars, initially from flybys in the 1960s and then, in the late 1970s, from the Viking Orbiter, the first orbiter with images ≤100m. The most recent orbiter, NASA MRO, began imaging in November 2006 and has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20cm) and ≈5% from CTX (≈6m) in stereo. Unfortunately, for most of these NASA images (especially MGS, MO, VO and HiRISE), the accuracy of georeferencing is often worse than that of the Mars reference data from HRSC. This reduces their value for analysing

  10. Analyses of response-stimulus sequences in descriptive observations.

    PubMed

    Samaha, Andrew L; Vollmer, Timothy R; Borrero, Carrie; Sloman, Kimberly; Pipkin, Claire St Peter; Bourret, Jason

    2009-01-01

    Descriptive observations were conducted to record problem behavior displayed by participants and to record antecedents and consequences delivered by caregivers. Next, functional analyses were conducted to identify reinforcers for problem behavior. Then, using data from the descriptive observations, lag-sequential analyses were conducted to examine changes in the probability of environmental events across time in relation to occurrences of problem behavior. The results of the lag-sequential analyses were interpreted in light of the results of functional analyses. Results suggested that events identified as reinforcers in a functional analysis followed behavior in idiosyncratic ways: after a range of delays and frequencies. Thus, it is possible that naturally occurring reinforcement contingencies are arranged in ways different from those typically evaluated in applied research. Further, these complex response-stimulus relations can be represented by lag-sequential analyses. However, limitations to the lag-sequential analysis are evident. PMID:19949537
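
    A lag-sequential analysis of this kind boils down to estimating the conditional probability of an environmental event at various lags after the target behavior. A minimal sketch over a discretized event stream is shown below; the event codes and the example data are invented for illustration.

        # Minimal lag-sequential sketch: P(consequence at lag k | behavior at t)
        # over a discretized event stream. Codes and data are illustrative.
        def lag_probabilities(stream, behavior="B", consequence="C", max_lag=5):
            onsets = [t for t, e in enumerate(stream) if e == behavior]
            probs = {}
            for lag in range(1, max_lag + 1):
                valid = [t for t in onsets if t + lag < len(stream)]
                if valid:
                    hits = sum(stream[t + lag] == consequence for t in valid)
                    probs[lag] = hits / len(valid)
            return probs

        stream = list("B.C..BC...B..C.BC...")   # '.' = no coded event in interval
        print(lag_probabilities(stream))        # conditional probability per lag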

  11. Workload analyse of assembling process

    NASA Astrophysics Data System (ADS)

    Ghenghea, L. D.

    2015-11-01

    The workload is the most important indicator for managers responsible for industrial technological processes, whether these are automated, mechanized or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper deals with workload analyses of a largely manual assembly technology for a roller-bearing assembly process, carried out in a large company with integrated bearing manufacturing processes. In these analyses the delay (work) sampling technique was used to identify and categorize all of the bearing assemblers' activities, and to determine how much of the 480-minute working day the workers devote to each activity. The study shows some ways to increase process productivity without supplementary investment, and also indicates that process automation could be the way to achieve maximum productivity.
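
    As a numerical illustration of the delay-sampling idea, activity proportions can be estimated from random spot observations, together with a standard binomial confidence interval and the corresponding share of a 480-minute day. The observation counts below are invented, not the study's data.

        # Work-sampling sketch: activity proportions from spot observations,
        # with a 95% binomial confidence interval. Counts are invented.
        import math

        observations = {"assembling": 310, "handling": 95, "waiting": 50, "other": 25}
        n = sum(observations.values())

        for activity, count in observations.items():
            p = count / n
            half_width = 1.96 * math.sqrt(p * (1 - p) / n)   # normal approximation
            minutes = p * 480                                 # share of a 480-min day
            print(f"{activity:11s} p = {p:.2f} +/- {half_width:.2f} (~{minutes:.0f} min/day)")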

  12. Supplementary report on antilock analyses

    NASA Technical Reports Server (NTRS)

    Zellner, J. W.

    1985-01-01

    Generic modulator analysis was performed to quantify the effects of dump and reapply pressure rates on antilock stability and performance. The analysis included dump and reapply rates and lumped modulator delay. Based on the results of the generic modulator analysis and the earlier toggle optimization analysis (with the Mitsubishi modulator), a recommended preliminary antilock design was synthesized and its response and performance simulated. The results of these analyses are documented.

  13. Biological aerosol warner and analyser

    NASA Astrophysics Data System (ADS)

    Schlemmer, Harry; Kürbitz, Gunther; Miethe, Peter; Spieweck, Michael

    2006-05-01

    The development of an integrated sensor device, BiSAM (Biological Sampling and Analysing Module), designed for rapid detection of aerosol or dust particles potentially loaded with biological warfare agents, is presented. All functional steps from aerosol collection via immuno analysis to display of results are fully automated. The core component of the sensor device is an ultra-sensitive rapid analyser, PBA (Portable Benchtop Analyser), based on a 3-dimensional immuno-filtration column of large internal area, Poly HRP marker technology and kinetic optical detection. High sensitivity despite the short measuring time, high chemical stability of the micro column and robustness against interferents make the PBA an ideal tool for fielded sensor devices. It is especially favourable to combine the PBA with a bio-collector because virtually no sample preparation is necessary. Overall, the BiSAM device is capable of detecting and identifying living micro-organisms (bacteria, spores, viruses) as well as toxins in a measuring cycle of typically half an hour. In each batch up to 12 different tests can be run in parallel, together with positive and negative controls to keep the false alarm rate low.

  14. Mitogenomic analyses of eutherian relationships.

    PubMed

    Arnason, U; Janke, A

    2002-01-01

    Reasonably correct phylogenies are fundamental to the testing of evolutionary hypotheses. Here, we present phylogenetic findings based on analyses of 67 complete mammalian mitochondrial (mt) genomes. The analyses, irrespective of whether they were performed at the amino acid (aa) level or on nucleotides (nt) of first and second codon positions, placed Erinaceomorpha (hedgehogs and their kin) as the sister group of remaining eutherians. Thus, the analyses separated Erinaceomorpha from other traditional lipotyphlans (e.g., tenrecs, moles, and shrews), making traditional Lipotyphla polyphyletic. Both the aa and nt data sets identified the two order-rich eutherian clades, the Cetferungulata (comprising Pholidota, Carnivora, Perissodactyla, Artiodactyla, and Cetacea) and the African clade (Tenrecomorpha, Macroscelidea, Tubulidentata, Hyracoidea, Proboscidea, and Sirenia). The study corroborated recent findings that have identified a sister-group relationship between Anthropoidea and Dermoptera (flying lemurs), thereby making our own order, Primates, a paraphyletic assembly. Molecular estimates using paleontologically well-established calibration points, placed the origin of most eutherian orders in Cretaceous times, 70-100 million years before present (MYBP). The same estimates place all primate divergences much earlier than traditionally believed. For example, the divergence between Homo and Pan is estimated to have taken place approximately 10 MYBP, a dating consistent with recent findings in primate paleontology. PMID:12438776

  15. Neutronic Analyses of the Trade Demonstration Facility

    SciTech Connect

    Rubbia, C.

    2004-09-15

    The TRiga Accelerator-Driven Experiment (TRADE), to be performed in the TRIGA reactor of the ENEA-Casaccia Centre in Italy, consists of the coupling of an external proton accelerator to a target to be installed in the central channel of the reactor scrammed to subcriticality. This pilot experiment, aimed at a global demonstration of the accelerator-driven system concept, is based on an original idea of C. Rubbia. The present paper reports the results of some neutronic analyses focused on the feasibility of TRADE. Results show that all relevant experiments (at different power levels in a wide range of subcriticalities) can be carried out with relatively limited modifications to the present TRIGA reactor.

  16. Analyses of containment structures with corrosion damage

    SciTech Connect

    Cherry, J.L.

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a “lower bound”, “best estimate”, and “upper bound” failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  17. ISFSI site boundary radiation dose rate analyses.

    PubMed

    Hagler, R J; Fero, A H

    2005-01-01

    Across the globe nuclear utilities are in the process of designing and analysing Independent Spent Fuel Storage Installations (ISFSI) for the purpose of above-ground spent-fuel storage, primarily to mitigate the filling of spent-fuel pools. Using a conjoining of discrete ordinates transport theory (DORT) and Monte Carlo (MCNP) techniques, an ISFSI was analysed to determine neutron and photon dose rates for a generic overpack and ISFSI pad configuration and design, at distances ranging from 1 to ~1700 m from the ISFSI array. The calculated dose rates are used to address the requirements of 10CFR72.104, which provides limits to be enforced for the protection of the public by the NRC in regard to ISFSI facilities. For this overpack, dose rates decrease by three orders of magnitude through the first 200 m moving away from the ISFSI. In addition, the contributions from different source terms change with distance. It can be observed that although side photons provide the majority of dose rate in this calculation, scattered photons and side neutrons take on more importance as the distance from the ISFSI is increased. PMID:16604670

  18. Waste Stream Analyses for Nuclear Fuel Cycles

    SciTech Connect

    N. R. Soelberg

    2010-08-01

    A high-level study was performed in Fiscal Year 2009 for the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE) Advanced Fuel Cycle Initiative (AFCI) to provide information for a range of nuclear fuel cycle options (Wigeland 2009). At that time, some fuel cycle options could not be adequately evaluated since they were not well defined and lacked sufficient information. As a result, five families of these fuel cycle options are being studied during Fiscal Year 2010 by the Systems Analysis Campaign for the DOE NE Fuel Cycle Research and Development (FCRD) program. The quality and completeness of data available to date for the fuel cycle options is insufficient to perform quantitative radioactive waste analyses using recommended metrics. This study has been limited thus far to qualitative analyses of waste streams from the candidate fuel cycle options, because quantitative data for wastes from the front end, fuel fabrication, reactor core structure, and used fuel for these options is generally not yet available.

  19. Transportation systems analyses: Volume 1: Executive Summary

    NASA Astrophysics Data System (ADS)

    1993-05-01

    The principal objective of this study is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform man delivery and return, cargo transfer, cargo delivery, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed, but not limited to, include the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationships between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. This executive summary of the transportation systems analyses (TSM) semi-annual report addresses the SSF logistics resupply. Our analysis parallels the ongoing NASA SSF redesign effort. Therefore, there could be no SSF design to drive our logistics analysis. Consequently, the analysis attempted to bound the reasonable SSF design possibilities (and the subsequent transportation implications). No other strategy really exists until after a final decision is rendered on the SSF configuration.

  20. Analysing photonic structures in plants

    PubMed Central

    Vignolini, Silvia; Moyroud, Edwige; Glover, Beverley J.; Steiner, Ullrich

    2013-01-01

    The outer layers of a range of plant tissues, including flower petals, leaves and fruits, exhibit an intriguing variation of microscopic structures. Some of these structures include ordered periodic multilayers and diffraction gratings that give rise to interesting optical appearances. The colour arising from such structures is generally brighter than pigment-based colour. Here, we describe the main types of photonic structures found in plants and discuss the experimental approaches that can be used to analyse them. These experimental approaches allow identification of the physical mechanisms producing structural colours with a high degree of confidence. PMID:23883949
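
    For the ordered periodic multilayers mentioned above, the wavelength of the first-order reflectance peak at normal incidence follows the Bragg condition λ = 2(n₁d₁ + n₂d₂). A quick worked check follows; the layer refractive indices and thicknesses are invented for illustration.

        # First-order Bragg peak of a two-material periodic multilayer at
        # normal incidence: lambda_peak = 2 * (n1*d1 + n2*d2).
        # Indices and thicknesses below are illustrative only.
        def bragg_peak_nm(n1, d1_nm, n2, d2_nm):
            return 2 * (n1 * d1_nm + n2 * d2_nm)

        # e.g. cellulose-like / air-like layers tuned toward blue
        print(bragg_peak_nm(1.55, 80, 1.0, 110))   # -> 468.0 nm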

  1. Summary of LDEF battery analyses

    NASA Technical Reports Server (NTRS)

    Johnson, Chris; Thaller, Larry; Bittner, Harlin; Deligiannis, Frank; Tiller, Smith; Sullivan, David; Bene, James

    1992-01-01

    Tests and analyses of NiCd, LiSO2, and LiCf batteries flown on the Long Duration Exposure Facility (LDEF) include results from NASA, Aerospace, and commercial labs. The LiSO2 cells illustrate six-year degradation of internal components acceptable for space applications, with up to 85 percent battery capacity remaining on discharge of some returned cells. LiCf batteries completed their mission, but lost any remaining capacity due to internal degradation. Returned NiCd batteries tested at GSFC showed slight case distortion due to pressure build-up, but were functioning as designed.

  2. Limits to adaptation

    NASA Astrophysics Data System (ADS)

    Dow, Kirstin; Berkhout, Frans; Preston, Benjamin L.; Klein, Richard J. T.; Midgley, Guy; Shaw, M. Rebecca

    2013-04-01

    An actor-centered, risk-based approach to defining limits to social adaptation provides a useful analytic framing for identifying and anticipating these limits and informing debates over society's responses to climate change.

  3. Limited range of motion

    MedlinePlus

    Limited range of motion is a term meaning that a joint or body part cannot move through its normal range of motion. ... Motion may be limited because of a problem within the joint, swelling of tissue around the joint, ...

  4. [Network analyses in neuroimaging studies].

    PubMed

    Hirano, Shigeki; Yamada, Makiko

    2013-06-01

    Neurons are anatomically and physiologically connected to each other, and these connections are involved in various neuronal functions. Multiple important neural networks involved in neurodegenerative diseases can be detected using network analyses in functional neuroimaging. First, the basic methods and theories of voxel-based network analyses, such as principal component analysis, independent component analysis, and seed-based analysis, are described. Disease- and symptom-specific brain networks have been identified using glucose metabolism images in patients with Parkinson's disease. These networks enable us to objectively evaluate individual patients and serve as diagnostic tools as well as biomarkers for therapeutic interventions. Many functional MRI studies have shown that "hub" brain regions, such as the posterior cingulate cortex and medial prefrontal cortex, are deactivated by externally driven cognitive tasks; such brain regions form the "default mode network." Recent studies have shown that this default mode network is disrupted from the preclinical phase of Alzheimer's disease and is associated with amyloid deposition in the brain. Some recent studies have shown that the default mode network is also impaired in Parkinson's disease, whereas other studies have shown inconsistent results. These incongruent results could be due to the heterogeneous pharmacological status, differences in mesocortical dopaminergic impairment status, and concomitant amyloid deposition. Future neuroimaging network analysis studies will reveal novel and interesting findings that will uncover the pathomechanisms of neurological and psychiatric disorders. PMID:23735528
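
    Of the voxel-based methods listed, seed-based analysis is the easiest to sketch: correlate a seed region's time series with the time series of every other voxel. The array shapes below are invented and all fMRI preprocessing is omitted; this assumes NumPy only.

        # Seed-based connectivity sketch: Pearson correlation of a seed time
        # series with all voxel time series. Shapes are illustrative.
        import numpy as np

        def seed_correlation_map(data, seed_ts):
            """data: (n_voxels, n_timepoints); seed_ts: (n_timepoints,)."""
            data_c = data - data.mean(axis=1, keepdims=True)
            seed_c = seed_ts - seed_ts.mean()
            num = data_c @ seed_c
            den = np.sqrt((data_c ** 2).sum(axis=1) * (seed_c ** 2).sum())
            return num / den                     # Pearson r per voxel

        rng = np.random.default_rng(0)
        data = rng.standard_normal((1000, 200))  # 1000 voxels, 200 volumes
        r_map = seed_correlation_map(data, data[0])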

  5. Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses

    PubMed Central

    Lim, Lucy; Yan, Fangzhi; Bach, Stephen; Pihakari, Katianna; Klein, David

    2016-01-01

    Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution and sensitivity, researchers now have a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing the face of how complex analyses are accomplished. First is the ability to quickly and with high mass accuracy determine the presence of unknown chemical residues in samples. For years, the field has been limited by mass spectrometric methods that required knowing in advance which compounds were of interest. Secondly, by utilizing the high resolution capabilities coupled with the low detection limits of FTMS, analysts also could dilute the sample sufficiently to minimize the ionization changes from varied matrices. PMID:26784175

  6. Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses.

    PubMed

    Lim, Lucy; Yan, Fangzhi; Bach, Stephen; Pihakari, Katianna; Klein, David

    2016-01-01

    Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution and sensitivity, researchers now have a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing the face of how complex analyses are accomplished. First is the ability to quickly and with high mass accuracy determine the presence of unknown chemical residues in samples. For years, the field has been limited by mass spectrometric methods that required knowing in advance which compounds were of interest. Secondly, by utilizing the high resolution capabilities coupled with the low detection limits of FTMS, analysts also could dilute the sample sufficiently to minimize the ionization changes from varied matrices. PMID:26784175

  7. LANSCE beam current limiter

    SciTech Connect

    Gallegos, F.R.

    1996-06-01

    The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) provides personnel protection from prompt radiation due to accelerated beam. Active instrumentation, such as the Beam Current Limiter, is a component of the RSS. The current limiter is designed to limit the average current in a beam line below a specific level, thus minimizing the maximum current available for a beam spill accident. The beam current limiter is a self-contained, electrically isolated toroidal beam transformer which continuously monitors beam current. It is designed as fail-safe instrumentation. The design philosophy, hardware design, operation, and limitations of the device are described.

  8. LANSCE beam current limiter

    SciTech Connect

    Gallegos, F.R.

    1997-01-01

    The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) provides personnel protection from prompt radiation due to accelerated beam. Active instrumentation, such as the beam current limiter, is a component of the RSS. The current limiter is designed to limit the average current in a beamline below a specific level, thus minimizing the maximum current available for a beam spill accident. The beam current limiter is a self-contained, electrically isolated toroidal beam transformer which continuously monitors beam current. It is designed as fail-safe instrumentation. The design philosophy, hardware design, operation, and limitations of the device are described. © 1997 American Institute of Physics.
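
    The limiter's behavior, continuously monitoring beam current and tripping when the average exceeds a preset level, can be mimicked in software as a rolling-average latch. The real device is fail-safe analog hardware; the threshold and window size below are invented for illustration.

        # Software analogue of an average-current trip. The actual LANSCE
        # limiter is fail-safe analog hardware; numbers here are invented.
        from collections import deque

        class CurrentLimiter:
            def __init__(self, limit_uA=100.0, window=1000):
                self.samples = deque(maxlen=window)   # rolling window of readings
                self.limit = limit_uA
                self.tripped = False

            def update(self, reading_uA):
                self.samples.append(reading_uA)
                avg = sum(self.samples) / len(self.samples)
                if avg > self.limit:
                    self.tripped = True               # latch: beam permit removed
                return self.tripped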

  9. Geomorphic analyses from space imagery

    NASA Technical Reports Server (NTRS)

    Morisawa, M.

    1985-01-01

    One of the most obvious applications of space imagery to geomorphological analyses is in the study of drainage patterns and channel networks. LANDSAT, high altitude photography and other types of remote sensing imagery are excellent for depicting stream networks on a regional scale because of their broad coverage in a single image. They offer a valuable tool for comparing and analyzing drainage patterns and channel networks all over the world. Three aspects considered in this geomorphological study are: (1) the origin, evolution and rates of development of drainage systems; (2) the topological studies of network and channel arrangements; and (3) the adjustment of streams to tectonic events and geologic structure (i.e., the mode and rate of adjustment).

  10. Uncertainty and Sensitivity Analyses Plan

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
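
    Generically, an uncertainty and sensitivity analysis of this kind samples the uncertain parameters, propagates them through the model, and ranks each parameter's influence on the output. The sketch below uses a placeholder model and invented distributions, not the HEDR codes, and assumes NumPy and SciPy are available.

        # Generic uncertainty/sensitivity sketch: Monte Carlo propagation plus
        # rank-correlation sensitivity. Model and ranges are placeholders.
        import numpy as np
        from scipy.stats import spearmanr

        def model(release_rate, dispersion, uptake):
            return release_rate * dispersion * uptake      # placeholder dose model

        rng = np.random.default_rng(1)
        n = 10_000
        params = {
            "release_rate": rng.lognormal(0.0, 0.5, n),
            "dispersion":   rng.uniform(0.5, 1.5, n),
            "uptake":       rng.triangular(0.1, 0.3, 0.9, n),
        }
        dose = model(**params)

        print("dose 5th-95th percentile:", np.percentile(dose, [5, 95]))
        for name, values in params.items():
            rho, _ = spearmanr(values, dose)               # sensitivity ranking
            print(f"{name:13s} Spearman rho = {rho:+.2f}")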

  11. Chemical analyses of provided samples

    NASA Technical Reports Server (NTRS)

    Becker, Christopher H.

    1993-01-01

    Two batches of samples were received, and chemical analysis of the surface and near-surface regions of the samples was performed by the surface analysis by laser ionization (SALI) method. The samples included four one-inch optics and several paint samples. The analyses emphasized surface contamination or modification. In these studies, pulsed sputtering by 7 keV Ar+ and primarily single-photon ionization (SPI) by coherent 118 nm radiation (at approximately 5 × 10⁵ W/cm²) were used. For two of the samples, multiphoton ionization (MPI) at 266 nm (approximately 5 × 10¹¹ W/cm²) was also used. Most notable among the results was the silicone contamination on Mg2 mirror 28-92, and that the Long Duration Exposure Facility (LDEF) paint sample had been enriched in K and Na and depleted in Zn, Si, B, and organic compounds relative to the control paint.

  12. Isotopic signatures by bulk analyses

    SciTech Connect

    Efurd, D.W.; Rokop, D.J.

    1997-12-01

    Los Alamos National Laboratory has developed a series of measurement techniques for identification of nuclear signatures by analyzing bulk samples. Two specific applications for isotopic fingerprinting to identify the origin of anthropogenic radioactivity in bulk samples are presented. The first example is the analyses of environmental samples collected in the US Arctic to determine the impact of dumping of radionuclides in this polar region. Analyses of sediment and biota samples indicate that for the areas sampled the anthropogenic radionuclide content of sediments was predominantly the result of the deposition of global fallout. The anthropogenic radionuclide concentrations in fish, birds and mammals were very low. It can be surmised that marine food chains are presently not significantly affected. The second example is isotopic fingerprinting of water and sediment samples from the Rocky Flats Facility (RFP). The largest source of anthropogenic radioactivity presently affecting surface-waters at RFP is the sediments that are currently residing in the holding ponds. One gram of sediment from a holding pond contains approximately 50 times more plutonium than 1 liter of water from the pond. Essentially 100% of the uranium in Ponds A-1 and A-2 originated as depleted uranium. The largest source of radioactivity in the terminal Ponds A-4, B-5 and C-2 was naturally occurring uranium and its decay product radium. The uranium concentrations in the waters collected from the terminal ponds contained 0.05% or less of the interim standard calculated derived concentration guide for uranium in waters available to the public. All of the radioactivity observed in soil, sediment and water samples collected at RFP was naturally occurring, the result of processes at RFP or the result of global fallout. No extraneous anthropogenic alpha, beta or gamma activities were detected. The plutonium concentrations in Pond C-2 appear to vary seasonally.

  13. 7 CFR 94.102 - Analyses available.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analyses available. 94.102 Section 94.102 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.102 Analyses available. A wide array of analyses for voluntary egg product samples is available. Voluntary egg product samples include...

  14. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... by conducting additional analyses using any standard engineering economics method such as...

  15. 7 CFR 94.102 - Analyses available.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analyses available. 94.102 Section 94.102 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.102 Analyses available. A wide array of analyses for voluntary egg product samples is available. Voluntary egg product samples include...

  16. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... by conducting additional analyses using any standard engineering economics method such as...

  17. Accelerated safety analyses - structural analyses Phase I - structural sensitivity evaluation of single- and double-shell waste storage tanks

    SciTech Connect

    Becker, D.L.

    1994-11-01

    Accelerated Safety Analyses - Phase I (ASA-Phase I) have been conducted to assess the appropriateness of existing tank farm operational controls and/or limits as now stipulated in the Operational Safety Requirements (OSRs) and Operating Specification Documents, and to establish a technical basis for the waste tank operating safety envelope. Structural sensitivity analyses were performed to assess the response of the different waste tank configurations to variations in loading conditions, uncertainties in loading parameters, and uncertainties in material characteristics. Extensive documentation of the sensitivity analyses conducted and results obtained are provided in the detailed ASA-Phase I report, Structural Sensitivity Evaluation of Single- and Double-Shell Waste Tanks for Accelerated Safety Analysis - Phase I. This document provides a summary of the accelerated safety analyses sensitivity evaluations and the resulting findings.

  18. Systematics and limit calculations

    SciTech Connect

    Fisher, Wade; /Fermilab

    2006-12-01

    This note discusses the estimation of systematic uncertainties and their incorporation into upper limit calculations. Two different approaches to reducing systematics and their degrading impact on upper limits are introduced. An improved χ² function is defined which is useful in comparing Poisson distributed data with models marginalized by systematic uncertainties. Also, a technique using profile likelihoods is introduced which provides a means of constraining the degrading impact of systematic uncertainties on limit calculations.
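
    A bare-bones version of the profile-likelihood approach for a Poisson count with a Gaussian-constrained background: profile out the background at each signal strength, then scan for the 95% one-sided crossing of -2 ln λ = 2.706. This is a pedagogical sketch using the asymptotic approximation, not the note's exact construction; the counts and uncertainties are invented.

        # Bare-bones profile-likelihood upper limit for a Poisson count n with
        # a background b0 +/- sigma_b (Gaussian constraint). Sketch only.
        import math
        import numpy as np

        def nll(n, s, b):
            """-ln L for Poisson(n | s+b), dropping the constant n! term."""
            mu = max(s + b, 1e-9)
            return mu - n * math.log(mu)

        def profiled_nll(n, s, b0, sigma_b):
            """Minimize over the nuisance background at fixed signal s."""
            bs = np.linspace(max(0.0, b0 - 5 * sigma_b), b0 + 5 * sigma_b, 1001)
            return min(nll(n, s, b) + 0.5 * ((b - b0) / sigma_b) ** 2 for b in bs)

        def upper_limit_95(n=5, b0=3.0, sigma_b=1.0):
            s_grid = np.linspace(0.0, 20.0, 401)
            nlls = [profiled_nll(n, s, b0, sigma_b) for s in s_grid]
            i_min = int(np.argmin(nlls))
            for s, v in zip(s_grid[i_min:], nlls[i_min:]):
                if v - nlls[i_min] > 1.353:   # -2 ln(lambda) = 2.706, 95% one-sided
                    return s
            return float("inf")

        print(f"95% CL upper limit on s: {upper_limit_95():.2f} events")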

  19. Speed analyses of stimulus equivalence.

    PubMed Central

    Spencer, T J; Chase, P N

    1996-01-01

    The functional substitutability of stimuli in equivalence classes was examined through analyses of the speed of college students' accurate responding. After training subjects to respond to 18 conditional relations, subjects' accuracy and speed of accurate responding were compared across trial types (baseline, symmetry, transitivity, and combined transitivity and symmetry) and nodal distance (one- through five-node transitive and combined transitive and symmetric relations). Differences in accuracy across nodal distance and trial type were significant only on the first tests of equivalence, whereas differences in speed were significant even after extended testing. Response speed was inversely related to the number of nodes on which the tested relations were based. Significant differences in response speed were also found across trial types, except between transitivity and combined trials. To determine the generality of these comparisons, three groups of subjects were included: an instructed group was given an instruction that specified the interchangeability of stimuli related through training; a queried group was queried about the basis for test-trial responding; and a standard group was neither instructed nor queried. There were no significant differences among groups. These results suggest the use of response speed and response accuracy to measure the strength of matching relations. PMID:8636663

  20. Helicopter tail rotor noise analyses

    NASA Technical Reports Server (NTRS)

    George, A. R.; Chou, S. T.

    1986-01-01

    A study was made of helicopter tail rotor noise, particularly that due to interactions with the main rotor tip vortices, and with the fuselage separation mean wake. The tail rotor blade-main rotor tip vortex interaction is modelled as an airfoil of infinite span cutting through a moving vortex. The vortex and the geometry information required by the analyses are obtained through a free wake geometry analysis of the main rotor. The acoustic pressure-time histories for the tail rotor blade-vortex interactions are then calculated. These acoustic results are compared to tail rotor loading and thickness noise, and are found to be significant to the overall tail rotor noise generation. Under most helicopter operating conditions, large acoustic pressure fluctuations can be generated due to a series of skewed main rotor tip vortices passing through the tail rotor disk. The noise generation depends strongly upon the helicopter operating conditions and the location of the tail rotor relative to the main rotor.

  1. Digital image analyser for autoradiography

    SciTech Connect

    Muth, R.A.; Plotnick, J.

    1985-05-01

    The most critical parameter in quantitative autoradiography for assay of tissue concentrations of tracers is the ability to obtain precise and accurate measurements of optical density of the images. Existing high precision systems for image analysis, rotating drum densitometers, are expensive, suffer from mechanical problems and are slow. More moderately priced and reliable video camera based systems are available, but their outputs generally do not have the uniformity and stability necessary for high resolution quantitative autoradiography. The authors have designed and constructed an image analyser optimized for quantitative single and multiple tracer autoradiography, which the authors refer to as a memory-mapped charge-coupled device scanner (MM-CCD). The input is from a linear array of CCDs which is used to optically scan the autoradiograph. Images are digitized into 512 x 512 picture elements with 256 gray levels and the data is stored in buffer video memory in less than two seconds. Images can then be transferred to RAM memory by direct memory-mapping for further processing. Arterial blood curve data and optical density-calibrated standards data can be entered and the optical density images can be converted automatically to tracer concentration or functional images. In double tracer studies, images produced from both exposures can be stored and processed in RAM to yield ''pure'' individual tracer concentration or functional images. Any processed image can be transmitted back to the buffer memory to be viewed on a monitor and processed for region of interest analysis.
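
    Converting optical-density images to tracer concentration via co-exposed calibrated standards is essentially a per-pixel interpolation along the calibration curve. A minimal sketch follows; the calibration values are invented.

        # Per-pixel conversion of optical density (OD) to tracer concentration
        # using co-exposed calibrated standards. Calibration values invented.
        import numpy as np

        od_standards = np.array([0.05, 0.20, 0.45, 0.80, 1.20])     # measured OD
        conc_standards = np.array([0.0, 10.0, 30.0, 70.0, 120.0])   # e.g. nCi/g

        def od_to_concentration(od_image):
            # monotone piecewise-linear interpolation of the calibration curve
            return np.interp(od_image, od_standards, conc_standards)

        od_image = np.random.default_rng(0).uniform(0.05, 1.2, (512, 512))
        conc_image = od_to_concentration(od_image)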

  2. Imprecise probabilities in engineering analyses

    NASA Astrophysics Data System (ADS)

    Beer, Michael; Ferson, Scott; Kreinovich, Vladik

    2013-05-01

    Probabilistic uncertainty and imprecision in structural parameters and in environmental conditions and loads are challenging phenomena in engineering analyses. They require appropriate mathematical modeling and quantification to obtain realistic results when predicting the behavior and reliability of engineering structures and systems. But the modeling and quantification is complicated by the characteristics of the available information, which involves, for example, sparse data, poor measurements and subjective information. This raises the question whether the available information is sufficient for probabilistic modeling or rather suggests a set-theoretical approach. The framework of imprecise probabilities provides a mathematical basis to deal with these problems which involve both probabilistic and non-probabilistic information. A common feature of the various concepts of imprecise probabilities is the consideration of an entire set of probabilistic models in one analysis. The theoretical differences between the concepts mainly concern the mathematical description of the set of probabilistic models and the connection to the probabilistic models involved. This paper provides an overview on developments which involve imprecise probabilities for the solution of engineering problems. Evidence theory, probability bounds analysis with p-boxes, and fuzzy probabilities are discussed with emphasis on their key features and on their relationships to one another. This paper was especially prepared for this special issue and reflects, in various ways, the thinking and presentation preferences of the authors, who are also the guest editors for this special issue.
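
    One of the concepts discussed, probability bounds analysis with p-boxes, can be illustrated by bounding an uncertain quantity's CDF between two distributions and pushing both bounds through a monotone response function. This is a toy illustration, not full p-box arithmetic; it assumes NumPy and SciPy, and all distributions are invented.

        # Toy p-box: bound an uncertain load's CDF between two normals and push
        # the bounds through a monotone response. Not full p-box arithmetic.
        import numpy as np
        from scipy.stats import norm

        x = np.linspace(0, 10, 501)
        cdf_lower = norm.cdf(x, loc=6.0, scale=1.0)   # pessimistic bound (shifted right)
        cdf_upper = norm.cdf(x, loc=4.0, scale=1.0)   # optimistic bound (shifted left)

        def response(load):                            # monotone structural response
            return 2.0 * load + 1.0

        # For a monotone increasing map, the CDF bounds transform with x:
        y = response(x)
        p_fail_lo = 1 - np.interp(15.0, y, cdf_upper)  # bounds on P(response > 15)
        p_fail_hi = 1 - np.interp(15.0, y, cdf_lower)
        print(f"P(failure) in [{p_fail_lo:.3f}, {p_fail_hi:.3f}]")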

  3. Comparison between Inbreeding Analyses Methodologies.

    PubMed

    Esparza, Mireia; Martínez-Abadías, Neus; Sjøvold, Torstein; González-José, Rolando; Hernández, Miquel

    2015-12-01

    Surnames are widely used in inbreeding analysis, but the validity of results has often been questioned due to the failure to comply with the prerequisites of the method. Here we analyze inbreeding in Hallstatt (Austria) between the 17th and the 19th centuries using both genealogies and surnames. The high and significant correlation of the results obtained by both methods demonstrates the validity of using surnames in this kind of study. On the other hand, the inbreeding values obtained (0.24 × 10⁻³ in the genealogy analysis and 2.66 × 10⁻³ in the surname analysis) are lower than those observed in Europe for this period and this kind of population, showing that Hallstatt's population was not as isolated as it appears. The temporal trend of inbreeding in both analyses does not follow the general European pattern, but shows a maximum around 1850 followed by a decrease over the second half of the 19th century. This is probably due to the high migration rate implied by the construction of transport infrastructure around the 1870s. PMID:26987150
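
    The surname method rests on isonymy: following Crow and Mange, the random component of inbreeding is F_r = Σ p_i q_i / 4, where p_i and q_i are the frequencies of surname i among husbands and wives. A minimal sketch with invented surname lists:

        # Random-isonymy estimate of inbreeding, F_r = sum(p_i * q_i) / 4
        # (Crow & Mange method). Surname lists below are invented.
        from collections import Counter

        husbands = ["Gruber", "Huber", "Gruber", "Wagner", "Huber", "Gruber"]
        wives    = ["Huber", "Gruber", "Wagner", "Gruber", "Huber", "Wagner"]

        p = Counter(husbands)
        q = Counter(wives)
        n_h, n_w = len(husbands), len(wives)

        random_isonymy = sum((p[s] / n_h) * (q[s] / n_w) for s in set(p) | set(q))
        F_r = random_isonymy / 4
        print(f"F_r = {F_r:.4f}")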

  4. Network analyses in systems pharmacology

    PubMed Central

    Berger, Seth I.; Iyengar, Ravi

    2009-01-01

    Systems pharmacology is an emerging area of pharmacology which utilizes network analysis of drug action as one of its approaches. By considering drug actions and side effects in the context of the regulatory networks within which the drug targets and disease gene products function, network analysis promises to greatly increase our knowledge of the mechanisms underlying the multiple actions of drugs. Systems pharmacology can provide new approaches for drug discovery for complex diseases. The integrated approach used in systems pharmacology can allow for drug action to be considered in the context of the whole genome. Network-based studies are becoming an increasingly important tool in understanding the relationships between drug action and disease susceptibility genes. This review discusses how analysis of biological networks has contributed to the genesis of systems pharmacology and how these studies have improved global understanding of drug targets, suggested new targets and approaches for therapeutics, and provided a deeper understanding of the effects of drugs. Taken together, these types of analyses can lead to new therapeutic options while improving the safety and efficacy of existing medications. Contact: ravi.iyengar@mssm.edu PMID:19648136

  5. Designing forgiveness interventions: guidance from five meta-analyses.

    PubMed

    Recine, Ann C

    2015-06-01

    The Nursing Interventions Classification system includes forgiveness facilitation as part of the research-based taxonomy of nursing interventions. Nurses need practical guidance in finding the type of intervention that works best in the nursing realm. Five meta-analyses of forgiveness interventions were reviewed to illuminate best practice. The only studies included were meta-analyses of forgiveness interventions in which the authors calculated effect size. Forgiveness interventions were shown to be helpful in addressing mental/emotional health. Components of effective interventions include recalling the offense, empathizing with the offender, committing to forgive, and overcoming feelings of unforgiveness. The meta-analyses showed that people receiving forgiveness interventions reported more forgiveness than those who had no intervention. Forgiveness interventions resulted in more hope and less depression and anxiety than no treatment. A process-based intervention is more effective than a shorter cognitive decision-based model. Limitations of the meta-analyses included inconsistency of measures and a lack of consensus on a definition of forgiveness. Notwithstanding these limitations, the meta-analyses offer strong evidence of what contributes to the effectiveness of forgiveness interventions. The implications of the studies are useful for designing evidence-based clinical forgiveness interventions to enhance nursing practice. PMID:25487180

  6. Dose limits for astronauts

    NASA Technical Reports Server (NTRS)

    Sinclair, W. K.

    2000-01-01

    Radiation exposures to individuals in space can greatly exceed natural radiation exposure on Earth and possibly normal occupational radiation exposures as well. Consequently, procedures limiting exposures would be necessary. Limitations were proposed by the Radiobiological Advisory Panel of the National Academy of Sciences/National Research Council in 1970. This panel recommended short-term limits to avoid deterministic effects and a single career limit (of 4 Sv) based on a doubling of the cancer risk in men aged 35 to 55. Later, when risk estimates for cancer had increased and were recognized to be age and sex dependent, the NCRP, in Report No. 98 in 1989, recommended a range of career limits based on age and sex from 1 to 4 Sv. NCRP is again in the process of revising recommendations for astronaut exposure, partly because risk estimates have increased further and partly to recognize trends in limiting radiation exposure occupationally on the ground. The result of these considerations is likely to be similar short-term limits for deterministic effects but modified career limits.

  7. Limits to Inclusion

    ERIC Educational Resources Information Center

    Hansen, Janne Hedegaard

    2012-01-01

    In this article, I will argue that a theoretical identification of the limit to inclusion is needed in the conceptual identification of inclusion. On the one hand, inclusion is formulated as a vision that is, in principle, limitless. On the other hand, there seems to be an agreement that inclusion has a limit in the pedagogical practice. However,…

  8. NOx analyser interference from alkenes

    NASA Astrophysics Data System (ADS)

    Bloss, W. J.; Alam, M. S.; Lee, J. D.; Vazquez, M.; Munoz, A.; Rodenas, M.

    2012-04-01

    Nitrogen oxides (NO and NO2, collectively NOx) are critical intermediates in atmospheric chemistry. NOx abundance controls the levels of the primary atmospheric oxidants OH, NO3 and O3, and regulates the ozone production which results from the degradation of volatile organic compounds. NOx are also atmospheric pollutants in their own right, and NO2 is commonly included in air quality objectives and regulations. In addition to their role in controlling ozone formation, NOx levels affect the production of other pollutants such as the lachrymator PAN, and the nitrate component of secondary aerosol particles. Consequently, accurate measurement of nitrogen oxides in the atmosphere is of major importance for understanding our atmosphere. The most widely employed approach for the measurement of NOx is chemiluminescent detection of NO2* from the NO + O3 reaction, combined with NO2 reduction by either a heated catalyst or photoconvertor. The reaction between alkenes and ozone is also chemiluminescent; therefore alkenes may contribute to the measured NOx signal, depending upon the instrumental background subtraction cycle employed. This interference has been noted previously, and indeed the effect has been used to measure both alkenes and ozone in the atmosphere. Here we report the results of a systematic investigation of the response of a selection of NOx analysers, ranging from systems used for routine air quality monitoring to atmospheric research instrumentation, to a series of alkenes ranging from ethene to the biogenic monoterpenes, as a function of conditions (co-reactants, humidity). Experiments were performed in the European Photoreactor (EUPHORE) to ensure common calibration, a common sample for the monitors, and to unequivocally confirm the alkene (via FTIR) and NO2 (via DOAS) levels present. The instrument responses ranged from negligible levels up to 10 % depending upon the alkene present and conditions used. Such interferences may be of substantial importance
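
    Once an analyser's fractional response to a given alkene has been characterized, as in the chamber experiments described here, a first-order correction of the measured signal is straightforward. The sensitivities below are invented placeholders, not measured values from this study.

        # First-order correction of a chemiluminescence NOx reading for alkene
        # interference: NOx_true ~= NOx_measured - sum(k_i * [alkene_i]).
        # Sensitivities k_i (ppb NOx-equivalent per ppb alkene) are invented.
        sensitivities = {"ethene": 0.002, "isoprene": 0.03, "limonene": 0.08}

        def corrected_nox(nox_measured_ppb, alkenes_ppb):
            interference = sum(sensitivities[a] * c for a, c in alkenes_ppb.items())
            return nox_measured_ppb - interference

        print(corrected_nox(4.0, {"isoprene": 5.0, "limonene": 2.0}))  # -> 3.69 ppb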

  9. Fundamental limits on EMC

    NASA Astrophysics Data System (ADS)

    Showers, R. M.; Lin, S.-Y.; Schulz, R. B.

    1981-02-01

    Both fundamental and state-of-the-art limits are treated with emphasis on the former. Fundamental limits result from both natural and man-made electromagnetic noise which then affect two basic ratios, signal-to-noise (S/N) and extraneous-input-to-noise (I/N). Tolerable S/N values are discussed for both digital and analog communications systems. These lead to tolerable signal-to-extraneous-input (S/I) ratios, again for digital and analog communications systems, as well as radar and sonar. State-of-the-art limits for transmitters include RF noise emission, spurious emissions, and intermodulation. Receiver limits include adjacent-channel interactions, image, IF, and other spurious responses, including cross modulation, intermodulation, and desensitization. Unintentional emitters and receivers are also discussed. Coupling limitations between undesired sources and receptors are considered from mechanisms including radiation, induction, and conduction.

  10. Thermal mechanical analyses of large diameter ion accelerator systems

    SciTech Connect

    Brophy, J.R.; Aston, G.

    1989-01-01

    Thermal mechanical analyses of large diameter ion accelerator systems are performed using commercially available finite element software executed on a desktop computer. Finite element models of a 30-cm-diameter accelerator system formulated using plate/shell elements give calculated results which agree well with similar published results obtained on a mainframe computer. Analyses of a 50-cm-diameter, three-grid accelerator system using measured grid temperatures (corresponding to discharge powers of 653 and 886 watts) indicate that thermally induced grid movements need not be the performance limiting phenomena for accelerator systems of this size. 8 refs.

  11. Thermal mechanical analyses of large diameter ion accelerator systems

    NASA Technical Reports Server (NTRS)

    Brophy, John R.; Aston, Graeme

    1989-01-01

    Thermal mechanical analyses of large diameter ion accelerator systems are performed using commercially available finite element software executed on a desktop computer. Finite element models of a 30-cm-diameter accelerator system formulated using plate/shell elements give calculated results which agree well with similar published results obtained on a mainframe computer. Analyses of a 50-cm-diameter, three-grid accelerator system using measured grid temperatures (corresponding to discharge powers of 653 and 886 watts) indicate that thermally induced grid movements need not be the performance limiting phenomena for accelerator systems of this size.

  12. Understanding the cancer cell phenotype beyond the limitations of current omics analyses.

    PubMed

    Moreno-Sánchez, Rafael; Saavedra, Emma; Gallardo-Pérez, Juan Carlos; Rumjanek, Franklin D; Rodríguez-Enríquez, Sara

    2016-01-01

    Efforts to understand the mechanistic principles driving cancer metabolism and proliferation have lately been governed by genomic, transcriptomic and proteomic studies. This paper analyzes the caveats of these approaches. As molecular biology's central dogma proposes a unidirectional flux of information from genes to mRNA to proteins, it has frequently been assumed that monitoring the changes in the gene sequences and in mRNA and protein contents is sufficient to explain complex cellular processes. Such a stance commonly disregards that post-translational modifications can alter the protein function/activity and also that regulatory mechanisms enter into action, to coordinate the protein activities of pathways/cellular processes, in order to keep the cellular homeostasis. Hence, the actual protein activities (as enzymes/transporters/receptors) and their regulatory mechanisms ultimately dictate the final outcomes of a pathway/cellular process. In this regard, it is here documented that the mRNA levels of many metabolic enzymes and transcription factors have no correlation with the respective protein contents and activities. The validity of current clinical mRNA-based tests and proposed metabolite biomarkers for cancer detection/prognosis is also discussed. Therefore, it is proposed that, to achieve a thorough understanding of the modifications undergone by proliferating cancer cells, it is mandatory to experimentally analyze the cellular processes at the functional level. This could be achieved (a) locally, by examining the actual protein activities in the cell and their kinetic properties (or at least kinetically characterize the most controlling steps of the pathway/cellular process); (b) systemically, by analyzing the main fluxes of the pathway/cellular process, and how they are modulated by metabolites, all of which should contribute to comprehending the regulatory mechanisms that have been altered in cancer cells. By adopting a more holistic approach it may become possible to improve the design of therapeutic strategies that would target cancer cells more specifically. PMID:26417966

  13. Analyses of Transistor Punchthrough Failures

    NASA Technical Reports Server (NTRS)

    Nicolas, David P.

    1999-01-01

    The failure of two transistors in the Altitude Switch Assembly for the Solid Rocket Booster, followed by two additional failures a year later, presented a challenge to failure analysts. These devices had successfully worked for many years on numerous missions. There was no history of failures with this type of device. Extensive checks of the test procedures gave no indication of the cause. The devices were manufactured more than twenty years ago and failure information on this lot date code was not readily available. External visual exam, radiography, PEID, and leak testing were performed with nominal results. Electrical testing indicated nearly identical base-emitter and base-collector characteristics (both forward and reverse) with a low resistance short emitter to collector. These characteristics are indicative of a classic failure mechanism called punchthrough. In failure analysis, punchthrough refers to a condition where a relatively low voltage pulse causes the device to conduct very hard, producing localized areas of thermal runaway or "hot spots". At one or more of these hot spots, the excessive currents melt the silicon. Heavily doped emitter material diffuses through the base region to the collector, forming a diffusion pipe shorting the emitter to base to collector. Upon cooling, an alloy junction forms between the pipe and the base region. Generally, the hot spot (punch-through site) is under the bond and no surface artifact is visible. The devices were delidded and the internal structures were examined microscopically. The gold emitter lead was melted on one device, but others had anomalies in the metallization around the intact emitter bonds. The SEM examination confirmed some anomalies to be cosmetic defects while other anomalies were artifacts of the punchthrough site. Subsequent to these analyses, the contractor determined that some irregular testing procedures, heretofore unreported, occurred at the time of the failures. These testing

  14. Analyses of the LMC Novae

    NASA Astrophysics Data System (ADS)

    Vanlandingham, K. M.; Schwarz, G. J.; Starrfield, S.; Hauschildt, P. H.; Shore, S. N.; Sonneborn, G.

    In the past 10 years, 6 classical novae have been observed in the Large Magellanic Cloud (LMC). We have begun a study of these objects using ultraviolet spectra obtained by IUE and optical spectra from nova surveys. We are using the results of this study to further our understanding of novae and stellar evolution. Our study includes analysis of both the early, optically thick spectra using model atmospheres, and the later nebular spectra using optimization of photoionization codes. By analysing all the LMC novae in a consistent manner, we can compare their individual results and use their combined properties to calibrate Galactic novae. In addition, our studies can be used to determine the elemental abundances of the nova ejecta, the amount of mass ejected, and the contribution of novae to the ISM abundances. To date we have analyzed Nova LMC 1988#1 and Nova LMC 1990#1, and have obtained preliminary results for Nova LMC 1991. The results of this work are presented in this poster. The metal content of the LMC is known to be sub-solar and varies as a function of location within the cloud. A detailed abundance analysis of the ejecta of the LMC novae provides important information concerning the effect of initial metal abundances on the energetics of the nova outburst. Since the distance to the LMC is well known, many important parameters of the outburst, such as the luminosity, can be absolutely determined. Both galactic and extragalactic novae have been proposed as potential standard candles. Recent work by Della Valle & Livio (1995) has improved on the standard relations (e.g., Schmidt 1957; Pfau 1976; Cohen 1985; Livio 1992) by including novae from the LMC and M31. Unfortunately, the dependence of the nova outburst on metallicity has not been well-studied. Recent theoretical work by Starrfield et al. (1998) indicates that the luminosity of the outburst increases with decreasing metal abundances. If there is a dependence of luminosity on metallicity, it will have to

  15. The limits of prevention.

    PubMed Central

    McGinnis, J M

    1985-01-01

    Recent years have been marked by unprecedented accomplishments in preventing disease and reducing mortality. More gains can be expected, but there are limits. The forces shaping the nature and potential of prevention programs can be characterized as points falling along a spectrum ranging from the purely scientific to the purely social. This paper focuses on four elements of that spectrum, discussing some of the limitations to prevention that are presented by biological, technical, ethical, and economic factors. The author concludes with an essentially optimistic perspective on the prospects, special opportunities, and imperatives inherent in each of the categories of limitations discussed. PMID:3923530

  16. CONTROL LIMITER DEVICE

    DOEpatents

    DeShong, J.A.

    1960-03-01

    A control-limiting device for monitoring a control system is described. The system comprises a condition-sensing device, a condition-varying device exerting control over the condition, and a control means to actuate the condition-varying device. The control-limiting device integrates the total movement or other change of the condition-varying device over any interval of time during a continuum of overlapping periods of time, and if the total movement or change of the condition-varying device exceeds a preset value, the control-limiting device will switch the control of the operated apparatus from automatic to manual control.
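
    The patent's trip logic amounts to a sliding-window accumulator: integrate recent movement over every window of time and fall back to manual control when any window's total exceeds the preset limit. A minimal Python sketch of that idea follows; the sample rate, window length, limit value, and class name are all invented for illustration, not taken from the patent.

    ```python
    from collections import deque

    class ControlLimiter:
        """Trip from automatic to manual control when the movement
        accumulated over the most recent window exceeds a preset limit."""

        def __init__(self, window_samples=100, limit=5.0):
            self.window = deque(maxlen=window_samples)  # rolling window
            self.limit = limit
            self.automatic = True

        def update(self, movement):
            # Overlapping periods come free with a rolling window: every
            # new sample defines a new window ending at that sample.
            self.window.append(abs(movement))
            if self.automatic and sum(self.window) > self.limit:
                self.automatic = False  # switch to manual control
            return self.automatic

    limiter = ControlLimiter()
    samples = [0.02] * 90 + [0.5] * 20   # quiet spell, then a runaway
    for step, move in enumerate(samples):
        if not limiter.update(move):
            print(f"switched to manual at sample {step}")
            break
    ```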

  17. Galilean limit of electrodynamics.

    NASA Astrophysics Data System (ADS)

    Reula, O. A.; Hamity, V. H.; Frittelli, S.

    The final interest of the authors' work is to study the Newtonian limit as an approximation to General Relativity. In this paper they show, using the Galilean limit of electrodynamics with external sources as a test model, some of the problems that they will be confronted with, and the techniques that are introduced to attack them. The crucial physical issue in defining an asymptotic expansion of a class of solutions is the selection of initial data, which results from imposing regularity conditions in the nonrelativistic limit. The authors' model is an example of a more general class of systems which includes, hopefully, the gravitational field plus matter.

  18. Optical limiting materials

    DOEpatents

    McBranch, Duncan W.; Mattes, Benjamin R.; Koskelo, Aaron C.; Heeger, Alan J.; Robinson, Jeanne M.; Smilowitz, Laura B.; Klimov, Victor I.; Cha, Myoungsik; Sariciftci, N. Serdar; Hummelen, Jan C.

    1998-01-01

    Optical limiting materials. Methanofullerenes, fulleroids and/or other fullerenes chemically altered for enhanced solubility, in liquid solution, and in solid blends with transparent glass (SiO₂) gels or polymers, or semiconducting (conjugated) polymers, are shown to be useful as optical limiters (optical surge protectors). The nonlinear absorption is tunable such that the energy transmitted through such blends saturates at high input energy per pulse over a wide range of wavelengths from 400-1100 nm by selecting the host material for its absorption wavelength and ability to transfer the absorbed energy into the optical limiting composition dissolved therein. This phenomenon should be generalizable to other compositions than substituted fullerenes.

  19. Aerothermodynamic Analyses of Towed Ballutes

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Buck, Greg; Moss, James N.; Nielsen, Eric; Berger, Karen; Jones, William T.; Rudavsky, Rena

    2006-01-01

    A ballute (balloon-parachute) is an inflatable, aerodynamic drag device for application to planetary entry vehicles. Two challenging aspects of aerothermal simulation of towed ballutes are considered. The first challenge, simulation of a complete system including inflatable tethers and a trailing toroidal ballute, is addressed using the unstructured-grid, Navier-Stokes solver FUN3D. Auxiliary simulations of a semi-infinite cylinder using the rarefied-flow, Direct Simulation Monte Carlo solver DSV2 provide additional insight into limiting behavior of the aerothermal environment around tethers directly exposed to the free stream. Simulations reveal pressures higher than stagnation and correspondingly large heating rates on the tether as it emerges from the spacecraft base flow and passes through the spacecraft bow shock. The footprint of the tether shock on the toroidal ballute is also subject to heating amplification. Design options to accommodate or reduce these environments are discussed. The second challenge addresses time-accurate simulation to detect the onset of unsteady flow interactions as a function of geometry and Reynolds number. Video of unsteady interactions measured in the Langley Aerothermodynamic Laboratory 20-Inch Mach 6 Air Tunnel and CFD simulations using the structured-grid, Navier-Stokes solver LAURA are compared for flow over a rigid spacecraft-sting-toroid system. The experimental data provide qualitative information on the amplitude and onset of unsteady motion, which is captured in the numerical simulations. The presence of severe unsteady fluid-structure interactions is undesirable, and numerical simulation must be able to predict the onset of such motion.

  20. Local spin analyses using density functional theory

    NASA Astrophysics Data System (ADS)

    Abate, Bayileyegn; Peralta, Juan

    Local spin analysis is a valuable technique in computational investigations of magnetic interactions in mono- and polynuclear transition metal complexes, which play vital roles in catalysis, molecular magnetism, artificial photosynthesis, and several other commercially important materials. The relative size and complex electronic structure of transition metal complexes often prohibit the use of multi-determinant approaches, and hence practical calculations are often limited to single-determinant methods. Density functional theory (DFT) has become one of the most successful and widely used computational tools for the electronic structure study of complex chemical systems, transition metal complexes in particular. Within the DFT formalism, a more flexible and complete theoretical modeling of transition metal complexes can be achieved by considering noncollinear spins, in which the spin density is allowed to adopt noncollinear structures instead of being constrained to align parallel/antiparallel to a universal axis of magnetization. In this meeting, I will present local spin analysis results obtained using different DFT functionals. Local projection operators, first introduced by Clark and Davidson, are used to decompose the expectation value of the total spin operator.
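
    As a rough illustration of the partitioning idea behind such analyses (not the Clark-Davidson operators themselves), the sketch below projects a toy spin-density matrix onto atomic orbital subspaces so that the local contributions sum to the total expectation value; the 4-orbital model and the two projectors are invented for the example.

    ```python
    import numpy as np

    # Toy decomposition: a Hermitian "spin-density" matrix in an
    # orthonormal orbital basis, two orbitals per atom (assumed setup).
    rng = np.random.default_rng(0)
    norb = 4
    A = rng.normal(size=(norb, norb))
    d_spin = (A + A.T) / 2

    # Projectors onto the orbital subspaces of atoms A and B; they
    # resolve the identity, so the local traces partition the total.
    projectors = {"A": np.diag([1.0, 1.0, 0.0, 0.0]),
                  "B": np.diag([0.0, 0.0, 1.0, 1.0])}
    local = {atom: np.trace(p @ d_spin) for atom, p in projectors.items()}

    total = np.trace(d_spin)
    print(local, total)
    assert np.isclose(sum(local.values()), total)  # locals sum to total
    ```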

  1. Interim Basis for PCB Sampling and Analyses

    SciTech Connect

    BANNING, D.L.

    2001-01-18

    This document was developed as an interim basis for sampling and analysis of polychlorinated biphenyls (PCBs) and will be used until a formal data quality objective (DQO) document is prepared and approved. On August 31, 2000, the Framework Agreement for Management of Polychlorinated Biphenyls (PCBs) in Hanford Tank Waste was signed by the U.S. Department of Energy (DOE), the Environmental Protection Agency (EPA), and the Washington State Department of Ecology (Ecology) (Ecology et al. 2000). This agreement outlines the management of double shell tank (DST) waste as Toxic Substance Control Act (TSCA) PCB remediation waste based on a risk-based disposal approval option per Title 40 of the Code of Federal Regulations 761.61 (c). The agreement calls for ''Quantification of PCBs in DSTs, single shell tanks (SSTs), and incoming waste to ensure that the vitrification plant and other ancillary facilities PCB waste acceptance limits and the requirements of the anticipated risk-based disposal approval are met.'' Waste samples will be analyzed for PCBs to satisfy this requirement. This document describes the DQO process undertaken to assure appropriate data will be collected to support management of PCBs and is presented in a DQO format. The DQO process was implemented in accordance with the U.S. Environmental Protection Agency EPA QA/G4, Guidance for the Data Quality Objectives Process (EPA 1994) and the Data Quality Objectives for Sampling and Analyses, HNF-IP-0842, Rev. 1A, Vol. IV, Section 4.16 (Banning 1999).

  2. Interim Basis for PCB Sampling and Analyses

    SciTech Connect

    BANNING, D.L.

    2001-03-20

    This document was developed as an interim basis for sampling and analysis of polychlorinated biphenyls (PCBs) and will be used until a formal data quality objective (DQO) document is prepared and approved. On August 31, 2000, the Framework Agreement for Management of Polychlorinated Biphenyls (PCBs) in Hanford Tank Waste was signed by the U.S. Department of Energy (DOE), the Environmental Protection Agency (EPA), and the Washington State Department of Ecology (Ecology) (Ecology et al. 2000). This agreement outlines the management of double shell tank (DST) waste as Toxic Substance Control Act (TSCA) PCB remediation waste based on a risk-based disposal approval option per Title 40 of the Code of Federal Regulations 761.61 (c). The agreement calls for ''Quantification of PCBs in DSTs, single shell tanks (SSTs), and incoming waste to ensure that the vitrification plant and other ancillary facilities PCB waste acceptance limits and the requirements of the anticipated risk-based disposal approval are met.'' Waste samples will be analyzed for PCBs to satisfy this requirement. This document describes the DQO process undertaken to assure appropriate data will be collected to support management of PCBs and is presented in a DQO format. The DQO process was implemented in accordance with the U.S. Environmental Protection Agency EPA QA/G4, Guidance for the Data Quality Objectives Process (EPA 1994) and the Data Quality Objectives for Sampling and Analyses, HNF-IP-0842, Rev. 1A, Vol. IV, Section 4.16 (Banning 1999).

  3. Transportation systems analyses. Volume 1: Executive summary

    NASA Astrophysics Data System (ADS)

    1992-11-01

    The principal objective is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform crew delivery and return, cargo transfer, cargo delivery and return, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed include, but are not limited to: the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationship between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. Conceptual studies of transportation elements contribute to the systems approach by identifying elements (such as ETO node and transfer/excursion vehicles) needed in current and planned transportation systems. These studies are also a mechanism to integrate the results of relevant parallel studies.

  4. PLT rotating pumped limiter

    SciTech Connect

    Cohen, S.A.; Budny, R.V.; Corso, V.; Boychuck, J.; Grisham, L.; Heifetz, D.; Hosea, J.; Luyber, S.; Loprest, P.; Manos, D.

    1984-07-01

    A limiter with a specially contoured front face and the ability to rotate during tokamak discharges has been installed in a PLT pump duct. These features have been selected to handle the unique particle removal and heat load requirements of ICRF heating and lower-hybrid current-drive experiments. The limiter has been conditioned and commissioned in an ion-beam test stand by irradiation with 1 MW power, 200 ms duration beams of 40 keV hydrogen ions. Operation in PLT during ohmic discharges has proven the ability of the limiter to reduce localized heating caused by energetic electron bombardment and to remove about 2% of the ions lost to the PLT walls and limiters.

  5. PEAK LIMITING AMPLIFIER

    DOEpatents

    Goldsworthy, W.W.; Robinson, J.B.

    1959-03-31

    A peak voltage amplitude limiting system adapted for use with a cascade type amplifier is described. In its detailed aspects, the invention includes an amplifier having at least a first triode tube and a second triode tube, the cathode of the second tube being connected to the anode of the first tube. A peak limiter triode tube has its control grid coupled to the anode of the second tube and its anode connected to the cathode of the second tube. The operation of the limiter is controlled by a bias voltage source connected to the control grid of the limiter tube, and the output of the system is taken from the anode of the second tube.

  6. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Market analyses. 1180.7 Section 1180.7..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For major and significant transactions, applicants shall submit impact analyses (exhibit 12) describing...

  7. 10 CFR 61.13 - Technical analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Technical analyses. 61.13 Section 61.13 Energy NUCLEAR....13 Technical analyses. The specific technical information must also include the following analyses... air, soil, groundwater, surface water, plant uptake, and exhumation by burrowing animals. The...

  8. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 8 2011-10-01 2011-10-01 false Market analyses. 1180.7 Section 1180.7..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For major and significant transactions, applicants shall submit impact analyses (exhibit 12) describing...

  9. 10 CFR 61.13 - Technical analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Technical analyses. 61.13 Section 61.13 Energy NUCLEAR....13 Technical analyses. The specific technical information must also include the following analyses... air, soil, groundwater, surface water, plant uptake, and exhumation by burrowing animals. The...

  10. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or timing of cash flows are uncertain and...

  11. Pawnee Nation Energy Option Analyses

    SciTech Connect

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-31

    introduced two model energy codes Pawnee Nation should consider for adoption. Summary of Current and Expected Future Electricity Usage: The research team provided a summary overview of electricity usage patterns in current buildings and included discussion of known plans for new construction. Utility Options Review: Pawnee Nation electric utility options were analyzed through a four-phase process, which included: 1) summarizing the relevant utility background information; 2) gathering relevant utility assessment data; 3) developing a set of realistic Pawnee electric utility service options, and 4) analyzing the various Pawnee electric utility service options for the Pawnee Energy Team's consideration. III. Findings and Recommendations: Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym, and an exterior wood-fired boiler system at the tribe's main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor market developments in the bio-energy industry and establish contacts with research institutions with which the tribe could potentially partner in grant-funded research initiatives. In addition, a substantial effort by the Kaw and Cherokee tribes is underway to pursue wind development at the Chilocco School Site in northern Oklahoma, where Pawnee is a joint landowner. Pawnee Nation representatives should become actively involved in these development discussions and should explore the potential for joint investment in wind development at the Chilocco site.

  12. Comparison of retrospective analyses of the global ocean heat content

    NASA Astrophysics Data System (ADS)

    Chepurin, Gennady A.; Carton, James A.

    1999-07-01

    In this study, we compare seven retrospective analyses of basin- to global-scale upper ocean temperature. The analyses span a minimum of 10 years during the 50-year period since World War II. Three of the analyses (WOA-94, WHITE, BMRC) are based on objective analysis and thus do not rely on a numerical forecast model. The remaining four (NCEP, WAJSOWICZ, ROSATI, SODA) are based on data assimilation in which the numerical forecast is provided by some form of the Geophysical Fluid Dynamics Laboratory Modular Ocean Model driven by historical winds. The comparison presented here is limited to heat content in the upper 250 m, information that is available for all analyses. The results are presented in three frequency bands: seasonal, interannual (periods of 1-5 years), and decadal (periods of 5-25 years). At seasonal frequencies, all of the analyses are quite similar; otherwise, the differences among analyses are limited to the regions of the western boundary currents and some regions in the Southern Hemisphere. At interannual frequencies, significant differences appear between the objective analyses and the data assimilation analyses. Along the equator in the Pacific, where variability is dominated by El Niño, the objective analyses have somewhat noisier fields, as well as reduced variance prior to 1980 due to lack of observations. Still, the correlation among analyses generally exceeds 80% in this region. Along the equator in the Atlantic, the correlation is lower (30-60%), although inspection of the time series shows that the same biennial progression of warm and cool events appears in all analyses since 1980. In the midlatitude Pacific, agreement among objective analyses and data assimilation analyses is good. The analysis of Rosati et al. [Rosati, A., Gudgel, R., Miyakoda, K., 1995. Decadal analysis produced from an ocean assimilation system. Mon. Weather Rev., 123, 2, 206.] differs somewhat from the others, apparently because in this analysis, the forecast model
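
    The band-by-band comparison can be sketched with a simple filter-and-correlate routine. The following Python fragment is illustrative only: it uses moving-average differences as a crude stand-in for whatever band-pass filtering the study actually used, and synthetic monthly series in place of the analyses.

    ```python
    import numpy as np

    def smooth(x, months):
        # centered moving average; months=1 leaves the series unchanged
        kernel = np.ones(months) / months
        return np.convolve(x, kernel, mode="same")

    def band_correlations(a, b):
        # seasonal (<1 yr), interannual (1-5 yr), decadal (5-25 yr)
        bands = {"seasonal": (1, 12), "interannual": (12, 60),
                 "decadal": (60, 300)}
        out = {}
        for name, (lo, hi) in bands.items():
            fa = smooth(a, lo) - smooth(a, hi)   # crude band-pass
            fb = smooth(b, lo) - smooth(b, hi)
            out[name] = np.corrcoef(fa, fb)[0, 1]
        return out

    # two synthetic 50-year monthly heat-content series that share a
    # seasonal cycle and an interannual signal but differ in noise
    t = np.arange(600)
    rng = np.random.default_rng(1)
    signal = np.sin(2 * np.pi * t / 12) + 0.5 * np.sin(2 * np.pi * t / 48)
    a = signal + 0.3 * rng.normal(size=t.size)
    b = signal + 0.3 * rng.normal(size=t.size)
    print(band_correlations(a, b))
    ```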

  13. Optimal Limited Contingency Planning

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Smith, David E.

    2003-01-01

    For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.
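
    The value of allowing even one branch can be made concrete with a toy two-step problem, solved by brute-force enumeration rather than the paper's POMDP formulation; every number and name below is a made-up assumption.

    ```python
    from itertools import product

    # Toy limited-contingency problem: after the first step we observe
    # o in {0, 1}; the second action pays 1 only if it matches o.
    OBS, ACTS = (0, 1), ("a", "b")
    P_OBS = {0: 0.5, 1: 0.5}
    REWARD = {(0, "a"): 1.0, (0, "b"): 0.0,
              (1, "a"): 0.0, (1, "b"): 1.0}

    def expected_value(policy):
        # policy maps each observation to the second-step action
        return sum(P_OBS[o] * REWARD[o, policy[o]] for o in OBS)

    def best_plan(max_branches):
        # max_branches = 0 forces one action regardless of o (no branch
        # point); max_branches >= 1 permits reacting to the observation
        plans = [dict(zip(OBS, acts)) for acts in product(ACTS, repeat=2)]
        if max_branches == 0:
            plans = [p for p in plans if len(set(p.values())) == 1]
        return max(plans, key=expected_value)

    for k in (0, 1):
        plan = best_plan(k)
        print(k, plan, expected_value(plan))  # 0.5 without a branch, 1.0 with
    ```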

  14. Force Limit System

    NASA Technical Reports Server (NTRS)

    Pawlik, Ralph; Krause, David; Bremenour, Frank

    2011-01-01

    The Force Limit System (FLS) was developed to protect test specimens from inadvertent overload. The load limit value is fully adjustable by the operator and works independently of the test system control as a mechanical (non-electrical) device. When a test specimen is loaded via an electromechanical or hydraulic test system, a chance of an overload condition exists. An overload applied to a specimen could result in irreparable damage to the specimen and/or fixturing. The FLS restricts the maximum load that an actuator can apply to a test specimen. When testing limited-run test articles or using very expensive fixtures, the use of such a device is highly recommended. Test setups typically use electronic peak protection, which can be the source of overload due to malfunctioning components or the inability to react quickly enough to load spikes. The FLS works independently of the electronic overload protection.

  15. Improved limited discrepancy search

    SciTech Connect

    Korf, R.E.

    1996-12-31

    We present an improvement to Harvey and Ginsberg's limited discrepancy search algorithm, which eliminates much of the redundancy in the original by generating each path from the root to the maximum search depth only once. For a complete binary tree of depth d, this reduces the asymptotic complexity from O(((d+2)/2)·2^d) to O(2^d). The savings are much smaller in a partial tree search, or in a heavily pruned tree. The overhead of the improved algorithm on a complete binary tree is only a factor of b/(b - 1) compared to depth-first search. While this constant factor is greater on a heavily pruned tree, this improvement makes limited discrepancy search a viable alternative to depth-first search whenever the entire tree may not be searched. Finally, we present both positive and negative empirical results on the utility of limited discrepancy search for the problem of number partitioning.
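
    A compact sketch of the improved algorithm on a complete binary tree follows (Python, with an invented goal test and string-labelled nodes). The key line is the Korf improvement: the heuristic (left) branch is taken only when the remaining depth can still absorb all required discrepancies, so each leaf is generated exactly once, on the iteration whose discrepancy budget matches its path.

    ```python
    def ilds(node, depth, k, children, is_goal):
        # visit exactly the depth-limited paths containing k discrepancies
        if depth == 0:
            return node if is_goal(node) else None
        left, right = children(node)
        if depth > k:   # Korf's improvement: go left only if k still fits below
            found = ilds(left, depth - 1, k, children, is_goal)
            if found is not None:
                return found
        if k > 0:       # the non-heuristic branch costs one discrepancy
            found = ilds(right, depth - 1, k - 1, children, is_goal)
            if found is not None:
                return found
        return None

    def search(root, depth, children, is_goal):
        # iterate discrepancy budgets 0, 1, ..., depth
        for k in range(depth + 1):
            found = ilds(root, depth, k, children, is_goal)
            if found is not None:
                return found, k
        return None, None

    # toy tree: nodes are bit strings, left appends "0", right appends "1"
    children = lambda s: (s + "0", s + "1")
    print(search("", 4, children, lambda s: s == "0110"))  # ('0110', 2)
    ```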

  16. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques: A Mid-term Report

    NASA Astrophysics Data System (ADS)

    Muller, J.-P.; Yershov, V.; Sidiropoulos, P.; Gwinner, K.; Willner, K.; Fanara, L.; Waelisch, M.; van Gasselt, S.; Walter, S.; Ivanov, A.; Cantini, F.; Morley, J. G.; Sprinks, J.; Giordano, M.; Wardlaw, J.; Kim, J.-R.; Chen, W.-T.; Houghton, R.; Bamford, S.

    2015-10-01

    Understanding the role of different solid surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of tens of centimetres) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the potential to be able to overlay different epochs back to the mid-1970s. Within iMars, a processing system has been developed to generate 3D Digital Terrain Models (DTMs) and corresponding OrthoRectified Images (ORIs) fully automatically from NASA MRO HiRISE and CTX stereo-pairs, which are coregistered to corresponding HRSC ORI/DTMs. In parallel, iMars has developed a fully automated processing chain for co-registering level-1 (EDR) images from all previous NASA orbital missions to these HRSC ORIs; in the case of HiRISE these are further co-registered to previously co-registered CTX-to-HRSC ORIs. Examples will be shown of these multi-resolution ORIs and the application of different data mining algorithms to change detection using these co-registered images. iMars has recently launched a citizen science experiment to evaluate best practices for future citizen scientist validation of such data mining processed results. An example of the iMars website will be shown along with an embedded Version 0 prototype of a webGIS based on OGC standards.

  17. Estimating turbine limit load

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1993-01-01

    A method for estimating turbine limit-load pressure ratio from turbine map information is presented and demonstrated. It is based on a mean line analysis at the last-rotor exit. The required map information includes choke flow rate at all speeds as well as pressure ratio and efficiency at the onset of choke at design speed. One- and two-stage turbines are analyzed to compare the results with those from a more rigorous off-design flow analysis and to show the sensitivities of the computed limit-load pressure ratios to changes in the key assumptions.

  18. Optical limiting materials

    DOEpatents

    McBranch, D.W.; Mattes, B.R.; Koskelo, A.C.; Heeger, A.J.; Robinson, J.M.; Smilowitz, L.B.; Klimov, V.I.; Cha, M.; Sariciftci, N.S.; Hummelen, J.C.

    1998-04-21

    Methanofullerenes, fulleroids and/or other fullerenes chemically altered for enhanced solubility, in liquid solution, and in solid blends with transparent glass (SiO₂) gels or polymers, or semiconducting (conjugated) polymers, are shown to be useful as optical limiters (optical surge protectors). The nonlinear absorption is tunable such that the energy transmitted through such blends saturates at high input energy per pulse over a wide range of wavelengths from 400-1,100 nm by selecting the host material for its absorption wavelength and ability to transfer the absorbed energy into the optical limiting composition dissolved therein. This phenomenon should be generalizable to other compositions than substituted fullerenes. 5 figs.

  19. Limits on nonlinear electrodynamics

    NASA Astrophysics Data System (ADS)

    Fouché, M.; Battesti, R.; Rizzo, C.

    2016-05-01

    In this paper we set a framework in which experiments whose goal is to test QED predictions can be used in a more general way to test nonlinear electrodynamics (NLED) which contains low-energy QED as a special case. We review some of these experiments and we establish limits on the different free parameters by generalizing QED predictions in the framework of NLED. We finally discuss the implications of these limits on bound systems and isolated charged particles for which QED has been widely and successfully tested.

  20. 40 CFR 63.7515 - When must I conduct subsequent performance tests or fuel analyses?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Industrial, Commercial, and Institutional Boilers and Process Heaters Testing, Fuel Analyses, and Initial Compliance... or process heater continues to meet the emission limit for particulate matter, HCl, mercury, or...

  1. Spectral analyses of solar-like stars

    NASA Astrophysics Data System (ADS)

    Doyle, Amanda P.

    2015-03-01

    Accurate stellar parameters are important not just to understand the stars themselves, but also for understanding the planets that orbit them. Despite the availability of high quality spectra, there are still many uncertainties in stellar spectroscopy. In this thesis, the finer details of spectroscopic analyses are discussed and critically evaluated, with a focus on improving the stellar parameters. Using high resolution, high signal-to-noise HARPS spectra, accurate parameters were determined for 22 WASP stars. It is shown that there is a limit to the accuracy of stellar parameters that can be achieved, despite using high S/N spectra. It is also found that the selection of spectral lines used and the accuracy of atomic data are crucial, and different line lists can result in different values of the parameters. Different spectral analysis methods often give vastly different results, even for the same spectrum of the same star. Here it is shown that many of these discrepancies can be explained by the choice of lines used and by the various assumptions made. This will enable a more reliable homogeneous study of solar-like stars in the future. The Rossiter-McLaughlin effect observed for transiting exoplanets often requires prior knowledge of the projected rotational velocity (vsini). This is usually provided via spectroscopy; however, this method has uncertainties, as spectral lines are also broadened by photospheric velocity fields known as "macroturbulence". Using rotational splitting frequencies for 28 Kepler stars that were provided via asteroseismology, accurate vsini values have been determined. By inferring the macroturbulence for 28 Kepler stars, it was possible to obtain a new calibration between macroturbulence, effective temperature and surface gravity. Therefore macroturbulence, and thus vsini, can now be determined with confidence for stars that do not have asteroseismic data available. New spectroscopic vsini values were then determined for the WASP planet host

  2. Finite Element analyses of soil bioengineered slopes

    NASA Astrophysics Data System (ADS)

    Tamagnini, Roberto; Switala, Barbara Maria; Sudan Acharya, Madhu; Wu, Wei; Graf, Frank; Auer, Michael; te Kamp, Lothar

    2014-05-01

    Soil bioengineering methods are not only effective from an economical point of view, but they are also interesting as fully ecological solutions. The presented project aims to define a numerical model which includes the impact of vegetation on slope stability, considering both mechanical and hydrological effects. In this project, a constitutive model has been developed that accounts for the multi-phase nature of the soil, namely the partly saturated condition, and it also includes the effects of a biological component. The constitutive equation is implemented in the Finite Element (FE) software Comes-Geo with an implicit integration scheme that accounts for the collapse of the soil structure due to wetting. The mathematical formulation of the constitutive equations is introduced by means of thermodynamics, and it simulates the growth of the biological system over time. The numerical code is then applied in the analysis of an ideal rainfall-induced landslide. The slope is analyzed for vegetated and non-vegetated conditions. The final results allow the impact of vegetation on slope stability to be assessed quantitatively. This allows drawing conclusions and choosing whether it is worthwhile to use soil bioengineering methods in slope stabilization instead of traditional approaches. The application of the FE method shows some advantages with respect to the commonly used limit equilibrium analyses, because it can account for the real coupled strain-diffusion nature of the problem. The mechanical strength of roots is in fact influenced by the stress evolution within the slope. Moreover, the FE method does not need a pre-definition of any failure surface. The FE method can also be used in monitoring the progressive failure of the soil bioengineered system, as it calculates the displacements and strains of the model slope. The preliminary study results show that the formulated equations can be useful for analysis and evaluation of different soil bio

  3. On categorizations in analyses of alcohol teratogenesis.

    PubMed Central

    Sampson, P D; Streissguth, A P; Bookstein, F L; Barr, H M

    2000-01-01

    In biomedical scientific investigations, expositions of findings are conceptually simplest when they comprise comparisons of discrete groups of individuals or involve discrete features or characteristics of individuals. But the descriptive benefits of categorization become outweighed by their limitations in studies involving dose-response relationships, as in many teratogenic and environmental exposure studies. This article addresses a pair of categorization issues concerning the effects of prenatal alcohol exposure that have important public health consequences: the labeling of individuals as fetal alcohol syndrome (FAS) versus fetal alcohol effects (FAE) or alcohol-related neurodevelopmental disorder (ARND), and the categorization of prenatal exposure dose by thresholds. We present data showing that patients with FAS and others with FAE do not have meaningfully different behavioral performance, standardized scores of IQ, arithmetic and adaptive behavior, or secondary disabilities. Similarly overlapping distributions on measures of executive functioning offer a basis for identifying alcohol-affected individuals in a manner that does not simply reflect IQ deficits. At the other end of the teratological continuum, we turn to the reporting of threshold effects in dose-response relationships. Here we illustrate the importance of multivariate analyses using data from the Seattle, Washington, longitudinal prospective study on alcohol and pregnancy. Relationships between many neurobehavioral outcomes and measures of prenatal alcohol exposure are monotone without threshold down to the lowest nonzero levels of exposure, a finding consistent with reports from animal studies. In sum, alcohol effects on the developing human brain appear to be a continuum without threshold when dose and behavioral effects are quantified appropriately. PMID:10852839

  4. Intellectually Limited Mothers.

    ERIC Educational Resources Information Center

    Kaminer, Ruth K.; Cohen, Herbert J.

    The paper examines whether a relationship exists between intellectual limitation on the mother's part and unfavorable outcomes for her children. The scope of the problem is examined and the difficulties inherent in estimating prevalence are noted. The issue of child neglect, rather than abuse, is shown to be a major problem among institutionalized…

  5. On gas detonation limits

    SciTech Connect

    Nikolaev, Yu.A.; Gapanov, O.A.

    1995-11-01

    A one-dimensional model for a multiheaded detonation has been constructed, accounting for friction, heat losses, and the decay of gas velocity pulsations. The existence of detonation limits in narrow channels has been shown numerically. The calculation results are in satisfactory agreement with experimental data.

  6. The Outer Limits: English.

    ERIC Educational Resources Information Center

    Tyler, Barbara R.; Biesekerski, Joan

    The Quinmester course "The Outer Limits" involves an exploration of unknown worlds, mental and physical, through fiction and nonfiction. Its purpose is to focus attention on the ongoing conquest of the frontiers of the mind, the physical world, and outer space. The subject matter includes identification and investigation of unknown worlds in the…

  7. Limits to Stability

    ERIC Educational Resources Information Center

    Cottey, Alan

    2012-01-01

    The author reflects briefly on what limited degree of global ecological stability and human cultural stability may be achieved, provided that humanity retains hope and does not give way to despair or hide in denial. These thoughts were triggered by a recent conference on International Stability and Systems Engineering. (Contains 5 notes.)

  8. The Limits of Laughter.

    ERIC Educational Resources Information Center

    Mindess, Harvey

    1983-01-01

    Three incidents which elucidate the limits of laughter are described. Most persons enjoy humor as comic relief, but when humor strikes a blow at something they hold dear, they find it very hard to laugh. People are upset by an irreverent attitude toward things they hold in esteem. (RM)

  9. Limitations in scatter propagation

    NASA Astrophysics Data System (ADS)

    Lampert, E. W.

    1982-04-01

    A short description of the main scatter propagation mechanisms is presented; troposcatter, meteor burst communication and chaff scatter. For these propagation modes, in particular for troposcatter, the important specific limitations discussed are: link budget and resulting hardware consequences, diversity, mobility, information transfer and intermodulation and intersymbol interference, frequency range and future extension in frequency range for troposcatter, and compatibility with other services (EMC).

  10. Defined by Limitations

    ERIC Educational Resources Information Center

    Arriola, Sonya; Murphy, Katy

    2010-01-01

    Undocumented students are a population defined by limitations. Their lack of legal residency and any supporting paperwork (e.g., Social Security number, government issued identification) renders them essentially invisible to the American and state governments. They cannot legally work. In many states, they cannot legally drive. After the age of…

  11. Defining structural limit zones

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.

    1978-01-01

    A method for defining limit loads uses the probability distribution of the largest load occurring during given time intervals. The method is compatible with both deterministic and probabilistic structural design criteria. It also rationally accounts for the fact that the longer a structure is exposed to a random loading environment, the greater the possibility that it will experience an extreme load.

  12. Fracture mechanics validity limits

    NASA Technical Reports Server (NTRS)

    Lambert, Dennis M.; Ernst, Hugo A.

    1994-01-01

    Fracture behavior is characterized by a dramatic loss of strength compared to elastic deformation behavior. Fracture parameters have been developed, and each exhibits a range within which it is valid for predicting growth. Each is limited by the assumptions made in its development: all are defined within a specific context. For example, the stress intensity parameter, K, and the crack driving force, G, are derived using an assumption of linear elasticity. To use K or G, the zone of plasticity must be small compared to the physical dimensions of the object being loaded. This ensures an elastic response, and in this context K and G will work well. Rice's J-integral has been used beyond the limits imposed on K and G. J requires an assumption of nonlinear elasticity, which is not characteristic of real material behavior, but is thought to be a reasonable approximation if unloading is kept to a minimum. As well, the constraint cannot change dramatically (typically, the crack extension is limited to ten percent of the initial remaining ligament length). Rice et al. investigated the properties required of J-type parameters, J_x, and showed that the time rate, dJ_x/dt, must not be a function of the crack extension rate, da/dt. Ernst devised the modified-J parameter, J_M, that meets this criterion. J_M correlates fracture data to much higher crack growth than does J. Ultimately, a limit of the validity of J_M is anticipated, and this has been estimated to be at a crack extension of about 40 percent of the initial remaining ligament length. None of the various parameters can be expected to describe fracture in an environment of gross plasticity, in which case the process is better described by deformation parameters, e.g., stress and strain. In the current study, various schemes to identify the onset of plasticity-dominated behavior, i.e., the end of fracture mechanics validity, are presented. Each validity limit parameter is developed in
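
    The small-scale-yielding condition underlying the validity of K and G can be stated compactly. The following is the standard Irwin plastic-zone estimate, added here for orientation rather than taken from the abstract:

    ```latex
    % Irwin estimate of the crack-tip plastic-zone size (plane stress):
    % r_p must stay small relative to crack length a and ligament W - a
    % for K (and G = K^2/E') to remain valid.
    \[
      r_p \approx \frac{1}{2\pi}\left(\frac{K}{\sigma_y}\right)^{2},
      \qquad r_p \ll a, \; (W - a)
    \]
    ```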

  13. Limits in decision making arise from limits in memory retrieval.

    PubMed

    Giguère, Gyslain; Love, Bradley C

    2013-05-01

    Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people's memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people's test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers. PMID:23610402
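
    The core claim, that a small stochastic retrieval set caps accuracy no matter how large memory grows, is easy to reproduce in simulation. The sketch below uses made-up Gaussian categories and a nearest-retrieved-mean rule; none of it is the paper's actual materials or model code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    MEANS = (0.0, 1.0)   # two overlapping categories, unit variance
    BAYES = 0.6915       # Phi(0.5): the best achievable accuracy here

    def accuracy(memory_size, n_retrieved, trials=50_000):
        memory = [rng.normal(m, 1.0, memory_size) for m in MEANS]
        labels = rng.integers(2, size=trials)
        x = rng.normal(np.take(MEANS, labels), 1.0)   # test items
        # retrieve a small random sample per category; classify by the
        # nearer retrieved-sample mean (limited, stochastic retrieval)
        m0 = rng.choice(memory[0], (trials, n_retrieved)).mean(axis=1)
        m1 = rng.choice(memory[1], (trials, n_retrieved)).mean(axis=1)
        pred = (np.abs(x - m1) < np.abs(x - m0)).astype(int)
        return (pred == labels).mean()

    for size in (100, 1_000, 10_000):
        # more training data never closes the gap to the optimum
        print(size, round(accuracy(size, n_retrieved=2), 3), "vs", BAYES)
    ```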

  14. MELCOR analyses for accident progression issues

    SciTech Connect

    Dingman, S.E.; Shaffer, C.J.; Payne, A.C.; Carmel, M.K. )

    1991-01-01

    Results of calculations performed with MELCOR and HECTR in support of the NUREG-1150 study are presented in this report. The analyses examined a wide range of issues. The analyses included integral calculations covering an entire accident sequence, as well as calculations that addressed specific issues that could affect several accident sequences. The results of the analyses for Grand Gulf, Peach Bottom, LaSalle, and Sequoyah are described, and the major conclusions are summarized. 23 refs., 69 figs., 8 tabs.

  15. Electron/proton spectrometer certification documentation analyses

    NASA Technical Reports Server (NTRS)

    Gleeson, P.

    1972-01-01

    A compilation of analyses generated during the development of the electron-proton spectrometer for the Skylab program is presented. The data documents the analyses required by the electron-proton spectrometer verification plan. The verification plan was generated to satisfy the ancillary hardware requirements of the Apollo Applications program. The certification of the spectrometer requires that various tests, inspections, and analyses be documented, approved, and accepted by reliability and quality control personnel of the spectrometer development program.

  16. Database-Driven Analyses of Astronomical Spectra

    NASA Astrophysics Data System (ADS)

    Cami, Jan

    2012-03-01

    species to the fullerene species C60 and C70 [4]. Given the large number and variety of molecules detected in space, molecular infrared spectroscopy can be used to study pretty much any astrophysical environment that is not too energetic to dissociate the molecules. At the lowest energies, it is interesting to note that molecules such as CN have been used to measure the temperature of the Cosmic Microwave Background (see e.g., Ref. 15). The great diagnostic potential of infrared molecular spectroscopy comes at a price though. Extracting the physical parameters from the observations requires expertise in knowing how various physical processes and instrumental characteristics play together in producing the observed spectra. In addition to the astronomical aspects, this often includes interpreting and understanding the limitations of laboratory data and quantum-chemical calculations; the study of the interaction of matter with radiation at microscopic scales (called radiative transfer, akin to ray tracing) and the effects of observing (e.g., smoothing and resampling) on the resulting spectra and possible instrumental effects (e.g., fringes). All this is not trivial. To make matters worse, observational spectra often contain many components, and might include spectral contributions stemming from very different physical conditions. Fully analyzing such observations is thus a time-consuming task that requires mastery of several techniques. And with ever-increasing rates of observational data acquisition, it seems clear that in the near future, some form of automation is required to handle the data stream. It is thus appealing to consider what part of such analyses could be done without too much human intervention. Two different aspects can be separated: the first step involves simply identifying the molecular species present in the observations. Once the molecular inventory is known, we can try to extract the physical parameters from the observed spectral properties. For both
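
    One concrete step named above, smoothing and resampling a model spectrum to match an observation, is simple to automate. The fragment below is a bare-bones sketch with invented numbers (a single Gaussian line, a made-up resolving power); real pipelines must also handle line-spread functions, fringes, and the rest.

    ```python
    import numpy as np

    def observe(wave, flux, resolving_power, wave_out):
        # Gaussian smoothing to instrumental resolution, FWHM = lambda/R
        lam = wave.mean()
        sigma = lam / resolving_power / 2.3548
        dw = wave[1] - wave[0]
        half = int(4 * sigma / dw)
        x = np.arange(-half, half + 1) * dw
        kernel = np.exp(-0.5 * (x / sigma) ** 2)
        kernel /= kernel.sum()
        smoothed = np.convolve(flux, kernel, mode="same")
        return np.interp(wave_out, wave, smoothed)   # resample to data grid

    wave = np.linspace(10.0, 11.0, 2000)    # wavelength grid (microns)
    flux = 1.0 - 0.8 * np.exp(-0.5 * ((wave - 10.5) / 0.002) ** 2)
    obs = observe(wave, flux, resolving_power=600,
                  wave_out=np.linspace(10.0, 11.0, 200))
    print(obs.min())   # the narrow line is diluted at low resolution
    ```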

  17. Limited health literacy in advanced kidney disease.

    PubMed

    Taylor, Dominic M; Bradley, John A; Bradley, Clare; Draper, Heather; Johnson, Rachel; Metcalfe, Wendy; Oniscu, Gabriel; Robb, Matthew; Tomson, Charles; Watson, Chris; Ravanan, Rommel; Roderick, Paul

    2016-09-01

    Limited health literacy may reduce the ability of patients with advanced kidney disease to understand their disease and treatment and take part in shared decision making. In dialysis and transplant patients, limited health literacy has been associated with low socioeconomic status, comorbidity, and mortality. Here, we investigated the prevalence and associations of limited health literacy using data from the United Kingdom-wide Access to Transplantation and Transplant Outcome Measures (ATTOM) program. Incident dialysis, incident transplant, and transplant wait-listed patients ages 18 to 75 were recruited from 2011 to 2013 and data were collected from patient questionnaires and case notes. A score >2 in the Single-Item Literacy Screener was used to define limited health literacy. Univariate and multivariate analyses were performed to identify patient factors associated with limited health literacy. We studied 6842 patients, 2621 were incident dialysis, 1959 were wait-listed, and 2262 were incident transplant. Limited health literacy prevalence was 20%, 15%, and 12% in each group, respectively. Limited health literacy was independently associated with low socioeconomic status, poor English fluency, and comorbidity. However, transplant wait-listing, preemptive transplantation, and live-donor transplantation were associated with increasing health literacy. PMID:27521115
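
    Analyses of this kind are commonly run as logistic regressions of the binary literacy outcome on patient factors. A hypothetical sketch with synthetic data follows; the variable names and effect sizes are stand-ins, not ATTOM fields or results.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.integers(0, 2, n),   # low socioeconomic status (0/1)
        rng.integers(0, 2, n),   # poor English fluency (0/1)
        rng.integers(0, 4, n),   # comorbidity count
    ])
    # synthetic outcome: limited health literacy (SILS score > 2)
    logit_p = -2.0 + 0.8 * X[:, 0] + 1.1 * X[:, 1] + 0.3 * X[:, 2]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))

    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    print(np.exp(model.params[1:]))   # adjusted odds ratios per factor
    ```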

  18. Telescopic limiting magnitudes

    NASA Technical Reports Server (NTRS)

    Schaefer, Bradley E.

    1990-01-01

    The prediction of the magnitude of the faintest star visible through a telescope by a visual observer is a difficult problem in physiology. Many prediction formulas have been advanced over the years, but most do not even consider the magnification used. Here, the prediction algorithm problem is attacked with two complementary approaches: (1) First, a theoretical algorithm was developed based on physiological data for the sensitivity of the eye. This algorithm also accounts for the transmission of the atmosphere and the telescope, the brightness of the sky, the color of the star, the age of the observer, the aperture, and the magnification. (2) Second, 314 observed values for the limiting magnitude were collected as a test of the formula. It is found that the formula does accurately predict the average observed limiting magnitudes under all conditions.

  19. Chief executives. Off limits.

    PubMed

    Dudley, Nigel

    2002-04-11

    Trust remuneration committees are paying chief executives above the limits recommended in Department of Health guidance. In doing so they are ignoring the government's stated policy of fair pay for all in the NHS and their duty of accountability. Excessive awards made by a remuneration committee can be subject to judicial review and overturned. The health secretary should review the workings of trust remuneration committees and ensure that their decisions are transparent to the public. PMID:11989336

  20. Quantum limits of thermometry

    SciTech Connect

    Stace, Thomas M.

    2010-07-15

    The precision of typical thermometers consisting of N particles scales as ~1/√N. For high-precision thermometry and thermometric standards, this presents an important theoretical noise floor. Here it is demonstrated that thermometry may be mapped onto the problem of phase estimation, and using techniques from optimal phase estimation, it follows that the scaling of the precision of a thermometer may in principle be improved to ~1/N, representing a Heisenberg limit to thermometry.
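
    In standard quantum-metrology notation (assumed here, not quoted from the paper), the two scalings follow from propagating the phase-estimation error into temperature:

    ```latex
    % Temperature precision from estimating a T-dependent phase \phi(T)
    % with N probes: shot-noise (SQL) versus Heisenberg scaling.
    \[
      \delta T_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}
        \left|\frac{\partial\phi}{\partial T}\right|^{-1},
      \qquad
      \delta T_{\mathrm{Heis}} \sim \frac{1}{N}
        \left|\frac{\partial\phi}{\partial T}\right|^{-1}.
    \]
    ```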

  1. Limits of social mobilization.

    PubMed

    Rutherford, Alex; Cebrian, Manuel; Dsouza, Sohan; Moro, Esteban; Pentland, Alex; Rahwan, Iyad

    2013-04-16

    The Internet and social media have enabled the mobilization of large crowds to achieve time-critical feats, ranging from mapping crises in real time, to organizing mass rallies, to conducting search-and-rescue operations over large geographies. Despite significant success, selection bias may lead to inflated expectations of the efficacy of social mobilization for these tasks. What are the limits of social mobilization, and how reliable is it in operating at these limits? We build on recent results on the spatiotemporal structure of social and information networks to elucidate the constraints they pose on social mobilization. We use the DARPA Network Challenge as our working scenario, in which social media were used to locate 10 balloons across the United States. We conduct high-resolution simulations for referral-based crowdsourcing and obtain a statistical characterization of the population recruited, geography covered, and time to completion. Our results demonstrate that the outcome is plausible without the presence of mass media but lies at the limit of what time-critical social mobilization can achieve. Success relies critically on highly connected individuals willing to mobilize people in distant locations, overcoming the local trapping of diffusion in highly dense areas. However, even under these highly favorable conditions, the risk of unsuccessful search remains significant. These findings have implications for the design of better incentive schemes for social mobilization. They also call for caution in estimating the reliability of this capability. PMID:23576719
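
    The recruitment dynamics can be caricatured as a branching process: mobilization succeeds only while each recruit brings in, on average, more than one other. The toy below is far simpler than the paper's spatial simulations, and every parameter is invented.

    ```python
    import random

    def mobilize(mean_recruits=1.1, target_pop=1_000_000, seed=1):
        # Galton-Watson sketch: each active recruit makes two contacts
        # per "hop", each joining with probability mean_recruits/2
        random.seed(seed)
        frontier, total, hops = 1, 1, 0
        while frontier and total < target_pop:
            invites = 2 * frontier
            frontier = sum(random.random() < mean_recruits / 2
                           for _ in range(invites))
            total += frontier
            hops += 1
        return total, hops   # small totals mean the cascade died out

    for trial in range(5):
        print(mobilize(seed=trial))  # success vs early die-out varies
    ```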

  2. Strategic arms limitation

    SciTech Connect

    Greb, G.A.; Johnson, G.W.

    1983-01-01

    Since 1969, the focus of the Soviet-American arms control process has been on limiting the numbers and sizes of both defensive and offensive strategic systems. The format for this effort has been the Strategic Arms Limitations Talks (SALT) and more recently the Strategic Arms Reduction Talks (START). Both sides came to these negotiations convinced that nuclear arsenals had grown so large that some form of mutual restraint was needed. Although the SALT/START process has been slow and ponderous, it has produced several concrete agreements and collateral benefits. The 1972 ABM Treaty restricts the deployment of ballistic missile defense systems, the 1972 Interim Agreement places a quantitative freeze on each side's land based and sea based strategic launchers, and the as yet unratified 1979 SALT II Treaty sets numerical limits on all offensive strategic systems and sublimits on MIRVed systems. Collateral benefits include improved verification procedures, working definitions and counting rules, and permanent bureaucratic apparatus that enhance stability and increase the chances for achieving additional agreements.

  3. Elastic limit of silicane.

    PubMed

    Peng, Qing; De, Suvranu

    2014-10-21

    Silicane is a fully hydrogenated silicene, a counterpart of graphene, with promising applications in hydrogen storage with capacities larger than 6 wt%. Knowledge of its elastic limit is critical for its applications as well as for tailoring its electronic properties by strain. Here we investigate the mechanical response of silicane to various strains using first-principles calculations based on density functional theory. We illustrate that non-linear elastic behavior is prominent in two-dimensional nanomaterials, as opposed to bulk materials. The elastic limits defined by ultimate tensile strains are 0.22, 0.28, and 0.25 along the armchair, zigzag, and biaxial directions, respectively, an increase of 29%, 33%, and 24%, respectively, with reference to silicene. The in-plane stiffness and Poisson ratio are reduced by factors of 16% and 26%, respectively. However, hydrogenation/dehydrogenation has little effect on its ultimate tensile strengths. We obtained high order elastic constants for a rigorous continuum description of the nonlinear elastic response. The second, third, fourth, and fifth order elastic constants are valid within strain ranges of 0.02, 0.08, 0.13, and 0.21, respectively. The pressure effect on the second order elastic constants and Poisson's ratio was predicted from the third order elastic constants. Our results could provide a safe guide for promising applications and for strain-engineering the functions and properties of silicane monolayers. PMID:25190587
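
    The higher-order elastic description referred to above is, in the usual one-dimensional sketch, a Taylor expansion of the strain-energy density; the form below is the generic textbook expansion, not the paper's fitted tensor components:

    ```latex
    % Strain-energy density expanded in strain, and the resulting
    % nonlinear stress-strain relation; truncating after C_2 ... C_5
    % limits the strain range over which the description is faithful.
    \[
      \Phi(\varepsilon) = \tfrac{1}{2!}C_2\varepsilon^{2}
        + \tfrac{1}{3!}C_3\varepsilon^{3}
        + \tfrac{1}{4!}C_4\varepsilon^{4}
        + \tfrac{1}{5!}C_5\varepsilon^{5},
      \qquad
      \sigma = \frac{\partial\Phi}{\partial\varepsilon}
        = C_2\varepsilon + \tfrac{1}{2}C_3\varepsilon^{2}
        + \tfrac{1}{6}C_4\varepsilon^{3}
        + \tfrac{1}{24}C_5\varepsilon^{4}.
    \]
    ```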

  4. LIMITS ON QUAOAR'S ATMOSPHERE

    SciTech Connect

    Fraser, Wesley C.; Gwyn, Stephen; Kavelaars, J. J.; Trujillo, Chad; Stephens, Andrew W.; Gimeno, German

    2013-09-10

    Here we present high-cadence photometry, taken by the Acquisition Camera on Gemini South, of a close passage by the ~540 km radius Kuiper belt object (50000) Quaoar of an r' = 20.2 background star. Observations before and after the event show that the apparent impact parameter of the event was 0.019″ ± 0.004″, corresponding to a close approach of 580 ± 120 km to the center of Quaoar. No signatures of occultation by either Quaoar's limb or its potential atmosphere are detectable in the relative photometry of Quaoar and the target star, which were unresolved during closest approach. From this photometry we are able to put constraints on any potential atmosphere Quaoar might have. Using a Markov chain Monte Carlo and likelihood approach, we place pressure upper limits on sublimation-supported, isothermal atmospheres of pure N₂, CO, and CH₄. For N₂ and CO, the upper limit surface pressures are 1 and 0.7 µbar, respectively. The surface temperature required for such low sublimation pressures is ~33 K, much lower than Quaoar's mean temperature of ~44 K measured by others. We conclude that Quaoar cannot have an isothermal N₂ or CO atmosphere. We cannot eliminate the possibility of a CH₄ atmosphere, but place upper surface pressure and mean temperature limits of ~138 nbar and ~44 K, respectively.

  5. Deriving exposure limits

    NASA Astrophysics Data System (ADS)

    Sliney, David H.

    1990-07-01

    Historically, many different agencies and standards organizations have proposed laser occupational exposure limits (ELs) or maximum permissible exposure (MPE) levels. Although some safety standards have been limited in scope to manufacturer system safety performance standards or to codes of practice, most have included occupational ELs. Initially, in the 1960s, attention was drawn to setting ELs; however, as greater experience accumulated in the use of lasers and some accident experience had been gained, safety procedures were developed. It became clear by 1971, after the first decade of laser use, that detailed hazard evaluation of each laser environment was too complex for most users, and a scheme of hazard classification evolved. Today most countries follow a scheme of four major hazard classifications as defined in Document WS 825 of the International Electrotechnical Commission (IEC). The classifications and the associated accessible emission limits (AELs) were based upon the ELs. The EL and AEL values today are in surprisingly good agreement worldwide. There exists a greater range of safety requirements for the user for each class of laser. The current MPEs (i.e., ELs) and their basis are highlighted in this presentation.

  6. Limits of social mobilization

    PubMed Central

    Rutherford, Alex; Cebrian, Manuel; Dsouza, Sohan; Moro, Esteban; Pentland, Alex; Rahwan, Iyad

    2013-01-01

    The Internet and social media have enabled the mobilization of large crowds to achieve time-critical feats, ranging from mapping crises in real time, to organizing mass rallies, to conducting search-and-rescue operations over large geographies. Despite significant success, selection bias may lead to inflated expectations of the efficacy of social mobilization for these tasks. What are the limits of social mobilization, and how reliable is it in operating at these limits? We build on recent results on the spatiotemporal structure of social and information networks to elucidate the constraints they pose on social mobilization. We use the DARPA Network Challenge as our working scenario, in which social media were used to locate 10 balloons across the United States. We conduct high-resolution simulations for referral-based crowdsourcing and obtain a statistical characterization of the population recruited, geography covered, and time to completion. Our results demonstrate that the outcome is plausible without the presence of mass media but lies at the limit of what time-critical social mobilization can achieve. Success relies critically on highly connected individuals willing to mobilize people in distant locations, overcoming the local trapping of diffusion in highly dense areas. However, even under these highly favorable conditions, the risk of unsuccessful search remains significant. These findings have implications for the design of better incentive schemes for social mobilization. They also call for caution in estimating the reliability of this capability. PMID:23576719
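
    The flavor of such mobilization limits can be illustrated with a toy branching-process simulation. This is not the authors' high-resolution spatial model: the Poisson offspring assumption and all parameter values are invented for illustration.

```python
import numpy as np

def simulate_recruitment(branching_mean=0.96, max_depth=20, n_seeds=100,
                         rng=np.random.default_rng(1)):
    """Toy referral cascade: each recruit invites a Poisson-distributed
    number of friends. Returns the total population mobilized. Sub-critical
    branching (mean < 1) mimics the local trapping of diffusion the paper
    describes; success then hinges on rare highly connected individuals."""
    total, frontier = n_seeds, n_seeds
    for _ in range(max_depth):
        children = rng.poisson(branching_mean, size=frontier).sum()
        if children == 0:
            break
        total += children
        frontier = children
    return total

# Usage: distribution of cascade sizes across 1000 toy campaigns
sizes = [simulate_recruitment() for _ in range(1000)]
print(np.mean(sizes), np.percentile(sizes, [5, 95]))
```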

  7. Aviation System Analysis Capability Executive Assistant Analyses

    NASA Technical Reports Server (NTRS)

    Roberts, Eileen; Kostiuk, Peter

    1999-01-01

    This document describes the analyses that may be incorporated into the Aviation System Analysis Capability Executive Assistant. The document will be used as a discussion tool to enable NASA and other integrated aviation system entities to evaluate, discuss, and prioritize analyses.

  8. 49 CFR 1572.107 - Other analyses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... conviction for a serious crime not listed in 49 CFR 1572.103, or a period of foreign or domestic imprisonment... 49 Transportation 9 2011-10-01 2011-10-01 false Other analyses. 1572.107 Section 1572.107... ASSESSMENTS Standards for Security Threat Assessments § 1572.107 Other analyses. (a) TSA may determine that...

  9. 49 CFR 1572.107 - Other analyses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... conviction for a serious crime not listed in 49 CFR 1572.103, or a period of foreign or domestic imprisonment... 49 Transportation 9 2010-10-01 2010-10-01 false Other analyses. 1572.107 Section 1572.107... ASSESSMENTS Standards for Security Threat Assessments § 1572.107 Other analyses. (a) TSA may determine that...

  10. Amplitude analyses of charmless B decays

    NASA Astrophysics Data System (ADS)

    Latham, Thomas

    2016-05-01

    We present recent results from the LHCb experiment on amplitude analyses of charmless decays of B0 and Bs0 mesons to two vector mesons. The measurements obtained include branching fractions and polarization fractions, as well as CP asymmetries. The analyses use the data recorded by the LHCb experiment during Run 1 of the LHC.

  11. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 8 2012-10-01 2012-10-01 false Market analyses. 1180.7 Section 1180.7..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For... identify and address relevant markets and issues, and provide additional information as requested by...

  12. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 8 2014-10-01 2014-10-01 false Market analyses. 1180.7 Section 1180.7..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For... identify and address relevant markets and issues, and provide additional information as requested by...

  13. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 8 2013-10-01 2013-10-01 false Market analyses. 1180.7 Section 1180.7..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For... identify and address relevant markets and issues, and provide additional information as requested by...

  14. 10 CFR 61.13 - Technical analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... air, soil, groundwater, surface water, plant uptake, and exhumation by burrowing animals. The analyses... expected exposures due to routine operations and likely accidents during handling, storage, and disposal of... 10 Energy 2 2013-01-01 2013-01-01 false Technical analyses. 61.13 Section 61.13 Energy...

  15. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  16. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  17. On the skill of high frequency precipitation analyses

    NASA Astrophysics Data System (ADS)

    Kann, A.; Meirold-Mautner, I.; Schmid, F.; Kirchengast, G.; Fuchsberger, J.

    2014-10-01

    The ability of radar-rain gauge merging algorithms to precisely analyse convective precipitation patterns is of high interest for many applications, e.g. hydrological modelling. However, due to drawbacks of methods like cross-validation and due to the limited availability of reference datasets on high temporal and spatial scales, an adequate validation is usually hardly possible, especially on an operational basis. The present study evaluates the skill of very high resolution and frequently updated precipitation analyses (rapid-INCA) by means of a very dense station network (WegenerNet) operated in a limited domain of the south-eastern parts of Austria (Styria). Based on case studies and a longer-term validation over the convective season 2011, a general underestimation of the rapid-INCA precipitation amounts is shown, although the temporal and spatial variability of the errors is, by the convective nature of the events, high. The contribution of the rain gauge measurements to the analysis skill is crucial. However, the capability of the analyses to precisely assess the convective precipitation distribution predominantly depends on the representativeness of the stations under the prevalent convective conditions.
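
    For context, the simplest possible radar-rain gauge merging step is a mean-field bias correction, sketched below. Operational schemes such as rapid-INCA are far more elaborate (spatial interpolation of residuals, terrain effects), so this is only a minimal illustration under invented data.

```python
import numpy as np

def mean_field_bias_merge(radar, gauge_vals, gauge_idx):
    """Scale the radar field by the mean ratio of gauge to radar
    accumulation at the gauge locations (mean-field bias correction)."""
    radar_at_gauges = radar.flat[gauge_idx]
    mask = radar_at_gauges > 0.1            # ignore near-zero radar pixels
    if not mask.any():
        return radar
    bias = gauge_vals[mask].sum() / radar_at_gauges[mask].sum()
    return bias * radar

# Usage: toy 100x100 field whose radar estimate is biased low by 30%
rng = np.random.default_rng(2)
truth = rng.gamma(2.0, 1.0, (100, 100))
radar = 0.7 * truth
gauge_idx = rng.choice(truth.size, 25, replace=False)
merged = mean_field_bias_merge(radar, truth.flat[gauge_idx], gauge_idx)
print(np.mean(merged) / np.mean(truth))   # ~1.0 after correction
```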

  18. Strategic arms limitation

    NASA Astrophysics Data System (ADS)

    Allen Greb, G.; Johnson, Gerald W.

    1983-10-01

    Following World War II, American scientists and politicians proposed in the Baruch plan a radical solution to the problem of nuclear weapons: to eliminate them forever under the auspices of an international nuclear development authority. The Soviets, who as yet did not possess the bomb, rejected this plan. Another approach, suggested by Secretary of War Henry Stimson, to negotiate directly with the Soviet Union, was not accepted by the American leadership. These initial arms limitation failures both reflected and exacerbated the hostile political relationship of the superpowers in the 1950s and 1960s. Since 1969, the more modest focus of the Soviet-American arms control process has been on limiting the numbers and sizes of both defensive and offensive strategic systems. The format for this effort has been the Strategic Arms Limitation Talks (SALT) and more recently the Strategic Arms Reduction Talks (START). Both sides came to these negotiations convinced that nuclear arsenals had grown so large that some form of mutual restraint was needed. Although the SALT/START process has been slow and ponderous, it has produced several concrete agreements and collateral benefits. The 1972 ABM Treaty restricts the deployment of ballistic missile defense systems, the 1972 Interim Agreement places a quantitative freeze on each side's land-based and sea-based strategic launchers, and the as yet unratified 1979 SALT II Treaty sets numerical limits on all offensive strategic systems and sublimits on MIRVed systems. Collateral benefits include improved verification procedures, working definitions and counting rules, and a permanent bureaucratic apparatus which enhances stability and increases the chances for achieving additional agreements.

  19. Structural equation modeling: strengths, limitations, and misconceptions.

    PubMed

    Tomarken, Andrew J; Waller, Niels G

    2005-01-01

    Because structural equation modeling (SEM) has become a very popular data-analytic technique, it is important for clinical scientists to have a balanced perception of its strengths and limitations. We review several strengths of SEM, with a particular focus on recent innovations (e.g., latent growth modeling, multilevel SEM models, and approaches for dealing with missing data and with violations of normality assumptions) that underscore how SEM has become a broad data-analytic framework with flexible and unique capabilities. We also consider several limitations of SEM and some misconceptions that it tends to elicit. Major themes emphasized are the problem of omitted variables, the importance of lower-order model components, potential limitations of models judged to be well fitting, the inaccuracy of some commonly used rules of thumb, and the importance of study design. Throughout, we offer recommendations for the conduct of SEM analyses and the reporting of results. PMID:17716081

  20. Fundamental limitations for quantum and nanoscale thermodynamics.

    PubMed

    Horodecki, Michał; Oppenheim, Jonathan

    2013-01-01

    The relationship between thermodynamics and statistical physics is valid in the thermodynamic limit, when the number of particles becomes very large. Here we study thermodynamics in the opposite regime: at the nanoscale, and when quantum effects become important. Applying results from quantum information theory, we construct a theory of thermodynamics in these limits. We derive general criteria for thermodynamical state transitions and, as special cases, find two free energies: one that quantifies the deterministically extractable work from a small system in contact with a heat bath, and the other that quantifies the reverse process. We find that there are fundamental limitations on work extraction from non-equilibrium states, owing to finite size effects and quantum coherences. This implies that thermodynamical transitions are generically irreversible at this scale. As one application of these methods, we analyse the efficiency of small heat engines and find that they are irreversible during the adiabatic stages of the cycle. PMID:23800725
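
    In the single-shot framework this line of work builds on, the two free energies are usually expressed through Rényi relative entropies to the thermal state γ. The following is a schematic reconstruction from memory, not a quotation of the paper's result, and the notation is assumed.

```latex
% Schematic single-shot free energies (k is Boltzmann's constant, T the
% bath temperature, \gamma the thermal state, \Pi_\rho the projector onto
% the support of \rho); the paper's precise statement may differ:
W_{\mathrm{extract}}(\rho) \;\approx\; kT\, D_0(\rho\|\gamma),
\qquad
W_{\mathrm{form}}(\rho) \;\approx\; kT\, D_\infty(\rho\|\gamma),
% with
D_0(\rho\|\gamma) = -\log \mathrm{Tr}\,(\Pi_\rho\, \gamma),
\qquad
D_\infty(\rho\|\gamma) = \log \min\{\lambda : \rho \le \lambda\gamma\}.
% Nanoscale irreversibility is the generic gap W_extract < W_form.
```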

  1. Fault current limiter

    DOEpatents

    Darmann, Francis Anthony

    2013-10-08

    A fault current limiter (FCL) includes a series of high-permeability posts that collectively define the core of the FCL. A DC coil, which saturates a portion of the high-permeability posts, surrounds the complete structure outside an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, which carry the AC current, are wound on insulating formers and electrically interconnected such that the magnetic fields produced by the AC coils in the corresponding high-permeability cores oppose one another. Insulation barriers between phases improve the dielectric withstand properties of the dielectric medium.

  2. Advancing the quantification of humid tropical forest cover loss with multi-resolution optical remote sensing data: Sampling & wall-to-wall mapping

    NASA Astrophysics Data System (ADS)

    Broich, Mark

    Humid tropical forest cover loss is threatening the sustainability of ecosystem goods and services as vast forest areas are rapidly cleared for industrial-scale agriculture and tree plantations. Despite the importance of humid tropical forest in the provision of ecosystem services and economic development opportunities, the spatial and temporal distribution of forest cover loss across large areas is not well quantified. Here I improve the quantification of humid tropical forest cover loss using two remote sensing-based methods: sampling and wall-to-wall mapping. In all of the presented studies, the integration of coarse spatial, high temporal resolution data with moderate spatial, low temporal resolution data enables advances in quantifying forest cover loss in the humid tropics. Imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) is used as the source of coarse spatial resolution, high temporal resolution data, and imagery from the Landsat Enhanced Thematic Mapper Plus (ETM+) sensor is used as the source of moderate spatial, low temporal resolution data. In a first study, I compare the precision of different sampling designs for the Brazilian Amazon, using the annual deforestation maps derived by the Brazilian Space Agency for reference. I show that sampling designs can provide reliable deforestation estimates; furthermore, sampling designs guided by MODIS data can provide more efficient estimates than the systematic design used for the United Nations Food and Agricultural Organization Forest Resource Assessment 2010. Sampling approaches, such as the one demonstrated, are viable in regions where data limitations, such as cloud contamination, limit exhaustive mapping methods. Cloud-contaminated regions experiencing high rates of change include Insular Southeast Asia, specifically Indonesia and Malaysia. Due to persistent cloud cover, forest cover loss in Indonesia has only been mapped at a 5-10 year interval using photo interpretation of single
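
    A minimal sketch of the stratified sampling estimator underlying such designs is given below. The strata names, block sizes, and loss fractions are invented for illustration, with MODIS assumed to define the strata and Landsat interpretation assumed to supply the sampled loss fractions.

```python
import numpy as np

def stratified_loss_estimate(strata):
    """Stratified estimator of total forest-cover-loss area.
    `strata` maps name -> (N_blocks, block_area_km2, sampled_loss_fractions)."""
    total, var = 0.0, 0.0
    for name, (N, area, fractions) in strata.items():
        f = np.asarray(fractions, dtype=float)
        n = f.size
        total += N * area * f.mean()
        # variance of the stratum total under simple random sampling,
        # with finite-population correction
        var += (N * area) ** 2 * f.var(ddof=1) / n * (1 - n / N)
    return total, np.sqrt(var)

# Usage with made-up strata: high vs low MODIS-indicated change likelihood
strata = {
    "high_change": (500, 100.0, [0.12, 0.08, 0.15, 0.10, 0.09]),
    "low_change": (4500, 100.0, [0.01, 0.0, 0.02, 0.0, 0.01]),
}
est, se = stratified_loss_estimate(strata)
print(f"loss = {est:.0f} +/- {se:.0f} km2")
```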

  3. METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS

    EPA Science Inventory

    Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such ...
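
    Although the record refers to SAS, the underlying statistics are language-agnostic. Below is a sketch, in Python, of a maximum-likelihood fit for left-censored lognormal data, the kind of approach such comparisons evaluate against simple constant substitution; the LOD value and sample sizes are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_lognormal_mle(detects, n_censored, lod):
    """Fit a lognormal to data with non-detects below the LOD, treating
    censored observations via their probability mass below log(LOD)
    instead of substituting a constant such as LOD/2."""
    logs = np.log(detects)

    def nll(params):
        mu, sigma = params[0], abs(params[1]) + 1e-9
        ll_detect = norm.logpdf(logs, mu, sigma).sum()
        ll_cens = n_censored * norm.logcdf(np.log(lod), mu, sigma)
        return -(ll_detect + ll_cens)

    res = minimize(nll, x0=[logs.mean(), logs.std() + 0.1], method="Nelder-Mead")
    return res.x  # (mu, sigma) on the log scale

# Usage: simulated sample of 60 values censored at LOD = 0.5
rng = np.random.default_rng(3)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=60)
detects = sample[sample >= 0.5]
mu, sigma = censored_lognormal_mle(detects, (sample < 0.5).sum(), lod=0.5)
print(mu, sigma)
```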

  4. Limitations of inclusive fitness.

    PubMed

    Allen, Benjamin; Nowak, Martin A; Wilson, Edward O

    2013-12-10

    Until recently, inclusive fitness has been widely accepted as a general method to explain the evolution of social behavior. Affirming and expanding earlier criticism, we demonstrate that inclusive fitness is instead a limited concept, which exists only for a small subset of evolutionary processes. Inclusive fitness assumes that personal fitness is the sum of additive components caused by individual actions. This assumption does not hold for the majority of evolutionary processes or scenarios. To sidestep this limitation, inclusive fitness theorists have proposed a method using linear regression. On the basis of this method, it is claimed that inclusive fitness theory (i) predicts the direction of allele frequency changes, (ii) reveals the reasons for these changes, (iii) is as general as natural selection, and (iv) provides a universal design principle for evolution. In this paper we evaluate these claims, and show that all of them are unfounded. If the objective is to analyze whether mutations that modify social behavior are favored or opposed by natural selection, then no aspect of inclusive fitness theory is needed. PMID:24277847
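
    The regression method at issue can be stated compactly. The following schematic uses my notation, not the authors', and shows the partial-regression form in which Hamilton's rule is usually cast.

```latex
% Fitness of individual i regressed on its own genotype g_i and the
% average genotype g'_i of its social partners:
w_i = \alpha + \beta_1 g_i + \beta_2 g'_i + \varepsilon_i
% Identifying cost c = -\beta_1 and benefit b = \beta_2, the claimed
% condition for the behavior to be favored is Hamilton's rule, with r
% the relatedness coefficient:
r\,b > c \quad\Longleftrightarrow\quad r\,\beta_2 + \beta_1 > 0 .
```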

  5. (Limiting the greenhouse effect)

    SciTech Connect

    Rayner, S.

    1991-01-07

    Traveler attended the Dahlem Research Conference organized by the Freie Universität Berlin. The subject of the conference was Limiting the Greenhouse Effect: Options for Controlling Atmospheric CO₂ Accumulation. Like all Dahlem workshops, this was a meeting of scientific experts, although the disciplines represented were broader than usual, ranging across anthropology, economics, international relations, forestry, engineering, and atmospheric chemistry. Participation by scientists from developing countries was limited. The conference was divided into four multidisciplinary working groups. Traveler acted as moderator for Group 3, which examined the question "What knowledge is required to tackle the principal social and institutional barriers to reducing CO₂ emissions?" The working rapporteur was Jesse Ausubel of Rockefeller University. Other working groups examined the economic costs, benefits, and technical feasibility of options to reduce emissions per unit of energy service; the options for reducing energy use per unit of GNP; and the significance of linkages between strategies to reduce CO₂ emissions and other goals. Draft reports of the working groups are appended. Overall, the conference identified a number of important research needs in all four areas. It may prove particularly important in bringing the social and institutional research needs relevant to climate change closer to the forefront of the scientific and policy communities than hitherto.

  6. Limits to biofuels

    NASA Astrophysics Data System (ADS)

    Johansson, S.

    2013-06-01

    Biofuel production is dependent upon agriculture and forestry systems, and the expectations of future biofuel potential are high. A study of global food production and biofuel production from edible crops implies that biofuel produced from the edible parts of crops leads to a global deficit of food. This is rather well known, which is why there is a strong urge to develop biofuel systems that make use of residues or forest products, to eliminate competition with food production. However, biofuel from agro-residues still depends upon the crop production system, and there are many parameters to deal with when investigating the sustainability of biofuel production. There is a theoretical limit to how much biofuel can be obtained globally from agro-residues, and it amounts to approximately one third of today's use of fossil fuels in the transport sector. In reality this theoretical potential may be eliminated by the energy use in the biomass-conversion technologies and production systems, depending on what type of assessment method is used. By surveying existing studies on biofuel conversion, the theoretical limit of biofuels from the 2010 agricultural production was found to be either non-existent, due to energy consumption in the conversion process, or up to 2-6000 TWh (biogas from residues and waste, and ethanol from woody biomass) in the more optimistic cases.

  7. Limitations of inclusive fitness

    PubMed Central

    Allen, Benjamin; Nowak, Martin A.; Wilson, Edward O.

    2013-01-01

    Until recently, inclusive fitness has been widely accepted as a general method to explain the evolution of social behavior. Affirming and expanding earlier criticism, we demonstrate that inclusive fitness is instead a limited concept, which exists only for a small subset of evolutionary processes. Inclusive fitness assumes that personal fitness is the sum of additive components caused by individual actions. This assumption does not hold for the majority of evolutionary processes or scenarios. To sidestep this limitation, inclusive fitness theorists have proposed a method using linear regression. On the basis of this method, it is claimed that inclusive fitness theory (i) predicts the direction of allele frequency changes, (ii) reveals the reasons for these changes, (iii) is as general as natural selection, and (iv) provides a universal design principle for evolution. In this paper we evaluate these claims, and show that all of them are unfounded. If the objective is to analyze whether mutations that modify social behavior are favored or opposed by natural selection, then no aspect of inclusive fitness theory is needed. PMID:24277847

  8. Relativistic corrections to fractal analyses of the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Célérier, M.-N.; Thieberger, R.

    2001-02-01

    The effect of curvature on the results of fractal analyses of the galaxy distribution is investigated. We show that, if the universe satisfies the criteria of a wide class of parabolic homogeneous models, observers measuring the fractal index with the integrated conditional density procedure may use the Hubble formula, without having to allow for curvature, out to distances of 600 Mpc, and possibly far beyond. This contradicts a previous claim by Ribeiro that, in the Einstein-de Sitter case, relativistic corrections should be taken into account at much smaller scales. We state, for the class of cosmological models under study, and give grounds for conjecture for others, that the averaging procedure has a smoothing effect and that, therefore, the redshift-distance relation provides an upper limit to the relativistic corrections involved in such analyses.
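
    The integrated conditional density procedure mentioned here has a standard form, reproduced schematically below with assumed notation (D is the fractal index).

```latex
% Integrated conditional density: the average number of galaxies within
% distance r of a galaxy, divided by the enclosing volume,
\Gamma^*(r) \;=\; \frac{3}{4\pi r^3}\,\langle N(<r)\rangle
\;\propto\; r^{\,D-3},
% so the log-log slope D-3 yields the fractal index D. The relativistic
% question is whether r may safely be taken from the Hubble formula.
```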

  9. Analyses of particles in beryllium by ion imaging

    SciTech Connect

    Price, C.W.; Norberg, J.C. (Evans and Associates, Redwood City, CA)

    1989-10-06

    Ion microanalysis using a ¹³³Cs⁺ primary ion beam and SIMS has sufficiently high sensitivity that it can be used to analyze Be for trace amounts of most elements. High sensitivity is important because O, C, and other elements have low solubilities in Be, and reliable analyses of these elements become difficult as they approach their solid solubility limits (about 6 appm for O; C is also suspected to be within this range). Because of the low solubilities of these elements, major portions of their total concentrations can be contained in particles. Quantitative depth-profile analyses using ion-implanted standards are ideal for analyzing the Be matrix, but if particles exist, supplementary techniques such as stereology are required to determine the amounts of the elements that are associated with the particles. This paper will demonstrate the use of ion imaging to identify various types of particles and determine their spatial distributions. 4 refs., 3 figs.
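
    Quantitative depth profiling with ion-implanted standards rests on the relative sensitivity factor relation, shown schematically below; this is standard SIMS practice, and the symbols are assumed rather than taken from the paper.

```latex
% Impurity concentration from secondary-ion intensities via a relative
% sensitivity factor (RSF):
C_{\mathrm{imp}} \;=\; \mathrm{RSF} \times \frac{I_{\mathrm{imp}}}{I_{\mathrm{mat}}},
% where I_imp and I_mat are the secondary-ion intensities of the impurity
% and the Be matrix. The RSF is fixed by requiring the integrated depth
% profile of the implanted standard to equal its known implanted dose.
```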

  10. Functional analyses and treatment of precursor behavior.

    PubMed

    Najdowski, Adel C; Wallace, Michele D; Ellsworth, Carrie L; MacAleese, Alicia N; Cleveland, Jackie M

    2008-01-01

    Functional analysis has been demonstrated to be an effective method to identify environmental variables that maintain problem behavior. However, there are cases when conducting functional analyses of severe problem behavior may be contraindicated. The current study applied functional analysis procedures to a class of behavior that preceded severe problem behavior (precursor behavior) and evaluated treatments based on the outcomes of the functional analyses of precursor behavior. Responding for all participants was differentiated during the functional analyses, and individualized treatments eliminated precursor behavior. These results suggest that functional analysis of precursor behavior may offer an alternative, indirect method to assess the operant function of severe problem behavior. PMID:18468282

  11. EU-FP7-iMARS: Analysis of Mars Multi-Resolution Images Using Auto-Coregistration Data Mining and Crowd Source Techniques: Processed Results - a First Look

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Tao, Yu; Sidiropoulos, Panagiotis; Gwinner, Klaus; Willner, Konrad; Fanara, Lida; Waehlisch, Marita; van Gasselt, Stephan; Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Ivanov, Anton; Cantini, Federico; Wardlaw, Jessica; Morley, Jeremy; Sprinks, James; Giordano, Michele; Marsh, Stuart; Kim, Jungrack; Houghton, Robert; Bamford, Steven

    2016-06-01

    Understanding planetary atmosphere-surface exchange and extra-terrestrial-surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay image data and derived information from different epochs, back in time to the mid 1970s, to examine changes through time, such as the recent discovery of mass movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters. Within the EU FP-7 iMars project, we have developed a fully automated multi-resolution DTM processing chain, called the Coregistration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP) [Tao et al., this conference], which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX & HiRISE DTMs & ORIs, DLR [Gwinner et al., 2015] have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed by [Sidiropoulos & Muller, this conference]. Using the HRSC map products (both mosaics and orbital strips) as a map-base it is being applied to many of the 400,000 level-1 EDR images taken by the 4 NASA orbital cameras. In particular, the NASA Viking Orbiter camera (VO), Mars Orbiter Camera (MOC), Context Camera (CTX) as well as the High Resolution Imaging Science Experiment (HiRISE) back to 1976. A webGIS has been developed [van Gasselt et al., this conference] for displaying this time sequence of imagery and will be demonstrated showing an example from one of the HRSC quadrangle map-sheets. Automated quality control [Sidiropoulos & Muller, 2015] techniques are applied to screen for
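
    As a small, generic illustration of the kind of primitive that automated co-registration chains build upon (this is not CASP-GO or the project's actual algorithm), phase correlation recovers the translation between two overlapping images from the peak of their normalized cross-power spectrum.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer-pixel translation between two overlapping
    images via phase correlation: the inverse FFT of the normalized
    cross-power spectrum peaks at the offset. Real planetary pipelines
    add sub-pixel refinement, tie-point matching, and orthorectification
    on top of primitives like this one."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative offsets
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

# Usage: a synthetic scene shifted by (7, -12) pixels
rng = np.random.default_rng(4)
scene = rng.standard_normal((256, 256))
shifted = np.roll(scene, (7, -12), axis=(0, 1))
print(phase_correlation_shift(shifted, scene))  # -> (7, -12)
```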

  12. Limits of custodial symmetry

    SciTech Connect

    Chivukula, R. Sekhar; Simmons, Elizabeth H.; Di Chiara, Stefano; Foadi, Roshan

    2009-11-01

    We introduce a toy model implementing the proposal of using a custodial symmetry to protect the Zb_Lb_L coupling from large corrections. This 'doublet-extended standard model' adds a weak doublet of fermions (including a heavy partner of the top quark) to the particle content of the standard model in order to implement an O(4)×U(1)_X ≈ SU(2)_L×SU(2)_R×P_LR×U(1)_X symmetry in the top-quark mass generating sector. This symmetry is softly broken to the gauged SU(2)_L×U(1)_Y electroweak symmetry by a Dirac mass M for the new doublet; adjusting the value of M allows us to explore the range of possibilities between the O(4)-symmetric (M → 0) and standard-model-like (M → ∞) limits. In this simple model, we find that the experimental limits on the Zb_Lb_L coupling favor smaller M, while the presence of a potentially sizable negative contribution to αT strongly favors large M. Comparison with precision electroweak data shows that the heavy partner of the top quark must be heavier than about 3.4 TeV, making it difficult to search for at the LHC. This result demonstrates that electroweak data strongly limit the amount by which the custodial symmetry of the top-quark mass generating sector can be enhanced relative to the standard model. Using an effective field theory calculation, we illustrate how the leading contributions to αT, αS, and the Zb_Lb_L coupling in this model arise from an effective operator coupling right-handed top quarks to the Z boson, and how the effects on these observables are correlated. We contrast this toy model with extra-dimensional models in which the extended custodial symmetry is invoked to control the size of additional contributions to αT and the Zb_Lb_L coupling, while leaving the standard model contributions essentially unchanged.

  13. Limits of Executive Control

    PubMed Central

    Verbruggen, Frederick; McAndrew, Amy; Weidemann, Gabrielle; Stevens, Tobias; McLaren, Ian P. L.

    2016-01-01

    Cognitive-control theories attribute action control to executive processes that modulate behavior on the basis of expectancy or task rules. In the current study, we examined corticospinal excitability and behavioral performance in a go/no-go task. Go and no-go trials were presented in runs of five, and go and no-go runs alternated predictably. At the beginning of each trial, subjects indicated whether they expected a go trial or a no-go trial. Analyses revealed that subjects immediately adjusted their expectancy ratings when a new run started. However, motor excitability was primarily associated with the properties of the previous trial, rather than the predicted properties of the current trial. We also observed a large latency cost at the beginning of a go run (i.e., reaction times were longer for the first trial in a go run than for the second trial). These findings indicate that actions in predictable environments are substantially influenced by previous events, even if this influence conflicts with conscious expectancies about upcoming events. PMID:27000177

  14. 7 CFR 1400.204 - Limited partnerships, limited liability partnerships, limited liability companies, corporations...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Limited partnerships, limited liability partnerships... AND SUBSEQUENT CROP, PROGRAM, OR FISCAL YEARS Payment Eligibility § 1400.204 Limited partnerships, limited liability partnerships, limited liability companies, corporations, and other similar...

  15. 7 CFR 1400.204 - Limited partnerships, limited liability partnerships, limited liability companies, corporations...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Limited partnerships, limited liability partnerships... AND SUBSEQUENT CROP, PROGRAM, OR FISCAL YEARS Payment Eligibility § 1400.204 Limited partnerships, limited liability partnerships, limited liability companies, corporations, and other similar...

  16. Nature limits filarial transmission

    PubMed Central

    Chandra, Goutam

    2008-01-01

    Lymphatic filariasis, caused by Wuchereria bancrofti, Brugia malayi and B. timori, is a public health problem of considerable magnitude in the tropics and subtropics. Presently 1.3 billion people are at risk of lymphatic filariasis (LF) infection and about 120 million people are affected in 83 countries. In this context it is worth mentioning that 'nature' itself limits filarial transmission to a great extent in a number of ways, such as by reducing vector populations and parasitic load. Possibilities to utilize these natural checks on filariasis should be explored, and if disturbances of nature, such as indiscriminate urbanization and deforestation, the creation of sites favourable for the breeding of filarial vectors, unsanitary conditions, and water pollution with organic matter, are reduced below the threshold level, we will benefit greatly. Understanding the factors related to the natural control of filariasis described in this article may help in adopting effective control strategies. PMID:18500974

  17. Physical limits to magnetogenetics.

    PubMed

    Meister, Markus

    2016-01-01

    This is an analysis of how magnetic fields affect biological molecules and cells. It was prompted by a series of prominent reports regarding magnetism in biological systems. The first claims to have identified a protein complex that acts like a compass needle to guide magnetic orientation in animals (Qin et al., 2016). Two other articles report magnetic control of membrane conductance by attaching ferritin to an ion channel protein and then tugging the ferritin or heating it with a magnetic field (Stanley et al., 2015; Wheeler et al., 2016). Here I argue that these claims conflict with basic laws of physics. The discrepancies are large: from 5 to 10 log units. If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors. The paramagnetic nature of protein complexes is found to seriously limit their utility for engineering magnetically sensitive cells. PMID:27529126
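
    The style of argument can be reproduced with a one-line energy comparison. The moment value below is a rough literature-scale assumption rather than a figure from the paper, and the field strength is likewise an illustrative choice.

```python
# Order-of-magnitude check in the spirit of the paper's argument: compare
# the magnetic energy of a paramagnetic ferritin particle in a laboratory
# field with thermal energy at body temperature.
MU_B = 9.274e-24    # Bohr magneton, J/T
K_B = 1.381e-23     # Boltzmann constant, J/K

moment = 300 * MU_B  # assumed ferritin moment (~300 Bohr magnetons)
B = 0.05             # assumed applied field, tesla (tens of millitesla)
T = 310.0            # body temperature, K

ratio = moment * B / (K_B * T)
print(f"magnetic/thermal energy ratio ~ {ratio:.1e}")  # a few percent of kT
```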

  18. 7 CFR 94.102 - Analyses available.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... analyses for total ash, fat by acid hydrolysis, moisture, salt, protein, beta-carotene, catalase... glycol, SLS, and zeolex. There are also tests for starch, total sugars, sugar profile, whey,...

  19. 7 CFR 94.102 - Analyses available.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... analyses for total ash, fat by acid hydrolysis, moisture, salt, protein, beta-carotene, catalase... glycol, SLS, and zeolex. There are also tests for starch, total sugars, sugar profile, whey,...

  20. Quality control considerations in performing washability analyses

    SciTech Connect

    Graham, R.D.

    1984-10-01

    The author describes, in considerable detail, the procedures for carrying out washability analyses as laid down in ASTM Standard Test Method D4371. These include sampling, sample preparation, hydrometer standardisation, washability testing, and analysis of specific gravity fractions.