Science.gov

Sample records for limits multiresolution analyses

  1. MATLAB implementation of W-matrix multiresolution analyses

    SciTech Connect

    Kwong, Man Kam

    1997-01-01

    We present a MATLAB toolbox for multiresolution analysis based on the W-transform introduced by Kwong and Tang. The toolbox contains basic commands to perform forward and inverse transforms on finite 1D and 2D signals of arbitrary length, to perform multiresolution analysis of given signals to a specified number of levels, to visualize the wavelet decomposition, and to perform compression. Examples of numerical experiments are also discussed.
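
    The toolbox itself is not reproduced in this record; as a rough sketch of the same workflow (multilevel forward transform, simple compression by thresholding, inverse transform), here is the analogous sequence in Python using PyWavelets, with an ordinary Haar wavelet standing in for the W-transform.

```python
# Sketch of a multiresolution-analysis workflow analogous to the one the
# toolbox describes; PyWavelets' Haar wavelet stands in for the W-transform.
import numpy as np
import pywt

signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)

# Forward transform to a specified number of levels.
coeffs = pywt.wavedec(signal, 'haar', level=4)  # [approx, detail4, ..., detail1]

# Crude compression: zero out small detail coefficients.
threshold = 0.05
compressed = [coeffs[0]] + [pywt.threshold(c, threshold, mode='hard')
                            for c in coeffs[1:]]

# Inverse transform reconstructs the (compressed) signal.
reconstructed = pywt.waverec(compressed, 'haar')
print(np.max(np.abs(signal - reconstructed)))
```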

  2. Evaluating Climate Causation of Conflict in Darfur Using Multi-temporal, Multi-resolution Satellite Image Datasets With Novel Analyses

    NASA Astrophysics Data System (ADS)

    Brown, I.; Wennbom, M.

    2013-12-01

    Climate change, population growth and changes in traditional lifestyles have led to instabilities in traditional demarcations between neighboring ethnic and religious groups in the Sahel region. This has resulted in a number of conflicts as groups resort to arms to settle disputes. Such disputes often centre on, or are justified by, competition for resources. The conflict in Darfur has been controversially explained by resource scarcity resulting from climate change. Here we analyse established methods of using satellite imagery to assess vegetation health in Darfur. Multi-decadal time series of observations are available using low spatial resolution visible-near infrared imagery. Typically, normalized difference vegetation index (NDVI) analyses are produced to describe changes in vegetation 'greenness' or 'health'. Such approaches have been widely used to evaluate the long term development of vegetation in relation to climate variations across a wide range of environments from the Arctic to the Sahel. These datasets typically measure peak NDVI observed over a given interval and may introduce bias. It is furthermore unclear how the spatial organization of sparse vegetation may affect low resolution NDVI products. We develop and assess alternative measures of vegetation including descriptors of the growing season, wetness and resource availability. Expanding the range of parameters used in the analysis reduces our dependence on peak NDVI. Furthermore, these descriptors provide a better characterization of the growing season than the single NDVI measure. Using multi-sensor data we combine high temporal/moderate spatial resolution data with low temporal/high spatial resolution data to improve the spatial representativity of the observations and to provide improved spatial analysis of vegetation patterns. The approach places the high resolution observations in the NDVI context space using a longer time series of lower resolution imagery. The vegetation descriptors
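
    For reference, the NDVI underlying these analyses is the standard band ratio NDVI = (NIR - Red)/(NIR + Red); a minimal sketch follows (the band arrays and the peak-NDVI statistic below are illustrative placeholders, not the study's datasets).

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps guards against 0/0

# Peak NDVI over a time series (axis 0 = acquisition date): the single
# statistic the abstract notes can introduce bias when used alone.
stack = np.random.rand(12, 64, 64)  # placeholder monthly NDVI images
peak = stack.max(axis=0)
```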

  3. Multiresolution Topological Simplification

    PubMed Central

    Xia, Kelin; Zhao, Zhixiong

    2015-01-01

    Persistent homology has been advocated as a new strategy for the topological simplification of complex data. However, it is computationally intractable for large data sets. In this work, we introduce multiresolution persistent homology for tackling large datasets. Our basic idea is to match the resolution with the scale of interest so as to create a topological microscopy for the underlying data. We adjust the resolution via a rigidity density-based filtration. The proposed multiresolution topological analysis is validated by the study of a complex RNA molecule. PMID:26222626
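
    The paper's rigidity density-based filtration is not shown in this abstract; purely to illustrate the persistent-homology machinery it adjusts, a generic Vietoris-Rips computation with the ripser package (an assumed stand-in, not the authors' code) looks like this:

```python
# Generic persistent-homology sketch (Vietoris-Rips filtration on a point
# cloud); the paper's rigidity-density filtration would replace this step.
import numpy as np
from ripser import ripser

points = np.random.rand(100, 3)            # placeholder point cloud
diagrams = ripser(points, maxdim=1)['dgms']
for dim, dgm in enumerate(diagrams):
    print(f"H{dim}: {len(dgm)} persistence pairs")
```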

  4. Research potential and limitations of trace analyses of cremated remains.

    PubMed

    Harbeck, Michaela; Schleuder, Ramona; Schneider, Julius; Wiechmann, Ingrid; Schmahl, Wolfgang W; Grupe, Gisela

    2011-01-30

    Human cremation is a common funeral practice all over the world and will presumably become an even more popular choice for interment in the future. Mainly for purposes of identification, there is presently a growing need to perform trace analyses such as DNA or stable isotope analyses on human remains after cremation in order to clarify pending questions in civil or criminal court cases. The aim of this study was to experimentally test the potential and limitations of DNA and stable isotope analyses when conducted on cremated remains. For this purpose, tibiae from modern cattle were experimentally cremated by incinerating the bones in increments of 100°C until a maximum of 1000°C was reached. In addition, cremated human remains were collected from a modern crematory. The samples were investigated to determine the level of DNA preservation and stable isotope values (C and N in collagen, C and O in the structural carbonate, and Sr in apatite). Furthermore, we assessed the integrity of microstructural organization, appearance under UV-light, collagen content, as well as the mineral and crystalline organization. This was conducted in order to provide a general background with which to explain observed changes in the trace analysis data sets. The goal is to develop an efficacious screening method for determining at which degree of burning bone still retains its original biological signals. We found that stable isotope analysis of the tested light elements in bone is only possible up to a heat exposure of 300°C, while the isotopic signal from strontium remains unaltered even in bones exposed to very high temperatures. DNA analyses seem theoretically possible up to a heat exposure of 600°C but cannot be advised in every case because of the increased risk of contamination. While the macroscopic colour and UV-fluorescence of cremated bone give hints about the temperature exposure of the bone's outer surface, its histological appearance can be used as a reliable indicator for the
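
    The temperature thresholds above suggest a simple screening rule; the sketch below merely encodes the abstract's reported cut-offs (the function and its return values are illustrative, not a published protocol).

```python
def feasible_analyses(max_burn_temp_c):
    """Rough screen based on the thresholds reported in this study:
    light-element stable isotopes up to ~300 C, DNA (in principle, with
    contamination caveats) up to ~600 C, Sr isotopes unaffected by heat."""
    feasible = ['Sr isotopes']
    if max_burn_temp_c <= 300:
        feasible.append('C/N/O stable isotopes')
    if max_burn_temp_c <= 600:
        feasible.append('DNA (contamination risk rises with temperature)')
    return feasible

print(feasible_analyses(400))  # Sr isotopes and (cautiously) DNA only
```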

  5. The Limited Informativeness of Meta-Analyses of Media Effects.

    PubMed

    Valkenburg, Patti M

    2015-09-01

    In this issue of Perspectives on Psychological Science, Christopher Ferguson reports on a meta-analysis examining the relationship between children's video game use and several outcome variables, including aggression and attention deficit symptoms (Ferguson, 2015, this issue). In this commentary, I compare Ferguson's nonsignificant effect sizes with earlier meta-analyses on the same topics that yielded larger, significant effect sizes. I argue that Ferguson's choice of partial effect sizes is unjustified on both methodological and theoretical grounds. I then plead for a more constructive debate on the effects of violent video games on children and adolescents. Until now, this debate has been dominated by two camps with diametrically opposed views on the effects of violent media on children. However, even the earliest media effects studies tell us that children can react quite differently to the same media content. Thus, if researchers truly want to understand how media affect children, rather than fight for the presence or absence of effects, they need to adopt a perspective that takes differential susceptibility to media effects more seriously. PMID:26386007

  6. Multiresolution image gathering and restoration

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
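
    The Wiener-matrix filter described here is derived from the image-gathering model and is not reproduced in this abstract; as a generic baseline for comparison only, a classical local-statistics Wiener restoration in SciPy looks like this:

```python
import numpy as np
from scipy.signal import wiener

# Stand-in for a blurred/noisy acquisition (placeholder data).
image = np.random.rand(128, 128)
noisy = image + 0.1 * np.random.randn(128, 128)

# Classical adaptive Wiener filter over a 5x5 neighbourhood; a baseline,
# not the Wiener-matrix filter of the paper.
restored = wiener(noisy, mysize=(5, 5))
```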

  7. Multiresolution Simulations of Photoinjectors

    SciTech Connect

    Mihalcea, D.; Bohn, C. L.; Terzic, B.

    2006-11-27

    We report a successful implementation of a three-dimensional wavelet-based solver for Poisson's equation with Dirichlet boundary conditions, optimized for use in particle-in-cell beam dynamics simulations. We explain how the new algorithm works and the advantages it brings to accelerator simulations. The solver is integrated into a full photoinjector-simulation code (Impact-T), and the code is then benchmarked by comparing its output against that of other codes (verification) and against laboratory measurements (validation). We also simulated the AES/JLab photoinjector using a suite of codes. This activity revealed certain performance limitations and their causes.
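
    The wavelet-based solver itself is not given in this abstract; for orientation, the target problem (Poisson's equation with, here, homogeneous Dirichlet boundaries) can be set up with a plain finite-difference sketch like the following, which a wavelet or multigrid solver would then accelerate.

```python
import numpy as np

def poisson_dirichlet(rho, h, iters=2000):
    """Jacobi iteration for  -laplace(phi) = rho  on a square grid,
    with phi = 0 on the boundary (homogeneous Dirichlet)."""
    phi = np.zeros_like(rho)
    for _ in range(iters):
        phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:] +
                                  h * h * rho[1:-1, 1:-1])
    return phi

rho = np.zeros((65, 65)); rho[32, 32] = 1.0   # point-like charge density
phi = poisson_dirichlet(rho, h=1.0 / 64)
```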

  8. Multiresolution Simulations of Photoinjectors

    NASA Astrophysics Data System (ADS)

    Mihalcea, D.; Bohn, C. L.; Terzić, B.

    2006-11-01

    We report a successful implementation of a three-dimensional wavelet-based solver for Poisson's equation with Dirichlet boundary conditions, optimized for use in particle-in-cell beam dynamics simulations. We explain how the new algorithm works and the advantages it brings to accelerator simulations. The solver is integrated into a full photoinjector-simulation code (Impact-T), and the code is then benchmarked by comparing its output against that of other codes (verification) and against laboratory measurements (validation). We also simulated the AES/JLab photoinjector using a suite of codes. This activity revealed certain performance limitations and their causes.

  9. Multiresolution analysis of SAR data

    NASA Astrophysics Data System (ADS)

    Hummel, Robert

    1993-01-01

    The 'Multiresolution Analysis of SAR Data' program supported research work in five areas. Geometric hashing theory can now be viewed as a Bayesian approach to object recognition. False alarm rates can be greatly reduced by using certain enhancements and modifications developed under this project. Geometric hashing algorithms now exist for the Connection Machine. Recognition of synthetically-produced dot arrays was demonstrated using a model base of 1024 objects. The work represents a substantial advance over existing model-based vision capabilities. Algorithms were developed for determining the translation and rotation of a sensor given only the image flow field data. These are new algorithms, and are much more stable than existing computer vision algorithms for this task. The algorithms might provide independent verification of gyroscopic data, or might be used to compute relative motion with respect to a moving scene object, or may be useful for motion-based segmentation. Our theories explaining the Dempster/Shafer calculus and developing new uncertainty reasoning calculi were extended, presented at a conference, and incorporated into the Bayesian interpretation of geometric hashing. The 'Wavelet Slice Theorem' was developed in several different versions, any of which yields an alternate approach to image formation. The result may well provide a more stable approach to image formation than the standard Fourier-based projection slice theorem, since interpolation of unknown spectral values is better founded.

  10. Multiresolution Analysis Adapted to Irregularly Spaced Data

    NASA Astrophysics Data System (ADS)

    Mokraoui, Anissa; Duhamel, Pierre

    2009-12-01

    This paper investigates the mathematical background of multiresolution analysis in the specific context where the signal is represented by irregularly sampled data at known locations. The study is related to the construction of nested piecewise polynomial multiresolution spaces represented by their corresponding orthonormal bases. Simple spline basis orthonormalization procedures involve the construction of a large family of orthonormal spline scaling bases defined on consecutive bounded intervals. However, if no conditions beyond those coming from multiresolution are imposed on each bounded interval, the orthonormal basis is represented by a set of discontinuous scaling functions. The spline wavelet basis has the same problem. Moreover, the dimension of the corresponding wavelet basis increases with the spline degree. An appropriate orthonormalization procedure of the basic spline space basis, whatever the degree of the spline, allows us to (i) provide continuous scaling and wavelet functions, (ii) reduce the number of wavelets to only one, and (iii) reduce the complexity of the filter bank. Examples of multiresolution implementations illustrate that the most important features of traditional multiresolution are also satisfied.
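
    As a toy illustration of the orthonormalization step on irregularly spaced samples (a global polynomial basis and a discrete QR factorization here, not the paper's continuous spline construction):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 40))       # irregular sample locations

# Evaluate a polynomial basis at the known sample locations, then
# orthonormalize with respect to the discrete inner product those
# samples induce.
B = np.vander(x, N=4, increasing=True)       # 1, x, x^2, x^3
Q, _ = np.linalg.qr(B)                       # columns: orthonormal basis
print(np.allclose(Q.T @ Q, np.eye(4)))       # True
```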

  11. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    SciTech Connect

    Milani, Gabriele; Valente, Marco

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using a non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with respect to the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with collapse is far lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for considerably reducing the seismic vulnerability of such historical structures.
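
    The non-dimensionalization mentioned above amounts to the usual pseudo-static reading; schematically, with $V_{\text{fail}}$ the base shear at failure and $W$ the weight of the structure (a hedged sketch assuming the lateral load is proportional to the mass),

$$ \lambda = \frac{V_{\text{fail}}}{W}, \qquad a_{g,\text{collapse}} \approx \lambda\, g, $$

    so the collapse multiplier $\lambda$ obtained from the limit or pushover analyses translates directly into an estimate of the horizontal peak ground acceleration the church can sustain.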

  12. Downscaling, parameterization, decomposition, compression: a perspective from the multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Yano, J.-I.

    2010-06-01

    Geophysical models in general, and atmospheric models more specifically, are always limited in spatial resolution. Due to this limitation, we face two different needs. The first is a need to know (or "downscale") more spatial detail (e.g., the precipitation distribution) than model simulations provide, for practical applications such as hydrological modelling. The second is a need to "parameterize" the subgrid-scale physical processes in order to represent the feedbacks of these processes onto the resolved scales (e.g., the convective heating rate). The present article begins by remarking that it is essential to consider downscaling and parameterization as "inverses" of each other: downscaling seeks the details of the subgrid-scale processes, whereas parameterization seeks the integrated effect of those details on the resolved scales. Considering why these two closely related operations are traditionally treated separately gives insight into the fundamental limitations of current downscalings and parameterizations. Multiresolution analysis (such as that based on wavelets) provides an important conceptual framework for developing a unified formulation for downscaling and parameterization. In the vocabulary of multiresolution analysis, these two operations may be considered as types of decompression and compression. A new type of subgrid-scale representation scheme, NAM-SCA (nonhydrostatic anelastic model with segmentally-constant approximation), is introduced under this framework.

  13. A multiresolution restoration method for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Franquiz, Juan Manuel

    Single-photon emission computed tomography (SPECT) is affected by photon attenuation and image blurring due to Compton scatter and geometric detector response. Attenuation correction is important to increase diagnostic accuracy of cardiac SPECT. However, in attenuation-corrected scans, scattered photons from radioactivity in the liver could produce a spillover of counts into the inferior myocardial wall. In the clinical setting, blurring effects could be compensated for by restoration with Wiener and Metz filters. Drawbacks of these procedures are that the Wiener filter depends upon the power spectra of the object image and noise, which are unknown, while Metz parameters have to be optimized by trial and error. This research develops an alternative restoration procedure based on a multiresolution denoising and regularization algorithm. It was hypothesized that this representation leads to a more straightforward and automatic restoration than conventional filters. The main objective of the research was the development and assessment of the multiresolution algorithm for compensating the liver spillover artifact. The multiresolution algorithm decomposes original SPECT projections into a set of sub-band frequency images. This allows a simple denoising and regularization procedure by discarding high frequency channels and performing inversion only in low and intermediate frequencies. The method was assessed in bull's eye polar maps and short-axis attenuation-corrected reconstructions of a realistic cardiac-chest phantom with a custom-made liver insert and different 99mTc liver-to-heart activity ratios. Inferior myocardial defects were simulated in some experiments. The cardiac phantom in free air was considered as the gold standard reference. Quantitative analysis was performed by calculating contrast of short-axis slices and the normalized chi-square measure, defect size and mean and standard deviation of polar map counts. The performance of the multiresolution
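
    A minimal sketch of the sub-band strategy the abstract outlines (decompose the projections into frequency channels, discard the finest ones, invert), with a generic wavelet standing in for the actual SPECT pipeline:

```python
import numpy as np
import pywt

projection = np.random.rand(128, 128)     # placeholder SPECT projection

# Decompose into sub-band images, then suppress the finest (noisiest)
# frequency channels before inverting, as the abstract outlines.
coeffs = pywt.wavedec2(projection, 'db2', level=3)
for lvl in (-1, -2):                      # the two finest detail levels
    coeffs[lvl] = tuple(np.zeros_like(d) for d in coeffs[lvl])
regularized = pywt.waverec2(coeffs, 'db2')
```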

  14. Large Deformation Multiresolution Diffeomorphic Metric Mapping for Multiresolution Cortical Surfaces: A Coarse-to-Fine Approach.

    PubMed

    Tan, Mingzhen; Qiu, Anqi

    2016-09-01

    Brain surface registration is an important tool for characterizing cortical anatomical variations and understanding their roles in normal cortical development and psychiatric diseases. However, surface registration remains challenging due to complicated cortical anatomy and its large differences across individuals. In this paper, we propose a fast coarse-to-fine algorithm for surface registration by adapting the large deformation diffeomorphic metric mapping (LDDMM) framework for surface mapping, and show improvements in speed and accuracy via a multiresolution analysis of surface meshes and the construction of multiresolution diffeomorphic transformations. The proposed method constructs a family of multiresolution meshes that are used as natural sparse priors of the cortical morphology. At varying resolutions, these meshes act as anchor points where the parameterization of multiresolution deformation vector fields can be supported, allowing the construction of a bundle of multiresolution deformation fields, each originating from a different resolution. Using a coarse-to-fine approach, we show a potential reduction in computation cost along with improvements in sulcal alignment when compared with LDDMM surface mapping. PMID:27254865

  15. Optical design and system engineering of a multiresolution foveated laparoscope

    PubMed Central

    Qin, Yi; Hua, Hong

    2016-01-01

    The trade-off between the spatial resolution and field of view is one major limitation of state-of-the-art laparoscopes. In order to address this limitation, we demonstrated a multiresolution foveated laparoscope (MRFL) which is capable of simultaneously capturing both a wide-angle overview for situational awareness and a high-resolution zoomed-in view for accurate surgical operation. In this paper, we focus on presenting the optical design and system engineering process for developing the MRFL prototype. More specifically, the first-order specifications and properties of the optical system are discussed, followed by a detailed discussion on the optical design strategy and procedures of each subsystem. The optical performance of the final system, including diffraction efficiency, tolerance analysis, stray light and ghost image, is fully analyzed. Finally, the prototype assembly process and the final prototype are demonstrated. PMID:27139875

  16. Optical design and system engineering of a multiresolution foveated laparoscope.

    PubMed

    Qin, Yi; Hua, Hong

    2016-04-10

    The trade-off between the spatial resolution and field of view is one major limitation of state-of-the-art laparoscopes. In order to address this limitation, we demonstrated a multiresolution foveated laparoscope (MRFL) which is capable of simultaneously capturing both a wide-angle overview for situational awareness and a high-resolution zoomed-in view for accurate surgical operation. In this paper, we focus on presenting the optical design and system engineering process for developing the MRFL prototype. More specifically, the first-order specifications and properties of the optical system are discussed, followed by a detailed discussion on the optical design strategy and procedures of each subsystem. The optical performance of the final system, including diffraction efficiency, tolerance analysis, stray light and ghost image, is fully analyzed. Finally, the prototype assembly process and the final prototype are demonstrated. PMID:27139875

  17. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: the multiresolution bilateral filter, where bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
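
    A compact sketch of the framework described above: one wavelet analysis level, bilateral filtering of the approximation (low-frequency) subband, and soft thresholding of the detail subbands. OpenCV's bilateral filter and the parameter values are assumed stand-ins, not the authors' settings.

```python
import numpy as np
import cv2
import pywt

noisy = np.random.rand(256, 256).astype(np.float32)  # placeholder image

# One analysis level: bilateral-filter the approximation subband,
# wavelet-threshold the detail subbands, then reconstruct.
cA, (cH, cV, cD) = pywt.dwt2(noisy, 'db4')
cA = cv2.bilateralFilter(cA.astype(np.float32), d=5,
                         sigmaColor=0.1, sigmaSpace=3)
thr = 0.05
cH, cV, cD = (pywt.threshold(c, thr, mode='soft') for c in (cH, cV, cD))
denoised = pywt.idwt2((cA, (cH, cV, cD)), 'db4')
```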

  18. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage exacts a corresponding huge penalty in I/O time, rendering time, and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing for the purposes of data compression and feature extraction. Unlike photographic image data, which has rather simple settings, computational field simulation data needs more careful treatment when applying the multiresolution technique. While the image data sits on a regularly spaced grid, the simulation data usually resides on a structured curvilinear grid or an unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. The data characteristics demand more restrictive conditions. In general, photographic images have very little inherent smoothness, with discontinuities almost everywhere. On the other hand, the numerical solutions have smoothness almost everywhere and discontinuities in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of

  19. Incorporation of concentration data below the limit of quantification in population pharmacokinetic analyses

    PubMed Central

    Keizer, Ron J; Jansen, Robert S; Rosing, Hilde; Thijssen, Bas; Beijnen, Jos H; Schellens, Jan H M; Huitema, Alwin D R

    2015-01-01

    Handling of data below the lower limit of quantification (LLOQ), i.e., below the limit of quantification (BLOQ), in population pharmacokinetic (PopPK) analyses is important for reducing bias and imprecision in parameter estimation. We aimed to evaluate whether using the concentration data below the LLOQ has superior performance over several established methods. The performance of this approach (“All data”) was evaluated and compared to other methods: “Discard,” “LLOQ/2,” and “LIKE” (likelihood-based). An analytical and residual error model was constructed on the basis of in-house analytical method validations and analyses from the literature, with additional variability included to account for model misspecification. Simulation analyses were performed for various levels of BLOQ, several structural PopPK models, and additional influences. Performance was evaluated by relative root mean squared error (RMSE) and run success for the various BLOQ approaches. Performance was also evaluated for a real PopPK data set. For all PopPK models and levels of censoring, RMSE values were lowest using “All data.” Performance of the “LIKE” method was better than the “LLOQ/2” or “Discard” method. Differences between all methods were small at the lowest level of BLOQ censoring. The “LIKE” method resulted in low successful minimization (<50%) and covariance step success (<30%), although estimates were obtained in most runs (∼90%). For the real PK data set (7.4% BLOQ), similar parameter estimates were obtained using all methods. Incorporation of BLOQ concentrations showed superior performance in terms of bias and precision over established BLOQ methods, and was shown to be feasible in a real PopPK analysis. PMID:26038706
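
    To make the simplest of the compared methods concrete, a small simulation contrasting "Discard" with "LLOQ/2" on censored data is sketched below; the likelihood-based "LIKE" approach requires an estimation framework (e.g., an M3-type likelihood in NONMEM) and is not shown. All values here are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
true_conc = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
lloq = 0.5
bloq = true_conc < lloq                     # censored observations

# "Discard": drop BLOQ samples entirely.
discard_mean = true_conc[~bloq].mean()

# "LLOQ/2": substitute half the quantification limit.
substituted = np.where(bloq, lloq / 2.0, true_conc)
lloq2_mean = substituted.mean()

print(f"true {true_conc.mean():.3f}  discard {discard_mean:.3f}  "
      f"LLOQ/2 {lloq2_mean:.3f}")   # discarding biases the mean upward
```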

  1. Exploring a Multi-resolution Approach Using AMIP Simulations

    SciTech Connect

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun; Yang, Qing; Lu, Jian; Hagos, Samson M.; Rauscher, Sara; Dong, Li; Ringler, Todd; Lauritzen, P. H.

    2015-07-31

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using a variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the global high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes developed only near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through the large-scale circulation, but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  2. Multiresolutional models of uncertainty generation and reduction

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

    Kolmogorov's axiomatic principles of probability theory are reconsidered with respect to their applicability to the processes of knowledge acquisition and interpretation. The model of uncertainty generation is modified in order to reflect the reality of engineering problems, particularly in the area of intelligent control. This model implies algorithms of learning which are organized in three groups reflecting the degree of conceptualization of the knowledge the system is dealing with. It is essential that these algorithms are motivated by, and consistent with, the multiresolutional model of knowledge representation, which is reflected in the structure of models and the algorithms of learning.

  3. Active pixel sensor array with multiresolution readout

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)

    1999-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.

  4. Liver fibrosis grading using multiresolution histogram information in real-time elastography

    NASA Astrophysics Data System (ADS)

    Albouy-Kissi, A.; Sarry, L.; Massoulier, S.; Bonny, C.; Randl, K.; Abergel, A.

    2010-03-01

    Despite many limitations, liver biopsy remains the gold standard method for grading and staging liver fibrosis. Several modalities have been developed for the non-invasive assessment of liver diseases. Real-time elastography may constitute a true alternative to liver biopsy by providing an image of the tissue elasticity distribution correlated with the fibrosis grade. In this paper, we investigate a new approach to the assessment of liver fibrosis by the classification of fibrosis morphometry. The multiresolution histogram, based on a combination of intensity and texture features, has been tested as the feature space, and the ability of such multiresolution histograms to discriminate fibrosis grade has been demonstrated. The method has been tested on seventeen patients who underwent real-time elastography and FibroScan examinations.
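
    One common construction of a multiresolution histogram, concatenating intensity histograms at several Gaussian smoothing scales, is sketched below; this is an assumed illustration, not necessarily the exact descriptor used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiresolution_histogram(image, sigmas=(0, 1, 2, 4), bins=32):
    """Concatenate normalized intensity histograms computed at several
    Gaussian smoothing scales (one common construction; assumed here)."""
    feats = []
    for s in sigmas:
        smoothed = gaussian_filter(image, sigma=s) if s > 0 else image
        h, _ = np.histogram(smoothed, bins=bins, range=(0.0, 1.0))
        feats.append(h / h.sum())
    return np.concatenate(feats)   # feature vector for a classifier

features = multiresolution_histogram(np.random.rand(64, 64))
```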

  5. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several gigavoxels), this computational burden has prevented their breakthrough in practice. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources. PMID:20350850

  6. MULTIRESOLUTION REPRESENTATION OF OPERATORS WITH BOUNDARY CONDITIONS ON SIMPLE DOMAINS

    SciTech Connect

    Beylkin, Gregory; Fann, George I; Harrison, Robert J; Kurcz, Christopher E; Monzon, Lucas A

    2011-01-01

    We develop a multiresolution representation of a class of integral operators satisfying boundary conditions on simple domains in order to construct fast algorithms for their application. We also elucidate some delicate theoretical issues related to the construction of periodic Green's functions for Poisson's equation. By applying the method of images to the non-standard form of the free space operator, we obtain lattice sums that converge absolutely on all scales, except possibly on the coarsest scale. On the coarsest scale the lattice sums may be only conditionally convergent and, thus, allow for some freedom in their definition. We use the limit of square partial sums as a definition of the limit and obtain a systematic, simple approach to the construction (in any dimension) of periodized operators with sparse non-standard forms. We illustrate the results on several examples in dimensions one and three: the Hilbert transform, the projector on divergence-free functions, the non-oscillatory Helmholtz Green's function and the Poisson operator. Remarkably, the limit of square partial sums yields a periodic Poisson Green's function which is not a convolution. Using a short sum of decaying Gaussians to approximate periodic Green's functions, we arrive at fast algorithms for their application. We further show that the results obtained for operators with periodic boundary conditions extend to operators with Dirichlet, Neumann, or mixed boundary conditions.
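
    In symbols, the method-of-images construction referred to above periodizes the free-space kernel by a lattice sum; schematically, for the Poisson kernel in three dimensions on the unit lattice,

$$ G_0(\mathbf{r}) = \frac{1}{4\pi \lVert \mathbf{r} \rVert}, \qquad G_{\text{per}}(\mathbf{r}) = \sum_{\mathbf{n} \in \mathbb{Z}^3} G_0(\mathbf{r} + \mathbf{n}), $$

    where the sum converges only conditionally on the coarsest scale, which is why a particular definition (here, the limit of square partial sums) must be chosen.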

  7. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
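
    A signed-distance volume of the kind compressed here can be initialized from a binary voxelization with standard distance transforms; a sketch using SciPy's Euclidean distance transform (not the paper's O(n) transform or its zero-set initialization for triangle meshes):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary voxelization of a sphere as a stand-in surface.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
inside = (x**2 + y**2 + z**2) < 20**2

# Signed distance: negative inside the surface, positive outside.
signed = distance_transform_edt(~inside) - distance_transform_edt(inside)
```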

  8. Filter design for directional multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Cunha, Arthur L.; Do, Minh N.

    2005-08-01

    In this paper we discuss recent developments in design tools and methods for multidimensional filter banks in the context of directional multiresolution representations. Due to the inherent non-separability of the filters and the lack of multi-dimensional factorization tools, one generally has to rely on indirect design methods. One such method is the mapping technique. In the context of contourlets we review methods for designing filters with directional vanishing moments (DVM). The DVM property is crucial in guaranteeing the non-linear approximation efficacy of contourlets. Our approach allows for easy design of two-channel linear-phase filter banks with DVM of any order. Next we study the design via mapping of nonsubsampled filter banks. Our methodology allows for a fast implementation through ladder steps. The proposed design is then used to construct the nonsubsampled contourlet transform, which is particularly efficient in image denoising, as experiments in this paper show.

  9. Multiresolution dynamic predictor based on neural networks

    NASA Astrophysics Data System (ADS)

    Tsui, Fu-Chiang; Li, Ching-Chung; Sun, Mingui; Sclabassi, Robert J.

    1996-03-01

    We present a multiresolution dynamic predictor (MDP) based on neural networks for multi-step prediction of a time series. The MDP utilizes the discrete biorthogonal wavelet transform to compute wavelet coefficients at several scale levels and recurrent neural networks (RNNs) to form a set of dynamic nonlinear models for prediction of the time series. By employing RNNs in wavelet coefficient space, the MDP is capable of predicting a time series for both the long-term (with coarse resolution) and short-term (with fine resolution). Experimental results have demonstrated the effectiveness of the MDP for multi-step prediction of intracranial pressure (ICP) recorded from head-trauma patients. This approach has applicability to quasi-stationary signals and is suitable for on-line computation.

  10. Multiresolution target discrimination during image formation

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Oh, Seung-Mok; McClellan, James H.

    2000-08-01

    This paper presents a novel scheme to detect and discriminate landmines from other clutter objects during the image formation process for ultra-wideband (UWB) synthetic aperture radar (SAR) systems. By identifying likely regions containing the targets of interest, i.e., landmines, it is possible to speed up the overall formation time by pruning the processing used to resolve regions that do not contain targets. The image formation algorithm is a multiscale approximation to standard backprojection known as the quadtree that uses a 'divide-and-conquer' strategy. The intermediate quadtree data admits multiresolution representations of the scene, and we develop a contrast statistic to discriminate structured/diffuse regions and an aperture diversity statistic to discriminate between regions containing mines and desert scrub. The potential advantages of this technique are illustrated using data collected at Yuma, AZ by the ARL BoomSAR system.

  11. Multiresolution MR elastography using nonlinear inversion

    PubMed Central

    McGarry, M. D. J.; Van Houten, E. E. W.; Johnson, C. L.; Georgiadis, J. G.; Sutton, B. P.; Weaver, J. B.; Paulsen, K. D.

    2012-01-01

    Purpose: Nonlinear inversion (NLI) in MR elastography requires discretization of the displacement field for a finite element (FE) solution of the “forward problem”, and discretization of the unknown mechanical property field for the iterative solution of the “inverse problem”. The resolution requirements for these two discretizations are different: the forward problem requires sufficient resolution of the displacement FE mesh to ensure convergence, whereas lowering the mechanical property resolution in the inverse problem stabilizes the mechanical property estimates in the presence of measurement noise. Previous NLI implementations use the same FE mesh to support the displacement and property fields, requiring a trade-off between the competing resolution requirements. Methods: This work implements and evaluates multiresolution FE meshes for NLI elastography, allowing independent discretizations of the displacements and each mechanical property parameter to be estimated. The displacement resolution can then be selected to ensure mesh convergence, and the resolution of the property meshes can be independently manipulated to control the stability of the inversion. Results: Phantom experiments indicate that eight nodes per wavelength (NPW) are sufficient for accurate mechanical property recovery, whereas mechanical property estimation from 50 Hz in vivo brain data stabilizes once the displacement resolution reaches 1.7 mm (approximately 19 NPW). Viscoelastic mechanical property estimates of in vivo brain tissue show that subsampling the loss modulus while holding the storage modulus resolution constant does not substantially alter the storage modulus images. Controlling the ratio of the number of measurements to unknown mechanical properties by subsampling the mechanical property distributions (relative to the data resolution) improves the repeatability of the property estimates, at a cost of modestly decreased spatial resolution. Conclusions: Multiresolution

  12. Hanging-wall deformation above a normal fault: sequential limit analyses

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaoping; Leroy, Yves M.; Maillot, Bertrand

    2015-04-01

    The deformation in the hanging wall above a segmented normal fault is analysed with the sequential limit analysis (SLA). The method combines predictions of the dip and position of the active fault and axial surface with geometrical evolution à la Suppe (Groshong, 1989). Two problems are considered. The first follows the prototype proposed by Patton (2005) with a pre-defined convex, segmented fault. The orientation of the upper segment of the normal fault is an unknown in the second problem. The loading in both problems consists of the retreat of the back wall and sedimentation. This sedimentation starts from the lowest point of the topography and acts at the rate r_s relative to the wall retreat rate. For the first problem, the normal fault either has zero friction or a friction angle set to 25° or 30° to fit the experimental results (Patton, 2005). In the zero friction case, a hanging wall anticline develops much like in the experiments. In the 25° friction case, slip on the upper segment is accompanied by rotation of the axial plane, producing a broad shear zone rooted at the fault bend. The same observation is made in the 30° case, but without slip on the upper segment. Experimental outcomes show a behaviour in between these two latter cases. For the second problem, mechanics predicts a concave fault bend with an upper segment dip decreasing during extension. The axial surface rooting at the normal fault bend sees its dip increase during extension, resulting in a curved roll-over. Softening on the normal fault leads to a stepwise rotation responsible for strain partitioning into small blocks in the hanging wall. The rotation is due to the subsidence of the topography above the hanging wall. Sedimentation in the lowest region thus reduces the rotations. Note that these rotations predicted by mechanics are not accounted for in most geometrical approaches (Xiao and Suppe, 1992) and are observed in sandbox experiments (Egholm et al., 2007, referring

  13. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay rather than the entire sample process. Our objective was to develop a method to determine the 95% LOD (lowest co...

  14. Multiresolution reconstruction method to optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Patrickeyev, Igor; Oraevsky, Alexander A.

    2003-06-01

    A new method for the reconstruction of optoacoustic images is proposed. The method incorporates multiresolution wavelet filtering into a spherical back-projection algorithm. According to our method, each optoacoustic signal detected with an array of ultrawide-band transducers is decomposed into a set of self-similar wavelets with different resolution (characteristic frequency) and then back-projected along the spherical traces for each resolution scale separately. The advantage of this approach is that one can reconstruct objects of a preferred size or range of sizes. The sum of all images reconstructed at different resolutions yields an image that visualizes small and large objects at once. The approximate speed of the proposed algorithm is of the same order as that of algorithms based on the Fast Fourier Transform (FFT). The accuracy of the proposed method is illustrated by images reconstructed from simulated optoacoustic signals as well as from signals measured with the Laser Optoacoustic Imaging System (LOIS) from a blood vessel loop embedded in a gel phantom. The method can be used for contrast-enhanced optoacoustic imaging at depth in tissue, i.e., for medical applications such as breast cancer or prostate cancer detection.

  15. Multiresolution segmentation technique for spine MRI images

    NASA Astrophysics Data System (ADS)

    Li, Haiyun; Yan, Chye H.; Ong, Sim Heng; Chui, Cheekong K.; Teoh, Swee H.

    2002-05-01

    In this paper, we describe a hybrid method for the segmentation of spinal magnetic resonance images, developed based on the natural phenomenon of stones appearing as water recedes. The candidate segmentation regions correspond to the stones, with characteristics similar to those of intensity extrema, edges, intensity ridges and grey-level blobs. The segmentation method is implemented as a combination of wavelet multiresolution decomposition and fuzzy clustering. First, thresholding is performed dynamically according to local characteristics to detect possible target areas. We then use fuzzy c-means clustering in concert with wavelet multiscale edge detection to identify the maximum likelihood anatomical and functional target areas. Fuzzy c-means uses iterative optimization of an objective function based on a weighted similarity measure between the pixels in the image and each of the c cluster centers. Local extrema of this objective function are indicative of an optimal clustering of the input data. The multiscale edges can be detected and characterized from local maxima of the modulus of the wavelet transform, while the noise can be reduced to some extent by applying thresholds. The method provides an efficient and robust algorithm for spinal image segmentation. Examples are presented to demonstrate the efficiency of the technique on spinal MRI images.
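
    The objective function referred to above is the standard fuzzy c-means functional

$$ J_m = \sum_{i=1}^{N} \sum_{j=1}^{c} u_{ij}^{\,m} \, \lVert x_i - c_j \rVert^2, \qquad \sum_{j=1}^{c} u_{ij} = 1 \;\; \text{for each } i, $$

    where $u_{ij}$ is the membership of pixel $x_i$ in cluster $j$, $c_j$ are the cluster centers, and $m > 1$ is the fuzzifier; memberships and centers are updated alternately until the objective stabilizes.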

  16. A new study on mammographic image denoising using multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for the early detection of breast cancer. However, lesion areas of the breast are difficult to detect because mammograms are contaminated with noise. This work discusses various multiresolution denoising techniques, including the classical methods based on wavelets and contourlets; moreover, emerging multiresolution methods are also investigated. A new denoising method based on the dual tree contourlet transform (DCT) is proposed; the DCT possesses the advantages of approximate shift invariance, directionality and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, obtaining better performance than the other methods both in visual quality and in terms of the Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) values.
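
    For reference, the reported quality metrics can be computed as follows (a standard implementation sketch; SSIM via scikit-image, with arbitrary placeholder images):

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    """PSNR = 10 * log10(MAX^2 / MSE), in decibels."""
    return 10.0 * np.log10(max_val ** 2 / mse(a, b))

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (128, 128)).astype(np.uint8)
noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255).astype(np.uint8)
print(mse(clean, noisy), psnr(clean, noisy),
      structural_similarity(clean, noisy, data_range=255))
```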

  17. Multi-Resolution Dynamic Meshes with Arbitrary Deformations

    SciTech Connect

    Shamir, A.; Pascucci, V.; Bajaj, C.

    2000-07-10

    Multi-resolution techniques and models have been shown to be effective for the display and transmission of large static geometric objects. Dynamic environments with internally deforming models and scientific simulations using dynamic meshes pose greater challenges in terms of time and space, and require the development of similar solutions. In this paper we introduce the T-DAG, an adaptive multi-resolution representation for dynamic meshes with arbitrary deformations, including attribute, position, connectivity and topology changes. T-DAG stands for Time-dependent Directed Acyclic Graph, which defines the structure supporting this representation. We also provide an incremental algorithm (in time) for constructing the T-DAG representation of a given input mesh. This enables the traversal and use of the multi-resolution dynamic model for partial playback while still constructing new time-steps.

  18. Numerical analyses on optical limiting performances of chloroindium phthalocyanines with different substituent positions

    NASA Astrophysics Data System (ADS)

    Yu-Jin, Zhang; Xing-Zhe, Li; Ji-Cai, Liu; Chuan-Kui, Wang

    2016-01-01

    The optical limiting properties of two soluble chloroindium phthalocyanines with α- and β-alkoxyl substituents in a nanosecond laser field have been studied by numerically solving the coupled singlet-triplet rate equations together with the paraxial wave field equation under the Crank-Nicolson scheme. Both transverse and longitudinal effects of the laser field on the photophysical properties of the compounds are considered. An effective transfer time between the ground state and the lowest triplet state is defined in the reformulated rate equations to characterize the dynamics of singlet-triplet population transfer. It is found that both phthalocyanines exhibit good nonlinear optical absorption abilities, while the compound with the α-substituent shows enhanced optical limiting performance. Our ab initio calculations reveal that the phthalocyanine with the α-substituent has more pronounced electron delocalization and lower frontier orbital transfer energies, which are responsible for its preferable photophysical properties. Project supported by the National Basic Research Program of China (Grant No. 2011CB808100), the National Natural Science Foundation of China (Grant Nos. 11204078 and 11574082), and the Fundamental Research Funds for the Central Universities of China (Grant No. 2015MS54).

  19. Telescopic multi-resolution augmented reality

    NASA Astrophysics Data System (ADS)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us to achieve not only multi-scale, telescopic visualization of real and virtual information but also the conduct of thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Towards this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make-believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations.

  20. Limited-host-range plasmid of Agrobacterium tumefaciens: molecular and genetic analyses of transferred DNA.

    PubMed Central

    Yanofsky, M; Montoya, A; Knauf, V; Lowe, B; Gordon, M; Nester, E

    1985-01-01

    A tumor-inducing (Ti) plasmid from a strain of Agrobacterium tumefaciens that induces tumors on only a limited range of plants was characterized and compared with the Ti plasmids from strains that induce tumors on a wide range of plants. Whereas all wide-host-range Ti plasmids characterized to date contain closely linked oncogenic loci within a single transferred DNA (T-DNA) region, homology to these loci is divided into two widely separated T-DNA regions on the limited-host-range plasmid. These two plasmid regions, TA-DNA and TB-DNA, are separated by approximately 25 kilobases of DNA which is not maintained in the tumor. The TA-DNA region resembles a deleted form of the wide-host-range TL-DNA and contains a region homologous to the cytokinin biosynthetic gene. However, a region homologous to the two auxin biosynthetic loci of the wide-host-range plasmid mapped within the TB-DNA region. These latter genes play an important role in tumor formation because mutations in these loci result in a loss of virulence on Nicotiana plants. Furthermore, the TB-DNA region alone conferred tumorigenicity onto strains with an intact set of vir genes. Our results suggest that factors within both the T-DNA and the vir regions contribute to the expression of host range in Agrobacterium species. There was tremendous variation among plants in susceptibility to tumor formation by various A. tumefaciens strains. This variation occurred not only among different plant species, but also among different varieties of plants within the same genus. PMID:4008445

  1. Examination of metabolic responses to phosphorus limitation via proteomic analyses in the marine diatom Phaeodactylum tricornutum

    PubMed Central

    Feng, Tian-Ya; Yang, Zhi-Kai; Zheng, Jian-Wei; Xie, Ying; Li, Da-Wei; Murugan, Shanmugaraj Bala; Yang, Wei-Dong; Liu, Jie-Sheng; Li, Hong-Ye

    2015-01-01

    Phosphorus (P) is an essential macronutrient for the survival of marine phytoplankton. In the present study, the phytoplankton response to phosphorus limitation was studied by proteomic profiling of the diatom Phaeodactylum tricornutum at both the cellular and molecular levels. A total of 42 non-redundant proteins were identified, among which 8 were upregulated and 34 were downregulated. The results also showed that proteins associated with inorganic phosphate uptake were downregulated, whereas proteins involved in organic phosphorus uptake, such as alkaline phosphatase, were upregulated. Proteins involved in metabolic responses such as protein degradation, lipid accumulation and photorespiration were upregulated, whereas energy metabolism, photosynthesis, amino acid and nucleic acid metabolism tended to be downregulated. Overall, our results showed the changes in protein levels of P. tricornutum during phosphorus stress. This study is a prelude to understanding the role of phosphorus in marine biogeochemical cycles and the phytoplankton response to phosphorus scarcity in the ocean. It also provides insight into the succession of phytoplankton communities, providing a scientific basis for elucidating the mechanism of algal blooms. PMID:26020491

  2. Examination of metabolic responses to phosphorus limitation via proteomic analyses in the marine diatom Phaeodactylum tricornutum.

    PubMed

    Feng, Tian-Ya; Yang, Zhi-Kai; Zheng, Jian-Wei; Xie, Ying; Li, Da-Wei; Murugan, Shanmugaraj Bala; Yang, Wei-Dong; Liu, Jie-Sheng; Li, Hong-Ye

    2015-05-28

    Phosphorus (P) is an essential macronutrient for the survival of marine phytoplankton. In the present study, the phytoplankton response to phosphorus limitation was studied by proteomic profiling of the diatom Phaeodactylum tricornutum at both the cellular and molecular levels. A total of 42 non-redundant proteins were identified, among which 8 were upregulated and 34 were downregulated. The results also showed that proteins associated with inorganic phosphate uptake were downregulated, whereas proteins involved in organic phosphorus uptake, such as alkaline phosphatase, were upregulated. Proteins involved in metabolic responses such as protein degradation, lipid accumulation and photorespiration were upregulated, whereas energy metabolism, photosynthesis, amino acid and nucleic acid metabolism tended to be downregulated. Overall, our results showed the changes in protein levels of P. tricornutum during phosphorus stress. This study is a prelude to understanding the role of phosphorus in marine biogeochemical cycles and the phytoplankton response to phosphorus scarcity in the ocean. It also provides insight into the succession of phytoplankton communities, providing a scientific basis for elucidating the mechanism of algal blooms.

  3. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. The authors extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
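
    The table-based error computation described above is straightforward to reproduce: for integer-valued data, the set of unique (original, approximate) voxel pairs is small, so the error sum can be grouped by unique pair. A minimal numpy sketch, assuming a user-supplied per-pair error function (the squared-difference `err` below is a placeholder, not the authors' actual metric):

```python
import numpy as np

def approximation_error(original, approx, err):
    """Sum err(a, b) over all voxel pairs (a, b) by grouping
    unique pairs and weighting by their frequency."""
    pairs = np.stack([original.ravel(), approx.ravel()], axis=1)
    unique, counts = np.unique(pairs, axis=0, return_counts=True)
    # Evaluate the error function once per unique pair, not per voxel.
    return float(np.sum(err(unique[:, 0], unique[:, 1]) * counts))

# Example with byte data and a squared-difference placeholder metric.
rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=(64, 64, 64), dtype=np.uint8)
appr = (orig // 4) * 4  # a crude stand-in "approximation"
e = approximation_error(orig.astype(float), appr.astype(float),
                        err=lambda a, b: (a - b) ** 2)
print(e)
```

    Re-evaluating the error for a new transfer function then touches at most 256 x 256 unique pairs instead of every voxel, which is the source of the speedup the abstract reports.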

  4. Multiresolution Analysis of UTAT B-spline Curves

    NASA Astrophysics Data System (ADS)

    Lamnii, A.; Mraoui, H.; Sbibih, D.; Zidna, A.

    2011-09-01

    In this paper, we describe a multiresolution curve representation based on periodic uniform tension algebraic trigonometric (UTAT) spline wavelets of class ??? and order four. Then we determine the decomposition and the reconstruction vectors corresponding to UTAT-spline spaces. Finally, we give some applications in order to illustrate the efficiency of the proposed approach.

  5. Multiscale/multiresolution landslides susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Grozavu, Adrian; Cătălin Stanga, Iulian; Valeriu Patriche, Cristian; Toader Juravle, Doru

    2014-05-01

    Within the European strategies, landslides are considered an important threat that requires detailed studies to identify areas where these processes could occur in the future and to design scientific and technical plans for landslide risk mitigation. With this in mind, assessing and mapping landslide susceptibility is an important preliminary step. Generally, landslide susceptibility at small scale (for large regions) can be assessed through a qualitative approach (expert judgement) based on a few variables, while studies at medium and large scale require a quantitative approach (e.g. multivariate statistics), a larger set of variables and, necessarily, a landslide inventory. Obviously, the results vary more or less from one scale to another, depending on the available input data, but also on the applied methodology. Since it is almost impossible to have a complete landslide inventory over large regions (e.g. at continental level), it is very important to verify the compatibility and the validity of results obtained at different scales, identifying the differences and fixing the inherent errors. This paper aims at assessing and mapping landslide susceptibility at regional level through a multiscale-multiresolution approach, from small scale and low resolution to large scale and high resolution of data and results, comparing the compatibility of the results. While the former could be used for studies at European and national level, the latter allow result validation, including through field surveys. The test area, namely the Barlad Plateau (more than 9000 sq.km), is located in Eastern Romania, covering a region where both the natural environment and the human factor create a causal context that favors these processes. The landslide predictors were initially derived from various databases available at pan-European level and progressively completed and/or enhanced together with the scale and the resolution: the topography (from SRTM at 90 meters to digital

  6. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy guidance, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of the Gabor wavelet frequencies. High precision in detecting the needle voxels leads to a robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a 40% gain), and better robustness and confidence were confirmed in practical experiments.
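
    As a rough illustration of multi-resolution Gabor feature extraction (shown here in 2D rather than the 3D volumes used in the paper), one can convolve an image with Gabor kernels at several frequencies and orientations and keep the magnitude responses as per-pixel features. A sketch with hand-rolled kernels; the frequency set below is illustrative, not the combination identified in the study:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Complex 2D Gabor kernel: Gaussian envelope times complex carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))    # Gaussian envelope
    return env * np.exp(2j * np.pi * freq * xr)      # modulated carrier

def gabor_features(image, freqs=(0.05, 0.1, 0.2), n_theta=4):
    """Stack of |Gabor response| maps over frequencies and orientations."""
    feats = []
    for f in freqs:
        for k in range(n_theta):
            kern = gabor_kernel(f, theta=np.pi * k / n_theta)
            resp = fftconvolve(image, kern, mode="same")
            feats.append(np.abs(resp))
    return np.stack(feats, axis=-1)  # (H, W, n_freq * n_theta)

img = np.random.rand(128, 128)
print(gabor_features(img).shape)  # (128, 128, 12)
```

    A voxel/pixel classifier over such feature vectors can then flag needle-like structures; the paper's pipeline localizes the needle from the detected voxels.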

  7. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  8. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.

  9. A multiresolution wavelet representation in two or more dimensions

    NASA Technical Reports Server (NTRS)

    Bromley, B. C.

    1992-01-01

    In the multiresolution approximation, a signal is examined on a hierarchy of resolution scales by projection onto sets of smoothing functions. Wavelets are used to carry the detail information connecting adjacent sets in the resolution hierarchy. An algorithm has been implemented to perform a multiresolution decomposition in n ≥ 2 dimensions based on wavelets generated from products of 1-D wavelets and smoothing functions. The functions are chosen so that an n-D wavelet may be associated with a single resolution scale and orientation. The algorithm enables complete reconstruction of a high resolution signal from decomposition coefficients. The signal may be oversampled to accommodate non-orthogonal wavelet systems, or to provide approximate translational invariance in the decomposition arrays.
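
    Separable n-D wavelets built from products of 1-D wavelets and smoothing functions are exactly what standard wavelet libraries implement. A minimal 2D sketch assuming the `pywt` (PyWavelets) package; each detail band (LH, HL, HH) corresponds to a single resolution scale and orientation, as in the abstract:

```python
import numpy as np
import pywt

signal = np.random.rand(256, 256)

# One-level separable 2D decomposition: products of the 1-D lowpass (L)
# and highpass (H) filters give LL (smooth) plus LH/HL/HH detail bands,
# each associated with one scale and orientation.
LL, (LH, HL, HH) = pywt.dwt2(signal, "db4")

# Complete reconstruction of the signal from decomposition coefficients.
rec = pywt.idwt2((LL, (LH, HL, HH)), "db4")
print(np.allclose(rec[:256, :256], signal))
```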

  10. Gaze-contingent multiresolutional displays: an integrative review.

    PubMed

    Reingold, Eyal M; Loschky, Lester C; McConkie, George W; Stampe, David M

    2003-01-01

    Gaze-contingent multiresolutional displays (GCMRDs) center high-resolution information on the user's gaze position, matching the user's area of interest (AOI). Image resolution and details outside the AOI are reduced, lowering the requirements for processing resources and transmission bandwidth in demanding display and imaging applications. This review provides a general framework within which GCMRD research can be integrated, evaluated, and guided. GCMRDs (or "moving windows") are analyzed in terms of (a) the nature of their images (i.e., "multiresolution," "variable resolution," "space variant," or "level of detail"), and (b) the movement of the AOI (i.e., "gaze contingent," "foveated," or "eye slaved"). We also synthesize the known human factors research on GCMRDs and point out important questions for future research and development. Actual or potential applications of this research include flight, medical, and driving simulators; virtual reality; remote piloting and teleoperation; infrared and indirect vision; image transmission and retrieval; telemedicine; video teleconferencing; and artificial vision systems. PMID:14529201

  11. a DTM Multi-Resolution Compressed Model for Efficient Data Storage and Network Transfer

    NASA Astrophysics Data System (ADS)

    Biagi, L.; Brovelli, M.; Zamboni, G.

    2011-08-01

    In recent years the technological evolution of terrestrial, aerial and satellite surveying has considerably increased measurement accuracy and, consequently, the quality of the derived information. At the same time, the ever smaller limitations on data storage devices, in terms of capacity and cost, have allowed the storage and elaboration of a larger number of instrumental observations. A significant example is the terrain height surveyed by LIDAR (LIght Detection And Ranging) technology, where several height measurements can be obtained for each square meter of land. The availability of such a large quantity of observations is an essential requisite for an in-depth knowledge of the phenomena under study. But, at the same time, the most common Geographical Information Systems (GISs) show latency in visualizing and analyzing these kinds of data. This problem becomes more evident in the case of Internet GIS. These systems are based on the very frequent flow of geographical information over the internet and, for this reason, the bandwidth of the network and the size of the data to be transmitted are two fundamental factors to be considered in order to guarantee the actual usability of these technologies. In this paper we focus our attention on digital terrain models (DTMs) and we briefly analyse the problem of defining the minimal information necessary to store and transmit DTMs over a network, with a fixed tolerance, starting from a huge number of observations. Then we propose an innovative compression approach for sparse observations by means of multi-resolution spline function approximation. The method is able to provide metrical accuracy at least comparable to that provided by the most common deterministic interpolation algorithms (inverse distance weighting, local polynomial, radial basis functions). At the same time it dramatically reduces the amount of information required for storing or for transmitting and rebuilding a digital terrain

  12. A Multiresolution Independent Component Analysis for textile images

    NASA Astrophysics Data System (ADS)

    Coltuc, D.; Fournel, T.; Becker, J. M.; Jourlin, M.

    2007-07-01

    This paper aims to provide an efficient tool for pattern recognition in the fight against counterfeiting in textile design. As fabric patterns to be protected can present numerous and various characteristics related not only to intensity or color but also to texture and relative scale, we introduce a tool able to separate independent image components at different resolutions. The suggested `Multiresolution ICA' combines properties of both the wavelet transform and Independent Component Analysis.
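
    One plausible way to reproduce the spirit of `Multiresolution ICA' is to decompose each image with a wavelet transform and run ICA on the coefficients of each resolution level separately. A sketch assuming `pywt` and scikit-learn's `FastICA`; this is a generic stand-in, not the authors' exact estimator:

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def level_vector(coeffs, lev):
    """Flatten the coefficients of one resolution level."""
    part = coeffs[lev]
    if isinstance(part, tuple):            # detail level: (cH, cV, cD)
        return np.concatenate([p.ravel() for p in part])
    return part.ravel()                    # coarsest approximation

def multiresolution_ica(images, wavelet="db2", level=2, n_components=2):
    """Run ICA independently on the wavelet coefficients of each
    resolution level of a set of mixed images."""
    dec = [pywt.wavedec2(im, wavelet, level=level) for im in images]
    sources = []
    for lev in range(level + 1):
        X = np.stack([level_vector(c, lev) for c in dec])  # (n_imgs, n_coef)
        ica = FastICA(n_components=n_components, random_state=0)
        sources.append(ica.fit_transform(X.T).T)  # (n_components, n_coef)
    return sources

imgs = [np.random.rand(64, 64) for _ in range(4)]
for lev, s in enumerate(multiresolution_ica(imgs)):
    print(f"level {lev}: separated sources with shape {s.shape}")
```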

  13. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    USGS Publications Warehouse

    Stokdyk, Joel P.; Firnstahl, Aaron; Spencer, Susan K.; Burch, Tucker R; Borchardt, Mark A.

    2016-01-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L−1 assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation.
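
    The probit step can be reproduced with standard statistical tooling: regress the fraction of positive replicates on (log) spiked concentration with a probit link, then invert the fitted model at 0.95. A sketch assuming a recent `statsmodels`; the concentrations and positive counts below are illustrative, not the study's data:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical spiking series: gc/reaction tested, positives out of 10.
conc = np.array([2.0, 5.0, 10.0, 20.0])
positives = np.array([3, 7, 9, 10])
n = np.full_like(positives, 10)

# Probit regression of detection probability on log10 concentration.
X = sm.add_constant(np.log10(conc))
model = sm.GLM(np.column_stack([positives, n - positives]), X,
               family=sm.families.Binomial(sm.families.links.Probit()))
res = model.fit()

# Invert the fit: probit(0.95) = b0 + b1 * log10(LOD95).
b0, b1 = res.params
lod95 = 10 ** ((norm.ppf(0.95) - b0) / b1)
print(f"95% LOD ~ {lod95:.1f} gc/reaction")
```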

  14. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
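
    StOMP itself is more involved, but the core idea of combining greedy, stagewise support selection with a non-negativity constraint can be sketched with a matching-pursuit loop plus a non-negative least-squares refit on the selected support. Assumptions: scipy's `nnls`, a random Gaussian forward operator, and a simple top-fraction correlation rule standing in for StOMP's stagewise thresholding:

```python
import numpy as np
from scipy.optimize import nnls

def greedy_nonneg_reconstruction(A, y, n_stages=10, frac=0.1):
    """Greedy sparse solve of y ~ A w with w >= 0. Each stage admits
    the columns most correlated with the residual, then refits the
    active set with non-negative least squares."""
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    w = np.zeros(n)
    for _ in range(n_stages):
        resid = y - A @ w
        corr = np.abs(A.T @ resid)
        k = max(1, int(frac * n))            # stagewise selection size
        support[np.argsort(corr)[-k:]] = True
        coef, _ = nnls(A[:, support], y)     # non-negativity imposed here
        w[:] = 0.0
        w[np.flatnonzero(support)] = coef
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
w_true = np.zeros(200); w_true[[5, 50, 120]] = [2.0, 1.0, 3.0]
y = A @ w_true + 0.01 * rng.standard_normal(80)
print(np.flatnonzero(greedy_nonneg_reconstruction(A, y) > 0.1))
```

    Because the constraint is handled by the least-squares refit rather than a log transform of the field, the problem stays linear, mirroring the benefit the abstract highlights.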

  15. The paddle move commonly used in magic tricks as a means for analysing the perceptual limits of combined motion trajectories.

    PubMed

    Hergovich, Andreas; Gröbl, Kristian; Carbon, Claus-Christian

    2011-01-01

    Following Gustav Kuhn's inspiring technique of using magicians' acts as a source of insight into the cognitive sciences, we used the 'paddle move' for testing the psychophysics of combined movement trajectories. The paddle move is a standard technique in magic consisting of a combined rotating and tilting movement. Careful control of the mutual speed parameters of the two movements makes it possible to inhibit the perception of the rotation, letting the 'magic' effect emerge: a sudden change of the tilted object. By using 3-D animated computer graphics we analysed the interaction of different angular speeds and the object shape/size parameters in evoking this motion disappearance effect. An angular speed of 540 degrees s(-1) (1.5 rev. s(-1)) sufficed to inhibit the perception of the rotary movement, with the smallest object showing the strongest effect. 90.7% of the 172 participants were not able to perceive the rotary movement at an angular speed of 1125 degrees s(-1) (3.125 rev. s(-1)). Further analysis by multiple linear regression revealed that object height and object area have major influences on the effectiveness of the magic trick, demonstrating the applicability of analysing key factors of magic tricks to reveal limits of the perceptual system.

  16. Multiresolution persistent homology for excessively large biomolecular datasets

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
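
    The resolution-tunable filtration can be sketched by evaluating a kernel-based rigidity density on a grid, with the kernel width eta playing the role of the resolution, and feeding the density into a cubical persistent-homology computation. A toy 2D sketch assuming the `gudhi` package; the Gaussian kernel is a stand-in for the paper's flexibility-rigidity index correlation kernels:

```python
import numpy as np
import gudhi

def rigidity_density(points, grid, eta):
    """Gaussian-kernel density; the resolution is set by the width eta."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / eta**2).sum(axis=1)

# Toy point cloud with two clusters, i.e. two scales of structure.
rng = np.random.default_rng(2)
points = np.vstack([rng.normal(0, 0.3, (40, 2)),
                    rng.normal(3, 0.3, (40, 2))])

xs = np.linspace(-1.5, 4.5, 60)
grid = np.array([(x, y) for y in xs for x in xs])

for eta in (0.2, 1.0):   # fine vs coarse resolution
    dens = rigidity_density(points, grid, eta).reshape(60, 60)
    # Sublevel-set filtration of -density: peaks appear as H0 features.
    cc = gudhi.CubicalComplex(top_dimensional_cells=-dens)
    h0 = [p for dim, p in cc.persistence() if dim == 0]
    print(f"eta={eta}: {len(h0)} connected components tracked")
```

    Tuning eta moves the "topological lens": a small eta resolves each cluster's internal structure, while a large eta merges them into coarse-scale features.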

  17. A multiresolution framework to MEG/EEG source imaging.

    PubMed

    Gavit, L; Baillet, S; Mangin, J F; Pescatore, J; Garnero, L

    2001-10-01

    A new method based on a multiresolution approach for solving the ill-posed problem of brain electrical activity reconstruction from electroencephalogram (EEG)/magnetoencephalogram (MEG) signals is proposed in a distributed source model. At each step of the algorithm, a regularized solution to the inverse problem is used to constrain the source space on the cortical surface to be scanned at higher spatial resolution. We present the iterative procedure together with an extension of the ST-maximum a posteriori method [1] that integrates spatial and temporal a priori information in an estimator of the brain electrical activity. Results from EEG in a phantom head experiment with a real human skull and from real MEG data on a healthy human subject are presented. The performance of the multiresolution method combined with a nonquadratic estimator is compared with commonly used dipolar methods, and with the minimum-norm method with and without multiresolution. In all cases, the proposed approach proved to be more efficient, both in terms of computational load and result quality, for the identification of sparse focal patterns of cortical current density than the fixed-scale imaging approach.

  18. Survey and analysis of multiresolution methods for turbulence data

    DOE PAGES

    Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; Ahrens, James; Hamann, Bernd

    2015-11-10

    This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, to capture the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy-driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both velocities and the active scalar (density) field and their derivatives, spectra, and the properties of constant density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for large dataset handling, as well as simulation algorithms based on multi-resolution methods. In conclusion, the final section provides recommendations for the best decomposition algorithms based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.
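
    The truncated-coefficient comparison run in this survey can be reproduced in miniature: decompose a field, keep only the largest coefficients, reconstruct, and measure the error. A sketch with `pywt`, comparing a few wavelet families (curvelets and surfacelets need dedicated libraries and are omitted); the random field below is a stand-in for a turbulence slice, not DNS data:

```python
import numpy as np
import pywt

def truncated_error(field, wavelet, keep=0.05, level=4):
    """Relative L2 error after keeping only the largest `keep`
    fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(field, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)
    arr[np.abs(arr) < thresh] = 0.0          # truncate small coefficients
    rec = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
        wavelet)[:field.shape[0], :field.shape[1]]
    return np.linalg.norm(field - rec) / np.linalg.norm(field)

field = np.random.rand(256, 256)  # stand-in for a turbulence slice
for w in ("db4", "coif2", "bior4.4"):
    print(w, truncated_error(field, w))
```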

  19. Survey and analysis of multiresolution methods for turbulence data

    SciTech Connect

    Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; Ahrens, James; Hamann, Bernd

    2015-11-10

    This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, to capture the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy-driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both velocities and the active scalar (density) field and their derivatives, spectra, and the properties of constant density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for large dataset handling, as well as simulation algorithms based on multi-resolution methods. In conclusion, the final section provides recommendations for the best decomposition algorithms based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.

  20. Multiresolution persistent homology for excessively large biomolecular datasets

    PubMed Central

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-01-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs. PMID:26450288

  1. Concurrent multiresolution finite element: formulation and algorithmic aspects

    NASA Astrophysics Data System (ADS)

    Tang, Shan; Kopacz, Adrian M.; Chan O'Keeffe, Stephanie; Olson, Gregory B.; Liu, Wing Kam

    2013-12-01

    A multiresolution concurrent theory for heterogeneous materials is proposed with novel macroscale and microscale constitutive laws that include the plastic yield function at different length scales. In contrast to conventional plasticity, the plastic flow at the micro zone depends on the plastic strain gradient. The consistency conditions at the macro and micro zones result in a set of algebraic equations. Using appropriate boundary conditions, the finite element discretization was derived from a variational principle with extra degrees of freedom for the micro zones. In collaboration with LSTC Inc., the degrees of freedom at the micro zone and their related history variables have been added to LS-DYNA, and the 3D multiresolution theory has been implemented. Shear band propagation and a large scale simulation of a shear-driven ductile fracture process were carried out. Our results show that the proposed multiresolution theory, in combination with the parallel implementation in LS-DYNA, can capture the effects of the microstructure on shear band propagation and allows for realistic modeling of the ductile fracture process.

  2. Multiresolution persistent homology for excessively large biomolecular datasets

    SciTech Connect

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-07

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.

  3. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
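
    The key mechanism here, applying a transfer function to a per-voxel pdf rather than to a single down-sampled value, can be illustrated with plain histograms: a low-resolution voxel stores the pdf of intensities in its footprint, and the rendered value is the expectation of the transfer function under that pdf, which stays consistent across resolution levels. A simplified sketch (dense histogram pdfs instead of the paper's sparse 4D Gaussian mixtures):

```python
import numpy as np

def block_pdfs(volume, block=4, bins=32):
    """Replace each block of voxels by a histogram pdf of its intensities."""
    d, h, w = (s // block for s in volume.shape)
    blocks = volume[:d*block, :h*block, :w*block].reshape(
        d, block, h, block, w, block).transpose(0, 2, 4, 1, 3, 5)
    flat = blocks.reshape(d * h * w, block**3)
    edges = np.linspace(0.0, 1.0, bins + 1)
    pdfs = np.stack([np.histogram(row, bins=edges)[0]
                     for row in flat]) / block**3
    return pdfs.reshape(d, h, w, bins), edges

def apply_transfer(pdfs, edges, tf):
    """Expected transfer-function value under each voxel pdf."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    return pdfs @ tf(centers)   # a fast per-voxel "convolution" with the TF

vol = np.random.rand(32, 32, 32)
pdfs, edges = block_pdfs(vol)
opacity = apply_transfer(pdfs, edges, tf=lambda v: (v > 0.7).astype(float))
print(opacity.shape)            # (8, 8, 8) low-resolution rendered values
```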

  4. Protons are one of the limiting factors in determining sensitivity of nano surface-assisted (+)-mode LDI MS analyses.

    PubMed

    Cho, Eunji; Ahn, Miri; Kim, Young Hwan; Kim, Jongwon; Kim, Sunghwan

    2013-10-01

    A proton source employing a nanostructured gold surface for use in (+)-mode laser desorption ionization mass spectrometry (LDI-MS) was evaluated. Analysis of a perdeuterated polyaromatic hydrocarbon compound dissolved in regular toluene, perdeuterated toluene, and deuterated methanol showed that protonated ions were generated regardless of the solvent system. Therefore, it was concluded that residual water on the surface of the LDI plate was the major source of protons. The fact that residual water remaining after vacuum drying was the source of protons suggests that protons may be the limiting reagent in the LDI process and that overall ionization efficiency can be improved by incorporating an additional proton source. When extra proton sources, such as thiolate compounds and/or citric acid, were added to a nanostructured gold surface, the protonated signal abundance increased. These data show that protons are one of the limiting components in (+)-mode LDI MS analyses employing nanostructured gold surfaces. Therefore, additional efforts are required to identify compounds that can act as proton donors without generating peaks that interfere with mass spectral interpretation.

  5. Limiter

    DOEpatents

    Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.

    1984-10-19

    A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles.

  6. Limiter

    DOEpatents

    Cohen, Samuel A.; Hosea, Joel C.; Timberlake, John R.

    1986-01-01

    A limiter with a specially contoured front face accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution.

  7. Multi-parametric cytometry from a complex cellular sample: Improvements and limits of manual versus computational-based interactive analyses.

    PubMed

    Gondois-Rey, F; Granjeaud, S; Rouillier, P; Rioualen, C; Bidaut, G; Olive, D

    2016-05-01

    The wide possibilities opened by the developments of multi-parametric cytometry are limited by the inadequacy of classical methods of analysis to the multi-dimensional characteristics of the data. While new computational tools seem ideally adapted and have been applied successfully, their adoption is still low among flow cytometrists. With the aim of integrating unsupervised computational tools for the management of multi-stained samples, we investigated their advantages and limits by comparison with manual gating on a typical sample analyzed in routine immunomonitoring. A single tube of PBMC, containing 11 populations characterized by different sizes and stained with 9 fluorescent markers, was used. We investigated the impact of the strategy choice on manual gating variability, an undocumented pitfall of the analysis process, and we identified rules to optimize it. While assessing automatic gating as an alternative, we introduced the Multi-Experiment Viewer software (MeV) and validated it for merging clusters and annotating populations interactively. This procedure allowed the discovery of both targeted and unexpected populations. However, careful examination of the computed clusters in standard dot plots revealed some heterogeneity, often below 10%, that was overcome by increasing the number of clusters to be computed. MeV facilitated the identification of populations by displaying both the MFI and the marker signature of the dataset simultaneously. The procedure described here appears fully adapted to manage a high number of multi-stained samples homogeneously and allows improving multi-parametric analyses in a way close to the classic approach. © 2016 International Society for Advancement of Cytometry.

  8. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver

  9. A multiresolution restoration method for cardiac SPECT imaging.

    PubMed

    Franquiz, J M; Shukla, S

    1998-12-01

    In this study we present a multiresolution-based method for restoring cardiac SPECT projections. Original projections were decomposed into a set of sub-band frequency images by using analyzing functions localized in both the space and frequency domains. This representation allows a simple denoising and restoration procedure by discarding high-frequency channels and performing inversion only in low frequencies. The method was evaluated in bull's eye reconstructions of a realistic cardiac chest phantom with a custom-made liver insert and 99mTc liver-to-heart activity ratios (LHAR) of 0:1, 1.5:1, 2.5:1, and 3.5:1. The cardiac phantom in free air was used as the reference standard. Reconstructions were performed by filtered backprojection using (1) no correction; (2) restoration without attenuation correction; (3) attenuation correction without restoration; and (4) restoration and attenuation correction. The attenuation correction was carried out with Chang's method for one iteration. Results were compared with those obtained using an optimized prereconstruction Metz filter. Quantitative analysis was performed by calculating the normalized chi-square measure and mean +/- s.d. of bull's eye counts. In reconstructions with high liver activity (LHAR > 2), attenuation correction without restoration severely distorted the polar maps due to the spill-over of liver activity into the inferior myocardial wall. Both restoration methods, when combined with attenuation correction, compensated this artifact and yielded uniform polar maps similar to those of the reference standard. There was no visual or quantitative difference between the performance of Metz filtering and multiresolution restoration. However, the main advantage of the multiresolution method is that it offers a more concise and straightforward approach to the restoration problem. Multiresolution-based methods do not require information about the object image or optimization processes, such as in conventional
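
    The restoration strategy (decompose into sub-band frequency images, discard the noisy high-frequency channels, and invert only the low frequencies) can be mimicked with a wavelet decomposition followed by a regularized inverse filter on the retained band. A simplified 2D sketch assuming `pywt` and `scipy`; the Gaussian blur stands in for the SPECT system response, which it is not:

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def restore(projection, wavelet="db2", level=3, sigma=2.0, k=0.01):
    """Discard high-frequency sub-bands, then deconvolve the retained
    low-frequency channel with a regularized (Wiener-like) inverse."""
    coeffs = pywt.wavedec2(projection, wavelet, level=level)
    # Denoise: zero every detail (high-frequency) channel.
    coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in det)
                            for det in coeffs[1:]]
    smooth = pywt.waverec2(coeffs, wavelet)[:projection.shape[0],
                                            :projection.shape[1]]
    # Restore low frequencies: regularized inversion of a Gaussian PSF.
    psf = np.zeros_like(smooth); psf[0, 0] = 1.0
    H = np.fft.fft2(gaussian_filter(psf, sigma, mode="wrap"))
    F = np.fft.fft2(smooth)
    return np.fft.ifft2(F * np.conj(H) / (np.abs(H)**2 + k)).real

proj = gaussian_filter(np.random.rand(128, 128), 2.0)  # toy blurred data
print(restore(proj).shape)
```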

  10. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multiresolution representation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy but also describes subsurface processes closely related to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only where the solution changes rapidly. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across

  11. A Global, Multi-Resolution Approach to Regional Ocean Modeling

    SciTech Connect

    Du, Qiang

    2013-11-08

    In this collaborative research project between Pennsylvania State University, Colorado State University and Florida State University, we mainly focused on developing multi-resolution algorithms which are suitable for regional ocean modeling. We developed a hybrid implicit and explicit adaptive multirate time integration method to solve systems of time-dependent equations that present two significantly different scales. We studied the effects of spatial simplicial meshes on the stability and the conditioning of fully discrete approximations. We also studied an adaptive finite element method (AFEM) based upon the Centroidal Voronoi Tessellation (CVT) and superconvergent gradient recovery. Some of these techniques are now being used by geoscientists (such as those at LANL).

  12. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  13. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
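
    The pipeline sketched in this patent (partition the data, distribute the regions, wavelet-denoise each region, reassemble) can be illustrated with a process pool. Assumptions: `pywt` for the wavelet step, soft thresholding as the denoising rule, and the inter-processor communication (boundary exchange between regions) is omitted, so tile seams are ignored:

```python
import numpy as np
import pywt
from multiprocessing import Pool

def denoise_tile(tile, wavelet="db4", level=3, thresh=0.5):
    """Wavelet soft-threshold denoising of one data partition."""
    coeffs = pywt.wavedec2(tile, wavelet, level=level)
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in det)
        for det in coeffs[1:]]
    rec = pywt.waverec2(coeffs, wavelet)
    return rec[:tile.shape[0], :tile.shape[1]]  # back to original format

if __name__ == "__main__":
    data = np.random.rand(256, 256)
    tiles = [data[i:i + 64] for i in range(0, 256, 64)]  # partition rows
    with Pool(4) as pool:                                # distribute
        denoised = np.vstack(pool.map(denoise_tile, tiles))
    print(denoised.shape)
```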

  14. Multiresolution fusion of remotely sensed images with the Hermite transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Lopez-Caloca, Alejandra A.; Zambrano-Gallardo, Cira F.

    2004-02-01

    The Hermite Transform is an image representation model that incorporates some important properties of visual perception such as the analysis through overlapping receptive fields and the Gaussian derivative model of early vision. It also allows the construction of pyramidal multiresolution analysis-synthesis schemes. We show how the Hermite Transform can be used to build image fusion schemes that take advantage of the fact that Gaussian derivatives are good operators for the detection of relevant image patterns at different spatial scales. These patterns are later combined in the transform coefficient domain. Applications of this fusion algorithm are shown with remote sensing images, namely LANDSAT, IKONOS, RADARSAT and SAR AeS-1 images.
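
    A common way to implement this kind of multiresolution fusion is to decompose both images, keep at every position the coefficient with the larger magnitude (the stronger local pattern), and reconstruct. A sketch using a wavelet transform as a stand-in for the Hermite Transform, assuming `pywt` and co-registered inputs of equal size:

```python
import numpy as np
import pywt

def fuse(img_a, img_b, wavelet="db2", level=3):
    """Fuse two co-registered images by max-magnitude coefficient
    selection in the transform domain, averaging the coarse band."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]            # average approximations
    for da, db in zip(ca[1:], cb[1:]):         # per-scale detail bands
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    out = pywt.waverec2(fused, wavelet)
    return out[:img_a.shape[0], :img_a.shape[1]]

a = np.random.rand(128, 128)   # e.g. a panchromatic band
b = np.random.rand(128, 128)   # e.g. a co-registered SAR image
print(fuse(a, b).shape)
```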

  15. Physiological, biomass elemental composition and proteomic analyses of Escherichia coli ammonium-limited chemostat growth, and comparison with iron- and glucose-limited chemostat growth

    PubMed Central

    Folsom, James Patrick

    2015-01-01

    Escherichia coli physiological, biomass elemental composition and proteome acclimations to ammonium-limited chemostat growth were measured at four levels of nutrient scarcity controlled via chemostat dilution rate. These data were compared with published iron- and glucose-limited growth data collected from the same strain and at the same dilution rates to quantify general and nutrient-specific responses. Severe nutrient scarcity resulted in an overflow metabolism with differing organic byproduct profiles based on limiting nutrient and dilution rate. Ammonium-limited cultures secreted up to 35% of the metabolized glucose carbon as organic byproducts, with acetate representing the largest fraction; in comparison, iron-limited cultures secreted up to 70% of the metabolized glucose carbon as lactate, and glucose-limited cultures secreted up to 4% of the metabolized glucose carbon as formate. Biomass elemental composition differed with nutrient limitation; biomass from ammonium-limited cultures had a lower nitrogen content than biomass from either iron- or glucose-limited cultures. Proteomic analysis of central metabolism enzymes revealed that ammonium- and iron-limited cultures had a lower abundance of key tricarboxylic acid (TCA) cycle enzymes and higher abundance of key glycolysis enzymes compared with glucose-limited cultures. The overall results are largely consistent with cellular economics concepts, including metabolic tradeoff theory, where the limiting nutrient is invested into essential pathways such as glycolysis instead of higher ATP-yielding, but non-essential, pathways such as the TCA cycle. The data provide a detailed insight into ecologically competitive metabolic strategies selected by evolution, templates for controlling metabolism for bioprocesses and a comprehensive dataset for validating in silico representations of metabolism. PMID:26018546

  16. Global multi-resolution terrain elevation data 2010 (GMTED2010)

    USGS Publications Warehouse

    Danielson, Jeffrey J.; Gesch, Dean B.

    2011-01-01

    In 1996, the U.S. Geological Survey (USGS) developed a global topographic elevation model designated as GTOPO30 at a horizontal resolution of 30 arc-seconds for the entire Earth. Because no single source of topographic information covered the entire land surface, GTOPO30 was derived from eight raster and vector sources that included a substantial amount of U.S. Defense Mapping Agency data. The quality of the elevation data in GTOPO30 varies widely; there are no spatially-referenced metadata, and the major topographic features such as ridgelines and valleys are not well represented. Despite its coarse resolution and limited attributes, GTOPO30 has been widely used for a variety of hydrological, climatological, and geomorphological applications as well as military applications, where a regional, continental, or global scale topographic model is required. These applications have ranged from delineating drainage networks and watersheds to using digital elevation data for the extraction of topographic structure and three-dimensional (3D) visualization exercises (Jenson and Domingue, 1988; Verdin and Greenlee, 1996; Lehner and others, 2008). Many of the fundamental geophysical processes active at the Earth's surface are controlled or strongly influenced by topography, thus the critical need for high-quality terrain data (Gesch, 1994). U.S. Department of Defense requirements for mission planning, geographic registration of remotely sensed imagery, terrain visualization, and map production are similarly dependent on global topographic data. Since the time GTOPO30 was completed, the availability of higher-quality elevation data over large geographic areas has improved markedly. New data sources include global Digital Terrain Elevation Data (DTED®) from the Shuttle Radar Topography Mission (SRTM), Canadian elevation data, and data from the Ice, Cloud, and land Elevation Satellite (ICESat). Given the widespread use of GTOPO30 and the equivalent 30-arc

  17. Multiresolution in CROCO (Coastal and Regional Ocean Community model)

    NASA Astrophysics Data System (ADS)

    Debreu, Laurent; Auclair, Francis; Benshila, Rachid; Capet, Xavier; Dumas, Franck; Julien, Swen; Marchesiello, Patrick

    2016-04-01

    CROCO (Coastal and Regional Ocean Community model [1]) is a new oceanic modeling system built upon ROMS_AGRIF and the non-hydrostatic kernel of SNH, gradually including algorithms from MARS3D (sediments) and HYCOM (vertical coordinates). An important objective of CROCO is to provide the possibility of running truly multiresolution simulations. Our previous work on structured mesh refinement [2] allowed us to run two-way nesting with the following major features: conservation, spatial and temporal refinement, coupling at the barotropic level. In this presentation, we will expose the current developments in CROCO towards multiresolution simulations: connection between neighboring grids at the same level of resolution and load balancing on parallel computers. Results of preliminary experiments will be given both on an idealized test case and on a realistic simulation of the Bay of Biscay with high resolution along the coast. References: [1] : CROCO : http://www.croco-ocean.org [2] : Debreu, L., P. Marchesiello, P. Penven, and G. Cambon, 2012: Two-way nesting in split-explicit ocean models: algorithms, implementation and validation. Ocean Modelling, 49-50, 1-21.

  18. Automated transformation-invariant shape recognition through wavelet multiresolution

    NASA Astrophysics Data System (ADS)

    Brault, Patrice; Mounier, Hugues

    2001-12-01

    We present here new results in Wavelet Multi-Resolution Analysis (W-MRA) applied to shape recognition in automatic vehicle driving applications. Different types of shapes have to be recognized in this framework. They pertain to most of the objects entering the sensor field of a car. These objects can be road signs, lane separation lines, moving or static obstacles, other automotive vehicles, or visual beacons. The recognition process must be invariant to global transformations, affine or not, namely rotation, translation and scaling. It also has to be invariant to more local, elastic deformations like the perspective (in particular with wide-angle camera lenses), and to deformations due to environmental conditions (weather: rain, mist, light reverberation) or optical and electrical signal noise. To demonstrate our method, an initial shape, with a known contour, is compared to the same contour altered by rotation, translation, scaling and perspective. The curvature computed at each contour point is used as the main criterion in the shape matching process. The original part of this work is to use wavelet descriptors, generated with a fast orthonormal W-MRA, rather than Fourier descriptors, in order to provide a multi-resolution description of the contour to be analyzed. In this way, the intrinsic spatial localization property of wavelet descriptors can be exploited and the recognition process can be sped up.
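
    The descriptor pipeline described here (curvature along the contour, then a multiresolution description of that curvature signal) can be sketched in a few lines. Assumptions: a closed contour given as (x, y) samples, finite-difference curvature, and `pywt` as a generic orthonormal wavelet decomposition rather than the authors' specific W-MRA:

```python
import numpy as np
import pywt

def curvature(contour):
    """Signed curvature of a closed contour sampled as (N, 2) points."""
    x, y = contour[:, 0], contour[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def wavelet_descriptors(contour, wavelet="db4", level=4):
    """Multiresolution descriptor: wavelet coefficients of the
    curvature signal, coarsest level first."""
    return pywt.wavedec(curvature(contour), wavelet, level=level)

# A closed test shape (ellipse); rotation and translation leave the
# curvature sequence unchanged up to a cyclic shift.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ellipse = np.stack([2 * np.cos(t), np.sin(t)], axis=1)
for c in wavelet_descriptors(ellipse):
    print(c.shape)
```

    Matching can then proceed coarse-to-fine, comparing the coarse coefficients first and descending to finer levels only for plausible candidates, which is where the claimed speedup comes from.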

  19. Using Fuzzy Logic to Enhance Stereo Matching in Multiresolution Images

    PubMed Central

    Medeiros, Marcos D.; Gonçalves, Luiz Marcos G.; Frery, Alejandro C.

    2010-01-01

    Stereo matching is an open problem in Computer Vision, for which local features are extracted to identify corresponding points in pairs of images. The results are heavily dependent on the initial steps. We apply image decomposition in multiresolution levels, reducing the search space, computational time, and errors. We propose a solution to the problem of how deep (coarse) the stereo measures should start, trading between error minimization and time consumption, by starting the stereo calculation at varying resolution levels, for each pixel, according to fuzzy decisions. Our heuristic improves the overall execution time since it only employs deeper resolution levels when strictly necessary. It also reduces errors because it measures similarity between windows with enough detail. We also compare our algorithm with a very fast multi-resolution approach, and with one based on fuzzy logic. Our algorithm performs faster and/or better than all those approaches, becoming, thus, a good candidate for robotic vision applications. We also discuss the system architecture that efficiently implements our solution. PMID:22205859

  20. Image compression using wavelet transform and multiresolution decomposition.

    PubMed

    Averbuch, A; Lazar, D; Israeli, M

    1996-01-01

    Schemes for compression of black-and-white images based on the wavelet transform are presented. The multiresolution nature of the discrete wavelet transform proves to be a powerful tool for representing images decomposed along the vertical and horizontal directions with the pyramidal multiresolution scheme. The wavelet transform decomposes the image into a set of subimages with different resolutions corresponding to different frequency bands. Different bit allocations are then tested, under the assumption that details at high resolution and in diagonal directions are less visible to the human eye. The resulting coefficients are vector quantized (VQ) using the LBG algorithm. By using an error correction method that approximates the quantization error of the reconstructed coefficients, we minimize distortion for a given compression rate at low computational cost. Several compression techniques are tested. In the first experiment, several 512x512 images are trained together and common code tables are created. Using these tables, the black-and-white images of the training sequence achieve a compression ratio of 60-65 and a PSNR of 30-33. To investigate compression of images outside the training set, many 480x480 images of uncalibrated faces are trained together to yield global code tables. Images of faces outside the training set are compressed and reconstructed using the resulting tables. The compression ratio is 40; PSNRs are 30-36. Images from the training set have similar compression values and quality. Finally, another compression method, based on the end vector bit allocation, is examined.
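
    The decomposition-plus-quantization front end of such a scheme is easy to sketch. The following is an illustrative stand-in, assuming PyWavelets, with uniform scalar quantization in place of the paper's LBG vector quantizer; the step sizes and the random test image are placeholders.

```python
# Sketch of a wavelet compression front end: decompose, quantize subbands
# with allocation favoring coarse/non-diagonal detail, reconstruct, score.
import numpy as np
import pywt

img = np.random.rand(512, 512)  # stand-in for a 512x512 grayscale image

# Pyramidal multiresolution decomposition (vertical/horizontal directions).
coeffs = pywt.wavedec2(img, 'haar', level=4)

def quantize(band, step):
    return np.round(band / step) * step

recon = [coeffs[0]]  # keep the coarse approximation at full precision
for level, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
    # Later tuples are finer scales: quantize them more coarsely, and the
    # diagonal subband (least visible) more coarsely still.
    step = 0.02 * 2 ** level
    recon.append((quantize(ch, step), quantize(cv, step),
                  quantize(cd, 2 * step)))

out = pywt.waverec2(recon, 'haar')
mse = np.mean((img - out) ** 2)
print(f"PSNR: {10 * np.log10(1.0 / mse):.1f} dB")
```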

  1. Information Extraction of High-Resolution Remotely Sensed Image based on Multiresolution Segmentation

    NASA Astrophysics Data System (ADS)

    Shao, P.; Yang, G.-D.; Niu, X.-F.; Zhang, X.-P.; Zhan, F.-L.; Tang, T.-Q.

    2013-11-01

    The principle of multiresolution segmentation is described in detail in this study, and the Canny algorithm is applied for edge detection in the remotely sensed image based on this principle. The target image is divided into regions by object-oriented multiresolution segmentation and edge detection. Further, an object hierarchy is created, and a series of classes (water bodies, vegetation, roads, residential areas, bare land and other information) is extracted using spectral and geometrical features. The results indicate that edge detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% according to the confusion matrix.
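
    To make the edge-detection step concrete, here is a hedged OpenCV sketch of how Canny edges can constrain region formation; the file name, blur and threshold settings are placeholder assumptions, and the object-oriented multiresolution segmentation itself is not reproduced.

```python
# Hedged sketch: Canny edges as a constraint on region formation.
import cv2
import numpy as np

img = cv2.imread('scene.tif')                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 1.4)    # suppress sensor noise first

edges = cv2.Canny(gray, threshold1=50, threshold2=150)

# One simple way to let edges guide segmentation: label the edge-free
# regions as connected components, so segments cannot leak across edges.
n_labels, labels = cv2.connectedComponents(255 - edges)
print(f"{n_labels - 1} edge-bounded regions")
```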

  2. Sensitivity Analysis of Wavelet-based Approach to Multiresolution-Characterization and Scaling of Two-Dimensional Heterogeneous Fields

    NASA Astrophysics Data System (ADS)

    Hyun, Y.; Ahn, Y.

    2012-12-01

    A wavelet-based scaling approach has recently been used to characterize and/or upscale hydrogeologic variables with given Hurst coefficient, characteristic length scale, and orientation. A wavelet-based approach requires specifying a mother wavelet for the wavelet analysis. We perform a sensitivity analysis of wavelet transforms with respect to several types of mother wavelets in characterizing and scaling two-dimensional random fractal fields, which are generated theoretically for various Hurst coefficients, characteristic lengths, and orientations. We use Haar, Daubechies, Symlet, and Coiflet wavelets and compare the results. The numerical studies are carried out using the MATLAB wavelet toolbox. Results show that the Daubechies wavelet is the most suitable among the various wavelets for scaling random fractal fields. In the multiresolution characterization of heterogeneous fields, the characteristic lengths inferred from simulated fields vary with the mother wavelet. This study suggests that one should be careful in choosing a mother wavelet function in wavelet-based scaling studies, since no reliable results can be expected when fractal fields are characterized on a multiresolution with an unsuitable mother wavelet.
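
    The kind of sensitivity test described can be sketched in one dimension. The snippet below is an illustration rather than the study's MATLAB procedure: it synthesizes fractional Gaussian noise with a known Hurst coefficient and compares the Hurst estimate obtained from the wavelet detail-variance slope under several mother wavelets (PyWavelets assumed; the spectral synthesis is deliberately crude).

```python
# Illustrative 1D test: Hurst estimation from wavelet detail variances,
# repeated for several mother wavelets.
import numpy as np
import pywt

rng = np.random.default_rng(0)

def fgn(n, H):
    """Crude fractional Gaussian noise via spectral synthesis."""
    f = np.fft.rfftfreq(n)[1:]                    # skip the zero frequency
    amp = f ** (-(2 * H - 1) / 2)                 # fGn power ~ f^(1 - 2H)
    phase = rng.uniform(0, 2 * np.pi, f.size)
    spec = np.concatenate(([0], amp * np.exp(1j * phase)))
    return np.fft.irfft(spec, n)

x = fgn(2 ** 14, H=0.7)

for wav in ['haar', 'db4', 'sym4', 'coif2']:
    details = pywt.wavedec(x, wav, level=8)[1:]   # [d8 (coarsest) ... d1 (finest)]
    scales = np.arange(len(details), 0, -1)       # scale index j = 8 ... 1
    logvar = np.log2([np.var(d) for d in details])
    slope = np.polyfit(scales, logvar, 1)[0]      # log2 Var(d_j) ~ (2H - 1) j
    print(f"{wav:5s}  H estimate: {(slope + 1) / 2:.3f}")
```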

  3. Towards Online Multiresolution Community Detection in Large-Scale Networks

    PubMed Central

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

    The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods are accurate only under a priori assumptions about network properties and with predefined parameters. In this paper, we introduce a new quality function for local communities and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
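
    As a rough illustration of local expansion from a seed, here is a hedged sketch assuming networkx; the paper defines its own quality function, so the simple boundary-ratio measure and the size budget below are stand-ins, not the authors' method.

```python
# Generic local community detection by greedy expansion from a seed vertex.
import networkx as nx

def quality(G, S):
    """Share of edges touching S that stay inside S (a simple local measure)."""
    internal = boundary = 0
    for u, v in G.edges(S):
        if u in S and v in S:
            internal += 1
        else:
            boundary += 1
    return internal / max(internal + boundary, 1)

def local_community(G, seed, max_size=12):
    # max_size is a crude budget standing in for a principled stopping rule.
    C = {seed}
    improved = True
    while improved and len(C) < max_size:
        improved = False
        frontier = {v for u in C for v in G.neighbors(u)} - C
        for v in sorted(frontier):                # deterministic order
            if quality(G, C | {v}) > quality(G, C):
                C.add(v)
                improved = True
    return C

G = nx.karate_club_graph()
print(sorted(local_community(G, seed=0)))
```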

  4. Automatic image segmentation by dynamic region growth and multiresolution merging.

    PubMed

    Ugarriza, Luis Garcia; Saber, Eli; Vantaram, Sreenath Rao; Amuso, Vincent; Shaw, Mark; Bhaskar, Ranjit

    2009-10-01

    Image segmentation is a fundamental task in many computer vision applications. In this paper, we propose a new unsupervised color image segmentation algorithm, which exploits the information obtained from detecting edges in color images in the CIE L*a*b* color space. To this effect, by using a color gradient detection technique, pixels without edges are clustered and labeled individually to identify some initial portion of the input image content. Elements that contain higher gradient densities are included by the dynamic generation of clusters as the algorithm progresses. Texture modeling is performed by color quantization and local entropy computation of the quantized image. The obtained texture and color information along with a region growth map consisting of all fully grown regions are used to perform a unique multiresolution merging procedure to blend regions with similar characteristics. Experimental results obtained in comparison to published segmentation techniques demonstrate the performance advantages of the proposed method. PMID:19535323

  5. Multi-resolution texture synthesis from turntable image sequences

    NASA Astrophysics Data System (ADS)

    Wang, Xuedong; Wu, Xiaojun; Zhang, Xiaorong

    2012-03-01

    Texture synthesis and texture mapping are important technologies for rendering realistic three-dimensional scenes. They have been widely used in virtual reality, urban modeling, 3D animation, gaming and other areas. In this paper, we propose a fast method to construct high-quality texture maps for multi-resolution texture synthesis from turntable image sequences. Given a 3D mesh model, we first obtain the projection relationship between the 3D mesh and the image sequences. We then use the image sequences to construct a texture triangle for each triangle of the 3D mesh and obtain a global rectangular texture map for the whole mesh. Another approach to constructing the texture map uses stretch-minimizing mesh parameterization. Finally, we map the texture onto the mesh model to verify the quality of these two methods. The high performance of the method has been demonstrated on many real object models.

  6. Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme

    NASA Astrophysics Data System (ADS)

    Hickmann, K. S.; Godinez, H. C.

    2015-12-01

    When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation, the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.
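
    The innovation-consistency idea behind such tuning fits in a few lines. This is a generic, hedged sketch (plain numpy, single scale, standard diagnostic), not the authors' scale-dependent scheme; in their setting one would compute a factor like this per wavelet scale.

```python
# Minimal sketch of innovation-consistency inflation tuning for an ensemble.
import numpy as np

def tune_inflation(ensemble, y, H, R):
    """Pick lambda so that lambda*tr(H P H^T) + tr(R) matches d^T d."""
    xbar = ensemble.mean(axis=1)
    d = y - H @ xbar                          # innovation vector
    A = ensemble - xbar[:, None]              # ensemble anomalies
    HPHt = (H @ A) @ (H @ A).T / (ensemble.shape[1] - 1)
    lam = (d @ d - np.trace(R)) / max(np.trace(HPHt), 1e-12)
    return max(lam, 1.0)                      # never deflate

rng = np.random.default_rng(1)
n, m, N = 40, 10, 20                          # state dim, obs dim, members
H = np.eye(m, n)
R = 0.1 * np.eye(m)
truth = rng.standard_normal(n)
ens = truth[:, None] + 0.5 * rng.standard_normal((n, N))
y = H @ truth + rng.multivariate_normal(np.zeros(m), R)
print(f"inflation factor: {tune_inflation(ens, y, H, R):.2f}")
```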

  7. Multiresolution 3-D reconstruction from side-scan sonar images.

    PubMed

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.

  8. Parallel implementation of the biorthogonal multiresolution time-domain method

    NASA Astrophysics Data System (ADS)

    Zhu, Xianyang; Carin, Lawrence; Dogaru, Traian

    2003-05-01

    The three-dimensional biorthogonal multiresolution time-domain (Bi-MRTD) method is presented for both free-space and half-space scattering problems. The perfectly matched layer (PML) is used as an absorbing boundary condition. It has been shown that improved numerical-dispersion properties can be obtained with the use of smooth, compactly supported wavelet functions as the basis; here we employ the Cohen-Daubechies-Feauveau (CDF) biorthogonal wavelets. When a CDF-wavelet expansion is used, the spatial-sampling rate can be reduced considerably compared with that of the conventional finite-difference time-domain (FDTD) method, implying that larger targets can be simulated without sacrificing accuracy. We implement the Bi-MRTD on a cluster of allocated-memory machines, using the message-passing interface (MPI), such that very large targets can be modeled. Numerical results are compared with analytical ones and with those obtained using the traditional FDTD method.

  9. Multiresolution strategies for the numerical solution of optimal control problems

    NASA Astrophysics Data System (ADS)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems, but less work has been done on making these algorithms faster and more robust. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid. This requires a large amount of computational resources, both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using fewer computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining it uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed, which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm, which adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to regions where the solution exhibited sharp features and fewer points to regions where the solution was smooth. Thereby, the computational time and memory usage have been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a

  10. Out-of-Core Construction and Visualization of Multiresolution Surfaces

    SciTech Connect

    Lindstrom, P

    2003-02-03

    We present a method for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time system pages in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a rectilinear octree grid to coarsen and create a hierarchy for the mesh, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The run-time component of our system uses asynchronous rendering and view-dependent refinement driven by screen-space error and visibility. The system exploits frame-to-frame coherence and has been designed to allow preemptive refinement at the granularity of individual vertices to support refinement on a time budget. Our results indicate a significant improvement in processing speed over previous methods for out-of-core multiresolution surface construction. Meanwhile, all phases of the method are disk and memory efficient, and are fairly straightforward to implement.

  11. Out-of-Core Construction and Visualization of Multiresolution Surfaces

    SciTech Connect

    Lindstrom, P

    2002-11-04

    We present a method for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time system pages in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a uniform octree grid to coarsen a mesh and create a hierarchy, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The run-time component of our system uses asynchronous rendering and view-dependent refinement driven by screen-space error and visibility. The system exploits frame-to-frame coherence and has been designed to allow preemptive refinement at the granularity of individual vertices to support refinement on a time budget. Our results indicate a significant improvement in processing speed over previous methods for out-of-core multiresolution surface construction. Meanwhile, all phases of the method are both disk and memory efficient, and are fairly straightforward to implement.

  12. A Multi-Resolution Data Structure for Two-Dimensional Morse Functions

    SciTech Connect

    Bremer, P-T; Edelsbrunner, H; Hamann, B; Pascucci, V

    2003-07-30

    The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.

  13. Static multiresolution grids with inline hierarchy information for cosmic ray propagation

    NASA Astrophysics Data System (ADS)

    Müller, Gero

    2016-08-01

    For numerical simulations of cosmic-ray propagation, fast access to static magnetic field data is required. We present a data structure for multiresolution vector grids which is optimized for fast access, low overhead and shared-memory use. The hierarchy information is encoded into the grid itself, reducing the memory overhead. Benchmarks show that in certain scenarios the differences in deflections introduced by sampling the magnetic field model can be significantly reduced when using the multiresolution approach.
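
    The inline-hierarchy idea, storing child-block indices inside the cell array itself so that a lookup never leaves one flat, shareable buffer, can be sketched in 2D. Everything below is an assumed layout for illustration; the paper's grids are 3D and tuned for shared-memory use.

```python
# Illustrative 2D analogue of an inline-hierarchy grid: one flat array holds
# both values and hierarchy. A non-negative entry stores a field value; a
# negative entry encodes the start index of its 2x2 child block in-place.
import numpy as np

cells = np.array([1.0, -4.0, 3.0, 2.0,   # root 2x2 block; cell 1 is refined
                  0.5, 0.6, 0.7, 0.8])   # children of cell 1 start at index 4

def lookup(x, y, index0=0):
    """Descend from a 2x2 block covering [0,1)^2 to the finest cell at (x, y)."""
    ix, iy = int(2 * x), int(2 * y)
    i = index0 + iy * 2 + ix
    if cells[i] >= 0:
        return cells[i]
    child0 = int(-cells[i])               # inline pointer, no separate tree
    return lookup(2 * x - ix, 2 * y - iy, index0=child0)

print(lookup(0.6, 0.1))   # falls inside the refined cell -> child value 0.5
print(lookup(0.1, 0.6))   # coarse cell -> 3.0
```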

  14. Continuously zoom imaging probe for the multi-resolution foveated laparoscope.

    PubMed

    Qin, Yi; Hua, Hong

    2016-04-01

    In modern minimally invasive surgery (MIS), standard laparoscopes suffer from a tradeoff between spatial resolution and field of view (FOV). The inability to simultaneously acquire high-resolution images for accurate operation and wide-angle overviews for situational awareness limits the efficiency and outcome of MIS. A dual-view multi-resolution foveated laparoscope (MRFL), which can simultaneously provide the surgeon with a high-resolution view as well as a wide-angle overview, was previously proposed and demonstrated to have great potential for improving MIS. Although experiments demonstrated that the high-magnification probe has adequate magnification for viewing surgical details, the dual-view MRFL is limited to two fixed levels of magnification. A fine adjustment of the magnification is highly desired for obtaining high-resolution images with the desired field coverage. In this paper, a high-magnification probe with continuous zooming capability and no mechanical moving parts is demonstrated. By taking advantage of two electrically tunable lenses, one for optical zoom and the other for image focus compensation, the optical magnification of the high-magnification probe varies from 2× to 3× relative to the wide-angle probe, while the focused object position stays the same as for the wide-angle probe. The optical design and the tunable lens analysis are presented, followed by a prototype demonstration.

  15. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-09-01

    Voxel models of the human body are commonly used for simulating radiation dose with Monte Carlo radiation transport codes. Due to memory limitations, the voxel resolution of these computational phantoms is typically too coarse to accurately represent the dimensions of small features such as the eye. The recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of the radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose for various exposure types. This phantom provides an accurate representation of radiation transport through the structures of the eye. Two alternative methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method are compared against existing computational phantoms.

  16. Continuously zoom imaging probe for the multi-resolution foveated laparoscope.

    PubMed

    Qin, Yi; Hua, Hong

    2016-04-01

    In modern minimally invasive surgery (MIS), standard laparoscopes suffer from a tradeoff between spatial resolution and field of view (FOV). The inability to simultaneously acquire high-resolution images for accurate operation and wide-angle overviews for situational awareness limits the efficiency and outcome of MIS. A dual-view multi-resolution foveated laparoscope (MRFL), which can simultaneously provide the surgeon with a high-resolution view as well as a wide-angle overview, was previously proposed and demonstrated to have great potential for improving MIS. Although experiments demonstrated that the high-magnification probe has adequate magnification for viewing surgical details, the dual-view MRFL is limited to two fixed levels of magnification. A fine adjustment of the magnification is highly desired for obtaining high-resolution images with the desired field coverage. In this paper, a high-magnification probe with continuous zooming capability and no mechanical moving parts is demonstrated. By taking advantage of two electrically tunable lenses, one for optical zoom and the other for image focus compensation, the optical magnification of the high-magnification probe varies from 2× to 3× relative to the wide-angle probe, while the focused object position stays the same as for the wide-angle probe. The optical design and the tunable lens analysis are presented, followed by a prototype demonstration. PMID:27446645

  17. Multi-Resolution Seismic Tomography Based on Recursive Tessellation Hierarchy

    SciTech Connect

    Simmons, N A; Myers, S C; Ramirez, A

    2009-07-01

    tomographic problems. They also apply the progressive inversion approach with Pn waves traveling within the Middle East region and compare the results to simple tomographic inversions. As expected from synthetic testing, the progressive approach results in detailed structure where there is high data density and broader regional anomalies where seismic information is sparse. The ultimate goal is to use these methods to produce a seamless, multi-resolution global tomographic model with local model resolution determined by the constraints afforded by available data. They envisage this new technique as the general approach to be employed for future multi-resolution model development with complex arrangements of regional and teleseismic information.

  18. Leaf Length Tracker: a novel approach to analyse leaf elongation close to the thermal limit of growth in the field

    PubMed Central

    Kirchgessner, Norbert; Yates, Steven; Hiltpold, Maya; Walter, Achim

    2016-01-01

    Leaf growth in monocot crops such as wheat and barley largely follows the daily temperature course, particularly under cold but humid springtime field conditions. Knowledge of the temperature response of leaf extension, particularly variations close to the thermal limit of growth, helps define physiological growth constraints and breeding-related genotypic differences among cultivars. Here, we present a novel method, called ‘Leaf Length Tracker’ (LLT), suitable for measuring leaf elongation rates (LERs) of cereals and other grasses with high precision and high temporal resolution under field conditions. The method is based on image sequence analysis, using a marker tracking approach to calculate LERs. We applied the LLT to several varieties of winter wheat (Triticum aestivum), summer barley (Hordeum vulgare), and ryegrass (Lolium perenne), grown in the field and in growth cabinets under controlled conditions. LLT is easy to use and we demonstrate its reliability and precision under changing weather conditions that include temperature, wind, and rain. We found that leaf growth stopped at a base temperature of 0°C for all studied species and we detected significant genotype-specific differences in LER with rising temperature. The data obtained were statistically robust and were reproducible in the tested environments. Using LLT, we were able to detect subtle differences (sub-millimeter) in leaf growth patterns. This method will allow the collection of leaf growth data in a wide range of future field experiments on different graminoid species or varieties under varying environmental or treatment conditions. PMID:26818912

  19. Native and nonnative fish populations of the Colorado River are food limited--evidence from new food web analyses

    USGS Publications Warehouse

    Kennedy, Theodore A.; Cross, Wyatt F.; Hall, Robert O.; Baxter, Colden V.; Rosi-Marshall, Emma J.

    2013-01-01

    Fish populations in the Colorado River downstream from Glen Canyon Dam appear to be limited by the availability of high-quality invertebrate prey. Midge and blackfly production is low and nonnative rainbow trout in Glen Canyon and native fishes in Grand Canyon consume virtually all of the midge and blackfly biomass that is produced annually. In Glen Canyon, the invertebrate assemblage is dominated by nonnative New Zealand mudsnails, the food web has a simple structure, and transfers of energy from the base of the web (algae) to the top of the web (rainbow trout) are inefficient. The food webs in Grand Canyon are more complex relative to Glen Canyon, because, on average, each species in the web is involved in more interactions and feeding connections. Based on theory and on studies from other ecosystems, the structure and organization of Grand Canyon food webs should make them more stable and less susceptible to large changes following perturbations of the flow regime relative to food webs in Glen Canyon. In support of this hypothesis, Grand Canyon food webs were much less affected by a 2008 controlled flood relative to the food web in Glen Canyon.

  20. Hybrid multi-resolution detection of moving targets in infrared imagery

    NASA Astrophysics Data System (ADS)

    Tewary, Suman; Akula, Aparna; Ghosh, Ripul; Kumar, Satish; Sardana, H. K.

    2014-11-01

    A hybrid moving target detection approach for thermal infrared imagery in a multi-resolution framework is presented. Background subtraction and optical flow methods are widely used to detect moving targets. However, each method has pros and cons that limit its performance. Conventional background subtraction is affected by dynamic noise and partial extraction of targets. Fast independent component analysis based background subtraction is efficient for target detection in infrared image sequences; however, the noise increases for small targets. A well-known motion detection method is optical flow, yet it produces partial detections for low-textured images and is computationally expensive due to the gradient calculation at each pixel location. The synergistic use of conventional background subtraction, fast independent component analysis and optical flow at different resolutions provides promising detection of targets with reduced time complexity. Dynamic background noise is compensated by the background update. The methodology is validated on benchmark infrared image datasets as well as on experimentally generated infrared image sequences of moving targets in the field, under various conditions of illumination, ambient temperature and distance of the target from the sensor location. The significant value of the F-measure validates the efficiency of the proposed methodology, with high confidence of detection and low false alarms.
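
    A hedged single-resolution sketch of the fusion idea, assuming OpenCV: 'ir_sequence.avi', the MOG2 settings and the flow threshold are placeholders, and the paper's fast-ICA stage and multiresolution framework are not reproduced here.

```python
# Sketch: fuse background-subtraction and optical-flow cues per frame.
import cv2
import numpy as np

cap = cv2.VideoCapture('ir_sequence.avi')      # hypothetical IR video
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    fg = bg.apply(frame)                       # background-subtraction mask
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadows
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    motion = (mag > 1.0).astype(np.uint8) * 255

    # Keep pixels flagged by both cues to suppress dynamic background noise.
    target_mask = cv2.bitwise_and(fg, motion)
    prev = gray
```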

  1. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    SciTech Connect

    Hoa T. Nguyen; Stone, Daithi; E. Wes Bethel

    2016-01-01

    An ongoing challenge in the visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem propose either reduced-resolution versions of the data, or projections of the data, or both. These approaches still have limitations, such as high computational cost or loss of accuracy. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait of the data, namely variation. We use two case studies to explore this idea: one based on a synthetic dataset, and another based on a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in the data, a statistical measure preserves this key characteristic more faithfully, across both multi-dimensional projections and multi-resolution representations, than a methodology based on averaging.
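
    The core point, that an averaging-based reduction suppresses exactly the variation one may want to preserve while a statistical summary keeps it, fits in a few lines of numpy. This is illustrative only; the paper's metric and data differ.

```python
# Block-averaged downsampling washes out variation; a per-block statistic
# (here the standard deviation) preserves it as a summary.
import numpy as np

data = np.random.default_rng(2).standard_normal((512, 512))
b = 8  # reduction factor
blocks = data.reshape(512 // b, b, 512 // b, b)

mean_reduced = blocks.mean(axis=(1, 3))   # conventional downsampling
std_reduced = blocks.std(axis=(1, 3))     # variation-preserving summary

print(f"original std:        {data.std():.3f}")
print(f"std of block means:  {mean_reduced.std():.3f}")  # much smaller
print(f"mean of block stds:  {std_reduced.mean():.3f}")  # ~ original
```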

  2. Analysing the Spectrum of Andesitic Plinian Eruptions: Approaching the Uppermost Hazard Limits Expected from MT. Ruapehu, New Zealand

    NASA Astrophysics Data System (ADS)

    Pardo, N.; Cronin, S. J.; Palmer, A. S.; Procter, J.; Smith, I. E.; Nemeth, K.

    2011-12-01

    parameters by comparing different methodologies, in order to best estimate realistic uppermost hazard limits. We found the Sulpizio (2005) method of k1 vs. √Aip, integrating multiple segments, to be the best approach for quantifying past eruptions where the exposures are limited to proximal-intermediate locations and isopachs thinner than 5 cm cannot be constructed. The bilobate nature of both the isopach and isopleth maps reflects the complexity of tephra dispersion, in the form of non-elliptical isopleth shapes showing strong contour distortion and lobe-axis bending, which reflect important shifts in wind direction over a short time interval. Calculated eruptive parameters such as minimum erupted volumes (0.3 to 0.6 km^3), break-in-slope distances (√Aip: 31.4-80.8 km), column heights (22-37 km), volume discharge rates (~10^4-10^5 m^3/s), and mass discharge rates (~10^7-10^8 kg/s) are all consistent with Plinian-style eruptions, significantly larger than the eruptions that have occurred over the past 5000 yr (VEI = 3). These new data could define the "worst-case" eruption scenario for Ruapehu, similar to the Plinian phases of the Askja 1875 and Chaitén 2008 eruptions.

  3. Multi-tissue analyses reveal limited inter-annual and seasonal variation in mercury exposure in an Antarctic penguin community.

    PubMed

    Brasso, Rebecka L; Polito, Michael J; Emslie, Steven D

    2014-10-01

    Inter-annual variation in tissue mercury concentrations in birds can result from annual changes in the bioavailability of mercury or shifts in dietary composition and/or trophic level. We investigated potential annual variability in mercury dynamics in the Antarctic marine food web using Pygoscelis penguins as biomonitors. Eggshell membrane, chick down, and adult feathers were collected from three species of sympatrically breeding Pygoscelis penguins during the austral summers of 2006/2007-2010/2011. To evaluate the hypothesis that mercury concentrations in penguins exhibit significant inter-annual variation and to determine the potential source of such variation (dietary or environmental), we compared tissue mercury concentrations with trophic levels as indicated by δ(15)N values from all species and tissues. Overall, no inter-annual variation in mercury was observed in adult feathers suggesting that mercury exposure, on an annual scale, was consistent for Pygoscelis penguins. However, when examining tissues that reflected more discrete time periods (chick down and eggshell membrane) relative to adult feathers, we found some evidence of inter-annual variation in mercury exposure during penguins' pre-breeding and chick rearing periods. Evidence of inter-annual variation in penguin trophic level was also limited suggesting that foraging ecology and environmental factors related to the bioavailability of mercury may provide more explanatory power for mercury exposure compared to trophic level alone. Even so, the variable strength of relationships observed between trophic level and tissue mercury concentrations across and within Pygoscelis penguin species suggest that caution is required when selecting appropriate species and tissue combinations for environmental biomonitoring studies in Antarctica.

  4. Wavelet multiresolution analyses adapted for the fast solution of boundary value ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Jawerth, Bjoern; Sweldens, Wim

    1993-01-01

    We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.

  5. On analysis of electroencephalogram by multiresolution-based energetic approach

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt; Siddiqi, A. H.; Aslan, Zafer

    2013-10-01

    Epilepsy is a common brain disorder in which normal neuronal activity is affected. Electroencephalography (EEG) is the recording of electrical activity along the scalp produced by the firing of neurons within the brain. The main application of EEG is in the study of epilepsy: on a standard EEG, certain abnormalities indicate epileptic activity. EEG signals, like many biomedical signals, are highly non-stationary by nature. For the investigation of biomedical signals, and EEG signals in particular, wavelet analysis has gained a prominent position thanks to its ability to analyze such signals. The wavelet transform is capable of separating the signal energy among different frequency scales, and a good compromise between temporal and frequency resolution is obtained. The present study is an attempt at better understanding the mechanism causing the epileptic disorder and at accurate prediction of the occurrence of seizures. In this paper, following Magosso's work [12], we identify typical patterns of energy redistribution before and during seizures using multiresolution wavelet analysis on data from Kocaeli University's Medical School.
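
    A minimal sketch of the energetic approach on a synthetic signal, assuming PyWavelets; the sampling rate, wavelet and level count are arbitrary assumptions rather than the paper's settings.

```python
# Relative energy per wavelet sub-band of a (synthetic) EEG channel.
import numpy as np
import pywt

fs = 256                                    # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(6)
eeg = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)  # toy alpha

coeffs = pywt.wavedec(eeg, 'db4', level=5)
names = ['A5', 'D5', 'D4', 'D3', 'D2', 'D1']   # coarse approx + details
energies = np.array([np.sum(c ** 2) for c in coeffs])
rel = energies / energies.sum()

# With fs = 256 Hz, D4 covers roughly 8-16 Hz, so the 10 Hz rhythm
# should dominate there.
for n, r in zip(names, rel):
    print(f"{n}: {100 * r:5.1f}%")
```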

  6. A novel adaptive multi-resolution combined watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, QiWei

    2008-04-01

    With the rapid development of IT and WWW technology, people frequently confront various kinds of authentication problems, especially copyright problems for digital products. Digital watermarking emerged as one kind of solution. The balance between robustness and imperceptibility is the constant goal of researchers in this area. In order to address this tradeoff, a novel adaptive multi-resolution combined digital image watermarking algorithm is proposed in this paper. In the proposed algorithm, we first decompose the watermark into several sub-bands and embed each sub-band into different DWT coefficients of the carrier image according to its significance. The human visual system (HVS) is taken into account during embedding, so that a larger watermark capacity can be embedded while preserving image quality. The experimental results show that the proposed algorithm performs well in terms of robustness and security, and achieves larger capacity at the same visual quality.
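
    A minimal single-level sketch of DWT-domain embedding, assuming PyWavelets; the proposed algorithm additionally splits the watermark itself into sub-bands and weights the embedding by HVS significance, which is not reproduced here.

```python
# Additive DWT-domain watermark embedding and (non-blind) detection.
import numpy as np
import pywt

rng = np.random.default_rng(3)
carrier = rng.uniform(0, 255, (256, 256))       # stand-in for the host image
watermark = rng.choice([-1.0, 1.0], (128, 128)) # binary mark, +/- 1

cA, (cH, cV, cD) = pywt.dwt2(carrier, 'haar')
alpha = 2.0                                     # embedding strength
cH_marked = cH + alpha * watermark              # embed into horizontal details

marked = pywt.idwt2((cA, (cH_marked, cV, cD)), 'haar')

# Non-blind detection: correlate re-extracted details, minus the originals,
# against the mark.
_, (cH2, _, _) = pywt.dwt2(marked, 'haar')
corr = np.sum((cH2 - cH) * watermark) / watermark.size
print(f"recovered strength: {corr:.2f} (expected ~{alpha})")
```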

  7. Interactive multiscale tensor reconstruction for multiresolution volume visualization.

    PubMed

    Suter, Susanne K; Guitián, José A Iglesias; Marton, Fabio; Agus, Marco; Elsener, Andreas; Zollikofer, Christoph P E; Gopi, M; Gobbetti, Enrico; Pajarola, Renato

    2011-12-01

    Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes.

  8. Adaptive surface meshing and multiresolution terrain depiction for SVS

    NASA Astrophysics Data System (ADS)

    Wiesemann, Thorsten; Schiefele, Jens; Kubbat, Wolfgang

    2001-08-01

    Many current and future aviation applications demand accurate and reliable digital terrain elevation databases. In particular, future Vertical Cut Displays and 3D Synthetic Vision Systems (SVS) require accurate, high-resolution data to offer a reliable terrain depiction. On the other hand, optimized or reduced terrain models are necessary to ensure real-time rendering and computing performance. In this paper, a new method for adaptive terrain meshing and depiction for SVS is presented. The initial data set is decomposed using a wavelet transform. By examining the wavelet coefficients, an adaptive surface approximation is determined for various levels of detail. Additionally, the dyadic scaling of the wavelet transform is used to build a hierarchical quadtree representation of the terrain data. This representation enables fast interactive computation and real-time rendering. The proposed terrain representation is integrated into a standard navigation display. Due to the multi-resolution data organization, the terrain depiction, e.g. its resolution, adapts to the selected zoom level or flight phase. Moreover, the wavelet decomposition helps to define local regions of interest: the depicted terrain has a finer grain near the current aircraft position and gets coarser with increasing distance.

  9. Multiresolution parameterization of meshes for improved surface-based registration

    NASA Astrophysics Data System (ADS)

    Jaume, Sylvain; Ferrant, Matthieu; Warfield, Simon K.; Macq, Benoit M. M.

    2001-07-01

    Common problems in medical image analysis involve surface-based registration. The applications range from atlas matching to tracking an object's boundary in an image sequence, or segmenting anatomical structures out of images. Most proposed solutions are based on deformable surface algorithms. The main problem of such methods is that the local accuracy of the matching must often be traded off against global smoothness of the surface in order to reach global convergence of the deformation process. Our contribution is to first build a Multi-Resolution (M-R) surface from a reference segmented image, and then match this surface onto the target image in an M-R fashion using a deformable surface-like algorithm. As we proceed from lower to higher resolution, the smoothing effect of the deformable surface is more and more localized, and the surface gets closer and closer to the target boundary. We present initial results of our algorithm for atlas registration onto brain MRI showing improved convergence and accuracy over classical deformable surface methods.

  10. Eagle II: A prototype for multi-resolution combat modeling

    SciTech Connect

    Powell, D.R.; Hutchinson, J.L.

    1993-02-01

    Eagle II is a prototype analytic model derived from the integration of the low-resolution Eagle model with the high-resolution SIMNET model. This integration promises a new capability for more effective examination of proposed or existing combat systems that could not easily be evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high- and low-fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of the individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment goes beyond system performance and extends to questions of force structure balance and sustainment, SIMNET results can be used to "calibrate" the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the worldwide threat, requirements for flexible response, declining defense budgets, and downsizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness, and lower overall cost.

  11. Multiresolution analysis over simple graphs for brain computer interfaces

    NASA Astrophysics Data System (ADS)

    Asensio-Cubero, J.; Gan, J. Q.; Palaniappan, R.

    2013-08-01

    Objective. Multiresolution analysis (MRA) offers a useful framework for signal analysis in the temporal and spectral domains, although commonly employed MRA methods may not be the best approach for brain computer interface (BCI) applications. This study aims to develop a new MRA system for extracting tempo-spatial-spectral features for BCI applications based on wavelet lifting over graphs. Approach. This paper proposes a new graph-based transform for wavelet lifting and a tailored simple graph representation for electroencephalography (EEG) data, which results in an MRA system where temporal, spectral and spatial characteristics are used to extract motor imagery features from EEG data. The transformed data is processed within a simple experimental framework to test the classification performance of the new method. Main Results. The proposed method can significantly improve the classification results obtained by various wavelet families using the same methodology. Preliminary results using common spatial patterns as feature extraction method show that we can achieve comparable classification accuracy to more sophisticated methodologies. From the analysis of the results we can obtain insights into the pattern development in the EEG data, which provide useful information for feature basis selection and thus for improving classification performance. Significance. Applying wavelet lifting over graphs is a new approach for handling BCI data. The inherent flexibility of the lifting scheme could lead to new approaches based on the hereby proposed method for further classification performance improvement.

  12. Wavelet-based multiresolution analysis of Wivenhoe Dam water temperatures

    NASA Astrophysics Data System (ADS)

    Percival, D. B.; Lennox, S. M.; Wang, Y.-G.; Darnell, R. E.

    2011-05-01

    Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicate a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable for studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
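
    The additive-component construction can be mimicked with a discrete wavelet transform by reconstructing disjoint groups of coefficient levels, as in the hedged sketch below; PyWavelets is assumed, and the paper's precise DSA level grouping is not reproduced, only the additivity property.

```python
# Additive multiresolution components of a daily series via level grouping.
import numpy as np
import pywt

n = 8192                                         # ~22 years of daily values
t = np.arange(n)
series = (20 + 5 * np.sin(2 * np.pi * t / 365.25)        # annual cycle
          + np.random.default_rng(4).normal(0, 1, n))    # day-to-day noise

wav, L = 'db4', 10
coeffs = pywt.wavedec(series, wav, level=L)      # [cA10, d10, d9, ..., d1]

def component(keep):
    """Reconstruct from the listed coefficient slots only (others zeroed)."""
    parts = [c if i in keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(parts, wav)[:n]

daily = component(range(6, L + 1))       # finest details: periods < ~2 months
subannual = component(range(2, 6))       # mid scales, incl. the annual cycle
annual = component(range(0, 2))          # coarse approximation, slowest detail

# The components are additive: they sum back to the original series.
print(np.allclose(daily + subannual + annual, series))
```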

  13. Hierarchical extraction of landslides from multiresolution remotely sensed optical images

    NASA Astrophysics Data System (ADS)

    Kurtz, Camille; Stumpf, André; Malet, Jean-Philippe; Gançarski, Pierre; Puissant, Anne; Passat, Nicolas

    2014-01-01

    The automated detection and mapping of landslides from Very High Resolution (VHR) images present several challenges related to the heterogeneity of landslide sizes, shapes and soil surface characteristics. However, a common geomorphological characteristic of landslides is to be organized with a series of embedded and scaled features. These properties motivated the use of a multiresolution image analysis approach for their detection. In this work, we propose a hybrid segmentation/classification region-based method, devoted to this specific issue. The method, which uses images of the same area at various spatial resolutions (Medium to Very High Resolution), relies on a recently introduced top-down hierarchical framework. In the specific context of landslide analysis, two main novelties are introduced to enrich this framework. The first novelty consists of using non-spectral information, obtained from Digital Terrain Model (DTM), as a priori knowledge for the guidance of the segmentation/classification process. The second novelty consists of using a new domain adaptation strategy, that allows to reduce the expert's interaction when handling large image datasets. Experiments performed on satellite images acquired over terrains affected by landslides demonstrate the efficiency of the proposed method with different hierarchical levels of detail addressing various operational needs.

  14. Multiresolution community detection for megascale networks by information-based replica correlations

    NASA Astrophysics Data System (ADS)

    Ronhovde, Peter; Nussinov, Zohar

    2009-07-01

    We use a Potts model community detection algorithm to accurately and quantitatively evaluate the hierarchical or multiresolution structure of a graph. Our multiresolution algorithm calculates correlations among multiple copies (“replicas”) of the same graph over a range of resolutions. Significant multiresolution structures are identified by strongly correlated replicas. The average normalized mutual information, the variation of information, and other measures, in principle, give a quantitative estimate of the “best” resolutions and indicate the relative strength of the structures in the graph. Because the method is based on information comparisons, it can, in principle, be used with any community detection model that can examine multiple resolutions. Our approach may be extended to other optimization problems. As a local measure, our Potts model avoids the “resolution limit” that affects other popular models. With this model, our community detection algorithm has an accuracy that ranks among the best of currently available methods. Using it, we can examine graphs of over 40×10^6 nodes and more than 1×10^9 edges. We further report that the multiresolution variant of our algorithm can solve systems of at least 200,000 nodes and 10×10^6 edges on a single processor with exceptionally high accuracy. For typical cases, we find a superlinear scaling of O(L^1.3) for community detection and O(L^1.3 log N) for the multiresolution algorithm, where L is the number of edges and N is the number of nodes in the system.

  15. A multiresolution spatial parametrization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions.

    SciTech Connect

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet; Michalak, Anna M.; van Bloemen Waanders, Bart Gustaaf; McKenna, Sean Andrew

    2013-04-01

    The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions, and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic-data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions to within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights can be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a roughly tenfold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.
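
    To convey the sparse-reconstruction step, here is a toy, hedged analogue assuming PyWavelets and scikit-learn: a random sensing matrix stands in for the atmospheric transport operator, and an off-the-shelf L1 (Lasso) solver stands in for the paper's convex optimization.

```python
# Toy L1-penalized recovery of wavelet weights of a sparse emission field.
import numpy as np
import pywt
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)

# "True" emission field: a few point-like sources, hence sparse in wavelets.
field = np.zeros((32, 32))
field[8, 8], field[20, 25] = 5.0, 3.0
w_true, _ = pywt.coeffs_to_array(pywt.wavedec2(field, 'haar', level=3))
w_true = w_true.ravel()

# Toy forward operator: m noisy linear "concentration" measurements.
m = 200
G = rng.standard_normal((m, w_true.size))
y = G @ w_true + 0.01 * rng.standard_normal(m)

# L1-penalized inversion keeps only wavelets the data can constrain.
w_hat = Lasso(alpha=0.01, max_iter=50000).fit(G, y).coef_
print(f"true nonzeros: {(np.abs(w_true) > 1e-9).sum()}, "
      f"kept by inversion: {(np.abs(w_hat) > 1e-6).sum()}")
```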

  16. Techniques and potential capabilities of multi-resolutional information (knowledge) processing

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

    A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems, including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Definite conditions are analyzed whose satisfaction is required for the organization and processing of redundant information (knowledge) in multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.

  17. A one-time truncate and encode multiresolution stochastic framework

    NASA Astrophysics Data System (ADS)

    Abgrall, R.; Congedo, P. M.; Geraci, G.

    2014-01-01

    In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented. The proposed algorithm makes it possible to build, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme, and to handle a large class of problems, from unsteady to discontinuous solutions. Its formulation recovers the same results as the interpolation theory of the classical multiresolution approach, but extends them to uncertainty quantification problems. The present strategy builds numerical schemes with higher accuracy than other classical uncertainty quantification techniques, but with a strong reduction of the numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time-varying ones, without introducing further complications into the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distribution are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan-Orszag problem are reported in terms of accuracy and convergence. Finally, a two degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model recovers some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injection of an uncertainty is chosen so as to obtain a complete parameterization of the mass matrix. All the numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.
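
    The deterministic skeleton of a Harten-type truncate-and-encode pass is short enough to sketch. The version below (plain numpy, 1D point values, linear-interpolation prediction) is a simplified illustration and omits the error-control and stochastic-space machinery of the paper.

```python
# Harten-style multiresolution truncation, 1D deterministic analogue:
# encode point values by interpolating from the coarser level and keep only
# detail coefficients above a threshold.
import numpy as np

def encode(v, eps):
    """Return coarse values plus thresholded detail coefficients per level."""
    levels = []
    while v.size > 4:
        coarse = v[::2]                          # decimate to coarser grid
        pred = np.interp(np.arange(v.size), np.arange(0, v.size, 2), coarse)
        detail = v - pred                        # interpolation errors
        detail[np.abs(detail) < eps] = 0.0       # truncate small details
        levels.append(detail)
        v = coarse
    return v, levels

def decode(v, levels):
    for detail in reversed(levels):
        n = detail.size
        pred = np.interp(np.arange(n), np.arange(0, n, 2), v)
        v = pred + detail
    return v

x = np.linspace(0, 1, 257)
f = np.where(x < 0.5, np.sin(2 * np.pi * x), 0.2)   # discontinuous signal
coarse, levels = encode(f.copy(), eps=1e-3)
kept = sum(int(np.count_nonzero(d)) for d in levels)
print(f"details kept: {kept} of {sum(d.size for d in levels)}")
print(f"max error: {np.abs(decode(coarse, levels) - f).max():.2e}")
```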

  18. A one-time truncate and encode multiresolution stochastic framework

    SciTech Connect

    Abgrall, R.; Congedo, P.M.; Geraci, G.

    2014-01-15

    In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented. The proposed algorithm makes it possible to build, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme, and to handle a large class of problems, from unsteady to discontinuous solutions. Its formulation recovers the same results as the interpolation theory of the classical multiresolution approach, but extends them to uncertainty quantification problems. The present strategy builds numerical schemes with higher accuracy than other classical uncertainty quantification techniques, but with a strong reduction of the numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time-varying ones, without introducing further complications into the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distribution are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan-Orszag problem are reported in terms of accuracy and convergence. Finally, a two degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model recovers some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injection of an uncertainty is chosen so as to obtain a complete parameterization of the mass matrix. All the numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.

  19. On-the-Fly Decompression and Rendering of Multiresolution Terrain

    SciTech Connect

    Lindstrom, P; Cohen, J D

    2009-04-02

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
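
    As an illustration of the prediction-plus-residual-coding idea, the sketch below applies a parallelogram (Lorenzo) predictor to a synthetic integer height grid and measures the entropy drop of the residuals. The predictor choice and the entropy proxy are illustrative assumptions, not the paper's exact codec.

        import numpy as np

        def lorenzo_residuals(h):
            """Predict h[i,j] from the three causal neighbours; return residuals."""
            r = h.astype(np.int64).copy()
            r[1:, 1:] = h[1:, 1:] - (h[:-1, 1:] + h[1:, :-1] - h[:-1, :-1])
            r[0, 1:] = np.diff(h[0, :])      # first row: 1D prediction
            r[1:, 0] = np.diff(h[:, 0])      # first column: 1D prediction
            return r                         # invertible, hence lossless

        def entropy_bits(a):
            """Shannon entropy per symbol, a proxy for coded size."""
            _, counts = np.unique(a, return_counts=True)
            p = counts / a.size
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(0)
        y, x = np.mgrid[0:256, 0:256]
        terrain = (40 * np.sin(x / 37.0) + 25 * np.cos(y / 23.0)
                   + rng.integers(-2, 3, (256, 256))).astype(np.int64)
        res = lorenzo_residuals(terrain)
        print(f"raw: {entropy_bits(terrain):.2f} bits/sample, "
              f"residuals: {entropy_bits(res):.2f} bits/sample")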

  20. Multi-resolution Statistical Analysis of Brain Connectivity Graphs in Preclinical Alzheimer's Disease

    PubMed Central

    Kim, Won Hwa; Adluru, Nagesh; Chung, Moo K.; Okonkwo, Ozioma C.; Johnson, Sterling C.; Bendlin, Barbara; Singh, Vikas

    2015-01-01

    There is significant interest, both from basic and applied research perspectives, in understanding how structural/functional connectivity changes can explain behavioral symptoms and predict decline in neurodegenerative diseases such as Alzheimer's disease (AD). The first step in most such analyses is to encode the connectivity information as a graph; then, one may perform statistical inference on various ‘global’ graph theoretic summary measures (e.g., modularity, graph diameter) and/or at the level of individual edges (or connections). For AD in particular, clear differences in connectivity at the dementia stage of the disease (relative to healthy controls) have been identified. Despite such findings, AD-related connectivity changes in preclinical disease remain poorly characterized. Such preclinical datasets are typically smaller and group differences are weaker. In this paper, we propose a new multi-resolution method for performing statistical analysis of connectivity networks/graphs derived from neuroimaging data. At the high level, the method occupies the middle ground between the two contrasts — that is, to analyze global graph summary measures (global) or connectivity strengths or correlations for individual edges similar to voxel based analysis (local). Instead, our strategy derives a Wavelet representation at each primitive (connection edge) which captures the graph context at multiple resolutions. We provide extensive empirical evidence of how this framework offers improved statistical power by analyzing two distinct AD datasets. Here, connectivity is derived from diffusion tensor magnetic resonance images by running a tractography routine. We first present results showing significant connectivity differences between AD patients and controls that were not evident using standard approaches. Later, we show results on populations that are not diagnosed with AD but have a positive family history risk of AD where our algorithm helps in identifying

  1. Multi-resolution statistical analysis of brain connectivity graphs in preclinical Alzheimer's disease.

    PubMed

    Kim, Won Hwa; Adluru, Nagesh; Chung, Moo K; Okonkwo, Ozioma C; Johnson, Sterling C; Bendlin, Barbara B; Singh, Vikas

    2015-09-01

    There is significant interest, both from basic and applied research perspectives, in understanding how structural/functional connectivity changes can explain behavioral symptoms and predict decline in neurodegenerative diseases such as Alzheimer's disease (AD). The first step in most such analyses is to encode the connectivity information as a graph; then, one may perform statistical inference on various 'global' graph theoretic summary measures (e.g., modularity, graph diameter) and/or at the level of individual edges (or connections). For AD in particular, clear differences in connectivity at the dementia stage of the disease (relative to healthy controls) have been identified. Despite such findings, AD-related connectivity changes in preclinical disease remain poorly characterized. Such preclinical datasets are typically smaller and group differences are weaker. In this paper, we propose a new multi-resolution method for performing statistical analysis of connectivity networks/graphs derived from neuroimaging data. At the high level, the method occupies the middle ground between the two contrasts - that is, to analyze global graph summary measures (global) or connectivity strengths or correlations for individual edges similar to voxel based analysis (local). Instead, our strategy derives a Wavelet representation at each primitive (connection edge) which captures the graph context at multiple resolutions. We provide extensive empirical evidence of how this framework offers improved statistical power by analyzing two distinct AD datasets. Here, connectivity is derived from diffusion tensor magnetic resonance images by running a tractography routine. We first present results showing significant connectivity differences between AD patients and controls that were not evident using standard approaches. Later, we show results on populations that are not diagnosed with AD but have a positive family history risk of AD where our algorithm helps in identifying potentially
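
    In a similar spirit (though not the paper's exact edge-level construction), the sketch below computes spectral multi-scale descriptors on a graph: band-pass kernels g(s*lambda) of the graph Laplacian give each vertex a multi-resolution context vector. The kernel choice and the node-level (rather than edge-level) formulation are simplifying assumptions.

        import numpy as np

        def multiscale_descriptors(W, scales):
            """W: symmetric adjacency (n x n). Returns (n, len(scales)) descriptors."""
            d = W.sum(axis=1)
            L = np.diag(d) - W                           # combinatorial graph Laplacian
            lam, U = np.linalg.eigh(L)
            desc = []
            for s in scales:
                g = s * lam * np.exp(-s * lam)           # simple band-pass spectral kernel
                psi = U @ np.diag(g) @ U.T               # wavelet-like operator at scale s
                desc.append(np.linalg.norm(psi, axis=1)) # context energy per vertex
            return np.column_stack(desc)

        rng = np.random.default_rng(1)
        A = rng.random((20, 20)) < 0.2                   # random sparse graph
        W = np.triu(A, 1).astype(float)
        W = W + W.T
        D = multiscale_descriptors(W, scales=[0.5, 1.0, 2.0, 4.0])
        print(D.shape)                                   # (20, 4): one signature per vertex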

  2. Compressed plane waves yield a compactly supported multiresolution basis for the Laplace operator.

    PubMed

    Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2014-02-01

    This paper describes an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves, that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities.

  3. Guidelines to implement the license renewal technical requirements of 10CFR54 for integrated plant assessments and time-limited aging analyses. Final report

    SciTech Connect

    Lehnert, G.; Philpot, L.

    1995-11-01

    This report documents the initial results of the Nuclear Energy Institute License Renewal Implementation Guideline Task Force over the period August 1994 to July 1995 to develop guidance for complying with the technical requirements of 10CFR54. The report also provided a starting point for the development of NEI 95-10, "Industry Guideline for Implementing the Requirements of 10CFR54 - The License Renewal Rule". Information in this document can be used by utilities to prepare the technical material needed in an application for license renewal (LR) of a nuclear power unit. This guideline provides methods for identifying systems, structures, and components (SSCs) and their intended functions within the scope of license renewal. It identifies structures and components (SCs) requiring aging management review and methods for performing the aging management review. The guideline provides a process for identifying and evaluating time-limited aging analyses.

  4. The Global Multi-Resolution Topography (GMRT) Synthesis

    NASA Astrophysics Data System (ADS)

    Arko, R.; Ryan, W.; Carbotte, S.; Melkonian, A.; Coplan, J.; O'Hara, S.; Chayes, D.; Weissel, R.; Goodwillie, A.; Ferrini, V.; Stroker, K.; Virden, W.

    2007-12-01

    Topographic maps provide a backdrop for research in nearly every earth science discipline. There is particular demand for bathymetry data in the ocean basins, where existing coverage is sparse. Ships and submersibles worldwide are rapidly acquiring large volumes of new data with modern swath mapping systems. The science community is best served by a global topography compilation that is easily accessible, up-to-date, and delivers data in the highest possible (i.e. native) resolution. To meet this need, the NSF-supported Marine Geoscience Data System (MGDS; www.marine-geo.org) has partnered with the National Geophysical Data Center (NGDC; www.ngdc.noaa.gov) to produce the Global Multi-Resolution Topography (GMRT) synthesis - a continuously updated digital elevation model that is accessible through Open Geospatial Consortium (OGC; www.opengeospatial.org) Web services. GMRT had its genesis in 1992 with the NSF RIDGE Multibeam Synthesis (RMBS); later grew to include the Antarctic Multibeam Synthesis (AMBS); expanded again to include the NSF Ridge 2000 and MARGINS programs; and finally emerged as a global compilation in 2005 with the NSF Legacy of Ocean Exploration (LOE) project. The LOE project forged a permanent partnership between MGDS and NGDC, in which swath bathymetry data sets are routinely published and exchanged via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH; www.openarchives.org). GMRT includes both color-shaded relief images and underlying elevation values at ten different resolutions as high as 100m. New data are edited, gridded, and tiled using tools originally developed by William Haxby at Lamont-Doherty Earth Observatory. Global and regional data sources include the NASA Shuttle Radar Topography Mission (SRTM; http://www.jpl.nasa.gov/srtm/); Smith & Sandwell Satellite Predicted Bathymetry (http://topex.ucsd.edu/marine_topo/); SCAR Subglacial Topographic Model of the Antarctic (BEDMAP; http://www.antarctica.ac.uk/bedmap/); and

  5. Limited Flow of Continental Crust at UHP Depths: Coupled Age and Trace-Element Analyses of Titanite in the Western Gneiss Region, Norway

    NASA Astrophysics Data System (ADS)

    Garber, J. M.; Hacker, B. R.; Kylander-Clark, A. R.

    2015-12-01

    Coupled age and trace-element data from titanites in the Western Gneiss Region (WGR) of Norway suggest that continental crust underwent limited recrystallization and ductile flow through ~40 My of deep subduction and subsequent exhumation. Precambrian igneous titanites in granitic to tonalitic orthogneisses from the WGR were metastably preserved through Caledonian ultrahigh-pressure (UHP) metamorphism and variably recrystallized through subsequent amphibolite-facies metamorphism from ~420-385 Ma. The inherited Precambrian titanites are not present everywhere but rather cluster primarily in a cooler "southern domain" (peak T ~650 °C) and a hotter "northern domain" (peak T ~750-800 °C). Titanite data were collected using LASS (laser-ablation split-stream inductively-coupled plasma mass spectrometry) at UCSB, and a principal component analysis (PCA) was used to define age and trace-element populations. These data indicate that inherited titanites are LREE-enriched, HFSE-enriched, and have higher Th/U, consistent with Precambrian neocrystallization from a granitic melt. In contrast, the recrystallized titanites have generally lower Th/U and flat, LREE-depleted, or hump-shaped trace-element patterns. These data suggest that (1) Caledonian titanite recrystallization occurred in the presence of LREE-depleted melts or fluids, or that (2) recrystallization was accompanied by a "typical" granitic melt, but that titanite/bulk-rock distribution coefficients differ for neo- and recrystallization; on-going whole-rock analyses will clarify these hypotheses. Critically, the geochemical signature of recrystallized titanite in felsic orthogneisses is comparable across the entire WGR - emphasizing that the petrologic process of titanite recrystallization was similar orogen-wide, but was less extensive in the domains where inherited titanite was preserved. In this case, large volumes of crust outside of the "old domains" may also have retained metastable titanite during subduction

  6. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
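
    The sketch below is a minimal StOMP loop on a synthetic compressed-sensing problem: residual correlations are thresholded to grow an active set, which is then refit by least squares. The paper's extensions (priors on the emission field, non-negativity, wavelet bases on non-rectangular domains) are omitted, and the threshold rule is a textbook choice rather than the authors' exact one.

        import numpy as np

        def stomp(A, y, n_stages=10, t=2.0):
            """Stagewise OMP: threshold residual correlations, refit by least squares."""
            n = A.shape[1]
            active = np.zeros(n, dtype=bool)
            x = np.zeros(n)
            r = y.copy()
            for _ in range(n_stages):
                c = A.T @ r                                   # correlate residual with columns
                sigma = np.linalg.norm(r) / np.sqrt(y.size)   # formal noise level
                new = (np.abs(c) > t * sigma) & ~active
                if not new.any():
                    break                                     # no new significant atoms
                active |= new
                sol, *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
                x[:] = 0.0
                x[active] = sol                               # refit on the active set
                r = y - A @ x
            return x

        rng = np.random.default_rng(2)
        A = rng.standard_normal((128, 512)) / np.sqrt(128)
        x0 = np.zeros(512)
        x0[rng.choice(512, 10, replace=False)] = rng.standard_normal(10)
        y = A @ x0 + 0.01 * rng.standard_normal(128)
        xh = stomp(A, y)
        print("true support:", np.sort(np.flatnonzero(x0)))
        print("recovered   :", np.sort(np.flatnonzero(np.abs(xh) > 0.05)))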

  7. A multi-resolution method for climate system modeling: Application of Spherical Centroidal Voronoi Tessellations

    SciTech Connect

    Ringler, Todd D; Gunzburger, Max; Ju, Lili

    2008-01-01

    During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multi-resolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical Centroidal Voronoi Tessellations (SCVTs) offer one potential path toward the development of robust, multi-resolution climate system component models. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean-ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow-water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multi-resolution method and the challenges ahead.
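
    The density-driven mesh idea can be illustrated with Lloyd's algorithm on the unit square, as sketched below: generators move to density-weighted centroids of their Voronoi regions, so cells shrink where the user-defined density is high. The planar domain and the Monte Carlo centroid estimation are simplifications of the spherical (SCVT) construction.

        import numpy as np

        def lloyd_cvt(n_gen, density, n_iter=30, n_samples=20_000, seed=0):
            rng = np.random.default_rng(seed)
            gen = rng.random((n_gen, 2))                     # initial generators
            for _ in range(n_iter):
                pts = rng.random((n_samples, 2))             # Monte Carlo integration points
                w = density(pts)                             # user-defined density function
                d2 = ((pts[:, None, :] - gen[None, :, :]) ** 2).sum(-1)
                lab = d2.argmin(axis=1)                      # Voronoi assignment
                for k in range(n_gen):
                    m = lab == k
                    if m.any():                              # density-weighted centroid update
                        gen[k] = (w[m, None] * pts[m]).sum(0) / w[m].sum()
            return gen

        # density is high near x = 0, so Voronoi cells are finer there
        gens = lloyd_cvt(64, lambda p: 1.0 + 9.0 * (1.0 - p[:, 0]))
        print(f"{(gens[:, 0] < 0.5).sum()} of 64 generators lie in the denser half")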

  8. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  9. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  10. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  11. Water storage variations extracted from GRACE data by combination of multi-resolution representation (MRR) and principal component analysis (PCA)

    NASA Astrophysics Data System (ADS)

    Ressler, Gerhard; Eicker, Annette; Lieb, Verena; Schmidt, Michael; Seitz, Florian; Shang, Kun; Shum, Che-Kwan

    2015-04-01

    Regionally changing hydrological conditions and their link to the availability of water for human consumption and agriculture is a challenging topic in the context of global change that is receiving increasing attention. Gravity field changes related to signals of land hydrology have been observed by the Gravity Recovery And Climate Experiment (GRACE) satellite mission over a period of more than 12 years. These changes are being analysed in our studies with respect to changing hydrological conditions, especially as a consequence of extreme weather situations and/or a change of climatic conditions. Typically, variations of the Earth's gravity field are modeled as a series expansion in terms of global spherical harmonics with time-dependent harmonic coefficients. In order to investigate specific structures in the signal we alternatively apply a wavelet-based multi-resolution technique for the determination of regional spatiotemporal variations of the Earth's gravitational potential in combination with principal component analysis (PCA) for detailed evaluation of these structures. The multi-resolution representation (MRR), i.e. the composition of a signal considering different resolution levels, is a suitable approach for spatial gravity modeling, especially because of the inhomogeneous distribution of observation data on the one hand and the inhomogeneous structure of the Earth's gravity field itself on the other. In the MRR the signal is split into detail signals by applying low- and band-pass filters realized e.g. by spherical scaling and wavelet functions. Each detail signal is related to a specific resolution level and covers a certain part of the signal spectrum. Principal component analysis (PCA) enables revealing specific signal patterns in the space as well as the time domain, such as trends and seasonal as well as semi-seasonal variations. We apply the above mentioned combined technique to GRACE L1C residual potential differences that have been

  12. Two-dimensional SPICE-linked multiresolution impedance method for low-frequency electromagnetic interactions.

    PubMed

    Eberdt, Michael; Brown, Patrick K; Lazzi, Gianluca

    2003-07-01

    A multiresolution impedance method for the solution of low-frequency electromagnetic interaction problems typically encountered in bioelectromagnetics is presented. While the impedance method in its original form is based on the discretization of the scattering objects into equal-sized cells, our formulation decreases the number of unknowns by using an automatic mesh generation method that does not yield equal-sized cells in the modeling space. Results indicate that our multiresolution mesh generation scheme can provide a 50%-80% reduction in cell count, providing new opportunities for the solution of low-frequency bioelectromagnetic problems that require a high level of detail only in specific regions of the modeling space. Furthermore, linking the mesh generator to a circuit simulator such as SPICE permits the addition of arbitrarily complex passive and active circuit elements to the generated impedance network, opening the door to significant advances in the modeling of bioelectromagnetic phenomena.
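
    The cell-count reduction from non-uniform meshing can be illustrated with a quadtree, as sketched below: a 2D conductivity map is subdivided only where the local contrast exceeds a tolerance, and the leaf count is compared against the uniform grid. This illustrates the idea only; the paper's mesh generator and the SPICE network construction are not reproduced.

        import numpy as np

        def quadtree_leaves(field, x0, y0, size, tol):
            """Count leaf cells of an adaptive quadtree over field[y0:y0+size, x0:x0+size]."""
            block = field[y0:y0 + size, x0:x0 + size]
            if size == 1 or block.max() - block.min() <= tol:
                return 1                                    # homogeneous: keep one big cell
            h = size // 2                                   # otherwise split into 4 children
            return sum(quadtree_leaves(field, x0 + dx, y0 + dy, h, tol)
                       for dx in (0, h) for dy in (0, h))

        n = 256
        y, x = np.mgrid[0:n, 0:n]
        # crude "organ inside body" conductivity map with two regions
        tissue = np.where((x - 128) ** 2 + (y - 90) ** 2 < 40 ** 2, 0.5, 0.1)
        leaves = quadtree_leaves(tissue, 0, 0, n, tol=0.0)
        print(f"uniform cells: {n * n}, quadtree cells: {leaves} "
              f"({100 * (1 - leaves / n**2):.1f}% reduction)")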

  13. Multiresolution diffusion entropy analysis of time series: an application to births to teenagers in Texas

    NASA Astrophysics Data System (ADS)

    Scafetta, Nicola; West, Bruce J.

    2004-04-01

    The multiresolution diffusion entropy analysis is used to evaluate the stochastic information left in a time series after systematic removal of certain non-stationarities. This method allows us to establish whether the identified patterns are sufficient to capture all relevant information contained in a time series. If they do not, the method suggests the need for further interpretation to explain the residual memory in the signal. We apply the multiresolution diffusion entropy analysis to the daily count of births to teens in Texas from 1964 through 2000 because it is a typical example of a non-stationary time series, having an anomalous trend, an annual variation, as well as short-time fluctuations. The analysis is repeated for the three main racial/ethnic groups in Texas (White, Hispanic and African American), as well as for married and unmarried teens during the years from 1994 to 2000, and we study the differences that emerge among the groups.
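
    A minimal version of diffusion entropy analysis is sketched below: overlapping windows of the series are summed into diffusion trajectories, the Shannon entropy of their histogram is computed for each window length t, and the scaling exponent delta is read off the fit S(t) = A + delta*ln(t). The binning and the synthetic test series are illustrative choices.

        import numpy as np

        def dea(xi, window_sizes, n_bins=50):
            entropies = []
            for t in window_sizes:
                # diffusion variable: sums over all overlapping windows of length t
                x = np.convolve(xi, np.ones(t), mode="valid")
                p, edges = np.histogram(x, bins=n_bins, density=True)
                dx = edges[1] - edges[0]
                p = p[p > 0]
                entropies.append(-(p * np.log(p)).sum() * dx)   # Shannon entropy S(t)
            # slope of S against ln(t) estimates the scaling exponent delta
            slope, _ = np.polyfit(np.log(window_sizes), entropies, 1)
            return slope

        rng = np.random.default_rng(3)
        xi = rng.standard_normal(100_000)            # uncorrelated noise: delta near 0.5
        print(f"estimated delta = {dea(xi, [2**k for k in range(2, 10)]):.2f}")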

  14. An Optimised System for Generating Multi-Resolution DTMs Using NASA MRO Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  15. Adaptive Multiresolution or Adaptive Mesh Refinement? A Case Study for 2D Euler Equations

    SciTech Connect

    Deiterding, Ralf; Domingues, Margarete O.; Gomes, Sonia M.; Roussel, Olivier; Schneider, Kai

    2009-01-01

    We present adaptive multiresolution (MR) computations of the two-dimensional compressible Euler equations for a classical Riemann problem. The results are then compared with respect to accuracy and computational efficiency, in terms of CPU time and memory requirements, with the corresponding finite volume scheme on a regular grid. For the same test-case, we also perform computations using adaptive mesh refinement (AMR) imposing similar accuracy requirements. The results thus obtained are compared in terms of computational overhead and compression of the computational grid, using in addition either local or global time stepping strategies. We preliminarily conclude that the multiresolution techniques yield improved memory compression and gain in CPU time with respect to the adaptive mesh refinement method.

  16. Combining nonlinear multiresolution system and vector quantization for still image compression

    SciTech Connect

    Wong, Y.

    1993-12-17

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur the edges. Thus, the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used in generating a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
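
    The nonlinear pyramid construction can be sketched in a few lines: each level stores the residual between the current image and a median-filtered, down/up-sampled version, so edges survive in the coarse levels and the details stay small and localized. The 3x3 median, the nearest-neighbour resampling and the omission of the PCVQ coding stage are simplifying assumptions.

        import numpy as np
        from scipy.ndimage import median_filter

        def median_pyramid(img, levels):
            details, cur = [], img.astype(float)
            for _ in range(levels):
                smooth = median_filter(cur, size=3)      # edge-preserving filter
                coarse = smooth[::2, ::2]                # decimate by 2
                up = coarse.repeat(2, axis=0).repeat(2, axis=1)[:cur.shape[0], :cur.shape[1]]
                details.append(cur - up)                 # small except near edges
                cur = coarse
            return cur, details

        def reconstruct(coarse, details):
            cur = coarse
            for d in reversed(details):                  # recompute the same upsampling
                up = cur.repeat(2, axis=0).repeat(2, axis=1)[:d.shape[0], :d.shape[1]]
                cur = up + d
            return cur

        img = np.zeros((64, 64)); img[16:48, 16:48] = 100.0   # sharp-edged square
        c, ds = median_pyramid(img, 3)
        print(np.abs(reconstruct(c, ds) - img).max())          # exact without quantization: 0.0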

  17. Comparative evaluation of multiresolution optimization strategies for multimodality image registration by maximization of mutual information.

    PubMed

    Maes, F; Vandermeulen, D; Suetens, P

    1999-12-01

    Maximization of mutual information of voxel intensities has been demonstrated to be a very powerful criterion for three-dimensional medical image registration, allowing robust and accurate fully automated affine registration of multimodal images in a variety of applications, without the need for segmentation or other preprocessing of the images. In this paper, we investigate the performance of various optimization methods and multiresolution strategies for maximization of mutual information, aiming at increasing registration speed when matching large high-resolution images. We show that mutual information is a continuous function of the affine registration parameters when appropriate interpolation is used and we derive analytic expressions of its derivatives that allow numerically exact evaluation of its gradient. Various multiresolution gradient- and non-gradient-based optimization strategies, such as Powell, simplex, steepest-descent, conjugate-gradient, quasi-Newton and Levenberg-Marquardt methods, are evaluated for registration of computed tomography (CT) and magnetic resonance images of the brain. Speed-ups of a factor of 3 on average compared to Powell's method at full resolution are achieved with similar precision and without a loss of robustness with the simplex, conjugate-gradient and Levenberg-Marquardt method using a two-level multiresolution scheme. Large data sets such as 256² × 128 MR and 512² × 48 CT images can be registered with subvoxel precision in <5 min CPU time on current workstations. PMID:10709702
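
    A toy version of the coarse-to-fine strategy is sketched below: mutual information is estimated from a joint histogram and an integer translation is refined across a three-level pyramid. The paper optimizes full affine parameters with gradient- and non-gradient-based methods; the exhaustive translation search and the histogram settings here are simplifying assumptions.

        import numpy as np

        def mutual_information(a, b, bins=32):
            h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = h / h.sum()                              # joint intensity distribution
            px = p.sum(axis=1, keepdims=True)
            py = p.sum(axis=0, keepdims=True)
            nz = p > 0
            return (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()

        def register_translation(fixed, moving, levels=3, radius=3):
            shift = np.array([0, 0])
            for lev in reversed(range(levels)):          # coarse-to-fine pyramid
                f = fixed[::2**lev, ::2**lev]
                m = moving[::2**lev, ::2**lev]
                best_mi, best_d = -np.inf, (0, 0)
                for dy in range(-radius, radius + 1):    # local exhaustive search
                    for dx in range(-radius, radius + 1):
                        mi = mutual_information(f, np.roll(m, shift + [dy, dx], axis=(0, 1)))
                        if mi > best_mi:
                            best_mi, best_d = mi, (dy, dx)
                shift = shift + best_d
                if lev > 0:
                    shift = shift * 2                    # refine at the next resolution
            return shift

        y, x = np.mgrid[0:128, 0:128]
        img = np.sin(x / 9.0) + np.cos(y / 13.0) + 0.3 * np.sin((x + y) / 5.0)
        moved = np.roll(img, (5, -3), axis=(0, 1))
        print(register_translation(img, moved))          # expect about [-5, 3]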

  18. Implications of Multi-resolution AOD retrievals for Air Quality Studies

    NASA Astrophysics Data System (ADS)

    Kumar, N.; Chu, A. D.

    2009-12-01

    This paper examines the robustness of multi-resolution (2-km, 5-km and 10-km) AOD retrievals from MODIS measurements against sunphotometer measurements over the period 2000 to 2007 in two distinct aerosol loading environments: Bondville (USA) and Kanpur (India), with multi-annual mean (±standard deviation) AOD of 0.161±0.0006 and 0.547±0.001, respectively. Our analysis suggests that 2-km and 5-km MODIS AOD are significantly better correlated with the sunphotometer measurements than 10-km MODIS AOD, irrespective of background aerosol loading. The best correlation (~0.91) is observed when both datasets are aggregated within the smallest space and time intervals of 0.025° and 15 minutes, and the correlation then decreases sharply with distance >0.075° and time interval >30 minutes. Based on these findings it is expected that the association between ground measurements of ambient particulates of different sizes and multi-resolution MODIS AOD is likely to vary significantly, which will have significant implications for air quality studies.

  19. Combined adjustment of multi-resolution satellite imagery for improved geo-positioning accuracy

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Wu, Bo; Zhu, Qing

    2016-04-01

    Due to the widespread availability of satellite imagery nowadays, it is common for regions to be covered by satellite imagery from multiple sources with multiple resolutions. This paper presents a combined adjustment approach to integrate multi-source multi-resolution satellite imagery for improved geo-positioning accuracy without the use of ground control points (GCPs). Instead of using all the rational polynomial coefficients (RPCs) of images for processing, only those dominating the geo-positioning accuracy are used in the combined adjustment. They, together with tie points identified in the images, are used as observations in the adjustment model. Proper weights are determined for each observation, and ridge parameters are determined for better convergence of the adjustment solution. The outputs from the combined adjustment are the improved dominating RPCs of images, from which improved geo-positioning accuracy can be obtained. Experiments using ZY-3, SPOT-7 and Pleiades-1 imagery in Hong Kong, and Cartosat-1 and Worldview-1 imagery in Catalonia, Spain demonstrate that the proposed method is able to effectively improve the geo-positioning accuracy of satellite images. The combined adjustment approach offers an alternative method to improve geo-positioning accuracy of satellite images. The approach enables the integration of multi-source and multi-resolution satellite imagery for generating more precise and consistent 3D spatial information, which permits the comparative and synergistic use of multi-resolution satellite images from multiple sources.

  20. Deconstructing a polygenetic landscape using LiDAR and multi-resolution analysis

    NASA Astrophysics Data System (ADS)

    Barrineau, Patrick; Dobreva, Iliyana; Bishop, Michael P.; Houser, Chris

    2016-04-01

    It is difficult to deconstruct a complex polygenetic landscape into distinct process-form regimes using digital elevation models (DEMs) and fundamental land-surface parameters. This study describes a multi-resolution analysis approach for extracting geomorphological information from a LiDAR-derived DEM over a stabilized aeolian landscape in south Texas that exhibits distinct process-form regimes associated with different stages in landscape evolution. Multi-resolution analysis was used to generate average altitudes using a Gaussian filter with a maximum radius of 1 km at 20 m intervals, resulting in 50 generated DEMs. This multi-resolution dataset was analyzed using Principal Components Analysis (PCA) to identify the dominant variance structure in the dataset. The first 4 principal components (PC) account for 99.9% of the variation, and classification of the variance structure reveals distinct multi-scale topographic variation associated with different process-form regimes and evolutionary stages. Our results suggest that this approach can be used to generate quantitatively rigorous morphometric maps to guide field-based sedimentological and geophysical investigations, which tend to use purposive sampling techniques resulting in bias and error.
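
    The sketch below illustrates the same two-step recipe on a synthetic surface: a stack of progressively Gaussian-smoothed DEMs gives each grid cell a scale profile, and PCA via the SVD extracts the dominant multi-scale variance structure. The filter sigmas stand in for the paper's 20 m radius steps up to 1 km, and the synthetic DEM is only a placeholder for LiDAR data.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def multiscale_pca(dem, sigmas, n_components=4):
            stack = np.stack([gaussian_filter(dem, s) for s in sigmas])  # (scales, H, W)
            X = stack.reshape(len(sigmas), -1).T                         # cells x scales
            X = X - X.mean(axis=0)                                       # centre each scale
            U, S, Vt = np.linalg.svd(X, full_matrices=False)
            explained = S**2 / (S**2).sum()
            scores = (U[:, :n_components] * S[:n_components]).reshape(
                dem.shape[0], dem.shape[1], n_components)                # per-cell PC maps
            return scores, explained[:n_components]

        rng = np.random.default_rng(5)
        y, x = np.mgrid[0:128, 0:128]
        dem = (np.sin(x / 20.0) + 0.3 * np.sin(x / 3.0 + y / 5.0)
               + 0.05 * rng.standard_normal((128, 128)))                 # two process scales
        scores, ev = multiscale_pca(dem, sigmas=[1, 2, 4, 8, 16, 32])
        print("explained variance:", np.round(ev, 3))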

  1. Multi-resolution description of three-dimensional anthropometric data for design simplification.

    PubMed

    Niu, Jianwei; Li, Zhizhong; Salvendy, Gavriel

    2009-07-01

    Three-dimensional (3D) anthropometry can provide rich information for ergonomic product design with better safety and health considerations. To reduce computational load and model complexity in product design when using 3D anthropometric data, wavelet analysis is adopted in this paper to establish a multi-resolution mathematical description of 3D anthropometric data. A proper resolution can be selected for design reference according to the application purpose. To examine the approximation errors under different resolutions, 510 upper head, whole head, and face samples of Chinese young men have been analyzed. Descriptive statistics of the approximation errors under different resolutions are presented. These data can be used as a resolution selection guide. The application of the multi-resolution method in product design is illustrated by two examples. RELEVANCE TO INDUSTRY: Multi-resolution description of 3D anthropometric data would facilitate the analysis of and design with 3D anthropometric data to improve fitting comfort. The error data under different resolutions provide an important reference for resolution selection.

  2. Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.

    2016-06-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In this context, the object-based image analysis (OBIA) approach has proved to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage in OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by the eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality based on the discrepancy between reference polygons and corresponding image segments was carried out to identify the optimal setting of multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
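
    A compact way to make the three discrepancy indices concrete is sketched below, following the commonly cited definitions of PSE, NSR and ED2 (Liu et al., 2012) on rasterized objects. The 50%-overlap rule used to pair segments with reference polygons is a simplifying assumption, and the paper's vector-based evaluation may differ in detail.

        import numpy as np

        def ed2(reference_masks, segment_masks):
            """Euclidean Distance 2 from per-object boolean masks on one raster grid."""
            ref_union = np.any(reference_masks, axis=0)
            ref_area = sum(int(r.sum()) for r in reference_masks)
            over_area, n_corr = 0, 0
            for s in segment_masks:
                if (s & ref_union).sum() > 0.5 * s.sum():   # corresponding segment
                    n_corr += 1
                    over_area += int((s & ~ref_union).sum())  # area spilling outside
            m = len(reference_masks)
            pse = over_area / ref_area                      # Potential Segmentation Error
            nsr = abs(m - n_corr) / m                       # Number-of-Segments Ratio
            return float(np.hypot(pse, nsr))                # ED2 combines both

        # toy example: one square greenhouse, over-segmented into two pieces
        ref = np.zeros((50, 50), dtype=bool); ref[10:30, 10:30] = True
        s1 = np.zeros_like(ref); s1[10:30, 10:22] = True
        s2 = np.zeros_like(ref); s2[10:32, 22:30] = True    # spills 2 rows outside
        print(f"ED2 = {ed2([ref], [s1, s2]):.3f}")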

  3. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. The particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and the effective federal fund rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.

  4. A Web-Based Interactive Tool for Multi-Resolution 3D Models of a Maya Archaeological Site

    NASA Astrophysics Data System (ADS)

    Agugiaro, G.; Remondino, F.; Girardi, G.; von Schwerin, J.; Richards-Rissetto, H.; De Amicis, R.

    2011-09-01

    Continuous technological advances in surveying, computing and digital-content delivery are strongly contributing to a change in the way Cultural Heritage is "perceived": new tools and methodologies for documentation, reconstruction and research are being created to assist not only scholars, but also to reach more potential users (e.g. students and tourists) willing to access more detailed information about art history and archaeology. 3D computer-simulated models, sometimes set in virtual landscapes, offer for example the chance to explore possible hypothetical reconstructions, while on-line GIS resources can help interactive analyses of relationships and change over space and time. While for some research purposes a traditional 2D approach may suffice, this is not the case for more complex analyses concerning spatial and temporal features of architecture, like for example the relationship of architecture and landscape, visibility studies etc. The project therefore aims at creating a tool, called "QueryArch3D", which enables the web-based visualisation and querying of an interactive, multi-resolution 3D model in the framework of Cultural Heritage. More specifically, a complete Maya archaeological site, located in Copan (Honduras), has been chosen as a case study to test and demonstrate the platform's capabilities. Much of the site has been surveyed and modelled at different levels of detail (LoD) and the geometric model has been semantically segmented and integrated with attribute data gathered from several external data sources. The paper describes the characteristics of the research work, along with its implementation issues and the initial results of the developed prototype.

  5. Application of new multi-resolution methods for the comparison of biomolecular electrostatic properties in the absence of global structural similarity.

    PubMed

    Zhang, Xiaoyu; Bajaj, Chandrajit L; Kwon, Bongjune; Dolinsky, Todd J; Nielsen, Jens E; Baker, Nathan A

    2006-01-01

    In this paper we present a method for the multi-resolution comparison of biomolecular electrostatic potentials without the need for global structural alignment of the biomolecules. The underlying computational geometry algorithm uses multi-resolution attributed contour trees (MACTs) to compare the topological features of volumetric scalar fields. We apply the MACTs to compute electrostatic similarity metrics for a large set of protein chains with varying degrees of sequence, structure, and function similarity. For calibration, we also compute similarity metrics for these chains by a more traditional approach based upon 3D structural alignment and analysis of Carbo similarity indices. Moreover, because the MACT approach does not rely upon pairwise structural alignment, its accuracy and efficiency promise to perform well on future large-scale classification efforts across groups of structurally-diverse proteins. The MACT method discriminates between protein chains at a level comparable to the Carbo similarity index method; i.e., it is able to accurately cluster proteins into functionally-relevant groups which demonstrate strong dependence on ligand binding sites. The results of the analyses are available from the linked web databases http://ccvweb.cres.utexas.edu/MolSignature/ and http://agave.wustl.edu/similarity/. The MACT analysis tools are available as part of the public domain library of the Topological Analysis and Quantitative Tools (TAQT) from the Center of Computational Visualization, at the University of Texas at Austin (http://ccvweb.csres.utexas.edu/software). The Carbo software is available for download with the open-source APBS software package at http://apbs.sf.net/.

  6. Cost tradeoffs in consequence management at nuclear power plants: A risk based approach to setting optimal long-term interdiction limits for regulatory analyses

    SciTech Connect

    Mubayi, V.

    1995-05-01

    The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the "long-term interdiction limit", is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit, the health effect costs increasing as the limit is relaxed and the protective action costs decreasing. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants which were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
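
    The trade-off can be made concrete with a toy calculation, sketched below: monetized health-effect costs grow as the interdiction limit is relaxed while protective-action costs fall, and the optimal limit minimizes their sum. Both cost curves are made-up stand-ins, not NUREG-1150 results.

        import numpy as np

        limits = np.linspace(0.5, 10.0, 200)        # candidate dose limits (rem)
        health_cost = 12.0 * limits                 # hypothetical: grows as limit is relaxed
        protection_cost = 150.0 / limits            # hypothetical: falls as limit is relaxed
        total = health_cost + protection_cost
        best = limits[np.argmin(total)]
        print(f"optimal interdiction limit = {best:.2f} rem (grid search)")
        # analytic check for these toy curves: d/dL (a*L + b/L) = 0  ->  L* = sqrt(b/a)
        print(f"closed form: {np.sqrt(150.0 / 12.0):.2f} rem")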

  7. Proof-of-concept demonstration of a miniaturized three-channel multiresolution imaging system

    NASA Astrophysics Data System (ADS)

    Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Vervaeke, Michael; Van Erps, Jürgen; Thienpont, Hugo

    2014-05-01

    Multichannel imaging systems have several potential applications such as multimedia, surveillance, medical imaging and machine vision, and have therefore been a hot research topic in recent years. Such imaging systems, inspired by natural compound eyes, have many channels, each covering only a portion of the total field-of-view of the system. As a result, these systems provide a wide field-of-view (FOV) while having a small volume and a low weight. Different approaches have been employed to realize a multichannel imaging system. We demonstrated that the different channels of the imaging system can be designed in such a way that each has different imaging properties (angular resolution, FOV, focal length). Using optical ray-tracing software (CODE V), we have designed a miniaturized multiresolution imaging system that contains three channels, each consisting of four aspherical lens surfaces fabricated from PMMA material through ultra-precision diamond tooling. The first channel possesses the finest angular resolution (0.0096°) and the narrowest FOV (7°), whereas the third channel has the widest FOV (80°) and the coarsest angular resolution (0.078°). The second channel has intermediate properties. Such a multiresolution capability allows different image processing algorithms to be implemented on the different segments of an image sensor. This paper presents the experimental proof-of-concept demonstration of the imaging system using a commercial CMOS sensor and gives an in-depth analysis of the obtained results. Experimental images captured with the three channels are compared with the corresponding simulated images. The experimental MTFs of the channels have also been calculated from the captured images of a slanted-edge test target. This multichannel multiresolution approach opens the opportunity for low-cost compact imaging systems that can be equipped with smart imaging capabilities.

  8. Comparison of statistical, LBP, and multi-resolution analysis features for breast mass classification.

    PubMed

    Reyad, Yasser A; Berbar, Mohamed A; Hussain, Muhammad

    2014-09-01

    Millions of women are suffering from breast cancer, which can be treated effectively if it is detected early. Mammography is broadly recognized as an effective imaging modality for the early detection of breast cancer. Computer-aided diagnosis (CAD) systems are very helpful for radiologists in detecting and diagnosing abnormalities earlier and faster than traditional screening programs. An important step of a CAD system is feature extraction. This research gives a comprehensive study of the effects of different features to be used in a CAD system for the classification of masses. The features are extracted using local binary pattern (LBP), which is a texture descriptor, statistical measures, and multi-resolution frameworks. Statistical and LBP features are extracted from each region of interest (ROI), taken from mammogram images, after dividing it into N×N blocks. The multi-resolution features are based on the discrete wavelet transform (DWT) and the contourlet transform (CT). In multi-resolution analysis, ROIs are decomposed into a low sub-band and high sub-bands at different resolution levels, and the coefficients of the low sub-band at the last level are taken as features. Support vector machines (SVM) are used for classification. The evaluation is performed using the Digital Database for Screening Mammography (DDSM) database. An accuracy of 98.43% is obtained using statistical or LBP features, but when both these types of features are fused, the accuracy increases to 98.63%. The CT features achieve a classification accuracy of 98.43%, whereas the accuracy resulting from DWT features is 96.93%. The statistical analysis and ROC curves show that the methods based on LBP, statistical measures and CT perform equally well and not only outperform the DWT-based method but also other existing methods. PMID:25037713
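
    A minimal version of the LBP-plus-SVM branch of the pipeline is sketched below, with random textures standing in for DDSM ROIs: a basic 8-neighbour LBP histogram is computed per ROI and fed to scikit-learn's SVC. The block division, the statistical and multi-resolution (DWT/CT) features, and all tuning are omitted.

        import numpy as np
        from sklearn.svm import SVC

        def lbp_histogram(roi):
            """256-bin histogram of basic 8-neighbour LBP codes over one ROI."""
            c = roi[1:-1, 1:-1]
            neigh = [roi[:-2, :-2], roi[:-2, 1:-1], roi[:-2, 2:], roi[1:-1, 2:],
                     roi[2:, 2:], roi[2:, 1:-1], roi[2:, :-2], roi[1:-1, :-2]]
            code = np.zeros(c.shape, dtype=np.int64)
            for bit, n in enumerate(neigh):
                code |= (n >= c).astype(np.int64) << bit   # threshold against centre pixel
            return np.bincount(code.ravel(), minlength=256) / code.size

        rng = np.random.default_rng(6)
        # stand-in "normal" vs "mass" ROIs with different texture statistics
        rois = ([rng.normal(0, 1, (64, 64)) for _ in range(40)]
                + [np.cumsum(rng.normal(0, 1, (64, 64)), axis=1) for _ in range(40)])
        X = np.array([lbp_histogram(r) for r in rois])
        y = np.array([0] * 40 + [1] * 40)
        clf = SVC(kernel="rbf").fit(X[::2], y[::2])        # train on every other sample
        print("held-out accuracy:", clf.score(X[1::2], y[1::2]))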

  9. A multi-resolution method for climate system modeling: application of spherical centroidal Voronoi tessellations

    SciTech Connect

    Ringler, Todd; Ju, Lili; Gunzburger, Max

    2008-01-01

    During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multiresolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical centroidal Voronoi tessellations (SCVTs) offer one potential path toward the development of robust, multiresolution climate system model components. SCVTs allow for the generation of high quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean–ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing, and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear, shallow water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multiresolution method and the challenges ahead.

  10. Temporal patterns in southern Aegean seismicity revealed by the multiresolution wavelet analysis

    NASA Astrophysics Data System (ADS)

    Telesca, Luciano; Hloupis, George; Nikolintaga, Irini; Vallianatos, Filippos

    2007-12-01

    We applied multiresolution wavelet analysis to the sequence of times between earthquakes that occurred between 1970 and 2003 in the southern Aegean area, one of the most seismically active areas in the Mediterranean. We observed twofold features in the wavelet-coefficient standard deviation σ_wav: (i) at low scales it decreases in correspondence with the occurrence of the strongest earthquakes, mainly due to the aftershock activation mechanism; (ii) at high scales it is characterized by oscillating behaviour, which is typical of background seismicity.
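
    The scale-by-scale statistic can be reproduced with a plain Haar decomposition, as sketched below: the standard deviation of the detail coefficients is computed at each level of an interevent-time series. The synthetic exponential waiting times with an aftershock-like burst are only a stand-in for the Aegean catalogue.

        import numpy as np

        def haar_sigma_per_scale(x, levels):
            sigmas, cur = [], x.astype(float)
            for _ in range(levels):
                n = len(cur) // 2 * 2                          # even length for pairing
                a = (cur[0:n:2] + cur[1:n:2]) / np.sqrt(2)     # approximation coefficients
                d = (cur[0:n:2] - cur[1:n:2]) / np.sqrt(2)     # detail coefficients
                sigmas.append(d.std())                         # sigma_wav at this scale
                cur = a
            return sigmas

        rng = np.random.default_rng(7)
        waiting = rng.exponential(100.0, size=4096)            # background seismicity
        waiting[2000:2100] = rng.exponential(2.0, size=100)    # aftershock-like burst
        for lev, s in enumerate(haar_sigma_per_scale(waiting, 6), start=1):
            print(f"scale 2^{lev}: sigma_wav = {s:8.2f}")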

  11. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach however is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end user requirements from the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and in answering a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.

  12. Multi-Resolution and Wavelet Representations for Identifying Signatures of Disease

    PubMed Central

    Sajda, Paul; Laine, Andrew; Zeevi, Yehoshua

    2002-01-01

    Identifying physiological and anatomical signatures of disease in signals and images is one of the fundamental challenges in biomedical engineering. The challenge is most apparent given that such signatures must be identified in spite of tremendous inter- and intra-subject variability and noise. Crucial for uncovering these signatures has been the development of methods that exploit general statistical properties of natural signals. The signal processing and applied mathematics communities have developed, in recent years, signal representations which take advantage of Gabor-type and wavelet-type functions that localize signal energy in a joint time-frequency and/or space-frequency domain. These techniques can be expressed as multi-resolution transformations, of which perhaps the best known is the wavelet transform. In this paper we review wavelets, and other related multi-resolution transforms, within the context of identifying signatures for disease. These transforms construct a general representation of signals which can be used in detection, diagnosis and treatment monitoring. We present several examples where these transforms are applied to biomedical signal and imaging processing. These include computer-aided diagnosis in mammography, real-time mosaicking of ophthalmic slit-lamp imagery, characterization of heart disease via ultrasound, predicting epileptic seizures and signature analysis of the electroencephalogram, and reconstruction of positron emission tomography data. PMID:14646044

  13. A multi-resolution approach to retrospectively-gated cardiac micro-CT reconstruction

    NASA Astrophysics Data System (ADS)

    Clark, D. P.; Johnson, G. A.; Badea, C. T.

    2014-03-01

    In preclinical research, micro-CT is commonly used to provide anatomical information; however, there is significant interest in using this technology to obtain functional information in cardiac studies. The fastest acquisition in 4D cardiac micro-CT imaging is achieved via retrospective gating, which results in irregular angular projections after the projections are binned into phases of the cardiac cycle. Under these conditions, analytical reconstruction algorithms, such as filtered back projection, suffer from streaking artifacts. Here, we propose a novel, multi-resolution, iterative reconstruction algorithm inspired by robust principal component analysis which prevents the introduction of streaking artifacts, while attempting to recover the highest temporal resolution supported by the projection data. The algorithm achieves these results through a unique combination of the split Bregman method and joint bilateral filtration. We illustrate the algorithm's performance using a contrast-enhanced, 2D slice through the MOBY mouse phantom and realistic projection acquisition and reconstruction parameters. Our results indicate that the algorithm is robust to undersampling levels of only 34 projections per cardiac phase and, therefore, has high potential for reducing both acquisition times and radiation dose. Another potential advantage of the multi-resolution scheme is the natural division of the reconstruction problem into a large number of independent sub-problems that can be solved in parallel. In future work, we will investigate the performance of this algorithm with retrospectively-gated, cardiac micro-CT data.

  14. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    PubMed

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-01

    Modern computerized spectroscopic instrumentation can produce high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs of these calculations grow rapidly with increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. The factorization results are then used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. PMID:26388368
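
    The coarse-to-fine initialization strategy can be illustrated with a generic alternating-least-squares factorization standing in for the paper's constrained solvers; the function names, the nonnegativity-by-clipping step and the column subsampling used for coarsening are our assumptions:

        import numpy as np

        def als_factorize(D, S0, iters=200):
            """Nonnegative alternating least squares: D ~ C @ S."""
            S = S0.copy()
            for _ in range(iters):
                C = np.clip(D @ np.linalg.pinv(S), 0, None)   # concentration profiles
                S = np.clip(np.linalg.pinv(C) @ D, 0, None)   # pure component spectra
            return C, S

        def multires_factorize(D, n_comp, levels=3):
            """Factorize coarsened copies of D first, refining the init each time."""
            rng = np.random.default_rng(1)
            S = rng.random((n_comp, D[:, ::2 ** levels].shape[1]))
            for lev in range(levels, -1, -1):
                C, S = als_factorize(D[:, ::2 ** lev], S)     # every 2**lev-th channel
                if lev > 0:                                   # stretch S to init next level
                    S = np.repeat(S, 2, axis=1)[:, :D[:, ::2 ** (lev - 1)].shape[1]]
            return C, S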

  15. Classification of mammographic lesion based in Completed Local Binary Pattern and using multiresolution representation

    NASA Astrophysics Data System (ADS)

    Duarte, Y. A. S.; Nascimento, M. Z.; Oliveira, D. L. L.

    2014-03-01

    This paper presents a comparison of two methods for feature extraction from mammograms, based on the completed local binary pattern (CLBP) and the wavelet transform. In the first part, CLBP was applied to the digitized mammograms. In the second part, we applied CLBP to the sub-bands obtained from the wavelet multi-resolution representation of the mammograms. In this study, we evaluated CLBP both on the image in the spatial domain and on the sub-bands obtained with the wavelet transform. The statistical technique of analysis of variance (ANOVA) was then used to reduce the number of features. Finally, a Support Vector Machine (SVM) classifier was applied to the samples. The proposed methods were tested on 720 mammograms, of which 240 were diagnosed as normal, 240 as benign lesions and 240 as malignant lesions. The images were obtained randomly from the Digital Database for Screening Mammography (DDSM). The system's effectiveness was evaluated using the area under the ROC curve (AUC). The experiments demonstrate that textural feature extraction from the multi-resolution representation was more relevant, with a value of AUC=1.0; in our experiments, CLBP in the spatial domain resulted in a value of AUC=0.89. The proposed method demonstrated promising results in the classification of different classes of mammographic lesions.

  16. QRS detection by lifting scheme constructing multi-resolution morphological decomposition.

    PubMed

    Zhang, Pu; Ma, Heather T; Zhang, Qinyu

    2014-01-01

    The QRS complex detection algorithm is at the core of ECG auto-diagnosis methods and deeply influences cardiac cycle division for signal compression. However, ECG signals collected by noninvasive surface electrodes are usually mixed with several kinds of interference, and waveform variation is the main obstacle to ECG processing. This paper proposes a QRS complex detection algorithm based on multi-resolution mathematical morphological decomposition, which combines the strengths in R-peak detection of both the mathematical morphological method and multi-resolution decomposition. Moreover, a lifting construction method with a maximization updating operator is adopted to further improve the algorithm's performance, and an efficient R-peak search-back algorithm is employed to reduce false positives (FP) and false negatives (FN). The proposed algorithm performs well on the MIT-BIH Arrhythmia Database, achieving over 99% detection rate, sensitivity and positive predictivity, with a low computational burden. The proposed method is therefore appropriate for portable medical devices in telemedicine systems. PMID:25569905
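
    The two morphological ingredients named above, baseline suppression by opening/closing and R-peak picking, can be sketched as follows; the structuring-element width, the 250 Hz sampling rate and the peak-picking thresholds are assumed values, not the paper's:

        import numpy as np
        from scipy.ndimage import grey_closing, grey_opening
        from scipy.signal import find_peaks

        def detect_r_peaks(ecg, fs=250):
            w = int(0.2 * fs) | 1                    # odd window, wider than a QRS complex
            baseline = grey_closing(grey_opening(ecg, size=w), size=w)
            detail = ecg - baseline                  # baseline wander suppressed
            peaks, _ = find_peaks(detail,
                                  distance=int(0.3 * fs),          # refractory spacing
                                  prominence=0.5 * detail.std())   # crude threshold
            return peaks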

  17. A Virtual Globe-Based Multi-Resolution TIN Surface Modeling and Visualization Method

    NASA Astrophysics Data System (ADS)

    Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei

    2016-06-01

    The integration and visualization of geospatial data on a virtual globe play a significant role in understanding and analyzing Earth surface processes. However, current virtual globes often sacrifice accuracy for efficiency in global data processing and visualization, which limits their usefulness for scientific applications. In this article, we propose a high-accuracy multi-resolution TIN pyramid construction and visualization method for virtual globes. First, we introduce cartographic principles to formalize level of detail (LOD) generation, so that the TIN model in each layer is controlled by a data quality standard. A maximum z-tolerance algorithm is then used to iteratively construct the multi-resolution TIN pyramid. Moreover, the extracted landscape features are incorporated into the TIN at each layer, thus preserving the topological structure of the terrain surface at different levels. In the proposed framework, a virtual node (VN)-based approach is developed to seamlessly partition and discretize each triangulation layer into tiles, which can be organized and stored with a global quad-tree index. Finally, real-time out-of-core spherical terrain rendering is realized on the virtual globe system VirtualWorld1.0. The experimental results show that the proposed method achieves a high-fidelity terrain representation while producing high-quality underlying data that satisfy the demands of scientific analysis.

  18. Reliability of multiresolution deconvolution for improving depth resolution in SIMS analysis

    NASA Astrophysics Data System (ADS)

    Boulakroune, M.'Hamed

    2016-11-01

    This paper addresses the effectiveness and reliability of a multiresolution deconvolution algorithm for recovering Secondary Ion Mass Spectrometry (SIMS) profiles altered by the measurement. The new algorithm is characterized as a regularized wavelet transform: it combines ideas from Tikhonov-Miller regularization, wavelet analysis and deconvolution algorithms in order to benefit from the advantages of each. The SIMS profiles were obtained by analyzing two boron structures in a silicon matrix using a Cameca IMS-6f instrument at oblique incidence. The first structure is large, consisting of two distant wide boxes; the second is a thin structure containing ten delta-layers, to which deconvolution by zone was applied. It is shown that the new multiresolution algorithm gives the best results. In particular, local application of the regularization parameter to the blurred and estimated solutions at each resolution level yields smoothed signals without creating artifacts related to the noise content of the profile. This leads to a significant improvement in the depth resolution and in the peak maxima.
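
    The regularized-deconvolution core of such a method (without the wavelet, i.e. multiresolution, machinery) reduces to a Tikhonov-type filter in the Fourier domain; the sketch below is a generic illustration in which the depth-resolution function kernel and the regularization weight lam are assumed inputs:

        import numpy as np

        def tikhonov_deconvolve(profile, kernel, lam=1e-2):
            """Recover x from y = k * x via X = conj(K) Y / (|K|^2 + lam)."""
            n = len(profile)
            K = np.fft.rfft(kernel, n)
            Y = np.fft.rfft(profile, n)
            X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)   # lam trades resolution vs noise
            return np.fft.irfft(X, n)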

  19. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kishan, Harini; Seelamantula, Chandra Sekhar

    2015-09-01

    We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean-squared error (MSE). Unlike the MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed the SURE-optimal bilateral filter (SOBF), whose optimal parameters are selected using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely, the SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and the SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance when compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.

  20. An efficient multi-resolution GA approach to dental image alignment

    NASA Astrophysics Data System (ADS)

    Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany

    2006-02-01

    Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step in automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using a GA progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
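
    The fitness evaluation at the heart of such a GA couples an affine warp of edge points with a Hausdorff distance; a minimal version of that cost (our names and parameterization; the GA loop itself is omitted) is:

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def affine_cost(params, ref_pts, query_pts):
            """params = (a, b, c, d, tx, ty); ref_pts/query_pts are (N, 2) edge points."""
            a, b, c, d, tx, ty = params
            warped = query_pts @ np.array([[a, b], [c, d]]).T + np.array([tx, ty])
            # Symmetric Hausdorff distance between the two edge-point sets.
            return max(directed_hausdorff(ref_pts, warped)[0],
                       directed_hausdorff(warped, ref_pts)[0])

        # e.g. the identity transform: affine_cost((1, 0, 0, 1, 0, 0), ref, qry)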

  1. Long-range force and moment calculations in multiresolution simulations of molecular systems

    SciTech Connect

    Poursina, Mohammad; Anderson, Kurt S.

    2012-08-30

    Multiresolution simulations of molecular systems such as DNAs, RNAs, and proteins are implemented using models with different resolutions ranging from a fully atomistic model to coarse-grained molecules, or even to continuum level system descriptions. For such simulations, pairwise force calculation is a serious bottleneck which can impose a prohibitive amount of computational load on the simulation if not performed wisely. Herein, we approximate the resultant force due to long-range particle-body and body-body interactions applicable to multiresolution simulations. Since the resultant force does not necessarily act through the center of mass of the body, it creates a moment about the mass center. Although this potentially important torque is neglected in many coarse-grained models which only use particle dynamics to formulate the dynamics of the system, it should be calculated and used when coarse-grained simulations are performed in a multibody scheme. Herein, the approximation for this moment due to far-field particle-body and body-body interactions is also provided.

  2. Single-resolution and multiresolution extended-Kalman-filter-based reconstruction approaches to optical refraction tomography.

    PubMed

    Naik, Naren; Vasu, R M; Ananthasayanam, M R

    2010-02-20

    The problem of reconstruction of a refractive-index distribution (RID) in optical refraction tomography (ORT) with optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data. The initialization of the biases and covariances corresponding to the state and measurement noise is discussed. The state and measurement noise biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, where comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average correction per projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of EKF for ORT in single-resolution and multiresolution formulations, and also in the use of adaptive estimation of the EKF's noise covariances.

  3. Inferring Species Richness and Turnover by Statistical Multiresolution Texture Analysis of Satellite Imagery

    PubMed Central

    Convertino, Matteo; Mangoubi, Rami S.; Linkov, Igor; Lowry, Nathan C.; Desai, Mukund

    2012-01-01

    Background: The quantification of species-richness and species-turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel intensity based Shannon entropy for estimating species-richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. Methodology/Principal Findings: We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected, and its parameters estimated, or, a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdf's via the Kullback-Leibler divergence (KL). Species turnover, or diversity, is estimated using both this KL divergence and the difference in Shannon entropy. Additionally, we predict species richness, or diversity, based on the
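
    The subband-comparison step described above can be sketched compactly using the non-parametric histogram route the authors mention as an alternative to fitting generalized Gaussian densities; the wavelet, the decomposition depth and the bin count below are assumed for illustration:

        import numpy as np
        import pywt

        def subband_kl(img_a, img_b, wavelet="db2", level=3, bins=64):
            kl = 0.0
            ca = pywt.wavedec2(img_a, wavelet, level=level)
            cb = pywt.wavedec2(img_b, wavelet, level=level)
            for sa, sb in zip(ca[1:], cb[1:]):            # detail subbands only
                for band_a, band_b in zip(sa, sb):
                    lo = min(band_a.min(), band_b.min())
                    hi = max(band_a.max(), band_b.max())
                    p, _ = np.histogram(band_a, bins, (lo, hi), density=True)
                    q, _ = np.histogram(band_b, bins, (lo, hi), density=True)
                    p, q = p + 1e-12, q + 1e-12           # avoid division by zero
                    p, q = p / p.sum(), q / q.sum()
                    kl += np.sum(p * np.log(p / q))       # KL per subband, summed
            return kl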

  4. Proteomic and Transcriptomic Analyses of “Candidatus Pelagibacter ubique” Describe the First PII-Independent Response to Nitrogen Limitation in a Free-Living Alphaproteobacterium

    PubMed Central

    Smith, Daniel P.; Thrash, J. Cameron; Nicora, Carrie D.; Lipton, Mary S.; Burnum-Johnson, Kristin E.; Carini, Paul; Smith, Richard D.; Giovannoni, Stephen J.

    2013-01-01

    Nitrogen is one of the major nutrients limiting microbial productivity in the ocean, and as a result, most marine microorganisms have evolved systems for responding to nitrogen stress. The highly abundant alphaproteobacterium “Candidatus Pelagibacter ubique,” a cultured member of the order Pelagibacterales (SAR11), lacks the canonical GlnB, GlnD, GlnK, and NtrB/NtrC genes for regulating nitrogen assimilation, raising questions about how these organisms respond to nitrogen limitation. A survey of 266 Alphaproteobacteria genomes found these five regulatory genes nearly universally conserved, absent only in intracellular parasites and members of the order Pelagibacterales, including “Ca. Pelagibacter ubique.” Global differences in mRNA and protein expression between nitrogen-limited and nitrogen-replete cultures were measured to identify nitrogen stress responses in “Ca. Pelagibacter ubique” strain HTCC1062. Transporters for ammonium (AmtB), taurine (TauA), amino acids (YhdW), and opines (OccT) were all elevated in nitrogen-limited cells, indicating that they devote increased resources to the assimilation of nitrogenous organic compounds. Enzymes for assimilating ammonia into glutamine (GlnA), glutamate (GltBD), and glycine (AspC) were similarly upregulated. Differential regulation of the transcriptional regulator NtrX in the two-component signaling system NtrY/NtrX was also observed, implicating it in control of the nitrogen starvation response. Comparisons of the transcriptome and proteome supported previous observations of uncoupling between transcription and translation in nutrient-deprived “Ca. Pelagibacter ubique” cells. Overall, these data reveal a streamlined, PII-independent response to nitrogen stress in “Ca. Pelagibacter ubique,” and likely other Pelagibacterales, and show that they respond to nitrogen stress by allocating more resources to the assimilation of nitrogen-rich organic compounds. PMID:24281717

  5. PSECMAC: a novel self-organizing multiresolution associative memory architecture.

    PubMed

    Teddy, S D; Quek, C; Lai, E K

    2008-04-01

    The cerebellum constitutes a vital part of the human brain system that possesses the capability to model highly nonlinear physical dynamics. The cerebellar model articulation controller (CMAC) associative memory network is a computational model inspired by the neurophysiological properties of the cerebellum, and it has been widely used for control, optimization, and various pattern recognition tasks. However, the CMAC network's highly regularized computing structure often leads to the following: 1) suboptimal modeling accuracy, 2) poor memory utilization, and 3) the generalization-accuracy dilemma. Previous attempts to address these shortcomings have had limited success, and the proposed solutions often introduce high operational complexity into the CMAC network. This paper presents a novel neurophysiologically inspired associative memory architecture named the pseudo-self-evolving CMAC (PSECMAC), which nonuniformly allocates its computing cells to overcome the architectural deficiencies encountered by the CMAC network. The nonuniform memory allocation scheme employed by the proposed PSECMAC network is inspired by the experience-driven synaptic plasticity phenomenon observed in the cerebellum, where significantly higher densities of synaptic connections are located in the frequently accessed regions. In the PSECMAC network, this biological synaptic plasticity phenomenon is emulated by employing a data-driven adaptive memory quantization scheme that defines its computing structure. A neighborhood-based activation process is subsequently implemented to facilitate the learning and computation of the PSECMAC structure. The training stability of the PSECMAC network is theoretically assured by the proof of its learning convergence, which is presented in this paper. The performance of the proposed network is subsequently benchmarked against the CMAC network and several representative CMAC variants on three real-life applications, namely, pricing of currency futures

  6. Origin of mobility enhancement by chemical treatment of gate-dielectric surface in organic thin-film transistors: Quantitative analyses of various limiting factors in pentacene thin films

    NASA Astrophysics Data System (ADS)

    Matsubara, R.; Sakai, Y.; Nomura, T.; Sakai, M.; Kudo, K.; Majima, Y.; Knipp, D.; Nakamura, M.

    2015-11-01

    To improve the performance of organic thin-film transistors (TFTs), gate-insulator surface treatments are often applied. However, the origin of the resulting mobility increase has not been well understood, because the factors limiting mobility have not been compared quantitatively. In this work, we clarify the influence of gate-insulator surface treatments in pentacene thin-film transistors on the limiting factors of mobility, i.e., the size of the crystal-growth domains, the crystallite size, the HOMO-band-edge fluctuation, and the carrier transport barrier at domain boundaries. We quantitatively investigated these factors for pentacene TFTs with bare, hexamethyldisilazane-treated, and polyimide-coated SiO2 layers as gate dielectrics. With these surface treatments, the size of the crystal-growth domains increases, but both the crystallite size and the HOMO-band-edge fluctuation remain unchanged. Analyzing the experimental results, we also show that the barrier height at the boundary between crystal-growth domains is not sensitive to the treatments. The results imply that the essential increase in mobility brought about by these surface treatments is due only to the increase in the size of the crystal-growth domains, or equivalently to the decrease in the number of energy barriers at domain boundaries in the TFT channel.

  7. IMFIT Integrated Modeling Applications Supporting Experimental Analysis: Multiple Time-Slice Kinetic EFIT Reconstructions, MHD Stability Limits, and Energy and Momentum Flux Analyses

    NASA Astrophysics Data System (ADS)

    Collier, A.; Lao, L. L.; Abla, G.; Chu, M. S.; Prater, R.; Smith, S. P.; St. John, H. E.; Guo, W.; Li, G.; Pan, C.; Ren, Q.; Park, J. M.; Bisai, N.; Srinivasan, R.; Sun, A. P.; Liu, Y.; Worrall, M.

    2010-11-01

    This presentation summarizes several useful applications provided by the IMFIT integrated modeling framework to support DIII-D and EAST research. IMFIT is based on Python and utilizes a modular task-flow architecture with a central manager and extensive GUI support to coordinate tasks among component modules. The kinetic-EFIT application allows multiple time-slice reconstructions by fetching pressure profile data directly from MDS+ or from ONETWO or PTRANSP. The stability application analyzes a given reference equilibrium for stability limits by performing parameter perturbation studies with MHD codes such as DCON, GATO, ELITE, or PEST3. The transport task includes construction of experimental energy and momentum fluxes from profile analysis and comparison against theoretical models such as MMM95, GLF23, or TGLF.

  8. DTMs: discussion of a new multi-resolution function based model

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Biagi, L.; Zamboni, G.

    2012-04-01

    The diffusion of new technologies based on WebGIS and virtual globes allows the distribution of DTMs and three-dimensional representations to the Web user community. In the Web distribution of geographical information, database storage size represents a critical point: given a specific area of interest, the server typically needs to perform some preprocessing, and the data then have to be sent to the client, which applies some additional processing. The efficiency of all these actions is crucial to guarantee near-real-time availability of the information. DTMs are obtained from the raw observations by some sampling or interpolation technique and are typically stored and distributed as Triangular Irregular Networks (TINs) or regular grids. A new approach to storing and transmitting DTMs has been studied and implemented. The basic idea is to use multi-resolution bilinear spline functions to interpolate the raw observations and to represent the terrain. In more detail, the algorithm performs the following actions. 1) The spatial distribution of the raw observations is investigated. In areas where few data are available, few levels of splines are activated, while more levels are activated where the raw observations are denser; each new level corresponds to a halving of the spline support with respect to the previous level. 2) After the selection of the spline functions to be activated, the relevant coefficients are estimated by interpolating the raw observations. The interpolation is computed by batch least squares. 3) Finally, the estimated coefficients of the splines are stored. The algorithm guarantees a local resolution consistent with the data density, exploiting all the available information provided by the sample. The model can be called "function based" because the coefficients of a given function are stored instead of a set of heights: in particular, the resolution level, the position and the coefficient of each activated spline function are stored by the server and are
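
    A one-dimensional analogue of the coarse-to-fine spline fit can be sketched as follows; activation by data density is simplified away (every spline at every level is activated), and the hat-function basis and level count are our assumptions:

        import numpy as np

        def hat(x, centre, half_width):
            """Linear B-spline (hat function) on a support of 2 * half_width."""
            return np.clip(1 - np.abs(x - centre) / half_width, 0, None)

        def multilevel_fit(x, y, levels=4):
            residual, model = y.astype(float), []
            for lev in range(levels):
                n = 2 ** lev + 1                        # support halves at each level
                centres = np.linspace(x.min(), x.max(), n)
                hw = (x.max() - x.min()) / max(n - 1, 1)
                B = np.stack([hat(x, c, hw) for c in centres], axis=1)
                coef, *_ = np.linalg.lstsq(B, residual, rcond=None)
                residual = residual - B @ coef          # next level fits what is left
                model.append((centres, hw, coef))
            return model                                # store coefficients, not heights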

  9. Abnormality in face scanning by children with autism spectrum disorder is limited to the eye region: evidence from multi-method analyses of eye tracking data.

    PubMed

    Yi, Li; Fan, Yuebo; Quinn, Paul C; Feng, Cong; Huang, Dan; Li, Jiao; Mao, Guoquan; Lee, Kang

    2013-01-01

    There has been considerable controversy regarding whether children with autism spectrum disorder (ASD) and typically developing (TD) children show different eye movement patterns when processing faces. We investigated the scanning of faces by ASD children and age- and IQ-matched TD children using a novel multi-method approach. We found that ASD children spent less time looking at the whole face generally. After controlling for this difference, ASD children's fixations on the other face parts, except for the eye region, and their scanning paths between face parts were comparable to either the age-matched or the IQ-matched TD group. In contrast, in the eye region, ASD children's scanning differed significantly from that of both TD groups: (a) ASD children fixated significantly less on the right eye (from the observer's view); (b) ASD children's fixations were more biased towards the left eye region; and (c) ASD children fixated below the left eye, whereas TD children fixated on the pupil region of the eye. Thus, ASD children do not have a general abnormality in face scanning. Rather, their abnormality is limited to the eye region, likely due to their strong tendency to avoid eye contact. PMID:23929830

  10. Independent component analysis and multiresolution asymmetry ratio for brain-computer interface.

    PubMed

    Hsu, Wei-Yen

    2013-04-01

    This study proposes a brain-computer interface (BCI) system for the recognition of single-trial electroencephalogram (EEG) data. Combining independent component analysis (ICA) with a multiresolution asymmetry ratio, a support vector machine (SVM) is used to classify left and right finger lifting or motor imagery. First, ICA and similarity measures are proposed to eliminate electrooculography (EOG) artifacts automatically. Features are then extracted from the wavelet data by means of an asymmetry ratio. Finally, the SVM classifier is used to discriminate between the features. Compared to EEG data without EOG artifact removal, band-power features, and adaptive autoregressive (AAR) parameter features, the proposed system achieves promising results in BCI applications.
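
    A minimal form of a multiresolution asymmetry-ratio feature for a left/right electrode pair (for example C3/C4) might look like the following; the wavelet, the decomposition depth and the exact ratio form are assumed for illustration, not taken from the paper:

        import numpy as np
        import pywt

        def asymmetry_features(c3, c4, wavelet="db4", level=4):
            """One asymmetry ratio per wavelet detail level of a C3/C4 pair."""
            d3 = pywt.wavedec(c3, wavelet, level=level)[1:]   # detail coefficients
            d4 = pywt.wavedec(c4, wavelet, level=level)[1:]
            feats = []
            for a, b in zip(d3, d4):
                pa, pb = np.sum(a ** 2), np.sum(b ** 2)       # band power per side
                feats.append((pa - pb) / (pa + pb + 1e-12))   # normalized asymmetry
            return np.array(feats)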

  11. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of detail at which the nonlinear system is approximated. The basis functions also make the parameter estimation step linear. This feature is exploited to derive a systematic procedure for determining and eliminating basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models are obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  12. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of detail at which the nonlinear system is approximated. The basis functions also make the parameter estimation step linear. This feature is exploited to derive a systematic procedure for determining and eliminating basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models are obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  13. [Method of multi-resolution 3D image registration by mutual information].

    PubMed

    Ren, Haiping; Wu, Wenkai; Yang, Hu; Chen, Shengzu

    2002-12-01

    Maximization of mutual information is a powerful criterion for 3D medical image registration, allowing robust and fully automated rigid registration of multi-modal images in various applications. In this paper, a method based on normalized mutual information for 3D image registration is presented for CT, MR and PET images. Powell's direction set method and Brent's one-dimensional optimization algorithm are used as the optimization strategy. A multi-resolution approach is applied to speed up the matching process. For PET images, a segmentation preprocessing step is performed to reduce background artifacts. According to the evaluation by Vanderbilt University, sub-voxel accuracy in multi-modality registration has been achieved with this algorithm. PMID:12561358
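
    The similarity measure at the heart of the method, normalized mutual information estimated from a joint histogram, can be sketched as follows (the Powell/Brent search and the multi-resolution pyramid are omitted; the bin count is an assumed parameter):

        import numpy as np

        def normalized_mutual_information(a, b, bins=64):
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            hx = -np.sum(px[px > 0] * np.log(px[px > 0]))       # marginal entropies
            hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
            hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))  # joint entropy
            return (hx + hy) / hxy                              # maximized when aligned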

  14. Multiresolution parametric estimation of transparent motions and denoising of fluoroscopic images.

    PubMed

    Auvray, Vincent; Liénard, Jean; Bouthemy, Patrick

    2005-01-01

    We describe a novel multiresolution parametric framework for estimating the transparent motions typically present in X-ray exams. Assuming the presence of two transparent layers, it computes two affine velocity fields by minimizing an appropriate objective function with an incremental Gauss-Newton technique. We have designed a realistic simulation scheme of fluoroscopic image sequences to validate our method on data with ground truth and different levels of noise. An experiment on real clinical images is also reported. We then exploit this transparent-motion estimation method to denoise two-layer image sequences using motion-compensated estimation. In accordance with theory, we show that we reach a denoising factor of 2/3 in a few iterations without introducing local artifacts into the image sequence.

  15. Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation

    NASA Technical Reports Server (NTRS)

    Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R

    2006-01-01

    The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.

  16. Structural health monitoring system based on diffracted Lamb wave analysis by multiresolution processing

    NASA Astrophysics Data System (ADS)

    Lemistre, Michel; Balageas, Daniel

    2001-06-01

    A health monitoring system is presented, composed of integrated disc-shaped piezoelectric transducers (PZTs), 100 µm thick and 5 mm in diameter, working sequentially as Lamb wave emitters and receivers. The diagnostic is based on the analysis of Lamb wave signals recorded before and after damage. In the composite, delaminations are discontinuities that produce mode conversion processes generating various outgoing modes. The multiresolution processing allows the isolation of the various propagation modes and their extraction in order to measure, for various propagation paths, the time delay between the arrival of the main burst and that of a specific outgoing mode. This process permits, with good accuracy, the localization of damage and the estimation of its extent. The robustness and portability of this technique are demonstrated by the fact that, after validation in our laboratory, it was successfully applied to data coming from an experiment conducted in another laboratory using its own acousto-ultrasonic health monitoring hardware.

  17. Multi-resolution entropy analysis of gait symmetry in neurological degenerative diseases and amyotrophic lateral sclerosis.

    PubMed

    Liao, Fuyuan; Wang, Jue; He, Ping

    2008-04-01

    The gait rhythm of patients with Parkinson's disease (PD), Huntington's disease (HD) and amyotrophic lateral sclerosis (ALS) has been studied with a focus on the fractal and correlation properties of stride time fluctuations. In this study, we investigated gait asymmetry in these diseases using multi-resolution entropy analysis of stance time fluctuations. Since stance time is likely to exhibit fluctuations across multiple spatial and temporal scales, the data series were decomposed into appropriate levels by applying the stationary wavelet transform. The similarity between two corresponding wavelet coefficient series, in terms of their regularities at each level, was quantified based on a modified sample entropy method, and a weighted sum was then used as a gait symmetry index. We found that gait symmetry in subjects with PD and HD, and especially in subjects with ALS, is significantly disturbed. This method may be useful in characterizing certain pathologies of motor control and, possibly, in monitoring disease progression and evaluating the effect of an individual treatment.

  18. Combining data fusion with multiresolution analysis for improving the classification accuracy of uterine EMG signals

    NASA Astrophysics Data System (ADS)

    Moslem, Bassam; Diab, Mohamad; Khalil, Mohamad; Marque, Catherine

    2012-12-01

    Multisensor data fusion is a powerful solution for solving difficult pattern recognition problems such as the classification of bioelectrical signals. It is the process of combining information from different sensors to provide more stable and more robust classification decisions. Here we combine data fusion with multiresolution analysis based on the wavelet packet transform (WPT) in order to classify real uterine electromyogram (EMG) signals recorded by 16 electrodes. The data fusion is done at the decision level by using a weighted majority voting (WMV) rule, while the WPT is used to achieve a significant enhancement in the classification performance of each channel by improving the discrimination power of the selected feature. We show that the proposed approach, tested on our recorded data, can improve the recognition accuracy in labor prediction and has a competitive and promising performance.
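
    A weighted majority voting rule of the kind described above is simple to state in code; the 0/1 decision convention and the per-channel weights below are assumed inputs, for example per-channel validation accuracies:

        import numpy as np

        def wmv_fuse(channel_decisions, weights):
            """channel_decisions: 0/1 label per channel; weights: e.g. accuracies."""
            d = np.asarray(channel_decisions)
            w = np.asarray(weights, dtype=float)
            return int(np.sum(w * d) >= 0.5 * w.sum())

        # 16 electrode channels, 10 of them voting "labor", with ramped weights:
        print(wmv_fuse([1] * 10 + [0] * 6, np.linspace(0.5, 1.0, 16)))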

  19. Pathfinder: multiresolution region-based searching of pathology images using IRM.

    PubMed

    Wang, J Z

    2000-01-01

    The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wavelets and the IRM (Integrated Region Matching) distance. Experiments with a database of 70,000 pathology image fragments have demonstrated high retrieval accuracy and high speed. The algorithm can be combined with our previously developed wavelet-based progressive pathology image transmission and browsing algorithm and is expandable for medical image databases.

  20. Application of multi-resolution modality independent elastography for detection of multiple anomalous objects

    NASA Astrophysics Data System (ADS)

    Ou, Jao J.; Barnes, Stephanie L.; Miga, Michael I.

    2006-03-01

    This work extends a recently realized inverse problem technique of extracting soft tissue elasticity information via non-rigid model-based image registration. The algorithm uses the elastic properties of the tissue in a biomechanical model to achieve maximal similarity between image data acquired under different states of loading. A new multi-resolution, non-linear optimization framework has been employed which allows for improved performance and object detection. Prior studies have demonstrated successful reconstructions from images of a tissue-like thin membrane phantom with a single embedded inclusion that was significantly stiffer than its surroundings. For this investigation, a similar phantom was fabricated with two stiff inclusions to test the effectiveness of this method in discriminating multiple smaller objects. Elasticity values generated from both simulation and real data testing scenarios provided sufficient contrast for detection and good quantitative localization of the inclusion areas.

  1. Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation

    NASA Technical Reports Server (NTRS)

    Lacaze, Alberto; Meystel, Michael; Meystel, Alex

    1994-01-01

    This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR) which presents the ASR as a 'baby', that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experience of acting within a particular environment (we call it an Astro-baby). The learning techniques are rooted in a recursive algorithm for the inductive generation of nested schemata, modeled on the processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. The system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experience, while still maintaining minimal computational complexity thanks to the system's multiresolutional nature.

  2. Multi-resolution entropy analysis of gait symmetry in neurological degenerative diseases and amyotrophic lateral sclerosis.

    PubMed

    Liao, Fuyuan; Wang, Jue; He, Ping

    2008-04-01

    The gait rhythm of patients with Parkinson's disease (PD), Huntington's disease (HD) and amyotrophic lateral sclerosis (ALS) has been studied with a focus on the fractal and correlation properties of stride time fluctuations. In this study, we investigated gait asymmetry in these diseases using multi-resolution entropy analysis of stance time fluctuations. Since stance time is likely to exhibit fluctuations across multiple spatial and temporal scales, the data series were decomposed into appropriate levels by applying the stationary wavelet transform. The similarity between two corresponding wavelet coefficient series, in terms of their regularities at each level, was quantified based on a modified sample entropy method, and a weighted sum was then used as a gait symmetry index. We found that gait symmetry in subjects with PD and HD, and especially in subjects with ALS, is significantly disturbed. This method may be useful in characterizing certain pathologies of motor control and, possibly, in monitoring disease progression and evaluating the effect of an individual treatment. PMID:17569571

  3. Exploring a multi-resolution modeling approach within the shallow-water equations

    SciTech Connect

    Ringler, Todd; Jacobsen, Doug; Gunzburger, Max; Ju, Lili; Duda, Michael; Skamarock, William

    2011-01-01

    The ability to solve the global shallow-water equations with a conforming, variable-resolution mesh is evaluated using standard shallow-water test cases. While the long-term motivation for this study is the creation of a global climate modeling framework capable of resolving different spatial and temporal scales in different regions, the process begins with an analysis of the shallow-water system in order to better understand the strengths and weaknesses of the approach developed herein. The multiresolution meshes are spherical centroidal Voronoi tessellations where a single, user-supplied density function determines the region(s) of fine- and coarse-mesh resolution. The shallow-water system is explored with a suite of meshes ranging from quasi-uniform resolution meshes, where the grid spacing is globally uniform, to highly variable resolution meshes, where the grid spacing varies by a factor of 16 between the fine and coarse regions. The potential vorticity is found to be conserved to within machine precision and the total available energy is conserved to within a time-truncation error. This result holds for the full suite of meshes, from quasi-uniform to highly variable resolution. Based on shallow-water test cases 2 and 5, the primary conclusion of this study is that solution error is controlled primarily by the grid resolution in the coarsest part of the model domain. This conclusion is consistent with results obtained by others. When these variable-resolution meshes are used for the simulation of an unstable zonal jet, the core features of the growing instability are found to be largely unchanged as the variation in the mesh resolution increases. The main differences between the simulations occur outside the region of mesh refinement, and these differences are attributed to the additional truncation error that accompanies increases in grid spacing. Overall, the results demonstrate support for this approach as a path toward

  4. Approaches to optimal aquifer management and intelligent control in a multiresolutional decision support system

    NASA Astrophysics Data System (ADS)

    Orr, Shlomo; Meystel, Alexander M.

    2005-03-01

    Despite remarkable new developments in stochastic hydrology and adaptations of advanced methods from operations research, stochastic control, and artificial intelligence, solutions of complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, distributed-parameter partial differential equations (PDE) on a given scale. While the addition of uncertainty, and hence stochasticity or randomness, has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that results in prohibitive computer power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when all uncertain parameters are included. A paradigm shift is introduced: an adaptation of new methods of intelligent control that relaxes the dependency on rigid, computer-intensive, stochastic PDE, and shifts the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions to real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems.

  5. Approaches to optimal aquifer management and intelligent control in a multiresolutional decision support system

    NASA Astrophysics Data System (ADS)

    Orr, Shlomo; Meystel, Alexander M.

    2005-03-01

    Despite remarkable new developments in stochastic hydrology and adaptations of advanced methods from operations research, stochastic control, and artificial intelligence, solutions of complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, distributed-parameter partial differential equations (PDE) on a given scale. While the addition of uncertainty, and hence stochasticity or randomness, has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that results in prohibitive computer power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when all uncertain parameters are included. A paradigm shift is introduced: an adaptation of new methods of intelligent control that relaxes the dependency on rigid, computer-intensive, stochastic PDE, and shifts the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions to real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems.

  6. Multiresolution Local Binary Pattern texture analysis for false positive reduction in computerized detection of breast masses on mammograms

    NASA Astrophysics Data System (ADS)

    Choi, Jae Young; Kim, Dae Hoe; Choi, Seon Hyeong; Ro, Yong Man

    2012-03-01

    We investigated the feasibility of using multiresolution Local Binary Pattern (LBP) texture analysis to reduce false-positive (FP) detections in a computerized mass detection framework. A novel approach for extracting LBP features is devised to differentiate masses and normal breast tissue on mammograms. In particular, to characterize the LBP texture patterns of mass boundaries while preserving the spatial structure pattern of the masses, two individual LBP texture patterns are extracted from the core region and from the ribbon region of pixels of a given ROI, respectively. These two texture patterns are combined to produce the so-called multiresolution LBP feature of the ROI. The proposed LBP texture analysis of the information in the mass core region and its margin proves significant and is not sensitive to the precise location of the mass boundaries. In this study, 89 mammograms were collected from the public MIAS database (DB). To perform a more realistic assessment of the FP reduction process, the LBP texture analysis was applied directly to a total of 1,693 regions of interest (ROIs) automatically segmented by a computer algorithm. A Support Vector Machine (SVM) was applied to classify mass ROIs from ROIs containing normal tissue. Receiver Operating Characteristic (ROC) analysis was conducted to evaluate the classification accuracy and its improvement using multiresolution LBP features. With multiresolution LBP features, the classifier achieved an average area under the ROC curve, Az, of 0.956 during testing. In addition, the proposed LBP features outperform other state-of-the-art features designed for false-positive reduction.
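
    The basic 8-neighbour LBP code underlying such features can be sketched as follows; the paper's completed (sign/magnitude) and multiresolution variants build on this elementary operator, which is all the sketch shows:

        import numpy as np

        def lbp_8(img):
            """Classic 8-neighbour LBP code for each interior pixel."""
            c = img[1:-1, 1:-1]
            code = np.zeros_like(c, dtype=np.uint8)
            shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                      (1, 1), (1, 0), (1, -1), (0, -1)]
            for bit, (dy, dx) in enumerate(shifts):
                nb = img[1 + dy:img.shape[0] - 1 + dy,
                         1 + dx:img.shape[1] - 1 + dx]
                code |= (nb >= c).astype(np.uint8) << bit   # one bit per neighbour
            return code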

  7. A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression.

    PubMed

    Guo, Chenlei; Zhang, Liming

    2010-01-01

    Salient areas in natural scenes are generally regarded as the areas on which the human eye will typically focus, and finding these areas is the key step in object detection. In computer vision, many models have been proposed to simulate the behavior of eyes, such as the SaliencyToolBox (STB) and the Neuromorphic Vision Toolkit (NVT), but they demand a high computational cost and obtaining useful results mostly relies on the choice of parameters. Although some region-based approaches have been proposed to reduce the computational complexity of feature maps, these approaches still cannot work in real time. Recently, a simple and fast approach called spectral residual (SR) was proposed, which uses the SR of the amplitude spectrum to calculate the image's saliency map. However, in our previous work, we pointed out that it is the phase spectrum, not the amplitude spectrum, of an image's Fourier transform that is key to calculating the location of salient areas, and proposed the phase spectrum of Fourier transform (PFT) model. In this paper, we present a quaternion representation of an image composed of intensity, color, and motion features. Based on the principle of PFT, a novel multiresolution spatiotemporal saliency detection model called phase spectrum of quaternion Fourier transform (PQFT) is proposed to calculate the spatiotemporal saliency map of an image from its quaternion representation. Distinct from other models, the added motion dimension allows the phase spectrum to represent spatiotemporal saliency, so that attention selection can be performed not only for images but also for videos. In addition, the PQFT model can compute the saliency map of an image at various resolutions from coarse to fine. Therefore, the hierarchical selectivity (HS) framework based on the PQFT model is introduced here to construct a tree structure representation of an image. With the help of HS, a model called multiresolution wavelet domain foveation (MWDF) is
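
    The PFT step that PQFT generalizes is compact enough to sketch directly: keep only the phase of the Fourier spectrum and smooth the squared magnitude of the reconstruction. The Gaussian smoothing width is an assumed parameter:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def pft_saliency(gray_img, sigma=3.0):
            F = np.fft.fft2(gray_img)
            phase_only = F / (np.abs(F) + 1e-12)      # discard amplitude, keep phase
            recon = np.fft.ifft2(phase_only)
            return gaussian_filter(np.abs(recon) ** 2, sigma)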

  8. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

    NASA Astrophysics Data System (ADS)

    Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

    2016-06-01

    This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in the context of variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse ~1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to ~0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model ('hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree also with PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistent with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent

  9. Multiresolution iterative reconstruction in high-resolution extremity cone-beam CT

    NASA Astrophysics Data System (ADS)

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2016-10-01

    Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution penalized-weighted least squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids, together with selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling corresponds to a reconstruction more than five times faster than a brute force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of

  10. Sociopolitical Analyses.

    ERIC Educational Resources Information Center

    Van Galen, Jane, Ed.; And Others

    1992-01-01

    This theme issue of the serial "Educational Foundations" contains four articles devoted to the topic of "Sociopolitical Analyses." In "An Interview with Peter L. McLaren," Mary Leach presented the views of Peter L. McLaren on topics of local and national discourses, values, and the politics of difference. Landon E. Beyer's "Educational Studies and…

  11. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech.

    NASA Astrophysics Data System (ADS)

    Campo, D.; Quintero, O. L.; Bastidas, M.

    2016-04-01

    We propose a study of the mathematical properties of voice as an audio signal, including signals for which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), decomposing the initial audio signal into sets of coefficients from which features were extracted and analyzed statistically in order to differentiate emotional states. Artificial neural networks (ANNs) proved to allow an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract the emotional content of audio signals, yielding a high accuracy rate in the classification of emotional states without the need for classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.
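
    A minimal sketch of this style of feature extraction, assuming the PyWavelets package and a mono signal x (the function name and parameter choices are ours, not the authors'):

        import numpy as np
        import pywt

        def wavelet_features(x, wavelet="db6", level=5):
            """Mean and standard deviation of every DWT subband."""
            coeffs = pywt.wavedec(x, wavelet, level=level)  # [cA_n, cD_n, ..., cD_1]
            return np.array([s for c in coeffs for s in (np.mean(c), np.std(c))])

        # Example: one second of a synthetic decaying tone at 16 kHz
        fs = 16000
        t = np.linspace(0, 1, fs, endpoint=False)
        x = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)
        print(wavelet_features(x))    # 12 features for a 5-level decomposition

    In the study above, such per-band statistics would then be fed to an ANN classifier.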

  14. Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Study.

    PubMed

    Liu, Z; Blasch, E; Xue, Z; Zhao, J; Laganiere, R; Wu, W

    2012-01-01

    Comparison of image processing techniques is critically important in deciding which algorithm, method, or metric to use for enhanced image assessment. Image fusion is a popular choice for various image enhancement applications such as overlay of two image products, refinement of image resolutions for alignment, and image combination for feature extraction and target recognition. Since image fusion is used in many geospatial and night vision applications, it is important to understand these techniques and provide a comparative study of the methods. In this paper, we conduct a comparative study of 12 selected image fusion metrics over six multiresolution image fusion algorithms for two different fusion schemes and input images with distortion. The analysis can be applied to different image combination algorithms, image processing methods, and different choices of metrics that are of use to an image processing expert. The paper relates the results to an image quality measurement based on power spectrum and correlation analysis and serves as a summary of many contemporary techniques for objective assessment of image fusion algorithms.

  15. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis.

    PubMed

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called Multiresolution Monogenic Signal Analysis (MMSA) is applied to GPR images. The process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image; the amplitude component enhances the target reflection and suppresses the direct wave and reflected wave to a large extent. Second, a region-of-interest extraction method separates genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. Third, a Hough transform is applied within the restricted area to find the apexes of the targets. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146
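
    The region-of-interest step can be sketched as a sliding-window variance normalized by the global variance; the window size, threshold, and synthetic B-scan below are our own assumptions, not the paper's settings.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def roi_mask(amplitude, win=9, thresh=1.5):
            """Flag pixels whose local variance exceeds `thresh` x global variance."""
            mean = uniform_filter(amplitude, win)
            mean_sq = uniform_filter(amplitude ** 2, win)
            local_var = mean_sq - mean ** 2
            return local_var / amplitude.var() > thresh

        rng = np.random.default_rng(1)
        bscan = rng.normal(0, 0.1, (128, 256))   # background clutter
        bscan[60:70, 100:140] += 1.0             # stand-in for a target reflection
        mask = roi_mask(np.abs(bscan))
        print(int(mask.sum()), "pixels flagged as candidate target region")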

  16. A method of image multi-resolution processing based on FPGA + DSP architecture

    NASA Astrophysics Data System (ADS)

    Peng, Xiaohan; Zhong, Sheng; Lu, Hongqiang

    2015-10-01

    In real-time image processing, as the resolution and frame rate of camera imaging improve, not only the required processing capacity but also the need to optimize the processing pipeline increases. For an FPGA + DSP image processing architecture, there are three common ways to meet this challenge. The first is to use a higher-performance DSP, for example one with a higher core frequency or more cores. The second is to optimize the processing method so that the algorithm achieves the same results in less time. Last but not least, pre-processing in the FPGA can make the image processing more efficient. A method of multi-resolution pre-processing in the FPGA, based on an FPGA + DSP architecture, is proposed here. It uses built-in first-in-first-out (FIFO) buffers and external synchronous dynamic random access memory (SDRAM) to buffer the images coming from the image detector, and provides down-sampled or cut-down images to the DSP flexibly and efficiently according to request parameters sent by the DSP. The DSP thus receives a reduced image instead of the whole image, greatly shortening both processing and transmission time. The method alleviates the DSP's image processing burden and overcomes the limitation that a single fixed scheme of image resolution reduction cannot meet the requirements of the DSP's processing tasks.
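
    The FPGA's down-sampling and cut-down operations can be modelled in a few lines of software; the toy Python model below (ours, not the paper's HDL) shows 2x2 pixel binning and ROI cropping of the kind the DSP might request.

        import numpy as np

        def bin2x2(frame):
            """Average non-overlapping 2x2 blocks, halving each dimension."""
            h, w = frame.shape
            return frame[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

        def crop(frame, y0, x0, height, width):
            """Cut-down image: return the region of interest requested by the DSP."""
            return frame[y0:y0 + height, x0:x0 + width]

        frame = np.arange(16, dtype=float).reshape(4, 4)
        print(bin2x2(frame))            # down-sampled image
        print(crop(frame, 1, 1, 2, 2))  # cut-down image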

  17. A multiformalism and multiresolution modelling environment: application to the cardiovascular system and its regulation

    PubMed Central

    Hernández, Alfredo I.; Le Rolle, Virginie; Defontaine, Antoine; Carrault, Guy

    2009-01-01

    The role of modelling and simulation in the systemic analysis of living systems is now clearly established. Emerging disciplines, such as Systems Biology, and world-wide research actions, such as the Physiome project or the Virtual Physiological Human, are based on an intensive use of modelling and simulation methodologies and tools. One of the key aspects in this context is to perform an efficient integration of various models representing different biological or physiological functions, at different resolutions, spanning through different scales. This paper presents a multi-formalism modelling and simulation environment (M2SL) that has been conceived to ease model integration. A given model is represented as a set of coupled and atomic model components that may be based on different mathematical formalisms with heterogeneous structural and dynamical properties. A co-simulation approach is used to solve these hybrid systems. The pioneering model of the overall regulation of the cardiovascular system, proposed by Guyton, Coleman and Granger in 1972, has been implemented under M2SL, and a pulsatile ventricular model based on a time-varying elastance has been integrated in a multi-resolution approach. Simulations reproducing physiological conditions and using different coupling methods show the benefits of the proposed environment. PMID:19884187
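
    The integrated pulsatile ventricle can be illustrated with the classic time-varying elastance relation P(t) = E(t) * (V(t) - V0). The sketch below uses our own elastance shape and parameter values, not those of M2SL.

        import numpy as np

        def elastance(t, e_min=0.06, e_max=2.5, t_sys=0.3, period=0.8):
            """Simple elastance (mmHg/ml): half-sine in systole, e_min in diastole."""
            tc = t % period
            if tc < t_sys:
                return e_min + (e_max - e_min) * np.sin(np.pi * tc / t_sys)
            return e_min

        V0 = 10.0    # ml, unstressed volume (assumed)
        V = 120.0    # ml, constant volume (isovolumic beat) for illustration
        for t in np.linspace(0.0, 0.8, 9):
            P = elastance(t) * (V - V0)
            print(f"t = {t:.1f} s  P = {P:6.1f} mmHg")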

  18. A multiformalism and multiresolution modelling environment: application to the cardiovascular system and its regulation.

    PubMed

    Hernández, Alfredo I; Le Rolle, Virginie; Defontaine, Antoine; Carrault, Guy

    2009-12-13

    The role of modelling and simulation in the systemic analysis of living systems is now clearly established. Emerging disciplines, such as systems biology, and worldwide research actions, such as the Physiome Project or the Virtual Physiological Human, are based on an intensive use of modelling and simulation methodologies and tools. One of the key aspects in this context is to perform an efficient integration of various models representing different biological or physiological functions, at different resolutions, spanning through different scales. This paper presents a multiformalism modelling and simulation environment (M2SL) that has been conceived to ease model integration. A given model is represented as a set of coupled and atomic model components that may be based on different mathematical formalisms with heterogeneous structural and dynamical properties. A co-simulation approach is used to solve these hybrid systems. The pioneering model of the overall regulation of the cardiovascular system proposed by Guyton and co-workers in 1972 has been implemented under M2SL and a pulsatile ventricular model based on a time-varying elastance has been integrated in a multi-resolution approach. Simulations reproducing physiological conditions and using different coupling methods show the benefits of the proposed environment.

  19. Combination of geodetic measurements by means of a multi-resolution representation

    NASA Astrophysics Data System (ADS)

    Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.

    2010-12-01

    Recent and in particular current satellite gravity missions provide important contributions to global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model, in terms of spherical harmonics, has the disadvantages that small spatial details are difficult to represent and that data gaps cannot be handled appropriately. Adequate modeling using a multi-resolution representation (MRP) is necessary in order to exploit the full information content of all the measurements mentioned. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs the decomposition into frequency-dependent detail signals by applying a pyramidal algorithm, and allows for data compression and filtering, i.e. data manipulation. Since the different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it seems reasonable to calculate the detail signals of the lower levels mainly from satellite data, those of the medium levels mainly from airborne data, and those of the higher levels mainly from terrestrial data. A concept is presented for how these different measurement types can be combined within the MRP. In this presentation the basic principles, strategies and concepts for the generation of MRPs are shown. Examples of regional gravity field determination are presented.
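
    In one dimension, the level-assignment idea reduces to assembling a wavelet pyramid whose coarse levels come from one data source and whose fine levels come from another. The PyWavelets sketch below is our drastic 1D simplification; the geodetic MRP operates on the sphere with its own basis functions.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 512)
        truth = np.sin(2 * np.pi * 3 * x) + 0.2 * np.sin(2 * np.pi * 40 * x)

        satellite = truth + rng.normal(0, 0.30, x.size)    # reliable at long wavelengths
        terrestrial = truth + rng.normal(0, 0.02, x.size)  # reliable at short wavelengths

        cs = pywt.wavedec(satellite, "db4", level=5)
        ct = pywt.wavedec(terrestrial, "db4", level=5)

        # Approximation and coarse detail levels from satellite data,
        # fine detail levels from terrestrial data.
        combined = cs[:3] + ct[3:]
        recon = pywt.waverec(combined, "db4")
        print(f"mean abs error of combined signal: {np.abs(recon - truth).mean():.3f}")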

  20. Interactive, Internet Delivery of Scientific Visualization viaStructured, Prerendered Multiresolution Imagery

    SciTech Connect

    Chen, Jerry; Yoon, Ilmi; Bethel, E. Wes

    2005-04-20

    We present a novel approach for highly interactive remote delivery of visualization results. Instead of rendering in real time across the internet, our approach, inspired by QuickTime VR's Object Movie concept, delivers pre-rendered images corresponding to different viewpoints and different time steps to provide the experience of 3D and temporal navigation. We use tiled, multiresolution image streaming to consume minimal bandwidth while providing the maximum resolution that a user can perceive from a given viewpoint. Since image data, a viewpoint and time stamps are the only required inputs, our approach is generally applicable to all visualization and graphics rendering applications capable of generating image files in an ordered fashion. Our design is a form of latency-tolerant remote visualization, where visualization and rendering time is effectively decoupled from interactive exploration. Our approach trades unconstrained exploration for increased interactivity, flexible resolution (for individual clients), and, from the server's perspective, reduced load and effective reuse of coherent frames between multiple users. A normal web server provides on-demand images to the remote client application, which uses client-pull to obtain and cache only those images required to fulfill the interaction needs. This paper presents an architectural description of the system along with a performance characterization of each stage of the production, delivery and viewing pipeline.
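
    On the client side, the scheme reduces to keyed tile fetches backed by a local cache. The sketch below invents a URL layout purely for illustration; the actual tile naming is not specified in the record.

        import urllib.request

        class TileCache:
            """Client-pull cache of pre-rendered tiles keyed by view, time, level, tile."""

            def __init__(self, base_url):
                self.base_url = base_url
                self.cache = {}

            def get_tile(self, view, t, level, row, col):
                key = (view, t, level, row, col)
                if key not in self.cache:
                    # Hypothetical layout: one image file per (viewpoint, timestep,
                    # resolution level, tile); only tiles needed now are requested.
                    url = f"{self.base_url}/v{view}/t{t}/L{level}/{row}_{col}.jpg"
                    with urllib.request.urlopen(url) as resp:
                        self.cache[key] = resp.read()
                return self.cache[key]

        # cache = TileCache("http://example.org/tiles")        # hypothetical server
        # tile = cache.get_tile(view=12, t=3, level=2, row=0, col=1)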

  1. Multiresolution analysis over graphs for a motor imagery based online BCI game.

    PubMed

    Asensio-Cubero, Javier; Gan, John Q; Palaniappan, Ramaswamy

    2016-01-01

    Multiresolution analysis (MRA) over graph representations of EEG data has proved to be a promising method for offline brain-computer interfacing (BCI) data analysis. For the first time, we aim to prove the feasibility of the graph lifting transform in an online BCI system. Instead of developing a pointer device or wheelchair controller as a test bed for human-machine interaction, we designed and developed an engaging game that can be controlled by means of imaginary limb movements. We also propose some modifications to the existing MRA analysis over graphs for BCI, such as the use of common spatial patterns for feature extraction at the different levels of decomposition, and sequential floating forward search as a best-basis selection technique. In the online game experiment we obtained an average three-class classification rate of 63.0% for fourteen naive subjects. The application of a best-basis selection method significantly decreases the computing resources needed. The present study allows us to further understand and assess the benefits of tailored wavelet analysis for processing motor imagery data and contributes to the further development of BCI for gaming purposes.
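
    The common-spatial-patterns step reduces, for a two-class case (multi-class BCI typically applies it pairwise), to a generalized eigenvalue problem on class-averaged covariance matrices. A compact sketch with synthetic trials (toy data, not the study's EEG):

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=2):
            """CSP spatial filters from two lists of (channels x samples) trials."""
            cov = lambda trials: sum(t @ t.T / np.trace(t @ t.T) for t in trials) / len(trials)
            Ca, Cb = cov(trials_a), cov(trials_b)
            vals, vecs = eigh(Ca, Ca + Cb)        # generalized eigendecomposition
            order = np.argsort(vals)
            picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # extreme eigenvalues
            return vecs[:, picks].T

        rng = np.random.default_rng(0)
        a = [rng.normal(size=(8, 256)) * np.linspace(1, 2, 8)[:, None] for _ in range(20)]
        b = [rng.normal(size=(8, 256)) * np.linspace(2, 1, 8)[:, None] for _ in range(20)]
        W = csp_filters(a, b)
        features = np.log(np.var(W @ a[0], axis=1))   # per-trial log-variance features
        print(features)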

  2. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis

    PubMed Central

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called Multiresolution Monogenic Signal Analysis (MMSA) is applied to GPR images. The process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image; the amplitude component enhances the target reflection and suppresses the direct wave and reflected wave to a large extent. Second, a region-of-interest extraction method separates genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. Third, a Hough transform is applied within the restricted area to find the apexes of the targets. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146

  3. Comparison of multiresolution features for texture classification of carotid atherosclerosis from B-mode ultrasound.

    PubMed

    Tsiaparas, Nikolaos N; Golemati, Spyretta; Andreadis, Ioannis; Stoitsis, John S; Valavanis, Ioannis; Nikita, Konstantina S

    2011-01-01

    In this paper, a multiresolution approach is suggested for the texture classification of atherosclerotic tissue from B-mode ultrasound. Four decomposition schemes, namely the discrete wavelet transform, the stationary wavelet transform, wavelet packets (WP), and the Gabor transform (GT), as well as several basis functions, were investigated in terms of their ability to discriminate between symptomatic and asymptomatic cases. The mean and standard deviation of the detail subimages produced by each decomposition scheme were used as texture features. Feature selection included 1) ranking the features in terms of their divergence values and 2) appropriate thresholding by a nonlinear correlation coefficient. The selected features were subsequently input into two classifiers, using support vector machines (SVM) and probabilistic neural networks. WP analysis with the coiflet-1 basis produced the highest overall classification performance (90% for diastole and 75% for systole) using SVM. This might reflect WP's ability to reveal differences in different frequency bands and, therefore, to characterize the atheromatous tissue efficiently. An interesting finding was that the dominant texture features exhibited horizontal directionality, suggesting that texture analysis may be affected by biomechanical factors (plaque strains).

  4. A three-channel miniaturized optical system for multi-resolution imaging

    NASA Astrophysics Data System (ADS)

    Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Thienpont, Hugo

    2013-09-01

    Inspired by the natural compound eyes of insects, multichannel imaging systems embrace many channels that together cover the entire Field-Of-View (FOV). Our aim in this work was to build multi-resolution capability into a multichannel imaging system by giving the available channels different imaging properties (focal length, angular resolution). We designed a three-channel imaging system in which the first and third channels have the highest and lowest angular resolutions of 0.0096° and 0.078°, and the narrowest and widest FOVs of 7° and 80°, respectively. The channels were designed for a single wavelength of 587.6 nm using CODE V. Each of the three channels consists of 4 aspherical lens surfaces and an absorbing baffle that avoids crosstalk between neighbouring channels. The aspherical lens surfaces were fabricated in PMMA by ultra-precision diamond tooling, and the baffles by metal additive manufacturing. The profiles of the fabricated lens surfaces were measured with an accurate multi-sensor coordinate measuring machine and compared with the corresponding profiles of the designed lens surfaces. The fabricated lens profiles were then incorporated into CODE V to model the three channels realistically and to compare their performance with that of the nominal design. We can conclude that the performance of the two latter models is in good agreement.

  5. Mine detection using model-trained multiresolution neural networks and variational methods

    NASA Astrophysics Data System (ADS)

    Szymczak, William G.; Guo, Weiming

    1999-08-01

    Even under ideal conditions, side-scan sonar (SSS) images of targets can vary greatly depending on target range and orientation, even if the target geometries are identical. This complicates target classification algorithms, since typically only a small sample of targets is available for training purposes. This under-representation of targets can cause missed classifications and a higher false alarm ratio in the presence of clutter. The problem is addressed by using a priori information about the targets, as well as about the imaging system, embedded in a model for simulating target images. These simulated target images can be added to the training set for a more complete target representation. Another important aspect of this research is the use of multiple channels extracted from the images using a multi-resolution wavelet decomposition. This multi-resolution analysis first provides an efficient detection strategy, by filtering the images over the lower-resolution channels. Furthermore, providing target features at different scales improves the performance of the neural network classifier. The dependence of the classifier on local image enhancement provided by total variation minimization and Mumford-Shah segmentation is also studied.

  6. Multi-resolution analysis of high density spatial and temporal cloud inhomogeneity fields from HOPE campaign

    NASA Astrophysics Data System (ADS)

    Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Macke, Andreas

    2015-04-01

    Clouds are the most complex structures, in both spatial and temporal scales, in the Earth's atmosphere; they affect the downward surface-reaching fluxes and thus contribute a large uncertainty to the global radiation budget. Within the framework of the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)²) Observational Prototype Experiment (HOPE), a high-density network of 99 pyranometer stations was set up around Jülich, Germany (~10 × 12 km² area) from April to July 2013 to capture the small-scale variability of cloud-induced radiation fields at the surface. In this study, we perform a multi-resolution analysis of the downward solar irradiance variability at the surface from the pyranometer network to investigate how the temporal and spatial averaging scales affect the variance and spatial correlation for different cloud regimes. Preliminary results indicate that the correlation is strongly scale-dependent, whereas the variance depends on the length of the averaging period. Our findings will be useful for quantifying the effect of spatial collocation when validating satellite-inferred solar irradiance estimates, and for exploring the link between cloud structure and radiation. We will present the details of our analysis and results.
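
    The dependence of variance on averaging length can be probed directly by block-averaging each irradiance series over increasing windows. A sketch with a synthetic 1 Hz series standing in for pyranometer data (parameters are ours):

        import numpy as np

        def variance_by_scale(series, scales):
            """Variance of the series after block-averaging over each window length."""
            out = {}
            for w in scales:
                n = series.size // w
                out[w] = series[:n * w].reshape(n, w).mean(axis=1).var()
            return out

        rng = np.random.default_rng(0)
        t = np.arange(3600)                            # one hour at 1 Hz
        clouds = np.cumsum(rng.normal(0, 1, t.size))   # red-noise cloud stand-in
        irradiance = 600 + 50 * clouds / np.abs(clouds).max()
        for w, v in variance_by_scale(irradiance, [1, 10, 60, 300]).items():
            print(f"averaging window {w:4d} s -> variance {v:8.1f} W^2 m^-4")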

  7. Multi-resolutional brain network filtering and analysis via wavelets on non-Euclidean space.

    PubMed

    Kim, Won Hwa; Adluru, Nagesh; Chung, Moo K; Charchut, Sylvia; GadElkarim, Johnson J; Altshuler, Lori; Moody, Teena; Kumar, Anand; Singh, Vikas; Leow, Alex D

    2013-01-01

    Advances in resting state fMRI and diffusion weighted imaging (DWI) have led to much interest in studies that evaluate hypotheses focused on how brain connectivity networks vary across clinically disparate groups. However, various sources of error (e.g., tractography errors, magnetic field distortion, and motion artifacts) leak into the data and make downstream statistical analysis problematic. In small sample size studies, such noise has the unfortunate effect that the differential signal may not be identifiable, so the null hypothesis cannot be rejected. Traditionally, smoothing is often used to filter out noise, but convolution with a Gaussian kernel is not well understood on arbitrarily connected graphs. Furthermore, there are no direct analogues of scale-space theory for graphs, i.e., ones that allow viewing the signal at multiple resolutions. We provide rigorous frameworks for performing 'multi-resolutional' analysis on brain connectivity graphs, based on the recent theory of non-Euclidean wavelets. We provide strong evidence, on brain connectivity data from a network analysis study (structural connectivity differences in adult euthymic bipolar subjects), that the proposed algorithm identifies statistically significant network variations, which are clinically meaningful, where classical statistical tests, if applied directly, fail.
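
    Such non-Euclidean wavelets are typically built spectrally: a band-pass kernel g(s*lambda) is applied to the eigenvalues of the graph Laplacian. A condensed sketch in which the kernel choice and the tiny path graph are our own:

        import numpy as np

        def graph_wavelet(signal, adjacency, scale):
            """Wavelet filtering of a graph signal via the Laplacian spectrum."""
            deg = np.diag(adjacency.sum(axis=1))
            lam, U = np.linalg.eigh(deg - adjacency)     # graph Laplacian eigenpairs
            g = scale * lam * np.exp(1 - scale * lam)    # simple band-pass kernel
            return U @ (g * (U.T @ signal))              # filter in the spectral domain

        # 5-node path graph and a signal concentrated at its center
        A = np.zeros((5, 5))
        for i in range(4):
            A[i, i + 1] = A[i + 1, i] = 1.0
        f = np.array([0.0, 0.1, 1.0, 0.1, 0.0])
        for s in (0.5, 1.0, 2.0):                        # three resolutions
            print(s, np.round(graph_wavelet(f, A, s), 3))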

  8. Multiresolution image registration in digital x-ray angiography with intensity variation modeling.

    PubMed

    Nejati, Mansour; Pourghassem, Hossein

    2014-02-01

    Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motion, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for registration of digital X-ray angiography images, particularly for the coronary region, is proposed. This algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local searches in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework, which allows us to capture both large- and small-scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction. PMID:24469684
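
    The heart of the per-block model is a translation search combined with a linear intensity map (contrast gain c and brightness offset b). The toy numpy version below uses an exhaustive integer-shift search on a single block; the paper's differential multiscale solver is considerably more elaborate, and np.roll's wraparound at the borders is acceptable only for this toy.

        import numpy as np

        def match_block(ref, live, max_shift=5):
            """Best integer shift (dy, dx) and gain/bias (c, b) minimizing
            || c * shifted(ref) + b - live ||^2 over a small search window."""
            best = (np.inf, (0, 0), (1.0, 0.0))
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    shifted = np.roll(ref, (dy, dx), axis=(0, 1))
                    A = np.stack([shifted.ravel(), np.ones(shifted.size)], axis=1)
                    sol, *_ = np.linalg.lstsq(A, live.ravel(), rcond=None)
                    err = np.sum((A @ sol - live.ravel()) ** 2)
                    if err < best[0]:
                        best = (err, (dy, dx), tuple(sol))
            return best

        rng = np.random.default_rng(0)
        ref = rng.random((32, 32))
        live = 1.2 * np.roll(ref, 2, axis=0) + 0.1   # shifted, contrast/brightness change
        err, shift, (c, b) = match_block(ref, live)
        print(shift, round(c, 2), round(b, 2))       # -> (2, 0) 1.2 0.1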

  9. Fully automated analysis of multi-resolution four-channel micro-array genotyping data

    NASA Astrophysics Data System (ADS)

    Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.

    2006-03-01

    We present a fully automated and robust microarray image analysis system for handling multi-resolution images (down to 3 microns, with sizes up to 80 MB per channel). The system was developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, owing to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining the genotypes of multiple genetic markers in individuals. It plays an important role in the trend toward replacing traditional medical treatments with personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around time compatible with clinical decision-making. In this paper we develop a fully automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.

  10. Automatic multiresolution age-related macular degeneration detection from fundus images

    NASA Astrophysics Data System (ADS)

    Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly; therefore early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow early detection. Most automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using multiresolution texture analysis. We analyze the texture at several scales using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described by the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality, captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.

  11. Multiresolution Modeling of Polymer Solutions: Wavelet-Based Coarse-Graining and Reverse-Mapping

    NASA Astrophysics Data System (ADS)

    Ismail, Ahmed; Adorf, Carl Simon; Agarwal, Animesh; Iacovella, Christopher R.

    2014-03-01

    Unlike multiscale methods, which encompass multiple simulation techniques, multiresolution models use one modeling technique at different length and time scales. We present a combined coarse-graining and reverse-mapping framework for modeling semidilute polymer solutions, based on the wavelet-accelerated Monte Carlo (WAMC) method, which forms a hierarchy of resolutions to model polymers at length scales that cannot be reached via atomistic or even "standard" coarse-grained simulations. A universal scaling function is obtained so that potentials do not need to be recomputed as the scale of the system changes. We show that coarse-grained polymer solutions can reproduce results obtained from simulations of the more detailed atomistic system to a reasonable degree of accuracy. Reverse mapping proceeds similarly: using probability distributions obtained from coarse-graining the bond lengths, angles, torsions, and non-bonded potentials, we can reconstruct a more detailed polymer consistent with both geometric constraints and energetic considerations. Using a "convergence factor" within a Monte Carlo-based energy optimization scheme, we can successfully reconstruct entire atomistic configurations from coarse-grained descriptions.
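
    The coarse-graining direction can be pictured as repeated Haar-style pair averaging of bead coordinates along the chain; the toy sketch below shows only that geometric step (the real WAMC method also builds effective potentials at each level).

        import numpy as np

        def coarse_grain(positions):
            """One wavelet-style coarsening level: average consecutive bead pairs."""
            n = positions.shape[0] // 2 * 2
            return positions[:n].reshape(-1, 2, 3).mean(axis=1)

        rng = np.random.default_rng(0)
        chain = np.cumsum(rng.normal(0, 1, (64, 3)), axis=0)  # random-walk polymer
        levels = [chain]
        while levels[-1].shape[0] > 4:
            levels.append(coarse_grain(levels[-1]))
        for k, beads in enumerate(levels):
            r_ee = np.linalg.norm(beads[-1] - beads[0])
            print(f"level {k}: {beads.shape[0]:3d} beads, end-to-end distance {r_ee:.2f}")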

  12. Current limiters

    SciTech Connect

    Loescher, D.H.; Noren, K.

    1996-09-01

    The current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiters, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiters, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiters were also analyzed. It was found that there is no best type of limiter that should be used in all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of over current protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high current circuits, but it has a number of single point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.

  13. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed, essentially fourth-order or higher, low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect nonsmooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize parameter tuning and problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions, and they can be used to switch off the extra numerical dissipation completely outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods, leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these

  14. Novel multiresolution mammographic density segmentation using pseudo 3D features and adaptive cluster merging

    NASA Astrophysics Data System (ADS)

    He, Wenda; Juette, Arne; Denton, Erica R. E.; Zwiggelaar, Reyer

    2015-03-01

    Breast cancer is the most frequently diagnosed cancer in women. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective ways to overcome the disease. Successful mammographic density segmentation is a key aspect in deriving correct tissue composition, ensuring an accurate mammographic risk assessment. However, mammographic densities have not yet been fully incorporated into non-image-based risk prediction models (e.g. the Gail and the Tyrer-Cuzick models) because of unreliable segmentation consistency and accuracy. This paper presents a novel multiresolution mammographic density segmentation: a concept of stack representation is proposed, and 3D texture features are extracted by adapting techniques based on classic 2D first-order statistics. An unsupervised clustering technique is employed to achieve mammographic segmentation, in which two improvements are made: 1) consistent segmentation through an optimal centroid initialisation step, and 2) a significantly reduced number of missegmentations through an adaptive cluster merging technique. A set of full-field digital mammograms was used in the evaluation. Visual assessment indicated substantial improvement in segmented anatomical structures and tissue-specific areas, especially in low mammographic density categories. The developed method demonstrated an ability to improve the quality of mammographic segmentation via clustering, and the results indicated a 26% improvement in the number of segmented images of good quality when compared with the standard clustering approach. This in turn can be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.

  15. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models.

    PubMed

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-06-01

    Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the
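
    One of the simplest segmentation accuracy metrics of this kind is the areal overlap (Jaccard index) between a terrain segment and its reference drumlin; the small sketch below uses boolean raster masks with toy footprints, not the study's data.

        import numpy as np

        def jaccard(segment, reference):
            """Areal overlap of two boolean masks: |A & B| / |A | B|."""
            inter = np.logical_and(segment, reference).sum()
            union = np.logical_or(segment, reference).sum()
            return inter / union if union else 0.0

        ref = np.zeros((100, 100), bool)
        ref[30:60, 40:80] = True      # reference drumlin footprint
        seg = np.zeros((100, 100), bool)
        seg[35:65, 45:85] = True      # best-matching terrain segment
        print(f"Jaccard overlap: {jaccard(seg, ref):.2f}")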

  16. Quantum simulations of nuclei and nuclear pasta with the multiresolution adaptive numerical environment for scientific simulations

    NASA Astrophysics Data System (ADS)

    Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.

    2016-05-01

    Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of ¹⁶O, ²⁰⁸Pb, and ²³⁸U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm with an average density of about ρ = 0.05 fm⁻³, a proton fraction of Yp = 0.3, and a temperature of T = 0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.

  17. Retrieval of Precipitation Profiles from Multiresolution, Multifrequency, Active and Passive Microwave Observations

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Anagnostou, Emmanouil N.; Olson, William S.; Starr, David OC. (Technical Monitor)

    2002-01-01

    In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated using both simulated and airborne data. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) satellite multi-frequency active and passive observations. These observations are characterized by various spatial and sampling resolutions, which makes the retrieval problem mathematically more difficult and ill-determined, because the quality of information decreases with decreasing resolution. A model that, given reflectivity profiles and a small set of parameters (including the cloud water content, the drop size distribution intercept, and a variable describing the frozen hydrometeor properties), simulates high-resolution brightness temperatures is used. The high-resolution simulated brightness temperatures are convolved to the real sensor resolution. An optimal estimation procedure is used to minimize the differences between simulated and observed brightness temperatures. The retrieval technique is investigated using cloud-model synthetic data and airborne data from the Fourth Convection And Moisture Experiment. Simulated high-resolution brightness temperatures and reflectivities and airborne observations are convolved to the resolution of the TRMM instruments, and retrievals are performed and analyzed relative to the reference data used in the observation synthesis. An illustration of the possible use of the technique in satellite rainfall estimation is presented through an application to TRMM data. The study suggests improvements in combined active and passive retrievals even when the instrument resolutions are significantly different. Future work needs to better quantify the retrieval performance, especially in connection with satellite applications, and the uncertainty of the models used in the retrieval.
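
    The optimal estimation step amounts to minimizing the misfit between simulated and observed brightness temperatures over the DSD-related parameter. A one-parameter Gauss-Newton sketch with an invented forward model (both the model and the numbers are illustrative, not the paper's):

        import numpy as np

        def retrieve(y_obs, forward, x0, n_iter=10):
            """One-parameter least-squares retrieval via Gauss-Newton updates."""
            x = x0
            for _ in range(n_iter):
                y = forward(x)
                J = (forward(x + 1e-4) - y) / 1e-4   # numerical Jacobian dF/dx
                x += J @ (y_obs - y) / (J @ J)       # Gauss-Newton step
            return x

        # Invented forward model: two brightness temperatures from a DSD parameter x
        forward = lambda x: np.array([250.0 - 30.0 * np.tanh(x), 280.0 - 10.0 * x])
        y_obs = forward(0.7)                 # synthetic 'observations' with x_true = 0.7
        print(retrieve(y_obs, forward, x0=0.0))   # converges near 0.7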

  18. Retrieval of Precipitation Profiles from Multiresolution, Multifrequency, Active and Passive Microwave Observations

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Olson, William S.; Anagnostou, Emmanouil N.

    2003-01-01

    In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) observations and it is based on models that simulate high-resolution brightness temperatures as functions of observed reflectivity profiles and a parameter related to the rain drop-size-distribution. The modeled high-resolution brightness temperatures are used to determine normalized brightness temperature polarizations at the microwave radiometer resolution. An optimal estimation procedure is employed to minimize the differences between the simulated and observed normalized polarizations by adjusting the drop-size-distribution parameter. The impact of other unknowns that are not independent variables in the optimal estimation but affect the retrievals is minimized through statistical parameterizations derived from cloud model simulations. The retrieval technique is investigated using TRMM observations collected during the Kwajalein Experiment (KWAJEX). These observations cover an area extending from 5 deg to 12 deg N latitude and 166 deg to 172 deg E longitude from July to September 1999, and are coincident with various ground-based observations, facilitating a detailed analysis of the retrieved precipitation. Using the method developed in this study, precipitation estimates consistent with both the passive and active TRMM observations are obtained. Various parameters characterizing these estimates, i.e. the rain rate, the precipitation water content, the drop-size-distribution intercept, and the mass-weighted mean drop diameter, are in good qualitative agreement with independent experimental and theoretical estimates. Combined rain estimates are in general higher than the official TRMM Precipitation Radar (PR) only estimates for the area and the period considered in the study. Ground-based precipitation estimates

  19. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-06-01

    Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the

  20. Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators.

    PubMed

    Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo

    2016-07-21

    A multi-resolution multi-sensitivity (MRMS) collimator, offering an adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. We propose in this paper a new idea for MRMS design based, for the first time, on parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator's inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design obtain point response characteristics similar to those of parallel-hole collimators and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest-resolution state provides 6.9 mm full width at half maximum (FWHM) with a nearly minimal sensitivity of about 96.2 cps MBq⁻¹, while the lowest-resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq⁻¹. Further comparisons of the states on image quality are conducted through Monte Carlo simulation of a hot-spot phantom containing five hot spots of varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs with the larger-sensitivity states, while the smaller spots prefer the higher-resolution states. In conclusion, the proposed idea can be an effective approach to MRMS design for parallel-hole SPECT collimators. PMID:27359049
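
    The CNR figure used in the phantom comparison is straightforward to compute from spot and background regions; a minimal sketch on a synthetic image follows (one common CNR definition; conventions vary, and the counts are invented).

        import numpy as np

        def cnr(image, spot_mask, bg_mask):
            """CNR = (mean(spot) - mean(background)) / std(background)."""
            return (image[spot_mask].mean() - image[bg_mask].mean()) / image[bg_mask].std()

        rng = np.random.default_rng(0)
        img = rng.normal(100, 5, (64, 64))           # background counts
        yy, xx = np.mgrid[:64, :64]
        spot = (yy - 32) ** 2 + (xx - 32) ** 2 < 6 ** 2
        img[spot] += 20                              # hot spot
        print(f"CNR = {cnr(img, spot, ~spot):.1f}")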

  2. Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators

    NASA Astrophysics Data System (ADS)

    Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo

    2016-07-01

    A multi-resolution multi-sensitivity (MRMS) collimator, offering an adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. We propose in this paper a new idea for MRMS design based, for the first time, on parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator's inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design obtain point response characteristics similar to those of parallel-hole collimators and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest-resolution state provides 6.9 mm full width at half maximum (FWHM) with a nearly minimal sensitivity of about 96.2 cps MBq⁻¹, while the lowest-resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq⁻¹. Further comparisons of the states on image quality are conducted through Monte Carlo simulation of a hot-spot phantom containing five hot spots of varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs with the larger-sensitivity states, while the smaller spots prefer the higher-resolution states. In conclusion, the proposed idea can be an effective approach to MRMS design for parallel-hole SPECT collimators.

  4. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models

    PubMed Central

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-01-01

    Mapping or “delimiting” landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the

  5. Retrieval of Precipitation Profiles from Multiresolution, Multifrequency Active and Passive Microwave Observations.

    NASA Astrophysics Data System (ADS)

    Grecu, Mircea; Olson, William S.; Anagnostou, Emmanouil N.

    2004-04-01

    In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) observations, and it is based on models that simulate high-resolution brightness temperatures as functions of observed reflectivity profiles and a parameter related to the raindrop size distribution. The modeled high-resolution brightness temperatures are used to determine normalized brightness temperature polarizations at the microwave radiometer resolution. An optimal estimation procedure is employed to minimize the differences between the simulated and observed normalized polarizations by adjusting the drop size distribution parameter. The impact of other unknowns that are not independent variables in the optimal estimation, but affect the retrievals, is minimized through statistical parameterizations derived from cloud model simulations. The retrieval technique is investigated using TRMM observations collected during the Kwajalein Experiment (KWAJEX). These observations cover an area extending from 5° to 12°N latitude and from 166° to 172°E longitude from July to September 1999 and are coincident with various ground-based observations, facilitating a detailed analysis of the retrieved precipitation. Using the method developed in this study, precipitation estimates consistent with both the passive and active TRMM observations are obtained. Various parameters characterizing these estimates, that is, the rain rate, precipitation water content, drop size distribution intercept, and the mass- weighted mean drop diameter, are in good qualitative agreement with independent experimental and theoretical estimates. Combined rain estimates are, in general, higher than the official TRMM precipitation radar (PR)-only estimates for the area and the period considered in the study. Ground-based precipitation estimates, derived

  6. Flight assessment of a real time multi-resolution image fusion system for use in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Smith, M. I.; Sadler, J. R. E.

    2007-04-01

    Military helicopter operations are often constrained by environmental conditions, including low light levels and poor weather. Recent experience has also shown the difficulty presented by certain terrain when operating at low altitude by day and night: for example, poor pilot cues over featureless terrain with low scene contrast, together with obscuration of vision due to wind-blown and re-circulated dust at low level (brown-out). These sorts of conditions can result in loss of spatial awareness and precise control of the aircraft. Atmospheric obscurants such as fog, cloud, rain and snow can similarly lead to hazardous situations and reduced situational awareness. Day Night All Weather (DNAW) systems applied research sponsored by the UK Ministry of Defence (MoD) has developed a multi-resolution real-time Image Fusion system that has been flown as part of a wider flight trials programme investigating increased situational awareness. Dual-band multi-resolution adaptive image fusion was performed in real time using imagery from a Thermal Imager and a Low Light TV, both co-boresighted on a rotary-wing trials aircraft. A number of sorties were flown in a range of climatic and environmental conditions during both day and night. (Neutral density filters were used on the Low Light TV during daytime sorties.) This paper reports on the results of the flight trial evaluation and discusses the benefits offered by the use of Image Fusion in degraded visual environments.

  7. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  8. Multiresolution imaging of mantle reflectivity structure using SS and P'P' precursors

    NASA Astrophysics Data System (ADS)

    Schultz, Ryan; Gu, Yu J.

    2013-10-01

    Knowledge of the mantle reflectivity structure is highly dependent on our ability to efficiently extract, and properly interpret, small seismic arrivals. Among the various data types and techniques, long-period SS/PP precursors and high-frequency receiver functions are routinely utilized to increase the confidence of the recovered mantle stratifications at distinct spatial scales. However, low resolution and a complex Fresnel zone are glaring weaknesses of SS precursors, while over-reliance on receiver distribution is a formidable challenge for the analysis of converted waves from oceanic regions. A promising high-frequency alternative to receiver functions is P'P' precursors, which are capable of resolving mantle structures at vertical and lateral resolutions of ˜5 and ˜200 km, respectively, owing to their spectral content, shallow angle of incidence and near-symmetric Fresnel zones. This study presents a novel processing method for both SS (or PP) and P'P' precursors based on deconvolution, stacking, Radon transform and depth migration. A suite of synthetic tests is performed to quantify the fidelity and stability of this method under different data conditions. Our multiresolution survey of the mantle at targeted areas near the Nazca-South America subduction zone reveals both olivine- and garnet-related transitions at depths below 400 km. We attribute a depressed 660 to thermal variations, whereas compositional variations atop the upper-mantle transition zone are needed to explain the diminished or highly complex reflected/scattered signals from the 410 km discontinuity. We also observe prominent P'P' reflections within the transition zone, and the anomalous amplitudes near the plate boundary zone indicate a sharp (˜10 km thick) transition that likely resonates with the frequency content of P'P' precursors. The migration of SS precursors in this study shows no evidence of split 660 reflections, but potential majorite-ilmenite (590-640 km) and ilmenite

  9. Radar Image and Rain-gauge Alignment using the Multi-resolution Viscous Alignment (MVA) Algorithm

    NASA Astrophysics Data System (ADS)

    Chatdarong, V.

    2007-12-01

    Rainfall is a complex environmental variable that is difficult to describe either deterministically or statistically. To understand rainfall behavior, many types of instruments are employed to detect and collect rainfall information. Among them, radar seems to provide the most comprehensive rainfall measurement, at fine spatial and temporal resolution and over a relatively wide area. Nevertheless, it does not detect surface rainfall directly the way a rain gauge does. The accuracy of radar rainfall estimates therefore depends greatly on the Z-R relationship, which converts radar reflectivity (Z) to surface rain rate (R). This calibration is usually done by fitting the rain-gauge data with the corresponding radar reflectivity using regression analysis. To best fit the data, the radar reflectivity at neighboring pixels is usually used to match the rain-gauge data. However, when applying the Z-R relationship to the radar image, no position adjustment is made, whatever the calibration technique. Hence, it is desirable to adjust the position of the radar reflectivity images prior to applying the Z-R relationship, to improve the accuracy of the rainfall estimation. In this research, the Multi-resolution Viscous Alignment (MVA) algorithm is applied to align radar reflectivity images to rain-gauge data in order to improve rainfall estimation from the Z-R relationship. The MVA algorithm solves the motion estimation problem using a Bayesian formulation to minimize misfits between two data sets. In general, the problem is ill-posed; therefore, some regularizations and constraints based on smoothness and non-divergence assumptions are employed. This algorithm is superior to conventional and correlation-based techniques. It is fast, robust, easy to implement, and does not require data training. In addition, it can handle higher-order, missing-data, and small-scale deformations. The algorithm provides spatially dense, consistent, and smooth transition vectors. The
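
    The full MVA algorithm estimates a dense, regularized displacement field; as a simplified stand-in for its multi-resolution principle, the sketch below estimates a single translation by phase correlation, coarse-to-fine over an image pyramid. The function names and parameters are illustrative assumptions, not the MVA implementation.

    ```python
    import numpy as np
    from scipy import ndimage

    def phase_corr_shift(a, b):
        """Integer (row, col) shift to apply to b so that it best aligns with a."""
        F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(r), r.shape)
        if dy > a.shape[0] // 2: dy -= a.shape[0]   # unwrap to signed shifts
        if dx > a.shape[1] // 2: dx -= a.shape[1]
        return float(dy), float(dx)

    def coarse_to_fine_align(radar, gauge_map, levels=3):
        """Refine a translation estimate from the coarsest pyramid level down."""
        shift = np.zeros(2)
        for lev in reversed(range(levels)):
            s = 2 ** lev
            a = ndimage.zoom(gauge_map, 1.0 / s, order=1)
            b = ndimage.zoom(radar, 1.0 / s, order=1)
            b = ndimage.shift(b, shift / s, order=1)      # apply estimate so far
            dy, dx = phase_corr_shift(a, b)
            shift += np.array([dy, dx]) * s               # residual, rescaled
        return shift  # total (row, col) displacement for the radar image
    ```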

  10. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery.

    PubMed

    Belgiu, Mariana; Drǎguţ, Lucian

    2014-10-01

    Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea

  11. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery

    PubMed Central

    Belgiu, Mariana; Drǎguţ, Lucian

    2014-01-01

    Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea

  12. A new, multi-resolution bedrock elevation map of the Greenland ice sheet

    NASA Astrophysics Data System (ADS)

    Griggs, J. A.; Bamber, J. L.; Grisbed Consortium

    2010-12-01

    Gridded bedrock elevation for the Greenland ice sheet has previously been constructed with a 5 km posting. The true resolution of the data set was, in places, however, considerably coarser than this due to the across-track spacing of ice-penetrating radar transects. Errors were estimated to be on the order of a few percent in the centre of the ice sheet, increasing markedly in relative magnitude near the margins, where accurate thickness is particularly critical for numerical modelling and other applications. We use new airborne and satellite estimates of ice thickness and surface elevation to determine the bed topography for the whole of Greenland. This is a dynamic product, which will be updated frequently as new data, such as that from NASA's Operation IceBridge, become available. The University of Kansas has, in recent years, flown an airborne ice-penetrating radar system with close flightline spacing over several key outlet glacier systems. This allows us to produce a multi-resolution bedrock elevation dataset with the high spatial resolution needed for ice dynamic modelling over these key outlet glaciers and coarser resolution over the more sparsely sampled interior. Airborne ice thickness and elevation from CReSIS obtained between 1993 and 2009 are combined with JPL/UCI/Iowa data collected by WISE (Warm Ice Sounding Experiment) covering the marginal areas along the south-west coast from 2009. Data collected in the 1970s by the Technical University of Denmark were also used in interior areas with sparse coverage from other sources. Marginal elevation data from the ICESat laser altimeter and the Greenland Ice Mapping Program were used to help constrain the ice thickness and bed topography close to the ice sheet margin where, typically, the terrestrial observations have poor sampling between flight tracks. The GRISBed consortium currently consists of: W. Blake, S. Gogineni, A. Hoch, C. M. Laird, C. Leuschen, J. Meisel, J. Paden, J. Plummer, F

  13. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE

  14. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    SciTech Connect

    Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios; Skouroliakou, Aikaterini; Hazle, John D.; Kagadis, George C. E-mail: George.Kagadis@med.upatras.gr

    2014-07-15

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A new wavelet

  15. The Multi-Resolution Land Characteristics (MRLC) Consortium - 20 Years of Development and Integration of U.S. National Land Cover Data

    EPA Science Inventory

    The Multi-Resolution Land Characteristics (MRLC) Consortium is a good example of the national benefits of federal collaboration. It started in the mid-1990s as a small group of federal agencies with the straightforward goal of compiling a comprehensive national Landsat dataset t...

  16. Applicability of Multi-Seasonal X-Band SAR Imagery for Multiresolution Segmentation: a Case Study in a Riparian Mixed Forest

    NASA Astrophysics Data System (ADS)

    Dabiri, Z.; Hölbling, D.; Lang, S.; Bartsch, A.

    2015-12-01

    The increasing availability of synthetic aperture radar (SAR) data from a range of different sensors necessitates efficient methods for semi-automated information extraction at multiple spatial scales for different fields of application. The focus of the presented study is two-fold: 1) to evaluate the applicability of multi-temporal TerraSAR-X imagery for multiresolution segmentation, and 2) to identify suitable Scale Parameters through different weighting of the homogeneity criteria, mainly colour variance. Multiresolution segmentation was used for segmentation of multi-temporal TerraSAR-X imagery, and the ESP (Estimation of Scale Parameter) tool was used to identify suitable Scale Parameters for image segmentation. The validation of the segmentation results was performed using very high resolution WorldView-2 imagery and a reference map, which was created by an ecological expert. The results of multiresolution segmentation revealed that, in the context of object-based image analysis, TerraSAR-X images are suitable for generating optimal image objects. Furthermore, the ESP tool can be used as an indicator for estimating the Scale Parameter for multiresolution segmentation of TerraSAR-X imagery. Additionally, for more reliable results, this study suggests that the homogeneity criterion of colour, in a variance-based segmentation algorithm, needs to be set to high values. Setting the shape/colour criteria to 0.005/0.995 or 0.00/1 led to the best results and to the creation of adequate image objects.

  17. The planetary hydraulics analysis based on a multi-resolution stereo DTMs and LISFLOOD-FP model: Case study in Mars

    NASA Astrophysics Data System (ADS)

    Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.

    2013-12-01

    Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to Earth's, it became clear that some solid planets and satellites once had water flows or pseudo-hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activity, the valleys and channels on the martian surface were investigated by a number of remote sensing and in situ measurements. Among all available data sets, the stereo DTMs and orthoimages from various successful orbital sensors, such as the High Resolution Stereo Camera (HRSC), Context Camera (CTX), and High Resolution Imaging Science Experiment (HiRISE), are the most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and ortho images over fluvial areas has some limitations, so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus in this study we tested the application of hydraulics analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct a high-accuracy simulation together with 150-1.2 m DTMs over test sites including Athabasca and Bahram Valles. For application to the martian surface, the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2, and Manning's n value (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP was superior to other research cases using a single DTM source for hydraulics analysis. HRSC DTMs, covering 50-150 m resolutions, were used to trace rough
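
    The abstract confirms the reduced gravity value but not the exact friction rescaling, so the sketch below assumes the common convention that Manning's n scales with the square root of the gravity ratio when the friction factor is held fixed; treat both the convention and the n value as illustrative assumptions.

    ```python
    import math

    G_EARTH, G_MARS = 9.81, 3.71   # m s^-2; the martian value quoted in the abstract

    def manning_n_for_mars(n_earth):
        # Manning's n relates to the Chezy coefficient as C = R^(1/6) / n, with
        # C proportional to sqrt(g / f); keeping the friction factor f fixed
        # while changing g implies n_mars = n_earth * sqrt(g_earth / g_mars).
        return n_earth * math.sqrt(G_EARTH / G_MARS)

    n_earth = 0.03   # an illustrative bare-channel roughness for Earth
    print(f"Mars-equivalent Manning n: {manning_n_for_mars(n_earth):.4f}")
    ```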

  18. A new multiresolution method applied to the 3D reconstruction of small bodies

    NASA Astrophysics Data System (ADS)

    Capanna, C.; Jorda, L.; Lamy, P. L.; Gesquiere, G.

    2012-12-01

    The knowledge of the three-dimensional (3D) shape of small solar system bodies, such as asteroids and comets, is essential in determining their global physical properties (volume, density, rotational parameters). It also allows performing geomorphological studies of their surface through the characterization of topographic features, such as craters, faults, landslides, grooves, and hills. In the case of small bodies, the shape is often only constrained by images obtained by interplanetary spacecraft. Several techniques are available to retrieve 3D global shapes from these images. Stereography, which relies on control points, has been extensively used in the past, most recently to reconstruct the nucleus of comet 9P/Tempel 1 [Thomas (2007)]. The most accurate methods are however photogrammetry and photoclinometry, often used in conjunction with stereography. Stereophotogrammetry (SPG) has been used to reconstruct the shapes of the nucleus of comet 19P/Borrelly [Oberst (2004)] and of the asteroid (21) Lutetia [Preusker (2012)]. Stereophotoclinometry (SPC) has allowed retrieving an accurate shape of the asteroids (25143) Itokawa [Gaskell (2008)] and (2867) Steins [Jorda (2012)]. We present a new photoclinometry method based on the deformation of a 3D triangular mesh [Capanna (2012)] using a multi-resolution scheme which starts from a sphere of 300 facets and yields a shape model with 100,000 facets. Our strategy is inspired by the "Full Multigrid" method [Botsch (2007)] and consists in alternating between two resolutions in order to obtain an optimized shape model at a given resolution before moving to the higher resolution. In order to improve the robustness of our method, we use a set of control points obtained by stereography. Our method has been tested on images acquired by the OSIRIS visible camera, aboard the Rosetta spacecraft of the European Space Agency, during the fly-by of asteroid (21) Lutetia in July 2010. We present the corresponding 3D shape

  19. Multi-Resolution Analysis of LiDAR data for Characterizing a Stabilized Aeolian Landscape in South Texas

    NASA Astrophysics Data System (ADS)

    Barrineau, C. P.; Dobreva, I. D.; Bishop, M. P.; Houser, C.

    2014-12-01

    Aeolian systems are ideal natural laboratories for examining self-organization in patterned landscapes, as certain wind regimes generate certain morphologies. Topographic information and scale-dependent analysis offer the opportunity to study such systems and characterize process-form relationships. A statistically based methodology for differentiating aeolian features would enable the quantitative association of certain surface characteristics with certain morphodynamic regimes. We conducted a multi-resolution analysis of LiDAR elevation data to assess scale-dependent morphometric variations in an aeolian landscape in South Texas. For each pixel, mean elevation values are calculated along concentric circles moving outward at 100-meter intervals (i.e. 500 m, 600 m, 700 m from the pixel). The average elevation values, plotted as curves against distance from the pixel of interest, are used to differentiate multi-scalar variations in elevation across the landscape. In this case, it is hypothesized that these curves may be used to quantitatively differentiate certain morphometries from others, much as a spectral signature may be used to separate paved surfaces from natural vegetation, for example. After generating multi-resolution curves for all the pixels in a selected area of interest (AOI), a Principal Components Analysis is used to highlight commonalities and singularities among the curves generated from pixels across the AOI. Our findings suggest that the resulting components could be used for identification of discrete aeolian features like open sands, trailing ridges and active dune crests, and, in particular, zones of deflation. This new approach to landscape characterization not only works to mitigate bias introduced when researchers must select training pixels for morphometric investigations, but can also reveal patterning in aeolian landscapes that would not be as obvious without quantitative characterization.
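
    A minimal numpy sketch of the curve construction and PCA step just described, using a synthetic stand-in DEM and arbitrary pixel positions and ring radii; this is not the authors' code or data.

    ```python
    import numpy as np

    def ring_mean_curves(dem, radii_px, pixels):
        """For each pixel, average elevation over concentric annuli of
        increasing radius; one curve (row) per pixel."""
        ny, nx = dem.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        curves = []
        for (py, px) in pixels:
            r = np.hypot(yy - py, xx - px)
            curves.append([dem[(r >= r0 - 0.5) & (r < r0 + 0.5)].mean()
                           for r0 in radii_px])
        return np.asarray(curves)

    rng = np.random.default_rng(0)
    dem = rng.normal(10.0, 1.0, size=(200, 200))   # stand-in for LiDAR elevations
    radii = np.arange(5, 60, 10)                   # e.g. 100 m steps in pixel units
    X = ring_mean_curves(dem, radii, [(50, 50), (50, 150), (150, 100)])

    # PCA via SVD on the mean-centred curves; rows of Vt are the principal
    # curve shapes used to separate feature types across the AOI.
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    print(Vt[0])
    ```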

  20. A multi-resolution strategy for a multi-objective deformable image registration framework that accommodates large anatomical differences

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Bosman, Peter A. N.; Sonke, Jan-Jakob; Bel, Arjan

    2014-03-01

    Currently, two major challenges dominate the field of deformable image registration. The first challenge is related to the tuning of the developed methods to specific problems (i.e. how to best combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g. large deformations, (dis)appearing structures) that occurred between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e. two "non-fixed" grids: one for the source image and one for the target image) to accommodate large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy. This is, however, only well-established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e. Pareto-dominating) outcomes.

  1. A Multi-resolution, Multi-epoch Low Radio Frequency Survey of the Kepler K2 Mission Campaign 1 Field

    NASA Astrophysics Data System (ADS)

    Tingay, S. J.; Hancock, P. J.; Wayth, R. B.; Intema, H.; Jagannathan, P.; Mooley, K.

    2016-10-01

    We present the first dedicated radio continuum survey of a Kepler K2 mission field, Field 1, covering the North Galactic Cap. The survey is wide field, contemporaneous, multi-epoch, and multi-resolution in nature and was conducted at low radio frequencies between 140 and 200 MHz. The multi-epoch and ultra wide field (but relatively low resolution) part of the survey was provided by 15 nights of observation using the Murchison Widefield Array (MWA) over a period of approximately a month, contemporaneous with K2 observations of the field. The multi-resolution aspect of the survey was provided by the low resolution (4‧) MWA imaging, complemented by non-contemporaneous but much higher resolution (20″) observations using the Giant Metrewave Radio Telescope (GMRT). The survey is, therefore, sensitive to the details of radio structures across a wide range of angular scales. Consistent with other recent low radio frequency surveys, no significant radio transients or variables were detected in the survey. The resulting source catalogs consist of 1085 and 1468 detections in the two MWA observation bands (centered at 154 and 185 MHz, respectively) and 7445 detections in the GMRT observation band (centered at 148 MHz), over 314 square degrees. The survey is presented as a significant resource for multi-wavelength investigations of the more than 21,000 target objects in the K2 field. We briefly examine our survey data against K2 target lists for dwarf star types (stellar types M and L) that have been known to produce radio flares.

  2. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    PubMed Central

    Wang, Kun-Ching

    2015-01-01

    The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture property of the multi-resolution spectrogram of emotional speech should be a good feature set for emotion classification in speech. Furthermore, multi-resolution analysis of texture can give a clearer discrimination between emotions than uniform-resolution analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied in the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features can also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification for real-life emotional recognition in speech. PMID:25594590
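
    The pipeline sketched below mirrors the abstract's idea at a high level: treat the log-spectrogram as an image and pool texture energy per wavelet sub-band across several resolutions. The wavelet sub-band energies here are a simplified stand-in for the paper's MRTII descriptors, and all parameters are illustrative.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import spectrogram

    def multires_texture_features(signal, fs, levels=3):
        f, t, S = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
        img = np.log1p(S)                              # log-compressed spectrogram image
        coeffs = pywt.wavedec2(img, "db2", level=levels)
        feats = [np.mean(coeffs[0] ** 2)]              # approximation-band energy
        for detail in coeffs[1:]:                      # (cH, cV, cD) per resolution level
            feats.extend(np.mean(d ** 2) for d in detail)
        return np.asarray(feats)

    fs = 16000
    x = np.random.default_rng(1).standard_normal(fs)   # stand-in for a speech segment
    print(multires_texture_features(x, fs))
    ```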

  3. Application of sub-image multiresolution analysis of Ground-penetrating radar data in a study of shallow structures

    NASA Astrophysics Data System (ADS)

    Jeng, Yih; Lin, Chun-Hung; Li, Yi-Wei; Chen, Chih-Sung; Yu, Hung-Ming

    2011-03-01

    Fourier-based algorithms originally developed for the processing of seismic data are routinely applied in Ground-penetrating radar (GPR) data processing, but these conventional methods may result in an abundance of spurious harmonics without any geological meaning. We propose a new approach in this study based essentially on multiresolution wavelet analysis (MRA) for GPR noise suppression. A 2D GPR section is similar to an image in all respects if each data point of the section is considered an image pixel. This technique is an image analysis with sub-image decomposition. We start from the basic image decomposition procedure using a conventional MRA approach and establish the filter bank accordingly. With reasonable knowledge of the data and noise and a basic assumption about the target, it is possible to determine the components with high S/N ratio and eliminate noisy components. The MRA procedure is performed further for the components containing both signal and noise. We treat the selected component as an original image and apply the MRA procedure again to that single component with a mother wavelet of higher resolution. This recursive procedure with finer input allows us to extract features or noise events from GPR data more effectively than conventional processing. To assess the performance of the MRA filtering method, we first test it on a simple synthetic model and then on experimental data acquired from a control site using a 400 MHz GPR system. A comparison of results from our method and from conventional filtering techniques demonstrates the effectiveness of the sub-image MRA method, particularly in removing ringing noise and scattering events. A field study was carried out in a trenched fault zone, where a faulting structure is present at shallow depth, to assess the feasibility of improving the data S/N ratio by applying the sub-image multiresolution analysis. In contrast to the conventional
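
    A minimal sketch of the recursive sub-image decomposition just described, using PyWavelets: decompose the section, re-decompose one selected component with a different mother wavelet, suppress its noisiest sub-component, and reconstruct. The wavelets, the choice of the horizontal-detail band as the "noisy" component, and the zeroing rule are all illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np
    import pywt

    def suppress_subband(section, first_wavelet="db4", second_wavelet="sym8"):
        cA, (cH, cV, cD) = pywt.dwt2(section, first_wavelet)
        # Treat the horizontal-detail sub-image as the ringing-contaminated
        # component and run a second MRA pass on it alone.
        cA2, (cH2, cV2, cD2) = pywt.dwt2(cH, second_wavelet)
        cD2[:] = 0.0                                     # drop its noisiest sub-component
        cH_clean = pywt.idwt2((cA2, (cH2, cV2, cD2)), second_wavelet)
        cH_clean = cH_clean[:cH.shape[0], :cH.shape[1]]  # trim any padding
        return pywt.idwt2((cA, (cH_clean, cV, cD)), first_wavelet)

    gpr = np.random.default_rng(2).standard_normal((256, 512))  # stand-in section
    denoised = suppress_subband(gpr)
    print(denoised.shape)
    ```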

  4. Evaluation of the Global Multi-Resolution Terrain Elevation Data 2010 (GMTED2010) using ICESat geodetic control

    USGS Publications Warehouse

    Carabajal, C.C.; Harding, D.J.; Boy, J.-P.; Danielson, J.J.; Gesch, D.B.; Suchdeo, V.P.

    2011-01-01

    Supported by NASA's Earth Surface and Interior (ESI) Program, we are producing a global set of Ground Control Points (GCPs) derived from the Ice, Cloud and land Elevation Satellite (ICESat) altimetry data. From February of 2003 to October of 2009, ICESat obtained nearly global measurements of land topography (±86° latitudes) with unprecedented accuracy, sampling the Earth's surface at discrete ~50 m diameter laser footprints spaced 170 m along the altimetry profiles. We apply stringent editing to select the highest quality elevations, and use these GCPs to characterize and quantify spatially varying elevation biases in Digital Elevation Models (DEMs). In this paper, we present an evaluation of the soon to be released Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). Elevation biases and error statistics have been analyzed as a function of land cover and relief. The GMTED2010 products are a large improvement over previous sources of elevation data at comparable resolutions. RMSEs for all products and terrain conditions are below 7 m and typically are about 4 m. The GMTED2010 products are biased upward with respect to the ICESat GCPs on average by approximately 3 m. © 2011 Society of Photo-Optical Instrumentation Engineers (SPIE).

  5. Computerized mappings of the cerebral cortex: a multiresolution flattening method and a surface-based coordinate system

    NASA Technical Reports Server (NTRS)

    Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.

    1996-01-01

    We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.

  6. Evaluation of the Global Multi-Resolution Terrain Elevation Data 2010 (GMTED2010) Using ICESat Geodetic Control

    NASA Technical Reports Server (NTRS)

    Carabajal, Claudia C.; Harding, David J.; Boy, Jean-Paul; Danielson, Jeffrey J.; Gesch, Dean B.; Suchdeo, Vijay P.

    2011-01-01

    Supported by NASA's Earth Surface and Interior (ESI) Program, we are producing a global set of Ground Control Points (GCPs) derived from the Ice, Cloud and land Elevation Satellite (ICESat) altimetry data. From February of 2003 to October of 2009, ICESat obtained nearly global measurements of land topography (±86° latitudes) with unprecedented accuracy, sampling the Earth's surface at discrete ~50 m diameter laser footprints spaced 170 m along the altimetry profiles. We apply stringent editing to select the highest quality elevations, and use these GCPs to characterize and quantify spatially varying elevation biases in Digital Elevation Models (DEMs). In this paper, we present an evaluation of the soon to be released Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). Elevation biases and error statistics have been analyzed as a function of land cover and relief. The GMTED2010 products are a large improvement over previous sources of elevation data at comparable resolutions. RMSEs for all products and terrain conditions are below 7 m and typically are about 4 m. The GMTED2010 products are biased upward with respect to the ICESat GCPs on average by approximately 3 m.

  7. The Multi-Resolution Land Characteristics (MRLC) Consortium: 20 years of development and integration of USA national land cover data

    USGS Publications Warehouse

    Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coulston, John

    2014-01-01

    The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans.

  8. A multi-resolution filtered-x LMS algorithm based on discrete wavelet transform for active noise control

    NASA Astrophysics Data System (ADS)

    Qiu, Z.; Lee, C.-M.; Xu, Z. H.; Sui, L. N.

    2016-01-01

    We have developed a new active control algorithm based on the discrete wavelet transform (DWT) for both stationary and non-stationary noise control. First, the Mallat pyramidal algorithm is introduced to implement the DWT, which can decompose the reference signal into several sub-bands with multi-resolution and provides a perfect reconstruction (PR) procedure. To reduce the extra computational complexity introduced by the DWT, an efficient strategy is proposed that updates the adaptive filter coefficients in the frequency domain using a fast Fourier transform (FFT). A 'Haar' wavelet is employed on the reference noise source and, by decomposing the noise signal into two levels (three sub-bands), the proposed DWT-FFT-based FXLMS (DWT-FFT-FXLMS) algorithm achieves greatly reduced complexity and better convergence performance than a time-domain filtered-x least mean square (TD-FXLMS) algorithm. As a result of the outstanding time-frequency characteristics of wavelet analysis, the proposed DWT-FFT-FXLMS algorithm can effectively cancel both stationary and non-stationary noise, whereas the frequency-domain FXLMS (FD-FXLMS) algorithm cannot.
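
    For reference, a minimal time-domain FXLMS loop (the TD-FXLMS baseline the abstract compares against) is sketched below, assuming a perfectly known secondary path; the paper's DWT-FFT-FXLMS variant would additionally split the reference into wavelet sub-bands and update in the frequency domain. The signals and path models are toy assumptions.

    ```python
    import numpy as np

    def fxlms(ref, d, s_hat, n_taps=64, mu=5e-3):
        """Filtered-x LMS: adapt w so that the anti-noise, after passing through
        the secondary path, cancels the disturbance d at the error microphone."""
        w = np.zeros(n_taps)                        # adaptive control filter
        xs = np.convolve(ref, s_hat)[: len(ref)]    # filtered-x reference
        y_hist = np.zeros(len(s_hat))               # recent anti-noise, newest first
        e = np.zeros(len(ref))
        for n in range(n_taps, len(ref)):
            y = w @ ref[n - n_taps + 1 : n + 1][::-1]          # anti-noise sample
            y_hist = np.roll(y_hist, 1); y_hist[0] = y
            e[n] = d[n] - s_hat @ y_hist                       # residual at error mic
            w += mu * e[n] * xs[n - n_taps + 1 : n + 1][::-1]  # LMS update
        return e

    rng = np.random.default_rng(4)
    x = rng.standard_normal(4000)                   # reference noise
    s = np.array([0.0, 0.5, 0.25])                  # toy secondary-path response
    d = np.convolve(x, [0.9, 0.4, 0.2])[: len(x)]   # disturbance via a toy primary path
    e = fxlms(x, d, s)
    print("residual power early vs late:", np.mean(e[64:1000] ** 2), np.mean(e[-1000:] ** 2))
    ```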

  9. Multi-resolutional shape features via non-Euclidean wavelets: Applications to statistical analysis of cortical thickness

    PubMed Central

    Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Hinrichs, Chris; Pachauri, Deepti; Okonkwo, Ozioma C.; Johnson, Sterling C.

    2014-01-01

    Statistical analysis on arbitrary surface meshes such as the cortical surface is an important approach to understanding brain diseases such as Alzheimer's disease (AD). Surface analysis may be able to identify specific cortical patterns that relate to certain disease characteristics or exhibit differences between groups. Our goal in this paper is to make group analysis of signals on surfaces more sensitive. To do this, we derive multi-scale shape descriptors that characterize the signal around each mesh vertex, i.e., its local context, at varying levels of resolution. In order to define such a shape descriptor, we make use of recent results from harmonic analysis that extend traditional continuous wavelet theory from the Euclidean to a non-Euclidean setting (i.e., a graph, mesh or network). Using this descriptor, we conduct experiments on two different datasets, the Alzheimer's Disease NeuroImaging Initiative (ADNI) data and images acquired at the Wisconsin Alzheimer's Disease Research Center (W-ADRC), focusing on individuals labeled as having Alzheimer's disease (AD), mild cognitive impairment (MCI) and healthy controls. In particular, we contrast traditional univariate methods with our multi-resolution approach, which shows increased sensitivity and improved statistical power to detect group-level effects. We also provide an open source implementation. PMID:24614060
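
    The descriptor construction can be sketched in a few lines for a toy mesh: expand the vertex signal in the graph Laplacian eigenbasis and weight it with a band-pass kernel at several scales. The kernel choice and the toy chain graph are illustrative assumptions; practical implementations of such graph wavelets typically use fast Chebyshev approximations rather than a full eigendecomposition.

    ```python
    import numpy as np

    def graph_wavelet_descriptor(W, signal, scales=(1.0, 2.5, 5.0)):
        """W: symmetric adjacency matrix; signal: one value per vertex
        (e.g., cortical thickness). Returns (n_vertices, n_scales)."""
        L = np.diag(W.sum(axis=1)) - W                 # combinatorial Laplacian
        lam, U = np.linalg.eigh(L)                     # spectrum of the mesh graph
        f_hat = U.T @ signal                           # graph Fourier transform
        g = lambda x: x * np.exp(-x)                   # a simple band-pass kernel
        coeffs = [U @ (g(s * lam) * f_hat) for s in scales]
        return np.stack(coeffs, axis=1)

    # Toy 1D chain "mesh" with 6 vertices
    W = np.zeros((6, 6))
    for i in range(5):
        W[i, i + 1] = W[i + 1, i] = 1.0
    thickness = np.array([2.1, 2.3, 2.2, 3.0, 3.1, 2.9])
    print(graph_wavelet_descriptor(W, thickness))
    ```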

  10. Classification of glioblastoma and metastasis for neuropathology intraoperative diagnosis: a multi-resolution textural approach to model the background

    NASA Astrophysics Data System (ADS)

    Ahmad Fauzi, Mohammad Faizal; Gokozan, Hamza Numan; Elder, Brad; Puduvalli, Vinay K.; Otero, Jose J.; Gurcan, Metin N.

    2014-03-01

    Brain cancer surgery requires intraoperative consultation by neuropathology to guide surgical decisions regarding the extent to which the tumor undergoes gross total resection. In this context, the differential diagnosis between glioblastoma and metastatic cancer is challenging, as the decision must be made during surgery in a short time-frame (typically 30 minutes). We propose a method to classify glioblastoma versus metastatic cancer based on extracting textural features from the non-nuclei region of cytologic preparations. For glioblastoma, these regions of interest are filled with glial processes between the nuclei, which appear as anisotropic thin linear structures. For metastasis, these regions have a more homogeneous appearance, so suitable texture features can be extracted to distinguish between the two tissue types. In our work, we use Discrete Wavelet Frames to characterize the underlying texture, owing to their multi-resolution capability in modeling texture. The textural characterization is carried out primarily in the non-nuclei regions, after nuclei regions are segmented by adapting our visually meaningful decomposition segmentation algorithm to this problem. A k-nearest neighbor method was then used to classify the features into the glioblastoma or metastasis class. Experiments on 53 images (29 glioblastomas and 24 metastases) resulted in average accuracies as high as 89.7% for glioblastoma, 87.5% for metastasis, and 88.7% overall. Further studies are underway to incorporate nuclei-region features into the classification on an expanded dataset, as well as to extend the classification to more types of cancer.
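
    A sketch of this pipeline under stated assumptions: an undecimated ("stationary") wavelet transform stands in for Discrete Wavelet Frames, sub-band energies serve as texture features, and a k-NN classifier separates synthetic anisotropic (streaky, glial-process-like) patches from smoother ones. The data, wavelet, and parameters are all illustrative.

    ```python
    import numpy as np
    import pywt
    from sklearn.neighbors import KNeighborsClassifier

    def dwf_features(patch, level=2):
        """Energy per detail sub-band of a stationary 2D wavelet transform."""
        feats = []
        for cA, (cH, cV, cD) in pywt.swt2(patch, "db2", level=level):
            feats.extend(np.mean(b ** 2) for b in (cH, cV, cD))
        return feats

    rng = np.random.default_rng(3)
    X, y = [], []
    for _ in range(40):
        streaky = rng.standard_normal((64, 64)); streaky[::4, :] += 2.0  # linear texture
        smooth = rng.standard_normal((64, 64)) * 0.3                     # homogeneous
        X += [dwf_features(streaky), dwf_features(smooth)]
        y += [1, 0]

    clf = KNeighborsClassifier(n_neighbors=5).fit(X[:60], y[:60])
    print("held-out accuracy:", clf.score(X[60:], y[60:]))
    ```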

  11. Analyses to improve operational flexibility

    SciTech Connect

    Trikouros, N.G.

    1986-01-01

    Operational flexibility is greatly enhanced if the technical bases for plant limits and design margins are fully understood, and the analyses necessary to evaluate the effect of plant modifications or changes in operating modes on these parameters can be performed as required. If a condition should arise that might jeopardize a plant limit or reduce operational flexibility, it would be necessary to understand the basis for the limit or the specific condition limiting operational flexibility and be capable of performing a reanalysis to either demonstrate that the limit will not be violated or to change the limit. This paper provides examples of GPU Nuclear efforts in this regard. Examples of Oyster Creek and Three Mile Island operating experiences are discussed.

  12. Possibilities and limitations of electrothermal atomization in atomic absorption spectrometry for the direct analysis of heavy metals in sea water

    NASA Astrophysics Data System (ADS)

    Hoenig, M.; Wollast, R.

    This work shows the analytical possibilities of an electrothermal atomizer for the direct determination of trace metals in sea water. The high background signals generated by the matrix particularly perturb volatile elements because of the low decomposition temperature allowed. In the case of cadmium, addition of ascorbic acid to the sample permits modification of the atomization mechanism and reduction of the optimum temperature. Under these conditions, the absorption peak of cadmium precedes the background absorption, and consequently the analysis is no longer limited by the magnitude of the matrix signal: the determination of cadmium concentrations far below the μg l-1 level is easily possible. Although the direct determination of the other elements should in principle be less disturbed by the background, the analytical performance is poorer than for cadmium. Limits of determination of the order of 0.1 to 1 μg l-1 can be reached for chromium, copper and manganese. Lead and nickel appeared to be the most difficult elements; their direct determination is only possible in polluted coastal or estuarine waters. The injection of the sample as an aerosol into a hot graphite tube proved to be well adapted to this kind of investigation. The simultaneous visualization of specific and background signals allows interpretations which were until now impossible with commercially available apparatus.

  13. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    DOE PAGES

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; Harrison, Robert J.

    2015-02-25

    We present a multiresolution analysis (MRA) approach to fully numerical time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.

  14. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    SciTech Connect

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; Harrison, Robert J.

    2015-02-25

    We present a multiresolution analysis (MRA) approach to fully numerical time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.

  15. Hierarchical progressive surveys. Multi-resolution HEALPix data structures for astronomical images, catalogues, and 3-dimensional data cubes

    NASA Astrophysics Data System (ADS)

    Fernique, P.; Allen, M. G.; Boch, T.; Oberto, A.; Pineau, F.-X.; Durand, D.; Bot, C.; Cambrésy, L.; Derriere, S.; Genova, F.; Bonnarel, F.

    2015-06-01

    Context. Scientific exploitation of the ever-increasing volumes of astronomical data requires efficient and practical methods for data access, visualisation, and analysis. Hierarchical sky tessellation techniques enable a multi-resolution approach to organising data on angular scales from the full sky down to the individual image pixels. Aims: We aim to show that the hierarchical progressive survey (HiPS) scheme for describing astronomical images, source catalogues, and three-dimensional data cubes is a practical solution to managing large volumes of heterogeneous data and that it enables a new level of scientific interoperability across large collections of data of these different data types. Methods: HiPS uses the HEALPix tessellation of the sphere to define a hierarchical tile and pixel structure to describe and organise astronomical data. HiPS is designed to conserve the scientific properties of the data alongside both visualisation considerations and emphasis on the ease of implementation. We describe the development of HiPS to manage a large number of diverse image surveys, as well as the extension of hierarchical image systems to cube and catalogue data. We demonstrate the interoperability of HiPS and multi-order coverage (MOC) maps and highlight the HiPS mechanism to provide links to the original data. Results: Hierarchical progressive surveys have been generated by various data centres and groups for ~200 data collections including many wide-area sky surveys, and archives of pointed observations. These can be accessed and visualised in Aladin, Aladin Lite, and other applications. HiPS provides a basis for further innovations in the use of hierarchical data structures to facilitate the description and statistical analysis of large astronomical data sets.
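
    As an illustration of the tile organisation, the sketch below resolves the path of a single HiPS tile from its HEALPix order and pixel index, following the Norder/Dir/Npix directory layout of the HiPS scheme; the base URL is hypothetical.

    ```python
    # Sketch: build the relative path of a HiPS tile from its HEALPix order and
    # pixel index; directories group tiles in blocks of 10000 (Dir = Npix//10000*10000).
    def hips_tile_path(order: int, ipix: int, ext: str = "fits") -> str:
        ndir = (ipix // 10000) * 10000
        return f"Norder{order}/Dir{ndir}/Npix{ipix}.{ext}"

    base = "https://example.org/survey-hips"  # hypothetical survey base URL
    print(f"{base}/{hips_tile_path(7, 48180)}")  # .../Norder7/Dir40000/Npix48180.fits
    ```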

  16. Detecting hidden spatial and spatio-temporal structures in glasses and complex physical systems by multiresolution network clustering.

    PubMed

    Ronhovde, P; Chakrabarty, S; Hu, D; Sahu, M; Sahu, K K; Kelton, K F; Mauro, N A; Nussinov, Z

    2011-09-01

    We elaborate on a general method that we recently introduced for characterizing the "natural" structures in complex physical systems via multi-scale network analysis. The method is based on "community detection" wherein interacting particles are partitioned into an "ideal gas" of optimally decoupled groups of particles. Specifically, we construct a set of network representations ("replicas") of the physical system based on interatomic potentials and apply a multiscale clustering ("multiresolution community detection") analysis using information-based correlations among the replicas. Replicas may i) be different representations of an identical static system, ii) embody dynamics by considering replicas to be time separated snapshots of the system (with a tunable time separation), or iii) encode general correlations when different replicas correspond to different representations of the entire history of the system as it evolves in space-time. Inputs for our method are the inter-particle potentials or experimentally measured two (or higher order) particle correlations. We apply our method to computer simulations of a binary Kob-Andersen Lennard-Jones system in a mixture ratio of A(80)B(20), a ternary model system with components "A", "B", and "C" in ratios of A(88)B(7)C(5) (as in Al(88)Y(7)Fe(5)), and to atomic coordinates in a Zr(80)Pt(20) system as gleaned by reverse Monte Carlo analysis of experimentally determined structure factors. We identify the dominant structures (disjoint or overlapping) and general length scales by analyzing extrema of the information theory measures. We speculate on possible links between i) physical transitions or crossovers and ii) changes in structures found by this method as well as phase transitions associated with the computational complexity of the community detection problem. We also briefly consider continuum approaches and discuss rigidity and the shear penetration depth in amorphous systems; this latter length scale increases as
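
    The information-based correlations among replicas can be illustrated with a minimal sketch: normalized mutual information between two candidate partitions of the same particles (the paper's replica construction and resolution sweep are considerably richer).

    ```python
    # Sketch: normalized mutual information (NMI) between two community
    # partitions, given as label vectors; assumes each partition has >= 2 groups.
    import numpy as np

    def nmi(a, b):
        a, b = np.asarray(a), np.asarray(b)
        pa = np.array([np.mean(a == x) for x in np.unique(a)])
        pb = np.array([np.mean(b == y) for y in np.unique(b)])
        ha, hb = -np.sum(pa * np.log(pa)), -np.sum(pb * np.log(pb))
        mi = 0.0
        for x in np.unique(a):
            for y in np.unique(b):
                pxy = np.mean((a == x) & (b == y))
                if pxy > 0:
                    mi += pxy * np.log(pxy / (np.mean(a == x) * np.mean(b == y)))
        return mi / np.sqrt(ha * hb)

    print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: identical up to relabeling
    ```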

  17. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic

    USGS Publications Warehouse

    Chavez, P.S.; Sides, S.C.; Anderson, J.A.

    1991-01-01

    The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thematic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results. The three methods used to merge the information contents of the Landsat TM and SPOT panchromatic data were the Hue-Intensity-Saturation (HIS), Principal Component Analysis (PCA), and High-Pass Filter (HPF) procedures. The HIS method distorted the spectral characteristics of the data the most. The HPF method distorted the spectral characteristics the least; the distortions were minimal and difficult to detect.
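
    A minimal sketch of the HPF idea follows, assuming co-registered arrays whose shapes match after resampling; the kernel size and unit weighting are illustrative choices, not the parameters of the paper.

    ```python
    # Sketch: high-pass filter (HPF) merge of a multispectral band with a
    # higher-resolution panchromatic band.
    import numpy as np
    from scipy.ndimage import uniform_filter, zoom

    def hpf_merge(ms_band, pan, scale=3, box=5):
        ms_up = zoom(ms_band, scale, order=1)        # MS resampled to the pan grid
        high = pan - uniform_filter(pan, size=box)   # high-frequency part of pan
        return ms_up + high                          # spectral content largely kept
    ```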

  18. Comparison of three different methods to merge multiresolution and multispectral data: LANDSAT TM and SPOT panchromatic

    SciTech Connect

    Chavez, P.S. Jr.; Sides, S.C.; Anderson, J.A.

    1990-06-01

    The merging of multisensor image data is becoming a more widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high spatial and high spectral resolution should not distort the spectral characteristics of the high spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thematic Mapper (TM) and SPOT (Systeme Probatoire d'Observation de la Terre) panchromatic data. The comparison is from a spectral characteristics point of view and is made using statistical, visual, and graphical analyses of the results. The three methods used to merge the information contents of the Landsat TM and SPOT panchromatic data were the hue-intensity-saturation (HIS), principal component analysis (PCA), and high pass filter (HPF) procedures. The popular HIS method distorted the spectral characteristics of the data the most. The HPF method distorted them the least. The distortions made by the HPF method were minimal and difficult to detect.

  19. The STOne Transform: Multi-Resolution Image Enhancement and Compressive Video.

    PubMed

    Goldstein, Tom; Xu, Lina; Kelly, Kevin F; Baraniuk, Richard

    2015-12-01

    Compressive sensing enables the reconstruction of high-resolution signals from under-sampled data. While compressive methods simplify data acquisition, they require the solution of difficult recovery problems to make use of the resulting measurements. This paper presents a new sensing framework that combines the advantages of both conventional and compressive sensing. Using the proposed sum-to-one transform, the measurements can be reconstructed instantly at the Nyquist rate at any power-of-two resolution. The same data can then be enhanced to higher resolutions using compressive methods that leverage sparsity to beat the Nyquist limit. The availability of a fast direct reconstruction enables the compressive measurements to be processed on small embedded devices. We demonstrate this by constructing a real-time compressive video camera.

  20. VizieR Online Data Catalog: Multi-resolution images of M33 (Boquien+, 2015)

    NASA Astrophysics Data System (ADS)

    Boquien, M.; Calzetti, D.; Aalto, S.; Boselli, A.; Braine, J.; Buat, V.; Combes, F.; Israel, F.; Kramer, C.; Lord, S.; Relano, M.; Rosolowsky, E.; Stacey, G.; Tabatabaei, F.; van der Tak, F.; van der Werf, P.; Verley, S.; Xilouris, M.

    2015-02-01

    The FITS file contains maps of the flux in star formation tracing bands, maps of the SFR, maps of the attenuation in star formation tracing bands, and a map of the stellar mass of M33, each from a resolution of 8"/pixel to 512"/pixel. The FUV GALEX data from NGS were obtained directly from the GALEX website through GALEXVIEW. The observation was carried out on 25 November 2003 for a total exposure time of 3334 s. Hα+[NII] observations were carried out in November 1995 on the Burrell Schmidt telescope at Kitt Peak National Observatory. The observations and the data processing are analysed in detail in Hoopes & Walterbos (2000ApJ...541..597H). The Spitzer IRAC 8 μm image sensitive to the emission of Polycyclic Aromatic Hydrocarbons (PAH) and the MIPS 24 μm image sensitive to the emission of Very Small Grains (VSG) were obtained from the NASA Extragalactic Database and have been analysed by Hinz et al. (2004ApJS..154..259H) and Verley et al. (2007A&A...476.1161V, Cat. J/A+A/476/1161). The PACS data at 70 μm and 100 μm, which are sensitive to the warm dust heated by massive stars, come from two different programmes. The 100 μm image was obtained in the context of the Herschel HerM33es open time key project (Kramer et al., 2010A&A...518L..67K, observation ID 1342189079 and 1342189080). The observation was carried out in parallel mode on 7 January 2010 for a duration of 6.3 h. It consisted of 2 orthogonal scans at a speed of 20"/s, with a leg length of 7'. The 70 μm image was obtained as a follow-up open time cycle 2 programme (OT2mboquien4, observation ID 1342247408 and 1342247409). M33 was scanned on 25 June 2012 at a speed of 20"/s in 2 orthogonal directions over 50' with 5 repetitions of this scheme in order to match the depth of the 100 μm image. The total duration of the observation was 9.9 h. The cube file, cube.fits, contains 16 extensions: * FUV * HALPHA * 8 * 24 * 70 * 100 * SFR_FUV * SFR_HALPHA * SFR_24 * SFR_70 * SFR_100 * SFRFUV24 * SFRHALPHA24 * A_FUV * A
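
    A short sketch of reading one named extension of the distributed cube with astropy (extension names as listed above):

    ```python
    # Sketch: access two of the 16 named extensions of cube.fits.
    from astropy.io import fits

    with fits.open("cube.fits") as hdul:
        fuv = hdul["FUV"].data          # FUV flux map
        sfr_fuv = hdul["SFR_FUV"].data  # SFR map derived from FUV
        print(fuv.shape, sfr_fuv.shape)
    ```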

  1. Land cover characterization and mapping of continental southeast Asia using multi-resolution satellite sensor data

    USGS Publications Warehouse

    Giri, Chandra; Defourny, P.; Shrestha, Surendra

    2003-01-01

    Land use/land cover change, particularly that of tropical deforestation and forest degradation, has been occurring at an unprecedented rate and scale in Southeast Asia. The rapid rate of economic development, demographics and poverty are believed to be the underlying forces responsible for the change. Accurate and up-to-date information to support the above statement is, however, not available. The available data, if any, are outdated and are not comparable for various technical reasons. Time series analysis of land cover change and the identification of the driving forces responsible for these changes are needed for the sustainable management of natural resources and also for projecting future land cover trajectories. We analysed the multi-temporal and multi-seasonal NOAA Advanced Very High Resolution Radiometer (AVHRR) satellite data of 1985/86 and 1992 to (1) prepare historical land cover maps and (2) identify areas undergoing major land cover transformations (called ‘hot spots’). The identified ‘hot spot’ areas were investigated in detail using high-resolution satellite sensor data such as Landsat and SPOT supplemented by intensive field surveys. Shifting cultivation, intensification of agricultural activities and change of cropping patterns, and conversion of forest to agricultural land were found to be the principal reasons for land use/land cover change in the Oudomxay province of Lao PDR, the Mekong Delta of Vietnam and the Loei province of Thailand, respectively. Moreover, typical land use/land cover change patterns of the ‘hot spot’ areas were also examined. In addition, we developed an operational methodology for land use/land cover change analysis at the national level with the help of national remote sensing institutions.

  2. Deconstructing a Polygenetic Landscape Using LiDAR and Multi-Resolution Analysis

    NASA Astrophysics Data System (ADS)

    Houser, C.; Barrineau, C. P.; Dobreva, I. D.; Bishop, M. P.

    2015-12-01

    In many earth surface systems characteristic morphologies are associated with various regimes both past and present. Aeolian systems contain a variety of features differentiated largely by morphometric differences, which in turn reflect age and divergent process regimes. Using quantitative analysis of high-resolution elevation data to generate detailed information regarding these characteristic morphometries enables geomorphologists to effectively map process regimes from a distance. Combined with satellite imagery and other types of remotely sensed data, the outputs can even help to delineate phases of activity within aeolian systems. The differentiation of regimes and identification of relict features together enables a greater level of rigor to analyses leading to field-based investigations, which are highly dependent on site-specific historical contexts that often obscure distinctions between separate process-form regimes. We present results from a Principal Components Analysis (PCA) performed on a LiDAR-derived elevation model of a largely stabilized aeolian system in South Texas. The resulting components are layered and classified to generate a map of aeolian morphometric signatures for a portion of the landscape. Several of these areas do not immediately appear to be aeolian in nature in satellite imagery or LiDAR-derived models, yet field observations and historical imagery reveal the PCA did in fact identify stabilized and relict dune features. This methodology enables researchers to generate a morphometric classification of the land surface. We believe this method is a valuable and innovative tool for researchers identifying process regimes within a study area, particularly in field-based investigations that rely heavily on site-specific context.
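
    A minimal sketch of the PCA step, with hypothetical terrain-derivative layers standing in for the LiDAR-derived inputs:

    ```python
    # Sketch: PCA over a stack of DEM-derived layers (e.g. slope, curvature),
    # yielding component-score maps that can be classified into morphometric units.
    import numpy as np
    from sklearn.decomposition import PCA

    layers = [np.random.rand(100, 100) for _ in range(4)]  # placeholder rasters
    X = np.stack([l.ravel() for l in layers], axis=1)      # pixels x variables

    pca = PCA(n_components=3)
    scores = pca.fit_transform(X).reshape(100, 100, 3)     # PC-score maps
    print(pca.explained_variance_ratio_)
    ```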

  3. Introduction of wavelet analyses to rainfall/runoffs relationship for a karstic basin: the case of Licq-Atherey karstic system (France).

    PubMed

    Labat, D; Ababou, R; Mangin, A

    2001-01-01

    Karstic systems are highly heterogeneous geological formations characterized by a multiscale temporal and spatial hydrologic behavior with more or less localized temporal and spatial structures. Classical correlation and spectral analyses cannot take these properties into account. Therefore, it is proposed to introduce a new kind of transformation: the wavelet transform. Here we focus particularly on the use of wavelets to study the temporal behavior of local precipitation and watershed runoffs from a part of the karstic system. In the first part of the paper, a brief mathematical overview of the continuous Morlet wavelet transform and of the multiresolution analysis is presented. An analogy with spectral analyses allows the introduction of concepts such as the wavelet spectrum and cross-spectrum. In the second part, classical methods (spectral and correlation analyses) and wavelet transforms are applied and compared for daily rainfall rates and runoffs measured on a French karstic watershed (Pyrénées) over a period of 30 years. Different characteristic time scales of the rainfall and runoff processes are determined. These time scales are typically on the order of a few days for floods, but they also include significant half-year, one-year, and multi-annual components. The multiresolution cross-analysis also provides a new interpretation of the impulse response of the system. To conclude, wavelet transforms provide a wealth of information, which may now be taken into account in both temporal and spatially distributed karst modeling of precipitation and runoff. PMID:11447860
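
    A minimal sketch of the continuous Morlet transform on a daily series, assuming the PyWavelets package; the scale range is chosen to span events from days to years, and the series is a placeholder.

    ```python
    # Sketch: continuous Morlet wavelet transform of a daily runoff series.
    import numpy as np
    import pywt

    runoff = np.random.rand(365 * 30)           # placeholder: 30 years of daily data
    scales = np.geomspace(2, 1024, num=64)      # ~days up to ~years
    coeffs, freqs = pywt.cwt(runoff, scales, "morl", sampling_period=1.0)
    power = np.abs(coeffs) ** 2                 # wavelet spectrum over time and scale
    ```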

  4. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
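
    The shift-invariant fusion idea can be sketched with the stationary wavelet transform and the common pixel-based max-absolute selection rule (a generic version, not the paper's exact pipeline with adaptive edge correlation):

    ```python
    # Sketch: SWT-based image fusion; image sides must be divisible by 2**level.
    import numpy as np
    import pywt

    def swt_fuse(img_a, img_b, wavelet="db2", level=2):
        ca = pywt.swt2(img_a, wavelet, level=level)
        cb = pywt.swt2(img_b, wavelet, level=level)
        fused = []
        for (aA, (aH, aV, aD)), (bA, (bH, bV, bD)) in zip(ca, cb):
            approx = 0.5 * (aA + bA)                      # average approximations
            details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                            for x, y in ((aH, bH), (aV, bV), (aD, bD)))
            fused.append((approx, details))
        return pywt.iswt2(fused, wavelet)
    ```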

  5. Creation of a Multiresolution and Multiaccuracy Dtm: Problems and Solutions for Heli-Dem Case Study

    NASA Astrophysics Data System (ADS)

    Biagi, L.; Carcano, L.; Lucchese, A.; Negretti, M.

    2013-01-01

    The work is part of the "HELI-DEM" (HELvetia-Italy Digital Elevation Model) project, funded by the European Regional Development Fund within the Italy-Switzerland cooperation program. The aim of the project is the creation of a unique DTM for the alpine and subalpine area between Italy (Piedmont, Lombardy) and Switzerland (Ticino and Grisons Cantons); at present, different DTMs, which are in different reference frames and have been obtained with different technologies, accuracies, and resolutions, have been acquired. The final DTM should be correctly georeferenced and produced by validating and integrating the data that are available for the project. DTMs are fundamental in hydrogeological studies, especially in alpine areas where hydrogeological risks may exist. Moreover, when an event, like for example a landslide, happens at the border between countries, a unique and integrated DTM which covers the area of interest is useful to analyze the scenario. In this sense, the HELI-DEM project is helpful. To perform analyses along the borders between countries, transnational geographic information is needed: a transnational DTM can be obtained by merging regional low resolution DTMs. Moreover, high resolution local DTMs should be used where they are available. To be merged, low and high resolution DTMs should be in the same three-dimensional reference frame, should not present biases, and should be consistent in the overlapping areas. Cross-validation between the different DTMs is therefore needed. Two different problems should be solved: the merging of regional, partly overlapping low and medium resolution DTMs into a unique low/medium resolution DTM, and the merging with other local high resolution/high accuracy height data. This paper discusses the preliminary processing of the data for the fusion of low and high resolution DTMs in a case-study area within the Lombardy region: the Valtellina valley. In this region the Lombardy regional low resolution DTM is available, with a horizontal
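
    The cross-validation step can be sketched as a simple overlap check between two co-registered DTMs, looking for a systematic bias before any fusion:

    ```python
    # Sketch: bias and RMSE of the height difference over the overlap of two DTMs
    # (NaN marks cells without data; arrays assumed in the same reference frame).
    import numpy as np

    def overlap_stats(dtm_a, dtm_b):
        diff = dtm_a - dtm_b
        valid = np.isfinite(diff)
        bias = np.mean(diff[valid])                  # systematic height offset
        rmse = np.sqrt(np.mean(diff[valid] ** 2))
        return bias, rmse
    ```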

  6. Multi-resolution integrated modeling for basin-scale water resources management and policy analysis

    SciTech Connect

    Gupta, Hoshin V. ,; Brookshire, David S.; Springer, E. P.; Wagener, Thorsten

    2004-01-01

    Approximately one-third of the land surface of the Earth is considered to be arid or semi-arid, with an annual average of less than 12-14 inches of rainfall. The availability of water in such regions is, of course, particularly sensitive to climate variability, while the demand for water is growing explosively with population. The competition for available water is exerting considerable pressure on water resources management. Policy and decision makers in the southwestern U.S. increasingly have to cope with over-stressed rivers and aquifers as population and water demands grow. Other factors such as endangered species and Native American water rights further complicate the management problems. Further, as groundwater tables are drawn down due to pumping in excess of natural recharge, considerable (potentially irreversible) environmental impacts begin to be felt as, for example, rivers run dry for significant portions of the year, riparian habitats disappear (with consequent effects on the bio-diversity of the region), aquifers compact, resulting in large-scale subsidence, and water quality begins to suffer. The current drought (1999-2002) in the southwestern U.S. is raising new concerns about how to sustain the combination of agricultural, urban and in-stream uses of water that underlie the socio-economic and ecological structure in the region. The water stressed nature of arid and semi-arid environments means that competing water uses of various kinds vie for access to a highly limited resource. If basin-scale water sustainability is to be achieved, managers must somehow achieve a balance between supply and demand throughout the basin, not just for the surface water or stream. The need to move water around a basin such as the Rio Grande or Colorado River to achieve this balance has created the stimulus for water transfers and water markets, and for accurate hydrologic information to sustain such institutions [Matthews et al. 2002; Brookshire et al 2003

  7. Benefits of an ultra large and multiresolution ensemble for estimating available wind power

    NASA Astrophysics Data System (ADS)

    Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik

    2016-04-01

    In this study we investigate the benefits of an ultra large ensemble with up to 1000 members including multiple nesting with a target horizontal resolution of 1 km. The ensemble shall be used as a basis to detect events of extreme errors in wind power forecasting. The forecast value is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions serve as input so far, with limited spatial resolution (~10-80 km) and member number (~50). Perturbations related to the specific merits of wind power production are still missing. Thus, infrequent single extreme error events go undetected by such ensemble power forecasts. The numerical forecast model used in this study is the Weather Research and Forecasting Model (WRF). Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies and, in conjunction, via the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter to improve the model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations which are connected to extreme error events are located and corresponding perturbation techniques are applied. The demanding computational effort is overcome by utilising the supercomputer JUQUEEN at the Forschungszentrum Juelich.
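
    A single update of the sequential importance resampling filter mentioned above can be sketched as follows; the Gaussian likelihood and its width are illustrative assumptions.

    ```python
    # Sketch: one SIR step; weight members by fit to observations, then resample.
    import numpy as np

    def sir_step(members, obs, sigma=1.0, seed=0):
        # members: (n_members, n_obs) model equivalents of the observations
        rng = np.random.default_rng(seed)
        w = np.exp(-0.5 * np.sum((members - obs) ** 2, axis=1) / sigma**2)
        w /= w.sum()
        idx = rng.choice(len(members), size=len(members), p=w)
        return members[idx]                          # resampled ensemble
    ```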

  8. Incorporating multiresolution analysis with multiclassifiers and decision fusion for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    West, Terrance R.

    The ongoing development and increased affordability of hyperspectral sensors are increasing their utilization in a variety of applications, such as agricultural monitoring and decision making. Hyperspectral Automated Target Recognition (ATR) systems typically rely heavily on dimensionality reduction methods, and particularly on intelligent reduction methods referred to as feature extraction techniques. This dissertation reports on the development, implementation, and testing of new hyperspectral analysis techniques for ATR systems, including their use in agricultural applications where ground-truthed observations available for training the ATR system are typically very limited. This dissertation reports the design of effective methods for grouping and down-selecting Discrete Wavelet Transform (DWT) coefficients and the design of automated Wavelet Packet Decomposition (WPD) filter tree pruning methods for use within the framework of a Multiclassifiers and Decision Fusion (MCDF) ATR system. The efficacy of the DWT MCDF and WPD MCDF systems is compared to existing ATR methods commonly used in hyperspectral remote sensing applications. The newly developed methods' sensitivity to operating conditions, such as mother wavelet selection, decomposition level, and quantity and quality of available training data, is also investigated. The newly developed ATR systems are applied to the problem of hyperspectral remote sensing of agricultural food crop contaminations, either by airborne chemical application, specifically Glufosinate herbicide at varying concentrations applied to corn crops, or by biological infestation, specifically soybean rust disease in soybean crops. The DWT MCDF and WPD MCDF methods significantly outperform conventional hyperspectral ATR methods. For example, when detecting and classifying varying levels of soybean rust infestation, stepwise linear discriminant analysis results in accuracies of approximately 30%-40%, but WPD MCDF methods result in accuracies
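
    A minimal stand-in for the DWT grouping plus decision-fusion idea is sketched below; the data, the per-level grouping, and the choice of classifier are all illustrative, not the dissertation's configuration.

    ```python
    # Sketch: group DWT coefficients of each spectrum by level, train one
    # classifier per group, and fuse the per-group decisions by majority vote.
    import numpy as np
    import pywt
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 128))           # placeholder: 200 pixels x 128 bands
    y = rng.integers(0, 2, size=200)          # placeholder binary labels

    groups = list(zip(*[pywt.wavedec(x, "db2", level=3) for x in X]))
    clfs = [LogisticRegression(max_iter=1000).fit(np.vstack(g), y) for g in groups]
    votes = np.array([clf.predict(np.vstack(g)) for clf, g in zip(clfs, groups)])
    fused = (votes.mean(axis=0) > 0.5).astype(int)   # majority-vote fusion
    print("training accuracy:", (fused == y).mean())
    ```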

  9. Data Collection Methods for Validation of Advanced Multi-Resolution Fast Reactor Simulations

    SciTech Connect

    Tokuhiro, Akiro; Ruggles, Art; Pointer, David

    2015-01-22

    In pool-type Sodium Fast Reactors (SFR) the regions most susceptible to thermal striping are the upper instrumentation structure (UIS) and the intermediate heat exchanger (IHX). This project experimentally and computationally (CFD) investigated the thermal mixing in the region between the reactor core exit and the UIS. The thermal mixing phenomenon was simulated using two vertical jets at different velocities and temperatures, prototypic of two adjacent channels out of the core. Thermal jet mixing of anticipated flows at different temperatures and velocities was investigated. Velocity profiles were measured throughout the flow region using Ultrasonic Doppler Velocimetry (UDV), and temperatures along the geometric centerline between the jets were recorded using a thermocouple array. CFD simulations, using COMSOL, were used to initially understand the flow, then to design the experimental apparatus, and finally to compare simulation results and measurements characterizing the flows. The experimental results and CFD simulations show that the flow field is characterized by three regions with respective transitions, namely convective mixing, (flow direction) transitional, and post-mixing. Both experiments and CFD simulations support this observation. For the anticipated SFR conditions the flow is momentum dominated and thus thermal mixing is limited, due to the short flow length from the exit of the core to the bottom of the UIS. This means that there will be thermal striping at any surface where poorly mixed streams impinge; unless lateral mixing is actively promoted at the core exit, thermal striping will prevail. Furthermore, we note that CFD can be considered a ‘separate effects (computational) test’ and is recommended as part of any integral analysis. To this effect, poorly mixed streams then have potential impact on the rest of the SFR design and scaling, especially placement of internal components, such as the IHX, that may see poorly mixed

  11. Multi-resolution processing for fractal analysis of airborne remotely sensed data

    NASA Technical Reports Server (NTRS)

    Jaggi, S.; Quattrochi, D.; Lam, N.

    1992-01-01

    Fractal geometry is increasingly becoming a useful tool for modeling natural phenomena. As an alternative to Euclidean concepts, fractals allow for a more accurate representation of the nature of complexity in natural boundaries and surfaces. Since they are characterized by self-similarity, an ideal fractal surface is scale-independent; i.e., at different scales a fractal surface looks the same. This is not exactly true for natural surfaces. When viewed at different spatial resolutions, parts of natural surfaces look alike in a statistical manner, and only for a limited range of scales. Images acquired by NASA's Thermal Infrared Multispectral Scanner are used to compute the fractal dimension as a function of spatial resolution. Three methods are used to determine the fractal dimension: Schelberg's line-divider method, the variogram method, and the triangular prism method. A description of these methods and the results of applying them to a remotely sensed image are also presented. Five flights were flown in succession at altitudes of 2 km (low), 6 km (mid), 12 km (high), and then back again at 6 km and 2 km; these altitudes correspond to 3 different pixel sizes: 5 m, 15 m, and 30 m. The area selected was the Ross Barnett reservoir near Jackson, Mississippi. The mission was flown during the predawn hours of 1 Feb. 1992, and radiosonde data were collected for its duration to profile the characteristics of the atmosphere. After simulating different spatial sampling intervals within the same image for each of the 3 image sets, the results are cross-correlated to compare the extent of detail and complexity that is obtained when data are taken at lower spatial intervals.
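
    The variogram method mentioned above can be sketched for a one-dimensional transect: the fractal dimension follows from the log-log slope of the variogram, D = (4 - slope) / 2 for a profile.

    ```python
    # Sketch: variogram-method fractal dimension of a transect z(x).
    import numpy as np

    def fractal_dimension_variogram(z, max_lag=32):
        lags = np.arange(1, max_lag)
        gamma = np.array([np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
        slope = np.polyfit(np.log(lags), np.log(gamma), 1)[0]
        return (4.0 - slope) / 2.0

    print(fractal_dimension_variogram(np.random.rand(4096)))  # white noise: D near 2
    ```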

  12. Confusion-limited galaxy fields. II - Classical analyses

    NASA Technical Reports Server (NTRS)

    Chokshi, Arati; Wright, Edward L.

    1989-01-01

    Chokshi and Wright presented a detailed model for simulating the angular distribution of galaxy images in fields that extend to very high redshifts. Standard tools are used to analyze these simulated galaxy fields for the Omega(0) = 0 and Omega(0) = 1 cases in order to test the discriminatory power of these tools. Classical number-magnitude diagrams and surface brightness-color-color diagrams are employed to study crowded galaxy fields. An attempt is made to separate the effects due to stellar evolution in galaxies from those due to the space-time geometry. The results show that this discrimination is maximized at near-infrared wavelengths, where the stellar photospheres are still visible but stellar evolution effects are less severe than those observed at optical wavelengths. Rapid evolution of the stars on the asymptotic giant branch is easily recognized in the simulated data for both cosmologies and serves to discriminate between the two extreme values of Omega(0). Measurements of total magnitudes of individual galaxies are not essential for studying light distribution in galaxies as a function of redshift. Calculations for the extragalactic background radiation are carried out using the simulated data and compared to integrals over the evolutionary models used.

  13. A Scale-Adaptive Approach for Spatially-Varying Urban Morphology Characterization in Boundary Layer Parametrization Using Multi-Resolution Analysis

    NASA Astrophysics Data System (ADS)

    Mouzourides, P.; Kyprianou, A.; Neophytou, M. K.-A.

    2013-12-01

    Urban morphology characterization is crucial for the parametrization of boundary-layer development over urban areas. One complexity in such a characterization is the three-dimensional variation of the urban canopies and textures, which are customarily reduced to and represented by one-dimensional varying parametrizations such as the aerodynamic roughness length and the zero-plane displacement. The scope of the paper is to provide novel means for a scale-adaptive spatially-varying parametrization of the boundary layer by addressing this 3-D variation. Specifically, the 3-D variation of urban geometries often poses questions in the multi-scale modelling of air pollution dispersion and other climate or weather-related modelling applications that have not been addressed yet, such as: (a) how we represent urban attributes (parameters) appropriately for the multi-scale nature and multi-resolution basis of weather numerical models, (b) how we quantify the uniqueness of an urban database in the context of modelling urban effects in large-scale weather numerical models, and (c) how we derive the impact and influence of a particular building in pre-specified sub-domain areas of the urban database. We illustrate how multi-resolution analysis (MRA) addresses and answers the afore-mentioned questions by taking as an example the Central Business District of Oklahoma City. The selection of MRA is motivated by its capacity for multi-scale sampling; in the MRA the "urban" signal depicting a city is decomposed into an approximation, a representation at a higher scale, and a detail, the part removed at lower scales to yield the approximation. Different levels of approximation were deduced for the building height and the planar packing density. A spatially-varying characterization with a scale-adaptive capacity is obtained for the boundary-layer parameters (aerodynamic roughness length and zero-plane displacement) using the MRA-deduced results for the building height and the planar packing density.
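
    The decomposition into approximations and details can be sketched with a 2-D wavelet transform of a gridded building-height field; the raster and the choice of wavelet are placeholders.

    ```python
    # Sketch: multi-resolution decomposition of an "urban signal" raster.
    import numpy as np
    import pywt

    heights = np.random.rand(256, 256)        # placeholder building-height raster
    coeffs = pywt.wavedec2(heights, "haar", level=4)
    approx = coeffs[0]                        # the field as seen at a coarse scale
    details = coeffs[1:]                      # parts removed at finer scales
    ```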

  14. Development of RESTful services and map-based user interface tools for access to the Global Multi-Resolution Topography (GMRT) Synthesis

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Barg, B.

    2015-12-01

    The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of quality controlled multibeam sonar data, collected by scientists and institutions worldwide, that is merged with gridded terrestrial and marine elevation data. The multi-resolutional elevation components of GMRT are delivered to the user through a variety of interfaces as both images and grids. The GMRT provides quantitative access to gridded data and images to the full native resolution of the sonar as well as attribution information and access to source data files. To construct the GMRT, multibeam sonar data are evaluated, cleaned and gridded by the MGDS Team and are then merged with gridded global and regional elevation data that are available at a variety of scales from 1km resolution to sub-meter resolution. As of June 2015, GMRT included processed swath data from nearly 850 research cruises with over 2.7 million ship-track miles of coverage. Several new services were developed over the past year to improve access to the GMRT Synthesis. In addition to our long-standing Web Map Services, we now offer RESTful services to provide programmatic access to gridded data in standard formats including ArcASCII, GeoTIFF, COARDS/CF-compliant NetCDF, and GMT NetCDF, as well as access to custom images of the GMRT in JPEG format. An attribution metadata XML service was also developed to return all relevant information about component data in an area, including cruise names, multibeam file names, and gridded data components. These new services are compliant with the EarthCube GeoWS Building Blocks specifications. Supplemental services include the release of data processing reports for each cruise included in the GMRT and data querying services that return elevation values at a point and great circle arc profiles using the highest available resolution data. Our new and improved map-based web application, GMRT MapTool, provides user access to the GMRT
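
    Programmatic access to a grid subset might look like the sketch below; the endpoint and parameter names reflect our reading of the public GridServer service and should be checked against the current GMRT documentation.

    ```python
    # Sketch: request a GeoTIFF subset of the GMRT grid over HTTP.
    import requests

    params = {
        "minlatitude": 9.0, "maxlatitude": 10.0,
        "minlongitude": -104.5, "maxlongitude": -103.5,
        "format": "geotiff",   # assumed parameter names; verify before use
    }
    r = requests.get("https://www.gmrt.org/services/GridServer",
                     params=params, timeout=60)
    r.raise_for_status()
    open("gmrt_subset.tif", "wb").write(r.content)
    ```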

  15. SNS shielding analyses overview

    SciTech Connect

    Popova, Irina; Gallmeier, Franz; Iverson, Erik B; Lu, Wei; Remec, Igor

    2015-01-01

    This paper gives an overview of ongoing shielding analyses for the Spallation Neutron Source. Presently, most of the shielding work is concentrated on the beam lines and instrument enclosures, to prepare for commissioning and to ensure safe operation and adequate radiation background levels in the future. Work is also ongoing for the accelerator facility. This includes radiation-protection analyses for the placement of radiation monitors, designing shielding for additional facilities to test accelerator structures, redesigning some parts of the facility, and designing test facilities for the main accelerator structure for component testing. Neutronics analyses are required as well to support spent-structure management, including waste characterisation analyses, the choice of a proper transport/storage package, and shielding enhancement for the package if required.

  16. NOAA's National Snow Analyses

    NASA Astrophysics Data System (ADS)

    Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.

    2005-12-01

    NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km2 spatial resolution to a 1 km2 resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km2 spatial resolution and at a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary and available to produce a "best estimate" of real-time snow cover conditions at 1 km2 spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily, operational products that characterize real-time snowpack conditions including: snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. The products are generated and distributed in a variety of formats including: interactive maps, time-series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products.
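
    The nudging update itself reduces to relaxing the modeled state toward each observation by a fraction of the innovation; the weight below is illustrative, not NOHRSC's operational value.

    ```python
    # Sketch: Newtonian nudging of a modeled snow water equivalent (SWE) value.
    def nudge(model_swe, obs_swe, weight=0.2):
        # weight in (0, 1]: fraction of the innovation applied per update
        return model_swe + weight * (obs_swe - model_swe)

    swe = 120.0               # mm, modeled SWE
    swe = nudge(swe, 135.0)   # assimilate an observation of 135 mm -> 123.0
    ```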

  17. Spacelab Charcoal Analyses

    NASA Technical Reports Server (NTRS)

    Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.

    1995-01-01

    This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.

  18. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  19. Apollo 14 microbial analyses

    NASA Technical Reports Server (NTRS)

    Taylor, G. R.

    1972-01-01

    Extensive microbiological analyses that were performed on the Apollo 14 prime and backup crewmembers and ancillary personnel are discussed. The crewmembers were subjected to four separate and quite different environments during the 137-day monitoring period. The relation between each of these environments and observed changes in the microflora of each astronaut are presented.

  20. Information Omitted From Analyses.

    PubMed

    2015-08-01

    In the Original Article titled “Higher-Order Genetic and Environmental Structure of Prevalent Forms of Child and Adolescent Psychopathology” published in the February 2011 issue of JAMA Psychiatry (then Archives of General Psychiatry) (2011;68[2]:181-189), there were 2 errors. Although the article stated that the dimensions of psychopathology were measured using parent informants for inattention, hyperactivity-impulsivity, and oppositional defiant disorder, and a combination of parent and youth informants for conduct disorder, major depression, generalized anxiety disorder, separation anxiety disorder, social phobia, specific phobia, agoraphobia, and obsessive-compulsive disorder, all dimensional scores used in the reported analyses were actually based on parent reports of symptoms; youth reports were not used. In addition, whereas the article stated that each symptom dimension was residualized on age, sex, age-squared, and age by sex, the dimensions actually were only residualized on age, sex, and age-squared. All analyses were repeated using parent informants for inattention, hyperactivity-impulsivity, and oppositional defiant disorder, and a combination of parent and youth informants for conduct disorder, major depression, generalized anxiety disorder, separation anxiety disorder, social phobia, specific phobia, agoraphobia, and obsessive-compulsive disorder; these dimensional scores were residualized on age, age-squared, sex, sex by age, and sex by age-squared. The results of the new analyses were qualitatively the same as those reported in the article, with no substantial changes in conclusions. The only notable small difference was that major depression and generalized anxiety disorder dimensions had small but significant loadings on the internalizing factor in addition to their substantial loadings on the general factor in the analyses of both genetic and non-shared covariances in the selected models in the new analyses. Corrections were made to the

  1. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    Model calculations and analyses have been carried out to compare with several sets of data (dose, induced radioactivity in various experiment samples and spacecraft components, fission foil measurements, and LET spectra) from passive radiation dosimetry on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The calculations and data comparisons are used to estimate the accuracy of current models and methods for predicting the ionizing radiation environment in low earth orbit. The emphasis is on checking the accuracy of trapped proton flux and anisotropy models.

  2. A system for generating multi-resolution Digital Terrain Models of Mars based on the ESA Mars Express and NASA Mars Reconnaissance Orbiter data

    NASA Astrophysics Data System (ADS)

    Yershov, V.

    2015-10-01

    We describe a processing system for generating multiresolution digital terrain models (DTM) of Mars within the iMars project of the European Seventh Framework Programme. This system is based on a non-rigorous sensor model for processing high-resolution stereoscopic images obtained from the High Resolution Imaging Science Experiment (HiRISE) camera and Context Camera (CTX) onboard the NASA Mars Reconnaissance Orbiter (MRO) spacecraft. The system includes geodetic control based on the polynomial fit of the input CTX images with respect to a reference image obtained from the ESA Mars Express High Resolution Stereo Camera (HRSC). The input image processing is based on the Integrated Software for Images and Spectrometers (ISIS) and the NASA Ames stereo pipeline. The accuracy of the produced CTX DTM is improved by aligning it with the reference HRSC DTM and the altimetry data from the Mars Orbiter Laser Altimeter (MOLA) onboard the Mars Global Surveyor (MGS) spacecraft. The higher-resolution HiRISE imagery data are processed in the same way, except that the reference images and DTMs are taken from the CTX results obtained during the first processing stage. A quality assessment of image photogrammetric registration is demonstrated by using data generated by the NASA Ames stereo pipeline and the BAE Socet system. Such DTMs will be produced for all available stereo pairs and be displayed as WMS layers within the iMars Web GIS.
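
    The geodetic-control step can be sketched as a least-squares polynomial fit mapping image coordinates onto the HRSC reference; a first-order polynomial and hypothetical tie points are used here.

    ```python
    # Sketch: fit x' = a0 + a1*x + a2*y (and likewise y') from tie points.
    import numpy as np

    xy = np.array([[10, 12], [200, 40], [35, 180], [220, 210]], float)   # input
    XY = np.array([[11, 15], [203, 41], [37, 184], [224, 212]], float)   # reference

    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    coef_x, *_ = np.linalg.lstsq(A, XY[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, XY[:, 1], rcond=None)
    print(coef_x, coef_y)   # affine terms mapping input pixels to the reference
    ```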

  3. Top-down and bottom-up inventory approach for above ground forest biomass and carbon monitoring in REDD framework using multi-resolution satellite data.

    PubMed

    Sharma, Laxmi Kant; Nathawat, Mahendra Singh; Sinha, Suman

    2013-10-01

    This study deals with the future scope of REDD (Reduced Emissions from Deforestation and forest Degradation) and REDD+ regimes for measuring and monitoring the current state and dynamics of carbon stocks over time with an integrated geospatial and field-based biomass inventory approach. A multi-temporal and multi-resolution geospatial synergic approach incorporating satellite sensors from moderate to high resolution with a stratified random sampling design is used. The inventory process involves a continuous forest inventory to facilitate the quantification of possible CO2 reductions over time using statistical up-scaling procedures on various levels. The combined approach was applied on a regional scale taking Himachal Pradesh (India) as a case study, with a hierarchy of forest strata representing the forest structure found in India. Biophysical modeling revealed a power regression model as the best fit (R^2 = 0.82) for the relationship between the Normalized Difference Vegetation Index and biomass, which was further implemented to calculate multi-temporal above-ground biomass and carbon sequestration. The calculated value of net carbon sequestered by the forests totaled 11.52 million tons (Mt) over the period of 20 years, at the rate of 0.58 Mt per year since 1990, while the CO2 equivalent removed from the environment by the forests under study during the 20 years comes to 42.26 Mt in the study area.
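
    The power-model fit reported above can be reproduced in outline by linear regression in log-log space; the data points below are placeholders, not the study's measurements.

    ```python
    # Sketch: fit biomass = a * NDVI**b and report R^2 of the fit.
    import numpy as np

    ndvi = np.array([0.35, 0.42, 0.51, 0.60, 0.68, 0.74])
    biomass = np.array([45.0, 60.0, 85.0, 120.0, 160.0, 190.0])  # t/ha, hypothetical

    b, log_a = np.polyfit(np.log(ndvi), np.log(biomass), 1)
    a = np.exp(log_a)
    pred = a * ndvi**b
    r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
    print(f"biomass ~ {a:.1f} * NDVI^{b:.2f}, R^2 = {r2:.2f}")
    ```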

  4. Completely automated multiresolution edge snapper (CAMES): a new technique for an accurate carotid ultrasound IMT measurement and its validation on a multi-institutional database

    NASA Astrophysics Data System (ADS)

    Molinari, Filippo; Loizou, Christos; Zeng, Guang; Pattichis, Costantinos; Pantziaris, Marios; Liboni, William; Nicolaides, Andrew; Suri, Jasjit S.

    2011-03-01

    Since 2005, our research team has been developing automated techniques for carotid artery (CA) wall segmentation and intima-media thickness (IMT) measurement. We developed a snake-based technique (which we named CULEX [1,2]), a method based on an integrated approach of feature extraction, fitting, and classification (which we named CALEX [3]), and a watershed transform based algorithm [4]. Each of the previous methods essentially consisted of two distinct stages: Stage-I - Automatic carotid artery detection. In this step, intelligent procedures were adopted to automatically locate the CA in the image frame. Stage-II - CA wall segmentation and IMT measurement. In this second step, the CA distal (or far) wall is segmented in order to trace the lumen-intima (LI) and media-adventitia (MA) boundaries. The distance between the LI/MA borders is the IMT estimate. The aim of this paper is the description of a novel and completely automated technique for carotid artery segmentation and IMT measurement based on an innovative multi-resolution approach.

  5. Amino acid analyses of Apollo 14 samples.

    NASA Technical Reports Server (NTRS)

    Gehrke, C. W.; Zumwalt, R. W.; Kuo, K.; Aue, W. A.; Stalling, D. L.; Kvenvolden, K. A.; Ponnamperuma, C.

    1972-01-01

    Detection limits were between 300 pg and 1 ng for different amino acids in an analysis by gas-liquid chromatography of water extracts from Apollo 14 lunar fines, in which amino acids were converted to their N-trifluoroacetyl-n-butyl esters. Initial analyses of water and HCl extracts of samples 14240 and 14298 showed no amino acids above background levels.

  6. Force Limited Vibration Testing

    NASA Technical Reports Server (NTRS)

    Scharton, Terry; Chang, Kurng Y.

    2005-01-01

    This slide presentation reviews the concept and applications of force limited vibration testing. The goal of vibration testing of aerospace hardware is to identify problems that would result in flight failures. Commonly used aerospace vibration tests use artificially high shaker forces and responses at the resonance frequencies of the test item. It has become common to limit the acceleration responses in the test to those predicted for flight. This requires an analysis of the acceleration response and requires placing accelerometers on the test item. With the advent of piezoelectric force gages it has become possible to improve vibration testing. The basic equations are reviewed. Force limits are analogous and complementary to the acceleration specifications used in conventional vibration testing. Just as the acceleration specification is the frequency spectrum envelope of the in-flight acceleration at the interface between the test item and the flight mounting structure, the force limit is the envelope of the in-flight force at the interface. In force limited vibration tests, both the acceleration and force specifications are needed, and the force specification is generally based on and proportional to the acceleration specification. Therefore, force limiting does not compensate for errors in the development of the acceleration specification, e.g., too much conservatism or the lack thereof; these errors will carry over into the force specification. Since in-flight vibratory force data are scarce, force limits are often derived from coupled system analyses and impedance information obtained from measurements or finite element models (FEM). Fortunately, data on the interface forces between systems and components are now available from system acoustic and vibration tests of development test models and from a few flight experiments. Semi-empirical methods of predicting force limits are currently being developed on the basis of the limited flight and system test data.
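
    The proportionality between the force and acceleration specifications is often written in the semi-empirical form below (our rendering of the commonly cited expression; C and n are empirical constants, M0 the total mass of the test item, f0 its fundamental resonance):

    ```latex
    % Semi-empirical force limit: flat below the fundamental resonance f_0,
    % rolled off above it.
    S_{FF}(f) = C^2 M_0^2\, S_{AA}(f), \quad f \le f_0;
    \qquad
    S_{FF}(f) = C^2 M_0^2\, S_{AA}(f)\left(\frac{f_0}{f}\right)^{2n}, \quad f > f_0
    ```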

  7. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    This report covers work performed by Science Applications International Corporation (SAIC) under contract NAS8-39386 from the NASA Marshall Space Flight Center entitled LDEF Satellite Radiation Analyses. The basic objective of the study was to evaluate the accuracy of present models and computational methods for defining the ionizing radiation environment for spacecraft in Low Earth Orbit (LEO) by making comparisons with radiation measurements made on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The emphasis of the work here is on predictions and comparisons with LDEF measurements of induced radioactivity and Linear Energy Transfer (LET) measurements. These model/data comparisons have been used to evaluate the accuracy of current models for predicting the flux and directionality of trapped protons for LEO missions.

  8. EEG analyses with SOBI.

    SciTech Connect

    Glickman, Matthew R.; Tang, Akaysha

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.

  9. Genetic analyses of captive Alala (Corvus hawaiiensis) using AFLP analyses

    USGS Publications Warehouse

    Jarvi, Susan I.; Bianchi, Kiara R.

    2006-01-01

    affected by the mutation rate at microsatellite loci, thus introducing a bias. Also, the number of loci that can be studied is frequently limited to fewer than 10. This theoretically represents a maximum of one marker for each of 10 chromosomes. Dominant markers like AFLP allow a larger fraction of the genome to be screened. Large numbers of loci can be screened by AFLP to resolve very small individual differences that can be used for identification of individuals, estimates of pairwise relatedness and, in some cases, for parentage analyses. Since AFLP is a dominant marker (it cannot distinguish a +/+ homozygote from a +/- heterozygote), it has limitations for parentage analyses. Only when both parents are homozygous for the absence of alleles (-/-) and offspring show a presence (+/+ or +/-) can the parents be excluded. In this case, microsatellites become preferable as they have the potential to exclude individual parents when the other parent is unknown. Another limitation of AFLP is that the loci are generally less polymorphic (only two alleles/locus) than microsatellite loci (often >10 alleles/locus). While generally fewer than 10 highly polymorphic microsatellite loci are enough to exclude and assign parentage, it might require up to 100 or more AFLP loci. While there are pros and cons to different methodologies, the total number of loci evaluated by AFLP generally offsets the limitations imposed due to the dominant nature of this approach, and end results between methods are generally comparable. Overall objectives of this study were to evaluate the level of genetic diversity in the captive population of Alala, to compare genetic data with currently available pedigree information, and to determine the extent of relatedness of mating pairs and among founding individuals.

  10. Network Class Superposition Analyses

    PubMed Central

    Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141

  11. Network class superposition analyses.

    PubMed

    Pearson, Carl A B; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141
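
    The two records above describe the same construction, so a single sketch serves both. Assuming deterministic Boolean dynamics, each network in a class defines a 0/1 state-transition matrix; averaging these matrices gives the stochastic superposition T, and because the trace is linear, trace(T) equals the mean number of point attractors (fixed points) across the class. The toy 2-node rules below are invented for illustration and are not the Strong Inhibition networks of the papers:

      import numpy as np
      from itertools import product

      n = 2                              # nodes -> 2**n = 4 global states
      states = list(product([0, 1], repeat=n))

      def transition_matrix(update):
          """0/1 matrix M with M[next, current] = 1 under rule `update`."""
          M = np.zeros((len(states), len(states)))
          for j, s in enumerate(states):
              M[states.index(update(s)), j] = 1.0
          return M

      # A toy "class": three rules assumed to share some primary function.
      rules = [
          lambda s: (s[1], s[0]),        # swap the two nodes
          lambda s: (s[0], s[0]),        # copy node 0 into node 1
          lambda s: (1 - s[1], s[0]),    # negated feedback loop
      ]
      T = sum(transition_matrix(r) for r in rules) / len(rules)
      print(np.trace(T))                 # mean number of point attractors (4/3)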

  12. AFR-100 safety analyses

    SciTech Connect

    Sumner, T.; Moisseytsev, A.; Wei, T. Y. C.

    2012-07-01

    The Advanced Fast Reactor-100 (AFR-100) is Argonne National Laboratory's 250 MWth metal-fueled modular sodium-cooled pool-type fast reactor concept [1]. A series of accident sequences was examined, focusing on the AFR-100's ability to provide protection against reactor damage during low-probability sequences resulting from multiple equipment failures. Protected and Unprotected Loss of Flow (PLOF and ULOF) and Unprotected Transient Over-Power (UTOP) accidents were simulated using the SAS4A/SASSYS-1 safety analysis code. The large heat capacity of the sodium in the pool-type reactor allows the AFR-100 to absorb large amounts of energy during a PLOF with relatively small temperature increases throughout the system. During a ULOF with a 25-second flow halving time, coolant and cladding temperatures peak around 720 deg. C within the first minute before reactivity feedback effects decrease power to match the flow. Core radial expansion and fuel Doppler provide the necessary feedback during the UTOP to bring the system back to critical before system temperatures exceed allowable limits. Simulation results indicate that adequate ULOF safety margins exist for the AFR-100 design with flow halving times of twenty-five seconds. Significant safety margins are maintained for PLOF accidents as well as UTOP accidents if a rod stop is used. (authors)

  13. Analyses and characterization of double shell tank

    SciTech Connect

    Not Available

    1994-10-04

    Evaporator candidate feed from tank 241-AP-108 (108-AP) was sampled under prescribed protocol, and physical, inorganic, and radiochemical analyses were performed on tank 108-AP. Characterization of evaporator feed tank waste is needed primarily to evaluate its suitability for safe processing through the evaporator. Such analyses should provide sufficient information about the waste composition to determine confidently whether constituent concentrations are not only within safe operating limits but also consistent with functional limits for operation of the evaporator. Characterization of tank constituent concentrations should also provide data that enable prediction of where, and in what types and amounts, environmentally hazardous waste is likely to occur in the evaporator product streams.

  14. Advanced toroidal facility vacuum vessel stress analyses

    SciTech Connect

    Hammonds, C.J.; Mayhall, J.A.

    1987-01-01

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques to investigate the structural behavior of the design. The response of a large-scale finite element model was determined for transportation and operational loadings. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were employed in these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. MSC/NASTRAN cyclic-symmetry solutions made it possible to analyze the entire vessel mathematically while modeling only 1/12 of its geometry, providing the greater detail and accuracy demanded by the vessel's complex geometry. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs.

  15. A multiresolution clinical decision support system based on fractal model design for classification of histological brain tumours.

    PubMed

    Al-Kadi, Omar S

    2015-04-01

    Tissue texture is known to exhibit a heterogeneous or non-stationary nature; therefore, using a single-resolution approach for optimum classification might not suffice. A clinical decision support system that exploits the subbands' textural fractal characteristics for best-bases selection in meningioma brain histopathological image classification is proposed. Each subband is analysed using its fractal dimension instead of energy, which has the advantage of being less sensitive to image intensity and abrupt changes in tissue texture. The most significant subband, the one that best identifies texture discontinuities, is chosen for further decomposition, and its fractal characteristics represent the optimal feature vector for classification. The performance was tested using support vector machine (SVM), Bayesian and k-nearest neighbour (kNN) classifiers, and a leave-one-patient-out method was employed for validation. Our method outperformed the classical energy-based selection approaches, achieving for the SVM, Bayesian and kNN classifiers overall classification accuracies of 94.12%, 92.50% and 79.70%, compared to 86.31%, 83.19% and 51.63% for the co-occurrence matrix and 76.01%, 73.50% and 50.69% for the energy texture signatures, respectively. These results indicate the method's potential usefulness as a decision support system that could complement radiologists' diagnostic capability to discriminate higher-order statistical textural information, which would otherwise be difficult to perceive via ordinary human vision.
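
    A minimal sketch of the best-basis idea is given below, assuming a box-counting estimate of fractal dimension on each detail subband of a single 2-D wavelet decomposition; the wavelet, threshold rule, and toy image are placeholders rather than the paper's settings:

      import numpy as np
      import pywt

      def box_counting_dimension(band):
          """Box-counting dimension of the significant-coefficient support."""
          A = np.abs(band)
          mask = A > A.mean()                      # assumed significance rule
          sizes, counts = [], []
          k = min(mask.shape) // 2
          while k >= 1:
              # Count k x k boxes containing at least one significant pixel.
              c = sum(mask[i:i + k, j:j + k].any()
                      for i in range(0, mask.shape[0], k)
                      for j in range(0, mask.shape[1], k))
              sizes.append(k)
              counts.append(max(c, 1))
              k //= 2
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                                np.log(np.asarray(counts)), 1)
          return slope

      rng = np.random.default_rng(1)
      image = rng.random((128, 128))               # stand-in for a histology tile
      cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
      bands = {'H': cH, 'V': cV, 'D': cD}
      best = max(bands, key=lambda k: box_counting_dimension(bands[k]))
      print('decompose further:', best)            # subband chosen by fractal dim.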

  16. Time-series analysis of multi-resolution optical imagery for quantifying forest cover loss in Sumatra and Kalimantan, Indonesia

    NASA Astrophysics Data System (ADS)

    Broich, Mark; Hansen, Matthew C.; Potapov, Peter; Adusei, Bernard; Lindquist, Erik; Stehman, Stephen V.

    2011-04-01

    Indonesia than change maps based on image composites. Unlike other time-series analyses employing observations with a consistent periodicity, our study area was characterized by highly unequal observation counts and frequencies due to persistent cloud cover, scan line corrector off (SLC-off) gaps, and the absence of a complete archive. Our method accounts for this variation by generating a generic variable space. We evaluated our results against an independent probability sample-based estimate of gross forest cover loss and expert mapped gross forest cover loss at 64 sample sites. The mapped gross forest cover loss for Sumatra and Kalimantan was 2.86% of the land area, or 2.86 Mha from 2000 to 2005, with the highest concentration having occurred in Riau and Kalimantan Tengah provinces.

  17. Application of wavelets to texture analysis and industrial surface inspection [Application des ondelettes à l'analyse de texture et à l'inspection de surface industrielle]

    NASA Astrophysics Data System (ADS)

    Wolf, D.; Husson, R.

    1993-11-01

    This paper presents a method of texture analysis based on multiresolution wavelet analysis. We discuss the theoretical and experimental choice of the wavelet. Statistical modelling of the wavelet images is treated and leads to modelling their statistical distribution as a generalized Gaussian law. An algorithm for texture classification is developed with respect to the variances of the different wavelet images. An industrial application of this algorithm illustrates its quality and demonstrates its aptitude for automating certain tasks in industrial inspection.
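
    To make the variance-based signature concrete, here is a hedged Python sketch: each texture is summarized by the variances of its wavelet detail images, and a probe is assigned to the nearest signature. The wavelet choice, toy textures, and nearest-signature rule are illustrative assumptions, not the paper's classifier:

      import numpy as np
      import pywt

      def texture_features(img, wavelet='db2', levels=3):
          coeffs = pywt.wavedec2(img, wavelet, level=levels)
          # coeffs[0] is the approximation; the rest are (cH, cV, cD) tuples.
          return np.array([band.var()
                           for detail in coeffs[1:] for band in detail])

      rng = np.random.default_rng(2)
      smooth = rng.random((64, 64)).cumsum(axis=1)   # two toy "texture" classes
      rough = rng.random((64, 64))
      f_smooth, f_rough = texture_features(smooth), texture_features(rough)
      probe = texture_features(rng.random((64, 64)))
      label = min([('smooth', f_smooth), ('rough', f_rough)],
                  key=lambda kv: np.linalg.norm(probe - kv[1]))[0]
      print(label)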

  18. Optimizing multi-resolution segmentation scale using empirical methods: Exploring the sensitivity of the supervised discrepancy measure Euclidean distance 2 (ED2)

    NASA Astrophysics Data System (ADS)

    Witharana, Chandi; Civco, Daniel L.

    2014-01-01

    Multiresolution segmentation (MRS) has proven to be one of the most successful image segmentation algorithms in the geographic object-based image analysis (GEOBIA) framework. This algorithm is relatively complex and user-dependent; scale, shape, and compactness are the main parameters available to users for controlling the algorithm. Plurality of segmentation results is common, because each parameter may take a range of values within its parameter space and values may be combined differently across parameters. Finding optimal parameter values through a trial-and-error process is commonly practiced at the expense of time and labor; thus, several alternative supervised and unsupervised methods for automatic parameter setting have been proposed and tested. In supervised empirical assessments, discrepancy measures are employed to compute the dissimilarity between a reference polygon and an image object candidate. Evidently, the reliability of the optimal-parameter prediction relies heavily on the sensitivity of the segmentation quality metric. The idea behind pursuing an optimal parameter setting is that a given scale setting provides image object candidates different from those of another scale setting; thus, by design, the supervised quality metric should capture this difference. In this exploratory study, we selected the Euclidean distance 2 (ED2) metric, a recently proposed supervised metric whose main design goal is to optimize the geometrical discrepancy (potential segmentation error, PSE) and the arithmetic discrepancy between image objects and reference polygons (number-of-segmentations ratio, NSR) in two-dimensional Euclidean space, as a candidate for investigating the validity and efficacy of empirical discrepancy measures for finding the optimal scale parameter setting of the MRS algorithm. We chose test image scenes from four different space-borne sensors with varying spatial resolutions and scene contents and systematically
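
    Since the abstract defines ED2 as a Euclidean distance over a geometric discrepancy (PSE) and an arithmetic discrepancy (NSR), the combination step can be sketched directly; the particular PSE and NSR formulas below follow one common formulation in the literature and should be treated as assumptions:

      import math

      def ed2(ref_areas, overseg_areas, n_ref, n_corresponding):
          # PSE: area of segments falling outside their reference polygons,
          # normalised by total reference area (assumed formulation).
          pse = sum(overseg_areas) / sum(ref_areas)
          # NSR: mismatch between reference-polygon and segment counts.
          nsr = abs(n_ref - n_corresponding) / n_ref
          return math.hypot(pse, nsr)   # Euclidean distance in (PSE, NSR) space

      # Two candidate scale settings for a hypothetical MRS run; the lower
      # ED2 indicates the better scale parameter.
      print(ed2([100, 80], [12, 5], 2, 3))   # e.g. scale = 50
      print(ed2([100, 80], [30, 22], 2, 9))  # e.g. scale = 10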

  19. Multiresolution analysis and classification of river discharges in France and their climate forcing over the Euro-Atlantic area using Wavelet transforms and Composite analysis

    NASA Astrophysics Data System (ADS)

    Fossa, Manuel; Nicolle, Marie; Massei, Nicolas; Fournier, Matthieu; Laignel, Benoit

    2016-04-01

    heights and meridional and zonal winds in the Euro-Atlantic area both for the winter and summer seasons for each station. The links are studied at different time scales of variability using multiresolution analysis. This allows assessing the large scale pattern that partly explains each scale of variability within the discharges. A cluster analysis is done on the obtained composite maps. A comparison is then realized between this classification and the one established in the first part of this study in order to test if stations that have similar time scales of variability also share the same climate forcings.
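
    As a hedged sketch of the multiresolution step, the Python fragment below separates a synthetic discharge series into per-scale detail components with a discrete wavelet transform and correlates each scale with a toy climate index; the wavelet, level count, and both series are illustrative assumptions, not the study's data:

      import numpy as np
      import pywt

      def mra_details(x, wavelet='db4', level=4):
          """Reconstruct one detail series per decomposition level."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          details = []
          for i in range(1, len(coeffs)):
              kept = [np.zeros_like(c) for c in coeffs]
              kept[i] = coeffs[i]                       # keep one scale only
              details.append(pywt.waverec(kept, wavelet)[:len(x)])
          return details                                # coarsest scale first

      t = np.arange(1024)
      discharge = np.sin(2 * np.pi * t / 365) + 0.5 * np.sin(2 * np.pi * t / 30)
      climate_index = np.sin(2 * np.pi * t / 365)       # toy large-scale forcing
      for k, d in enumerate(mra_details(discharge)):
          r = np.corrcoef(d, climate_index)[0, 1]
          print(f'detail level {k}: r = {r:+.2f}')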

  20. EEO Implications of Job Analyses.

    ERIC Educational Resources Information Center

    Lacy, D. Patrick, Jr.

    1979-01-01

    Discusses job analyses as they relate to the requirements of Title VII of the Civil Rights Act of 1964, the Equal Pay Act of 1963, and the Rehabilitation Act of 1973. Argues that job analyses can establish the job-relatedness of entrance requirements and aid in defenses against charges of discrimination. Journal availability: see EA 511 615.

  1. Scalar limitations of diffractive optical elements

    NASA Technical Reports Server (NTRS)

    Johnson, Eric G.; Hochmuth, Diane; Moharam, M. G.; Pommet, Drew

    1993-01-01

    In this paper, scalar limitations of diffractive optic components are investigated using coupled wave analyses. Results are presented for linear phase gratings and fanout devices. In addition, a parametric curve is given which correlates feature size with scalar performance.

  2. FUEL CASK IMPACT LIMITER VULNERABILITIES

    SciTech Connect

    Leduc, D.; England, J.; Rothermel, R.

    2009-02-09

    Cylindrical fuel casks often have impact limiters surrounding just the ends of the cask shaft in a typical 'dumbbell' arrangement. The primary purpose of these impact limiters is to absorb energy to reduce loads on the cask structure during impacts associated with a severe accident. Impact limiters are also credited in many packages with protecting closure seals and maintaining lower peak temperatures during fire events. For this credit to be taken in safety analyses, the impact limiter attachment system must be shown to retain the impact limiter following Normal Conditions of Transport (NCT) and Hypothetical Accident Conditions (HAC) impacts. Large casks are often certified by analysis only because of the costs associated with testing. Therefore, some cask impact limiter attachment systems have not been tested in real impacts. A recent structural analysis of the T-3 Spent Fuel Containment Cask found problems with the design of the impact limiter attachment system. Assumptions in the original Safety Analysis for Packaging (SARP) concerning the loading in the attachment bolts were found to be inaccurate in certain drop orientations. This paper documents the lessons learned and their applicability to impact limiter attachment system designs.

  3. Uncertainty quantification approaches for advanced reactor analyses.

    SciTech Connect

    Briggs, L. L.; Nuclear Engineering Division

    2009-03-24

    The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The Commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be applied equally well to analyses for high-temperature gas-cooled reactors and liquid metal reactors, and to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selecting an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.
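
    The 95%/95% criterion quoted above has a well-known sample-size consequence that is easy to compute. The sketch below uses Wilks' first-order nonparametric formula, which the abstract does not name but which is the standard choice for this kind of tolerance-limit statement, so treat it as an illustrative assumption: find the smallest n with 1 - coverage**n >= confidence.

      import math

      def wilks_first_order(coverage=0.95, confidence=0.95):
          """Smallest n of code runs whose maximum bounds the given quantile."""
          return math.ceil(math.log(1 - confidence) / math.log(coverage))

      print(wilks_first_order())  # 59 runs for a one-sided 95%/95% statement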

  4. Feed analyses and their interpretation.

    PubMed

    Hall, Mary Beth

    2014-11-01

    Compositional analysis is central to determining the nutritional value of feedstuffs for use in ration formulation. The utility of the values and how they should be used depends on how representative the feed subsample is, the nutritional relevance and analytical variability of the assays, and whether an analysis is suitable to be applied to a particular feedstuff. Commercial analyses presently available for carbohydrates, protein, and fats have improved nutritionally pertinent description of feed fractions. Factors affecting interpretation of feed analyses and the nutritional relevance and application of currently available analyses are discussed.

  5. Neuronal network analyses: premises, promises and uncertainties

    PubMed Central

    Parker, David

    2010-01-01

    Neuronal networks assemble the cellular components needed for sensory, motor and cognitive functions. Any rational intervention in the nervous system will thus require an understanding of network function. Obtaining this understanding is widely considered to be one of the major tasks facing neuroscience today. Network analyses have been performed for some years in relatively simple systems. In addition to the direct insights these systems have provided, they also illustrate some of the difficulties of understanding network function. Nevertheless, in more complex systems (including human), claims are made that the cellular bases of behaviour are, or will shortly be, understood. While the discussion is necessarily limited, this issue will examine these claims and highlight some traditional and novel aspects of network analyses and their difficulties. This introduction discusses the criteria that need to be satisfied for network understanding, and how they relate to traditional and novel approaches being applied to addressing network function. PMID:20603354

  6. Assessments of feline plasma biochemistry reference intervals for three in-house analysers and a commercial laboratory analyser.

    PubMed

    Baral, Randolph M; Dhand, Navneet K; Krockenberger, Mark B; Govendir, Merran

    2015-08-01

    For each species, the manufacturers of in-house analysers (and commercial laboratories) provide standard reference intervals (RIs) that do not account for any differences such as geographical population differences and do not overtly state the potential for variation between results obtained from serum or plasma. Additionally, biases have been demonstrated for in-house analysers which result in different RIs for each different type of analyser. The objective of this study was to calculate RIs (with 90% confidence intervals [CIs]) for 13 biochemistry analytes when tested on three commonly used in-house veterinary analysers, as well as a commercial laboratory analyser. The calculated RIs were then compared with those provided by the in-house analyser manufacturers and the commercial laboratory. Plasma samples were collected from 53 clinically normal cats. After centrifugation, plasma was divided into four aliquots; one aliquot was sent to the commercial laboratory and the remaining three were tested using the in-house biochemistry analysers. The distribution of results was used to choose the appropriate statistical technique for each analyte from each analyser to calculate RIs. Provided reference limits were deemed appropriate if they fell within the 90% CIs of the calculated reference limits. Transference validation was performed on provided and calculated RIs. Twenty-nine of a possible 102 provided reference limits (28%) were within the calculated 90% CIs. To ensure proper interpretation of laboratory results, practitioners should determine RIs for their practice populations and/or use reference change values when assessing their patients' clinical chemistry results.
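
    A minimal sketch of the calculation pattern described above, assuming a simple nonparametric interval with bootstrap 90% CIs around each reference limit (the study chose a statistical technique per analyte based on the distribution of results, which is not reproduced here); the simulated values stand in for one analyte measured in 53 cats:

      import numpy as np

      rng = np.random.default_rng(3)
      values = rng.normal(100, 10, size=53)          # one analyte, 53 cats

      ri = np.percentile(values, [2.5, 97.5])        # central 95% interval
      boot = np.array([np.percentile(rng.choice(values, values.size),
                                     [2.5, 97.5])
                       for _ in range(5000)])        # bootstrap resamples
      ci_low = np.percentile(boot[:, 0], [5, 95])    # 90% CI, lower limit
      ci_high = np.percentile(boot[:, 1], [5, 95])   # 90% CI, upper limit
      print(ri, ci_low, ci_high)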

  7. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques

    NASA Astrophysics Data System (ADS)

    Ivanov, Anton; Oberst, Jürgen; Yershov, Vladimir; Muller, Jan-Peter; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back to the mid-1970s and examine time-varying changes (such as the recent discovery of boulder movement), tracking inter-year seasonal changes and looking for occurrences of fresh craters. Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004, the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25 m nadir images), with 87% coverage, of which more than 65% is useful for stereo mapping. NASA began imaging the surface of Mars with flybys in the 1960s, followed by the first orbital images at resolutions below 100 m from the Viking Orbiters in the late 1970s. The most recent orbiter, NASA's MRO, has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20 cm) and ≈5% from CTX (≈6 m) in stereo. Within the iMars project (http://i-Mars.eu), a fully automated large-scale processing ("Big Data") solution is being developed to generate the best possible multi-resolution DTM of Mars. In addition, HRSC OrthoRectified Images (ORI) will be used as a georeference basis, so that all higher-resolution ORIs will be co-registered to the HRSC DTM products (50-100 m grid) generated at DLR and to those from CTX (6-20 m grid) and HiRISE (1-3 m grid) produced on a large-scale Linux cluster based at MSSL. The HRSC products will be employed to provide a geographic reference for all current, future and historical NASA products using automated co-registration based on feature points, and initial results will be shown here. In 2015, much of the NASA and ESA orbital image archive will be co-registered and the updated georeferencing

  8. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques

    NASA Astrophysics Data System (ADS)

    Ivanov, Anton; Muller, Jan-Peter; Tao, Yu; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis; Fanara, Lida; Waenlish, Marita; Walter, Sebastian; Steinkert, Ralf; Schreiner, Bjorn; Cantini, Federico; Wardlaw, Jessica; Sprinks, James; Giordano, Michele; Marsh, Stuart

    2016-07-01

    Understanding planetary atmosphere-surface and extra-terrestrial-surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back in time to the mid-1970s, examining time-varying changes such as the recent discovery of mass movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters. Within the EU FP-7 iMars project, UCL have developed a fully automated multi-resolution DTM processing chain, called the Co-registration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP), which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX & HiRISE DTMs & ORIs, DLR have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed and is being applied to level-1 EDR images taken by the 4 NASA orbital cameras since 1976, using the HRSC map products (both mosaics and orbital strips) as a map-base. The project has also included Mars Radar profiles from the Mars Express and Mars Reconnaissance Orbiter missions. A webGIS has been developed for displaying this time sequence of imagery, and a demonstration will be shown applied to one of the map-sheets. Automated quality control techniques are applied to screen for suitable images, and these are extended to detect temporal changes in features on the surface such as mass movements, streaks, spiders, impact craters, CO2 geysers and Swiss Cheese terrain. These data mining techniques are then being employed within a citizen science project within the Zooniverse family

  9. Pawnee Nation Energy Option Analyses

    SciTech Connect

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-21

    Pawnee Nation of Oklahoma Energy Option Analyses In 2003, the Pawnee Nation leadership identified the need for the tribe to comprehensively address its energy issues. During a strategic energy planning workshop a general framework was laid out and the Pawnee Nation Energy Task Force was created to work toward further development of the tribe's energy vision. The overarching goals of the "first steps" project were to identify the most appropriate focus for its strategic energy initiatives going forward, and to provide information necessary to take the next steps in pursuit of the "best fit" energy options. Description of Activities Performed The research team reviewed existing data pertaining to the availability of biomass (focusing on woody biomass, agricultural biomass/bio-energy crops, and methane capture), solar, wind and hydropower resources on the Pawnee-owned lands. Using these data, combined with assumptions about costs and revenue streams, the research team performed preliminary feasibility assessments for each resource category. The research team also reviewed available funding resources and made recommendations to Pawnee Nation highlighting those resources with the greatest potential for financially-viable development, both in the near-term and over a longer time horizon. Findings and Recommendations Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym, and an exterior wood-fired boiler system at the tribe's main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor

  10. Feed analyses and their interpretation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Compositional analysis is central to determining the nutritional value of feedstuffs. The utility of the values and how they should be used depends on how representative the feed subsample is, the nutritional relevance of the assays, analytical variability of the analyses, and whether a feed is suit...

  11. Mitogenomic analyses of caniform relationships.

    PubMed

    Arnason, Ulfur; Gullberg, Anette; Janke, Axel; Kullberg, Morgan

    2007-12-01

    Extant members of the order Carnivora split into two basal groups, Caniformia (dog-like carnivorans) and Feliformia (cat-like carnivorans). In this study we address phylogenetic relationships within Caniformia applying various methodological approaches to analyses of complete mitochondrial genomes. Pinnipeds are currently well represented with respect to mitogenomic data and here we add seven mt genomes to the non-pinniped caniform collection. The analyses identified a basal caniform divergence between Cynoidea and Arctoidea. Arctoidea split into three primary groups, Ursidae (including the giant panda), Pinnipedia, and a branch, Musteloidea, which encompassed Ailuridae (red panda), Mephitidae (skunks), Procyonidae (raccoons) and Mustelidae (mustelids). The analyses favored a basal arctoid split between Ursidae and a branch containing Pinnipedia and Musteloidea. Within the Musteloidea there was a preference for a basal divergence between Ailuridae and remaining families. Among the latter, the analyses identified a sister group relationship between Mephitidae and a branch that contained Procyonidae and Mustelidae. The mitogenomic distance between the wolf and the dog was shown to be at the same level as that of basal human divergences. The wolf and the dog are commonly considered as separate species in the popular literature. The mitogenomic result is inconsistent with that understanding at the same time as it provides insight into the time of the domestication of the dog relative to basal human mitogenomic divergences.

  12. Introduction to Project Materials Analyses

    ERIC Educational Resources Information Center

    Haley, Frances

    1972-01-01

    The author introduces twenty-six analyses, describes the method of analysis, includes a selection policy for this issue, and lists ten analysts. Each project, analyzed by the combined criteria of the CMAS and the NCSS Guidelines, is examined for background information, product characteristics, rationale and objectives, content, methodology,…

  13. Analysing Children's Drawings: Applied Imagination

    ERIC Educational Resources Information Center

    Bland, Derek

    2012-01-01

    This article centres on a research project in which freehand drawings provided a richly creative and colourful data source of children's imagined, ideal learning environments. Issues concerning the analysis of the visual data are discussed, in particular, how imaginative content was analysed and how the analytical process was dependent on an…

  14. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques: One year on with a focus on auto-DTM, auto-coregistration and citizen science.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Sidiropoulos, Panagiotis; Yershov, Vladimir; Gwinner, Klaus; van Gasselt, Stephan; Walter, Sebastian; Ivanov, Anton; Morley, Jeremy; Sprinks, James; Houghton, Robert; Bamford, Stephen; Kim, Jung-Rack

    2015-04-01

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of 10cm) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the ability to overlay different epochs back to the mid-1970s and examine time-varying changes (such as impact craters, RSLs, CO2 geysers, gullies, boulder movements and a host of ice-related phenomena). Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004 the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25m nadir images), with 98% coverage at image scales ≤100 m, more than 70% of it useful for stereo mapping (e.g. atmosphere sufficiently clear). It has been demonstrated [Gwinner et al., 2010] that HRSC has the highest possible planimetric accuracy of ≤25m and is well co-registered with MOLA, which represents the global 3D reference frame. HRSC 3D and terrain-corrected image products therefore represent the best available 3D reference data for Mars. Recently, [Gwinner et al., 2015] have shown the ability to generate mosaiced DTM and BRDF-corrected surface reflectance maps. NASA began imaging the surface of Mars with flybys in the 1960s, followed by the first orbital images at resolutions below 100 m from the Viking Orbiter in the late 1970s. The most recent orbiter, NASA's MRO, began imaging in November 2006 and has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈25cm) and ≈5% from CTX (≈6m) in stereo. Unfortunately, for most of these NASA images (especially MGS, MO, VO and HiRISE), the accuracy of georeferencing is often worse than the quality of the Mars reference data from HRSC. This reduces their value for analysing changes in time

  15. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques: an overview and a request for scientific inputs.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Gwinner, Klaus; van Gasselt, Stephan; Ivanov, Anton; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Yershov, Vladimir; Sidiropoulos, Panagiotis; Kim, Jungrack

    2014-05-01

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 7 years, especially in 3D imaging of surface shape (down to resolutions of 10cm) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the ability to overlay different epochs back to the mid-1970s and examine time-varying changes (such as the recent discovery of boulder movement [Orloff et al., 2011] or the sublimation of sub-surface ice revealed by meteoritic impact [Byrne et al., 2009]), as well as examine geophysical phenomena, such as surface roughness on different length scales. Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004 the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25m nadir images), with 87% coverage at image scales ≤25 m, more than 65% of it useful for stereo mapping (e.g. atmosphere sufficiently clear). It has been demonstrated [Gwinner et al., 2010] that HRSC has the highest possible planimetric accuracy of ≤25m and is well co-registered with MOLA, which represents the global 3D reference frame. HRSC 3D and terrain-corrected image products therefore represent the best available 3D reference data for Mars. NASA began imaging the surface of Mars with flybys in the 1960s, followed by the first orbital images at resolutions below 100 m from the Viking Orbiter in the late 1970s. The most recent orbiter, NASA's MRO, began imaging in November 2006 and has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20cm) and ≈5% from CTX (≈6m) in stereo. Unfortunately, for most of these NASA images (especially MGS, MO, VO and HiRISE), the accuracy of georeferencing is often worse than the quality of the Mars reference data from HRSC. This reduces their value for analysing

  16. Workload analysis of assembling process

    NASA Astrophysics Data System (ADS)

    Ghenghea, L. D.

    2015-11-01

    Workload is the most important indicator for managers responsible for industrial technological processes, whether these are automated, mechanized or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper deals with workload analyses of a largely manual assembly technology for a roller-bearing assembly process, carried out in a large company with integrated bearing manufacturing. In these analyses, the delay-sampling technique was used to identify and categorize all the bearing assemblers' activities and to obtain information about how much of the 480-minute working day workers devote to each activity. The study shows some ways to increase process productivity without supplementary investment, and also indicates that automation of the process could be the solution for gaining maximum productivity.
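
    The delay-sampling arithmetic is straightforward to illustrate. In the hedged sketch below, invented observation counts are converted into minutes of the 480-minute day per activity, with a standard binomial confidence bound attached to each proportion:

      import math

      # Invented tallies of random work-sampling observations per activity.
      observations = {'assemble': 310, 'fetch parts': 95, 'idle/delay': 75}
      n = sum(observations.values())
      for activity, k in observations.items():
          p = k / n                                   # estimated time share
          minutes = 480 * p                           # share of the working day
          half_width = 1.96 * math.sqrt(p * (1 - p) / n)  # 95% CI on p
          print(f'{activity}: {minutes:.0f} min/day '
                f'(+/- {480 * half_width:.0f} min)')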

  17. Mitogenomic analyses of eutherian relationships.

    PubMed

    Arnason, U; Janke, A

    2002-01-01

    Reasonably correct phylogenies are fundamental to the testing of evolutionary hypotheses. Here, we present phylogenetic findings based on analyses of 67 complete mammalian mitochondrial (mt) genomes. The analyses, irrespective of whether they were performed at the amino acid (aa) level or on nucleotides (nt) of first and second codon positions, placed Erinaceomorpha (hedgehogs and their kin) as the sister group of remaining eutherians. Thus, the analyses separated Erinaceomorpha from other traditional lipotyphlans (e.g., tenrecs, moles, and shrews), making traditional Lipotyphla polyphyletic. Both the aa and nt data sets identified the two order-rich eutherian clades, the Cetferungulata (comprising Pholidota, Carnivora, Perissodactyla, Artiodactyla, and Cetacea) and the African clade (Tenrecomorpha, Macroscelidea, Tubulidentata, Hyracoidea, Proboscidea, and Sirenia). The study corroborated recent findings that have identified a sister-group relationship between Anthropoidea and Dermoptera (flying lemurs), thereby making our own order, Primates, a paraphyletic assembly. Molecular estimates using paleontologically well-established calibration points placed the origin of most eutherian orders in Cretaceous times, 70-100 million years before present (MYBP). The same estimates place all primate divergences much earlier than traditionally believed. For example, the divergence between Homo and Pan is estimated to have taken place approximately 10 MYBP, a dating consistent with recent findings in primate paleontology.

  18. Biological aerosol warner and analyser

    NASA Astrophysics Data System (ADS)

    Schlemmer, Harry; Kürbitz, Gunther; Miethe, Peter; Spieweck, Michael

    2006-05-01

    The development of an integrated sensor device, BiSAM (Biological Sampling and Analysing Module), designed for rapid detection of aerosol or dust particles potentially loaded with biological warfare agents, is presented. All functional steps, from aerosol collection via immuno-analysis to display of results, are fully automated. The core component of the sensor device is an ultra-sensitive rapid analyser, the PBA (Portable Benchtop Analyser), based on a 3-dimensional immuno-filtration column of large internal area, Poly HRP marker technology and kinetic optical detection. High sensitivity despite the short measuring time, high chemical stability of the micro column and robustness against interferents make the PBA an ideal tool for fielded sensor devices. It is especially favourable to combine the PBA with a bio-collector because virtually no sample preparation is necessary. Overall, the BiSAM device is capable of detecting and identifying living micro-organisms (bacteria, spores, viruses) as well as toxins in a measuring cycle typically lasting half an hour. In each batch, up to 12 different tests can be run in parallel, together with positive and negative controls to keep the false alarm rate low.

  19. Analysing photonic structures in plants

    PubMed Central

    Vignolini, Silvia; Moyroud, Edwige; Glover, Beverley J.; Steiner, Ullrich

    2013-01-01

    The outer layers of a range of plant tissues, including flower petals, leaves and fruits, exhibit an intriguing variation of microscopic structures. Some of these structures include ordered periodic multilayers and diffraction gratings that give rise to interesting optical appearances. The colour arising from such structures is generally brighter than pigment-based colour. Here, we describe the main types of photonic structures found in plants and discuss the experimental approaches that can be used to analyse them. These experimental approaches allow identification of the physical mechanisms producing structural colours with a high degree of confidence. PMID:23883949

  20. Repeatability of published microarray gene expression analyses.

    PubMed

    Ioannidis, John P A; Allison, David B; Ball, Catherine A; Coulibaly, Issa; Cui, Xiangqin; Culhane, Aedín C; Falchi, Mario; Furlanello, Cesare; Game, Laurence; Jurman, Giuseppe; Mangion, Jon; Mehta, Tapan; Nitzberg, Michael; Page, Grier P; Petretto, Enrico; van Noort, Vera

    2009-02-01

    Given the complexity of microarray-based gene expression studies, guidelines encourage transparent design and public data availability. Several journals require public data deposition and several public databases exist. However, not all data are publicly available, and even when available, it is unknown whether the published results are reproducible by independent scientists. Here we evaluated the replication of data analyses in 18 articles on microarray-based gene expression profiling published in Nature Genetics in 2005-2006. One table or figure from each article was independently evaluated by two teams of analysts. We reproduced two analyses in principle and six partially or with some discrepancies; ten could not be reproduced. The main reason for failure to reproduce was data unavailability, and discrepancies were mostly due to incomplete data annotation or specification of data processing and analysis. Repeatability of published microarray studies is apparently limited. More strict publication rules enforcing public data availability and explicit description of data processing and analysis should be considered.

  1. Analyses of containment structures with corrosion damage

    SciTech Connect

    Cherry, J.L.

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, "lower bound", "best estimate", and "upper bound" failure levels were predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  2. Transportation systems analyses: Volume 1: Executive Summary

    NASA Astrophysics Data System (ADS)

    1993-05-01

    The principal objective of this study is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform man delivery and return, cargo transfer, cargo delivery, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed, but not limited to, include the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationships between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. This executive summary of the transportation systems analyses (TSM) semi-annual report addresses the SSF logistics resupply. Our analysis parallels the ongoing NASA SSF redesign effort. Therefore, there could be no SSF design to drive our logistics analysis. Consequently, the analysis attempted to bound the reasonable SSF design possibilities (and the subsequent transportation implications). No other strategy really exists until after a final decision is rendered on the SSF configuration.

  3. Waste Stream Analyses for Nuclear Fuel Cycles

    SciTech Connect

    N. R. Soelberg

    2010-08-01

    A high-level study was performed in Fiscal Year 2009 for the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE) Advanced Fuel Cycle Initiative (AFCI) to provide information for a range of nuclear fuel cycle options (Wigeland 2009). At that time, some fuel cycle options could not be adequately evaluated since they were not well defined and lacked sufficient information. As a result, five families of these fuel cycle options are being studied during Fiscal Year 2010 by the Systems Analysis Campaign for the DOE NE Fuel Cycle Research and Development (FCRD) program. The quality and completeness of data available to date for the fuel cycle options are insufficient to perform quantitative radioactive waste analyses using recommended metrics. This study has therefore been limited thus far to qualitative analyses of waste streams from the candidate fuel cycle options, because quantitative data on wastes from the front end, fuel fabrication, reactor core structure, and used fuel for these options are generally not yet available.

  4. Computational analyses of multilevel discourse comprehension.

    PubMed

    Graesser, Arthur C; McNamara, Danielle S

    2011-04-01

    The proposed multilevel framework of discourse comprehension includes the surface code, the textbase, the situation model, the genre and rhetorical structure, and the pragmatic communication level. We describe these five levels when comprehension succeeds and also when there are communication misalignments and comprehension breakdowns. A computer tool has been developed, called Coh-Metrix, that scales discourse (oral or print) on dozens of measures associated with the first four discourse levels. The measurement of these levels with an automated tool helps researchers track and better understand multilevel discourse comprehension. Two sets of analyses illustrate the utility of Coh-Metrix in discourse theory and educational practice. First, Coh-Metrix was used to measure the cohesion of the text base and situation model, as well as potential extraneous variables, in a sample of published studies that manipulated text cohesion. This analysis helped us better understand what was precisely manipulated in these studies and the implications for discourse comprehension mechanisms. Second, Coh-Metrix analyses are reported for samples of narrative and science texts in order to advance the argument that traditional text difficulty measures are limited because they fail to accommodate most of the levels of the multilevel discourse comprehension framework.

  5. ISFSI site boundary radiation dose rate analyses.

    PubMed

    Hagler, R J; Fero, A H

    2005-01-01

    Across the globe, nuclear utilities are designing and analysing Independent Spent Fuel Storage Installations (ISFSI) for above-ground spent-fuel storage, primarily to mitigate the filling of spent-fuel pools. Using a conjoining of discrete ordinates transport theory (DORT) and Monte Carlo (MCNP) techniques, an ISFSI was analysed to determine neutron and photon dose rates for a generic overpack and ISFSI pad configuration and design, at distances ranging from 1 to 1700 m from the ISFSI array. The calculated dose rates are used to address the requirements of 10CFR72.104, which provides limits to be enforced for the protection of the public by the NRC in regard to ISFSI facilities. For this overpack, dose rates decrease by three orders of magnitude through the first 200 m moving away from the ISFSI. In addition, the contributions from different source terms change over distance: although side photons provide the majority of the dose rate in this calculation, scattered photons and side neutrons take on more importance as the distance from the ISFSI increases. PMID:16604670

  6. THOR Turbulence Electron Analyser: TEA

    NASA Astrophysics Data System (ADS)

    Fazakerley, Andrew; Moore, Tom; Owen, Chris; Pollock, Craig; Wicks, Rob; Samara, Marilia; Rae, Jonny; Hancock, Barry; Kataria, Dhiren; Rust, Duncan

    2016-04-01

    Turbulence Heating ObserveR (THOR) is the first mission ever flown in space dedicated to plasma turbulence. The Turbulence Electron Analyser (TEA) will measure the plasma electron populations in the mission's Regions of Interest. It will collect a 3D electron velocity distribution with cadences as short as 5 ms. The instrument will be capable of measuring energies up to 30 keV. TEA consists of multiple electrostatic analyser heads arranged so as to measure electrons arriving from look directions covering the full sky, i.e. 4 pi solid angle. The baseline concept is similar to the successful FPI-DES instrument currently operating on the MMS mission. TEA is intended to have a similar angular resolution, but a larger geometric factor. In comparison to earlier missions, TEA improves on the measurement cadence. For example, MMS FPI-DES routinely operates at 30 ms cadence. The objective of measuring distributions at rates as fast as 5 ms is driven by the mission's scientific requirements to resolve electron gyroscale size structures, where plasma heating and fluctuation dissipation is predicted to occur. TEA will therefore be capable of making measurements of the evolution of distribution functions across thin (a few km) current sheets travelling past the spacecraft at up to 600 km/s, of the Power Spectral Density of fluctuations of electron moments and of distributions fast enough to match frequencies with waves expected to be dissipating turbulence (e.g. with 100 Hz whistler waves).

  7. [Network analyses in neuroimaging studies].

    PubMed

    Hirano, Shigeki; Yamada, Makiko

    2013-06-01

    Neurons are anatomically and physiologically connected to each other, and these connections are involved in various neuronal functions. Multiple important neural networks involved in neurodegenerative diseases can be detected using network analyses in functional neuroimaging. First, the basic methods and theories of voxel-based network analyses, such as principal component analysis, independent component analysis, and seed-based analysis, are described. Disease- and symptom-specific brain networks have been identified using glucose metabolism images in patients with Parkinson's disease. These networks enable us to objectively evaluate individual patients and serve as diagnostic tools as well as biomarkers for therapeutic interventions. Many functional MRI studies have shown that "hub" brain regions, such as the posterior cingulate cortex and medial prefrontal cortex, are deactivated by externally driven cognitive tasks; such brain regions form the "default mode network." Recent studies have shown that this default mode network is disrupted from the preclinical phase of Alzheimer's disease and is associated with amyloid deposition in the brain. Some recent studies have shown that the default mode network is also impaired in Parkinson's disease, whereas other studies have shown inconsistent results. These incongruent results could be due to the heterogeneous pharmacological status, differences in mesocortical dopaminergic impairment status, and concomitant amyloid deposition. Future neuroimaging network analysis studies will reveal novel and interesting findings that will uncover the pathomechanisms of neurological and psychiatric disorders. PMID:23735528

  8. Perturbation analyses of intermolecular interactions.

    PubMed

    Koyama, Yohei M; Kobayashi, Tetsuya J; Ueda, Hiroki R

    2011-08-01

    Conformational fluctuations of a protein molecule are important to its function, and it is known that environmental molecules, such as water molecules, ions, and ligand molecules, significantly affect the function by changing the conformational fluctuations. However, it is difficult to systematically understand the role of environmental molecules because intermolecular interactions related to the conformational fluctuations are complicated. To identify important intermolecular interactions with regard to the conformational fluctuations, we develop herein (i) distance-independent and (ii) distance-dependent perturbation analyses of the intermolecular interactions. We show that these perturbation analyses can be realized by performing (i) a principal component analysis using conditional expectations of truncated and shifted intermolecular potential energy terms and (ii) a functional principal component analysis using products of intermolecular forces and conditional cumulative densities. We refer to these analyses as intermolecular perturbation analysis (IPA) and distance-dependent intermolecular perturbation analysis (DIPA), respectively. For comparison of the IPA and the DIPA, we apply them to the alanine dipeptide isomerization in explicit water. Although the first IPA principal components discriminate two states (the α state and PPII (polyproline II) + β states) for larger cutoff length, the separation between the PPII state and the β state is unclear in the second IPA principal components. On the other hand, in the large cutoff value, DIPA eigenvalues converge faster than that for IPA and the top two DIPA principal components clearly identify the three states. By using the DIPA biplot, the contributions of the dipeptide-water interactions to each state are analyzed systematically. Since the DIPA improves the state identification and the convergence rate with retaining distance information, we conclude that the DIPA is a more practical method compared with the

  9. Perturbation analyses of intermolecular interactions

    NASA Astrophysics Data System (ADS)

    Koyama, Yohei M.; Kobayashi, Tetsuya J.; Ueda, Hiroki R.

    2011-08-01

    Conformational fluctuations of a protein molecule are important to its function, and it is known that environmental molecules, such as water molecules, ions, and ligand molecules, significantly affect the function by changing the conformational fluctuations. However, it is difficult to systematically understand the role of environmental molecules because intermolecular interactions related to the conformational fluctuations are complicated. To identify important intermolecular interactions with regard to the conformational fluctuations, we develop herein (i) distance-independent and (ii) distance-dependent perturbation analyses of the intermolecular interactions. We show that these perturbation analyses can be realized by performing (i) a principal component analysis using conditional expectations of truncated and shifted intermolecular potential energy terms and (ii) a functional principal component analysis using products of intermolecular forces and conditional cumulative densities. We refer to these analyses as intermolecular perturbation analysis (IPA) and distance-dependent intermolecular perturbation analysis (DIPA), respectively. For comparison of the IPA and the DIPA, we apply them to the alanine dipeptide isomerization in explicit water. Although the first IPA principal components discriminate two states (the α state and PPII (polyproline II) + β states) for larger cutoff lengths, the separation between the PPII state and the β state is unclear in the second IPA principal components. On the other hand, for large cutoff values, the DIPA eigenvalues converge faster than those of the IPA, and the top two DIPA principal components clearly identify the three states. By using the DIPA biplot, the contributions of the dipeptide-water interactions to each state are analyzed systematically. Since the DIPA improves the state identification and the convergence rate while retaining distance information, we conclude that the DIPA is a more practical method compared with the IPA.

  10. Uncertainty and Sensitivity Analyses Plan

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
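
    As a generic illustration of the workflow such a plan prescribes, the sketch below propagates assumed input distributions through a placeholder dose model and ranks inputs by rank correlation; the model and distributions are invented, not the HEDRIC codes.

        # Hedged sketch of a Monte Carlo uncertainty and sensitivity analysis.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(42)
        n = 10_000

        # Uncertain inputs with assumed distributions (illustrative only).
        release = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # source term
        dispersion = rng.uniform(0.5, 1.5, size=n)            # transport factor
        uptake = rng.normal(1.0, 0.2, size=n)                 # exposure factor

        dose = release * dispersion * uptake  # placeholder dose model

        # Uncertainty: summarize the output distribution.
        lo, hi = np.percentile(dose, [5, 95])
        print("dose median %.2f, 5th-95th percentile %.2f-%.2f"
              % (np.median(dose), lo, hi))

        # Sensitivity: rank-correlate each input with the output.
        for name, x in [("release", release), ("dispersion", dispersion),
                        ("uptake", uptake)]:
            rho, _ = spearmanr(x, dose)
            print("%-10s Spearman rho = %+.2f" % (name, rho))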

  11. Chemical analyses of provided samples

    NASA Technical Reports Server (NTRS)

    Becker, Christopher H.

    1993-01-01

    Two batches of samples were received, and chemical analyses of the surface and near-surface regions of the samples were performed by the surface analysis by laser ionization (SALI) method. The samples included four one-inch optics and several paint samples. The analyses emphasized surface contamination or modification. In these studies, pulsed sputtering by 7 keV Ar+ and primarily single-photon ionization (SPI) by coherent 118 nm radiation (at approximately 5 x 10⁵ W/cm²) were used. For two of the samples, multiphoton ionization (MPI) at 266 nm (approximately 5 x 10¹¹ W/cm²) was also used. Most notable among the results were the silicone contamination on Mg2 mirror 28-92 and the finding that the Long Duration Exposure Facility (LDEF) paint sample had been enriched in K and Na and depleted in Zn, Si, B, and organic compounds relative to the control paint.

  12. An Adaptive TVD Limiter

    NASA Astrophysics Data System (ADS)

    Jeng, Yih Nen; Payne, Uon Jan

    1995-05-01

    An adaptive TVD limiter, based on a limiter approximating the upper boundary of the TVD range and that of the third-order upwind TVD scheme, is developed in this work. The limiter switches to the compressive limiter near a discontinuity, to the third-order TVD scheme's limiter in the smooth region, and to a weighted-average scheme in the transition region between smooth and high-gradient solutions. Numerical experiments show that the proposed scheme works very well for one-dimensional scalar equation problems but becomes less effective in one- and two-dimensional Euler equation problems. Further study is required for the two-dimensional scalar equation problems.
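
    The switching idea can be sketched as a blend of two standard limiters: superbee (compressive, near the TVD upper bound) and the Koren limiter of a third-order upwind-biased scheme. The smoothness sensor below is an assumption for illustration; the paper's actual switching criterion may differ.

        # Hedged sketch of an adaptive flux limiter.
        import numpy as np

        def superbee(r):  # compressive limiter near the TVD upper boundary
            return np.maximum(0.0, np.maximum(np.minimum(2 * r, 1.0),
                                              np.minimum(r, 2.0)))

        def koren(r):  # limiter of a third-order upwind-biased scheme
            return np.maximum(0.0,
                              np.minimum(np.minimum(2 * r, (2 + r) / 3.0), 2.0))

        def adaptive_limiter(r, smoothness):
            """smoothness in [0, 1]: 1 = smooth region, 0 = near a shock."""
            w = np.clip(smoothness, 0.0, 1.0)
            return w * koren(r) + (1.0 - w) * superbee(r)

        # r is the ratio of consecutive solution gradients at a cell face.
        r = np.linspace(0.0, 3.0, 7)
        print(adaptive_limiter(r, smoothness=1.0))  # reduces to Koren
        print(adaptive_limiter(r, smoothness=0.0))  # reduces to superbee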

  13. Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses

    PubMed Central

    Lim, Lucy; Yan, Fangzhi; Bach, Stephen; Pihakari, Katianna; Klein, David

    2016-01-01

    Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution, and sensitivity, FTMS gives researchers a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing the face of how complex analyses are accomplished. First is the ability to determine, quickly and with high mass accuracy, the presence of unknown chemical residues in samples. For years, the field was limited by mass spectrometric methods that required knowing in advance what the compounds of interest were. Second, by utilizing the high-resolution capabilities coupled with the low detection limits of FTMS, analysts can also dilute the sample sufficiently to minimize ionization changes from varied matrices. PMID:26784175
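
    The mass-accuracy arithmetic behind such identifications is simple; the sketch below computes a mass error in parts per million for invented, illustrative masses.

        # Worked example: ppm mass error, the figure of merit quoted for FTMS.
        theoretical = 285.07937  # hypothetical exact mass, Da
        measured = 285.07961     # hypothetical instrument reading, Da

        ppm_error = (measured - theoretical) / theoretical * 1e6
        print("mass error = %.2f ppm" % ppm_error)  # ~0.84 ppm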

  14. Isotopic signatures by bulk analyses

    SciTech Connect

    Efurd, D.W.; Rokop, D.J.

    1997-12-01

    Los Alamos National Laboratory has developed a series of measurement techniques for identification of nuclear signatures by analyzing bulk samples. Two specific applications for isotopic fingerprinting to identify the origin of anthropogenic radioactivity in bulk samples are presented. The first example is the analyses of environmental samples collected in the US Arctic to determine the impact of dumping of radionuclides in this polar region. Analyses of sediment and biota samples indicate that for the areas sampled the anthropogenic radionuclide content of sediments was predominantly the result of the deposition of global fallout. The anthropogenic radionuclide concentrations in fish, birds and mammals were very low. It can be surmised that marine food chains are presently not significantly affected. The second example is isotopic fingerprinting of water and sediment samples from the Rocky Flats Facility (RFP). The largest source of anthropogenic radioactivity presently affecting surface-waters at RFP is the sediments that are currently residing in the holding ponds. One gram of sediment from a holding pond contains approximately 50 times more plutonium than 1 liter of water from the pond. Essentially 100% of the uranium in Ponds A-1 and A-2 originated as depleted uranium. The largest source of radioactivity in the terminal Ponds A-4, B-5 and C-2 was naturally occurring uranium and its decay product radium. The uranium concentrations in the waters collected from the terminal ponds contained 0.05% or less of the interim standard calculated derived concentration guide for uranium in waters available to the public. All of the radioactivity observed in soil, sediment and water samples collected at RFP was naturally occurring, the result of processes at RFP or the result of global fallout. No extraneous anthropogenic alpha, beta or gamma activities were detected. The plutonium concentrations in Pond C-2 appear to vary seasonally.

  15. Imprecise probabilities in engineering analyses

    NASA Astrophysics Data System (ADS)

    Beer, Michael; Ferson, Scott; Kreinovich, Vladik

    2013-05-01

    Probabilistic uncertainty and imprecision in structural parameters and in environmental conditions and loads are challenging phenomena in engineering analyses. They require appropriate mathematical modeling and quantification to obtain realistic results when predicting the behavior and reliability of engineering structures and systems. But the modeling and quantification is complicated by the characteristics of the available information, which involves, for example, sparse data, poor measurements and subjective information. This raises the question whether the available information is sufficient for probabilistic modeling or rather suggests a set-theoretical approach. The framework of imprecise probabilities provides a mathematical basis to deal with these problems which involve both probabilistic and non-probabilistic information. A common feature of the various concepts of imprecise probabilities is the consideration of an entire set of probabilistic models in one analysis. The theoretical differences between the concepts mainly concern the mathematical description of the set of probabilistic models and the connection to the probabilistic models involved. This paper provides an overview on developments which involve imprecise probabilities for the solution of engineering problems. Evidence theory, probability bounds analysis with p-boxes, and fuzzy probabilities are discussed with emphasis on their key features and on their relationships to one another. This paper was especially prepared for this special issue and reflects, in various ways, the thinking and presentation preferences of the authors, who are also the guest editors for this special issue.
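
    As a minimal illustration of probability bounds analysis with a p-box, the sketch below bounds an exceedance probability when only an interval for a normal distribution's mean is known; the threshold, interval, and normal-shape assumption are all invented for illustration.

        # Hedged sketch: bounds on P(load > threshold) from a simple p-box.
        from scipy.stats import norm

        threshold = 10.0
        sigma = 1.0
        mean_lo, mean_hi = 7.0, 8.5  # imprecise knowledge: mean in an interval

        # The p-box is the envelope of all normal CDFs with mean in the
        # interval; at any x the CDF bounds come from the two extreme members.
        cdf_lower = norm.cdf(threshold, loc=mean_hi, scale=sigma)
        cdf_upper = norm.cdf(threshold, loc=mean_lo, scale=sigma)

        print("P(load > %.1f) in [%.4f, %.4f]"
              % (threshold, 1 - cdf_upper, 1 - cdf_lower))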

  16. Wide area microprobe analyser (WAMPA)

    NASA Astrophysics Data System (ADS)

    Rogoyski, A.; Skidmore, B.; Maheswaran, V.; Wright, I.; Zarnecki, J.; Pillinger, C.

    2006-10-01

    Wide area microprobe analyser (WAMPA) represents a new scientific instrument concept for planetary exploration. WAMPA builds on recently published research such as sensor webs and distributed microsensors [The sensor web: a new instrument concept, SPIE Symposium on Integrated Optics, 20 26 January 2001, San Jose, CA; Design considerations for distributed microsensor systems, Proceedings of the IEEE 1999 Custom Integrated Circuits Conference (CICC ’99), May 1999, pp. 279 286] but adds new sensor and localisation concepts. WAMPA is driven by the recurrent theme in spacecraft and sensor design to achieve smaller, lighter and lower cost systems. The essential characteristics of the WAMPA design that differentiates it from other space science instruments are that WAMPA is both a wide area instrument, consisting of a distributed set of sensors, and that each probe is designed to use little, if any, power. It achieves the former by being utilised in large numbers (>10), requiring that the individual probes be low mass (<100g) and low volume (<10cm). It is envisaged that the probes would be dispersed by landers or rovers as mission support instruments rather than primary science instruments and would be used in hostile environments and rugged terrains where the lander/rover could not be risked (see Fig. 1).

  17. Network analyses in systems pharmacology

    PubMed Central

    Berger, Seth I.; Iyengar, Ravi

    2009-01-01

    Systems pharmacology is an emerging area of pharmacology which utilizes network analysis of drug action as one of its approaches. By considering drug actions and side effects in the context of the regulatory networks within which the drug targets and disease gene products function, network analysis promises to greatly increase our knowledge of the mechanisms underlying the multiple actions of drugs. Systems pharmacology can provide new approaches for drug discovery for complex diseases. The integrated approach used in systems pharmacology can allow for drug action to be considered in the context of the whole genome. Network-based studies are becoming an increasingly important tool in understanding the relationships between drug action and disease susceptibility genes. This review discusses how analysis of biological networks has contributed to the genesis of systems pharmacology and how these studies have improved global understanding of drug targets, suggested new targets and approaches for therapeutics, and provided a deeper understanding of the effects of drugs. Taken together, these types of analyses can lead to new therapeutic options while improving the safety and efficacy of existing medications. Contact: ravi.iyengar@mssm.edu PMID:19648136

  18. Comparison between Inbreeding Analyses Methodologies.

    PubMed

    Esparza, Mireia; Martínez-Abadías, Neus; Sjøvold, Torstein; González-José, Rolando; Hernández, Miquel

    2015-12-01

    Surnames are widely used in inbreeding analysis, but the validity of results has often been questioned due to the failure to comply with the prerequisites of the method. Here we analyze inbreeding in Hallstatt (Austria) between the 17th and the 19th centuries, both using genealogies and surnames. The high and significant correlation of the results obtained by both methods demonstrates the validity of the use of surnames in this kind of study. On the other hand, the inbreeding values obtained (0.24 x 10⁻³ in the genealogies analysis and 2.66 x 10⁻³ in the surnames analysis) are lower than those observed in Europe for this period and for this kind of population, showing that the apparent isolation of Hallstatt's population is illusory. The temporal trend of inbreeding in both analyses does not follow the general European pattern, but shows a maximum in 1850 with a later decrease over the second half of the 19th century. This is probably due to the high migration rate implied by the construction of transport infrastructure around the 1870s. PMID:26987150
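
    The surname method can be sketched with the classical Crow-Mange estimators: a random inbreeding component from surname frequencies, F_r = Σ p_i q_i / 4, and total inbreeding from marital isonymy, F = I/4. The toy marriage records below are invented; only the estimator form is standard.

        # Hedged sketch of isonymy-based inbreeding estimation.
        from collections import Counter

        husbands = ["Gruber", "Huber", "Gruber", "Wagner", "Huber", "Gruber"]
        wives    = ["Huber", "Gruber", "Gruber", "Huber", "Wagner", "Huber"]
        n = len(husbands)

        p = Counter(husbands)  # surname counts among husbands
        q = Counter(wives)     # surname counts among wives

        # Random component: expected isonymy under random mating.
        F_random = sum((p[s] / n) * (q[s] / n) for s in set(p) | set(q)) / 4.0

        # Total inbreeding from the observed fraction of isonymous marriages.
        I = sum(h == w for h, w in zip(husbands, wives)) / n
        F_total = I / 4.0

        print("F_random = %.4f, F_total = %.4f" % (F_random, F_total))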

  19. Detector limitations, STAR

    SciTech Connect

    Underwood, D. G.

    1998-07-13

    Every detector has limitations in terms of solid angle, particular technologies chosen, cracks due to mechanical structure, etc. If all of the presently planned parts of STAR [Solenoidal Tracker At RHIC] were in place, these factors would not seriously limit our ability to exploit the spin physics possible in RHIC. What is of greater concern at the moment is the construction schedule for components such as the Electromagnetic Calorimeters, and the limited funding for various levels of triggers.

  20. Systematics and limit calculations

    SciTech Connect

    Fisher, Wade; /Fermilab

    2006-12-01

    This note discusses the estimation of systematic uncertainties and their incorporation into upper limit calculations. Two different approaches to reducing systematics and their degrading impact on upper limits are introduced. An improved χ² function is defined which is useful in comparing Poisson distributed data with models marginalized by systematic uncertainties. Also, a technique using profile likelihoods is introduced which provides a means of constraining the degrading impact of systematic uncertainties on limit calculations.
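
    A minimal version of the profile-likelihood idea: a Poisson counting experiment whose background carries a Gaussian-constrained systematic, profiled over while scanning the signal. All numbers, and the simple chi-square crossing rule, are illustrative assumptions rather than the note's actual procedure.

        # Hedged sketch: 95% CL upper limit with a profiled nuisance parameter.
        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import chi2

        n_obs = 5                # observed events
        b0, sigma_b = 3.0, 0.5   # background estimate and its systematic

        def nll(s, b):  # Poisson term (constants dropped) + Gaussian constraint
            mu = s + b
            return mu - n_obs * np.log(mu) + 0.5 * ((b - b0) / sigma_b) ** 2

        def profiled_nll(s):  # minimize over the nuisance parameter b
            return minimize_scalar(lambda b: nll(s, b), bounds=(1e-6, 20.0),
                                   method="bounded").fun

        # Scan the signal; the limit is where 2*(NLL - NLL_min) crosses the
        # chi-square(1 dof) 95% quantile.
        s_grid = np.linspace(0.0, 12.0, 241)
        q = np.array([profiled_nll(s) for s in s_grid])
        q = 2.0 * (q - q.min())
        upper_limit = s_grid[q <= chi2.ppf(0.95, df=1)].max()
        print("95%% CL upper limit on s: %.2f events" % upper_limit)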

  1. Accelerated safety analyses - structural analyses Phase I - structural sensitivity evaluation of single- and double-shell waste storage tanks

    SciTech Connect

    Becker, D.L.

    1994-11-01

    Accelerated Safety Analyses - Phase I (ASA-Phase I) have been conducted to assess the appropriateness of existing tank farm operational controls and/or limits as now stipulated in the Operational Safety Requirements (OSRs) and Operating Specification Documents, and to establish a technical basis for the waste tank operating safety envelope. Structural sensitivity analyses were performed to assess the response of the different waste tank configurations to variations in loading conditions, uncertainties in loading parameters, and uncertainties in material characteristics. Extensive documentation of the sensitivity analyses conducted and results obtained is provided in the detailed ASA-Phase I report, Structural Sensitivity Evaluation of Single- and Double-Shell Waste Tanks for Accelerated Safety Analysis - Phase I. This document provides a summary of the accelerated safety analyses sensitivity evaluations and the resulting findings.

  2. NOx analyser interference from alkenes

    NASA Astrophysics Data System (ADS)

    Bloss, W. J.; Alam, M. S.; Lee, J. D.; Vazquez, M.; Munoz, A.; Rodenas, M.

    2012-04-01

    Nitrogen oxides (NO and NO2, collectively NOx) are critical intermediates in atmospheric chemistry. NOx abundance controls the levels of the primary atmospheric oxidants OH, NO3 and O3, and regulates the ozone production which results from the degradation of volatile organic compounds. NOx are also atmospheric pollutants in their own right, and NO2 is commonly included in air quality objectives and regulations. In addition to their role in controlling ozone formation, NOx levels affect the production of other pollutants such as the lachrymator PAN, and the nitrate component of secondary aerosol particles. Consequently, accurate measurement of nitrogen oxides in the atmosphere is of major importance for understanding our atmosphere. The most widely employed approach for the measurement of NOx is chemiluminescent detection of NO2* from the NO + O3 reaction, combined with NO2 reduction by either a heated catalyst or photoconvertor. The reaction between alkenes and ozone is also chemiluminescent; therefore alkenes may contribute to the measured NOx signal, depending upon the instrumental background subtraction cycle employed. This interference has been noted previously, and indeed the effect has been used to measure both alkenes and ozone in the atmosphere. Here we report the results of a systematic investigation of the response of a selection of NOx analysers, ranging from systems used for routine air quality monitoring to atmospheric research instrumentation, to a series of alkenes ranging from ethene to the biogenic monoterpenes, as a function of conditions (co-reactants, humidity). Experiments were performed in the European Photoreactor (EUPHORE) to ensure common calibration, a common sample for the monitors, and to unequivocally confirm the alkene (via FTIR) and NO2 (via DOAS) levels present. The instrument responses ranged from negligible levels up to 10 % depending upon the alkene present and conditions used. Such interferences may be of substantial importance

  3. Ergonomic analyses of downhill skiing.

    PubMed

    Clarys, J P; Publie, J; Zinzen, E

    1994-06-01

    The purpose of this study was to provide electromyographic feedback for (1) pedagogical advice in motor learning, (2) the ergonomics of materials choice and (3) competition. For these purposes: (1) EMG data were collected for the Stem Christie, the Stem Turn and the Parallel Christie (three basic ski initiation drills) and verified for the complexity of patterns; (2) integrated EMG (iEMG) and linear envelopes (LEs) were analysed from standardized positions, motions and slopes using compact, soft and competition skis; (3) in a simulated 'parallel special slalom', the muscular activity pattern and intensity of excavated and flat snow conditions were compared. The EMG data from the three studies were collected on location in the French Alps (Tignes). The analog raw EMG was recorded on the slopes with a portable seven-channel FM recorder (TEAC MR30) and with pre-amplified bipolar surface electrodes supplied with a precision instrumentation amplifier (AD 524, Analog Devices, Norwood, USA). The raw signal was full-wave rectified and enveloped using a moving average principle. This linear envelope was normalized according to the highest peak amplitude procedure per subject and was integrated in order to obtain a reference of muscular intensity. In the three studies and for all subjects (elite skiers: n = 25 in studies 1 and 2, n = 6 in study 3), we found a high level of co-contractions in the lower limb extensors and flexors, especially during the extension phase of the ski movement. The Stem Christie and the Parallel Christie showed higher levels of rhythmic movement (92 and 84%, respectively).(ABSTRACT TRUNCATED AT 250 WORDS) PMID:8064970

  4. ITER Safety Analyses with ISAS

    NASA Astrophysics Data System (ADS)

    Gulden, W.; Nisan, S.; Porfiri, M.-T.; Toumi, I.; de Gramont, T. Boubée

    1997-06-01

    Detailed analyses of accident sequences for the International Thermonuclear Experimental Reactor (ITER), from an initiating event to the environmental release of activity, have in the past involved the use of different types of computer codes in a sequential manner. Since these codes were developed at different times in different countries, there is no common computing structure to enable automatic data transfer from one code to the other, and no possibility exists to model or to quantify the effect of coupled physical phenomena. To solve this problem, the Integrated Safety Analysis System of codes (ISAS) is being developed, which allows users to integrate existing computer codes in a coherent manner. This approach is based on the utilization of a command language (GIBIANE) acting as a “glue” to integrate the various codes as modules of a common environment. The present version of ISAS allows comprehensive (coupled) calculations of a chain of codes such as ATHENA (thermal-hydraulic analysis of transients and accidents), INTRA (analysis of in-vessel chemical reactions, pressure build-up, and distribution of reaction products inside the vacuum vessel and adjacent rooms), and NAUA (transport of radiological species within buildings and to the environment). In the near future, the integration of SAFALY (simultaneous analysis of plasma dynamics and thermal behavior of in-vessel components) is also foreseen. The paper briefly describes the essential features of ISAS development and the associated software architecture. It gives first results of a typical ITER accident sequence, a loss of coolant accident (LOCA) in the divertor cooling loop inside the vacuum vessel, amply demonstrating ISAS capabilities.

  5. 14 CFR 23.681 - Limit load static tests.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Limit load static tests. 23.681 Section 23... Control Systems § 23.681 Limit load static tests. (a) Compliance with the limit load requirements of this... the main structure is included. (b) Compliance must be shown (by analyses or individual load...

  6. 14 CFR 23.681 - Limit load static tests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Limit load static tests. 23.681 Section 23... Control Systems § 23.681 Limit load static tests. (a) Compliance with the limit load requirements of this... the main structure is included. (b) Compliance must be shown (by analyses or individual load...

  7. Limits to Inclusion

    ERIC Educational Resources Information Center

    Hansen, Janne Hedegaard

    2012-01-01

    In this article, I will argue that a theoretical identification of the limit to inclusion is needed in the conceptual identification of inclusion. On the one hand, inclusion is formulated as a vision that is, in principle, limitless. On the other hand, there seems to be an agreement that inclusion has a limit in the pedagogical practice. However,…

  8. Dose limits for astronauts

    NASA Technical Reports Server (NTRS)

    Sinclair, W. K.

    2000-01-01

    Radiation exposures to individuals in space can greatly exceed natural radiation exposure on Earth and possibly normal occupational radiation exposures as well. Consequently, procedures limiting exposures would be necessary. Limitations were proposed by the Radiobiological Advisory Panel of the National Academy of Sciences/National Research Council in 1970. This panel recommended short-term limits to avoid deterministic effects and a single career limit (of 4 Sv) based on a doubling of the cancer risk in men aged 35 to 55. Later, when risk estimates for cancer had increased and were recognized to be age and sex dependent, the NCRP, in Report No. 98 in 1989, recommended a range of career limits based on age and sex from 1 to 4 Sv. NCRP is again in the process of revising recommendations for astronaut exposure, partly because risk estimates have increased further and partly to recognize trends in limiting radiation exposure occupationally on the ground. The result of these considerations is likely to be similar short-term limits for deterministic effects but modified career limits.

  9. Dose limits for astronauts.

    PubMed

    Sinclair, W K

    2000-11-01

    Radiation exposures to individuals in space can greatly exceed natural radiation exposure on Earth and possibly normal occupational radiation exposures as well. Consequently, procedures limiting exposures would be necessary. Limitations were proposed by the Radiobiological Advisory Panel of the National Academy of Sciences/National Research Council in 1970. This panel recommended short-term limits to avoid deterministic effects and a single career limit (of 4 Sv) based on a doubling of the cancer risk in men aged 35 to 55. Later, when risk estimates for cancer had increased and were recognized to be age and sex dependent, the NCRP, in Report No. 98 in 1989, recommended a range of career limits based on age and sex from 1 to 4 Sv. NCRP is again in the process of revising recommendations for astronaut exposure, partly because risk estimates have increased further and partly to recognize trends in limiting radiation exposure occupationally on the ground. The result of these considerations is likely to be similar short-term limits for deterministic effects but modified career limits. PMID:11045534

  10. Designing forgiveness interventions: guidance from five meta-analyses.

    PubMed

    Recine, Ann C

    2015-06-01

    The Nursing Interventions Classification system includes forgiveness facilitation as part of the research-based taxonomy of nursing interventions. Nurses need practical guidance in finding the type of intervention that works best in the nursing realm. Five meta-analyses of forgiveness interventions were reviewed to illuminate best practice. The only studies included were meta-analyses of forgiveness interventions in which the authors calculated effect size. Forgiveness interventions were shown to be helpful in addressing mental/emotional health. Components of effective interventions include recalling the offense, empathizing with the offender, committing to forgive, and overcoming feelings of unforgiveness. The meta-analyses showed that people receiving forgiveness interventions reported more forgiveness than those who had no intervention. Forgiveness interventions resulted in more hope and less depression and anxiety than no treatment. A process-based intervention is more effective than a shorter cognitive decision-based model. Limitations of the meta-analyses included inconsistency of measures and a lack of consensus on a definition of forgiveness. Notwithstanding these limitations, the meta-analyses offer strong evidence of what contributes to the effectiveness of forgiveness interventions. The implications of the studies are useful for designing evidence-based clinical forgiveness interventions to enhance nursing practice. PMID:25487180

  11. Tokamak pump limiters

    NASA Astrophysics Data System (ADS)

    Conn, Robert W.

    1984-12-01

    Experiments with pump limiters on several operating tokamaks have established them as efficient collectors of particles. The gas pressure rise within the chamber behind the limiters has been as high as 50 mTorr when there is no internal chamber pumping. Observations of the plasma power distribution over the front face of these limiter modules yield estimates for the scale length of radial power decay consistent with predictions of relatively simple theory. Interaction of the in-flowing plasma with recycling neutral gas near the limiter deflector plate is predicted to become important when the effective ionization mean free path is comparable to or less than the neutral atom mean path length within the throat structure of the limiter. Recent experiments with a scoop limiter without active internal pumping have been carried out in the PDX tokamak with up to 6 MW of auxiliary neutral beam heating. Experiments have also been performed with a rotating head pump limiter in the PLT tokamak in conjunction with RF plasma heating. Extensive experiments have been done in the ISX-B tokamak and first experiments have been completed with the ALT-I limiter in TEXTOR. The pump limiter modules in these latter two machines have internal getter pumping. Experiments in ISX-B are with ohmic and auxiliary neutral beam heating. The results in ISX-B and TEXTOR show that active density control and particle removal are achieved with pump limiters. In ISX-B, the boundary layer (or scrape-off layer) plasma partially screens the core plasma from gas injection. In both ISX-B and TEXTOR, the pressure internal to the module scales linearly with plasma density but in ISX-B, with neutral beam injection, a nonlinear increase is observed at the highest densities studied. Plasma plugging is the suspected cause. Results from PDX suggest that a regime may exist in which core plasma energy confinement improves using a pump limiter during neutral beam injection. Asymmetric radial profiles and an increased

  12. Characterizing limit order prices

    NASA Astrophysics Data System (ADS)

    Withanawasam, R. M.; Whigham, P. A.; Crack, Timothy Falcon

    2013-11-01

    A computational model of a limit order book is used to study the effect of different limit order distribution offsets. Reference prices such as same side/contra side best market prices and last traded price are considered in combination with different price offset distributions. We show that when characterizing limit order prices, varying the offset distribution only produces different behavior when the reference price is the contra side best price. Irrespective of the underlying mechanisms used in computing the limit order prices, the shape of the price graph and the behavior of the average order book profile distribution are strikingly similar in all the considered reference prices/offset distributions. This implies that existing averaging methods can cancel variabilities in limit order book shape/attributes and may be misleading.
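
    The experiment described can be mimicked in a few lines: draw buy limit order prices as offsets below a reference price and compare the resulting depth profiles under two offset distributions. The reference price, distributions, and tick scale below are invented parameters, not those of the cited model.

        # Hedged sketch: limit order prices from reference price minus offset.
        import numpy as np

        rng = np.random.default_rng(7)
        n_orders = 100_000
        contra_best = 100.0  # best ask, reference price for buy limit orders

        offsets = {
            "exponential": rng.exponential(scale=5.0, size=n_orders),
            "uniform": rng.uniform(0.0, 10.0, size=n_orders),
        }

        for name, off in offsets.items():
            prices = contra_best - off
            depth, _ = np.histogram(prices, bins=20, range=(80.0, 100.0))
            near_best = depth[-5:].sum() / n_orders  # within 5 ticks of best
            print("%-11s share of depth within 5 ticks: %.2f"
                  % (name, near_best))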

  13. The seed bank longevity index revisited: limited reliability evident from a burial experiment and database analyses

    PubMed Central

    Saatkamp, Arne; Affre, Laurence; Dutoit, Thierry; Poschlod, Peter

    2009-01-01

    Background and Aims Seed survival in the soil contributes to population persistence and community diversity, creating a need for reliable measures of soil seed bank persistence. Several methods estimate soil seed bank persistence, most of which count seedlings emerging from soil samples. Seasonality, depth distribution and presence (or absence) in vegetation are then used to classify a species' soil seed bank into persistent or transient, often synthesized into a longevity index. This study aims to determine if counts of seedlings from soil samples yield reliable seed bank persistence estimates and if this is correlated to seed production. Methods Seeds of 38 annual weeds taken from arable fields were buried in the field and their viability tested by germination and tetrazolium tests at 6 month intervals for 2·5 years. This direct measure of soil seed survival was compared with indirect estimates from the literature, which use seedling emergence from soil samples to determine seed bank persistence. Published databases were used to explore the generality of the influence of reproductive capacity on seed bank persistence estimates from seedling emergence data. Key Results There was no relationship between a species' soil seed survival in the burial experiment and its seed bank persistence estimate from published data using seedling emergence from soil samples. The analysis of complementary data from published databases revealed that while seed bank persistence estimates based on seedling emergence from soil samples are generally correlated with seed production, estimates of seed banks from burial experiments are not. Conclusions The results can be explained in terms of the seed size–seed number trade-off, which suggests that the higher number of smaller seeds is compensated after germination. Soil seed bank persistence estimates correlated to seed production are therefore not useful for studies on population persistence or community diversity. Confusion of soil seed survival and seed production can be avoided by separate use of soil seed abundance and experimental soil seed survival. PMID:19549641
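
    For reference, the longevity index the title refers to is commonly computed as the share of persistent records among all seed bank records for a species; the sketch below uses that common form, LI = (SP + LP) / (T + SP + LP), with invented counts.

        # Hedged sketch of the seed bank longevity index (common form).
        transient, short_persistent, long_persistent = 6, 3, 1  # record counts

        longevity_index = (short_persistent + long_persistent) / (
            transient + short_persistent + long_persistent)

        # 0 = always recorded as transient, 1 = always recorded as persistent.
        print("longevity index = %.2f" % longevity_index)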

  14. Phylogenomic analyses and improved resolution of Cetartiodactyla.

    PubMed

    Zhou, Xuming; Xu, Shixia; Yang, Yunxia; Zhou, Kaiya; Yang, Guang

    2011-11-01

    The remarkable antiquity, diversity, and significance in the ecology and evolution of Cetartiodactyla have inspired numerous attempts to resolve their phylogenetic relationships. However, previous analyses based on limited samples of nuclear genes or mitochondrial DNA sequences have generated results that were either inconsistent with one another, weakly supported, or highly sensitive to analytical conditions. Here, we present strongly supported results based upon over 1.4 Mb of an aligned DNA sequence matrix from 110 single-copy nuclear protein-coding genes of 21 Cetartiodactyla species, which represent major Cetartiodactyla lineages, and three species of Perissodactyla and Carnivora as outgroups. Phylogenetic analysis of this newly developed genomic sequence data using a codon-based model and recently developed models of the rate autocorrelation resolved the phylogenetic relationships of the major cetartiodactylan lineages and of those lineages with a high degree of confidence. Cetacea was found to nest within Artiodactyla as the sister group of Hippopotamidae, and Tylopoda was corroborated as the sole base clade of Cetartiodactyla. Within Cetacea, the monophyletic status of Odontoceti relative to Mysticeti, the basal position of Physeteroidea in Odontoceti, the non-monophyly of the river dolphins, and the sister relationship between Delphinidae and Monodontidae+Phocoenidae were strongly supported. In particular, the groups of Tursiops (bottlenose dolphins) and Stenella (spotted dolphins) were validated as unnatural groups. Additionally, a very narrow time frame of ∼3 My (million years) was found for the rapid diversification of delphinids in the late Miocene, which made it difficult to resolve the phylogenetic relationships within the Delphinidae, especially for previous studies with limited data sets. The present study provides a statistically well-supported phylogenetic framework of Cetartiodactyla, which represents an important step toward ending some of

  15. El Paso Electric photovoltaic-system analyses

    SciTech Connect

    Not Available

    1982-05-01

    Four analyses were performed on the Newman Power Station PV system. Two were performed using the Photovoltaic Transient Analysis Program (PV-TAP) and two with the SOLCEL II code. The first was to determine the optimum tilt angle for the array and the sensitivity of the annual energy production to variation in tilt angle. The optimum tilt angle was found to be 28°, and variations of 2° produce losses of only 0.06% in the annual energy production. The second analysis assesses the power loss due to cell-to-cell variations in short circuit current and the degree of improvement attainable by sorting cells and matching modules. Typical distributions on short circuit current can cause losses of about 9.5 to 11 percent in peak array power, and sorting cells into 4 bins prior to module assembly can reduce the losses to about 6 to 8 percent. Using modules from the same cell bins in building series strings can reduce the losses to about 4.5 to 6 percent. Results are nearly the same if the array is operated at a fixed voltage. The third study quantifies the magnitude and frequency of occurrence of high cell temperatures due to reverse bias caused by shadowing, and it demonstrates that cell temperatures achieved in reverse bias are higher for cells with larger shunt resistance. The last study assesses the adequacy of transient protection devices on the dc power lines against transients produced by array switching and lightning. Large surge capacitors on the dc power line effectively limit voltage excursions at the array and at the control room due to lightning. Without insertion of series resistors, the current may be limited only by cable and switch impedances, and all elements could be severely stressed. (LEW)

  16. Analyses of Transistor Punchthrough Failures

    NASA Technical Reports Server (NTRS)

    Nicolas, David P.

    1999-01-01

    The failure of two transistors in the Altitude Switch Assembly for the Solid Rocket Booster, followed by two additional failures a year later, presented a challenge to failure analysts. These devices had successfully worked for many years on numerous missions. There was no history of failures with this type of device. Extensive checks of the test procedures gave no indication of the cause. The devices were manufactured more than twenty years ago, and failure information on this lot date code was not readily available. External visual exam, radiography, PEID, and leak testing were performed with nominal results. Electrical testing indicated nearly identical base-emitter and base-collector characteristics (both forward and reverse) with a low-resistance emitter-to-collector short. These characteristics are indicative of a classic failure mechanism called punchthrough. In failure analysis, punchthrough refers to a condition where a relatively low voltage pulse causes the device to conduct very hard, producing localized areas of thermal runaway or "hot spots". At one or more of these hot spots, the excessive currents melt the silicon. Heavily doped emitter material diffuses through the base region to the collector, forming a diffusion pipe that shorts the emitter, base, and collector. Upon cooling, an alloy junction forms between the pipe and the base region. Generally, the hot spot (punchthrough site) is under the bond and no surface artifact is visible. The devices were delidded and the internal structures were examined microscopically. The gold emitter lead was melted on one device, but others had anomalies in the metallization around the intact emitter bonds. The SEM examination confirmed some anomalies to be cosmetic defects while other anomalies were artifacts of the punchthrough site. Subsequent to these analyses, the contractor determined that some irregular, heretofore unreported testing procedures occurred at the time of the failures. These testing

  17. Design basis event consequence analyses for the Yucca Mountain project

    SciTech Connect

    Orvis, D.D.; Haas, M.N.; Martin, J.H.

    1997-02-01

    Design basis event (DBE) definition and analysis is an ongoing and integrated activity among the design and analysis groups of the Yucca Mountain Project (YMP). DBEs are those events that potentially lead to breach of the waste package and waste form (e.g., spent fuel rods) with consequent release of radionuclides to the environment. A Preliminary Hazards Analysis (PHA) provided a systematic screening of external and internal events that are candidate DBEs to be subjected to analyses for radiological consequences. As preparation, pilot consequence analyses for the repository subsurface and surface facilities have been performed to define the methodology, data requirements, and applicable regulatory limits.

  18. Aerothermodynamic Analyses of Towed Ballutes

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Buck, Greg; Moss, James N.; Nielsen, Eric; Berger, Karen; Jones, William T.; Rudavsky, Rena

    2006-01-01

    A ballute (balloon-parachute) is an inflatable, aerodynamic drag device for application to planetary entry vehicles. Two challenging aspects of aerothermal simulation of towed ballutes are considered. The first challenge, simulation of a complete system including inflatable tethers and a trailing toroidal ballute, is addressed using the unstructured-grid, Navier-Stokes solver FUN3D. Auxiliary simulations of a semi-infinite cylinder using the rarefied flow, Direct Simulation Monte Carlo solver, DSV2, provide additional insight into limiting behavior of the aerothermal environment around tethers directly exposed to the free stream. Simulations reveal pressures higher than stagnation and corresponding large heating rates on the tether as it emerges from the spacecraft base flow and passes through the spacecraft bow shock. The footprint of the tether shock on the toroidal ballute is also subject to heating amplification. Design options to accommodate or reduce these environments are discussed. The second challenge addresses time-accurate simulation to detect the onset of unsteady flow interactions as a function of geometry and Reynolds number. Video of unsteady interactions measured in the Langley Aerothermodynamic Laboratory 20-Inch Mach 6 Air Tunnel and CFD simulations using the structured grid, Navier-Stokes solver LAURA are compared for flow over a rigid spacecraft-sting-toroid system. The experimental data provides qualitative information on the amplitude and onset of unsteady motion which is captured in the numerical simulations. The presence of severe unsteady fluid - structure interactions is undesirable and numerical simulation must be able to predict the onset of such motion.

  19. Optical limiting materials

    DOEpatents

    McBranch, Duncan W.; Mattes, Benjamin R.; Koskelo, Aaron C.; Heeger, Alan J.; Robinson, Jeanne M.; Smilowitz, Laura B.; Klimov, Victor I.; Cha, Myoungsik; Sariciftci, N. Serdar; Hummelen, Jan C.

    1998-01-01

    Optical limiting materials. Methanofullerenes, fulleroids and/or other fullerenes chemically altered for enhanced solubility, in liquid solution, and in solid blends with transparent glass (SiO₂) gels or polymers, or semiconducting (conjugated) polymers, are shown to be useful as optical limiters (optical surge protectors). The nonlinear absorption is tunable such that the energy transmitted through such blends saturates at high input energy per pulse over a wide range of wavelengths from 400-1100 nm by selecting the host material for its absorption wavelength and ability to transfer the absorbed energy into the optical limiting composition dissolved therein. This phenomenon should be generalizable to other compositions than substituted fullerenes.

  20. CONTROL LIMITER DEVICE

    DOEpatents

    DeShong, J.A.

    1960-03-01

    A control-limiting device for monitoring a control system is described. The system comprises a condition-sensing device, a condition-varying device exerting control over the condition, and a control means to actuate the condition-varying device. The control-limiting device integrates the total movement or other change of the condition-varying device over any interval of time during a continuum of overlapping periods of time, and if the total movement or change of the condition-varying device exceeds a preset value, the control-limiting device will switch the control of the operated apparatus from automatic to manual control.

  1. Novel limiter pump topologies

    SciTech Connect

    Schultz, J.H.

    1981-01-01

    The use of limiter pumps as the principal plasma exhaust system of a magnetic confinement fusion device promises significant simplification when compared to previously investigated divertor-based systems. Further simplifications, such as the integration of the exhaust system with a radio frequency heating system and with the main reactor shield and structure, are investigated below. The integrity of limiters in a reactor environment is threatened by many mechanisms, the most severe of which may be erosion by sputtering. Two novel topologies are suggested which allow high erosion without limiter failure.

  2. DNA microarray analyses in higher plants.

    PubMed

    Galbraith, David W

    2006-01-01

    DNA microarrays were originally devised and described as a convenient technology for the global analysis of plant gene expression. Over the past decade, their use has expanded enormously to cover all kingdoms of living organisms. At the same time, the scope of applications of microarrays has increased beyond expression analyses, with plant genomics playing a leadership role in the on-going development of this technology. As the field has matured, the rate-limiting step has moved from that of the technical process of data generation to that of data analysis. We currently face major problems in dealing with the accumulating datasets, not simply with respect to how to archive, access, and process the huge amounts of data that have been and are being produced, but also in determining the relative quality of the different datasets. A major recognized concern is the appropriate use of statistical design in microarray experiments, without which the datasets are rendered useless. A vigorous area of current research involves the development of novel statistical tools specifically for microarray experiments. This article describes, in a necessarily selective manner, the types of platforms currently employed in microarray research and provides an overview of recent activities using these platforms in plant biology.

  3. Genomic analyses of the CAM plant pineapple.

    PubMed

    Zhang, Jisen; Liu, Juan; Ming, Ray

    2014-07-01

    The innovation of crassulacean acid metabolism (CAM) photosynthesis in arid and/or low CO2 conditions is a remarkable case of adaptation in flowering plants. As the most important crop that utilizes CAM photosynthesis, pineapple has had its genetic and genomic resources developed over many years. Genetic diversity studies using various types of DNA markers led to the reclassification of the two genera Ananas and Pseudananas and nine species into one genus Ananas and two species, A. comosus and A. macrodontes, with five botanical varieties in A. comosus. Five genetic maps have been constructed using F1 or F2 populations, and high-density genetic maps generated by genotyping by sequencing are essential resources for sequencing and assembling the pineapple genome and for marker-assisted selection. There are abundant expressed sequence tag resources but limited genomic sequences in pineapple. Genes involved in the CAM pathway have been analysed in several CAM plants, but only a few of them are from pineapple. A reference genome of pineapple is being generated and will accelerate genetic and genomic research in this major CAM crop. This reference genome of pineapple provides the foundation for studying the origin and regulatory mechanism of CAM photosynthesis, and the opportunity to evaluate the classification of Ananas species and botanical cultivars.

  4. Transportation systems analyses. Volume 1: Executive summary

    NASA Astrophysics Data System (ADS)

    1992-11-01

    The principal objective is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform crew delivery and return, cargo transfer, cargo delivery and return, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed include, but are not limited to: the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationship between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. Conceptual studies of transportation elements contribute to the systems approach by identifying elements (such as ETO node and transfer/excursion vehicles) needed in current and planned transportation systems. These studies are also a mechanism to integrate the results of relevant parallel studies.

  5. Bayesian network learning for natural hazard analyses

    NASA Astrophysics Data System (ADS)

    Vogel, K.; Riggelsen, C.; Korup, O.; Scherbaum, F.

    2014-09-01

    Modern natural hazards research requires dealing with several uncertainties that arise from limited process knowledge, measurement errors, censored and incomplete observations, and the intrinsic randomness of the governing processes. Nevertheless, deterministic analyses are still widely used in quantitative hazard assessments despite the pitfall of misestimating the hazard and any ensuing risks. In this paper we show that Bayesian networks offer a flexible framework for capturing and expressing a broad range of uncertainties encountered in natural hazard assessments. Although Bayesian networks are well studied in theory, their application to real-world data is far from straightforward, and requires specific tailoring and adaptation of existing algorithms. We offer suggestions as how to tackle frequently arising problems in this context and mainly concentrate on the handling of continuous variables, incomplete data sets, and the interaction of both. By way of three case studies from earthquake, flood, and landslide research, we demonstrate the method of data-driven Bayesian network learning, and showcase the flexibility, applicability, and benefits of this approach. Our results offer fresh and partly counterintuitive insights into well-studied multivariate problems of earthquake-induced ground motion prediction, accurate flood damage quantification, and spatially explicit landslide prediction at the regional scale. In particular, we highlight how Bayesian networks help to express information flow and independence assumptions between candidate predictors. Such knowledge is pivotal in providing scientists and decision makers with well-informed strategies for selecting adequate predictor variables for quantitative natural hazard assessments.
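
    A toy version of such a network makes the inference mechanics concrete: the sketch below computes a posterior in a three-node discrete Bayesian network by enumeration. The structure, states, and probabilities are invented for illustration, not the paper's learned networks.

        # Hedged sketch: exact inference by enumeration in a tiny discrete BN.
        import itertools

        p_depth = {"low": 0.7, "high": 0.3}        # P(water depth)
        p_prec = {"yes": 0.4, "no": 0.6}           # P(precaution taken)
        p_damage = {                               # P(severe damage | parents)
            ("low", "yes"): 0.05, ("low", "no"): 0.15,
            ("high", "yes"): 0.35, ("high", "no"): 0.70,
        }

        # Query: P(depth = high | damage = severe), by full enumeration.
        num = den = 0.0
        for d, pr in itertools.product(p_depth, p_prec):
            joint = p_depth[d] * p_prec[pr] * p_damage[(d, pr)]
            den += joint
            if d == "high":
                num += joint

        print("P(depth=high | severe damage) = %.2f" % (num / den))  # ~0.69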

  6. Local spin analyses using density functional theory

    NASA Astrophysics Data System (ADS)

    Abate, Bayileyegn; Peralta, Juan

    Local spin analysis is a valuable technique in computational investigations of magnetic interactions in mono- and polynuclear transition metal complexes, which play vital roles in catalysis, molecular magnetism, artificial photosynthesis, and several other commercially important applications. The relative size and complex electronic structure of transition metal complexes often prohibit the use of multi-determinant approaches, and hence practical calculations are often limited to single-determinant methods. Density functional theory (DFT) has become one of the most successful and widely used computational tools for the electronic structure study of complex chemical systems, transition metal complexes in particular. Within the DFT formalism, a more flexible and complete theoretical modeling of transition metal complexes can be achieved by considering noncollinear spins, in which the spin density is allowed to adopt noncollinear structures instead of being constrained to align parallel or antiparallel to a universal axis of magnetization. In this meeting, I will present local spin analysis results obtained using different DFT functionals. Local projection operators are used to decompose the expectation value of the total spin operator, an approach first introduced by Clark and Davidson.
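
    For context, the local spin decomposition referred to (Clark and Davidson) can be written compactly; the projector choice varies between schemes, so the form below is a generic statement rather than the poster's specific implementation:

        \langle \hat{S}^2 \rangle
          = \sum_{A} \langle \hat{S}_A \cdot \hat{S}_A \rangle
          + \sum_{A \neq B} \langle \hat{S}_A \cdot \hat{S}_B \rangle,
        \qquad
        \hat{S}_A = \sum_{i} \hat{P}_A(i)\, \hat{\mathbf{s}}(i),

    where the diagonal terms give the local spins on fragments A and the off-diagonal terms measure the (ferro- or antiferromagnetic) couplings between fragments.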

  7. Limits in decision making arise from limits in memory retrieval.

    PubMed

    Giguère, Gyslain; Love, Bradley C

    2013-05-01

    Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people's memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people's test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.
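
    The core claim is easy to reproduce in simulation: a decider who retrieves only k stored examples per decision is noisier when trained on the actual probabilistic labels than on idealized (deterministic) ones. The sketch below is a minimal such simulation with invented parameters.

        # Hedged sketch: limited memory retrieval vs. idealized training.
        import numpy as np

        rng = np.random.default_rng(3)
        p_true = 0.7               # P(category A | stimulus) in the environment
        n_train, k, n_trials = 100, 5, 20_000

        actual = rng.random(n_train) < p_true     # labels drawn probabilistically
        idealized = np.ones(n_train, dtype=bool)  # all labeled with the mode

        def accuracy(memory):
            correct = 0
            for _ in range(n_trials):
                sample = rng.choice(memory, size=k)  # stochastic retrieval
                correct += sample.mean() > 0.5       # majority vote picks A
            return correct / n_trials  # the optimal strategy always picks A

        print("actual training:    %.3f" % accuracy(actual))
        print("idealized training: %.3f" % accuracy(idealized))  # -> 1.000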

  8. PLT rotating pumped limiter

    SciTech Connect

    Cohen, S.A.; Budny, R.V.; Corso, V.; Boychuck, J.; Grisham, L.; Heifetz, D.; Hosea, J.; Luyber, S.; Loprest, P.; Manos, D.

    1984-07-01

    A limiter with a specially contoured front face and the ability to rotate during tokamak discharges has been installed in a PLT pump duct. These features have been selected to handle the unique particle removal and heat load requirements of ICRF heating and lower-hybrid current-drive experiments. The limiter has been conditioned and commissioned in an ion-beam test stand by irradiation with 1 MW power, 200 ms duration beams of 40 keV hydrogen ions. Operation in PLT during ohmic discharges has proven the ability of the limiter to reduce localized heating caused by energetic electron bombardment and to remove about 2% of the ions lost to the PLT walls and limiters.

  9. PEAK LIMITING AMPLIFIER

    DOEpatents

    Goldsworthy, W.W.; Robinson, J.B.

    1959-03-31

    A peak voltage amplitude limiting system adapted for use with a cascade-type amplifier is described. In its detailed aspects, the invention includes an amplifier having at least a first triode tube and a second triode tube, the cathode of the second tube being connected to the anode of the first tube. A peak limiter triode tube has its control grid coupled to the anode of the second tube and its anode connected to the cathode of the second tube. The operation of the limiter is controlled by a bias voltage source connected to the control grid of the limiter tube, and the output of the system is taken from the anode of the second tube.

  10. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 8 2013-10-01 2013-10-01 false Market analyses. 1180.7 Section 1180.7..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For major and significant transactions, applicants shall submit impact analyses (exhibit 12) describing...

  11. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 8 2012-10-01 2012-10-01 false Market analyses. 1180.7 Section 1180.7..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For major and significant transactions, applicants shall submit impact analyses (exhibit 12) describing...

  12. Pawnee Nation Energy Option Analyses

    SciTech Connect

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-31

    introduced two model energy codes Pawnee Nation should consider for adoption. Summary of Current and Expected Future Electricity Usage The research team provided a summary overview of electricity usage patterns in current buildings and included discussion of known plans for new construction. Utility Options Review Pawnee Nation electric utility options were analyzed through a four-phase process, which included: 1) summarizing the relevant utility background information; 2) gathering relevant utility assessment data; 3) developing a set of realistic Pawnee electric utility service options, and 4) analyzing the various Pawnee electric utility service options for the Pawnee Energy Team's consideration. III. Findings and Recommendations Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym, and an exterior wood-fired boiler system at the tribe's main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor market developments in the bio-energy industry and establish contacts with research institutions with which the tribe could potentially partner in grant-funded research initiatives. In addition, a substantial effort by the Kaw and Cherokee tribes is underway to pursue wind development at the Chilocco School Site in northern Oklahoma where Pawnee is a joint landowner. Pawnee Nation representatives should become actively involved in these development discussions and should explore the potential for joint investment in wind development at the Chilocco site.

  13. Comparison of retrospective analyses of the global ocean heat content

    NASA Astrophysics Data System (ADS)

    Chepurin, Gennady A.; Carton, James A.

    1999-07-01

    In this study, we compare seven retrospective analyses of basin- to global-scale upper ocean temperature. The analyses span a minimum of 10 years during the 50-year period since World War II. Three of the analyses (WOA-94, WHITE, BMRC) are based on objective analysis and thus, do not rely on a numerical forecast model. The remaining four (NCEP, WAJSOWICZ, ROSATI, SODA) are based on data assimilation in which the numerical forecast is provided by some form of the Geophysical Fluid Dynamics Laboratory Modular Ocean Model driven by historical winds. The comparison presented here is limited to heat content in the upper 250 m, information that is available for all analyses. The results are presented in three frequency bands: seasonal, interannual (periods of 1-5 years), and decadal (periods of 5-25 years). At seasonal frequencies, all of the analyses are quite similar. Otherwise, the differences among analyses are limited to the regions of the western boundary currents, and some regions in the Southern Hemisphere. At interannual frequencies, significant differences appear between the objective analyses and the data assimilation analyses. Along the equator in the Pacific, where variability is dominated by El Niño, the objective analyses have somewhat noisier fields, as well as reduced variance prior to 1980 due to lack of observations. Still, the correlation among analyses generally exceeds 80% in this region. Along the equator in the Atlantic, the correlation is lower (30-60%) although inspection of the time series shows that the same biennial progression of warm and cool events appears in all analyses since 1980. In the midlatitude Pacific agreement among objective analyses and data assimilation analyses is good. The analysis of Rosati et al. [Rosati, A., Gudgel, R., Miyakoda, K., 1995. Decadal analysis produced from an ocean assimilation system. Mon. Weather Rev., 123, 2, 206.] differs somewhat from the others apparently because in this analysis, the forecast model
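
    The band-by-band comparison described above can be reproduced in miniature: band-pass each pair of heat-content series, then correlate within the band. A minimal sketch, assuming SciPy is available and using two hypothetical monthly series and an illustrative Butterworth filter (not the authors' exact processing):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def band(series, period_lo, period_hi, fs=12.0):
        # Band-pass keeping periods between period_lo and period_hi (years);
        # fs is the sampling rate in samples per year (12 = monthly data).
        b, a = butter(4, [1.0 / period_hi, 1.0 / period_lo], btype="band", fs=fs)
        return filtfilt(b, a, series)

    rng = np.random.default_rng(0)
    t = np.arange(600) / 12.0                    # 50 years of monthly samples
    # Two hypothetical stand-ins for upper-250 m heat content from different analyses:
    enso = np.sin(2 * np.pi * t / 3.7)           # shared ~4-year ENSO-like signal
    a1 = enso + 0.3 * rng.standard_normal(600)
    a2 = enso + 0.3 * rng.standard_normal(600)

    for name, (lo, hi) in {"interannual": (1, 5), "decadal": (5, 25)}.items():
        x, y = band(a1, lo, hi), band(a2, lo, hi)
        print(f"{name} band correlation: {np.corrcoef(x, y)[0, 1]:.2f}")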

  14. Considerations for planning and evaluating economic analyses of telemental health.

    PubMed

    Luxton, David D

    2013-08-01

    The economic evaluation of telemental health (TMH) is necessary to inform ways to decrease the cost of delivering care, to improve access to care, and to make decisions about the allocation of resources. Previous reviews of telehealth economic analysis studies have concluded that there are significant methodological deficiencies and inconsistencies that limit the ability to make generalized conclusions about the costs and benefits of telehealth programs. Published economic evaluations specific to TMH are also limited. There are unique factors that influence costs in TMH that are necessary for those who are planning and evaluating economic analyses to consider. The purpose of this review is to summarize the main problems and limitations of published economic analyses, to discuss considerations specific to TMH, and to inform and encourage the economic evaluation of TMH in both the public and private sectors. The topics presented here include perspective of costs, direct and indirect costs, and technology, as well as research methodology considerations. The integration of economic analyses into effectiveness trials, the standardization of outcome measurement, and the development of TMH economic evaluation guidelines are recommended.

  15. Optimal Limited Contingency Planning

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Smith, David E.

    2003-01-01

    For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.
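
    As a toy illustration of the limited-branching idea (not the paper's POMDP algorithm), the sketch below computes the optimal expected value over a small, fully observable MDP when at most k observation (branch) points may be spent along any execution path; the paper instead bounds branches over the whole plan, and everything about the MDP here is hypothetical.

    from functools import lru_cache

    # Toy MDP: P[s][a] = ((next_state, prob), ...); R[s] = reward on arriving in s.
    P = {0: {0: ((1, 0.6), (2, 0.4)), 1: ((1, 0.2), (2, 0.8))},
         1: {0: ((1, 1.0),), 1: ((2, 1.0),)},
         2: {0: ((2, 1.0),), 1: ((1, 1.0),)}}
    R = {0: 0.0, 1: 1.0, 2: 0.0}

    def step(belief, a):
        """Push a belief (tuple of (state, prob) pairs) through action a."""
        out = {}
        for s, p in belief:
            for s2, q in P[s][a]:
                out[s2] = out.get(s2, 0.0) + p * q
        return tuple(sorted(out.items()))

    @lru_cache(maxsize=None)
    def V(belief, horizon, k):
        """Best expected return when the plan may still branch k times."""
        if horizon == 0:
            return 0.0
        best = float("-inf")
        for a in (0, 1):
            nxt = step(belief, a)
            gain = sum(p * R[s] for s, p in nxt)
            cont = V(nxt, horizon - 1, k)      # open-loop: one plan for all outcomes
            if k > 0:                          # spend a branch point: observe, then replan
                cont = max(cont, sum(p * V(((s, 1.0),), horizon - 1, k - 1)
                                     for s, p in nxt))
            best = max(best, gain + cont)
        return best

    for k in range(3):
        print(f"k={k}: optimal expected return = {V(((0, 1.0),), 4, k):.3f}")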

  16. Improved limited discrepancy search

    SciTech Connect

    Korf, R.E.

    1996-12-31

    We present an improvement to Harvey and Ginsberg's limited discrepancy search algorithm, which eliminates much of the redundancy in the original by generating each path from the root to the maximum search depth only once. For a complete binary tree of depth d, this reduces the asymptotic complexity from O(((d+2)/2) 2^d) to O(2^d). The savings is much less in a partial tree search, or in a heavily pruned tree. The overhead of the improved algorithm on a complete binary tree is only a factor of b/(b - 1) compared to depth-first search. While this constant factor is greater on a heavily pruned tree, this improvement makes limited discrepancy search a viable alternative to depth-first search whenever the entire tree may not be searched. Finally, we present both positive and negative empirical results on the utility of limited discrepancy search for the problem of number partitioning.
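
    A compact sketch of improved limited discrepancy search on a complete binary tree (it follows the published algorithm's structure, but the toy tree and goal test are hypothetical): iteration k visits exactly the leaves whose paths contain k discrepancies, so no leaf is generated twice across iterations.

    def ilds(node, k, depth, is_goal, children):
        """Return a goal leaf reachable with exactly k discrepancies, else None."""
        kids = children(node)
        if not kids:
            return node if k == 0 and is_goal(node) else None
        preferred, alternative = kids
        if depth > k:        # enough depth remains to spend all k discrepancies later
            found = ilds(preferred, k, depth - 1, is_goal, children)
            if found is not None:
                return found
        if k > 0:            # take a discrepancy: go against the heuristic preference
            return ilds(alternative, k - 1, depth - 1, is_goal, children)
        return None

    def search(root, max_depth, is_goal, children):
        for k in range(max_depth + 1):       # one iteration per discrepancy count
            found = ilds(root, k, max_depth, is_goal, children)
            if found is not None:
                return found, k
        return None, None

    # Toy tree: nodes are bit-strings; the heuristic prefers appending "0".
    children = lambda n: [] if len(n) == 4 else [n + "0", n + "1"]
    leaf, k = search("", 4, lambda n: n == "0110", children)
    print(f"found {leaf} with {k} discrepancies")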

  17. Force Limit System

    NASA Technical Reports Server (NTRS)

    Pawlik, Ralph; Krause, David; Bremenour, Frank

    2011-01-01

    The Force Limit System (FLS) was developed to protect test specimens from inadvertent overload. The load limit value is fully adjustable by the operator and works independently of the test system control as a mechanical (non-electrical) device. When a test specimen is loaded via an electromechanical or hydraulic test system, a chance of an overload condition exists. An overload applied to a specimen could result in irreparable damage to the specimen and/or fixturing. The FLS restricts the maximum load that an actuator can apply to a test specimen. When testing limited-run test articles or using very expensive fixtures, the use of such a device is highly recommended. Test setups typically use electronic peak protection, which can be the source of overload due to malfunctioning components or the inability to react quickly enough to load spikes. The FLS works independently of the electronic overload protection.

  18. Limitations of angiotensin inhibition.

    PubMed

    Nobakht, Niloofar; Kamgar, Mohammad; Rastogi, Anjay; Schrier, Robert W

    2011-06-01

    Angiotensin-converting-enzyme (ACE) inhibitors and angiotensin-receptor blockers (ARBs) have beneficial effects in patients with cardiovascular disease and in those with diabetes-related and diabetes-independent chronic kidney diseases. These beneficial effects are independent of the antihypertensive properties of these drugs. However, ACE inhibitors, ARBs, and combinations of agents in these two classes are limited in the extent to which they inhibit the activity of the renin-angiotensin-aldosterone system (RAAS). Angiotensin breakthrough and aldosterone breakthrough may be important mechanisms involved in limiting the effects of ACE inhibitors and ARBs. Whether direct renin inhibitors will overcome some of the limitations of ACE-inhibitor and ARB therapy by blocking the deleterious effects of the RAAS remains to be proven. This important area is, however, in need of further investigation.

  19. Estimating turbine limit load

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1993-01-01

    A method for estimating turbine limit-load pressure ratio from turbine map information is presented and demonstrated. It is based on a mean line analysis at the last-rotor exit. The required map information includes choke flow rate at all speeds as well as pressure ratio and efficiency at the onset of choke at design speed. One- and two-stage turbines are analyzed to compare the results with those from a more rigorous off-design flow analysis and to show the sensitivities of the computed limit-load pressure ratios to changes in the key assumptions.

  20. Finite Element analyses of soil bioengineered slopes

    NASA Astrophysics Data System (ADS)

    Tamagnini, Roberto; Switala, Barbara Maria; Sudan Acharya, Madhu; Wu, Wei; Graf, Frank; Auer, Michael; te Kamp, Lothar

    2014-05-01

    Soil bioengineering methods are not only effective from an economical point of view, but they are also interesting as fully ecological solutions. The presented project aims to define a numerical model which includes the impact of vegetation on slope stability, considering both mechanical and hydrological effects. In this project, a constitutive model has been developed that accounts for the multi-phase nature of the soil, namely the partly saturated condition, and it also includes the effects of a biological component. The constitutive equation is implemented in the Finite Element (FE) software Comes-Geo with an implicit integration scheme that accounts for the collapse of the soil structure due to wetting. The mathematical formulation of the constitutive equations is introduced by means of thermodynamics, and it simulates the growth of the biological system over time. The numerical code is then applied in the analysis of an ideal rainfall-induced landslide. The slope is analyzed for vegetated and non-vegetated conditions. The final results allow quantitative assessment of the impact of vegetation on slope stability. This allows drawing conclusions and deciding whether it is worthwhile to use soil bioengineering methods in slope stabilization instead of traditional approaches. The application of the FE method shows some advantages with respect to the commonly used limit equilibrium analyses, because it can account for the real coupled strain-diffusion nature of the problem. The mechanical strength of roots is in fact influenced by the stress evolution in the slope. Moreover, the FE method does not need a pre-definition of any failure surface. The FE method can also be used to monitor the progressive failure of the soil bioengineered system, as it calculates the amount of displacements and strains of the model slope. The preliminary study results show that the formulated equations can be useful for analysis and evaluation of different soil bio

  1. Limits to Stability

    ERIC Educational Resources Information Center

    Cottey, Alan

    2012-01-01

    The author reflects briefly on what limited degree of global ecological stability and human cultural stability may be achieved, provided that humanity retains hope and does not give way to despair or hide in denial. These thoughts were triggered by a recent conference on International Stability and Systems Engineering. (Contains 5 notes.)

  2. The Outer Limits: English.

    ERIC Educational Resources Information Center

    Tyler, Barbara R.; Biesekerski, Joan

    The Quinmester course "The Outer Limits" involves an exploration of unknown worlds, mental and physical, through fiction and nonfiction. Its purpose is to focus attention on the ongoing conquest of the frontiers of the mind, the physical world, and outer space. The subject matter includes identification and investigation of unknown worlds in the…

  3. Defined by Limitations

    ERIC Educational Resources Information Center

    Arriola, Sonya; Murphy, Katy

    2010-01-01

    Undocumented students are a population defined by limitations. Their lack of legal residency and any supporting paperwork (e.g., Social Security number, government issued identification) renders them essentially invisible to the American and state governments. They cannot legally work. In many states, they cannot legally drive. After the age of…

  4. Limitations in scatter propagation

    NASA Astrophysics Data System (ADS)

    Lampert, E. W.

    1982-04-01

    A short description of the main scatter propagation mechanisms is presented; troposcatter, meteor burst communication and chaff scatter. For these propagation modes, in particular for troposcatter, the important specific limitations discussed are: link budget and resulting hardware consequences, diversity, mobility, information transfer and intermodulation and intersymbol interference, frequency range and future extension in frequency range for troposcatter, and compatibility with other services (EMC).

  5. Learning without Limits

    ERIC Educational Resources Information Center

    Hart, Susan; Dixon, Annabelle; Drummond, Mary Jane; McIntyre, Donald

    2004-01-01

    This book explores ways of teaching that are free from determinist beliefs about ability. In a detailed critique of the practices of ability labelling and ability-focused teaching, "Learning without Limits" examines the damage these practices can do to young people, teachers and the curriculum. Drawing on a research project at the University of…

  6. Thermal background noise limitations

    NASA Technical Reports Server (NTRS)

    Gulkis, S.

    1982-01-01

    Modern detection systems are increasingly limited in sensitivity by the background thermal photons which enter the receiving system. Expressions for the fluctuations of detected thermal radiation are derived. Incoherent and heterodyne detection processes are considered. References to the subject of photon detection statistics are given.

  7. Intellectually Limited Mothers.

    ERIC Educational Resources Information Center

    Kaminer, Ruth K.; Cohen, Herbert J.

    The paper examines whether a relationship exists between intellectual limitation on the mother's part and unfavorable outcomes for her children. The scope of the problem is examined and the difficulties inherent in estimating prevalence are noted. The issue of child neglect, rather than abuse is shown to be a major problem among institutionalized…

  8. Fracture mechanics validity limits

    NASA Technical Reports Server (NTRS)

    Lambert, Dennis M.; Ernst, Hugo A.

    1994-01-01

    Fracture behavior is characterized by a dramatic loss of strength compared to elastic deformation behavior. Fracture parameters have been developed, and each exhibits a range within which it is valid for predicting growth. Each is limited by the assumptions made in its development: all are defined within a specific context. For example, the stress intensity parameter, K, and the crack driving force, G, are derived using an assumption of linear elasticity. To use K or G, the zone of plasticity must be small compared to the physical dimensions of the object being loaded. This ensures an elastic response, and in this context K and G work well. Rice's J-integral has been used beyond the limits imposed on K and G. J requires an assumption of nonlinear elasticity, which is not characteristic of real material behavior, but is thought to be a reasonable approximation if unloading is kept to a minimum. As well, the constraint cannot change dramatically (typically, the crack extension is limited to ten percent of the initial remaining ligament length). Rice et al. investigated the properties required of J-type parameters, J_x, and showed that the time rate, dJ_x/dt, must not be a function of the crack extension rate, da/dt. Ernst devised the modified-J parameter, J_M, that meets this criterion. J_M correlates fracture data to much higher crack growth than does J. Ultimately, a limit to the validity of J_M is anticipated, and this has been estimated to be at a crack extension of about 40 percent of the initial remaining ligament length. None of the various parameters can be expected to describe fracture in an environment of gross plasticity, in which case the process is better described by deformation parameters, e.g., stress and strain. In the current study, various schemes to identify the onset of plasticity-dominated behavior, i.e., the end of fracture mechanics validity, are presented. Each validity limit parameter is developed in
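
    For reference, the textbook relations behind the parameters named above (standard definitions, not taken from this abstract): Rice's path-independent J-integral over a contour Γ around the crack tip, and the linear-elastic relation tying G to K, under which J reduces to G in the small-scale-yielding limit.

    \[
      J = \int_{\Gamma} \left( W \,\mathrm{d}y - T_i \,\frac{\partial u_i}{\partial x} \,\mathrm{d}s \right),
      \qquad
      G = \frac{K_I^2}{E'},
      \qquad
      E' =
      \begin{cases}
        E, & \text{plane stress}\\
        E/(1-\nu^2), & \text{plane strain}
      \end{cases}
    \]

    Here W is the strain energy density, T_i the traction acting on the contour, and u_i the displacement field.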

  9. Electron/proton spectrometer certification documentation analyses

    NASA Technical Reports Server (NTRS)

    Gleeson, P.

    1972-01-01

    A compilation of analyses generated during the development of the electron-proton spectrometer for the Skylab program is presented. The data documents the analyses required by the electron-proton spectrometer verification plan. The verification plan was generated to satisfy the ancillary hardware requirements of the Apollo Applications program. The certification of the spectrometer requires that various tests, inspections, and analyses be documented, approved, and accepted by reliability and quality control personnel of the spectrometer development program.

  10. MELCOR analyses for accident progression issues

    SciTech Connect

    Dingman, S.E.; Shaffer, C.J.; Payne, A.C.; Carmel, M.K. )

    1991-01-01

    Results of calculations performed with MELCOR and HECTR in support of the NUREG-1150 study are presented in this report. The analyses examined a wide range of issues. The analyses included integral calculations covering an entire accident sequence, as well as calculations that addressed specific issues that could affect several accident sequences. The results of the analyses for Grand Gulf, Peach Bottom, LaSalle, and Sequoyah are described, and the major conclusions are summarized. 23 refs., 69 figs., 8 tabs.

  11. Database-Driven Analyses of Astronomical Spectra

    NASA Astrophysics Data System (ADS)

    Cami, Jan

    2012-03-01

    species to the fullerene species C60 and C70 [4]. Given the large number and variety of molecules detected in space, molecular infrared spectroscopy can be used to study pretty much any astrophysical environment that is not too energetic to dissociate the molecules. At the lowest energies, it is interesting to note that molecules such as CN have been used to measure the temperature of the Cosmic Microwave Background (see e.g., Ref. 15). The great diagnostic potential of infrared molecular spectroscopy comes at a price though. Extracting the physical parameters from the observations requires expertise in knowing how various physical processes and instrumental characteristics play together in producing the observed spectra. In addition to the astronomical aspects, this often includes interpreting and understanding the limitations of laboratory data and quantum-chemical calculations; the study of the interaction of matter with radiation at microscopic scales (called radiative transfer, akin to ray tracing) and the effects of observing (e.g., smoothing and resampling) on the resulting spectra and possible instrumental effects (e.g., fringes). All this is not trivial. To make matters worse, observational spectra often contain many components, and might include spectral contributions stemming from very different physical conditions. Fully analyzing such observations is thus a time-consuming task that requires mastery of several techniques. And with ever-increasing rates of observational data acquisition, it seems clear that in the near future, some form of automation is required to handle the data stream. It is thus appealing to consider what part of such analyses could be done without too much human intervention. Two different aspects can be separated: the first step involves simply identifying the molecular species present in the observations. Once the molecular inventory is known, we can try to extract the physical parameters from the observed spectral properties. For both

  12. Limits in decision making arise from limits in memory retrieval.

    PubMed

    Giguère, Gyslain; Love, Bradley C

    2013-05-01

    Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people's memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people's test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers. PMID:23610402

  13. Telescopic limiting magnitudes

    NASA Technical Reports Server (NTRS)

    Schaefer, Bradley E.

    1990-01-01

    The prediction of the magnitude of the faintest star visible through a telescope by a visual observer is a difficult problem in physiology. Many prediction formulas have been advanced over the years, but most do not even consider the magnification used. Here, the prediction algorithm problem is attacked with two complementary approaches: (1) First, a theoretical algorithm was developed based on physiological data for the sensitivity of the eye. This algorithm also accounts for the transmission of the atmosphere and the telescope, the brightness of the sky, the color of the star, the age of the observer, the aperture, and the magnification. (2) Second, 314 observed values for the limiting magnitude were collected as a test of the formula. It is found that the formula does accurately predict the average observed limiting magnitudes under all conditions.
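
    Schaefer's algorithm folds in atmospheric and telescope transmission, sky brightness, star color, observer age, aperture, and magnification; as a far cruder point of reference, a common back-of-the-envelope estimate scales the naked-eye limit by aperture light grasp alone (all constants below are illustrative assumptions, not the paper's model).

    import math

    def limiting_magnitude(aperture_mm, naked_eye_limit=6.0, pupil_mm=7.0,
                           transmission=0.8):
        """Naked-eye limit plus the magnitude gain from the larger light grasp."""
        gain = 2.5 * math.log10(transmission * (aperture_mm / pupil_mm) ** 2)
        return naked_eye_limit + gain

    for d in (60, 150, 300):   # typical amateur apertures in mm
        print(f"D = {d} mm: m_lim ~ {limiting_magnitude(d):.1f}")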

  14. Limited health literacy in advanced kidney disease.

    PubMed

    Taylor, Dominic M; Bradley, John A; Bradley, Clare; Draper, Heather; Johnson, Rachel; Metcalfe, Wendy; Oniscu, Gabriel; Robb, Matthew; Tomson, Charles; Watson, Chris; Ravanan, Rommel; Roderick, Paul

    2016-09-01

    Limited health literacy may reduce the ability of patients with advanced kidney disease to understand their disease and treatment and take part in shared decision making. In dialysis and transplant patients, limited health literacy has been associated with low socioeconomic status, comorbidity, and mortality. Here, we investigated the prevalence and associations of limited health literacy using data from the United Kingdom-wide Access to Transplantation and Transplant Outcome Measures (ATTOM) program. Incident dialysis, incident transplant, and transplant wait-listed patients ages 18 to 75 were recruited from 2011 to 2013 and data were collected from patient questionnaires and case notes. A score >2 in the Single-Item Literacy Screener was used to define limited health literacy. Univariate and multivariate analyses were performed to identify patient factors associated with limited health literacy. We studied 6842 patients, 2621 were incident dialysis, 1959 were wait-listed, and 2262 were incident transplant. Limited health literacy prevalence was 20%, 15%, and 12% in each group, respectively. Limited health literacy was independently associated with low socioeconomic status, poor English fluency, and comorbidity. However, transplant wait-listing, preemptive transplantation, and live-donor transplantation were associated with increasing health literacy. PMID:27521115

  15. Quantum limits of thermometry

    SciTech Connect

    Stace, Thomas M.

    2010-07-15

    The precision of typical thermometers consisting of N particles scales as ~1/√N. For high-precision thermometry and thermometric standards, this presents an important theoretical noise floor. Here it is demonstrated that thermometry may be mapped onto the problem of phase estimation, and using techniques from optimal phase estimation, it follows that the scaling of the precision of a thermometer may in principle be improved to ~1/N, representing a Heisenberg limit to thermometry.
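
    The two scalings quoted above are the standard limits of quantum parameter estimation; in quantum Cramér-Rao form (textbook background, not the paper's derivation), with M repetitions and quantum Fisher information F_Q:

    \[
      \Delta T \ge \frac{1}{\sqrt{M\,F_Q(T)}},
      \qquad
      F_Q \propto N \;\Rightarrow\; \Delta T \sim \frac{1}{\sqrt{N}} \quad \text{(standard quantum limit)},
      \qquad
      F_Q \propto N^2 \;\Rightarrow\; \Delta T \sim \frac{1}{N} \quad \text{(Heisenberg limit)}.
    \]

    Uncorrelated probe particles give F_Q linear in N; phase-estimation techniques with suitably correlated probes can reach the quadratic scaling, which is the mapping the abstract exploits.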

  16. Heat flux limiting sleeves

    DOEpatents

    Harris, William G.

    1985-01-01

    A heat limiting tubular sleeve extending over only a portion of a tube having a generally uniform outside diameter, the sleeve being open on both ends, having one end thereof larger in diameter than the other end thereof and having a wall thickness which decreases in the same direction as the diameter of the sleeve decreases so that the heat transfer through the sleeve and tube is less adjacent the large diameter end of the sleeve than adjacent the other end thereof.

  17. Limits of proton conductivity.

    PubMed

    Kreuer, Klaus-Dieter; Wohlfarth, Andreas

    2012-10-15

    Parasitic current seems to be the cause for the "highest proton conductivity" of a material reported to date. Kreuer and Wohlfarth verify this hypothesis by measuring the conductivity of the same materials after preparing them in a different way. They further explain the limits of proton conductivity and comment on the problems of determining the conductivity of small objects (e.g., whiskers, see picture).

  18. Limits of social mobilization.

    PubMed

    Rutherford, Alex; Cebrian, Manuel; Dsouza, Sohan; Moro, Esteban; Pentland, Alex; Rahwan, Iyad

    2013-04-16

    The Internet and social media have enabled the mobilization of large crowds to achieve time-critical feats, ranging from mapping crises in real time, to organizing mass rallies, to conducting search-and-rescue operations over large geographies. Despite significant success, selection bias may lead to inflated expectations of the efficacy of social mobilization for these tasks. What are the limits of social mobilization, and how reliable is it in operating at these limits? We build on recent results on the spatiotemporal structure of social and information networks to elucidate the constraints they pose on social mobilization. We use the DARPA Network Challenge as our working scenario, in which social media were used to locate 10 balloons across the United States. We conduct high-resolution simulations for referral-based crowdsourcing and obtain a statistical characterization of the population recruited, geography covered, and time to completion. Our results demonstrate that the outcome is plausible without the presence of mass media but lies at the limit of what time-critical social mobilization can achieve. Success relies critically on highly connected individuals willing to mobilize people in distant locations, overcoming the local trapping of diffusion in highly dense areas. However, even under these highly favorable conditions, the risk of unsuccessful search remains significant. These findings have implications for the design of better incentive schemes for social mobilization. They also call for caution in estimating the reliability of this capability. PMID:23576719

  1. Elastic limit of silicane.

    PubMed

    Peng, Qing; De, Suvranu

    2014-10-21

    Silicane is fully hydrogenated silicene, a counterpart of graphene, having promising applications in hydrogen storage with capacities larger than 6 wt%. Knowledge of its elastic limit is critical for its applications as well as for tailoring its electronic properties by strain. Here we investigate the mechanical response of silicane to various strains using first-principles calculations based on density functional theory. We illustrate that non-linear elastic behavior is prominent in two-dimensional nanomaterials, as opposed to bulk materials. The elastic limits defined by ultimate tensile strains are 0.22, 0.28, and 0.25 along the armchair, zigzag, and biaxial directions, respectively, an increase of 29%, 33%, and 24%, respectively, in reference to silicene. The in-plane stiffness and Poisson ratio are reduced by 16% and 26%, respectively. However, hydrogenation/dehydrogenation has little effect on its ultimate tensile strengths. We obtained high-order elastic constants for a rigorous continuum description of the nonlinear elastic response. The second-, third-, fourth-, and fifth-order elastic constants are valid up to strains of 0.02, 0.08, 0.13, and 0.21, respectively. The pressure effect on the second-order elastic constants and Poisson's ratio was predicted from the third-order elastic constants. Our results could provide a safe guide for promising applications and for strain-engineering the functions and properties of silicane monolayers. PMID:25190587
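
    High-order elastic constants of this kind are typically obtained by least-squares fitting a truncated stress-strain (or energy-strain) expansion to first-principles strain sweeps. A sketch with synthetic data standing in for DFT output (the 1-D convention, coefficients, and noise level are made up for illustration):

    import numpy as np

    # Nonlinear response: sigma = C2*e + (C3/2)*e^2 + (C4/6)*e^3 + (C5/24)*e^4
    # (a common 1-D Taylor convention; the "true" coefficients are hypothetical).
    true = {"C2": 60.0, "C3": -300.0, "C4": 900.0, "C5": -2000.0}
    eps = np.linspace(0.0, 0.25, 26)            # strain sweep, as in the DFT runs
    sigma = (true["C2"] * eps + true["C3"] / 2 * eps**2
             + true["C4"] / 6 * eps**3 + true["C5"] / 24 * eps**4)
    sigma += np.random.default_rng(1).normal(0.0, 0.02, eps.size)  # numerical noise

    # Least-squares fit of the expansion coefficients (no constant term).
    A = np.column_stack([eps, eps**2 / 2, eps**3 / 6, eps**4 / 24])
    c2, c3, c4, c5 = np.linalg.lstsq(A, sigma, rcond=None)[0]
    print(f"C2={c2:.1f}  C3={c3:.1f}  C4={c4:.1f}  C5={c5:.1f}")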

  2. LIMITS ON QUAOAR'S ATMOSPHERE

    SciTech Connect

    Fraser, Wesley C.; Gwyn, Stephen; Kavelaars, J. J.; Trujillo, Chad; Stephens, Andrew W.; Gimeno, German

    2013-09-10

    Here we present high-cadence photometry, taken by the Acquisition Camera on Gemini South, of a close passage by the ~540 km radius Kuiper belt object (50000) Quaoar of an r' = 20.2 background star. Observations before and after the event show that the apparent impact parameter of the event was 0.019″ ± 0.004″, corresponding to a close approach of 580 ± 120 km to the center of Quaoar. No signatures of occultation by either Quaoar's limb or its potential atmosphere are detectable in the relative photometry of Quaoar and the target star, which were unresolved during closest approach. From this photometry we are able to put constraints on any potential atmosphere Quaoar might have. Using a Markov chain Monte Carlo and likelihood approach, we place pressure upper limits on sublimation-supported, isothermal atmospheres of pure N₂, CO, and CH₄. For N₂ and CO, the upper-limit surface pressures are 1 and 0.7 μbar, respectively. The surface temperature required for such low sublimation pressures is ~33 K, much lower than Quaoar's mean temperature of ~44 K measured by others. We conclude that Quaoar cannot have an isothermal N₂ or CO atmosphere. We cannot eliminate the possibility of a CH₄ atmosphere, but place upper surface pressure and mean temperature limits of ~138 nbar and ~44 K, respectively.

  3. Limits of social mobilization

    PubMed Central

    Rutherford, Alex; Cebrian, Manuel; Dsouza, Sohan; Moro, Esteban; Pentland, Alex; Rahwan, Iyad

    2013-01-01

    The Internet and social media have enabled the mobilization of large crowds to achieve time-critical feats, ranging from mapping crises in real time, to organizing mass rallies, to conducting search-and-rescue operations over large geographies. Despite significant success, selection bias may lead to inflated expectations of the efficacy of social mobilization for these tasks. What are the limits of social mobilization, and how reliable is it in operating at these limits? We build on recent results on the spatiotemporal structure of social and information networks to elucidate the constraints they pose on social mobilization. We use the DARPA Network Challenge as our working scenario, in which social media were used to locate 10 balloons across the United States. We conduct high-resolution simulations for referral-based crowdsourcing and obtain a statistical characterization of the population recruited, geography covered, and time to completion. Our results demonstrate that the outcome is plausible without the presence of mass media but lies at the limit of what time-critical social mobilization can achieve. Success relies critically on highly connected individuals willing to mobilize people in distant locations, overcoming the local trapping of diffusion in highly dense areas. However, even under these highly favorable conditions, the risk of unsuccessful search remains significant. These findings have implications for the design of better incentive schemes for social mobilization. They also call for caution in estimating the reliability of this capability. PMID:23576719

  4. Deriving exposure limits

    NASA Astrophysics Data System (ADS)

    Sliney, David H.

    1990-07-01

    Historically, many different agencies and standards organizations have proposed laser occupational exposure limits (ELs) or maximum permissible exposure (MPE) levels. Although some safety standards have been limited in scope to manufacturer system safety performance standards or to codes of practice, most have included occupational ELs. Initially, in the 1960s, attention was drawn to setting ELs; however, as greater experience accumulated in the use of lasers and some accident experience had been gained, safety procedures were developed. It became clear by 1971, after the first decade of laser use, that detailed hazard evaluation of each laser environment was too complex for most users, and a scheme of hazard classification evolved. Today most countries follow a scheme of four major hazard classifications as defined in Document WS 825 of the International Electrotechnical Commission (IEC). The classifications and the associated accessible emission limits (AELs) were based upon the ELs. The EL and AEL values today are in surprisingly good agreement worldwide. There exists a greater range of safety requirements for the user for each class of laser. The current MPEs (i.e., ELs) and their basis are highlighted in this presentation.

  5. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., an analysis of traffic flows indicating patterns of geographic competition or product competition... 49 Transportation 8 2011-10-01 2011-10-01 false Market analyses. 1180.7 Section 1180.7..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a)...

  6. Aviation System Analysis Capability Executive Assistant Analyses

    NASA Technical Reports Server (NTRS)

    Roberts, Eileen; Kostiuk, Peter

    1999-01-01

    This document describes the analyses that may be incorporated into the Aviation System Analysis Capability Executive Assistant. The document will be used as a discussion tool to enable NASA and other integrated aviation system entities to evaluate, discuss, and prioritize analyses.

  7. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  8. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  9. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  10. 14 CFR 27.681 - Limit load static tests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Limit load static tests. 27.681 Section 27... static tests. (a) Compliance with the limit load requirements of this part must be shown by tests in.... (b) Compliance must be shown (by analyses or individual load tests) with the special...

  11. 14 CFR 27.681 - Limit load static tests.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Limit load static tests. 27.681 Section 27... static tests. (a) Compliance with the limit load requirements of this part must be shown by tests in.... (b) Compliance must be shown (by analyses or individual load tests) with the special...

  12. 14 CFR 29.681 - Limit load static tests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Limit load static tests. 29.681 Section 29... load static tests. (a) Compliance with the limit load requirements of this part must be shown by tests... included; (b) Compliance must be shown (by analyses or individual load tests) with the special...

  13. 14 CFR 29.681 - Limit load static tests.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Limit load static tests. 29.681 Section 29... load static tests. (a) Compliance with the limit load requirements of this part must be shown by tests... included; (b) Compliance must be shown (by analyses or individual load tests) with the special...

  14. 14 CFR 25.681 - Limit load static tests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Limit load static tests. 25.681 Section 25... load static tests. (a) Compliance with the limit load requirements of this Part must be shown by tests... included. (b) Compliance must be shown (by analyses or individual load tests) with the special...

  15. 14 CFR 25.681 - Limit load static tests.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Limit load static tests. 25.681 Section 25... load static tests. (a) Compliance with the limit load requirements of this Part must be shown by tests... included. (b) Compliance must be shown (by analyses or individual load tests) with the special...

  16. Operator-free flow injection analyser

    PubMed Central

    de Faria, Lourival C.

    1991-01-01

    A flow injection analyser has been constructed to allow operator-free determination of up to 40 samples. Besides the usual FIA apparatus, the analyser includes a home-made sample introduction device made with three electromechanical three-way valves and an auto-sampler from Technicon which has been adapted to be commanded by an external digital signal. The analyser is controlled by a single-board SDK-8085 microcomputer. The necessary interface to couple the analyser components to the microcomputer is also described. The analyser was evaluated for a Cr(VI)-FIA determination, showing very good performance: the relative standard deviation for 15 signals from the injection of 100 μl of a 1.0 mg ml⁻¹ standard Cr(VI) solution was 0.5%. PMID:18924899

  17. Fault current limiter

    DOEpatents

    Darmann, Francis Anthony

    2013-10-08

    A fault current limiter (FCL) includes a series of high permeability posts that collectively define a core for the FCL. A DC coil, for the purpose of saturating a portion of the high permeability posts, surrounds the complete structure outside of an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in such a manner that the senses of the magnetic field produced by each AC coil in the corresponding high permeability core are opposing. There are insulation barriers between phases to improve the dielectric withstand properties of the dielectric medium.

  18. Advancing the quantification of humid tropical forest cover loss with multi-resolution optical remote sensing data: Sampling & wall-to-wall mapping

    NASA Astrophysics Data System (ADS)

    Broich, Mark

    Humid tropical forest cover loss is threatening the sustainability of ecosystem goods and services as vast forest areas are rapidly cleared for industrial scale agriculture and tree plantations. Despite the importance of humid tropical forest in the provision of ecosystem services and economic development opportunities, the spatial and temporal distribution of forest cover loss across large areas is not well quantified. Here I improve the quantification of humid tropical forest cover loss using two remote sensing-based methods: sampling and wall-to-wall mapping. In all of the presented studies, the integration of coarse spatial, high temporal resolution data with moderate spatial, low temporal resolution data enable advances in quantifying forest cover loss in the humid tropics. Imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) are used as the source of coarse spatial resolution, high temporal resolution data and imagery from the Landsat Enhanced Thematic Mapper Plus (ETM+) sensor are used as the source of moderate spatial, low temporal resolution data. In a first study, I compare the precision of different sampling designs for the Brazilian Amazon using the annual deforestation maps derived by the Brazilian Space Agency for reference. I show that sampling designs can provide reliable deforestation estimates; furthermore, sampling designs guided by MODIS data can provide more efficient estimates than the systematic design used for the United Nations Food and Agricultural Organization Forest Resource Assessment 2010. Sampling approaches, such as the one demonstrated, are viable in regions where data limitations, such as cloud contamination, limit exhaustive mapping methods. Cloud-contaminated regions experiencing high rates of change include Insular Southeast Asia, specifically Indonesia and Malaysia. Due to persistent cloud cover, forest cover loss in Indonesia has only been mapped at a 5-10 year interval using photo interpretation of single
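
    The precision comparison among sampling designs rests on standard design-based estimation: strata likely to contain loss (e.g., MODIS-flagged blocks) are sampled more heavily, and per-stratum sample means are expanded to an area total. A minimal sketch against a synthetic reference map (all numbers hypothetical, not the study's data):

    import numpy as np

    rng = np.random.default_rng(42)
    N = 10_000                                   # sampling blocks in the region
    # Synthetic reference map: deforested fraction per block (rare, patchy change).
    truth = np.where(rng.random(N) < 0.05, rng.random(N) * 0.6, 0.0)
    flagged = (truth > 0) & (rng.random(N) < 0.9)   # MODIS-like change indicator

    def srs(n):
        """Simple random sample estimate of total deforested area (block units)."""
        return N * truth[rng.choice(N, n, replace=False)].mean()

    def stratified(n, frac_flagged=0.8):
        """Put most of the sample in the MODIS-flagged stratum, then expand."""
        total = 0.0
        for idx, share in ((np.flatnonzero(flagged), frac_flagged),
                           (np.flatnonzero(~flagged), 1 - frac_flagged)):
            pick = rng.choice(idx, max(2, int(n * share)), replace=False)
            total += idx.size * truth[pick].mean()
        return total

    reps = [(srs(200), stratified(200)) for _ in range(500)]
    srs_e, str_e = np.array(reps).T
    print(f"true total: {truth.sum():.1f}")
    print(f"SRS        mean {srs_e.mean():7.1f}  sd {srs_e.std():6.1f}")
    print(f"stratified mean {str_e.mean():7.1f}  sd {str_e.std():6.1f}")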

  19. Structural equation modeling: strengths, limitations, and misconceptions.

    PubMed

    Tomarken, Andrew J; Waller, Niels G

    2005-01-01

    Because structural equation modeling (SEM) has become a very popular data-analytic technique, it is important for clinical scientists to have a balanced perception of its strengths and limitations. We review several strengths of SEM, with a particular focus on recent innovations (e.g., latent growth modeling, multilevel SEM models, and approaches for dealing with missing data and with violations of normality assumptions) that underscore how SEM has become a broad data-analytic framework with flexible and unique capabilities. We also consider several limitations of SEM and some misconceptions that it tends to elicit. Major themes emphasized are the problem of omitted variables, the importance of lower-order model components, potential limitations of models judged to be well fitting, the inaccuracy of some commonly used rules of thumb, and the importance of study design. Throughout, we offer recommendations for the conduct of SEM analyses and the reporting of results. PMID:17716081

  20. (Limiting the greenhouse effect)

    SciTech Connect

    Rayner, S.

    1991-01-07

    Traveler attended the Dahlem Research Conference organized by the Freie Universität Berlin. The subject of the conference was Limiting the Greenhouse Effect: Options for Controlling Atmospheric CO₂ Accumulation. Like all Dahlem workshops, this was a meeting of scientific experts, although the disciplines represented were broader than usual, ranging across anthropology, economics, international relations, forestry, engineering, and atmospheric chemistry. Participation by scientists from developing countries was limited. The conference was divided into four multidisciplinary working groups. Traveler acted as moderator for Group 3, which examined the question "What knowledge is required to tackle the principal social and institutional barriers to reducing CO₂ emissions?" The working rapporteur was Jesse Ausubel of Rockefeller University. Other working groups examined the economic costs, benefits, and technical feasibility of options to reduce emissions per unit of energy service; the options for reducing energy use per unit of GNP; and the significance of linkages between strategies to reduce CO₂ emissions and other goals. Draft reports of the working groups are appended. Overall, the conference identified a number of important research needs in all four areas. It may prove particularly important in bringing the social and institutional research needs relevant to climate change closer to the forefront of the scientific and policy communities than hitherto.

  1. Limitations of inclusive fitness.

    PubMed

    Allen, Benjamin; Nowak, Martin A; Wilson, Edward O

    2013-12-10

    Until recently, inclusive fitness has been widely accepted as a general method to explain the evolution of social behavior. Affirming and expanding earlier criticism, we demonstrate that inclusive fitness is instead a limited concept, which exists only for a small subset of evolutionary processes. Inclusive fitness assumes that personal fitness is the sum of additive components caused by individual actions. This assumption does not hold for the majority of evolutionary processes or scenarios. To sidestep this limitation, inclusive fitness theorists have proposed a method using linear regression. On the basis of this method, it is claimed that inclusive fitness theory (i) predicts the direction of allele frequency changes, (ii) reveals the reasons for these changes, (iii) is as general as natural selection, and (iv) provides a universal design principle for evolution. In this paper we evaluate these claims, and show that all of them are unfounded. If the objective is to analyze whether mutations that modify social behavior are favored or opposed by natural selection, then no aspect of inclusive fitness theory is needed.

  2. Limitations of inclusive fitness

    PubMed Central

    Allen, Benjamin; Nowak, Martin A.; Wilson, Edward O.

    2013-01-01

    Until recently, inclusive fitness has been widely accepted as a general method to explain the evolution of social behavior. Affirming and expanding earlier criticism, we demonstrate that inclusive fitness is instead a limited concept, which exists only for a small subset of evolutionary processes. Inclusive fitness assumes that personal fitness is the sum of additive components caused by individual actions. This assumption does not hold for the majority of evolutionary processes or scenarios. To sidestep this limitation, inclusive fitness theorists have proposed a method using linear regression. On the basis of this method, it is claimed that inclusive fitness theory (i) predicts the direction of allele frequency changes, (ii) reveals the reasons for these changes, (iii) is as general as natural selection, and (iv) provides a universal design principle for evolution. In this paper we evaluate these claims, and show that all of them are unfounded. If the objective is to analyze whether mutations that modify social behavior are favored or opposed by natural selection, then no aspect of inclusive fitness theory is needed. PMID:24277847

  3. Limits to biofuels

    NASA Astrophysics Data System (ADS)

    Johansson, S.

    2013-06-01

    Biofuel production is dependent upon agriculture and forestry systems, and the expectations of future biofuel potential are high. A study of global food production and biofuel production from edible crops implies that biofuel produced from the edible parts of crops leads to a global deficit of food. This is rather well known, which is why there is a strong urge to develop biofuel systems that make use of residues or products from forests to eliminate competition with food production. However, biofuel from agro-residues still depends upon the crop production system, and there are many parameters to deal with in order to investigate the sustainability of biofuel production. There is a theoretical limit to how much biofuel can be obtained globally from agro-residues, and it amounts to approximately one third of today's use of fossil fuels in the transport sector. In reality this theoretical potential may be eliminated by the energy use in the biomass-conversion technologies and production systems, depending on what type of assessment method is used. By surveying existing studies on biofuel conversion, the theoretical limit of biofuels from the 2010 agricultural production was found to be either non-existent, due to energy consumption in the conversion process, or up to 2-6000 TWh (biogas from residues and waste and ethanol from woody biomass) in the more optimistic cases.

  4. ITER global stability limits

    SciTech Connect

    Hogan, J.T.; Uckan, N.A.

    1990-01-01

    The MHD stability limits to the ITER operational space have been examined with the PEST ideal stability code. Constraints on ITER operation have been examined for the nominal operational scenarios and for possible design variants. Rather than rely on evaluation of a relatively small number of sample cases, the approach has been to construct an approximation to the overall operational space, and to compare this with the observed limits in high-β tokamaks. An extensive database with ~20,000 stability results has been compiled for use by the ITER design team. Results from these studies show that the design values of the Troyon factor (g ~ 2.5 for ignition studies, and g ~ 3 for the technology phase), which are based on present experiments, are also expected to be attainable for ITER conditions, for which the configuration and wall-stabilisation environment differ from those in present experiments. Strongly peaked pressure profiles lead to degraded high-β performance. Values of g ~ 4 are found for higher safety factor (q_ψ ≤ 4) than that of the present design (q_ψ ~ 3). Profiles with q(0) < 1 are shown to give g ~ 2.5, if the current density profile provides optimum shear. The overall operational spaces are presented for g-q_ψ, q_ψ-l_i, q-α_p, and l_i-q_ψ.
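
    The Troyon factor g quoted above enters the standard beta-limit scaling beta_max(%) = g·I(MA)/(a(m)·B(T)); a one-line evaluator with illustrative ITER-like numbers (not the design point):

    def troyon_beta_limit(g, current_ma, minor_radius_m, field_t):
        """Maximum volume-averaged beta (%) from the Troyon scaling."""
        return g * current_ma / (minor_radius_m * field_t)

    # Hypothetical ITER-like parameters: I = 15 MA, a = 2.0 m, B = 5.3 T.
    for g in (2.5, 3.0, 4.0):
        print(f"g = {g}: beta_max = {troyon_beta_limit(g, 15.0, 2.0, 5.3):.2f} %")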

  5. Phospholipid and Respiratory Quinone Analyses From Extreme Environments

    NASA Astrophysics Data System (ADS)

    Pfiffner, S. M.

    2008-12-01

    Extreme environments on Earth have been chosen as surrogate sites to test methods and strategies for the deployment of spacecraft in the search for extraterrestrial life. Surrogate sites for many of the NASA astrobiology institutes include the South African gold mines, Canadian subpermafrost, Atacama Desert, and acid rock drainage. Soils, sediments, rock cores, fracture waters, biofilms, and service and drill waters represent the types of samples collected from these sites. These samples were analyzed by gas chromatography mass spectrometry for phospholipid fatty acid methyl esters and by high performance liquid chromatography atmospheric pressure chemical ionization tandem mass spectrometry for respiratory quinones. Phospholipid analyses provided estimates of biomass, community composition, and compositional changes related to nutritional limitations or exposure to toxic conditions. Similar to phospholipid analyses, respiratory quinone analyses afforded identification of certain types of microorganisms in the community based on respiration and offered clues to in situ redox conditions. Depending on the number of samples analyzed, selected multivariate statistical methods were applied to relate membrane lipid results with site biogeochemical parameters. Successful detection of life signatures and refinement of methodologies at surrogate sites on Earth will be critical for the recognition of extraterrestrial life. At this time, membrane lipid analyses provide useful information not easily obtained by other molecular techniques.

  6. Risk based limits for Operational Safety Requirements

    SciTech Connect

    Cappucci, A.J. Jr.

    1993-01-18

    OSR limits are designed to protect the assumptions made in the facility safety analysis in order to preserve the safety envelope during facility operation. Normally, limits are set based on "worst case conditions" without regard to the likelihood (frequency) of a credible event occurring. In special cases where the accident analyses are based on "time at risk" arguments, it may be desirable to control the time at which the facility is at risk. A methodology has been developed to use OSR limits to control the source terms and the times these source terms would be available, thus controlling the acceptable risk to a nuclear process facility. The methodology defines a new term, "gram-days". This term represents the area under a source term (inventory) vs. time curve, which represents the risk to the facility. Using the concept of gram-days (normalized to one year) allows the use of an accounting scheme to control the risk under the inventory vs. time curve. The methodology results in at least three OSR limits: (1) control of the maximum inventory or source term, (2) control of the maximum gram-days for the period based on a source term weighted average, and (3) control of the maximum gram-days at the individual source term levels. Basing OSR limits on risk based safety analysis is feasible, and a basis for development of risk based limits is defensible. However, monitoring inventories and the frequencies required to maintain facility operation within the safety envelope may be complex and time consuming.
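
    A minimal sketch of the gram-days bookkeeping described above, assuming a stepwise inventory history; the function name, inventory levels, and annual limit are invented for illustration, not values from the report.

        # Hedged sketch: "gram-days" is the area under the source-term (inventory)
        # versus time curve, tracked against an annual risk budget.

        def gram_days(inventories_g, durations_d):
            """Area under a stepwise inventory-vs-time curve: sum of grams x days."""
            return sum(g * d for g, d in zip(inventories_g, durations_d))

        # One year of operation split into campaign periods (grams held, days at that level).
        inventories = [120.0, 480.0, 60.0]   # source term during each period, grams
        durations   = [200.0, 30.0, 135.0]   # days spent at each inventory level

        used = gram_days(inventories, durations)
        annual_limit = 50000.0               # hypothetical OSR limit, gram-days per year
        print(f"used {used:.0f} of {annual_limit:.0f} gram-days "
              f"({100 * used / annual_limit:.1f} % of the annual risk budget)")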

  7. EU-FP7-iMARS: Analysis of Mars Multi-Resolution Images Using Auto-Coregistration Data Mining and Crowd Source Techniques: Processed Results - a First Look

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Tao, Yu; Sidiropoulos, Panagiotis; Gwinner, Klaus; Willner, Konrad; Fanara, Lida; Waehlisch, Marita; van Gasselt, Stephan; Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Ivanov, Anton; Cantini, Federico; Wardlaw, Jessica; Morley, Jeremy; Sprinks, James; Giordano, Michele; Marsh, Stuart; Kim, Jungrack; Houghton, Robert; Bamford, Steven

    2016-06-01

    Understanding planetary atmosphere-surface exchange and extra-terrestrial surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay image data and derived information from different epochs, back in time to the mid 1970s, to examine changes through time, such as the recent discovery of mass movement, the tracking of inter-year seasonal changes, and searches for occurrences of fresh craters. Within the EU FP-7 iMars project, we have developed a fully automated multi-resolution DTM processing chain, called the Coregistration ASP-Gotcha Optimised (CASP-GO) chain, based on the open source NASA Ames Stereo Pipeline (ASP) [Tao et al., this conference], which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX and HiRISE DTMs and ORIs, DLR [Gwinner et al., 2015] have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed by [Sidiropoulos & Muller, this conference]. Using the HRSC map products (both mosaics and orbital strips) as a map-base, it is being applied to many of the 400,000 level-1 EDR images taken by the four NASA orbital cameras, namely the Viking Orbiter camera (VO), the Mars Orbiter Camera (MOC), the Context Camera (CTX), and the High Resolution Imaging Science Experiment (HiRISE), going back to 1976. A webGIS has been developed [van Gasselt et al., this conference] for displaying this time sequence of imagery and will be demonstrated showing an example from one of the HRSC quadrangle map-sheets. Automated quality control [Sidiropoulos & Muller, 2015] techniques are applied to screen for

  8. Functional analyses and treatment of precursor behavior.

    PubMed

    Najdowski, Adel C; Wallace, Michele D; Ellsworth, Carrie L; MacAleese, Alicia N; Cleveland, Jackie M

    2008-01-01

    Functional analysis has been demonstrated to be an effective method to identify environmental variables that maintain problem behavior. However, there are cases when conducting functional analyses of severe problem behavior may be contraindicated. The current study applied functional analysis procedures to a class of behavior that preceded severe problem behavior (precursor behavior) and evaluated treatments based on the outcomes of the functional analyses of precursor behavior. Responding for all participants was differentiated during the functional analyses, and individualized treatments eliminated precursor behavior. These results suggest that functional analysis of precursor behavior may offer an alternative, indirect method to assess the operant function of severe problem behavior. PMID:18468282

  9. Functional Analyses and Treatment of Precursor Behavior

    PubMed Central

    Najdowski, Adel C; Wallace, Michele D; Ellsworth, Carrie L; MacAleese, Alicia N; Cleveland, Jackie M

    2008-01-01

    Functional analysis has been demonstrated to be an effective method to identify environmental variables that maintain problem behavior. However, there are cases when conducting functional analyses of severe problem behavior may be contraindicated. The current study applied functional analysis procedures to a class of behavior that preceded severe problem behavior (precursor behavior) and evaluated treatments based on the outcomes of the functional analyses of precursor behavior. Responding for all participants was differentiated during the functional analyses, and individualized treatments eliminated precursor behavior. These results suggest that functional analysis of precursor behavior may offer an alternative, indirect method to assess the operant function of severe problem behavior. PMID:18468282

  10. Limits of Executive Control

    PubMed Central

    Verbruggen, Frederick; McAndrew, Amy; Weidemann, Gabrielle; Stevens, Tobias; McLaren, Ian P. L.

    2016-01-01

    Cognitive-control theories attribute action control to executive processes that modulate behavior on the basis of expectancy or task rules. In the current study, we examined corticospinal excitability and behavioral performance in a go/no-go task. Go and no-go trials were presented in runs of five, and go and no-go runs alternated predictably. At the beginning of each trial, subjects indicated whether they expected a go trial or a no-go trial. Analyses revealed that subjects immediately adjusted their expectancy ratings when a new run started. However, motor excitability was primarily associated with the properties of the previous trial, rather than the predicted properties of the current trial. We also observed a large latency cost at the beginning of a go run (i.e., reaction times were longer for the first trial in a go run than for the second trial). These findings indicate that actions in predictable environments are substantially influenced by previous events, even if this influence conflicts with conscious expectancies about upcoming events. PMID:27000177

  11. METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS

    EPA Science Inventory

    Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such ...
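
    The substitution step itself is simple; here is a minimal sketch in Python (the report's own context is SAS), assuming non-detects are flagged and replaced by LOD/2 or LOD/sqrt(2), two common conventions. The function name and data are invented for illustration.

        # Hedged sketch of constant substitution for values below the limit of detection.
        import math

        def substitute_below_lod(values, lod, method="half-lod"):
            """Replace non-detects (None) with a constant derived from the LOD."""
            constant = lod / 2.0 if method == "half-lod" else lod / math.sqrt(2.0)
            return [constant if v is None else v for v in values]

        measurements = [0.8, None, 1.9, None, 0.6]   # None marks a value below the LOD
        print(substitute_below_lod(measurements, lod=0.5))   # non-detects become 0.25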

  12. Physical limits to magnetogenetics.

    PubMed

    Meister, Markus

    2016-01-01

    This is an analysis of how magnetic fields affect biological molecules and cells. It was prompted by a series of prominent reports regarding magnetism in biological systems. The first claims to have identified a protein complex that acts like a compass needle to guide magnetic orientation in animals (Qin et al., 2016). Two other articles report magnetic control of membrane conductance by attaching ferritin to an ion channel protein and then tugging the ferritin or heating it with a magnetic field (Stanley et al., 2015; Wheeler et al., 2016). Here I argue that these claims conflict with basic laws of physics. The discrepancies are large: from 5 to 10 log units. If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors. The paramagnetic nature of protein complexes is found to seriously limit their utility for engineering magnetically sensitive cells. PMID:27529126

  13. Nature limits filarial transmission

    PubMed Central

    Chandra, Goutam

    2008-01-01

    Lymphatic filariasis, caused by Wuchereria bancrofti, Brugia malayi and B. timori, is a public health problem of considerable magnitude in the tropics and subtropics. Presently 1.3 billion people are at risk of lymphatic filariasis (LF) infection and about 120 million people are affected in 83 countries. In this context it is worth mentioning that 'nature' itself limits filarial transmission to a great extent in a number of ways, such as by reducing vector populations and parasitic load. Possibilities to utilize these natural checks on filariasis should be explored, and if disturbances of nature, like indiscriminate urbanization and deforestation, the creation of sites favourable for the breeding of filarial vectors, unsanitary conditions, and water pollution with organic matter, are reduced below threshold levels, we will benefit greatly. Understanding the factors related to the natural control of filariasis described in this article may help in adopting effective control strategies.

  14. Clinical limitations of Invisalign.

    PubMed

    Phan, Xiem; Ling, Paul H

    2007-04-01

    Adult patients seeking orthodontic treatment are increasingly motivated by esthetic considerations. The majority of these patients reject wearing labial fixed appliances and are looking instead to more esthetic treatment options, including lingual orthodontics and Invisalign appliances. Since Align Technology introduced the Invisalign appliance in 1999 in an extensive public campaign, the appliance has gained tremendous attention from adult patients and dental professionals. The transparency of the Invisalign appliance enhances its esthetic appeal for those adult patients who are averse to wearing conventional labial fixed orthodontic appliances. Although guidelines about the types of malocclusions that this technique can treat exist, few clinical studies have assessed the effectiveness of the appliance. A few recent studies have outlined some of the limitations associated with this technique that clinicians should recognize early before choosing treatment options.

  15. The Limits to Relevance

    NASA Astrophysics Data System (ADS)

    Averill, M.; Briggle, A.

    2006-12-01

    Science policy and knowledge production lately have taken a pragmatic turn. Funding agencies increasingly are requiring scientists to explain the relevance of their work to society. This stems in part from mounting critiques of the "linear model" of knowledge production in which scientists operating according to their own interests or disciplinary standards are presumed to automatically produce knowledge that is of relevance outside of their narrow communities. Many contend that funded scientific research should be linked more directly to societal goals, which implies a shift in the kind of research that will be funded. While both authors support the concept of useful science, we question the exact meaning of "relevance" and the wisdom of allowing it to control research agendas. We hope to contribute to the conversation by thinking more critically about the meaning and limits of the term "relevance" and the trade-offs implicit in a narrow utilitarian approach. The paper will consider which interests tend to be privileged by an emphasis on relevance and address issues such as whose goals ought to be pursued and why, and who gets to decide. We will consider how relevance, narrowly construed, may actually limit the ultimate utility of scientific research. The paper also will reflect on the worthiness of research goals themselves and their relationship to a broader view of what it means to be human and to live in society. Just as there is more to being human than the pragmatic demands of daily life, there is more at issue with knowledge production than finding the most efficient ways to satisfy consumer preferences or fix near-term policy problems. We will conclude by calling for a balanced approach to funding research that addresses society's most pressing needs but also supports innovative research with less immediately apparent application.

  16. SCM Forcing Data Derived from NWP Analyses

    DOE Data Explorer

    Jakob, Christian

    2008-01-15

    Forcing data, suitable for use with single column models (SCMs) and cloud resolving models (CRMs), have been derived from NWP analyses for the ARM (Atmospheric Radiation Measurement) Tropical Western Pacific (TWP) sites of Manus Island and Nauru.

  17. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... OF TRANSPORTATION RULES OF PRACTICE RAILROAD ACQUISITION, CONTROL, MERGER, CONSOLIDATION PROJECT, TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For... company's marketing plan and existing and potential competitive alternatives (inter- as well as...

  18. Quality control considerations in performing washability analyses

    SciTech Connect

    Graham, R.D.

    1984-10-01

    The author describes, in considerable detail, the procedures for carrying out washability analyses as laid down in ASTM Standard Test Method D4371. These include sampling, sample preparation, hydrometer standardisation, washability testing, and analysis of specific gravity fractions.

  19. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and... by conducting additional analyses using any standard engineering economics method such as sensitivity... energy or water system alternative....

  20. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and... by conducting additional analyses using any standard engineering economics method such as sensitivity... energy or water system alternative....

  1. Anthocyanin analyses of Vaccinium fruit dietary supplements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Vaccinium fruit ingredients within dietary supplements were identified by comparisons with anthocyanin analyses of known Vaccinium profiles (demonstration of anthocyanin fingerprinting). Available Vaccinium supplements were purchased and analyzed; their anthocyanin profiles (based on HPLC separation...

  2. Comparison with Russian analyses of meteor impact

    SciTech Connect

    Canavan, G.H.

    1997-06-01

    The inversion model for meteor impacts is used to discuss Russian analyses and compare principal results. For common input parameters, the models produce consistent estimates of impactor parameters. Directions for future research are discussed and prioritized.

  3. Analyses and forecasts with LAWS winds

    NASA Technical Reports Server (NTRS)

    Wang, Muyin; Paegle, Jan

    1994-01-01

    Horizontal fluxes of atmospheric water vapor are studied for summer months during 1989 and 1992 over North and South America based on analyses from the European Centre for Medium-Range Weather Forecasts, the US National Meteorological Center, and the United Kingdom Meteorological Office. The calculations are performed over 20 deg by 20 deg box-shaped midlatitude domains located to the east of the Rocky Mountains in North America, and to the east of the Andes Mountains in South America. The fluxes are determined from operational center gridded analyses of wind and moisture. Differences in the monthly mean moisture flux divergence determined from these analyses are as large as 7 cm/month precipitable water equivalent over South America, and 3 cm/month over North America. Gridded analyses at higher spatial and temporal resolution exhibit better agreement in the moisture budget study. However, significant discrepancies in the moisture flux divergence computed from different gridded analyses still exist. The conclusion is more pessimistic than Rasmusson's estimate based on station data. Further analysis reveals that the most significant sources of error result from model surface elevation fields, gaps in the data archive, and uncertainties in the wind and specific humidity analyses. Uncertainties in the wind analyses are the most important problem. The low-level jets, in particular, are substantially different in the different data archives. Part of the reason for this may be the way the different analysis models parameterized physical processes affecting low-level jets. The results support the inference that the noise/signal ratio of the moisture budget may be improved more rapidly by providing better wind observations and analyses than by providing better moisture data.

  4. A History of Rotorcraft Comprehensive Analyses

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2013-01-01

    A history of the development of rotorcraft comprehensive analyses is presented. Comprehensive analyses are digital computer programs that calculate the aeromechanical behavior of the rotor and aircraft, bringing together the most advanced models of the geometry, structure, dynamics, and aerodynamics available in rotary wing technology. The development of the major codes of the last five decades from industry, government, and universities is described. A number of common themes observed in this history are discussed.

  5. Network analyses structure genetic diversity in independent genetic worlds.

    PubMed

    Halary, Sébastien; Leigh, Jessica W; Cheaib, Bachar; Lopez, Philippe; Bapteste, Eric

    2010-01-01

    DNA flows between chromosomes and mobile elements, following rules that are poorly understood. This limited knowledge is partly explained by the limits of current approaches to study the structure and evolution of genetic diversity. Network analyses of 119,381 homologous DNA families, sampled from 111 cellular genomes and from 165,529 phage, plasmid, and environmental virome sequences, offer challenging insights. Our results support a disconnected yet highly structured network of genetic diversity, revealing the existence of multiple "genetic worlds." These divides define multiple isolated groups of DNA vehicles drawing on distinct gene pools. Mathematical studies of the centralities of these worlds' subnetworks demonstrate that plasmids, not viruses, were key vectors of genetic exchange between bacterial chromosomes, both recently and in the past. Furthermore, network methodology introduces new ways of quantifying current sampling of genetic diversity.

  6. Physical limits to magnetogenetics

    PubMed Central

    Meister, Markus

    2016-01-01

    This is an analysis of how magnetic fields affect biological molecules and cells. It was prompted by a series of prominent reports regarding magnetism in biological systems. The first claims to have identified a protein complex that acts like a compass needle to guide magnetic orientation in animals (Qin et al., 2016). Two other articles report magnetic control of membrane conductance by attaching ferritin to an ion channel protein and then tugging the ferritin or heating it with a magnetic field (Stanley et al., 2015; Wheeler et al., 2016). Here I argue that these claims conflict with basic laws of physics. The discrepancies are large: from 5 to 10 log units. If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors. The paramagnetic nature of protein complexes is found to seriously limit their utility for engineering magnetically sensitive cells. DOI: http://dx.doi.org/10.7554/eLife.17210.001 PMID:27529126

  7. Limits of computational biology.

    PubMed

    Bray, Dennis

    2015-01-01

    Are we close to a complete inventory of living processes so that we might expect in the near future to reproduce every essential aspect necessary for life? Or are there mechanisms and processes in cells and organisms that are presently inaccessible to us? Here I argue that a close examination of a particularly well-understood system--that of Escherichia coli chemotaxis--shows we are still a long way from a complete description. There is a level of molecular uncertainty, particularly that responsible for fine-tuning and adaptation to myriad external conditions, which we presently cannot resolve or reproduce on a computer. Moreover, the same uncertainty exists for any process in any organism and is especially pronounced and important in higher animals such as humans. Embryonic development, tissue homeostasis, immune recognition, memory formation, and survival in the real world, all depend on vast numbers of subtle variations in cell chemistry most of which are presently unknown or only poorly characterized. Overcoming these limitations will require us to not only accumulate large quantities of highly detailed data but also develop new computational methods able to recapitulate the massively parallel processing of living cells.

  8. Integrated Current Limiter

    NASA Astrophysics Data System (ADS)

    Pappalardo, S.; Alfonso, M. M.; Mirabella, I. B.

    2011-10-01

    The LCL has been extensively used in ESA scientific satellites and for some years has also been the baseline device for earth observation satellites such as CRYOSAT 1 and 2, SENTINEL 1, 2 and 3, EARTHWATCH, etc. It seems that the use of this LCL is also being considered as an alternative to the fuse approach for commercial telecommunication satellites. The scope of this document is to provide a technical description of the Integrated Current Limiter device (ICL) developed under ESTEC Contract 22049-09-NL-AT with STMicroelectronics s.r.l. (ref. Invitation to Tender AO/1-5784/08/NL/AT). The design of the ICL device takes into account both ESA and power electronics designers' experience, which in Europe spans more than 25 years. The ICL design has been led so as to be fully compliant with the applicable specification issued by ESA and the major European power electronics manufacturers that participated in its preparation.

  9. Limits of computational biology

    PubMed Central

    Bray, Dennis

    2015-01-01

    Are we close to a complete inventory of living processes so that we might expect in the near future to reproduce every essential aspect necessary for life? Or are there mechanisms and processes in cells and organisms that are presently inaccessible to us? Here I argue that a close examination of a particularly well-understood system--that of Escherichia coli chemotaxis--shows we are still a long way from a complete description. There is a level of molecular uncertainty, particularly that responsible for fine-tuning and adaptation to myriad external conditions, which we presently cannot resolve or reproduce on a computer. Moreover, the same uncertainty exists for any process in any organism and is especially pronounced and important in higher animals such as humans. Embryonic development, tissue homeostasis, immune recognition, memory formation, and survival in the real world, all depend on vast numbers of subtle variations in cell chemistry most of which are presently unknown or only poorly characterized. Overcoming these limitations will require us to not only accumulate large quantities of highly detailed data but also develop new computational methods able to recapitulate the massively parallel processing of living cells. PMID:25318467

  10. Finite element analyses of CCAT preliminary design

    NASA Astrophysics Data System (ADS)

    Sarawit, Andrew T.; Kan, Frank W.

    2014-07-01

    This paper describes the development of the CCAT telescope finite element model (FEM) and the analyses performed to support the preliminary design work. CCAT will be a 25 m diameter telescope operating in the 0.2 to 2 mm wavelength range. It will be located at an elevation of 5600 m on Cerro Chajnantor in Northern Chile, near ALMA. The telescope will be equipped with wide-field cameras and spectrometers mounted at the two Nasmyth foci. The telescope will be inside an enclosure to protect it from wind buffeting, direct solar heating, and bad weather. The main structures of the telescope include a steel Mount and a carbon-fiber-reinforced-plastic (CFRP) primary truss. The finite element model developed in this study was used to perform modal, frequency response, seismic response spectrum, stress, and deflection analyses of the telescope. Modal analyses of the telescope were performed to compute the structure's natural frequencies and mode shapes and to obtain reduced-order modal output at selected locations in the telescope structure to support the design of the Mount control system. Modal frequency response analyses were also performed to compute transfer functions at these selected locations. Seismic response spectrum analyses of the telescope subject to the Maximum Likely Earthquake were performed to compute peak accelerations and seismic demand stresses. Stress analyses were performed for gravity load to obtain gravity demand stresses. Deflection analyses for gravity load, thermal load, and differential elevation drive torque were performed so that the CCAT Observatory can verify that the structures meet the stringent telescope surface and pointing error requirements.

  11. Prismatic analyser concept for neutron spectrometers

    SciTech Connect

    Birk, Jonas O.; Jacobsen, Johan; Hansen, Rasmus L.; Lefmann, Kim; Markó, Márton; Niedermayer, Christof; Freeman, Paul G.; Christensen, Niels B.; Månsson, Martin; Rønnow, Henrik M.

    2014-11-15

    Developments in modern neutron spectroscopy have led to typical sample sizes decreasing from a few cm to several mm in diameter. We demonstrate how small samples, together with the right choice of analyser and detector components, make distance collimation an important concept in crystal analyser spectrometers. We further show that this opens new possibilities where neutrons with different energies are reflected by the same analyser but counted in different detectors, thus improving both energy resolution and total count rate compared to conventional spectrometers. The technique can readily be combined with advanced focussing geometries and with multiplexing instrument designs. We present a combination of simulations and data showing three different energies simultaneously reflected from one analyser. Experiments were performed on a cold triple-axis instrument and on a prototype inverse-geometry time-of-flight spectrometer installed at PSI, Switzerland, and show excellent agreement with the predictions. Typical improvements will be 2.0 times finer resolution and a factor of 1.9 in flux gain compared to a focussing Rowland geometry, or 3.3 times finer resolution and a factor of 2.4 in flux gain compared to a single flat analyser slab.

  12. Limits to Tidal Power

    NASA Astrophysics Data System (ADS)

    Garrett, C.

    2008-12-01

    Ocean tides have been proposed as a source of renewable energy, though the maximum available power may be shown to be only a fraction of the present dissipation rate of 3.5 TW, which is small compared with global insolation (nearly 10⁵ TW), wind dissipation (10³ TW), and even human power usage of 15 TW. Nonetheless, tidal power could be a useful contributor in some locations. Traditional use of tidal power, involving the trapping of water behind a barrage at high tide, can produce an average power proportional to the area of the headpond and the square of the tidal range; the power density is approximately 6 W per square meter for a tidal range of 10 m. Capital costs and fears of environmental damage have put barrage schemes in disfavor, with interest turning to the exploitation of strong tidal currents, using turbines in a manner similar to wind turbines. There is a limit to the available power, however, as adding turbines reduces the flow, ultimately reducing the power. For sinusoidal forcing of flow in a channel connecting two large open basins, the maximum available power may be shown to be given approximately by 0.2 ρ g a Q_max, where ρ is the water density, g gravity, a the amplitude of the tidal sea level difference along the channel, and Q_max is the maximum volume flux in the natural state. The same formula applies if the channel is the entrance to a semi-enclosed basin, with a now the amplitude of the external tide. A flow reduction of approximately 40% is typically associated with the maximum power extraction. The power would be reduced if only smaller environmental changes are acceptable, and reduced further by drag on supporting structures, dissipation in turbine wakes, and internal inefficiencies. It can be suggested that the best use of strong, cold, tidal currents is to provide cooling water for nuclear reactors.
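
    A worked example of the quoted bound P_max ≈ 0.2 ρ g a Q_max; the channel parameters below are illustrative assumptions for a large tidal strait, not values from the abstract.

        # Worked example of the channel-power bound P_max ~ 0.2 * rho * g * a * Q_max.

        rho = 1025.0      # seawater density, kg/m^3
        g = 9.81          # gravitational acceleration, m/s^2
        a = 2.0           # amplitude of the tidal head difference along the channel, m
        Q_max = 4.0e5     # maximum volume flux in the natural state, m^3/s

        P_max = 0.2 * rho * g * a * Q_max
        print(f"maximum extractable power ~ {P_max / 1e9:.1f} GW")   # ~1.6 GW here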

  13. Meta-analyses of randomized controlled trials.

    PubMed

    Sacks, H S; Berrier, J; Reitman, D; Ancona-Berk, V A; Chalmers, T C

    1987-02-19

    A new type of research, termed meta-analysis, attempts to analyze and combine the results of previous reports. We found 86 meta-analyses of reports of randomized controlled trials in the English-language literature. We evaluated the quality of these meta-analyses, using a scoring method that considered 23 items in six major areas--study design, combinability, control of bias, statistical analysis, sensitivity analysis, and application of results. Only 24 meta-analyses (28 percent) addressed all six areas, 31 (36 percent) addressed five, 25 (29 percent) addressed four, 5 (6 percent) addressed three, and 1 (1 percent) addressed two. Of the 23 individual items, between 1 and 14 were addressed satisfactorily (mean +/- SD, 7.7 +/- 2.7). We conclude that an urgent need exists for improved methods in literature searching, quality evaluation of trials, and synthesizing of the results.

  14. Geomagnetic local and regional harmonic analyses.

    USGS Publications Warehouse

    Alldredge, L.R.

    1982-01-01

    Procedures are developed for using rectangular and cylindrical harmonic analyses in local and regional areas. Both the linear least squares analysis, applicable when component data are available, and the nonlinear least squares analysis, applicable when only total field data are available, are treated. When component data are available, it is advantageous to work with residual fields obtained by subtracting components derived from a harmonic potential from the observed components. When only total field intensity data are available, they must be used directly; residual values cannot be used. Cylindrical harmonic analyses are indicated when fields tend toward cylindrical symmetry; otherwise, rectangular harmonic analyses will be more advantageous. Examples illustrating each type of analysis are given.
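
    The linear least-squares case can be illustrated with a toy one-dimensional fit: harmonic coefficients enter the model linearly, so they follow from a single matrix solve. The wavelengths, amplitudes, and noise level below are invented for illustration; a real analysis fits the rectangular- or cylindrical-harmonic basis to vector component data.

        # Hedged illustration of the linear least-squares step for harmonic coefficients.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 100.0, 80)                 # km along a survey line
        field = 12.0 * np.cos(2 * np.pi * x / 50.0) + 5.0 * np.sin(2 * np.pi * x / 25.0)
        data = field + rng.normal(0.0, 0.5, x.size)     # nT, with measurement noise

        # Design matrix: one cosine and one sine column per assumed wavelength.
        wavelengths = [50.0, 25.0]
        columns = []
        for lam in wavelengths:
            columns += [np.cos(2 * np.pi * x / lam), np.sin(2 * np.pi * x / lam)]
        A = np.column_stack(columns)

        coeffs, *_ = np.linalg.lstsq(A, data, rcond=None)
        print(np.round(coeffs, 2))   # recovers roughly [12, 0, 0, 5]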

  15. A qualitative method for analysing multivoicedness

    PubMed Central

    Aveling, Emma-Louise; Gillespie, Alex; Cornish, Flora

    2015-01-01

    ‘Multivoicedness’ and the ‘multivoiced Self’ have become important theoretical concepts guiding research. Drawing on the tradition of dialogism, the Self is conceptualised as being constituted by a multiplicity of dynamic, interacting voices. Despite the growth in literature and empirical research, there remains a paucity of established methodological tools for analysing the multivoiced Self using qualitative data. In this article, we set out a systematic, practical ‘how-to’ guide for analysing multivoicedness. Using theoretically derived tools, our three-step method comprises: identifying the voices of I-positions within the Self’s talk (or text), identifying the voices of ‘inner-Others’, and examining the dialogue and relationships between the different voices. We elaborate each step and illustrate our method using examples from a published paper in which data were analysed using this method. We conclude by offering more general principles for the use of the method and discussing potential applications. PMID:26664292

  16. Advanced laser stratospheric monitoring systems analyses

    NASA Technical Reports Server (NTRS)

    Larsen, J. C.

    1984-01-01

    This report describes the software support supplied by Systems and Applied Sciences Corporation for the study of Advanced Laser Stratospheric Monitoring Systems Analyses under contract No. NAS1-15806. This report discusses improvements to the Langley spectroscopic data base, development of LHS instrument control software, and data analysis and validation software. The effect of diurnal variations on the retrieved concentrations of NO, NO2 and ClO from a space and balloon borne measurement platform is discussed, along with the selection of optimum IF channels for sensing stratospheric species from space.

  17. Identifying, analysing and solving problems in practice.

    PubMed

    Hewitt-Taylor, Jaqui

    When a problem is identified in practice, it is important to clarify exactly what it is and establish the cause before seeking a solution. This solution-seeking process should include input from those directly involved in the problematic situation, to enable individuals to contribute their perspective, appreciate why any change in practice is necessary and what will be achieved by the change. This article describes some approaches to identifying and analysing problems in practice so that effective solutions can be devised. It includes a case study and examples of how the Five Whys analysis, fishbone diagram, problem tree analysis, and Seven-S Model can be used to analyse a problem.

  18. Transportation systems analyses. Volume 2: Technical/programmatics

    NASA Astrophysics Data System (ADS)

    1993-05-01

    The principal objective of this study is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform man delivery and return, cargo transfer, cargo delivery, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed, but not limited to, include the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationships between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. This report documents the three principal transportation systems analyses (TSA) efforts during the period 7 November 92 - 6 May 93. The analyses are as follows: Mixed-Fleet (STS/ELV) strategies for SSF resupply; Transportation Systems Data Book - overview; and Operations Cost Model - overview/introduction.

  19. Determining the least limiting water range using limited soil data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Least Limiting Water Range (LLWR) is a useful tool to evaluate changes in soil physical condition caused by changing soil management. It incorporates limitations to plant growth based on limiting aeration, water holding capacity and soil strength. A disadvantage of the LLWR is the need to determ...

  20. Search for single top production using multivariate analyses at CDF

    SciTech Connect

    Hirschbuhl, Dominic; /Karlsruhe U., EKP

    2007-10-01

    This article reports on recent searches for single-top-quark production by the CDF collaboration at the Tevatron using a data set that corresponds to an integrated luminosity of 955 pb⁻¹. Three different analysis techniques are employed: one using likelihood discriminants, one neural networks, and one matrix elements. The sensitivity to single-top production at the rate predicted by the standard model ranges from 2.1 to 2.6 σ. While the first two analyses observe a deficit of single-top-like events compared to the expectation, the matrix element method observes an excess corresponding to a background fluctuation of 2.3 σ. The null results of the likelihood and neural network analyses translate into upper limits on the cross section of 2.6 pb for the t-channel production mode and 3.7 pb for the s-channel mode at the 95% C.L. The matrix element result corresponds to a measurement of 2.7 +1.5/-1.3 pb for the combined t- and s-channel single-top cross section. In addition, CDF has searched for non-standard-model production of single top quarks via the s-channel exchange of a heavy W′ boson. No signal of this process is found, resulting in lower mass limits of 760 GeV/c² in case the mass of the right-handed neutrino is smaller than the mass of the right-handed W′, or 790 GeV/c² in the opposite case.

  1. Limiting depth of magnetization in cratonic lithosphere

    NASA Technical Reports Server (NTRS)

    Toft, Paul B.; Haggerty, Stephen E.

    1988-01-01

    Values of magnetic susceptibility and natural remanent magnetization (NRM) of clino-pyroxene-garnet-plagioclase granulite facies lower crustal xenoliths from a kimberlite in west Africa are correlated to bulk geochemistry and specific gravity. Thermomagnetic and alternating-field demagnetization analyses identify magnetite (Mt) and native iron as the dominant magnetic phases (totaling not more than 0.1 vol pct of the rocks) along with subsidiary sulfides. Oxidation states of the granulites are not greater than MW, observed Mt occurs as rims on coarse (about 1 micron) Fe particles, and inferred single domain-pseudosingle domain Mt may be a result of oxidation of fine-grained Fe. The deepest limit of lithospheric ferromagnetism is 95 km, but a limit of 70 km is most reasonable for the West African Craton and for modeling Magsat anomalies over exposed Precambrian shields.

  2. FAME: Software for analysing rock microstructures

    NASA Astrophysics Data System (ADS)

    Hammes, Daniel M.; Peternell, Mark

    2016-05-01

    Determination of rock microstructures leads to a better understanding of the formation and deformation of polycrystalline solids. Here, we present FAME (Fabric Analyser based Microstructure Evaluation), an easy-to-use MATLAB®-based software for processing datasets recorded by an automated fabric analyser microscope. FAME is provided as a MATLAB®-independent Windows® executable with an intuitive graphical user interface. Raw data from the fabric analyser microscope can be automatically loaded, filtered and cropped before analysis. Accurate and efficient rock microstructure analysis is based on an advanced user-controlled grain labelling algorithm. The preview and testing environments simplify the determination of appropriate analysis parameters. Various statistic and plotting tools allow a graphical visualisation of the results such as grain size, shape, c-axis orientation and misorientation. The FAME2elle algorithm exports fabric analyser data to an elle (modelling software)-supported format. FAME supports batch processing for multiple thin section analysis or large datasets that are generated for example during 2D in-situ deformation experiments. The use and versatility of FAME is demonstrated on quartz and deuterium ice samples.

  3. Chemical Analyses of Silicon Aerogel Samples

    SciTech Connect

    van der Werf, I.; Palmisano, F.; De Leo, Raffaele; Marrone, Stefano

    2008-04-01

    After five years of operation, two aerogel counters, A1 and A2, taking data in Hall A at Jefferson Lab, suffered a loss of performance. In this note possible causes of the degradation have been studied. In particular, various chemical and physical analyses have been carried out on several aerogel tiles and on adhesive tape in order to reveal the presence of contaminants.

  4. Multiphase Method for Analysing Online Discussions

    ERIC Educational Resources Information Center

    Häkkinen, P.

    2013-01-01

    Several studies have analysed and assessed online performance and discourse using quantitative and qualitative methods. Quantitative measures have typically included the analysis of participation rates and learning outcomes in terms of grades. Qualitative measures of postings, discussions and context features aim to give insights into the nature…

  5. Correlation Functions Aid Analyses Of Spectra

    NASA Technical Reports Server (NTRS)

    Beer, Reinhard; Norton, Robert H., Jr.

    1989-01-01

    New uses have been found for correlation functions in analyses of spectra. In an approach combining elements of both pattern-recognition and traditional spectral-analysis techniques, spectral lines are identified in data that appear useless at first glance because they are dominated by noise. The new approach is particularly useful in measuring the concentrations of rare molecular species in the atmosphere.
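
    The idea can be sketched with a synthetic spectrum: cross-correlating noisy data against a known line template concentrates the line's energy into a single correlation peak, so a line that is hard to see channel by channel becomes detectable. All numbers below are invented for illustration.

        # Hedged sketch: template cross-correlation pulls a weak line out of noise.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 2048
        spectrum = rng.normal(0.0, 1.0, n)                # noise-dominated baseline
        k = np.arange(-15, 16)
        template = np.exp(-0.5 * (k / 4.0) ** 2)          # Gaussian line shape
        spectrum[900:931] += 2.0 * template               # weak line centred near channel 915

        corr = np.correlate(spectrum - spectrum.mean(), template, mode="same")
        print("correlation peaks at channel", int(np.argmax(corr)))   # expected ~915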

  6. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... identify and address relevant markets and issues, and provide additional information as requested by the...). (b) For major transactions, applicants shall submit “full system” impact analyses (incorporating any... (including inter- and intramodal competition, product competition, and geographic competition) and...

  7. Cosmetology: Task Analyses. Competency-Based Education.

    ERIC Educational Resources Information Center

    Henrico County Public Schools, Glen Allen, VA. Virginia Vocational Curriculum Center.

    These task analyses are designed to be used in combination with the "Trade and Industrial Education Service Area Resource" in order to implement competency-based education in the cosmetology program in Virginia. The task analysis document contains the task inventory, suggested task sequence lists, and content outlines for the secondary courses…

  8. What's missing from avian global diversification analyses?

    PubMed

    Reddy, Sushma

    2014-08-01

    The accumulation of vast numbers of molecular phylogenetic studies has contributed to huge knowledge gains in the evolutionary history of birds. This permits subsequent analyses of avian diversity, such as how and why diversification varies across the globe and among taxonomic groups. However, available genetic data for these meta-analyses are unevenly distributed across different geographic regions and taxonomic groups. To comprehend the impact of this variation on the interpretation of global diversity patterns, I examined the availability of genetic data for possible biases in geographic and taxonomic sampling of birds. I identified three main disparities of sampling that are geographically associated with latitude (temperate, tropical), hemispheres (East, West), and range size. Tropical regions, which host the vast majority of species, are substantially less studied. Moreover, Eastern regions, such as the Old World Tropics and Australasia, stand out as being disproportionately undersampled, with up to half of communities not being represented in recent studies. In terms of taxonomic discrepancies, a majority of genetically undersampled clades are exclusively found in tropical regions. My analysis identifies several disparities in the key regions of interest of global diversity analyses. Differential sampling can have considerable impacts on these global comparisons and call into question recent interpretations of latitudinal or hemispheric differences of diversification rates. Moreover, this review pinpoints understudied regions whose biota are in critical need of modern systematic analyses.

  9. The Economic Cost of Homosexuality: Multilevel Analyses

    ERIC Educational Resources Information Center

    Baumle, Amanda K.; Poston, Dudley, Jr.

    2011-01-01

    This article builds on earlier studies that have examined "the economic cost of homosexuality," by using data from the 2000 U.S. Census and by employing multilevel analyses. Our findings indicate that partnered gay men experience a 12.5 percent earnings penalty compared to married heterosexual men, and a statistically insignificant earnings…

  10. Functional Analyses and Treatment of Precursor Behavior

    ERIC Educational Resources Information Center

    Najdowski, Adel C.; Wallace, Michele D.; Ellsworth, Carrie L.; MacAleese, Alicia N.; Cleveland, Jackie

    2008-01-01

    Functional analysis has been demonstrated to be an effective method to identify environmental variables that maintain problem behavior. However, there are cases when conducting functional analyses of severe problem behavior may be contraindicated. The current study applied functional analysis procedures to a class of behavior that preceded severe…

  11. Using Solo to Analyse Group Responses

    ERIC Educational Resources Information Center

    Reading, Chris; Lawrie, Christine

    2004-01-01

    The increased use of group work in teaching and learning has seen an increased need for knowledge about assessment of group work. This report considers exploratory research where the SOLO Taxonomy, previously used to analyse the quality of individual responses, is applied to group responses. The responses were created as part of an activity…

  12. Analysing Simple Electric Motors in the Classroom

    ERIC Educational Resources Information Center

    Yap, Jeff; MacIsaac, Dan

    2006-01-01

    Electromagnetic phenomena and devices such as motors are typically unfamiliar to both teachers and students. To better visualize and illustrate the abstract concepts (such as magnetic fields) underlying electricity and magnetism, we suggest that students construct and analyse the operation of a simply constructed Johnson electric motor. In this…

  13. Impact analyses after pipe rupture. [PWR; BWR

    SciTech Connect

    Chun, R.C.; Chuang, T.Y.

    1983-12-13

    Two of the French pipe whip experiments are reproduced with the computer code WIPS. The WIPS results are in good agreement with the experimental data and the French computer code TEDEL. This justifies the use of its pipe element in conjunction with its U-bar element in a simplified method of impact analyses.

  14. Multiresolution Morphology and Metabolism of the Metropolis

    NASA Astrophysics Data System (ADS)

    Band, L.; Tenenbaum, D.; Tague, C.; Kenworthy, S.; Law, N.; Cadenasso, M.; Pickett, S.

    2002-12-01

    Small watershed research is a hallmark of the LTER network, with catchments subject to different treatments used to develop input-output budgets of water, nutrients and carbon, as well as an understanding of the relations of energy and material cycling to ecological communities and trophic systems. In the Baltimore Ecosystem Study (BES), this approach emphasizes the role of human society as part of the ecosystem such that individual and institutional activity are defined as important aspects of ecological community and trophic system dynamics. We have been adapting a spatial hydroecological modeling approach to operate across the urban-rural gradient, incorporating an explicit description of the drainage sequence, as well as human sources of irrigation water and fertilizer. As human activity tends to produce sharp gradients in land cover and topographic structure (e.g. property lines, drainage infrastructure), the behavior of human dominated ecosystems may require higher resolution information to adequately characterize system structure and function. We hypothesize that our spatial analysis and modeling methods will show greater sensitivity to topographic and land cover information in the suburban sites than in the agricultural or forested ecosystems. In this paper we concentrate on three of the headwater catchments, including a fully forested catchment (Pond Branch), a suburban catchment (Glyndon) and an agricultural catchment (McDonogh). Continuous discharge gauging by the USGS and weekly sampling for stream chemistry have been carried out at all three catchments. Soil moisture has been sampled weekly at a set of sites along a topographic wetness gradient using portable soil moisture meters. Topographically defined flowpath networks were extracted from digital elevation models (DEMs) at 30 m resolution, at 5 m resolution from photogrammetric sources, and at 0.5 m resolution from LIDAR. Land cover at these resolutions is also extracted from high resolution airborne imagery and ETM scenes. One of the key features of the catchments we concentrate on is the ecological patch structure along topographic flowpaths and the nature of the land cover and topographic drainage right around the stream channel, as these features often have an important role in modifying streamflow generation and water chemistry. Using measured soil moisture and streamflow discharge and chemistry, we test the impact of the source data resolutions used to generate topographic and land cover information on our ability to model the measured soil moisture, streamflow and chemistry from these catchments.

  15. Multiresolution ARMA modeling of facial color images

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Al-Jarrah, Inad

    2002-05-01

    Human face perception is the key to identity confirmation in security systems, video teleconferencing, picture telephony, and web navigation. Modeling of human faces and facial expressions for different persons can be dealt with by building a point distribution model (PDM) based on spatial (shape) information or a gray-level model (GLM) based on spectral (intensity) information. To avoid the shortcomings of the local modeling of PDM and GLM, we propose a new approach for recognizing human faces and discriminating the expressions associated with them in color images. It is based on Laplacian of Gaussian (LoG) edge detection, the KL transformation, and auto-regressive moving average (ARMA) filtering. First, the KL transform is applied to the R, G, and B dimensions, and a facial image is described by its principal component. A LoG edge-detector is then used for a line-drawing schematic of the face. The resultant face silhouette is divided into 5 x 5 non-overlapping blocks, each of which is represented by the auto-regressive (AR) parameter vector a. The ensemble average of a over the whole image is taken as the feature vector for the description of a facial pattern, and each face class is represented by such an ensemble average vector. The efficacy of the ARMA model is evaluated with the non-metric similarity measure S = a.b/(|a||b|) for two facial images whose feature vectors a and b are the ensemble averages of their ARMA parameters. Our measurements show that ARMA modeling is effective for discriminating facial features in color images, and has the potential of distinguishing the corresponding facial expressions.
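
    The similarity measure above is simply the cosine of the angle between feature vectors. A minimal sketch, with made-up stand-ins for the ensemble-averaged AR parameter vectors of two face images:

        # Non-metric similarity S = a.b / (|a||b|); S approaches 1 for matching patterns.
        import numpy as np

        def similarity(a, b):
            """Cosine similarity between two ARMA feature vectors."""
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        face_a = np.array([0.82, -0.31, 0.11, 0.05])   # illustrative feature vectors
        face_b = np.array([0.79, -0.28, 0.14, 0.07])
        print(f"S = {similarity(face_a, face_b):.3f}")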

  16. A Multiresolution Image Cache for Volume Rendering

    SciTech Connect

    LaMar, E; Pascucci, V

    2003-02-27

    The authors discuss the techniques and implementation details of a shared-memory image caching system for volume visualization and iso-surface rendering. One of the goals of the system is to decouple image generation from image display. This is done by maintaining a set of impostors for interactive display while the production of the impostor imagery is performed by a set of parallel, background processes. The system introduces a caching basis that is free of the gap/overlap artifacts of earlier caching techniques. Instead of placing impostors at fixed, pre-defined positions in world space, the technique is to adaptively place impostors relative to the camera viewpoint. The positions translate with the camera but stay aligned to the data; i.e., the positions translate, but do not rotate, with the camera. The viewing transformation is factored into a translation transformation and a rotation transformation. The impostor imagery is generated using just the translation transformation, and visible impostors are displayed using just the rotation transformation. Displayed image quality is improved by increasing the number of impostors, and the frequency at which impostors are re-rendered is improved by decreasing the number of impostors.
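
    The factorization can be sketched with plain 4x4 matrices: impostor imagery is produced under the translation part alone, and the display pass applies only the rotation, so the two passes compose back to the full viewing transform. The matrices and the check below are illustrative assumptions, not the paper's implementation.

        # Hedged sketch: factor the view transform as view = R @ T.
        import numpy as np

        def translation(t):
            m = np.eye(4)
            m[:3, 3] = -np.asarray(t)      # move the world opposite to the camera position
            return m

        def rotation_z(theta):
            c, s = np.cos(theta), np.sin(theta)
            m = np.eye(4)
            m[:2, :2] = [[c, -s], [s, c]]
            return m

        T = translation([10.0, 5.0, 2.0])  # used when rendering impostor imagery
        R = rotation_z(np.pi / 6)          # applied when displaying cached impostors
        view = R @ T                       # full viewing transform

        point = np.array([12.0, 7.0, 2.0, 1.0])
        cached = T @ point                 # impostor-space position (translation only)
        # the display pass re-applies the rotation and reproduces the full transform
        assert np.allclose(view @ point, R @ cached)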

  17. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D; Bertram, M; Duchaineau, M; Max, N

    2002-01-14

    Surfaces generated by scientific simulation and range scanning can reach into the billions of polygons. Such surfaces must be aggressively compressed, but at the same time should provide for level of detail queries. Progressive compression techniques based on subdivision surfaces produce impressive results on range scanned models. However, these methods require the construction of a base mesh which parameterizes the surface to be compressed and encodes the topology of the surface. For complex surfaces with high genus and/or a large number of components, the computation of an appropriate base mesh is difficult and often infeasible. We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our method avoids the costly base-mesh construction step and offers several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a new zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  18. Multi-Resolution Representation of Topology

    SciTech Connect

    Cole-McLaughlin, K; Pascucci, V

    2004-12-16

    The Contour Tree of a scalar field is the graph obtained by contracting all the connected components of the level sets of the field into points. This is a powerful abstraction for representing the structure of the field with explicit description of the topological changes of its level sets. It has proven effective as a data-structure for fast extraction of isosurfaces and its application has been advocated as a user interface component guiding interactive data exploration sessions. We propose a new metaphor for visualizing the Contour Tree borrowed from the classical design of a mechanical orrery reproducing a hierarchy of orbits of the planets around the sun or moons around a planet. In the toporrery the hierarchy of stars, planets and moons is replaced with a hierarchy of maxima, minima and saddles that can be interactively filtered, both uniformly and adaptively, by importance with respect to a given metric.
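
    The contraction of level-set components can be sketched with a union-find sweep. The fragment below builds the join (merge) events for a one-dimensional scalar field; it is a toy illustration of the idea behind the Contour Tree, not the authors' algorithm.

        # Hedged sketch: sweep values low-to-high with union-find, recording where
        # level-set components are born (local minima) and merge (saddles).

        def find(parent, i):
            """Union-find root lookup with path halving."""
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def join_tree_events(values):
            order = sorted(range(len(values)), key=lambda i: values[i])
            parent, events = {}, []
            for i in order:
                roots = {find(parent, j) for j in (i - 1, i + 1) if j in parent}
                parent[i] = i
                if not roots:
                    events.append(("birth", i, values[i]))   # new component: a local minimum
                for r in roots:
                    parent[r] = i                            # absorb neighbouring components
                if len(roots) > 1:
                    events.append(("merge", i, values[i]))   # two components join: a saddle
            return events

        # Minima at values 0.5, 1.0, 1.5; components pair up at the saddles 2.0 and 2.5.
        print(join_tree_events([3.0, 1.0, 2.5, 0.5, 2.0, 1.5, 4.0]))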

  19. Limits to Open Class Performance?

    NASA Technical Reports Server (NTRS)

    Bowers, Albion H.

    2007-01-01

    This viewgraph presentation describes the limits to open class performance. The contents include: 1) Standard Class; 2) 15m/Racing Class; 3) Open Class; and 4) Design Solutions associated with assumptions, limiting parameters, airfoil performance, current trends, and analysis.

  20. COMMENTARY:Limits to adaptation

    SciTech Connect

    Preston, Benjamin L

    2013-01-01

    An actor-centered, risk-based approach to defining limits to social adaptation provides a useful analytic framing for identifying and anticipating these limits and informing debates over society's responses to climate change.

  1. Updates on Force Limiting Improvements

    NASA Technical Reports Server (NTRS)

    Kolaini, Ali R.; Scharton, Terry

    2013-01-01

    The following conventional force limiting methods currently practiced in deriving force limiting specifications assume one-dimensional translation source and load apparent masses: Simple TDOF model; Semi-empirical force limits; Apparent mass, etc.; Impedance method. Uncorrelated motion of the mounting points for components mounted on panels and correlated, but out-of-phase, motions of the support structures are important and should be considered in deriving force limiting specifications. This presentation discusses the "rock-n-roll" motions of components supported by panels, which lead to more realistic force limiting specifications.
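
    For reference, the one-dimensional semi-empirical method mentioned above is commonly written as a rolled-off scaling of the acceleration specification. A minimal sketch assuming the NASA-HDBK-7004 form; the C^2 constant, roll-off exponent, and all inputs are placeholders, not values from this presentation:

        import numpy as np

        def semi_empirical_force_spec(freq, saa, m0, f0, c2=2.0, n=2.0):
            # S_FF = C^2 * M0^2 * S_AA            for f <= f0
            # S_FF = C^2 * M0^2 * S_AA / (f/f0)^n for f >  f0
            # freq [Hz], saa [g^2/Hz], m0 = total load mass,
            # f0 = fundamental resonance of the mounted component.
            freq = np.asarray(freq, dtype=float)
            sff = c2 * m0**2 * np.asarray(saa, dtype=float)
            roll = freq > f0
            sff[roll] /= (freq[roll] / f0) ** n
            return sff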

  2. Analysing organic transistors based on interface approximation

    SciTech Connect

    Akiyama, Yuto; Mori, Takehiko

    2014-01-15

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, the observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential trap distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. A small deviation from such ideal transistor operation is discussed by assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.

  3. Reliability of chemical analyses of water samples

    SciTech Connect

    Beardon, R.

    1989-11-01

    Ground-water quality investigations require reliable chemical analyses of water samples. Unfortunately, laboratory analytical results are often unreliable. The Uranium Mill Tailings Remedial Action (UMTRA) Project's solution to this problem was to establish a two-phase quality assurance program for the analysis of water samples. In the first phase, eight laboratories analyzed three solutions of known composition. The analytical accuracy of each laboratory was ranked and three laboratories were awarded contracts. The second phase consists of ongoing monitoring of the reliability of the selected laboratories. The following conclusions are based on two years' experience with the UMTRA Project's Quality Assurance Program. The reliability of laboratory analyses should not be taken for granted. Analytical reliability may be independent of the prices charged by laboratories. Quality assurance programs benefit both the customer and the laboratory.

  4. Identifying, analysing and solving problems in practice.

    PubMed

    Hewitt-Taylor, Jaqui

    When a problem is identified in practice, it is important to clarify exactly what it is and establish the cause before seeking a solution. This solution-seeking process should include input from those directly involved in the problematic situation, to enable individuals to contribute their perspective, appreciate why any change in practice is necessary and what will be achieved by the change. This article describes some approaches to identifying and analysing problems in practice so that effective solutions can be devised. It includes a case study and examples of how the Five Whys analysis, fishbone diagram, problem tree analysis, and Seven-S Model can be used to analyse a problem. PMID:22848969

  5. Sensitivity in risk analyses with uncertain numbers.

    SciTech Connect

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
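
    The "pinching" strategy can be sketched with simple interval inputs: compute the output range, pinch one input to its midpoint, and measure how much the output range shrinks. This is a toy stand-in for the report's Dempster-Shafer/probability-bounds machinery; the grid-sampling bound, the example model, and the intervals are hypothetical, not the dike study's:

        import itertools
        import numpy as np

        def output_range(model, intervals, grid=5):
            # Crude output bounds via grid sampling over the input box.
            axes = [np.linspace(lo, hi, grid) for lo, hi in intervals]
            outs = [model(*pt) for pt in itertools.product(*axes)]
            return min(outs), max(outs)

        def pinching_sensitivity(model, intervals, i, grid=5):
            # Fraction of output uncertainty removed by pinching input i
            # to its midpoint.
            lo, hi = output_range(model, intervals, grid)
            pinched = list(intervals)
            mid = 0.5 * sum(intervals[i])
            pinched[i] = (mid, mid)
            plo, phi = output_range(model, pinched, grid)
            return 1.0 - (phi - plo) / (hi - lo)

        # Hypothetical usage: model = lambda h, k, t: h * k / t
        # pinching_sensitivity(model, [(2, 4), (0.1, 0.3), (1, 2)], i=1)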

  6. [Clinical research=design*measurements*statistical analyses].

    PubMed

    Furukawa, Toshiaki

    2012-06-01

    A clinical study must address true endpoints that matter for the patients and the doctors. A good clinical study starts with a good clinical question. Formulating a clinical question in the form of PECO can sharpen one's original question. In order to perform a good clinical study one must have knowledge of study design, measurements and statistical analyses: the first is taught by epidemiology, the second by psychometrics, and the third by biostatistics.

  7. Inelastic and Dynamic Fracture and Stress Analyses

    NASA Technical Reports Server (NTRS)

    Atluri, S. N.

    1984-01-01

    Large deformation inelastic stress analysis and inelastic and dynamic crack propagation research work is summarized. The salient topics of interest in engine structure analysis that are discussed herein include: (1) a path-independent integral (T) in inelastic fracture mechanics, (2) analysis of dynamic crack propagation, (3) generalization of constitutive relations of inelasticity for finite deformations, (4) complementary energy approaches in inelastic analyses, and (5) objectivity of time integration schemes in inelastic stress analysis.

  8. Evaluation of the Technicon Axon analyser.

    PubMed

    Martínez, C; Márquez, M; Cortés, M; Mercé, J; Rodriguez, J; González, F

    1990-01-01

    An evaluation of the Technicon Axon analyser was carried out following the guidelines of the 'Sociedad Española de Química Clínica' and the European Committee for Clinical Laboratory Standards. A photometric study revealed acceptable results at both 340 nm and 404 nm. Inaccuracy and imprecision were lower at 404 nm than at 340 nm, although poor dispersion was found at both wavelengths, even at low absorbances. Drift was negligible, the imprecision of the sample pipette delivery system was greater for small sample volumes, the reagent pipette delivery system imprecision was acceptable and the sample diluting system study showed good precision and accuracy. Twelve analytes were studied for evaluation of the analyser under routine working conditions. Satisfactory results were obtained for within-run imprecision, while coefficients of variation for between-run imprecision were much greater than expected. Neither specimen-related nor specimen-independent contamination was found in the carry-over study. For all analytes assayed, when comparing patient sample results with those obtained in a Hitachi 737 analyser, acceptable relative inaccuracy was observed.

  9. Does volunteering moderate the relation between functional limitations and mortality?

    PubMed

    Okun, Morris A; August, Kristin J; Rook, Karen S; Newsom, Jason T

    2010-11-01

    Previous studies have demonstrated that functional limitations increase, and organizational volunteering decreases, the risk of mortality in later life. However, scant attention has been paid to investigating the joint effect of functional limitations and organizational volunteering on mortality. Accordingly, we tested the hypothesis that volunteering moderates the relation between functional limitations and risk of mortality. This prospective study used baseline survey data from a representative sample of 916 non-institutionalized adults 65 years old and older who lived in the continental United States. Data on mortality were extracted six years later from the National Death Index. Survival analyses revealed that functional limitations were associated with an increased risk of dying only among participants who never or almost never volunteered, suggesting that volunteering buffers the association between functional limitations and mortality. We conclude that although it may be more difficult for older adults with functional limitations to volunteer, they may receive important benefits from doing so.

  10. Optical Monitors and OT/GRB analyses

    NASA Astrophysics Data System (ADS)

    Hudec, R.; Krizek, M.

    2005-07-01

    The Optical Monitors, despite their lower detection limits, are still valuable for detecting prompt real-time and (hypothetical) pre-burst optical emission of gamma-ray bursts. We report on the ongoing project at the Astronomical Institute in Ondřejov based on digitized data from the photographic EN network.

  11. NASA contributions to radial turbine aerodynamic analyses

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1980-01-01

    A brief description of the radial turbine and its analysis needs is followed by discussions of five analytical areas: design geometry and performance, off-design performance, blade row flow, scroll flow, and duct flow. The functions of the programs, areas of applicability, and limitations and uncertainties are emphasized. Both past contributions and current activities are discussed.

  12. a Cylindrical Mirror Analyser for Neutrino Mass Measurement.

    NASA Astrophysics Data System (ADS)

    Williams, Simon Shaughan

    Available from UMI in association with The British Library. Requires signed TDF. The design of an electrostatic Cylindrical Mirror Analyser (CMA) for neutrino mass measurement is given. The resolution is 15 eV FWHM, being achieved with second order focusing and tight collimation. The field-matching grids are unique, being sets of accurately positioned vertical wires producing negligible resolution effects. Luminosity is maximised with an extended source and by utilisation of the full 2π of the CMA. Background is minimised as the deflecting voltage is half the analysing energy so that field emission can be discriminated against. Cosmic-ray secondaries are largely removed by high resolution silicon surface barrier detectors. The construction of the CMA to an accuracy of ~10 eV in base resolution, and of the magnetic shielding and vacuum systems is outlined. The power supplies and monitoring systems, signal processing electronics and data acquisition software are also described. Ytterbium conversion electron measurements confirm the calculated CMA optical resolution function to better than 10% in half-width. These measurements demonstrate that the CMA calibration and dispersion are at their theoretical values and also identify a small mis-alignment in the CMA, consistent with construction accuracy. Correcting fields are subsequently designed. The end-point spectrum of tritium is measured using a Langmuir-Blodgett mono-layer source, yielding a neutrino mass limit of <64 eV (90% CL), including total resolution systematic error and being limited principally by statistical errors. Tritium measurements also verify the luminosity of the CMA as ~9 × 10^-4 cm^2 and demonstrate the extremely low background of 2 × 10^-3 s^-1. Monte-Carlo simulations indicate that with this optical resolution, knowledge of molecular final states and energy loss, a lower limit of 10 eV (95% CL) should be measured for a neutrino mass of 30 eV with a suitable source. For a zero

  13. A cryogenically coolable microwave limiter

    PubMed

    Rinard; Quine; Eaton

    1999-02-01

    A microwave (ca. 3 GHz) limiter, constructed using a GaAs PIN diode and microstrip impedance transformation circuit, limited 300-ns long 11-W microwave pulses to 70 mW at ca. 4.2 K. This limiter was implemented in a pulsed electron paramagnetic resonance (EPR) spectrometer to protect a low-noise microwave preamplifier from the high-power pulses. Copyright 1999 Academic Press. PMID:9986762

  14. Uncertainty quantification and validation of combined hydrological and macroeconomic analyses.

    SciTech Connect

    Hernandez, Jacquelynne; Parks, Mancel Jordan; Jennings, Barbara Joan; Kaplan, Paul Garry; Brown, Theresa Jean; Conrad, Stephen Hamilton

    2010-09-01

    Changes in climate can lead to instabilities in physical and economic systems, particularly in regions with marginal resources. Global climate models indicate increasing global mean temperatures over the decades to come, and uncertainty in the local-to-national impacts means that perceived risks will drive planning decisions. Agent-based models provide one of the few ways to evaluate the potential changes in behavior in coupled social-physical systems and to quantify and compare risks. The current generation of climate impact analyses provides estimates of the economic cost of climate change for a limited set of climate scenarios that account for a small subset of the dynamics and uncertainties. To better understand the risk to national security, the next generation of risk assessment models must represent global stresses, population vulnerability to those stresses, and the uncertainty in population responses and outcomes that could have a significant impact on U.S. national security.

  15. Meta-Analyses of Predictors of Hope in Adolescents.

    PubMed

    Yarcheski, Adela; Mahon, Noreen E

    2016-03-01

    The purposes of this study were to identify predictors of hope in the literature reviewed, to use meta-analysis to determine the mean effect size (ES) across studies between each predictor and hope, and to examine four moderators on each predictor-hope relationship. Using preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines for the literature reviewed, 77 published studies or doctoral dissertations completed between 1990 and 2012 met the inclusion criteria. Eleven predictors of hope were identified and each predictor in relation to hope was subjected to meta-analysis. Five predictors (positive affect, life satisfaction, optimism, self-esteem, and social support) of hope had large mean ESs, 1 predictor (depression) had a medium ES, 4 predictors (negative affect, stress, academic achievement, and violence) had small ESs, and 1 predictor (gender) had a trivial ES. Findings are interpreted for the 11 predictors in relation to hope. Limitations and conclusions are addressed; future studies are recommended.
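
    The mean-ES computation itself is standard. A minimal inverse-variance (fixed-effect) sketch; the abstract does not state which meta-analytic model the authors used, so this is illustrative only:

        import numpy as np

        def mean_effect_size(es, var):
            # Inverse-variance weighted mean effect size with a 95% CI.
            # es: per-study effect sizes; var: their sampling variances.
            es, var = np.asarray(es, float), np.asarray(var, float)
            w = 1.0 / var
            mean = np.sum(w * es) / np.sum(w)
            se = np.sqrt(1.0 / np.sum(w))
            return mean, (mean - 1.96 * se, mean + 1.96 * se)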

  16. Impact of workstations on criticality analyses at ABB combustion engineering

    SciTech Connect

    Tarko, L.B.; Freeman, R.S.; O'Donnell, P.F. )

    1993-01-01

    During 1991, ABB Combustion Engineering (ABB C-E) made the transition from a CDC Cyber 990 mainframe for nuclear criticality safety analyses to Hewlett Packard (HP)/Apollo workstations. The primary motivations for this change were the improved economics of the workstations and maintaining state-of-the-art technology. The Cyber 990 utilized the NOS operating system with a 60-bit word size. The CPU memory size was limited to 131,100 words of directly addressable memory, with an extended 250,000 words available. The Apollo workstation environment at ABB consists of HP/Apollo-9000/400 series desktop units used by most application engineers, networked with HP/Apollo DN10000 platforms that use a 32-bit word size and function as the computer servers and network administrative CPUs, providing a virtual memory system.

  17. Study of Residual Gas Analyser (RGA) Response towards Known Leaks

    NASA Astrophysics Data System (ADS)

    Pathan, Firozkhan S.; Khan, Ziauddin; Semwal, Pratibha; George, Siju; Raval, Dilip C.; Thankey, Prashant L.; Manthena, Himabindu; Yuvakiran, Paravastu; Dhanani, Kalpesh R.

    2012-11-01

    Helium leak testing is the most versatile form of weld qualification test for any vacuum application. Almost every ultra-high vacuum (UHV) system relies on this technique to ensure the leak tightness of weld joints as well as demountable joints. When a UHV system is operating with many integrated components, identifying in-situ developed leaks becomes one of the prime aspects of maintaining the health of the system and continuing experiments. Since online use of a leak detector (LD) has many practical limitations, a residual gas analyser (RGA) can be used as a potential instrument for online leak detection. For this purpose, a correlation for a given leak rate between the leak detector and the RGA is experimentally established. This paper describes the experimental aspects and the relationship between the leak detector and the RGA.
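
    Once such a correlation is established, an RGA helium reading can be mapped back to an equivalent leak rate. A minimal sketch with hypothetical calibration points; the paper's measured relationship and numbers are not reproduced here:

        import numpy as np

        # Hypothetical calibration: known leaks read by the leak detector (LD)
        # and the corresponding RGA helium partial pressures.
        ld_rate = np.array([1e-9, 5e-9, 1e-8, 5e-8, 1e-7])      # mbar l/s
        rga_p   = np.array([2e-11, 1e-10, 2e-10, 1e-9, 2e-9])   # mbar

        # Fit a log-log linear relation between the two instruments.
        a, b = np.polyfit(np.log10(ld_rate), np.log10(rga_p), 1)

        def leak_from_rga(p_he):
            # Equivalent leak rate inferred from an RGA helium reading.
            return 10.0 ** ((np.log10(p_he) - b) / a)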

  18. Department of Energy's team's analyses of Soviet designed VVERs

    SciTech Connect

    Not Available

    1989-09-01

    This document provides Appendices A through K of this report. The topics discussed are, respectively: radiation-induced embrittlement and annealing of reactor pressure vessel steels; loss-of-coolant accident blowdown analyses; LOCA blowdown response analyses; non-seismic structural response analyses; seismic analyses; "S" seal integrity; reactor transient analyses; fire protection; aircraft impacts; and boric acid induced corrosion. (FI).

  19. Evaluation of Model Operational Analyses during DYNAMO

    NASA Astrophysics Data System (ADS)

    Ciesielski, Paul; Johnson, Richard

    2013-04-01

    A primary component of the observing system in the DYNAMO-CINDY2011-AMIE field campaign was an atmospheric sounding network comprised of two sounding quadrilaterals, one north and one south of the equator over the central Indian Ocean. During the experiment a major effort was undertaken to ensure the real-time transmission of these data onto the GTS (Global Telecommunication System) for dissemination to the operational centers (ECMWF, NCEP, JMA, etc.). Preliminary estimates indicate that ~95% of the soundings from the enhanced sounding network were successfully transmitted and potentially used in their data assimilation systems. Because of the wide use of operational and reanalysis products (e.g., in process studies, initializing numerical simulations, construction of large-scale forcing datasets for CRMs, etc.), their validity will be examined by comparing a variety of basic and diagnosed fields from two operational analyses (ECMWF and NCEP) to similar analyses based solely on sounding observations. Particular attention will be given to the vertical structures of apparent heating (Q1) and drying (Q2) from the operational analyses (OA), which are strongly influenced by cumulus parameterizations, a source of model infidelity. Preliminary results indicate that the OA products did a reasonable job at capturing the mean and temporal characteristics of convection during the DYNAMO enhanced observing period, which included the passage of two significant MJO events during the October-November 2011 period. For example, temporal correlations between Q2-budget derived rainfall from the OA products and that estimated from the TRMM satellite (i.e., the 3B42V7 product) were greater than 0.9 over the Northern Sounding Array of DYNAMO. However closer inspection of the budget profiles show notable differences between the OA products and the sounding-derived results in low-level (surface to 700 hPa) heating and drying structures. This presentation will examine these differences and
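
    The Q2-budget rainfall mentioned above follows from column-integrating the apparent moisture sink. A minimal sketch of the standard Yanai-type budget relation P ≈ <Q2>/Lv + E; the level ordering, units, and function names are assumptions, not the authors' code:

        import numpy as np

        G, LV = 9.81, 2.5e6   # gravity [m s^-2], latent heat [J kg^-1]

        def q2_budget_rainfall(q2, p_levels, evap):
            # q2: apparent moisture sink [J kg^-1 s^-1] on pressure levels
            # p_levels: increasing pressure [Pa]; evap: surface evaporation
            # [kg m^-2 s^-1]. Returns rainfall in kg m^-2 s^-1 (= mm s^-1).
            col = np.trapz(q2, p_levels, axis=0) / (G * LV)
            return col + evap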

  20. Combustion Devices CFD Team Analyses Review

    NASA Technical Reports Server (NTRS)

    Rocker, Marvin

    2008-01-01

    A variety of CFD simulations performed by the Combustion Devices CFD Team at Marshall Space Flight Center will be presented. These analyses were performed to support Space Shuttle operations and Ares-1 Crew Launch Vehicle design. Results from the analyses will be shown along with pertinent information on the CFD codes and computational resources used to obtain the results. Six analyses will be presented - two related to the Space Shuttle and four related to the Ares I-1 launch vehicle now under development at NASA. First, a CFD analysis of the flow fields around the Space Shuttle during the first six seconds of flight and potential debris trajectories within those flow fields will be discussed. Second, the combusting flows within the Space Shuttle Main Engine's main combustion chamber will be shown. For the Ares I-1, an analysis of the performance of the roll control thrusters during flight will be described. Several studies are discussed related to the J2-X engine to be used on the upper stage of the Ares I-1 vehicle. A parametric study of the propellant flow sequences and mixture ratios within the GOX/GH2 spark igniters on the J2-X is discussed. Transient simulations will be described that predict the asymmetric pressure loads that occur on the rocket nozzle during the engine start as the nozzle fills with combusting gases. Simulations of issues that affect temperature uniformity within the gas generator used to drive the J-2X turbines will be described as well, both upstream of the chamber in the injector manifolds and within the combustion chamber itself.